.. _snap-schedule:

==========================
Snapshot Scheduling Module
==========================
This module implements scheduled snapshots for CephFS.
It provides a user interface to add, query and remove snapshot schedules and
retention policies, as well as a scheduler that takes the snapshots and prunes
existing snapshots accordingly.


How to enable
=============

The *snap_schedule* module is enabled with::

    ceph mgr module enable snap_schedule

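
To confirm that the module is enabled you can, for example, inspect the
manager's module listing; the exact output format differs between releases::

    ceph mgr module ls | grep snap_schedule   # output format varies by release
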
Usage
=====

This module makes use of :doc:`/dev/cephfs-snapshots`; please refer to that
documentation as well.

This module's subcommands live under the `ceph fs snap-schedule` namespace.
Arguments can either be supplied as positional arguments or as keyword
arguments. Once a keyword argument is encountered, all following arguments are
assumed to be keyword arguments as well.

Snapshot schedules are identified by path, their repeat interval and their start
time. The repeat interval defines the time between two subsequent snapshots. It
is specified by a number and a period multiplier, one of `h(our)`, `d(ay)` and
`w(eek)`. E.g. a repeat interval of `12h` specifies one snapshot every 12 hours.
The start time is specified as a time string (more details about passing times
below). By default the start time is last midnight. So when a snapshot schedule
with repeat interval `1h` is added at 13:50 with the default start time, the
first snapshot will be taken at 14:00.
The time zone is assumed to be UTC if none is explicitly included in the string.
An explicit time zone will be mapped to UTC at execution.
The start time must be in ISO8601 format. Examples below:

UTC: 2022-08-08T05:30:00 i.e. 5:30 AM UTC, without explicit time zone offset
IDT: 2022-08-08T09:00:00+03:00 i.e. 6:00 AM UTC
EDT: 2022-08-08T05:30:00-04:00 i.e. 9:30 AM UTC
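
As an illustration, a schedule can be added with the default start time or with
an explicit, offset-carrying start time that is converted to UTC. The directory
``/some/dir`` below is only a placeholder::

    ceph fs snap-schedule add /some/dir 1h                             # added at 13:50: first snapshot at 14:00
    ceph fs snap-schedule add /some/dir 1h 2022-08-08T09:00:00+03:00   # start time interpreted as 06:00 UTC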

Retention specifications are identified by path and the retention spec itself. A
retention spec consists of either a number and a time period separated by a
space or concatenated pairs of `<number><time period>`.
The semantics are that a spec will ensure `<number>` snapshots are kept that are
at least `<time period>` apart. For example `7d` means the user wants to keep 7
snapshots that are at least one day (but potentially longer) apart from each other.
The following time periods are recognized: `h(our)`, `d(ay)`, `w(eek)`, `m(onth)`,
`y(ear)` and `n`. The latter is a special modifier, where e.g. `10n` means keep
the last 10 snapshots regardless of timing.

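For instance, assuming the `n` period is accepted in the same two-argument form
as the other periods (see the retention subcommands below), the last 10
snapshots could be kept regardless of timing with::

    ceph fs snap-schedule retention add / n 10   # assumes 'n' works like the other periods here
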
All subcommands take an optional `fs` argument to specify paths in
multi-fs setups and :doc:`/cephfs/fs-volumes` managed setups. If not
passed, `fs` defaults to the first file system listed in the fs_map.
When using :doc:`/cephfs/fs-volumes` the argument `fs` is equivalent to a
`volume`.

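In a multi-fs setup the target file system can, for example, be selected with
the `fs` keyword argument; the file system name ``mycephfs2`` below is only a
placeholder::

    ceph fs snap-schedule status / --fs=mycephfs2   # 'mycephfs2' is a placeholder file system name
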
When a timestamp is passed (the `start` argument in the `add`, `remove`,
`activate` and `deactivate` subcommands) the ISO format `%Y-%m-%dT%H:%M:%S` will
always be accepted. When python 3.7 or newer is used or
https://github.com/movermeyer/backports.datetime_fromisoformat is installed, any
valid ISO timestamp that is parsed by python's `datetime.fromisoformat` is valid.

When no subcommand is supplied a synopsis is printed::

    #> ceph fs snap-schedule
    no valid command found; 8 closest matches:
    fs snap-schedule status [<path>] [<fs>] [<format>]
    fs snap-schedule list <path> [--recursive] [<fs>] [<format>]
    fs snap-schedule add <path> <snap_schedule> [<start>] [<fs>]
    fs snap-schedule remove <path> [<repeat>] [<start>] [<fs>]
    fs snap-schedule retention add <path> <retention_spec_or_period> [<retention_count>] [<fs>]
    fs snap-schedule retention remove <path> <retention_spec_or_period> [<retention_count>] [<fs>]
    fs snap-schedule activate <path> [<repeat>] [<start>] [<fs>]
    fs snap-schedule deactivate <path> [<repeat>] [<start>] [<fs>]
    Error EINVAL: invalid command

Note:
^^^^^
A `subvolume` argument is no longer accepted by the commands.


Inspect snapshot schedules
--------------------------

The module offers two subcommands to inspect existing schedules: `list` and
`status`. Both offer plain and JSON output via the optional `format` argument.
The default is plain.
The `list` subcommand will list all schedules on a path in a short single-line
format. It offers a `recursive` argument to list all schedules in the specified
directory and all contained directories.
The `status` subcommand prints all available schedules and retention specs for a
path.

Examples::

    ceph fs snap-schedule status /
    ceph fs snap-schedule status /foo/bar --format=json
    ceph fs snap-schedule list /
    ceph fs snap-schedule list / --recursive=true # list all schedules in the tree
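
The JSON output of `status` is a list of objects, one per schedule on the path.
The exact set of fields and their formatting vary between Ceph releases, so
treat the following purely as an illustrative sketch (the `created` and
`created_count` fields are discussed in the notes further below)::

    # illustrative output only -- fields vary by release
    ceph fs snap-schedule status /foo/bar --format=json
    [{"path": "/foo/bar", "schedule": "1h", "start": "2022-08-08T00:00:00",
      "created": "2022-08-08T10:00:00", "created_count": 3,
      "pruned_count": 0, "active": true}]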


Add and remove schedules
------------------------
The `add` and `remove` subcommands add and remove snapshot schedules
respectively. Both require at least a `path` argument; `add` additionally
requires a `schedule` argument as described in the USAGE section.

Multiple different schedules can be added to a path. Two schedules are considered
different from each other if they differ in their repeat interval and their
start time.

If multiple schedules have been set on a path, `remove` can remove individual
schedules on a path by specifying the exact repeat interval and start time, or
the subcommand can remove all schedules on a path when just a `path` is
specified.

Examples::

    ceph fs snap-schedule add / 1h
    ceph fs snap-schedule add / 1h 11:55
    ceph fs snap-schedule add / 2h 11:55
    ceph fs snap-schedule remove / 1h 11:55 # removes one single schedule
    ceph fs snap-schedule remove / 1h # removes all schedules with --repeat=1h
    ceph fs snap-schedule remove / # removes all schedules on path /

Add and remove retention policies
---------------------------------
The `retention add` and `retention remove` subcommands allow managing
retention policies. One path has exactly one retention policy. A policy can,
however, contain multiple count-time period pairs in order to specify complex
retention policies.
Retention policies can be added and removed individually or in bulk via the
forms `ceph fs snap-schedule retention add <path> <time period> <count>` and
`ceph fs snap-schedule retention add <path> <count><time period>[<count><time period> ...]`.

Examples::

    ceph fs snap-schedule retention add / h 24 # keep 24 snapshots at least an hour apart
    ceph fs snap-schedule retention add / d 7 # and 7 snapshots at least a day apart
    ceph fs snap-schedule retention remove / h 24 # remove retention for 24 hourlies
    ceph fs snap-schedule retention add / 24h4w # add 24 hourly and 4 weekly to retention
    ceph fs snap-schedule retention remove / 7d4w # remove 7 daily and 4 weekly, leaves 24 hourly

.. note:: When adding a path to snap-schedule, remember to strip off the mount
   point path prefix. Paths to snap-schedule should start at the appropriate
   CephFS file system root and not at the host file system root.
   e.g. if the Ceph File System is mounted at ``/mnt`` and the path under which
   snapshots need to be taken is ``/mnt/some/path`` then the actual path required
   by snap-schedule is only ``/some/path``.

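For instance, continuing the example from the note above (both paths are just
placeholders), the schedule is added with the mount point prefix stripped::

    ceph fs snap-schedule add /some/path 1h   # '/some/path' relative to the CephFS root, not '/mnt/some/path'
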
.. note:: The "created" field in the snap-schedule status command output is the
   timestamp at which the schedule was created. The "created" timestamp has
   nothing to do with the creation of actual snapshots. The actual snapshot
   creation is accounted for in the "created_count" field, which is a cumulative
   count of the total number of snapshots created so far.

Active and inactive schedules
-----------------------------
Snapshot schedules can be added for a path that doesn't exist yet in the
directory tree. Similarly a path can be removed without affecting any snapshot
schedules on that path.
If a directory is not present when a snapshot is scheduled to be taken, the
schedule will be set to inactive and will be excluded from scheduling until
it is activated again.
A schedule can manually be set to inactive to pause the creation of scheduled
snapshots.
The module provides the `activate` and `deactivate` subcommands for this
purpose.

Examples::

    ceph fs snap-schedule activate / # activate all schedules on the root directory
    ceph fs snap-schedule deactivate / 1d # deactivates daily snapshots on the root directory

Limitations
-----------
Snapshots are scheduled using python Timers. Under normal circumstances
specifying 1h as the schedule will result in snapshots 1 hour apart fairly
precisely. If the mgr daemon is under heavy load however, the Timer threads
might not get scheduled right away, resulting in a slightly delayed snapshot. If
this happens, the next snapshot will be scheduled as if the previous one was not
delayed, i.e. one or more delayed snapshots will not cause drift in the overall
schedule.

In order to somewhat limit the overall number of snapshots in a file system, the
module will only keep a maximum of 50 snapshots per directory. If the retention
policy results in more than 50 retained snapshots, the retention list will be
shortened to the newest 50 snapshots.

Data storage
------------
The snapshot schedule data is stored in a rados object in the cephfs metadata
pool. At runtime all data lives in a sqlite database that is serialized and
stored as a rados object.
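
If you want to see the underlying object, the metadata pool can be inspected
with the `rados` tool. The pool name below assumes the default naming used by
fs volumes, and the object name and namespace depend on the release, so treat
this purely as a sketch::

    # pool name assumes default fs-volumes naming; object name/namespace vary by release
    rados -p cephfs.<fs_name>.meta ls --all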