.. _snap-schedule:

==========================
Snapshot Scheduling Module
==========================
This module implements scheduled snapshots for CephFS.
It provides a user interface to add, query and remove snapshot schedules and
retention policies, as well as a scheduler that takes the snapshots and prunes
existing snapshots accordingly.


How to enable
=============

The *snap_schedule* module is enabled with::

    ceph mgr module enable snap_schedule

Usage
=====

This module makes use of :doc:`/dev/cephfs-snapshots`; please consult that
documentation as well.

This module's subcommands live under the `ceph fs snap-schedule` namespace.
Arguments can either be supplied as positional arguments or as keyword
arguments. Once a keyword argument is encountered, all following arguments are
assumed to be keyword arguments too.

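For example, the following two invocations are equivalent; the second passes
the repeat interval as a keyword argument (compare the `remove` examples
further down, where `1h` and `--repeat=1h` refer to the same schedule)::

    ceph fs snap-schedule remove / 1h
    ceph fs snap-schedule remove / --repeat=1h
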
Snapshot schedules are identified by path, their repeat interval and their
start time. The repeat interval defines the time between two subsequent
snapshots. It is specified by a number and a period multiplier, one of
`h(our)`, `d(ay)` and `w(eek)`. E.g. a repeat interval of `12h` specifies one
snapshot every 12 hours.
The start time is specified as a time string (more details about passing times
below). By default the start time is last midnight. So when a snapshot
schedule with repeat interval `1h` is added at 13:50 with the default start
time, the first snapshot will be taken at 14:00.
The time zone is assumed to be UTC if none is explicitly included in the
string. An explicit time zone will be converted to UTC at execution.
The start time must be in ISO 8601 format. Examples below::

    UTC: 2022-08-08T05:30:00 i.e. 5:30 AM UTC, without explicit time zone offset
    IDT: 2022-08-08T09:00:00+03:00 i.e. 6:00 AM UTC
    EDT: 2022-08-08T05:30:00-04:00 i.e. 9:30 AM UTC

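A schedule that takes a snapshot every 12 hours, starting at 05:30 UTC, could
therefore be added like this (the path ``/`` is illustrative)::

    ceph fs snap-schedule add / 12h 2022-08-08T05:30:00
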
Retention specifications are identified by path and the retention spec itself. A
retention spec consists of either a number and a time period separated by a
space or concatenated pairs of `<number><time period>`.
The semantics are that a spec will ensure `<number>` snapshots are kept that are
at least `<time period>` apart. For example, `7d` means the user wants to keep 7
snapshots that are at least one day (but potentially longer) apart from each other.
The following time periods are recognized: `h(our), d(ay), w(eek), m(onth),
y(ear)` and `n`. The latter is a special modifier where e.g. `10n` means keep
the last 10 snapshots regardless of timing.

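For example, the following keeps 7 daily snapshots plus the last 10 snapshots
regardless of timing (the path ``/`` is illustrative)::

    ceph fs snap-schedule retention add / 7d
    ceph fs snap-schedule retention add / 10n
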
All subcommands take an optional `fs` argument to specify paths in
multi-fs setups and :doc:`/cephfs/fs-volumes` managed setups. If not
passed, `fs` defaults to the first file system listed in the fs_map.
When using :doc:`/cephfs/fs-volumes` the argument `fs` is equivalent to a
`volume`.

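For example, assuming a second file system named `mycephfs`, its schedules
could be inspected by passing the file system name in the `fs` position shown
in the synopsis below::

    ceph fs snap-schedule status / mycephfs
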
When a timestamp is passed (the `start` argument in the `add`, `remove`,
`activate` and `deactivate` subcommands) the ISO format `%Y-%m-%dT%H:%M:%S` will
always be accepted. When Python 3.7 or newer is used or
https://github.com/movermeyer/backports.datetime_fromisoformat is installed, any
valid ISO timestamp that is parsed by Python's `datetime.fromisoformat` is accepted.

When no subcommand is supplied a synopsis is printed::

    #> ceph fs snap-schedule
    no valid command found; 8 closest matches:
    fs snap-schedule status [<path>] [<fs>] [<format>]
    fs snap-schedule list <path> [--recursive] [<fs>] [<format>]
    fs snap-schedule add <path> <snap_schedule> [<start>] [<fs>]
    fs snap-schedule remove <path> [<repeat>] [<start>] [<fs>]
    fs snap-schedule retention add <path> <retention_spec_or_period> [<retention_count>] [<fs>]
    fs snap-schedule retention remove <path> <retention_spec_or_period> [<retention_count>] [<fs>]
    fs snap-schedule activate <path> [<repeat>] [<start>] [<fs>]
    fs snap-schedule deactivate <path> [<repeat>] [<start>] [<fs>]
    Error EINVAL: invalid command

Note:
^^^^^
A `subvolume` argument is no longer accepted by the commands.


Inspect snapshot schedules
--------------------------

The module offers two subcommands to inspect existing schedules: `list` and
`status`. Both offer plain and JSON output via the optional `format` argument.
The default is plain.
The `list` subcommand will list all schedules on a path in a short single-line
format. It offers a `recursive` argument to list all schedules in the specified
directory and all contained directories.
The `status` subcommand prints all available schedules and retention specs for a
path.

Examples::

    ceph fs snap-schedule status /
    ceph fs snap-schedule status /foo/bar --format=json
    ceph fs snap-schedule list /
    ceph fs snap-schedule list / --recursive=true # list all schedules in the tree


Add and remove schedules
------------------------
The `add` and `remove` subcommands add and remove snapshot schedules
respectively. Both require at least a `path` argument; `add` additionally
requires a `schedule` argument as described in the USAGE section.

Multiple different schedules can be added to a path. Two schedules are
considered different from each other if they differ in their repeat interval
or their start time.

If multiple schedules have been set on a path, `remove` can remove individual
schedules on a path by specifying the exact repeat interval and start time, or
the subcommand can remove all schedules on a path when just a `path` is
specified.

Examples::

    ceph fs snap-schedule add / 1h
    ceph fs snap-schedule add / 1h 11:55
    ceph fs snap-schedule add / 2h 11:55
    ceph fs snap-schedule remove / 1h 11:55 # removes one single schedule
    ceph fs snap-schedule remove / 1h # removes all schedules with --repeat=1h
    ceph fs snap-schedule remove / # removes all schedules on path /

Add and remove retention policies
---------------------------------
The `retention add` and `retention remove` subcommands manage retention
policies. One path has exactly one retention policy. A policy can however
contain multiple count-time period pairs in order to specify complex
retention policies.
Retention policies can be added and removed individually or in bulk via the
forms `ceph fs snap-schedule retention add <path> <time period> <count>` and
`ceph fs snap-schedule retention add <path> <count><time period>[<count><time period>]`.

Examples::

    ceph fs snap-schedule retention add / h 24 # keep 24 snapshots at least an hour apart
    ceph fs snap-schedule retention add / d 7 # and 7 snapshots at least a day apart
    ceph fs snap-schedule retention remove / h 24 # remove retention for 24 hourlies
    ceph fs snap-schedule retention add / 24h4w # add 24 hourly and 4 weekly to retention
    ceph fs snap-schedule retention remove / 7d4w # remove 7 daily and 4 weekly, leaves 24 hourly

.. note:: When adding a path to snap-schedule, remember to strip off the mount
   point path prefix. Paths to snap-schedule should start at the appropriate
   CephFS file system root and not at the host file system root.
   E.g. if the Ceph File System is mounted at ``/mnt`` and the path under which
   snapshots need to be taken is ``/mnt/some/path`` then the actual path required
   by snap-schedule is only ``/some/path``.

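In that situation, hourly snapshots of ``/mnt/some/path`` would be scheduled
with (reusing the paths from the note above)::

    ceph fs snap-schedule add /some/path 1h
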
.. note:: The "created" field in the snap-schedule status command output is the
   timestamp at which the schedule was created. The "created" timestamp has
   nothing to do with the creation of actual snapshots. The actual snapshot
   creation is accounted for in the "created_count" field, which is a
   cumulative count of the total number of snapshots created so far.

Active and inactive schedules
-----------------------------
Snapshot schedules can be added for a path that doesn't exist yet in the
directory tree. Similarly a path can be removed without affecting any snapshot
schedules on that path.
If a directory is not present when a snapshot is scheduled to be taken, the
schedule will be set to inactive and will be excluded from scheduling until
it is activated again.
A schedule can manually be set to inactive to pause the creation of scheduled
snapshots.
The module provides the `activate` and `deactivate` subcommands for this
purpose.

Examples::

    ceph fs snap-schedule activate / # activate all schedules on the root directory
    ceph fs snap-schedule deactivate / 1d # deactivates daily snapshots on the root directory

Limitations
-----------
Snapshots are scheduled using Python Timers. Under normal circumstances
specifying `1h` as the schedule will result in snapshots 1 hour apart fairly
precisely. If the mgr daemon is under heavy load however, the Timer threads
might not get scheduled right away, resulting in a slightly delayed snapshot. If
this happens, the next snapshot will be scheduled as if the previous one was not
delayed, i.e. one or more delayed snapshots will not cause drift in the overall
schedule.

In order to somewhat limit the overall number of snapshots in a file system, the
module will only keep a maximum of 50 snapshots per directory. If the retention
policy results in more than 50 retained snapshots, the retention list will be
shortened to the newest 50 snapshots.

Data storage
------------
The snapshot schedule data is stored in a RADOS object in the CephFS metadata
pool. At runtime all data lives in an SQLite database that is serialized and
stored as a RADOS object.
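
The object can be inspected with standard RADOS tools if needed. For example,
assuming the metadata pool is named ``cephfs_metadata``, the pool's objects can
be listed with::

    rados -p cephfs_metadata ls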