==========================
Snapshot Scheduling Module
==========================

This module implements scheduled snapshots for CephFS.
It provides a user interface to add, query and remove snapshot schedules and
retention policies, as well as a scheduler that takes the snapshots and prunes
existing snapshots accordingly.

How to enable
=============

The *snap_schedule* module is enabled with::

    ceph mgr module enable snap_schedule

Usage
=====

This module uses :doc:`/dev/cephfs-snapshots`; please consult that
documentation as well.

This module's subcommands live under the `ceph fs snap-schedule` namespace.
Arguments can either be supplied as positional arguments or as keyword
arguments. Once a keyword argument is encountered, all following arguments are
assumed to be keyword arguments too.

Snapshot schedules are identified by path, their repeat interval and their
start time. The repeat interval defines the time between two subsequent
snapshots. It is specified by a number and a period multiplier, one of
`h(our)`, `d(ay)` and `w(eek)`. E.g. a repeat interval of `12h` specifies one
snapshot every 12 hours.
The start time is specified as a time string (more details about passing times
below). By default the start time is last midnight. So when a snapshot
schedule with repeat interval `1h` is added at 13:50 with the default start
time, the first snapshot will be taken at 14:00.

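
The arithmetic behind this example can be sketched in Python (a simplified
illustration, not the module's actual implementation)::

    from datetime import datetime, timedelta

    def first_snapshot_after(start: datetime, repeat: timedelta,
                             now: datetime) -> datetime:
        """Return the first schedule boundary at or after `now`."""
        if now <= start:
            return start
        elapsed = now - start
        # Whole intervals between start and now, rounded up
        # (double negation gives ceiling division for timedeltas).
        intervals = -(-elapsed // repeat)
        return start + intervals * repeat

    # Schedule `1h` added at 13:50 with the default start (last midnight):
    start = datetime(2024, 1, 1, 0, 0)
    now = datetime(2024, 1, 1, 13, 50)
    print(first_snapshot_after(start, timedelta(hours=1), now))
    # -> 2024-01-01 14:00:00
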
Retention specifications are identified by path and the retention spec itself.
A retention spec consists of either a number and a time period separated by a
space, or of concatenated pairs of `<number><time period>`.
The semantics are that a spec will ensure that `<number>` snapshots are kept
that are at least `<time period>` apart. For example, `7d` means the user
wants to keep 7 snapshots that are at least one day (but potentially longer)
apart from each other.
The following time periods are recognized: `h(our)`, `d(ay)`, `w(eek)`,
`m(onth)`, `y(ear)` and `n`. The latter is a special modifier where e.g. `10n`
means keep the last 10 snapshots regardless of timing.

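
For illustration, a retention spec in the concatenated form can be parsed
along these lines (a sketch, not the module's implementation)::

    import re

    # Period multipliers from the spec grammar described above.
    PERIODS = 'hdwmyn'

    def parse_retention(spec: str) -> dict:
        """Parse a concatenated retention spec such as '24h4w' into
        {'h': 24, 'w': 4}."""
        pairs = re.findall(r'(\d+)([' + PERIODS + r'])', spec)
        return {period: int(count) for count, period in pairs}

    print(parse_retention('24h4w'))   # {'h': 24, 'w': 4}
    print(parse_retention('7d10n'))   # {'d': 7, 'n': 10}
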
All subcommands take optional `fs` and `subvol` arguments to specify paths in
multi-fs setups and :doc:`/cephfs/fs-volumes` managed setups. If not passed,
`fs` defaults to the first file system listed in the fs_map and `subvol`
defaults to nothing.
When using :doc:`/cephfs/fs-volumes` the argument `fs` is equivalent to a
`volume`.

When a timestamp is passed (the `start` argument in the `add`, `remove`,
`activate` and `deactivate` subcommands) the ISO format `%Y-%m-%dT%H:%M:%S`
will always be accepted. When Python 3.7 or newer is used, or
https://github.com/movermeyer/backports.datetime_fromisoformat is installed,
any valid ISO timestamp that can be parsed by Python's
`datetime.fromisoformat` is accepted.

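
A sketch of the parsing behaviour described above (the exact fallback logic
inside the module may differ)::

    from datetime import datetime

    def parse_timestamp(ts: str) -> datetime:
        # The strptime format below is always accepted.
        try:
            return datetime.strptime(ts, '%Y-%m-%dT%H:%M:%S')
        except ValueError:
            # On Python >= 3.7 (or with backports.datetime_fromisoformat
            # installed and monkey-patched) a wider range of ISO
            # timestamps can be handled.
            return datetime.fromisoformat(ts)

    print(parse_timestamp('2024-01-01T13:50:00'))
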
When no subcommand is supplied a synopsis is printed::

    #> ceph fs snap-schedule
    no valid command found; 8 closest matches:
    fs snap-schedule status [<path>] [<subvol>] [<fs>] [<format>]
    fs snap-schedule list <path> [<subvol>] [--recursive] [<fs>] [<format>]
    fs snap-schedule add <path> <snap_schedule> [<start>] [<fs>] [<subvol>]
    fs snap-schedule remove <path> [<repeat>] [<start>] [<subvol>] [<fs>]
    fs snap-schedule retention add <path> <retention_spec_or_period> [<retention_count>] [<fs>] [<subvol>]
    fs snap-schedule retention remove <path> <retention_spec_or_period> [<retention_count>] [<fs>] [<subvol>]
    fs snap-schedule activate <path> [<repeat>] [<start>] [<subvol>] [<fs>]
    fs snap-schedule deactivate <path> [<repeat>] [<start>] [<subvol>] [<fs>]
    Error EINVAL: invalid command

Inspect snapshot schedules
--------------------------

The module offers two subcommands to inspect existing schedules: `list` and
`status`. Both offer plain and JSON output via the optional `format` argument.
The default is plain.
The `list` subcommand will list all schedules on a path in a short single-line
format. It offers a `recursive` argument to list all schedules in the
specified directory and all contained directories.
The `status` subcommand prints all available schedules and retention specs for
a path.

Examples::

    ceph fs snap-schedule status /
    ceph fs snap-schedule status /foo/bar format=json
    ceph fs snap-schedule list /
    ceph fs snap-schedule list / recursive=true # list all schedules in the tree

Add and remove schedules
------------------------

The `add` and `remove` subcommands add and remove snapshot schedules
respectively. Both require at least a `path` argument; `add` additionally
requires a `schedule` argument as described in the USAGE section.

Multiple different schedules can be added to a path. Two schedules are
considered different from each other if they differ in their repeat interval
or their start time.

If multiple schedules have been set on a path, `remove` can remove individual
schedules on a path by specifying the exact repeat interval and start time, or
the subcommand can remove all schedules on a path when just a `path` is
specified.

Examples::

    ceph fs snap-schedule add / 1h
    ceph fs snap-schedule add / 1h 11:55
    ceph fs snap-schedule add / 2h 11:55
    ceph fs snap-schedule remove / 1h 11:55 # removes one single schedule
    ceph fs snap-schedule remove / 1h # removes all schedules with repeat=1h
    ceph fs snap-schedule remove / # removes all schedules on path /

Add and remove retention policies
---------------------------------

The `retention add` and `retention remove` subcommands allow the user to
manage retention policies. One path has exactly one retention policy. A policy
can however contain multiple count-time period pairs in order to specify
complex retention policies.
Retention policies can be added and removed individually or in bulk via the
forms `ceph fs snap-schedule retention add <path> <time period> <count>` and
`ceph fs snap-schedule retention add <path> <count><time period>[<count><time period>]`.

Examples::

    ceph fs snap-schedule retention add / h 24 # keep 24 snapshots at least an hour apart
    ceph fs snap-schedule retention add / d 7 # and 7 snapshots at least a day apart
    ceph fs snap-schedule retention remove / h 24 # remove retention for 24 hourlies
    ceph fs snap-schedule retention add / 24h4w # add 24 hourly and 4 weekly to retention
    ceph fs snap-schedule retention remove / 7d4w # remove 7 daily and 4 weekly, leaves 24 hourly

Active and inactive schedules
-----------------------------

Snapshot schedules can be added for a path that doesn't exist yet in the
directory tree. Similarly a path can be removed without affecting any snapshot
schedules on that path.
If a directory is not present when a snapshot is scheduled to be taken, the
schedule will be set to inactive and will be excluded from scheduling until
it is activated again.
A schedule can manually be set to inactive to pause the creation of scheduled
snapshots.
The module provides the `activate` and `deactivate` subcommands for this
purpose.

Examples::

    ceph fs snap-schedule activate / # activate all schedules on the root directory
    ceph fs snap-schedule deactivate / 1d # deactivates daily snapshots on the root directory

Limitations
-----------

Snapshots are scheduled using Python Timers. Under normal circumstances
specifying `1h` as the schedule will result in snapshots 1 hour apart fairly
precisely. If the mgr daemon is under heavy load however, the Timer threads
might not get scheduled right away, resulting in a slightly delayed snapshot.
If this happens, the next snapshot will be scheduled as if the previous one
was not delayed, i.e. one or more delayed snapshots will not cause drift in
the overall schedule.

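
The drift-free behaviour follows from anchoring each snapshot on the
schedule's start time rather than on the previous snapshot. A sketch
(illustrative, not the module's code)::

    from datetime import datetime, timedelta

    def next_snapshot(start: datetime, repeat: timedelta,
                      now: datetime) -> datetime:
        # Anchor on the start time, not on the previous snapshot, so a
        # delayed snapshot does not push all later ones back.
        elapsed = now - start
        intervals = elapsed // repeat + 1  # next whole interval boundary
        return start + intervals * repeat

    start = datetime(2024, 1, 1, 0, 0)
    hourly = timedelta(hours=1)
    # The 14:00 snapshot fired late, at 14:03; the following one is still
    # computed relative to `start` and lands at 15:00, not 15:03.
    print(next_snapshot(start, hourly, datetime(2024, 1, 1, 14, 3)))
    # -> 2024-01-01 15:00:00
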
In order to somewhat limit the overall number of snapshots in a file system,
the module will only keep a maximum of 50 snapshots per directory. If the
retention policy results in more than 50 retained snapshots, the retention
list will be shortened to the newest 50 snapshots.

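
A sketch of that cap (the constant and function names here are illustrative
only, not the module's)::

    MAX_SNAPS_PER_PATH = 50  # the documented per-directory limit

    def cap_retained(snapshot_times):
        """Shorten a retention list to the newest 50 snapshots."""
        return sorted(snapshot_times, reverse=True)[:MAX_SNAPS_PER_PATH]
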
Data storage
------------

The snapshot schedule data is stored in a RADOS object in the CephFS metadata
pool. At runtime all data lives in a SQLite database that is serialized and
stored as a RADOS object.
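
The objects in the metadata pool can be listed with the librados Python
bindings, for example; the pool name below is an assumption for a typical
deployment and must be adjusted::

    import rados

    # Assumed config path and pool name; adjust for your cluster.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('cephfs.a.meta')  # assumed pool name
        for obj in ioctx.list_objects():
            print(obj.key)
        ioctx.close()
    finally:
        cluster.shutdown()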