==========================
Snapshot Scheduling Module
==========================
This module implements scheduled snapshots for CephFS.
It provides a user interface to add, query and remove snapshot schedules and
retention policies, as well as a scheduler that takes the snapshots and prunes
existing snapshots accordingly.


How to enable
=============

The *snap_schedule* module is enabled with::

    ceph mgr module enable snap_schedule

Usage
=====

This module uses :doc:`/dev/cephfs-snapshots`; please consult that
documentation as well.

This module's subcommands live under the `ceph fs snap-schedule` namespace.
Arguments can either be supplied as positional arguments or as keyword
arguments. Once a keyword argument is encountered, all subsequent arguments are
assumed to be keyword arguments too.

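For example, `path` can be given positionally while later arguments are passed
as keywords (a sketch; the file system name `cephfs` is an assumption)::

    # path positional, remaining arguments as keywords
    ceph fs snap-schedule status / --fs=cephfs --format=json
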
Snapshot schedules are identified by path, their repeat interval and their start
time. The repeat interval defines the time between two subsequent snapshots. It
is specified by a number and a period multiplier, one of `h(our)`, `d(ay)` and
`w(eek)`. E.g. a repeat interval of `12h` specifies one snapshot every 12 hours.
The start time is specified as a time string (more details about passing times
below). By default the start time is last midnight. So when a snapshot schedule
with repeat interval `1h` is added at 13:50 with the default start time, the
first snapshot will be taken at 14:00.

Retention specifications are identified by path and the retention spec itself. A
retention spec consists of either a number and a time period separated by a
space or concatenated pairs of `<number><time period>`.
The semantics are that a spec will ensure `<number>` snapshots are kept that are
at least `<time period>` apart. For example `7d` means the user wants to keep 7
snapshots that are at least one day (but potentially longer) apart from each other.
The following time periods are recognized: `h(our), d(ay), w(eek), m(onth),
y(ear)` and `n`. The latter is a special modifier where e.g. `10n` means keep
the last 10 snapshots regardless of timing.

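For instance, the `7d` case above can be expressed in either form, and the `n`
modifier follows the same pattern (a sketch based on the subcommand forms
described below)::

    ceph fs snap-schedule retention add / d 7   # keep 7 snapshots at least a day apart
    ceph fs snap-schedule retention add / 7d    # equivalent, concatenated form
    ceph fs snap-schedule retention add / 10n   # keep the last 10 snapshots regardless of timing
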
All subcommands take optional `fs` and `subvol` arguments to specify paths in
multi-fs setups and :doc:`/cephfs/fs-volumes` managed setups. If not passed,
`fs` defaults to the first file system listed in the fs_map and `subvol`
defaults to nothing.
When using :doc:`/cephfs/fs-volumes` the argument `fs` is equivalent to a
`volume`.

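For example, on a cluster with multiple file systems (a sketch; the names
`cephfs2` and `subvol1` are assumptions)::

    ceph fs snap-schedule list / --fs=cephfs2        # schedules on another file system
    ceph fs snap-schedule status / --subvol=subvol1  # status for a subvolume
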
When a timestamp is passed (the `start` argument in the `add`, `remove`,
`activate` and `deactivate` subcommands) the ISO format `%Y-%m-%dT%H:%M:%S` will
always be accepted. When Python 3.7 or newer is used or
https://github.com/movermeyer/backports.datetime_fromisoformat is installed, any
valid ISO timestamp that can be parsed by Python's `datetime.fromisoformat` is
accepted.

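For instance, to start an hourly schedule at a fixed point in time using the
always-accepted format (a sketch; path and date are placeholders)::

    ceph fs snap-schedule add / 1h 2021-07-01T00:00:00
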
When no subcommand is supplied a synopsis is printed::

    #> ceph fs snap-schedule
    no valid command found; 8 closest matches:
    fs snap-schedule status [<path>] [<subvol>] [<fs>] [<format>]
    fs snap-schedule list <path> [<subvol>] [--recursive] [<fs>] [<format>]
    fs snap-schedule add <path> <snap_schedule> [<start>] [<fs>] [<subvol>]
    fs snap-schedule remove <path> [<repeat>] [<start>] [<subvol>] [<fs>]
    fs snap-schedule retention add <path> <retention_spec_or_period> [<retention_count>] [<fs>] [<subvol>]
    fs snap-schedule retention remove <path> <retention_spec_or_period> [<retention_count>] [<fs>] [<subvol>]
    fs snap-schedule activate <path> [<repeat>] [<start>] [<subvol>] [<fs>]
    fs snap-schedule deactivate <path> [<repeat>] [<start>] [<subvol>] [<fs>]
    Error EINVAL: invalid command

Inspect snapshot schedules
--------------------------

The module offers two subcommands to inspect existing schedules: `list` and
`status`. Both offer plain and json output via the optional `format` argument.
The default is plain.
The `list` subcommand will list all schedules on a path in a short single line
format. It offers a `recursive` argument to list all schedules in the specified
directory and all contained directories.
The `status` subcommand prints all available schedules and retention specs for a
path.

Examples::

    ceph fs snap-schedule status /
    ceph fs snap-schedule status /foo/bar --format=json
    ceph fs snap-schedule list /
    ceph fs snap-schedule list / --recursive=true # list all schedules in the tree


Add and remove schedules
------------------------
The `add` and `remove` subcommands add and remove snapshot schedules
respectively. Both require at least a `path` argument; `add` additionally
requires a `schedule` argument as described in the USAGE section.

Multiple different schedules can be added to a path. Two schedules are considered
different from each other if they differ in their repeat interval and their
start time.

If multiple schedules have been set on a path, `remove` can remove individual
schedules on a path by specifying the exact repeat interval and start time, or
the subcommand can remove all schedules on a path when just a `path` is
specified.

Examples::

    ceph fs snap-schedule add / 1h
    ceph fs snap-schedule add / 1h 11:55
    ceph fs snap-schedule add / 2h 11:55
    ceph fs snap-schedule remove / 1h 11:55 # removes one single schedule
    ceph fs snap-schedule remove / 1h # removes all schedules with --repeat=1h
    ceph fs snap-schedule remove / # removes all schedules on path /

Add and remove retention policies
---------------------------------
The `retention add` and `retention remove` subcommands manage retention
policies. One path has exactly one retention policy. A policy can
however contain multiple count-time period pairs in order to specify complex
retention policies.
Retention policies can be added and removed individually or in bulk via the
forms `ceph fs snap-schedule retention add <path> <time period> <count>` and
`ceph fs snap-schedule retention add <path> <countTime period>[countTime period]`.

Examples::

    ceph fs snap-schedule retention add / h 24 # keep 24 snapshots at least an hour apart
    ceph fs snap-schedule retention add / d 7 # and 7 snapshots at least a day apart
    ceph fs snap-schedule retention remove / h 24 # remove retention for 24 hourlies
    ceph fs snap-schedule retention add / 24h4w # add 24 hourly and 4 weekly to retention
    ceph fs snap-schedule retention remove / 7d4w # remove 7 daily and 4 weekly, leaves 24 hourly

Active and inactive schedules
-----------------------------
Snapshot schedules can be added for a path that doesn't exist yet in the
directory tree. Similarly a path can be removed without affecting any snapshot
schedules on that path.
If a directory is not present when a snapshot is scheduled to be taken, the
schedule will be set to inactive and will be excluded from scheduling until
it is activated again.
A schedule can manually be set to inactive to pause the creation of scheduled
snapshots.
The module provides the `activate` and `deactivate` subcommands for this
purpose.

Examples::

    ceph fs snap-schedule activate / # activate all schedules on the root directory
    ceph fs snap-schedule deactivate / 1d # deactivates daily snapshots on the root directory

Limitations
-----------
Snapshots are scheduled using Python Timers. Under normal circumstances
specifying `1h` as the schedule will result in snapshots 1 hour apart fairly
precisely. If the mgr daemon is under heavy load however, the Timer threads
might not get scheduled right away, resulting in a slightly delayed snapshot. If
this happens, the next snapshot will be scheduled as if the previous one was not
delayed, i.e. one or more delayed snapshots will not cause drift in the overall
schedule.

In order to somewhat limit the overall number of snapshots in a file system, the
module will only keep a maximum of 50 snapshots per directory. If the retention
policy results in more than 50 retained snapshots, the retention list will be
shortened to the newest 50 snapshots.

Data storage
------------
The snapshot schedule data is stored in a rados object in the cephfs metadata
pool. At runtime all data lives in a sqlite database that is serialized and
stored as a rados object.
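
To verify that the object exists you can list the objects in the metadata pool
(a sketch; the pool name `cephfs_metadata` is an assumption and depends on your
deployment)::

    # list objects in the metadata pool; adjust the pool name to your cluster
    rados -p cephfs_metadata ls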