========================================
ceph-volume -- Ceph OSD deployment tool
========================================

.. program:: ceph-volume
Synopsis
========

| **ceph-volume** [-h] [--cluster CLUSTER] [--log-level LOG_LEVEL]
| [--log-path LOG_PATH]

| **ceph-volume** **lvm** [ *trigger* | *create* | *activate* | *prepare* |
| *zap* | *list* | *batch* ]

| **ceph-volume** **simple** [ *trigger* | *scan* | *activate* ]
Description
===========

:program:`ceph-volume` is a single-purpose command line tool to deploy logical
volumes as OSDs, trying to maintain a similar API to ``ceph-disk`` when
preparing, activating, and creating OSDs.
It deviates from ``ceph-disk`` by not interacting with or relying on the udev
rules that come installed for Ceph. These rules allow automatic detection of
previously set up devices that are in turn fed into ``ceph-disk`` to activate
them.
LVM
===

By making use of LVM tags, the ``lvm`` sub-command is able to store and later
re-discover and query devices associated with OSDs so that they can later be
activated.
Subcommands:

**batch**
Creates OSDs from a list of devices using a ``filestore``
or ``bluestore`` (default) setup. It will create all necessary volume groups
and logical volumes required to have a working OSD.
Example usage with three devices::

    ceph-volume lvm batch --bluestore /dev/sda /dev/sdb /dev/sdc
Optional arguments:

* [-h, --help]  show the help message and exit
* [--bluestore]  Use the bluestore objectstore (default)
* [--filestore]  Use the filestore objectstore
* [--yes]  Skip the report and prompt to continue provisioning
* [--dmcrypt]  Enable encryption for the underlying OSD devices
* [--crush-device-class]  Define a CRUSH device class to assign the OSD to
* [--no-systemd]  Do not enable or create any systemd units
* [--report]  Report what the potential outcome would be for the
  current input (requires devices to be passed in)
* [--format]  Output format when reporting (used along with
  ``--report``); can be one of 'pretty' (default) or 'json'
Required positional arguments:

* <DEVICE>  Full path to a raw device, like ``/dev/sda``. Multiple
  ``<DEVICE>`` paths can be passed in.
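To preview what ``batch`` would do without provisioning anything, the
``--report`` flag can be combined with ``--format`` as described above (the
device paths here are purely illustrative)::

    ceph-volume lvm batch --report --format json --bluestore /dev/sdb /dev/sdc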
**activate**
Enables a systemd unit that persists the OSD ID and its UUID (also called
``fsid`` in Ceph CLI tools), so that at boot time it can understand what OSD is
enabled and needs to be mounted.

Usage::

    ceph-volume lvm activate --bluestore <osd id> <osd fsid>
Optional arguments:

* [-h, --help]  show the help message and exit
* [--auto-detect-objectstore]  Automatically detect the objectstore by
  inspecting the OSD
* [--bluestore]  bluestore objectstore (default)
* [--filestore]  filestore objectstore
* [--all]  Activate all OSDs found in the system
* [--no-systemd]  Skip creating and enabling systemd units and starting of OSD
  services

Multiple OSDs can be activated at once by using the (idempotent) ``--all``
flag::

    ceph-volume lvm activate --all
**prepare**
Prepares a logical volume to be used as an OSD and journal using a ``filestore``
or ``bluestore`` (default) setup. It will not create or modify the logical
volumes except for adding extra metadata.

Usage::

    ceph-volume lvm prepare --filestore --data <data lv> --journal <journal device>
Optional arguments:

* [-h, --help]  show the help message and exit
* [--journal JOURNAL]  A volume group name, a path to a logical volume, or a
  path to a device
* [--bluestore]  Use the bluestore objectstore (default)
* [--block.wal]  Path to a bluestore block.wal logical volume or partition
* [--block.db]  Path to a bluestore block.db logical volume or partition
* [--filestore]  Use the filestore objectstore
* [--dmcrypt]  Enable encryption for the underlying OSD devices
* [--osd-id OSD_ID]  Reuse an existing OSD id
* [--osd-fsid OSD_FSID]  Reuse an existing OSD fsid
* [--crush-device-class]  Define a CRUSH device class to assign the OSD to
Required arguments:

* [--data]  A volume group name or a path to a logical volume

For encrypting an OSD, the ``--dmcrypt`` flag must be added when preparing
(it is also supported in the ``create`` sub-command).
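For a ``bluestore`` OSD, the data volume and ``block.db`` placement might look
like this (the volume group, logical volume, and partition names are
illustrative)::

    ceph-volume lvm prepare --bluestore --data ceph-vg/block-lv --block.db /dev/sdc1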
**create**
Wraps the two-step process to provision a new OSD (calling ``prepare`` first
and then ``activate``) into a single step. The reason to prefer ``prepare``
and then ``activate`` is to gradually introduce new OSDs into a cluster and
avoid large amounts of data being rebalanced.

The single-call process unifies exactly what ``prepare`` and ``activate`` do,
with the convenience of doing it all at once. Flags and general usage are
equivalent to those of the ``prepare`` and ``activate`` subcommands.
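A minimal ``create`` invocation, assuming a single raw device that should
become a ``bluestore`` OSD (the device path is illustrative)::

    ceph-volume lvm create --bluestore --data /dev/sdb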
**trigger**
This subcommand is not meant to be used directly; it is used by systemd to
proxy input to ``ceph-volume lvm activate`` by parsing the input from systemd
and detecting the UUID and ID associated with an OSD.

Usage::

    ceph-volume lvm trigger <SYSTEMD-DATA>

The systemd "data" is expected to contain the ID and the UUID of the OSD.
The logical volumes associated with the OSD need to have been prepared
previously, so that all needed tags and metadata exist.

Positional arguments:

* <SYSTEMD_DATA>  Data from a systemd unit containing the ID and UUID of the
  OSD.
**list**
List devices or logical volumes associated with Ceph. An association is
determined if a device has information relating to an OSD. This is
verified by querying LVM's metadata and correlating it with devices.

The logical volumes associated with the OSD need to have been prepared
previously by ceph-volume so that all needed tags and metadata exist.
List a particular device, reporting all metadata about it::

    ceph-volume lvm list /dev/sda1

List a logical volume, along with all its metadata (``vg`` is the volume
group name, and ``lv`` the logical volume name)::

    ceph-volume lvm list {vg/lv}

Positional arguments:

* <DEVICE>  Either in the form of ``vg/lv`` for logical volumes,
  ``/path/to/sda1`` or ``/path/to/sda`` for regular devices.
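When no device is given, all devices associated with the cluster can be listed
instead (a sketch, assuming the positional argument is optional as implied
above)::

    ceph-volume lvm list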
**zap**
Zaps the given logical volume or partition. If given a path to a logical
volume, it must be in the format of ``vg/lv``. Any filesystems present
on the given lv or partition will be removed and all data will be purged.

However, the lv or partition will be kept intact.

Usage, for logical volumes::

    ceph-volume lvm zap {vg/lv}

Usage, for partitions::

    ceph-volume lvm zap /dev/sdc1

Positional arguments:

* <DEVICE>  Either in the form of ``vg/lv`` for logical volumes,
  ``/path/to/sda1`` or ``/path/to/sda`` for regular devices.
Simple
======

Scan legacy OSD directories or data devices that may have been created by
``ceph-disk``, or set up manually.
Subcommands:

**activate**
Enables a systemd unit that persists the OSD ID and its UUID (also called
``fsid`` in Ceph CLI tools), so that at boot time it can understand what OSD is
enabled and needs to be mounted, while reading information that was previously
created and persisted at ``/etc/ceph/osd/`` in JSON format.

Usage::

    ceph-volume simple activate --bluestore <osd id> <osd fsid>
Optional arguments:

* [-h, --help]  show the help message and exit
* [--bluestore]  bluestore objectstore (default)
* [--filestore]  filestore objectstore

Note: It requires a matching JSON file with the following format::

    /etc/ceph/osd/<osd id>-<osd fsid>.json
**scan**
Scan a running OSD or data device for metadata that can later be used to
activate and manage the OSD with ceph-volume. The scan method will create a
JSON file with the required information plus anything found in the OSD
directory.

Optionally, the JSON blob can be sent to stdout for further inspection.

Usage on data devices::

    ceph-volume simple scan <data device>

Running OSD directories::

    ceph-volume simple scan <path to osd dir>
Optional arguments:

* [-h, --help]  show the help message and exit
* [--stdout]  Send the JSON blob to stdout
* [--force]  If the JSON file exists at destination, overwrite it

Required positional arguments:

* <DATA DEVICE or OSD DIR>  Actual data partition or a path to the running OSD
  directory.
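To inspect the JSON before (or instead of) persisting it, ``--stdout`` can be
combined with a data device (the partition path is illustrative)::

    ceph-volume simple scan --stdout /dev/sdb1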
**trigger**
This subcommand is not meant to be used directly; it is used by systemd to
proxy input to ``ceph-volume simple activate`` by parsing the input from
systemd and detecting the UUID and ID associated with an OSD.

Usage::

    ceph-volume simple trigger <SYSTEMD-DATA>

The systemd "data" is expected to contain the ID and the UUID of the OSD.
The JSON file associated with the OSD needs to have been persisted previously
by a scan (or manually), so that all needed metadata can be used.

Positional arguments:

* <SYSTEMD_DATA>  Data from a systemd unit containing the ID and UUID of the
  OSD.
Availability
============

:program:`ceph-volume` is part of Ceph, a massively scalable, open-source,
distributed storage system. Please refer to the documentation at
http://docs.ceph.com/ for more information.
See also
========

:doc:`ceph-osd <ceph-osd>`\(8),
:doc:`ceph-disk <ceph-disk>`\(8)