========================================
ceph-volume -- Ceph OSD deployment tool
========================================

.. program:: ceph-volume

Synopsis
========

| **ceph-volume** [-h] [--cluster CLUSTER] [--log-level LOG_LEVEL]
| [--log-path LOG_PATH]

| **ceph-volume** **lvm** [ *trigger* | *create* | *activate* | *prepare*
| *zap* | *list* ]

| **ceph-volume** **simple** [ *trigger* | *scan* | *activate* ]

Description
===========

:program:`ceph-volume` is a single-purpose command line tool to deploy logical
volumes as OSDs, trying to maintain a similar API to ``ceph-disk`` when
preparing, activating, and creating OSDs.

It deviates from ``ceph-disk`` by not interacting with or relying on the udev
rules that come installed for Ceph. These rules allow automatic detection of
previously set up devices, which are in turn fed into ``ceph-disk`` to
activate them.

Commands
========

lvm
---

By making use of LVM tags, the ``lvm`` sub-command is able to store and later
re-discover and query devices associated with OSDs so that they can later be
activated.

**activate**

Enables a systemd unit that persists the OSD ID and its UUID (also called
``fsid`` in Ceph CLI tools), so that at boot time it can understand what OSD is
enabled and needs to be mounted.

Usage::

    ceph-volume lvm activate --bluestore <osd id> <osd fsid>

Optional arguments:

* [-h, --help] show the help message and exit
* [--auto-detect-objectstore] Automatically detect the objectstore by inspecting the OSD
* [--bluestore] bluestore objectstore (default)
* [--filestore] filestore objectstore
* [--all] Activate all OSDs found in the system
* [--no-systemd] Skip creating and enabling systemd units and starting OSD services

**prepare**

Prepares a logical volume to be used as an OSD and journal using a
``filestore`` or ``bluestore`` (default) setup. It will not create or modify
the logical volumes except for adding extra metadata.

Usage::

    ceph-volume lvm prepare --filestore --data <data lv> --journal <journal device>

Optional arguments:

* [-h, --help] show the help message and exit
* [--journal JOURNAL] A volume group name, path to a logical volume, or path to a device
* [--bluestore] Use the bluestore objectstore (default)
* [--block.wal] Path to a bluestore block.wal logical volume or partition
* [--block.db] Path to a bluestore block.db logical volume or partition
* [--filestore] Use the filestore objectstore
* [--dmcrypt] Enable encryption for the underlying OSD devices
* [--osd-id OSD_ID] Reuse an existing OSD id
* [--osd-fsid OSD_FSID] Reuse an existing OSD fsid
* [--crush-device-class] Define a CRUSH device class to assign the OSD to

Required arguments:

* --data A volume group name or a path to a logical volume

**create**

Wraps the two-step process of provisioning a new OSD (calling ``prepare``
first and then ``activate``) into a single step. The reason to prefer
``prepare`` and then ``activate`` is that it allows new OSDs to be introduced
into a cluster gradually, avoiding large amounts of data from being
rebalanced.

The single-call process unifies exactly what ``prepare`` and ``activate`` do,
with the convenience of doing it all at once. Flags and general usage are
equivalent to those of the ``prepare`` and ``activate`` subcommands.

**trigger**

This subcommand is not meant to be used directly. It is used by systemd to
proxy input to ``ceph-volume lvm activate``, parsing the input from systemd
and detecting the UUID and ID associated with an OSD.

Usage::

    ceph-volume lvm trigger <SYSTEMD-DATA>

The systemd "data" is expected to be in the format of::

    <OSD id>-<OSD uuid>

The logical volumes associated with the OSD must have been prepared previously,
so that all needed tags and metadata exist.

Positional arguments:

* <SYSTEMD_DATA> Data from a systemd unit containing the ID and UUID of the OSD.
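The systemd instance data described above can be split back into an OSD ID and UUID; because the UUID itself contains dashes, only the first dash can serve as the separator. A minimal Python sketch of that parsing (``parse_systemd_data`` is a hypothetical name for illustration, not ceph-volume's actual implementation):

```python
def parse_systemd_data(systemd_data):
    """Split systemd instance data of the form '<OSD id>-<OSD uuid>'.

    The OSD UUID contains dashes itself, so split on the first dash only.
    """
    osd_id, osd_uuid = systemd_data.split("-", 1)
    return osd_id, osd_uuid


# Example instance data as passed by a systemd unit:
osd_id, osd_uuid = parse_systemd_data("0-8715BEB4-15C5-49DE-BA6F-401086EC7B41")
```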

**list**

List devices or logical volumes associated with Ceph. A device is considered
associated if it has information relating to an OSD; this is verified by
querying LVM's metadata and correlating it with devices.

The logical volumes associated with the OSD must have been prepared previously
by ceph-volume so that all needed tags and metadata exist.

List a particular device, reporting all metadata about it::

    ceph-volume lvm list /dev/sda1

List a logical volume, along with all its metadata (``vg`` is the volume
group name, and ``lv`` the logical volume name)::

    ceph-volume lvm list {vg/lv}

Positional arguments:

* <DEVICE> Either in the form of ``vg/lv`` for logical volumes,
  ``/path/to/sda1`` or ``/path/to/sda`` for regular devices.

**zap**

Zaps the given logical volume or partition. If given a path to a logical
volume, it must be in the format of ``vg/lv``. Any filesystems present on the
given lv or partition will be removed and all data will be purged.

However, the lv or partition itself will be kept intact.

Usage, for logical volumes::

    ceph-volume lvm zap {vg/lv}

Usage, for logical partitions::

    ceph-volume lvm zap /dev/sdc1

Positional arguments:

* <DEVICE> Either in the form of ``vg/lv`` for logical volumes,
  ``/path/to/sda1`` or ``/path/to/sda`` for regular devices.

simple
------

Scan legacy OSD directories or data devices that may have been created by
``ceph-disk``, or manually.

**activate**

Enables a systemd unit that persists the OSD ID and its UUID (also called
``fsid`` in Ceph CLI tools), so that at boot time it can understand what OSD is
enabled and needs to be mounted, while reading information that was previously
created and persisted at ``/etc/ceph/osd/`` in JSON format.

Usage::

    ceph-volume simple activate --bluestore <osd id> <osd fsid>

Optional arguments:

* [-h, --help] show the help message and exit
* [--bluestore] bluestore objectstore (default)
* [--filestore] filestore objectstore

Note: It requires a matching JSON file with the following format::

    /etc/ceph/osd/<osd id>-<osd fsid>.json
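The location of that metadata file follows directly from the OSD ID and fsid. A small Python sketch of the path construction (``osd_json_path`` is a hypothetical helper for illustration; the layout follows the format above):

```python
from pathlib import Path


def osd_json_path(osd_id, osd_fsid):
    """Build the /etc/ceph/osd/<osd id>-<osd fsid>.json path that
    'ceph-volume simple activate' expects to find."""
    return Path("/etc/ceph/osd") / "{}-{}.json".format(osd_id, osd_fsid)


# Example: OSD 0 with its fsid, as reported by a previous scan.
path = osd_json_path("0", "8715BEB4-15C5-49DE-BA6F-401086EC7B41")
```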

**scan**

Scan a running OSD or a data device for OSD metadata that can later be used to
activate and manage the OSD with ceph-volume. The scan method will create a
JSON file with the required information, plus anything found in the OSD
directory as well.

Optionally, the JSON blob can be sent to stdout for further inspection.

Usage on data devices::

    ceph-volume simple scan <data device>

Running OSD directories::

    ceph-volume simple scan <path to osd dir>

Optional arguments:

* [-h, --help] show the help message and exit
* [--stdout] Send the JSON blob to stdout
* [--force] If the JSON file exists at destination, overwrite it

Required positional arguments:

* <DATA DEVICE or OSD DIR> Actual data partition or a path to the running OSD

**trigger**

This subcommand is not meant to be used directly. It is used by systemd to
proxy input to ``ceph-volume simple activate``, parsing the input from systemd
and detecting the UUID and ID associated with an OSD.

Usage::

    ceph-volume simple trigger <SYSTEMD-DATA>

The systemd "data" is expected to be in the format of::

    <OSD id>-<OSD uuid>

The JSON file associated with the OSD needs to have been persisted previously
by a scan (or manually), so that all needed metadata can be used.

Positional arguments:

* <SYSTEMD_DATA> Data from a systemd unit containing the ID and UUID of the OSD.

Availability
============

:program:`ceph-volume` is part of Ceph, a massively scalable, open-source,
distributed storage system. Please refer to the documentation at
http://docs.ceph.com/ for more information.

See also
========

:doc:`ceph-osd <ceph-osd>`\(8),
:doc:`ceph-disk <ceph-disk>`\(8),