=======================================================
ceph-volume -- Ceph OSD deployment and inspection tool
=======================================================

.. program:: ceph-volume

Synopsis
========

| **ceph-volume** [-h] [--cluster CLUSTER] [--log-level LOG_LEVEL]
| [--log-path LOG_PATH]

| **ceph-volume** **inventory**

| **ceph-volume** **lvm** [ *trigger* | *create* | *activate* | *prepare* |
| *zap* | *list* | *batch* ]

| **ceph-volume** **simple** [ *trigger* | *scan* | *activate* ]

Description
===========

:program:`ceph-volume` is a single-purpose command line tool to deploy logical
volumes as OSDs, trying to maintain a similar API to ``ceph-disk`` when
preparing, activating, and creating OSDs.

It deviates from ``ceph-disk`` by not interacting with or relying on the udev
rules that come installed for Ceph. These rules allow automatic detection of
previously set up devices that are in turn fed into ``ceph-disk`` to activate
them.

Commands
========

inventory
---------

This subcommand provides information about a host's physical disk inventory and
reports metadata about these disks. Among this metadata one can find disk
specific data items (like model, size, rotational or solid state) as well as
data items specific to Ceph's use of a device, such as whether it is available
for use with Ceph or whether logical volumes are present.

Examples::

    ceph-volume inventory /dev/sda
    ceph-volume inventory --format json-pretty

Optional arguments:

* [-h, --help] show the help message and exit
* [--format] report format, valid values are ``plain`` (default),
  ``json`` and ``json-pretty``

lvm
---

By making use of LVM tags, the ``lvm`` sub-command is able to store and later
re-discover and query devices associated with OSDs so that they can later be
activated.
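The tags used for this are plain LVM tags, so they can be inspected with the
standard LVM tooling. As a sketch (the exact tag names, such as
``ceph.osd_id``, are set by ceph-volume on prepared volumes)::

    lvs -o lv_name,lv_tags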

**batch**
Creates OSDs from a list of devices using a ``filestore``
or ``bluestore`` (default) setup. It will create all necessary volume groups
and logical volumes required to have a working OSD.

Example usage with three devices::

    ceph-volume lvm batch --bluestore /dev/sda /dev/sdb /dev/sdc

Optional arguments:

* [-h, --help] show the help message and exit
* [--bluestore] Use the bluestore objectstore (default)
* [--filestore] Use the filestore objectstore
* [--yes] Skip the report and prompt to continue provisioning
* [--prepare] Only prepare OSDs, do not activate
* [--dmcrypt] Enable encryption for the underlying OSD devices
* [--crush-device-class] Define a CRUSH device class to assign the OSD to
* [--no-systemd] Do not enable or create any systemd units
* [--report] Report what the potential outcome would be for the
  current input (requires devices to be passed in)
* [--format] Output format when reporting (used along with
  --report), can be one of ``pretty`` (default) or ``json``
* [--block-db-size] Set (or override) the "bluestore_block_db_size" value,
  in bytes
* [--journal-size] Override the "osd_journal_size" value, in megabytes

Required positional arguments:

* <DEVICE> Full path to a raw device, like ``/dev/sda``. Multiple
  ``<DEVICE>`` paths can be passed in.
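Using the ``--report`` flag described above, the outcome of a ``batch`` run
can be previewed without provisioning anything, for example::

    ceph-volume lvm batch --report --format json /dev/sdb /dev/sdc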

**activate**
Enables a systemd unit that persists the OSD ID and its UUID (also called
``fsid`` in Ceph CLI tools), so that at boot time it can understand what OSD is
enabled and needs to be mounted.

Usage::

    ceph-volume lvm activate --bluestore <osd id> <osd fsid>

Optional arguments:

* [-h, --help] show the help message and exit
* [--auto-detect-objectstore] Automatically detect the objectstore by
  inspecting the OSD
* [--bluestore] bluestore objectstore (default)
* [--filestore] filestore objectstore
* [--all] Activate all OSDs found in the system
* [--no-systemd] Skip creating and enabling systemd units and starting of OSD
  units

Multiple OSDs can be activated at once by using the (idempotent) ``--all``
flag::

    ceph-volume lvm activate --all
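Behind the scenes, activation enables a systemd unit templated on the OSD ID
and fsid. Assuming the default unit naming used by ceph-volume, its state can
be checked with something like (the fsid below is illustrative)::

    systemctl status ceph-volume@lvm-0-8715BEB4-15C5-49DE-BA6F-401086EC7B41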

**prepare**
Prepares a logical volume to be used as an OSD and journal using a ``filestore``
or ``bluestore`` (default) setup. It will not create or modify the logical
volumes except for adding extra metadata.

Usage::

    ceph-volume lvm prepare --filestore --data <data lv> --journal <journal device>

Optional arguments:

* [-h, --help] show the help message and exit
* [--journal JOURNAL] A volume group name, a path to a logical volume, or
  a path to a device
* [--bluestore] Use the bluestore objectstore (default)
* [--block.wal] Path to a bluestore block.wal logical volume or partition
* [--block.db] Path to a bluestore block.db logical volume or partition
* [--filestore] Use the filestore objectstore
* [--dmcrypt] Enable encryption for the underlying OSD devices
* [--osd-id OSD_ID] Reuse an existing OSD id
* [--osd-fsid OSD_FSID] Reuse an existing OSD fsid
* [--crush-device-class] Define a CRUSH device class to assign the OSD to

Required arguments:

* --data A volume group name or a path to a logical volume

To encrypt an OSD, the ``--dmcrypt`` flag must be added when preparing
(it is also supported in the ``create`` sub-command).
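For example, to prepare an encrypted bluestore OSD (the ``{vg/lv}``
placeholder stands for an existing volume group and logical volume)::

    ceph-volume lvm prepare --bluestore --dmcrypt --data {vg/lv}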

**create**
Wraps the two-step process to provision a new OSD (calling ``prepare`` first
and then ``activate``) into a single step. The reason to prefer ``prepare``
and then ``activate`` is to gradually introduce new OSDs into a cluster,
avoiding large amounts of data from being rebalanced at once.

The single-call process unifies exactly what ``prepare`` and ``activate`` do,
with the convenience of doing it all at once. Flags and general usage are
equivalent to those of the ``prepare`` and ``activate`` subcommands.
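Since the flags are equivalent to those of ``prepare``, a one-step bluestore
OSD creation looks like, for example::

    ceph-volume lvm create --bluestore --data {vg/lv}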

**trigger**
This subcommand is not meant to be used directly. It is called by systemd, and
proxies input to ``ceph-volume lvm activate`` by parsing the input from
systemd and detecting the UUID and ID associated with an OSD.

Usage::

    ceph-volume lvm trigger <SYSTEMD-DATA>

The systemd "data" is expected to be in the format of::

    <osd id>-<osd uuid>

The lvs associated with the OSD need to have been prepared previously,
so that all needed tags and metadata exist.

Positional arguments:

* <SYSTEMD_DATA> Data from a systemd unit containing ID and UUID of the OSD.

**list**
List devices or logical volumes associated with Ceph. An association is
determined if a device has information relating to an OSD. This is
verified by querying LVM's metadata and correlating it with devices.

The lvs associated with the OSD need to have been prepared previously by
ceph-volume so that all needed tags and metadata exist.

List a particular device, reporting all metadata about it::

    ceph-volume lvm list /dev/sda1

List a logical volume, along with all its metadata (vg is a volume
group, and lv the logical volume name)::

    ceph-volume lvm list {vg/lv}

Positional arguments:

* <DEVICE> Either in the form of ``vg/lv`` for logical volumes,
  ``/path/to/sda1`` or ``/path/to/sda`` for regular devices.
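The listing can also be requested in machine-readable form, assuming the
installed release supports the ``--format`` flag for ``list``::

    ceph-volume lvm list --format json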

**zap**
Zaps the given logical volume or partition. If given a path to a logical
volume it must be in the format of ``vg/lv``. Any filesystems present
on the given lv or partition will be removed and all data will be purged.

However, the lv or partition itself will be kept intact.

Usage, for logical volumes::

    ceph-volume lvm zap {vg/lv}

Usage, for logical partitions::

    ceph-volume lvm zap /dev/sdc1

Positional arguments:

* <DEVICE> Either in the form of ``vg/lv`` for logical volumes,
  ``/path/to/sda1`` or ``/path/to/sda`` for regular devices.
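If the logical volume or partition should not be kept, newer ceph-volume
releases also offer a ``--destroy`` flag that removes the underlying volume
group and logical volume entirely (check the installed version's help
output)::

    ceph-volume lvm zap --destroy /dev/sdc1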

simple
------

Scan legacy OSD directories or data devices that may have been created by
ceph-disk, or manually.

**activate**
Enables a systemd unit that persists the OSD ID and its UUID (also called
``fsid`` in Ceph CLI tools), so that at boot time it can understand what OSD is
enabled and needs to be mounted, while reading information that was previously
created and persisted at ``/etc/ceph/osd/`` in JSON format.

Usage::

    ceph-volume simple activate --bluestore <osd id> <osd fsid>

Optional arguments:

* [-h, --help] show the help message and exit
* [--bluestore] bluestore objectstore (default)
* [--filestore] filestore objectstore

Note: It requires a matching JSON file with the following format::

    /etc/ceph/osd/<osd id>-<osd fsid>.json

**scan**
Scan a running OSD or data device for OSD metadata that can later be used to
activate and manage the OSD with ceph-volume. The scan method will create a
JSON file with the required information plus anything found in the OSD
directory as well.

Optionally, the JSON blob can be sent to stdout for further inspection.

Usage on data devices::

    ceph-volume simple scan <data device>

Running OSD directories::

    ceph-volume simple scan <path to osd dir>

Optional arguments:

* [-h, --help] show the help message and exit
* [--stdout] Send the JSON blob to stdout
* [--force] If the JSON file exists at destination, overwrite it

Required positional arguments:

* <DATA DEVICE or OSD DIR> Actual data partition or a path to the running OSD
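For example, combining the flags above to inspect the generated metadata
without writing a file (the device path is illustrative)::

    ceph-volume simple scan --stdout /dev/sdb1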

**trigger**
This subcommand is not meant to be used directly. It is called by systemd, and
proxies input to ``ceph-volume simple activate`` by parsing the input from
systemd and detecting the UUID and ID associated with an OSD.

Usage::

    ceph-volume simple trigger <SYSTEMD-DATA>

The systemd "data" is expected to be in the format of::

    <osd id>-<osd uuid>

The JSON file associated with the OSD needs to have been persisted previously
by a scan (or manually), so that all needed metadata can be used.

Positional arguments:

* <SYSTEMD_DATA> Data from a systemd unit containing ID and UUID of the OSD.

Availability
============

:program:`ceph-volume` is part of Ceph, a massively scalable, open-source,
distributed storage system. Please refer to the documentation at
http://docs.ceph.com/ for more information.

See also
========

:doc:`ceph-osd <ceph-osd>`\(8),
:doc:`ceph-disk <ceph-disk>`\(8)