:orphan:

=======================================================
ceph-volume -- Ceph OSD deployment and inspection tool
=======================================================

.. program:: ceph-volume

Synopsis
========

| **ceph-volume** [-h] [--cluster CLUSTER] [--log-level LOG_LEVEL]
| [--log-path LOG_PATH]

| **ceph-volume** **inventory**

| **ceph-volume** **lvm** [ *trigger* | *create* | *activate* | *prepare* |
| *zap* | *list* | *batch* | *new-wal* | *new-db* | *migrate* ]

| **ceph-volume** **simple** [ *trigger* | *scan* | *activate* ]

Description
===========

:program:`ceph-volume` is a single-purpose command line tool to deploy logical
volumes as OSDs, trying to maintain a similar API to ``ceph-disk`` when
preparing, activating, and creating OSDs.

It deviates from ``ceph-disk`` by not interacting with, or relying on, the
udev rules that come installed for Ceph. Those rules allow automatic detection
of previously set up devices that are in turn fed into ``ceph-disk`` to
activate them.

Commands
========

inventory
---------

This subcommand provides information about a host's physical disk inventory
and reports metadata about these disks. This metadata includes disk-specific
data items (such as model, size, and whether the drive is rotational or solid
state) as well as Ceph-specific items, such as whether the device is available
for use with Ceph or whether logical volumes are present.

Examples::

    ceph-volume inventory
    ceph-volume inventory /dev/sda
    ceph-volume inventory --format json-pretty

Optional arguments:

* [-h, --help] show the help message and exit
* [--format] report format; valid values are ``plain`` (default), ``json``
  and ``json-pretty``

lvm
---

By making use of LVM tags, the ``lvm`` sub-command is able to store and later
re-discover and query devices associated with OSDs so that they can later be
activated.

Subcommands:

**batch**
Creates OSDs from a list of devices using a ``filestore``
or ``bluestore`` (default) setup. It will create all necessary volume groups
and logical volumes required to have a working OSD.

Example usage with three devices::

    ceph-volume lvm batch --bluestore /dev/sda /dev/sdb /dev/sdc

Optional arguments:

* [-h, --help] show the help message and exit
* [--bluestore] Use the bluestore objectstore (default)
* [--filestore] Use the filestore objectstore
* [--yes] Skip the report and prompt to continue provisioning
* [--prepare] Only prepare OSDs, do not activate
* [--dmcrypt] Enable encryption for the underlying OSD devices
* [--crush-device-class] Define a CRUSH device class to assign the OSD to
* [--no-systemd] Do not enable or create any systemd units
* [--osds-per-device] Provision more than 1 (the default) OSD per device
* [--report] Report what the potential outcome would be for the current input
  (requires devices to be passed in)
* [--format] Output format when reporting (used along with ``--report``); can
  be one of ``pretty`` (default) or ``json``
* [--block-db-size] Set (or override) the ``bluestore_block_db_size`` value,
  in bytes
* [--journal-size] Override the ``osd_journal_size`` value, in megabytes

Required positional arguments:

* <DEVICE> Full path to a raw device, like ``/dev/sda``. Multiple
  ``<DEVICE>`` paths can be passed in.
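
As a rough sizing sketch (plain shell, not part of ceph-volume itself), the
number of OSDs that ``batch`` provisions is the number of devices multiplied
by the ``--osds-per-device`` value::

    # OSD count = number of devices * --osds-per-device (illustration only).
    devices=3
    osds_per_device=2
    echo $((devices * osds_per_device))

With three devices and two OSDs per device, this prints ``6``.
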

**activate**
Enables a systemd unit that persists the OSD ID and its UUID (also called
``fsid`` in Ceph CLI tools), so that at boot time it can understand what OSD
is enabled and needs to be mounted.

Usage::

    ceph-volume lvm activate --bluestore <osd id> <osd fsid>

Optional arguments:

* [-h, --help] show the help message and exit
* [--auto-detect-objectstore] Automatically detect the objectstore by
  inspecting the OSD
* [--bluestore] bluestore objectstore (default)
* [--filestore] filestore objectstore
* [--all] Activate all OSDs found in the system
* [--no-systemd] Skip creating and enabling systemd units and starting OSD
  services

Multiple OSDs can be activated at once by using the (idempotent) ``--all``
flag::

    ceph-volume lvm activate --all

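
The name of the enabled unit follows, as an assumption based on ceph-volume's
systemd integration, a ``ceph-volume@<sub-command>-<osd id>-<osd fsid>``
template; a plain-shell sketch of how such a unit name is assembled (verify
the actual name with ``systemctl``)::

    # Assemble the assumed unit name template (illustration, not ceph-volume code).
    osd_id=0
    osd_fsid=8715BEB4-15C5-49DE-BA6F-401086EC7B41
    echo "ceph-volume@lvm-${osd_id}-${osd_fsid}"
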

**prepare**
Prepares a logical volume to be used as an OSD and journal using a
``filestore`` or ``bluestore`` (default) setup. It will not create or modify
the logical volumes except for adding extra metadata.

Usage::

    ceph-volume lvm prepare --filestore --data <data lv> --journal <journal device>

Optional arguments:

* [-h, --help] show the help message and exit
* [--journal JOURNAL] A logical group name, path to a logical volume, or path
  to a device
* [--bluestore] Use the bluestore objectstore (default)
* [--block.wal] Path to a bluestore block.wal logical volume or partition
* [--block.db] Path to a bluestore block.db logical volume or partition
* [--filestore] Use the filestore objectstore
* [--dmcrypt] Enable encryption for the underlying OSD devices
* [--osd-id OSD_ID] Reuse an existing OSD id
* [--osd-fsid OSD_FSID] Reuse an existing OSD fsid
* [--crush-device-class] Define a CRUSH device class to assign the OSD to

Required arguments:

* --data A logical group name or a path to a logical volume

To encrypt an OSD, add the ``--dmcrypt`` flag when preparing (also supported
in the ``create`` sub-command).

**create**
Wraps the two-step process of provisioning a new OSD (calling ``prepare``
first and then ``activate``) into a single step. The reason to prefer
``prepare`` and then ``activate`` is to introduce new OSDs into a cluster
gradually, avoiding large amounts of data being rebalanced.

The single-call process unifies exactly what ``prepare`` and ``activate`` do,
with the convenience of doing it all at once. Flags and general usage are
equivalent to those of the ``prepare`` and ``activate`` subcommands.

**trigger**
This subcommand is not meant to be used directly: it is used by systemd to
proxy input to ``ceph-volume lvm activate`` by parsing the input from systemd,
detecting the UUID and ID associated with an OSD.

Usage::

    ceph-volume lvm trigger <SYSTEMD-DATA>

The systemd "data" is expected to be in the format of::

    <OSD ID>-<OSD UUID>

The logical volumes associated with the OSD need to have been prepared
previously, so that all needed tags and metadata exist.

Positional arguments:

* <SYSTEMD_DATA> Data from a systemd unit containing the ID and UUID of the
  OSD.
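
A minimal shell sketch of how ``<OSD ID>-<OSD UUID>`` data can be split apart
(the ID is everything before the first dash, the UUID everything after it);
this mirrors the format described above, not ceph-volume's internal code::

    # Split "<OSD ID>-<OSD UUID>" on the first dash (illustration only).
    data="0-8715BEB4-15C5-49DE-BA6F-401086EC7B41"
    osd_id="${data%%-*}"    # everything before the first dash
    osd_uuid="${data#*-}"   # everything after the first dash
    echo "$osd_id $osd_uuid"
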

**list**
List devices or logical volumes associated with Ceph. An association is
determined when a device has information relating to an OSD. This is verified
by querying LVM's metadata and correlating it with devices.

The logical volumes associated with the OSD need to have been prepared
previously by ceph-volume so that all needed tags and metadata exist.

Usage::

    ceph-volume lvm list

List a particular device, reporting all metadata about it::

    ceph-volume lvm list /dev/sda1

List a logical volume, along with all its metadata (``vg`` is the volume
group name, and ``lv`` the logical volume name)::

    ceph-volume lvm list {vg/lv}

Positional arguments:

* <DEVICE> Either in the form of ``vg/lv`` for logical volumes,
  ``/path/to/sda1`` or ``/path/to/sda`` for regular devices.

**zap**
Zaps the given logical volume or partition. If given a path to a logical
volume, it must be in the format of ``vg/lv``. Any file systems present
on the given lv or partition will be removed and all data will be purged.

However, the lv or partition will be kept intact.

Usage, for logical volumes::

    ceph-volume lvm zap {vg/lv}

Usage, for partitions::

    ceph-volume lvm zap /dev/sdc1

For full removal of the device use the ``--destroy`` flag (allowed for all
device types)::

    ceph-volume lvm zap --destroy /dev/sdc1

Multiple devices can be removed by specifying the OSD ID and/or the OSD FSID::

    ceph-volume lvm zap --destroy --osd-id 1
    ceph-volume lvm zap --destroy --osd-id 1 --osd-fsid C9605912-8395-4D76-AFC0-7DFDAC315D59

Positional arguments:

* <DEVICE> Either in the form of ``vg/lv`` for logical volumes,
  ``/path/to/sda1`` or ``/path/to/sda`` for regular devices.


**new-wal**
Attaches the given logical volume to the given OSD as a WAL. The logical
volume name format is ``vg/lv``. Fails if the OSD already has a WAL attached.

Usage::

    ceph-volume lvm new-wal --osd-id OSD_ID --osd-fsid OSD_FSID --target TARGET_LV

Optional arguments:

* [-h, --help] show the help message and exit

Required arguments:

* --osd-id OSD_ID OSD id to attach the new WAL to
* --osd-fsid OSD_FSID OSD fsid to attach the new WAL to
* --target TARGET_LV logical volume name to attach as WAL

**new-db**
Attaches the given logical volume to the given OSD as a DB. The logical
volume name format is ``vg/lv``. Fails if the OSD already has a DB attached.

Usage::

    ceph-volume lvm new-db --osd-id OSD_ID --osd-fsid OSD_FSID --target TARGET_LV

Optional arguments:

* [-h, --help] show the help message and exit

Required arguments:

* --osd-id OSD_ID OSD id to attach the new DB to
* --osd-fsid OSD_FSID OSD fsid to attach the new DB to
* --target TARGET_LV logical volume name to attach as DB

**migrate**

Moves BlueFS data from the source volume(s) to the target one; source volumes
(except the main one, i.e. the data or block volume) are removed on success.
Only LVM volumes are permitted as the target, either one that is already
attached or a new one; in the latter case it is attached to the OSD, replacing
one of the source devices. The following replacement rules apply (in order of
precedence, stopping on the first match):

- if the source list has a DB volume, the target device replaces it.
- if the source list has a WAL volume, the target device replaces it.
- if the source list has only a slow volume, the operation is not permitted
  and requires explicit allocation via the ``new-db``/``new-wal`` commands.
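
The precedence above can be sketched as a tiny decision helper; this is plain
shell illustrating the stated rules, not part of ceph-volume::

    # Decision sketch for the replacement rules above (illustration only).
    decide_replacement() {
        case " $* " in
            *" db "*)  echo "target replaces db" ;;
            *" wal "*) echo "target replaces wal" ;;
            *)         echo "not permitted: attach via new-db/new-wal first" ;;
        esac
    }
    decide_replacement data wal

With a source list of ``data wal``, the helper reports that the target
replaces the WAL volume.
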

Usage::

    ceph-volume lvm migrate --osd-id OSD_ID --osd-fsid OSD_FSID --target TARGET_LV --from {data|db|wal} [{data|db|wal} ...]

Optional arguments:

* [-h, --help] show the help message and exit

Required arguments:

* --osd-id OSD_ID OSD id to perform the migration on
* --osd-fsid OSD_FSID OSD fsid to perform the migration on
* --target TARGET_LV logical volume to move data to
* --from TYPE_LIST list of source device type names, e.g. ``--from db wal``

simple
------

Scan legacy OSD directories or data devices that may have been created by
ceph-disk, or manually.

Subcommands:

**activate**
Enables a systemd unit that persists the OSD ID and its UUID (also called
``fsid`` in Ceph CLI tools), so that at boot time it can understand what OSD
is enabled and needs to be mounted, while reading information that was
previously created and persisted at ``/etc/ceph/osd/`` in JSON format.

Usage::

    ceph-volume simple activate --bluestore <osd id> <osd fsid>

Optional arguments:

* [-h, --help] show the help message and exit
* [--bluestore] bluestore objectstore (default)
* [--filestore] filestore objectstore

Note: It requires a matching JSON file with the following format::

    /etc/ceph/osd/<osd id>-<osd fsid>.json
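
A small shell sketch of how that path is assembled from an OSD's id and fsid
(an illustration of the naming scheme above, not ceph-volume code)::

    # Assemble the expected JSON path from the id/fsid naming scheme above.
    osd_id=0
    osd_fsid=8715BEB4-15C5-49DE-BA6F-401086EC7B41
    echo "/etc/ceph/osd/${osd_id}-${osd_fsid}.json"
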

**scan**
Scan a running OSD or a data device for OSD metadata that can later be used to
activate and manage the OSD with ceph-volume. The scan method will create a
JSON file with the required information plus anything found in the OSD
directory as well.

Optionally, the JSON blob can be sent to stdout for further inspection.

Usage on all running OSDs::

    ceph-volume simple scan

Usage on data devices::

    ceph-volume simple scan <data device>

Running OSD directories::

    ceph-volume simple scan <path to osd dir>

Optional arguments:

* [-h, --help] show the help message and exit
* [--stdout] Send the JSON blob to stdout
* [--force] If the JSON file exists at the destination, overwrite it

Optional positional arguments:

* <DATA DEVICE or OSD DIR> Actual data partition or a path to the running OSD

**trigger**
This subcommand is not meant to be used directly: it is used by systemd to
proxy input to ``ceph-volume simple activate`` by parsing the input from
systemd, detecting the UUID and ID associated with an OSD.

Usage::

    ceph-volume simple trigger <SYSTEMD-DATA>

The systemd "data" is expected to be in the format of::

    <OSD ID>-<OSD UUID>

The JSON file associated with the OSD needs to have been persisted previously
by a scan (or manually), so that all needed metadata can be used.

Positional arguments:

* <SYSTEMD_DATA> Data from a systemd unit containing the ID and UUID of the
  OSD.

Availability
============

:program:`ceph-volume` is part of Ceph, a massively scalable, open-source,
distributed storage system. Please refer to the documentation at
http://docs.ceph.com/ for more information.


See also
========

:doc:`ceph-osd <ceph-osd>`\(8),