.. _ceph-volume-lvm-batch:

``batch``
===========
The subcommand allows one to create multiple OSDs at the same time given
an input of devices. The ``batch`` subcommand is closely related to
drive-groups. One individual drive group specification translates to a single
``batch`` invocation.

The subcommand is based on :ref:`ceph-volume-lvm-create`, and will use the very
same code path. All ``batch`` does is calculate the appropriate sizes of all
volumes and skip over already created volumes.

All the features that ``ceph-volume lvm create`` supports, like ``dmcrypt``,
preventing ``systemd`` units from starting, and defining bluestore or filestore,
are supported.
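
For example, a batch call that creates encrypted bluestore OSDs without
immediately starting their systemd units might look like this (the device
paths are illustrative)::

    $ ceph-volume lvm batch --bluestore --dmcrypt --no-systemd /dev/sdb /dev/sdc
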

.. _ceph-volume-lvm-batch_auto:

Automatic sorting of disks
--------------------------
If ``batch`` receives only a single list of data devices and no other options
are passed, ``ceph-volume`` will auto-sort the disks by their rotational
property and use the non-rotating disks for ``block.db`` or ``journal``,
depending on the objectstore used. If all devices are to be used for standalone
OSDs, no matter if rotating or solid state, pass ``--no-auto``.
For example, assuming :term:`bluestore` is used and ``--no-auto`` is not passed,
the deprecated behavior would deploy the following, depending on the devices
passed:

#. Devices are all spinning HDDs: 1 OSD is created per device
#. Devices are all SSDs: 2 OSDs are created per device
#. Devices are a mix of HDDs and SSDs: data is placed on the spinning device,
   the ``block.db`` is created on the SSD, as large as possible.

.. note:: Although operations in ``ceph-volume lvm create`` allow usage of
          ``block.wal``, it isn't supported with the ``auto`` behavior.

This default auto-sorting behavior is now DEPRECATED and will be changed in
future releases. Instead, devices are not automatically sorted unless the
``--auto`` option is passed.

It is recommended to make use of the explicit device lists for ``block.db``,
``block.wal`` and ``journal``.

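
For example, to place ``db`` volumes on one fast device and ``wal`` volumes on
another, the device lists can be given explicitly (the NVMe paths below are
only placeholders)::

    $ ceph-volume lvm batch /dev/sdb /dev/sdc --db-devices /dev/nvme0n1 --wal-devices /dev/nvme1n1
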
.. _ceph-volume-lvm-batch_bluestore:

Reporting
=========
By default ``batch`` will print a report of the computed OSD layout and ask the
user to confirm. This can be overridden by passing ``--yes``.
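
To skip the interactive confirmation and deploy immediately, one might run,
for example::

    $ ceph-volume lvm batch --yes /dev/sdb /dev/sdc /dev/sdd --db-devices /dev/nvme0n1
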

If one wants to try out several invocations without being asked to deploy,
``--report`` can be passed. ``ceph-volume`` will exit after printing the report.

Consider the following invocation::

    $ ceph-volume lvm batch --report /dev/sdb /dev/sdc /dev/sdd --db-devices /dev/nvme0n1

This will deploy three OSDs with external ``db`` and ``wal`` volumes on
an NVMe device.

**pretty reporting**

The ``pretty`` report format (the default) would
look like this::

    $ ceph-volume lvm batch --report /dev/sdb /dev/sdc /dev/sdd --db-devices /dev/nvme0n1
    --> passed data devices: 3 physical, 0 LVM
    --> relative data size: 1.0
    --> passed block_db devices: 1 physical, 0 LVM

    Total OSDs: 3

      Type            Path                          LV Size         % of device
    ----------------------------------------------------------------------------------------------------
      data            /dev/sdb                      300.00 GB       100.00%
      block_db        /dev/nvme0n1                  66.67 GB        33.33%
    ----------------------------------------------------------------------------------------------------
      data            /dev/sdc                      300.00 GB       100.00%
      block_db        /dev/nvme0n1                  66.67 GB        33.33%
    ----------------------------------------------------------------------------------------------------
      data            /dev/sdd                      300.00 GB       100.00%
      block_db        /dev/nvme0n1                  66.67 GB        33.33%

**JSON reporting**

Reporting can produce a structured output with ``--format json`` or
``--format json-pretty``::

    $ ceph-volume lvm batch --report --format json-pretty /dev/sdb /dev/sdc /dev/sdd --db-devices /dev/nvme0n1
    --> passed data devices: 3 physical, 0 LVM
    --> relative data size: 1.0
    --> passed block_db devices: 1 physical, 0 LVM
    [
        {
            "block_db": "/dev/nvme0n1",
            "block_db_size": "66.67 GB",
            "data": "/dev/sdb",
            "data_size": "300.00 GB",
            "encryption": "None"
        },
        {
            "block_db": "/dev/nvme0n1",
            "block_db_size": "66.67 GB",
            "data": "/dev/sdc",
            "data_size": "300.00 GB",
            "encryption": "None"
        },
        {
            "block_db": "/dev/nvme0n1",
            "block_db_size": "66.67 GB",
            "data": "/dev/sdd",
            "data_size": "300.00 GB",
            "encryption": "None"
        }
    ]

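
The compact ``--format json`` variant is convenient for scripting. As a small
sketch, assuming ``jq`` is installed and that the informational ``-->`` lines
go to stderr rather than stdout, the data device paths could be extracted
like this::

    $ ceph-volume lvm batch --report --format json /dev/sdb /dev/sdc /dev/sdd --db-devices /dev/nvme0n1 | jq -r '.[].data'
    /dev/sdb
    /dev/sdc
    /dev/sdd
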
Sizing
======
When no sizing arguments are passed, ``ceph-volume`` will derive the sizing from
the passed device lists (or the sorted lists when using the automatic sorting).
``ceph-volume batch`` will attempt to fully utilize a device's available capacity.
Relying on automatic sizing is recommended.

If one requires a different sizing policy for wal, db or journal devices,
``ceph-volume`` offers implicit and explicit sizing rules.

Implicit sizing
---------------
In scenarios where devices are under-committed or not all data devices are
currently ready for use (due to a broken disk, for example), one can still rely
on the automatic sizing of ``ceph-volume``.
Users can provide hints to ``ceph-volume`` as to how many data devices should have
their external volumes on a set of fast devices. These options are:

* ``--block-db-slots``
* ``--block-wal-slots``
* ``--journal-slots``

For example, consider an OSD host that is supposed to contain 5 data devices and
one device for wal/db volumes. However, one data device is currently broken and
is being replaced. Instead of calculating the explicit sizes for the wal/db
volume, one can simply call::

    $ ceph-volume lvm batch --report /dev/sdb /dev/sdc /dev/sdd /dev/sde --db-devices /dev/nvme0n1 --block-db-slots 5

Explicit sizing
---------------
It is also possible to provide explicit sizes to ``ceph-volume`` via the arguments:

* ``--block-db-size``
* ``--block-wal-size``
* ``--journal-size``

``ceph-volume`` will try to satisfy the requested sizes given the passed disks. If
this is not possible, no OSDs will be deployed.

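
For example, to cap each ``db`` volume at roughly 60 GB instead of letting
``ceph-volume`` spread the full fast-device capacity across the OSDs, one could
run something like the following (the size value and its format are
illustrative; see ``ceph-volume lvm batch --help`` for the accepted forms)::

    $ ceph-volume lvm batch --report /dev/sdb /dev/sdc /dev/sdd --db-devices /dev/nvme0n1 --block-db-size 60G
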
Idempotency and disk replacements
=================================
``ceph-volume lvm batch`` intends to be idempotent, i.e. calling the same command
repeatedly must result in the same outcome. For example, calling::

    $ ceph-volume lvm batch --report /dev/sdb /dev/sdc /dev/sdd --db-devices /dev/nvme0n1

will result in three deployed OSDs (if all disks were available). Calling this
command again, you will still end up with three OSDs and ``ceph-volume`` will exit
with return code 0.

Suppose /dev/sdc goes bad and needs to be replaced. After destroying the OSD and
replacing the hardware, you can again call the same command and ``ceph-volume``
will detect that only two out of the three wanted OSDs are set up and re-create
the missing OSD.

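
A possible replacement sequence, sketched here with a hypothetical OSD id of 2
for the OSD that lived on /dev/sdc, could look like this::

    $ ceph osd purge 2 --yes-i-really-mean-it
    $ ceph-volume lvm zap --osd-id 2 --destroy
    # swap the physical disk, then re-run the original batch command
    $ ceph-volume lvm batch /dev/sdb /dev/sdc /dev/sdd --db-devices /dev/nvme0n1
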
This idempotency notion is tightly coupled to and extensively used by :ref:`drivegroups`.