.. _ceph-volume-lvm-batch:

``batch``
===========
The subcommand allows the creation of multiple OSDs at the same time given
an input of devices. The ``batch`` subcommand is closely related to
drive-groups. One individual drive group specification translates to a single
``batch`` invocation.

The subcommand is based on :ref:`ceph-volume-lvm-create`, and will use the very
same code path. All ``batch`` does is calculate the appropriate sizes of all
volumes and skip over already created volumes.

All the features that ``ceph-volume lvm create`` supports, like ``dmcrypt``,
preventing ``systemd`` units from starting, and defining bluestore or filestore,
are supported.

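As a minimal sketch (the device paths are assumptions for illustration), an
encrypted bluestore deployment that does not start ``systemd`` units could
look like this::

    $ ceph-volume lvm batch --bluestore --dmcrypt --no-systemd /dev/sdb /dev/sdc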

.. _ceph-volume-lvm-batch_auto:

Automatic sorting of disks
--------------------------
If ``batch`` receives only a single list of data devices and no other options
are passed, ``ceph-volume`` will auto-sort the disks by their rotational
property and use non-rotating disks for ``block.db`` or ``journal``, depending
on the objectstore used. If all devices are to be used for standalone OSDs,
no matter if rotating or solid state, pass ``--no-auto``.
For example, assuming :term:`bluestore` is used and ``--no-auto`` is not passed,
the deprecated behavior would deploy the following, depending on the devices
passed:

#. Devices are all spinning HDDs: 1 OSD is created per device
#. Devices are all SSDs: 2 OSDs are created per device
#. Devices are a mix of HDDs and SSDs: data is placed on the spinning device,
   the ``block.db`` is created on the SSD, as large as possible.

.. note:: Although operations in ``ceph-volume lvm create`` allow usage of
          ``block.wal``, it isn't supported with the ``auto`` behavior.

This default auto-sorting behavior is now DEPRECATED and will be changed in future releases.
Instead, devices are not automatically sorted unless the ``--auto`` option is passed.

It is recommended to make use of the explicit device lists for ``block.db``,
``block.wal`` and ``journal``.

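A minimal sketch of such an invocation, with hypothetical device names, passes
the fast devices explicitly instead of relying on auto-sorting::

    $ ceph-volume lvm batch /dev/sdb /dev/sdc --db-devices /dev/nvme0n1 --wal-devices /dev/nvme1n1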

.. _ceph-volume-lvm-batch_bluestore:

48 | Reporting |
49 | ========= | |
50 | By default ``batch`` will print a report of the computed OSD layout and ask the | |
51 | user to confirm. This can be overridden by passing ``--yes``. | |
1adf2230 | 52 | |
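For illustration only (the device names are assumptions), a non-interactive
deployment that skips the confirmation prompt could look like this::

    $ ceph-volume lvm batch --yes /dev/sdb /dev/sdc /dev/sdd --db-devices /dev/nvme0n1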

If one wants to try out several invocations without being asked to deploy,
``--report`` can be passed. ``ceph-volume`` will exit after printing the report.

Consider the following invocation::

    $ ceph-volume lvm batch --report /dev/sdb /dev/sdc /dev/sdd --db-devices /dev/nvme0n1

This will deploy three OSDs with external ``db`` and ``wal`` volumes on
an NVMe device.

**pretty reporting**
The ``pretty`` report format (the default) would
look like this::

    $ ceph-volume lvm batch --report /dev/sdb /dev/sdc /dev/sdd --db-devices /dev/nvme0n1
    --> passed data devices: 3 physical, 0 LVM
    --> relative data size: 1.0
    --> passed block_db devices: 1 physical, 0 LVM

    Total OSDs: 3

      Type            Path                      LV Size         % of device
    ----------------------------------------------------------------------------------------------------
      data            /dev/sdb                  300.00 GB       100.00%
      block_db        /dev/nvme0n1              66.67 GB        33.33%
    ----------------------------------------------------------------------------------------------------
      data            /dev/sdc                  300.00 GB       100.00%
      block_db        /dev/nvme0n1              66.67 GB        33.33%
    ----------------------------------------------------------------------------------------------------
      data            /dev/sdd                  300.00 GB       100.00%
      block_db        /dev/nvme0n1              66.67 GB        33.33%

**JSON reporting**
Reporting can produce a structured output with ``--format json`` or
``--format json-pretty``::

    $ ceph-volume lvm batch --report --format json-pretty /dev/sdb /dev/sdc /dev/sdd --db-devices /dev/nvme0n1
    --> passed data devices: 3 physical, 0 LVM
    --> relative data size: 1.0
    --> passed block_db devices: 1 physical, 0 LVM
    [
        {
            "block_db": "/dev/nvme0n1",
            "block_db_size": "66.67 GB",
            "data": "/dev/sdb",
            "data_size": "300.00 GB",
            "encryption": "None"
        },
        {
            "block_db": "/dev/nvme0n1",
            "block_db_size": "66.67 GB",
            "data": "/dev/sdc",
            "data_size": "300.00 GB",
            "encryption": "None"
        },
        {
            "block_db": "/dev/nvme0n1",
            "block_db_size": "66.67 GB",
            "data": "/dev/sdd",
            "data_size": "300.00 GB",
            "encryption": "None"
        }
    ]

Sizing
======
When no sizing arguments are passed, `ceph-volume` will derive the sizing from
the passed device lists (or the sorted lists when using the automatic sorting).
`ceph-volume lvm batch` will attempt to fully utilize a device's available capacity.
Relying on automatic sizing is recommended.

If one requires a different sizing policy for wal, db or journal devices,
`ceph-volume` offers implicit and explicit sizing rules.

Implicit sizing
---------------
In scenarios where devices are under-committed, or not all data devices are
currently ready for use (due to a broken disk, for example), one can still rely
on `ceph-volume`'s automatic sizing.
Users can provide hints to `ceph-volume` as to how many data devices should have
their external volumes on a set of fast devices. These options are:

* ``--block-db-slots``
* ``--block-wal-slots``
* ``--journal-slots``

For example, consider an OSD host that is supposed to contain 5 data devices and
one device for wal/db volumes. However, one data device is currently broken and
is being replaced. Instead of calculating the explicit sizes for the wal/db
volume, one can simply call::

    $ ceph-volume lvm batch --report /dev/sdb /dev/sdc /dev/sdd /dev/sde --db-devices /dev/nvme0n1 --block-db-slots 5

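Assuming, purely for illustration, that ``/dev/nvme0n1`` is 1 TB in size,
``--block-db-slots 5`` sizes each ``block.db`` volume at roughly one fifth of
the device (about 200 GB): the four available data devices each get one slot,
and the remaining capacity stays free for the OSD that will be recreated once
the broken disk has been replaced.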

Explicit sizing
---------------
It is also possible to provide explicit sizes to `ceph-volume` via the arguments

* ``--block-db-size``
* ``--block-wal-size``
* ``--journal-size``

`ceph-volume` will try to satisfy the requested sizes given the passed disks. If
this is not possible, no OSDs will be deployed.

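As an illustrative sketch (the device names and the 60 GB figure are
assumptions, not recommendations), requesting a fixed ``block.db`` size could
look like this::

    $ ceph-volume lvm batch --report /dev/sdb /dev/sdc /dev/sdd --db-devices /dev/nvme0n1 --block-db-size 60G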

Idempotency and disk replacements
=================================
`ceph-volume lvm batch` intends to be idempotent, i.e. calling the same command
repeatedly must result in the same outcome. For example calling::

    $ ceph-volume lvm batch --report /dev/sdb /dev/sdc /dev/sdd --db-devices /dev/nvme0n1

will result in three deployed OSDs (if all disks were available). Calling this
command again, you will still end up with three OSDs and `ceph-volume` will exit
with return code 0.

Suppose /dev/sdc goes bad and needs to be replaced. After destroying the OSD and
replacing the hardware, you can again call the same command and `ceph-volume`
will detect that only two out of the three wanted OSDs are set up and re-create
the missing OSD.

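A rough sketch of one possible replacement workflow, assuming for illustration
that the broken /dev/sdc hosted OSD 1 (the exact steps depend on the cluster),
could look like this::

    $ ceph-volume lvm zap --destroy --osd-id 1
    $ ceph osd purge 1 --yes-i-really-mean-it
    # after the hardware has been replaced, re-run the original batch command
    $ ceph-volume lvm batch /dev/sdb /dev/sdc /dev/sdd --db-devices /dev/nvme0n1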

This idempotency notion is tightly coupled to and extensively used by :ref:`drivegroups`.