.. _ceph-volume-lvm-batch:

``batch``
===========
This subcommand allows for multiple OSDs to be created at the same time given
an input of devices. Depending on the device type (spinning drive, or solid
state), the internal engine will decide the best approach to create the OSDs.

This decision abstracts away the many nuances when creating an OSD: how large
should a ``block.db`` be? How can one mix a solid state device with spinning
devices in an efficient way?

The process is similar to :ref:`ceph-volume-lvm-create`, and will do the
preparation and activation at once, following the same workflow for each OSD.
However, if the ``--prepare`` flag is passed then only the prepare step is
taken and the OSDs are not activated.

All the features that ``ceph-volume lvm create`` supports, like ``dmcrypt``,
preventing ``systemd`` units from starting, and defining bluestore or
filestore, are supported. Any fine-grained option that may affect a single OSD
is not supported, for example: specifying where journals should be placed.
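
As a minimal sketch of such calls (the device paths below are placeholders,
and the exact flag names should be verified against
``ceph-volume lvm batch --help`` for your release)::

    # create encrypted OSDs without letting systemd units start them
    $ ceph-volume lvm batch --dmcrypt --no-systemd /dev/sdb /dev/sdc

    # only run the prepare step; the OSDs are not activated
    $ ceph-volume lvm batch --prepare /dev/sdb /dev/sdc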

.. _ceph-volume-lvm-batch_bluestore:

``bluestore``
-------------
The :term:`bluestore` objectstore (the default) is used when creating multiple OSDs
with the ``batch`` sub-command. It allows a few different scenarios depending
on the input of devices:

#. Devices are all spinning HDDs: 1 OSD is created per device
#. Devices are all solid state drives (SSDs): 2 OSDs are created per device
#. Devices are a mix of HDDs and SSDs: data is placed on the spinning device,
   the ``block.db`` is created on the SSD, as large as possible.

.. note:: Although operations in ``ceph-volume lvm create`` allow usage of
          ``block.wal``, it isn't supported with the ``batch`` sub-command.
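
As an illustrative sketch of the mixed scenario (the device names here are
hypothetical), passing spinning and solid state devices together lets the
engine place the data on the HDDs and carve the ``block.db`` volumes out of
the SSD::

    # /dev/sda and /dev/sdb are assumed to be HDDs, /dev/nvme0n1 an SSD;
    # data goes to the HDDs, block.db volumes are created on the SSD
    $ ceph-volume lvm batch --bluestore /dev/sda /dev/sdb /dev/nvme0n1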

.. _ceph-volume-lvm-batch_filestore:

``filestore``
-------------
The :term:`filestore` objectstore can be used when creating multiple OSDs
with the ``batch`` sub-command. It allows two different scenarios depending
on the input of devices:

#. Devices are all the same type (for example all spinning HDDs or all SSDs):
   1 OSD is created per device, collocating the journal on the same device.
#. Devices are a mix of HDDs and SSDs: data is placed on the spinning device,
   while the journal is created on the SSD using the sizing options from
   ``ceph.conf``, falling back to the default journal size of 5 GB.
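
As a sketch of how the journal size can be influenced (the option shown is the
usual ``osd journal size`` setting, expressed in megabytes, and the device
paths are placeholders), a ``ceph.conf`` entry and a mixed batch call could
look like this::

    # ceph.conf: size journals at 10 GB instead of the 5 GB default
    [osd]
    osd journal size = 10240

    # data on the HDDs, journals carved out of the SSD
    $ ceph-volume lvm batch --filestore /dev/sda /dev/sdb /dev/nvme0n1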

When a mix of solid and spinning devices is used, ``ceph-volume`` will try to
detect existing volume groups on the solid devices. If a VG is found, it will
try to create the logical volume from there, and will raise an error if there
is not enough space.

If a raw solid device is used along with a device that has a volume group in
addition to some spinning devices, ``ceph-volume`` will try to extend the
existing volume group and then create a logical volume.
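
To see what ``ceph-volume`` would find, the standard LVM tools can be used to
inspect the solid state device before running the batch call (a hedged sketch;
the device path is a placeholder)::

    # show the physical volume, its volume group, and the free space left in it
    $ pvs -o pv_name,vg_name,vg_free /dev/nvme0n1

    # list all existing volume groups and their free space
    $ vgs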

.. _ceph-volume-lvm-batch_report:

Reporting
=========
When the tool is called to create OSDs, it will prompt the user to continue if
the pre-computed output is acceptable. This output is useful for understanding
what will happen with the devices that were passed in. Once the confirmation
is accepted, the process continues.

Although prompts are good for understanding outcomes, it is incredibly useful
to try different inputs in order to find the best possible layout. With the
``--report`` flag, one can prevent any actual operations and just verify the
outcomes from the given inputs.

**pretty reporting**
For two spinning devices, this is how the ``pretty`` report (the default) would
look::

    $ ceph-volume lvm batch --report /dev/sdb /dev/sdc

    Total OSDs: 2

      Type            Path                      LV Size         % of device
    --------------------------------------------------------------------------------
      [data]          /dev/sdb                  10.74 GB        100%
    --------------------------------------------------------------------------------
      [data]          /dev/sdc                  10.74 GB        100%

**JSON reporting**
Reporting can produce a richer output with ``JSON``, which gives a few more
hints on sizing. This output might be better suited for other tooling that
needs to consume and transform the information.

For two spinning devices, this is how the ``JSON`` report would look::

    $ ceph-volume lvm batch --report --format=json /dev/sdb /dev/sdc
    {
        "osds": [
            {
                "block.db": {},
                "data": {
                    "human_readable_size": "10.74 GB",
                    "parts": 1,
                    "path": "/dev/sdb",
                    "percentage": 100,
                    "size": 11534336000.0
                }
            },
            {
                "block.db": {},
                "data": {
                    "human_readable_size": "10.74 GB",
                    "parts": 1,
                    "path": "/dev/sdc",
                    "percentage": 100,
                    "size": 11534336000.0
                }
            }
        ],
        "vgs": [
            {
                "devices": [
                    "/dev/sdb"
                ],
                "parts": 1
            },
            {
                "devices": [
                    "/dev/sdc"
                ],
                "parts": 1
            }
        ]
    }
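
Because the output is plain ``JSON``, it can be piped into other tooling. As an
example (assuming ``jq`` is installed; it is not part of ``ceph-volume``
itself), the data path and size of each proposed OSD can be extracted like
this::

    $ ceph-volume lvm batch --report --format=json /dev/sdb /dev/sdc | \
          jq -r '.osds[].data | "\(.path) \(.human_readable_size)"'
    /dev/sdb 10.74 GB
    /dev/sdc 10.74 GB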