.. _ceph-volume-lvm-batch:

``batch``
===========
This subcommand allows for multiple OSDs to be created at the same time given
an input of devices. Depending on the device type (spinning drive, or solid
state), the internal engine will decide the best approach to create the OSDs.

This decision abstracts away the many nuances when creating an OSD: how large
should a ``block.db`` be? How can one mix a solid state device with spinning
devices in an efficient way?

The process is similar to :ref:`ceph-volume-lvm-create`, and will do the
preparation and activation at once, following the same workflow for each OSD.

All the features that ``ceph-volume lvm create`` supports, like ``dmcrypt``,
preventing ``systemd`` units from starting, and defining bluestore or
filestore, are supported. Any fine-grained option that may affect a single OSD
is not supported; for example, specifying where journals should be placed.
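
For example, a minimal sketch combining a couple of these flags (assuming the
``--bluestore`` and ``--dmcrypt`` flags are available in the installed
``ceph-volume`` version, with placeholder device paths)::

    $ ceph-volume lvm batch --bluestore --dmcrypt /dev/sdb /dev/sdc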


.. _ceph-volume-lvm-batch_bluestore:

``bluestore``
-------------
The :term:`bluestore` objectstore (the default) is used when creating multiple
OSDs with the ``batch`` sub-command. It allows a few different scenarios
depending on the input of devices (see the example invocation after this list):

#. Devices are all spinning HDDs: 1 OSD is created per device
#. Devices are all SSDs: 2 OSDs are created per device
#. Devices are a mix of HDDs and SSDs: data is placed on the spinning device,
   the ``block.db`` is created on the SSD, as large as possible.
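
For instance, a sketch of the mixed HDD/SSD scenario (device names are
placeholders; ``/dev/sdb`` and ``/dev/sdc`` are assumed to be spinning drives
and ``/dev/nvme0n1`` a solid state device) simply passes all devices and lets
the engine place data and ``block.db``::

    $ ceph-volume lvm batch /dev/sdb /dev/sdc /dev/nvme0n1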


.. note:: Although operations in ``ceph-volume lvm create`` allow usage of
          ``block.wal``, it isn't supported with the ``batch`` sub-command.


.. _ceph-volume-lvm-batch_filestore:

``filestore``
-------------
The :term:`filestore` objectstore can be used when creating multiple OSDs
with the ``batch`` sub-command. It allows two different scenarios depending
on the input of devices:

#. Devices are all the same type (for example all spinning HDDs or all SSDs):
   1 OSD is created per device, collocating the journal on the same device.
#. Devices are a mix of HDDs and SSDs: data is placed on the spinning device,
   while the journal is created on the SSD using the sizing options from
   ``ceph.conf``, falling back to the default journal size of 5 GB (see the
   sizing example after this list).
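
A minimal sketch of the relevant ``ceph.conf`` snippet, assuming the standard
``osd journal size`` option (expressed in megabytes) is what should drive the
journal sizing on this cluster::

    [osd]
    # 10240 MB (10 GB) journals instead of the 5 GB default
    osd journal size = 10240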


When a mix of solid and spinning devices is used, ``ceph-volume`` will try to
detect existing volume groups on the solid devices. If a VG is found, it will
try to create the logical volume from there, raising an error if there is not
enough space.

If a raw solid device is used along with a device that has a volume group in
addition to some spinning devices, ``ceph-volume`` will try to extend the
existing volume group and then create a logical volume.
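
To anticipate what ``ceph-volume`` will find, the existing volume groups and
their free space on the solid state device can be inspected with the standard
LVM tooling (this is plain LVM usage, not a ``ceph-volume`` feature)::

    $ sudo vgs -o vg_name,vg_size,vg_free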

.. _ceph-volume-lvm-batch_report:

Reporting
=========
When a call is received to create OSDs, the tool will show the pre-computed
outcome for the given devices and prompt the user to continue if that outcome
is acceptable. Once confirmation is accepted, the process continues.

Although prompts are good for understanding outcomes, it is often useful to
try different inputs to find the best possible layout. With the ``--report``
flag, one can prevent any actual operations and just verify the outcome for
a given set of inputs.

**pretty reporting**
For two spinning devices, this is how the ``pretty`` report (the default) would
look::

    $ ceph-volume lvm batch --report /dev/sdb /dev/sdc

    Total OSDs: 2

      Type            Path                      LV Size         % of device
    --------------------------------------------------------------------------------
      [data]          /dev/sdb                  10.74 GB        100%
    --------------------------------------------------------------------------------
      [data]          /dev/sdc                  10.74 GB        100%


**JSON reporting**
Reporting can produce a richer output with ``JSON``, which gives a few more
hints on sizing. This output might be better suited for other tooling that
needs to consume and transform the information.

For two spinning devices, this is how the ``JSON`` report would look::

    $ ceph-volume lvm batch --report --format=json /dev/sdb /dev/sdc
    {
        "osds": [
            {
                "block.db": {},
                "data": {
                    "human_readable_size": "10.74 GB",
                    "parts": 1,
                    "path": "/dev/sdb",
                    "percentage": 100,
                    "size": 11534336000.0
                }
            },
            {
                "block.db": {},
                "data": {
                    "human_readable_size": "10.74 GB",
                    "parts": 1,
                    "path": "/dev/sdc",
                    "percentage": 100,
                    "size": 11534336000.0
                }
            }
        ],
        "vgs": [
            {
                "devices": [
                    "/dev/sdb"
                ],
                "parts": 1
            },
            {
                "devices": [
                    "/dev/sdc"
                ],
                "parts": 1
            }
        ]
    }
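
As a hedged illustration of how other tooling could consume this report, the
JSON output can be parsed with nothing more than the Python standard library
(the summary printed below is just an example, not part of ``ceph-volume``)::

    $ ceph-volume lvm batch --report --format=json /dev/sdb /dev/sdc > report.json
    $ python3 -c "import json; r = json.load(open('report.json')); print(len(r['osds']), 'OSDs planned')"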