.. _ceph-volume-lvm-batch:

``batch``
===========
This subcommand allows for multiple OSDs to be created at the same time given
an input of devices. Depending on the device type (spinning drive, or solid
state), the internal engine will decide the best approach to create the OSDs.

This decision abstracts away the many nuances when creating an OSD: how large
should a ``block.db`` be? How can one mix a solid state device with spinning
devices in an efficient way?

The process is similar to :ref:`ceph-volume-lvm-create`, and will do the
preparation and activation at once, following the same workflow for each OSD.
However, if the ``--prepare`` flag is passed, only the prepare step is taken
and the OSDs are not activated.
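
For example, a minimal invocation might look like this (the device names are
placeholders, adjust them to the devices on your system)::

    $ ceph-volume lvm batch /dev/sdb /dev/sdc

    # prepare only, without activating the new OSDs
    $ ceph-volume lvm batch --prepare /dev/sdb /dev/sdc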

All the features that ``ceph-volume lvm create`` supports, like ``dmcrypt``,
preventing ``systemd`` units from starting, and defining bluestore or
filestore, are supported. Any fine-grained option that may affect a single
OSD is not supported, for example: specifying where journals should be
placed.
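
As a sketch, assuming the ``batch`` flags mirror those of ``ceph-volume lvm
create`` (``--dmcrypt``, ``--filestore``, ``--no-systemd``), such a call
might look like::

    $ ceph-volume lvm batch --dmcrypt --no-systemd /dev/sdb /dev/sdc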

.. _ceph-volume-lvm-batch_bluestore:

``bluestore``
-------------
The :term:`bluestore` objectstore (the default) is used when creating multiple
OSDs with the ``batch`` sub-command. It allows a few different scenarios
depending on the input of devices:

#. Devices are all spinning HDDs: 1 OSD is created per device
#. Devices are all solid state drives (SSDs): 2 OSDs are created per device
#. Devices are a mix of HDDs and SSDs: data is placed on the spinning device,
   the ``block.db`` is created on the SSD, as large as possible (see the
   example below).

.. note:: Although ``ceph-volume lvm create`` allows the use of ``block.wal``,
          it isn't supported with the ``batch`` sub-command.
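
For example, with a mix of two spinning drives and one solid state device
(hypothetical device names), the data would be placed on the HDDs and the
``block.db`` volumes, as large as possible, on the SSD::

    $ ceph-volume lvm batch /dev/sda /dev/sdb /dev/nvme0n1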

.. _ceph-volume-lvm-batch_filestore:

``filestore``
-------------
The :term:`filestore` objectstore can be used when creating multiple OSDs
with the ``batch`` sub-command. It allows two different scenarios depending
on the input of devices:

#. Devices are all the same type (for example all spinning HDDs or all SSDs):
   1 OSD is created per device, collocating the journal on the same device.
#. Devices are a mix of HDDs and SSDs: data is placed on the spinning device,
   while the journal is created on the SSD using the sizing options from
   ``ceph.conf``, falling back to the default journal size of 5 GB (see the
   example below).
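
A sketch of the second scenario, assuming a ``--filestore`` flag analogous to
the one in ``ceph-volume lvm create`` and hypothetical device names (the SSD
would then hold the journals)::

    $ ceph-volume lvm batch --filestore /dev/sda /dev/sdb /dev/nvme0n1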

When a mix of solid and spinning devices is used, ``ceph-volume`` will try to
detect existing volume groups on the solid devices. If a VG is found, it will
try to create the logical volume from there, otherwise raising an error if
space is insufficient.

If a raw solid state device is used along with a device that already has a
volume group, in addition to some spinning devices, ``ceph-volume`` will try
to extend the existing volume group and then create a logical volume.
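
To see what ``ceph-volume`` would find, the standard LVM tools can be used to
inspect the solid state device beforehand (a quick sanity check, not part of
``ceph-volume`` itself)::

    $ pvs
    $ vgs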

.. _ceph-volume-lvm-batch_report:

Reporting
=========
When a call is received to create OSDs, the tool will prompt the user to
continue if the pre-computed output is acceptable. This output is useful for
understanding what will be done with the given devices. Once confirmation is
accepted, the process continues.

Although prompts are good for understanding outcomes, it is often useful to
try different inputs to find the best possible setup. With the ``--report``
flag, one can prevent any actual operations and simply verify the outcome of
a given set of inputs.

**pretty reporting**
For two spinning devices, this is how the ``pretty`` report (the default) would
look::

    $ ceph-volume lvm batch --report /dev/sdb /dev/sdc

    Total OSDs: 2

      Type            Path                     LV Size         % of device
    --------------------------------------------------------------------------------
      [data]          /dev/sdb                 10.74 GB        100%
    --------------------------------------------------------------------------------
      [data]          /dev/sdc                 10.74 GB        100%

**JSON reporting**
Reporting can produce a richer output with ``JSON``, which gives a few more
hints on sizing. This output is better suited for other tooling to consume
and transform.

For two spinning devices, this is how the ``JSON`` report would look::

    $ ceph-volume lvm batch --report --format=json /dev/sdb /dev/sdc
    {
        "osds": [
            {
                "block.db": {},
                "data": {
                    "human_readable_size": "10.74 GB",
                    "parts": 1,
                    "path": "/dev/sdb",
                    "percentage": 100,
                    "size": 11534336000.0
                }
            },
            {
                "block.db": {},
                "data": {
                    "human_readable_size": "10.74 GB",
                    "parts": 1,
                    "path": "/dev/sdc",
                    "percentage": 100,
                    "size": 11534336000.0
                }
            }
        ],
        "vgs": [
            {
                "devices": [
                    "/dev/sdb"
                ],
                "parts": 1
            },
            {
                "devices": [
                    "/dev/sdc"
                ],
                "parts": 1
            }
        ]
    }
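
Since the report is plain ``JSON``, it can be fed to other tools directly. For
example, assuming ``jq`` is installed, the data paths and sizes could be
extracted like this::

    $ ceph-volume lvm batch --report --format=json /dev/sdb /dev/sdc | \
          jq -r '.osds[].data | "\(.path) \(.human_readable_size)"'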