ceph orch daemon add osd host1:data_devices=/dev/sda,/dev/sdb,db_devices=/dev/sdc,osds_per_device=2
+* Create an OSD on a specific LVM logical volume on a specific host:
+
+ .. prompt:: bash #
+
+ ceph orch daemon add osd *<host>*:*<lvm-path>*
+
+ For example:
+
+ .. prompt:: bash #
+
+ ceph orch daemon add osd host1:/dev/vg_osd/lvm_osd1701
+
* You can use :ref:`drivegroups` to categorize device(s) based on their
  properties. This might be useful in forming a clearer picture of which
  devices are available to consume. Properties include device type (SSD or
  HDD), model name, and size.
* If you remove an OSD and clean the LVM physical volume, a new OSD will be
created automatically.
-To disable the automatic creation of OSD on available devices, use the
-``unmanaged`` parameter:
-
+If you want to avoid this behavior (disable the automatic creation of OSDs on
+available devices), use the ``unmanaged`` parameter:
.. prompt:: bash #

   ceph orch apply osd --all-available-devices --unmanaged=true
- orch osd rm <osd_id(s)> --replace [--force]
+ ceph orch osd rm <osd_id(s)> --replace [--force]
Example:

.. prompt:: bash #

   ceph orch osd rm 4 --replace
Sizes don't have to be specified exclusively in gigabytes (G).
-Other units of size are supported: Megabyte(M), Gigabyte(G) and Terrabyte(T).
+Other units of size are supported: Megabyte(M), Gigabyte(G) and Terabyte(T).
Appending the (B) for byte is also supported: ``MB``, ``GB``, ``TB``.
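For instance, a drivegroup filter can bound device sizes using these units (an illustrative sketch; the service id and size range here are hypothetical):

.. code-block:: yaml

    service_type: osd
    service_id: osd_size_filter     # hypothetical id
    placement:
      host_pattern: '*'
    spec:
      data_devices:
        size: '200G:2T'             # only consume devices between 200G and 2T

A single value (``size: '2TB'``), an upper bound (``size: ':2TB'``), or a lower bound (``size: '2TB:'``) are also valid forms.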
:members:
:exclude-members: from_json
+
Examples
========
host_pattern: '*'
spec:
data_devices:
- rotational: 0
+ rotational: 1
db_devices:
model: MC-55-44-XZ
limit: 2 # db_slots is actually to be favoured here, but it's not implemented yet
vendor: VendorC
This would create the desired layout by using all HDDs as data_devices with two SSDs assigned as dedicated db/wal devices.
-The remaining SSDs(8) will be data_devices that have the 'VendorC' NVMEs assigned as dedicated db/wal devices.
+The remaining SSDs(10) will be data_devices that have the 'VendorC' NVMEs assigned as dedicated db/wal devices.
Multiple hosts with the same disk layout
----------------------------------------
.. code-block:: none
20 HDDs
- Vendor: Intel
+ Vendor: VendorA
Model: SSD-123-foo
Size: 4TB
2 SSDs
- Vendor: VendorA
+ Vendor: VendorB
Model: MC-55-44-ZX
Size: 512GB
.. code-block:: none
5 NVMEs
- Vendor: Intel
+ Vendor: VendorA
Model: SSD-123-foo
Size: 4TB
20 SSDs
- Vendor: VendorA
+ Vendor: VendorB
Model: MC-55-44-ZX
Size: 512GB
db_devices:
model: SSD-123-foo
+
This applies different OSD specs to different hosts depending on the `placement` key.
See :ref:`orchestrator-cli-placement-spec`
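For example, two specs can be combined in one file (the hostnames, patterns, and service ids here are illustrative) so that each spec applies only to the hosts matched by its ``placement``:

.. code-block:: yaml

    service_type: osd
    service_id: osd_spec_hdd_hosts      # hypothetical id
    placement:
      host_pattern: 'node[1-5]'         # hypothetical pattern
    spec:
      data_devices:
        rotational: 1
      db_devices:
        rotational: 0
    ---
    service_type: osd
    service_id: osd_spec_ssd_hosts
    placement:
      host_pattern: 'node[6-10]'
    spec:
      data_devices:
        model: MC-55-44-XZ
      db_devices:
        model: SSD-123-foo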