Deploy OSDs with different device technologies like lvm or physical disks using
pluggable tools (:doc:`lvm/index` itself is treated like a plugin), following
a predictable, robust way of preparing, activating, and starting OSDs.
:ref:`Overview <ceph-volume-overview>` |
:ref:`Plugin Guide <ceph-volume-plugins>` |
**Command Line Subcommands**
There is currently support for ``lvm``, and for plain disks (with GPT
partitions) that may have been deployed with ``ceph-disk``.
* :ref:`ceph-volume-lvm`
* :ref:`ceph-volume-simple`
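Both subcommands document their options inline, which is a quick way to see
what each one supports:

```shell
# Show the options and sub-subcommands for each backend
ceph-volume lvm --help
ceph-volume simple --help
```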
Starting with Ceph version 12.2.2, ``ceph-disk`` is deprecated. Deprecation
warnings that link to this page will be displayed. It is strongly suggested
that users switch to ``ceph-volume``. There are two paths for migrating:
#. Keep OSDs deployed with ``ceph-disk``: the :ref:`ceph-volume-simple`
   command provides a way to take over management while disabling ``ceph-disk``.
#. Redeploy existing OSDs with ``ceph-volume``: this is covered in depth in
   :ref:`rados-replacing-an-osd`.
For new deployments, :ref:`ceph-volume-lvm` is recommended; it can use any
logical volume as input for data OSDs, or it can set up a minimal/naive
logical volume from a device.
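As a sketch (the volume group, logical volume, and device names below are
placeholders that will differ per cluster), an OSD can be created either from
an existing logical volume or from a raw device:

```shell
# Use an existing logical volume as the OSD data device
ceph-volume lvm create --data vg0/osd-data-0

# Or point at a raw device and let ceph-volume provision the logical volume
ceph-volume lvm create --data /dev/sdb
```

``create`` combines the ``prepare`` and ``activate`` steps; the two can also
be run separately when finer control over activation is needed.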
If the cluster has OSDs that were provisioned with ``ceph-disk``,
``ceph-volume`` can take over their management with
:ref:`ceph-volume-simple`. A scan is done on the data device or OSD directory,
and ``ceph-disk`` is fully disabled. Encryption is fully supported.
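The takeover described above can be sketched as follows (the OSD directory is
a placeholder for an actual ``ceph-disk``-provisioned OSD):

```shell
# Capture the OSD's metadata so ceph-volume can manage it going forward
ceph-volume simple scan /var/lib/ceph/osd/ceph-0

# Enable systemd units for all scanned OSDs, disabling the ceph-disk ones
ceph-volume simple activate --all
```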