Deploy OSDs with different device technologies like lvm or physical disks using
pluggable tools (:doc:`lvm/index` itself is treated like a plugin), following
a predictable and robust way of preparing, activating, and starting OSDs.
:ref:`Overview <ceph-volume-overview>` |
:ref:`Plugin Guide <ceph-volume-plugins>` |
**Command Line Subcommands**
There is currently support for ``lvm`` and plain disks (with GPT partitions)
that may have been deployed with ``ceph-disk``.
* :ref:`ceph-volume-lvm`
* :ref:`ceph-volume-simple`
Starting with Ceph version 12.2.2, ``ceph-disk`` is deprecated. Deprecation
warnings that link to this page will be shown. Users are strongly encouraged
to switch to ``ceph-volume``.
For new deployments, :ref:`ceph-volume-lvm` is recommended: it can use any
logical volume as input for data OSDs, or it can set up a minimal/naive
logical volume on a device.
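As a sketch of that workflow, a single ``create`` call prepares and activates
an OSD in one step. The device and volume names below are placeholders; use an
unused disk or an existing logical volume on your host:

```shell
# Create an OSD from a raw device; ceph-volume builds a logical
# volume on it, then prepares and activates the OSD.
ceph-volume lvm create --data /dev/sdb

# Alternatively, pass an existing logical volume as vg/lv:
ceph-volume lvm create --data osd_vg/osd_lv
```

Both forms require a running Ceph cluster and root privileges on the host.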
If the cluster has OSDs that were provisioned with ``ceph-disk``, then
``ceph-volume`` can take over their management with
:ref:`ceph-volume-simple`. A scan is done on the data device or OSD directory,
and ``ceph-disk`` is fully disabled.
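The takeover described above can be sketched as follows; the OSD directory
path is illustrative, so substitute the path of a running OSD on your host:

```shell
# Capture the metadata of a ceph-disk OSD; the scan persists it as
# JSON so ceph-volume can manage the OSD from then on.
ceph-volume simple scan /var/lib/ceph/osd/ceph-0

# Activate all scanned OSDs via ceph-volume's systemd units,
# disabling the ceph-disk udev/systemd machinery in the process.
ceph-volume simple activate --all
```

After activation, reboots will bring the OSDs up through ``ceph-volume``
rather than ``ceph-disk``.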
If using encryption with OSDs, there is currently no support in ``ceph-volume``
for this scenario (although support is coming soon). In this case, it is OK to
continue using ``ceph-disk`` until ``ceph-volume`` fully supports encryption.
This page will be updated when that happens.