Deploy OSDs with different device technologies, such as LVM or physical disks,
using pluggable tools (:doc:`lvm/index` itself is treated as a plugin),
following a predictable and robust process for preparing, activating, and
starting OSDs.
:ref:`Overview <ceph-volume-overview>` |
:ref:`Plugin Guide <ceph-volume-plugins>` |
**Command Line Subcommands**

There is currently support for ``lvm`` and for plain disks (with GPT
partitions) that may have been deployed with ``ceph-disk``.

``zfs`` support is available for running a FreeBSD cluster.

* :ref:`ceph-volume-lvm`
* :ref:`ceph-volume-simple`
* :ref:`ceph-volume-zfs`
The :ref:`ceph-volume-inventory` subcommand provides information and metadata
about a node's physical disk inventory.
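For example, the inventory can be reported for the whole node or for a single
device. This is a minimal sketch; the device path is a placeholder, and the
``--format`` flag assumes a recent Ceph release:

.. code-block:: shell

   # summarize all devices on the node, including availability
   # and any reasons a device was rejected for use as an OSD
   ceph-volume inventory

   # report on a single device, as JSON for programmatic use
   ceph-volume inventory /dev/sdb --format json
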
Starting with Ceph version 13.0.0, ``ceph-disk`` is deprecated. Deprecation
warnings will show up that link to this page. It is strongly suggested
that users migrate to ``ceph-volume``. There are two paths for migrating:

#. Keep OSDs deployed with ``ceph-disk``: The :ref:`ceph-volume-simple` command
   provides a way to take over the management while disabling ``ceph-disk``.

#. Redeploy existing OSDs with ``ceph-volume``: This is covered in depth in
   :ref:`rados-replacing-an-osd`.
For details on why ``ceph-disk`` was removed, please see the :ref:`Why was
ceph-disk replaced? <ceph-disk-replaced>` section.
For new deployments, :ref:`ceph-volume-lvm` is recommended. It can use any
logical volume as input for data OSDs, or it can set up a minimal/naive logical
volume from a device.
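The recommended path can be sketched as follows; the device path and the
volume group/logical volume names are placeholders for your environment:

.. code-block:: shell

   # prepare and activate a bluestore OSD in one step,
   # letting ceph-volume create the logical volume on a whole device
   ceph-volume lvm create --bluestore --data /dev/sdb

   # alternatively, use an existing logical volume as the data device
   ceph-volume lvm create --bluestore --data vg_name/lv_name
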
If the cluster has OSDs that were provisioned with ``ceph-disk``, then
``ceph-volume`` can take over their management with
:ref:`ceph-volume-simple`. A scan is done on the data device or OSD directory,
and ``ceph-disk`` is fully disabled. Encryption is fully supported.
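The takeover described above can be sketched as follows; the partition path is
a placeholder for the data device of a running ``ceph-disk`` OSD:

.. code-block:: shell

   # capture the OSD's metadata from its data partition into a JSON file
   ceph-volume simple scan /dev/sdb1

   # enable the scanned OSDs via systemd and disable the ceph-disk units
   ceph-volume simple activate --all
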