.. _ceph-volume-lvm-activate:

``activate``
============

After :ref:`ceph-volume-lvm-prepare` has completed its run, the volume can be
activated.

Activating the volume involves enabling a ``systemd`` unit that persists the
``OSD ID`` and its ``UUID`` (also called the ``fsid`` in the Ceph CLI tools).
After this information has been persisted, the cluster can determine which OSD
is enabled and must be mounted.

.. note:: This call is fully idempotent: it can be executed multiple times
   without changing the result of its first successful execution.

For information about OSDs deployed by cephadm, refer to
:ref:`cephadm-osd-activate`.

New OSDs
--------
To activate newly prepared OSDs, both the :term:`OSD id` and :term:`OSD uuid`
must be supplied. For example::

    ceph-volume lvm activate --bluestore 0 0263644D-0BF1-4D6D-BC34-28BD98AE3BC8

.. note:: The UUID is stored in the ``fsid`` file in the OSD path, which is
          generated when :ref:`ceph-volume-lvm-prepare` is used.
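
If the ``OSD id`` and ``OSD uuid`` of a prepared OSD are not at hand, they can
be looked up with the ``lvm list`` subcommand, which reads them back from the
:term:`LVM tags` on the devices::

    ceph-volume lvm list

The ``osd id`` and ``osd fsid`` fields in its output are the two values that
``activate`` expects.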

Activating all OSDs
-------------------

.. note:: For OSDs deployed by cephadm, please refer to :ref:`cephadm-osd-activate`
          instead.

It is possible to activate all existing OSDs at once by using the ``--all``
flag. For example::

    ceph-volume lvm activate --all

This call will inspect all the OSDs created by ``ceph-volume`` that are
inactive and activate them one by one. OSDs that are already running are
reported in the command output and skipped, which makes the call safe to rerun
(idempotent).
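
When the systemd units are managed externally (for example in containerized
deployments), the unit handling can be skipped. A minimal sketch, assuming the
``--no-systemd`` flag of the ``activate`` subcommand::

    ceph-volume lvm activate --all --no-systemd

This mounts and prepares the OSD directories without enabling or starting any
systemd units.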

requiring uuids
^^^^^^^^^^^^^^^
The :term:`OSD uuid` is required as an extra safeguard to make sure that the
right OSD is activated. It is entirely possible that a previous OSD with the
same id exists, in which case the wrong one could end up being activated.
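
As noted above, the uuid is persisted in the ``fsid`` file in the OSD path, so
for an OSD whose directory is currently mounted it can be read back directly
(the path below assumes the default cluster name ``ceph`` and OSD ``0``)::

    cat /var/lib/ceph/osd/ceph-0/fsid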


dmcrypt
^^^^^^^
If the OSD was prepared with dmcrypt by ``ceph-volume``, there is no need to
specify ``--dmcrypt`` on the command line again (that flag is not available for
the ``activate`` subcommand). An encrypted OSD will be automatically detected.
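
The detection is based on the :term:`LVM tags` stored at prepare time. To
verify what was recorded for the logical volumes on a host, the tags can be
inspected directly with LVM (assuming the ``ceph.encrypted`` tag name that
``ceph-volume`` applies)::

    lvs -o lv_name,lv_tags | grep ceph.encrypted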


Discovery
---------
For OSDs previously created by ``ceph-volume``, a *discovery* process is
performed using :term:`LVM tags` to enable the systemd units.

The systemd unit will capture the :term:`OSD id` and :term:`OSD uuid` and
persist them. Internally, activation enables the unit like so::

    systemctl enable ceph-volume@lvm-$id-$uuid

For example::

    systemctl enable ceph-volume@lvm-0-8715BEB4-15C5-49DE-BA6F-401086EC7B41

This starts the discovery process for the OSD with an id of ``0`` and a UUID
of ``8715BEB4-15C5-49DE-BA6F-401086EC7B41``.

.. note:: For more details on the systemd workflow, see
   :ref:`ceph-volume-lvm-systemd`.

The systemd unit will find the matching OSD device and, by inspecting its
:term:`LVM tags`, will proceed to:

#. Mount the device in the corresponding location (by convention this is
   ``/var/lib/ceph/osd/<cluster name>-<osd id>/``).

#. Ensure that all required devices are ready for that OSD.

#. Start the ``ceph-osd@<osd id>`` systemd unit (``ceph-osd@0`` in the example
   above).

.. note:: The system infers the objectstore type by inspecting the LVM tags
          applied to the OSD devices.
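
Once the unit has run, the result can be verified with standard systemd and
mount tooling (shown here for the example OSD ``0``; adjust the id as
needed)::

    systemctl status ceph-osd@0
    findmnt /var/lib/ceph/osd/ceph-0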

Existing OSDs
-------------
Existing OSDs that were deployed with ``ceph-disk`` need to be scanned and
activated :ref:`using the simple sub-command <ceph-volume-simple>`. If a
different tool was used, then the only way to port them over to the new
mechanism is to prepare them again (losing data). See
:ref:`ceph-volume-lvm-existing-osds` for details on how to proceed.
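
As a rough sketch of that workflow (see the ``simple`` sub-command
documentation for the authoritative steps; ``/dev/sdb1`` is only an example
data partition)::

    ceph-volume simple scan /dev/sdb1
    ceph-volume simple activate 0 8715BEB4-15C5-49DE-BA6F-401086EC7B41

``scan`` captures the metadata of the ``ceph-disk`` OSD into a JSON file under
``/etc/ceph/osd/``, and ``activate`` then manages the OSD from that file.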

Summary
-------
To recap, the ``activate`` process for :term:`bluestore` will:

#. Require both the :term:`OSD id` and :term:`OSD uuid`.
#. Enable the systemd unit with the matching id and uuid.
#. Create the ``tmpfs`` mount at the OSD directory in
   ``/var/lib/ceph/osd/$cluster-$id/``.
#. Recreate all the files needed with ``ceph-bluestore-tool prime-osd-dir`` by
   pointing it to the OSD ``block`` device (see the sketch after this list).
#. Ensure all devices are ready and linked via the systemd unit.
#. Start the matching ``ceph-osd`` systemd unit.
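
A minimal sketch of the ``prime-osd-dir`` step, assuming a hypothetical
logical volume ``/dev/ceph-vg/osd-block-0`` backing OSD ``0``::

    ceph-bluestore-tool prime-osd-dir \
        --dev /dev/ceph-vg/osd-block-0 \
        --path /var/lib/ceph/osd/ceph-0

This repopulates the otherwise empty ``tmpfs`` directory with the files the
OSD needs, based on the metadata stored in the ``block`` device's label.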