.. _ceph-volume-lvm-activate:

``activate``
============
Once :ref:`ceph-volume-lvm-prepare` is completed, and all the steps it entails
are done, the volume is ready to get "activated".

This activation process enables a systemd unit that persists the OSD ID and its
UUID (also called ``fsid`` in Ceph CLI tools), so that at boot time it can
understand which OSD is enabled and needs to be mounted.

.. note:: The execution of this call is fully idempotent, and there are no
          side effects when running it multiple times.

New OSDs
--------
To activate newly prepared OSDs, both the :term:`OSD id` and :term:`OSD uuid`
need to be supplied. For example::

    ceph-volume lvm activate --bluestore 0 0263644D-0BF1-4D6D-BC34-28BD98AE3BC8

.. note:: The UUID is stored in the ``fsid`` file in the OSD path, which is
          generated when :ref:`ceph-volume-lvm-prepare` is used.
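
If the id and uuid of a prepared OSD are not at hand, they can usually be
recovered from the metadata that ``ceph-volume`` keeps. For example, assuming
the default cluster name of ``ceph`` and an OSD id of ``0``::

    # list all logical volumes prepared by ceph-volume, including the
    # "osd id" and "osd fsid" (uuid) fields stored as LVM tags
    ceph-volume lvm list

    # or, for an OSD whose directory already exists, read the fsid file directly
    cat /var/lib/ceph/osd/ceph-0/fsid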

Activating all OSDs
-------------------
It is possible to activate all existing OSDs at once by using the ``--all``
flag. For example::

    ceph-volume lvm activate --all

This call will inspect all the OSDs created by ceph-volume that are inactive
and will activate them one by one. If any of the OSDs are already running, it
will report them in the command output and skip them, making it safe to rerun
(idempotent).
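
To check which OSD services are already running on the node before or after an
``--all`` run, standard systemd tooling can be used (this is not part of
ceph-volume itself)::

    # list all ceph-osd service instances and their current state
    systemctl list-units 'ceph-osd@*'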

requiring uuids
^^^^^^^^^^^^^^^
The :term:`OSD uuid` is required as an extra safety check to ensure that the
right OSD is activated. It is entirely possible for a previous OSD with the
same id to exist, and without the uuid the wrong one could end up being
activated.
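
One way to double check that an id/uuid pair belongs together before activating
is to compare what ``ceph-volume`` reports locally with what the cluster has on
record. A minimal sketch, assuming OSD id ``0``::

    # the "osd fsid" field in the listing is the uuid of the local OSD
    ceph-volume lvm list

    # show the cluster's record for osd.0, which includes its uuid
    ceph osd dump | grep '^osd.0 '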


dmcrypt
^^^^^^^
If the OSD was prepared with dmcrypt by ceph-volume, there is no need to
specify ``--dmcrypt`` on the command line again (that flag is not available for
the ``activate`` subcommand). An encrypted OSD will be automatically detected.
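
Detection relies on the metadata that ``ceph-volume`` stored at prepare time.
If in doubt, the encryption flag can be inspected directly; a quick sketch,
assuming the ``ceph.encrypted`` LVM tag was set when the OSD was prepared::

    # the "encrypted" field in the listing reflects whether dmcrypt was used
    ceph-volume lvm list

    # alternatively, look at the raw LVM tags applied to the logical volumes
    lvs -o lv_name,lv_tags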


Discovery
---------
For OSDs previously created by ``ceph-volume``, a *discovery* process is
performed using :term:`LVM tags` to enable the systemd units.

The systemd unit will capture the :term:`OSD id` and :term:`OSD uuid` and
persist them. Internally, the activation will enable the unit like::

    systemctl enable ceph-volume@lvm-$id-$uuid

For example::

    systemctl enable ceph-volume@lvm-0-8715BEB4-15C5-49DE-BA6F-401086EC7B41

This would start the discovery process for the OSD with an id of ``0`` and a
UUID of ``8715BEB4-15C5-49DE-BA6F-401086EC7B41``.

.. note:: For more details on the systemd workflow see :ref:`ceph-volume-lvm-systemd`.

The systemd unit will look for the matching OSD device, and by looking at its
:term:`LVM tags` will proceed to:

#. mount the device in the corresponding location (by convention this is
   ``/var/lib/ceph/osd/<cluster name>-<osd id>/``)

#. ensure that all required devices are ready for that OSD. In the case of
   a journal (when ``--filestore`` is selected) the device will be queried (with
   ``blkid`` for partitions, and lvm for logical volumes) to ensure that the
   correct device is being linked. The symbolic link will *always* be re-done to
   ensure that the correct device is linked.

#. start the ``ceph-osd@0`` systemd unit

.. note:: The system infers the objectstore type (filestore or bluestore) by
          inspecting the LVM tags applied to the OSD devices.
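
After activation, the result of these steps can be verified with standard
tooling. For example, assuming OSD id ``0`` on the default ``ceph`` cluster::

    # confirm the OSD directory is mounted
    findmnt /var/lib/ceph/osd/ceph-0

    # confirm the block (and, for filestore, journal) symlinks resolve
    ls -l /var/lib/ceph/osd/ceph-0/

    # confirm the OSD service is up
    systemctl status ceph-osd@0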

Existing OSDs
-------------
Existing OSDs that have been deployed with ``ceph-disk`` need to be scanned and
activated :ref:`using the simple sub-command <ceph-volume-simple>`. If
different tooling was used, then the only way to port them over to the new
mechanism is to prepare them again (losing data). See
:ref:`ceph-volume-lvm-existing-osds` for details on how to proceed.
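
In broad strokes, the ``simple`` flow scans the running ``ceph-disk`` OSD to
capture its metadata into a JSON file, and then activates it from that file.
A rough sketch, assuming an OSD with id ``0`` mounted at
``/var/lib/ceph/osd/ceph-0`` (see the linked documentation for the full
procedure)::

    # capture the OSD metadata into a JSON file under /etc/ceph/osd/
    ceph-volume simple scan /var/lib/ceph/osd/ceph-0

    # enable and start the scanned OSDs using the captured metadata
    ceph-volume simple activate --all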

Summary
-------
To recap the ``activate`` process for :term:`bluestore` (a command-level sketch
of these steps follows the list):

#. require both :term:`OSD id` and :term:`OSD uuid`
#. enable the systemd unit with matching id and uuid
#. create the ``tmpfs`` mount at the OSD directory in
   ``/var/lib/ceph/osd/$cluster-$id/``
#. recreate all the files needed with ``ceph-bluestore-tool prime-osd-dir`` by
   pointing it to the OSD ``block`` device
#. the systemd unit will ensure all devices are ready and linked
#. the matching ``ceph-osd`` systemd unit will get started

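For reference, the bluestore steps above correspond roughly to the following
commands. This is only an illustrative sketch of what activation performs
internally; the logical volume path ``/dev/ceph-vg/block-lv`` is a placeholder
for the OSD's actual ``block`` device::

    # create the tmpfs mount for the OSD directory
    mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0

    # recreate the files in the OSD directory from the block device
    ceph-bluestore-tool prime-osd-dir --dev /dev/ceph-vg/block-lv \
        --path /var/lib/ceph/osd/ceph-0

    # ensure the block symlink points at the correct device
    ln -snf /dev/ceph-vg/block-lv /var/lib/ceph/osd/ceph-0/block

    # start the OSD
    systemctl start ceph-osd@0
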
And for :term:`filestore`:

#. require both :term:`OSD id` and :term:`OSD uuid`
#. enable the systemd unit with matching id and uuid
#. the systemd unit will ensure all devices are ready and mounted (if needed)
#. the matching ``ceph-osd`` systemd unit will get started
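
The filestore call is invoked the same way; the objectstore flag is optional
since the type is inferred from the LVM tags, but it can be passed explicitly.
Reusing the example uuid from above::

    ceph-volume lvm activate --filestore 0 8715BEB4-15C5-49DE-BA6F-401086EC7B41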