.. _cephadm-adoption:

Converting an existing cluster to cephadm
=========================================

It is possible to convert some existing clusters so that they can be managed
with ``cephadm``. This applies to some clusters that were deployed with
``ceph-deploy``, ``ceph-ansible``, or ``DeepSea``.

This section of the documentation explains how to determine whether your
clusters can be converted to a state in which they can be managed by
``cephadm`` and how to perform those conversions.

Limitations
-----------

* Cephadm works only with BlueStore OSDs. FileStore OSDs that are in your
  cluster cannot be managed with ``cephadm``.

Preparation
-----------

#. Make sure that the ``cephadm`` command line tool is available on each host
   in the existing cluster. See :ref:`get-cephadm` to learn how.

#. Prepare each host for use by ``cephadm`` by running this command:

   .. prompt:: bash #

      cephadm prepare-host

#. Choose a version of Ceph to use for the conversion. This procedure works
   with any release of Ceph that is Octopus (15.2.z) or later. The latest
   stable release of Ceph is the default. You might be upgrading from an
   earlier Ceph release at the same time that you're performing this
   conversion; if you are upgrading from an earlier release, make sure to
   follow any upgrade-related instructions for that release.

   Pass the image to cephadm with the following command:

   .. prompt:: bash #

      cephadm --image $IMAGE <rest of command goes here>

   The conversion begins.

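   For example, to pin the adoption commands that follow to a specific
   release image (the tag below is only an example; substitute the image
   you chose):

   .. prompt:: bash #

      cephadm --image quay.io/ceph/ceph:v17.2.6 adopt --style legacy --name mon.<hostname>
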
#. Confirm that the conversion is underway by running ``cephadm ls`` and
   making sure that the style of the daemons has changed:

   .. prompt:: bash #

      cephadm ls

   Before the conversion process begins, ``cephadm ls`` shows all existing
   daemons with a style of ``legacy``. As the adoption process progresses,
   adopted daemons will appear with a style of ``cephadm:v1``.
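
   For example, on a partially adopted host the output (a JSON array, shown
   here abridged to two entries with only the ``style`` and ``name`` fields;
   the daemon names are placeholders) might contain:

   .. code-block:: json

      [
          { "style": "cephadm:v1", "name": "mon.host1" },
          { "style": "legacy", "name": "osd.1" }
      ]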


Adoption process
----------------

#. Make sure that the ceph configuration has been migrated to use the cluster
   config database. If ``/etc/ceph/ceph.conf`` is identical on each host, then
   the following command can be run on a single host and will take effect for
   all hosts:

   .. prompt:: bash #

      ceph config assimilate-conf -i /etc/ceph/ceph.conf

   If there are configuration variations between hosts, you will need to
   repeat this command on each host. During the adoption process, you can view
   the cluster's configuration and confirm that it is complete by running the
   following command:

   .. prompt:: bash #

      ceph config dump
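
   You can also spot-check a single option that you expect to have been
   assimilated (``public_network`` is only an example here; use an option
   from your own ``ceph.conf``):

   .. prompt:: bash #

      ceph config get mon public_network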

#. Adopt each monitor:

   .. prompt:: bash #

      cephadm adopt --style legacy --name mon.<hostname>

   Each legacy monitor should stop, quickly restart as a cephadm
   container, and rejoin the quorum.

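   After each monitor has been adopted, you can confirm that it has rejoined
   the quorum:

   .. prompt:: bash #

      ceph mon stat
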
#. Adopt each manager:

   .. prompt:: bash #

      cephadm adopt --style legacy --name mgr.<hostname>

#. Enable cephadm:

   .. prompt:: bash #

      ceph mgr module enable cephadm
      ceph orch set backend cephadm

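   Confirm that the orchestrator backend is now active:

   .. prompt:: bash #

      ceph orch status
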
#. Generate an SSH key:

   .. prompt:: bash #

      ceph cephadm generate-key
      ceph cephadm get-pub-key > ~/ceph.pub

#. Install the cluster SSH key on each host in the cluster:

   .. prompt:: bash #

      ssh-copy-id -f -i ~/ceph.pub root@<host>

   .. note::
      It is also possible to import an existing SSH key. See
      :ref:`SSH errors <cephadm-ssh-errors>` in the troubleshooting
      document for instructions that describe how to import existing
      SSH keys.

   .. note::
      It is also possible to have cephadm use a non-root user to SSH
      into cluster hosts. This user needs to have passwordless sudo access.
      Use ``ceph cephadm set-user <user>`` and copy the SSH key to that user.
      See :ref:`cephadm-ssh-user` for details.

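   For example, to have cephadm connect as a dedicated non-root user
   (``cephadm-user`` is only an example name; the user must already exist on
   every host and have passwordless sudo access):

   .. prompt:: bash #

      ceph cephadm set-user cephadm-user
      ssh-copy-id -f -i ~/ceph.pub cephadm-user@<host>
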
#. Tell cephadm which hosts to manage:

   .. prompt:: bash #

      ceph orch host add <hostname> [ip-address]

   This will perform a ``cephadm check-host`` on each host before adding it;
   this check ensures that the host is functioning properly. The IP address
   argument is recommended; if not provided, then the host name will be
   resolved via DNS.
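
   For example (the host names and addresses below are placeholders;
   substitute your own), followed by a listing of the hosts that the
   orchestrator now knows about:

   .. prompt:: bash #

      ceph orch host add host1 10.0.0.1
      ceph orch host add host2 10.0.0.2
      ceph orch host ls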

#. Verify that the adopted monitor and manager daemons are visible:

   .. prompt:: bash #

      ceph orch ps

#. Adopt all OSDs in the cluster:

   .. prompt:: bash #

      cephadm adopt --style legacy --name <name>

   For example:

   .. prompt:: bash #

      cephadm adopt --style legacy --name osd.1
      cephadm adopt --style legacy --name osd.2

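   If you are unsure which OSD IDs are located on a given host, you can list
   that host's legacy OSD data directories or consult the CRUSH tree:

   .. prompt:: bash #

      ls /var/lib/ceph/osd
      ceph osd tree
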
#. Redeploy MDS daemons by telling cephadm how many daemons to run for
   each file system. List file systems by name with the command
   ``ceph fs ls``. Run the following command on the master nodes to
   redeploy the MDS daemons:

   .. prompt:: bash #

      ceph orch apply mds <fs-name> [--placement=<placement>]

   For example, in a cluster with a single file system called ``foo``:

   .. prompt:: bash #

      ceph fs ls

   .. code-block:: bash

      name: foo, metadata pool: foo_metadata, data pools: [foo_data ]

   .. prompt:: bash #

      ceph orch apply mds foo 2

   Confirm that the new MDS daemons have started:

   .. prompt:: bash #

      ceph orch ps --daemon-type mds

   Finally, stop and remove the legacy MDS daemons:

   .. prompt:: bash #

      systemctl stop ceph-mds.target
      rm -rf /var/lib/ceph/mds/ceph-*

#. Redeploy RGW daemons. Cephadm manages RGW daemons by zone. For each
   zone, deploy new RGW daemons with cephadm:

   .. prompt:: bash #

      ceph orch apply rgw <svc_id> [--realm=<realm>] [--zone=<zone>] [--port=<port>] [--ssl] [--placement=<placement>]

   where *<placement>* can be a simple daemon count, or a list of
   specific hosts (see :ref:`orchestrator-cli-placement-spec`), and the
   zone and realm arguments are needed only for a multisite setup.

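   For example, a simple single-zone deployment (the service id, port, and
   host names below are placeholders; adjust them for your cluster):

   .. prompt:: bash #

      ceph orch apply rgw myrgw --port=8000 --placement="2 host1 host2"
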
   After the daemons have started and you have confirmed that they are
   functioning, stop and remove the old, legacy daemons:

   .. prompt:: bash #

      systemctl stop ceph-rgw.target
      rm -rf /var/lib/ceph/radosgw/ceph-*

#. Check the output of the command ``ceph health detail`` for cephadm warnings
   about stray cluster daemons or hosts that are not yet managed by cephadm.
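
   Stray daemons and hosts are reported under the health codes
   ``CEPHADM_STRAY_DAEMON`` and ``CEPHADM_STRAY_HOST``. A sketch of what this
   can look like (the daemon and host names here are only placeholders):

   .. code-block:: bash

      HEALTH_WARN 1 stray daemon(s) not managed by cephadm
      [WRN] CEPHADM_STRAY_DAEMON: 1 stray daemon(s) not managed by cephadm
          stray daemon osd.3 on host host2 not managed by cephadm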