Converting an existing cluster to cephadm
=========================================
Some existing clusters can be converted so that they can be managed with
``cephadm``. This applies to clusters that were deployed with
``ceph-deploy``, ``ceph-ansible``, or ``DeepSea``.
This section explains how to determine whether your cluster can be converted
to a state in which it can be managed by ``cephadm``, and how to perform the
conversion.

Limitations
-----------
* Cephadm works only with BlueStore OSDs. FileStore OSDs that are in your
  cluster cannot be managed with ``cephadm``.

Preparation
-----------
#. Make sure that the ``cephadm`` command line tool is available on each host
   in the existing cluster. See :ref:`get-cephadm` to learn how.
#. Prepare each host for use by ``cephadm`` by running this command:

   .. prompt:: bash #

      cephadm prepare-host
#. Choose a version of Ceph to use for the conversion. This procedure works
   with any release of Ceph from Octopus (15.2.z) onward. The latest stable
   release of Ceph is the default. You might be upgrading from an earlier
   Ceph release at the same time that you're performing this conversion; if
   so, make sure to follow any upgrade-related instructions for that release.
   Pass the image to cephadm with the following command:

   .. prompt:: bash #

      cephadm --image $IMAGE <rest of command goes here>
   The conversion begins.
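   For example, to pin the conversion to a specific release, you might set
   ``$IMAGE`` as follows and prefix each of the ``cephadm adopt`` commands
   used later in this procedure (the tag shown is only an illustration;
   substitute the release you chose):

   .. prompt:: bash #

      IMAGE=quay.io/ceph/ceph:v17.2.7
      cephadm --image $IMAGE adopt --style legacy --name mon.<hostname>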
#. Confirm that the conversion is underway by running ``cephadm ls`` and
   making sure that the style of the daemons has changed:

   .. prompt:: bash #

      cephadm ls
   Before starting the conversion process, ``cephadm ls`` shows all existing
   daemons with a style of ``legacy``. As the adoption process progresses,
   adopted daemons appear with a style of ``cephadm:v1``.
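   To watch just the two relevant fields, you can filter the JSON output of
   ``cephadm ls``. This is a minimal sketch that assumes ``jq`` is installed
   on the host; the daemon names shown are hypothetical:

   .. prompt:: bash #

      cephadm ls | jq -c '.[] | {name, style}'

   .. code-block:: bash

      {"name":"mon.host1","style":"cephadm:v1"}
      {"name":"osd.3","style":"legacy"}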
Adoption process
----------------

#. Make sure that the Ceph configuration has been migrated to use the cluster
   config database. If ``/etc/ceph/ceph.conf`` is identical on each host,
   then the following command can be run on a single host and will affect all
   hosts:

   .. prompt:: bash #

      ceph config assimilate-conf -i /etc/ceph/ceph.conf
   If there are configuration variations between hosts, you will need to
   repeat this command on each host. During the adoption process, view the
   cluster's configuration to confirm that it is complete by running the
   following command:

   .. prompt:: bash #

      ceph config dump
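   You can also spot-check a single option rather than reading the whole
   dump. For example (``public_network`` is just an illustrative option):

   .. prompt:: bash #

      ceph config get mon public_network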
#. Adopt each monitor:

   .. prompt:: bash #

      cephadm adopt --style legacy --name mon.<hostname>
   Each legacy monitor should stop, quickly restart as a cephadm
   container, and rejoin the quorum.
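   For example, on a host named ``host1`` (the hostname is hypothetical), you
   could adopt the monitor and then confirm that it has rejoined the quorum:

   .. prompt:: bash #

      cephadm adopt --style legacy --name mon.host1
      ceph -s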
#. Adopt each manager:

   .. prompt:: bash #

      cephadm adopt --style legacy --name mgr.<hostname>
#. Enable cephadm:

   .. prompt:: bash #

      ceph mgr module enable cephadm
      ceph orch set backend cephadm
#. Generate an SSH key:

   .. prompt:: bash #

      ceph cephadm generate-key
      ceph cephadm get-pub-key > ~/ceph.pub
#. Install the cluster SSH key on each host in the cluster:

   .. prompt:: bash #

      ssh-copy-id -f -i ~/ceph.pub root@<host>
   .. note::
      It is also possible to import an existing SSH key. See
      :ref:`ssh errors <cephadm-ssh-errors>` in the troubleshooting
      document for instructions that describe how to import existing
      SSH keys.
   .. note::
      It is also possible to have cephadm use a non-root user to SSH into
      cluster hosts. This user needs to have passwordless sudo access.
      Use ``ceph cephadm set-user <user>`` and copy the SSH key to that user.
      See :ref:`cephadm-ssh-user` for details.
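   A minimal sketch of the non-root variant, assuming a user named ``deploy``
   (the username is hypothetical) that already has passwordless sudo access
   on every host:

   .. prompt:: bash #

      ceph cephadm set-user deploy
      ssh-copy-id -f -i ~/ceph.pub deploy@<host>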
#. Tell cephadm which hosts to manage:

   .. prompt:: bash #

      ceph orch host add <hostname> [ip-address]
   This will perform a ``cephadm check-host`` on each host before adding it;
   this check ensures that the host is functioning properly. The IP address
   argument is recommended; if it is not provided, the host name will be
   resolved via DNS.
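   For example (the hostname and address are hypothetical):

   .. prompt:: bash #

      ceph orch host add host2 10.10.0.102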
#. Verify that the adopted monitor and manager daemons are visible:

   .. prompt:: bash #

      ceph orch ps
#. Adopt all OSDs in the cluster:

   .. prompt:: bash #

      cephadm adopt --style legacy --name <name>

   For example:

   .. prompt:: bash #

      cephadm adopt --style legacy --name osd.1
      cephadm adopt --style legacy --name osd.2
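   On a host with many OSDs, the adoption can be scripted. This is a sketch
   to be run on each OSD host, assuming the CRUSH host buckets are named
   after the hostnames (the usual default) so that ``ceph osd ls-tree`` can
   list the local OSD IDs:

   .. code-block:: bash

      # Adopt every OSD whose CRUSH location is this host.
      for id in $(ceph osd ls-tree "$(hostname)"); do
          cephadm adopt --style legacy --name "osd.$id"
      done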
#. Redeploy MDS daemons by telling cephadm how many daemons to run for
   each file system. List file systems by name with the command ``ceph fs
   ls``. Run the following command on the master nodes to redeploy the MDS
   daemons:

   .. prompt:: bash #

      ceph orch apply mds <fs-name> [--placement=<placement>]
   For example, in a cluster with a single file system called ``foo``:

   .. prompt:: bash #

      ceph fs ls

   .. code-block:: bash

      name: foo, metadata pool: foo_metadata, data pools: [foo_data ]

   .. prompt:: bash #

      ceph orch apply mds foo 2
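   The placement can also name specific hosts. For instance, to run two MDS
   daemons for ``foo`` pinned to two hypothetical hosts:

   .. prompt:: bash #

      ceph orch apply mds foo --placement="2 host1 host2"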
   Confirm that the new MDS daemons have started:

   .. prompt:: bash #

      ceph orch ps --daemon-type mds
   Finally, stop and remove the legacy MDS daemons:

   .. prompt:: bash #

      systemctl stop ceph-mds.target
      rm -rf /var/lib/ceph/mds/ceph-*
#. Redeploy RGW daemons. Cephadm manages RGW daemons by zone. For each
   zone, deploy new RGW daemons with cephadm:

   .. prompt:: bash #

      ceph orch apply rgw <svc_id> [--realm=<realm>] [--zone=<zone>] [--port=<port>] [--ssl] [--placement=<placement>]
   where *<placement>* can be a simple daemon count, or a list of
   specific hosts (see :ref:`orchestrator-cli-placement-spec`), and the
   zone and realm arguments are needed only for a multisite setup.
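   For example, a multisite deployment might look like the following (the
   realm, zone, and host names are hypothetical):

   .. prompt:: bash #

      ceph orch apply rgw myrealm-myzone --realm=myrealm --zone=myzone --placement="2 host1 host2"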
   After the daemons have started and you have confirmed that they are
   functioning, stop and remove the old, legacy daemons:

   .. prompt:: bash #

      systemctl stop ceph-rgw.target
      rm -rf /var/lib/ceph/radosgw/ceph-*
#. Check the output of the command ``ceph health detail`` for cephadm warnings
   about stray cluster daemons or hosts that are not yet managed by cephadm.
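   A stray-daemon warning looks roughly like this (illustrative output; the
   daemon and host names are hypothetical):

   .. prompt:: bash #

      ceph health detail

   .. code-block:: bash

      HEALTH_WARN 1 stray daemon(s) not managed by cephadm
      [WRN] CEPHADM_STRAY_DAEMON: 1 stray daemon(s) not managed by cephadm
          stray daemon mds.foo.host1 on host host1 not managed by cephadm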