.. _cephadm-adoption:

Converting an existing cluster to cephadm
=========================================

Cephadm allows you to convert an existing Ceph cluster that
was deployed with ceph-deploy, ceph-ansible, DeepSea, or a similar tool.

Limitations
-----------

* Cephadm only works with BlueStore OSDs. If there are FileStore OSDs
  in your cluster, you cannot manage them with cephadm.

Preparation
-----------

#. Get the ``cephadm`` command line tool on each host in the existing
   cluster. See :ref:`get-cephadm`.
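
   For example, one way to fetch the standalone script is to pull it from the
   Octopus branch of the upstream ceph.git repository (a minimal sketch; a
   package-based install from your distribution works just as well)::

     # curl --silent --remote-name --location https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm
     # chmod +x cephadm
     # ./cephadm --help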

#. Prepare each host for use by ``cephadm``::

     # cephadm prepare-host

#. Determine which Ceph version you will use. You can use any Octopus (15.2.z)
   release or later. For example, ``docker.io/ceph/ceph:v15.2.0``. The default
   will be the latest stable release, but if you are upgrading from an earlier
   release at the same time be sure to refer to the upgrade notes for any
   special steps to take while upgrading.

   The image is passed to cephadm with::

     # cephadm --image $IMAGE <rest of command goes here>
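
   For example, to pin the adoption commands used below to a specific release
   (the image tag here is just an illustration)::

     # cephadm --image docker.io/ceph/ceph:v15.2.0 adopt --style legacy --name mon.<hostname>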

#. Cephadm can provide a list of all Ceph daemons on the current host::

     # cephadm ls

   Before starting, you should see that all existing daemons have a
   style of ``legacy`` in the resulting output. As the adoption
   process progresses, adopted daemons will appear as style
   ``cephadm:v1``.
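
   The output is a JSON list with one entry per daemon. A truncated sketch
   (the hostname ``host1`` is hypothetical, and the real output contains more
   fields) might look like::

     # cephadm ls
     [
       {
         "style": "legacy",
         "name": "mon.host1",
         ...
       },
       ...
     ]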


Adoption process
----------------

#. Ensure that the Ceph configuration has been migrated to use the cluster
   config database. If the ``/etc/ceph/ceph.conf`` is identical on each host,
   then on one host::

     # ceph config assimilate-conf -i /etc/ceph/ceph.conf

   If the configuration varies between hosts, you may need to repeat this
   command on each host. You can view the cluster's configuration to confirm
   that it is complete with::

     # ceph config dump
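
   For example, if each host has a slightly different ``ceph.conf``, one way
   to fold them all in (hostnames here are hypothetical) is to copy each file
   to a host with an admin keyring and assimilate them one at a time::

     # scp host2:/etc/ceph/ceph.conf ./ceph.conf.host2
     # ceph config assimilate-conf -i ./ceph.conf.host2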

#. Adopt each monitor::

     # cephadm adopt --style legacy --name mon.<hostname>

   Each legacy monitor should stop, quickly restart as a cephadm
   container, and rejoin the quorum.
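
   For example, on a monitor host named ``host1`` (a hypothetical hostname),
   you would run the following and then confirm that the monitor has rejoined
   the quorum::

     # cephadm adopt --style legacy --name mon.host1
     # ceph -s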

#. Adopt each manager::

     # cephadm adopt --style legacy --name mgr.<hostname>

#. Enable cephadm::

     # ceph mgr module enable cephadm
     # ceph orch set backend cephadm
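
   As an optional check, you can confirm that the orchestrator backend is now
   active with::

     # ceph orch status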

#. Generate an SSH key::

     # ceph cephadm generate-key
     # ceph cephadm get-pub-key > ceph.pub

#. Install the cluster SSH key on each host in the cluster::

     # ssh-copy-id -f -i ceph.pub root@<host>
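
   For example, with three hypothetical hosts ``host1``, ``host2``, and
   ``host3``, a simple shell loop does the job::

     # for h in host1 host2 host3; do ssh-copy-id -f -i ceph.pub root@$h; done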

#. Tell cephadm which hosts to manage::

     # ceph orch host add <hostname> [ip-address]

   This will perform a ``cephadm check-host`` on each host before
   adding it to ensure it is working. The IP address argument is only
   required if DNS does not allow you to connect to each host by its
   short name.
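
   For example (hostnames and addresses here are hypothetical), add each host
   and then list the hosts that cephadm knows about::

     # ceph orch host add host1
     # ceph orch host add host2 10.0.0.2
     # ceph orch host ls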

#. Verify that the adopted monitor and manager daemons are visible::

     # ceph orch ps
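
   You can also narrow the listing to a single daemon type, which is handy
   when checking one stage of the adoption at a time::

     # ceph orch ps --daemon-type mon
     # ceph orch ps --daemon-type mgr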

#. Adopt all OSDs in the cluster::

     # cephadm adopt --style legacy --name <name>

   For example::

     # cephadm adopt --style legacy --name osd.1
     # cephadm adopt --style legacy --name osd.2
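
   If a host carries many OSDs, a shell loop saves some typing (the OSD IDs
   below are hypothetical)::

     # for id in 1 2 3; do cephadm adopt --style legacy --name osd.$id; done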

#. Redeploy MDS daemons by telling cephadm how many daemons to run for
   each file system. You can list file systems by name with ``ceph fs
   ls``. For each file system::

     # ceph orch apply mds <fs-name> <num-daemons>

   For example, in a cluster with a single file system called `foo`::

     # ceph fs ls
     name: foo, metadata pool: foo_metadata, data pools: [foo_data ]
     # ceph orch apply mds foo 2

   Wait for the new MDS daemons to start with::

     # ceph orch ps --daemon-type mds

   Finally, stop and remove the legacy MDS daemons::

     # systemctl stop ceph-mds.target
     # rm -rf /var/lib/ceph/mds/ceph-*

#. Redeploy RGW daemons. Cephadm manages RGW daemons by zone. For each
   zone, deploy new RGW daemons with cephadm::

     # ceph orch apply rgw <realm> <zone> <placement> [--port <port>] [--ssl]

   where *<placement>* can be a simple daemon count, or a list of
   specific hosts (see :ref:`orchestrator-cli-placement-spec`).
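
   For example, for a hypothetical realm ``default`` with a single zone
   ``default-zone``, running two RGW daemons::

     # ceph orch apply rgw default default-zone 2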

   Once the daemons have started and you have confirmed they are functioning,
   stop and remove the old legacy daemons::

     # systemctl stop ceph-rgw.target
     # rm -rf /var/lib/ceph/radosgw/ceph-*

#. Check the ``ceph health detail`` output for cephadm warnings about
   stray cluster daemons or hosts that are not yet managed.
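
   For example, a daemon that has not yet been adopted typically surfaces as a
   ``CEPHADM_STRAY_DAEMON`` warning; a rough sketch (the daemon name and
   hostname below are hypothetical) looks like::

     # ceph health detail
     ...
     [WRN] CEPHADM_STRAY_DAEMON: 1 stray daemon(s) not managed by cephadm
         stray daemon osd.3 on host host2 not managed by cephadm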