.. _orchestrator-cli-host-management:

===============
Host Management
===============

To list hosts associated with the cluster:

.. prompt:: bash #

   ceph orch host ls [--format yaml]

.. _cephadm-adding-hosts:

Adding Hosts
============

Hosts must have these :ref:`cephadm-host-requirements` installed.
Hosts without all the necessary requirements will fail to be added to the cluster.

To add each new host to the cluster, perform two steps:

#. Install the cluster's public SSH key in the new host's root user's ``authorized_keys`` file:

   .. prompt:: bash #

      ssh-copy-id -f -i /etc/ceph/ceph.pub root@*<new-host>*

   For example:

   .. prompt:: bash #

      ssh-copy-id -f -i /etc/ceph/ceph.pub root@host2
      ssh-copy-id -f -i /etc/ceph/ceph.pub root@host3

#. Tell Ceph that the new node is part of the cluster:

   .. prompt:: bash #

      ceph orch host add *<newhost>* [*<ip>*] [*<label1> ...*]

   For example:

   .. prompt:: bash #

      ceph orch host add host2 10.10.0.102
      ceph orch host add host3 10.10.0.103

   It is best to explicitly provide the host IP address. If an IP is
   not provided, then the host name will be immediately resolved via
   DNS and that IP will be used.

   One or more labels can also be included to immediately label the
   new host. For example, by default the ``_admin`` label will make
   cephadm maintain a copy of the ``ceph.conf`` file and a
   ``client.admin`` keyring file in ``/etc/ceph``:

   .. prompt:: bash #

      ceph orch host add host4 10.10.0.104 --labels _admin

.. _cephadm-removing-hosts:

Removing Hosts
==============

A host can safely be removed from the cluster once all daemons are removed from it.

To drain all daemons from a host, do the following:

.. prompt:: bash #

   ceph orch host drain *<host>*

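
For example, to drain the ``host2`` host added above:

.. prompt:: bash #

   ceph orch host drain host2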

The ``_no_schedule`` label will be applied to the host. See
:ref:`cephadm-special-host-labels`.

All OSDs on the host will be scheduled for removal. You can check the progress of the OSD removal with the following:

.. prompt:: bash #

   ceph orch osd rm status

See :ref:`cephadm-osd-removal` for more details about OSD removal.

You can check whether there are any daemons left on the host with the following:

.. prompt:: bash #

   ceph orch ps <host>

522d829b 91Once all daemons are removed you can remove the host with the following:
f67539c2
TL
92
93.. prompt:: bash #
94
522d829b 95 ceph orch host rm <host>
f67539c2 96
522d829b
TL
97Offline host removal
98--------------------
f67539c2 99
522d829b 100If a host is offline and can not be recovered it can still be removed from the cluster with the following:

.. prompt:: bash #

   ceph orch host rm <host> --offline --force

This can potentially cause data loss, as OSDs will be forcefully purged from the cluster by calling ``osd purge-actual`` for each OSD.
Service specs that still contain this host should be manually updated.

.. _orchestrator-host-labels:

Host labels
===========

The orchestrator supports assigning labels to hosts. Labels
are free form and have no particular meaning by themselves, and each host
can have multiple labels. They can be used to specify the placement
of daemons. See :ref:`orch-placement-by-labels`.

Labels can be added when adding a host with the ``--labels`` flag::

  ceph orch host add my_hostname --labels=my_label1
  ceph orch host add my_hostname --labels=my_label1,my_label2

To add a label to an existing host, run::

  ceph orch host label add my_hostname my_label

To remove a label, run::

  ceph orch host label rm my_hostname my_label

.. _cephadm-special-host-labels:

Special host labels
-------------------

The following host labels have a special meaning to cephadm. All start with ``_``.

* ``_no_schedule``: *Do not schedule or deploy daemons on this host*.

  This label prevents cephadm from deploying daemons on this host. If it is added to
  an existing host that already contains Ceph daemons, it will cause cephadm to move
  those daemons elsewhere (except OSDs, which are not removed automatically).

* ``_no_autotune_memory``: *Do not autotune memory on this host*.

  This label will prevent daemon memory from being tuned even when the
  ``osd_memory_target_autotune`` or similar option is enabled for one or more daemons
  on that host.

* ``_admin``: *Distribute client.admin and ceph.conf to this host*.

  By default, an ``_admin`` label is applied to the first host in the cluster (where
  bootstrap was originally run), and the ``client.admin`` key is set to be distributed
  to that host via the ``ceph orch client-keyring ...`` function. Adding this label
  to additional hosts will normally cause cephadm to deploy config and keyring files
  in ``/etc/ceph``.

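
For example, to have cephadm deploy config and keyring files to the ``host2``
host added earlier, apply the ``_admin`` label with the label command shown
above::

  ceph orch host label add host2 _admin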

Maintenance Mode
================

Place a host in and out of maintenance mode (this stops all Ceph daemons on the host)::

  ceph orch host maintenance enter <hostname> [--force]
  ceph orch host maintenance exit <hostname>

The ``--force`` flag, when entering maintenance, allows the user to bypass warnings (but not alerts).

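
For example, to stop all Ceph daemons on ``host2`` and later restart them::

  ceph orch host maintenance enter host2
  ceph orch host maintenance exit host2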

See also :ref:`cephadm-fqdn`.

Creating many hosts at once
===========================

Many hosts can be added at once using
``ceph orch apply -i`` by submitting a multi-document YAML file:

.. code-block:: yaml

   service_type: host
   hostname: node-00
   addr: 192.168.0.10
   labels:
   - example1
   - example2
   ---
   service_type: host
   hostname: node-01
   addr: 192.168.0.11
   labels:
   - grafana
   ---
   service_type: host
   hostname: node-02
   addr: 192.168.0.12

This can be combined with service specifications (below) to create a
cluster spec file that deploys a whole cluster in one command. See
``cephadm bootstrap --apply-spec`` to do this during bootstrap. Cluster
SSH keys must be copied to hosts prior to adding them.
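
For example, assuming the YAML above is saved to a file named ``hosts.yaml``
(a file name chosen here for illustration), all three hosts can be added
with::

  ceph orch apply -i hosts.yaml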

Setting the initial CRUSH location of a host
============================================

Hosts can contain a ``location`` identifier which will instruct cephadm to
create a new CRUSH host located in the specified hierarchy.

.. code-block:: yaml

   service_type: host
   hostname: node-00
   addr: 192.168.0.10
   location:
     rack: rack1

.. note::

   The ``location`` attribute only affects the initial CRUSH location. Subsequent
   changes to the ``location`` property will be ignored. Also, removing a host does not remove
   any CRUSH buckets.

SSH Configuration
=================

Cephadm uses SSH to connect to remote hosts. SSH uses a key to authenticate
with those hosts in a secure way.


Default behavior
----------------

Cephadm stores an SSH key in the monitor that is used to
connect to remote hosts. When the cluster is bootstrapped, this SSH
key is generated automatically and no additional configuration
is necessary.

A *new* SSH key can be generated with::

  ceph cephadm generate-key

The public portion of the SSH key can be retrieved with::

  ceph cephadm get-pub-key

The currently stored SSH key can be deleted with::

  ceph cephadm clear-key

You can make use of an existing key by directly importing it with::

  ceph config-key set mgr/cephadm/ssh_identity_key -i <key>
  ceph config-key set mgr/cephadm/ssh_identity_pub -i <pub>

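
For example, assuming an existing key pair at ``~/.ssh/ceph-key`` and
``~/.ssh/ceph-key.pub`` (illustrative paths), the import would look like::

  ceph config-key set mgr/cephadm/ssh_identity_key -i ~/.ssh/ceph-key
  ceph config-key set mgr/cephadm/ssh_identity_pub -i ~/.ssh/ceph-key.pub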

You will then need to restart the mgr daemon to reload the configuration with::

  ceph mgr fail

.. _cephadm-ssh-user:

Configuring a different SSH user
--------------------------------

Cephadm must be able to log into all the Ceph cluster nodes as a user
that has enough privileges to download container images, start containers
and execute commands without prompting for a password. If you do not want
to use the "root" user (the default option in cephadm), you must provide
cephadm with the name of the user that is going to be used to perform all
cephadm operations. Use the command::

  ceph cephadm set-user <user>

Prior to running this, the cluster SSH key needs to be added to this user's
``authorized_keys`` file and non-root users must have passwordless sudo access.
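
For example, assuming a user named ``deploy`` (an illustrative name) that
already has passwordless sudo on each host, you could install the cluster
key for that user and then configure cephadm to use it::

  ssh-copy-id -f -i /etc/ceph/ceph.pub deploy@host2
  ceph cephadm set-user deploy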

Customizing the SSH configuration
---------------------------------

Cephadm generates an appropriate ``ssh_config`` file that is
used for connecting to remote hosts. This configuration looks
something like this::

  Host *
  User root
  StrictHostKeyChecking no
  UserKnownHostsFile /dev/null

There are two ways to customize this configuration for your environment:

#. Import a customized configuration file that will be stored
   by the monitor with::

     ceph cephadm set-ssh-config -i <ssh_config_file>

   To remove a customized SSH config and revert back to the default behavior::

     ceph cephadm clear-ssh-config

#. You can configure a file location for the SSH configuration file with::

     ceph config set mgr mgr/cephadm/ssh_config_file <path>

   We do *not recommend* this approach. The path name must be
   visible to *any* mgr daemon, and cephadm runs all daemons as
   containers. That means that the file either needs to be placed
   inside a customized container image for your deployment, or
   manually distributed to the mgr data directory
   (``/var/lib/ceph/<cluster-fsid>/mgr.<id>`` on the host, visible at
   ``/var/lib/ceph/mgr/ceph-<id>`` from inside the container).

.. _cephadm-fqdn:

Fully qualified domain names vs bare host names
===============================================

.. note::

   cephadm demands that the name of the host given via ``ceph orch host add``
   equals the output of ``hostname`` on remote hosts.

Otherwise cephadm can't be sure that names returned by
``ceph * metadata`` match the hosts known to cephadm. This might result
in a :ref:`cephadm-stray-host` warning.

When configuring new hosts, there are two **valid** ways to set the
``hostname`` of a host:

1. Using the bare host name. In this case:

   - ``hostname`` returns the bare host name.
   - ``hostname -f`` returns the FQDN.

2. Using the fully qualified domain name as the host name. In this case:

   - ``hostname`` returns the FQDN.
   - ``hostname -s`` returns the bare host name.

Note that ``man hostname`` recommends that ``hostname`` return the bare
host name:

    The FQDN (Fully Qualified Domain Name) of the system is the
    name that the resolver(3) returns for the host name, such as,
    ursula.example.com. It is usually the hostname followed by the DNS
    domain name (the part after the first dot). You can check the FQDN
    using ``hostname --fqdn`` or the domain name using ``dnsdomainname``.

    .. code-block:: none

       You cannot change the FQDN with hostname or dnsdomainname.

       The recommended method of setting the FQDN is to make the hostname
       be an alias for the fully qualified name using /etc/hosts, DNS, or
       NIS. For example, if the hostname was "ursula", one might have
       a line in /etc/hosts which reads

           127.0.1.1    ursula.example.com ursula

In other words, ``man hostname`` recommends that ``hostname`` return the
bare host name. This in turn means that Ceph will return the bare host
names when executing ``ceph * metadata``. This in turn means cephadm also
requires the bare host name when adding a host to the cluster:
``ceph orch host add <bare-name>``.
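
Before adding a host, you can check how its name is configured by running
the following on that host::

  hostname
  hostname --fqdn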

..
  TODO: This chapter needs to provide a way for users to configure
  Grafana in the dashboard, as this is right now very hard to do.