.. _orchestrator-cli-host-management:

===============
Host Management
===============

To list hosts associated with the cluster::

    ceph orch host ls [--format yaml] [--host-pattern <name>] [--label <label>] [--host-status <status>]

where the optional arguments ``host-pattern``, ``label``, and ``host-status`` are used for filtering:

* ``host-pattern`` is a regex that matches against hostnames; only matching hosts are returned.
* ``label`` returns only hosts with the given label.
* ``host-status`` returns only hosts with the given status (currently ``offline`` or ``maintenance``).

Any combination of these filtering flags is valid; you may filter against name, label, and/or status simultaneously.

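For example, to show only the hosts that carry a hypothetical ``mon`` label, with full detail in YAML form::

    ceph orch host ls --label mon --format yaml
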
.. _cephadm-adding-hosts:

Adding Hosts
============

Hosts must meet the :ref:`cephadm-host-requirements`. Hosts that lack any of
the necessary requirements will fail to be added to the cluster.

To add each new host to the cluster, perform two steps:

#. Install the cluster's public SSH key in the new host's root user's
   ``authorized_keys`` file::

      ssh-copy-id -f -i /etc/ceph/ceph.pub root@*<new-host>*

   For example::

      ssh-copy-id -f -i /etc/ceph/ceph.pub root@host2
      ssh-copy-id -f -i /etc/ceph/ceph.pub root@host3

#. Tell Ceph that the new node is part of the cluster::

      ceph orch host add *<newhost>* [*<ip>*] [*<label1> ...*]

   For example::

      ceph orch host add host2 10.10.0.102
      ceph orch host add host3 10.10.0.103

   It is best to explicitly provide the host IP address. If an IP is
   not provided, then the host name will be immediately resolved via
   DNS and that IP will be used.

   One or more labels can also be included to immediately label the
   new host. For example, by default the ``_admin`` label will make
   cephadm maintain a copy of the ``ceph.conf`` file and a
   ``client.admin`` keyring file in ``/etc/ceph``::

      ceph orch host add host4 10.10.0.104 --labels _admin

.. _cephadm-removing-hosts:

Removing Hosts
==============

A host can safely be removed from the cluster once all daemons have been removed from it.

To drain all daemons from a host, run a command of the following form::

    ceph orch host drain *<host>*

The ``_no_schedule`` label will be applied to the host. See :ref:`cephadm-special-host-labels`.

All OSDs on the host will be scheduled for removal. You can check the progress of the OSD removal with the following command::

    ceph orch osd rm status

See :ref:`cephadm-osd-removal` for more details about OSD removal.

You can check whether any daemons remain on the host with the following command::

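    ceph orch ps <host>
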
Once all daemons have been removed, you can remove the host with the following command::

    ceph orch host rm <host>

Offline host removal
--------------------

If a host is offline and cannot be recovered, it can still be removed from the cluster with the following command::

    ceph orch host rm <host> --offline --force

This can potentially cause data loss, as OSDs will be forcefully purged from the
cluster by calling ``osd purge-actual`` for each OSD. Any service specs that
still contain this host should be manually updated.

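For example, assuming a hypothetical spec file ``my-service.yaml`` that still lists the removed host, edit the file to drop the host and re-apply it::

    ceph orch apply -i my-service.yaml
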
.. _orchestrator-host-labels:

Host labels
===========

The orchestrator supports assigning labels to hosts. Labels are free-form and
have no particular meaning by themselves; each host can have multiple labels.
Labels can be used to specify the placement of daemons. See
:ref:`orch-placement-by-labels`.

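For example, assuming a hypothetical ``monitoring`` label, a service can be placed on all hosts that carry that label::

    ceph orch apply prometheus --placement="label:monitoring"
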
Labels can be added when adding a host with the ``--labels`` flag::

    ceph orch host add my_hostname --labels=my_label1
    ceph orch host add my_hostname --labels=my_label1,my_label2

To add a label to an existing host, run::

    ceph orch host label add my_hostname my_label

To remove a label, run::

    ceph orch host label rm my_hostname my_label

.. _cephadm-special-host-labels:

Special host labels
===================

The following host labels have a special meaning to cephadm. All start with ``_``.

* ``_no_schedule``: *Do not schedule or deploy daemons on this host*.

  This label prevents cephadm from deploying daemons on this host. If it is added to
  an existing host that already contains Ceph daemons, it will cause cephadm to move
  those daemons elsewhere (except OSDs, which are not removed automatically).

* ``_no_autotune_memory``: *Do not autotune memory on this host*.

  This label prevents daemon memory from being tuned even when the
  ``osd_memory_target_autotune`` or similar option is enabled for one or more
  daemons on that host.

* ``_admin``: *Distribute client.admin and ceph.conf to this host*.

  By default, an ``_admin`` label is applied to the first host in the cluster (where
  bootstrap was originally run), and the ``client.admin`` key is set to be distributed
  to that host via the ``ceph orch client-keyring ...`` function. Adding this label
  to additional hosts will normally cause cephadm to deploy config and keyring files
  in ``/etc/ceph``, as in the example after this list.

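For example, to give an additional host (the name ``host5`` is illustrative) the ``_admin`` label::

    ceph orch host label add host5 _admin
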
Maintenance Mode
================

Place a host in and out of maintenance mode (stops all Ceph daemons on the host)::

    ceph orch host maintenance enter <hostname> [--force]
    ceph orch host maintenance exit <hostname>

The ``--force`` flag on ``maintenance enter`` allows the user to bypass warnings (but not alerts).

See also :ref:`cephadm-fqdn`.

Creating many hosts at once
===========================

Many hosts can be added at once using ``ceph orch apply -i`` by submitting a
multi-document YAML file. For example (the host names, addresses, and labels
below are illustrative)::

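    service_type: host
    hostname: node-00
    addr: 192.168.0.10
    labels:
    - example1
    - example2
    ---
    service_type: host
    hostname: node-01
    addr: 192.168.0.11
    labels:
    - grafana
    ---
    service_type: host
    hostname: node-02
    addr: 192.168.0.12
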
This can be combined with service specifications (below) to create a cluster
spec file that deploys a whole cluster in one command. See
``cephadm bootstrap --apply-spec`` to do this during bootstrap. Note that the
cluster SSH key must be copied to hosts before they are added.

Setting the initial CRUSH location of host
==========================================

Hosts can contain a ``location`` identifier, which instructs cephadm to create
a new CRUSH host located in the specified hierarchy. For example (the host
name, address, and rack name are illustrative)::

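    service_type: host
    hostname: node-00
    addr: 192.168.0.10
    location:
      rack: rack1
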
The ``location`` attribute affects only the initial CRUSH location. Subsequent
changes to the ``location`` property will be ignored. Also, removing a host
will not remove any associated CRUSH bucket.

See also :ref:`crush_map_default_types`.

SSH Configuration
=================

Cephadm uses SSH to connect to remote hosts. SSH uses a key to authenticate
with those hosts in a secure way.

Default behavior
----------------

Cephadm stores an SSH key in the monitor that is used to
connect to remote hosts. When the cluster is bootstrapped, this SSH
key is generated automatically and no additional configuration
is necessary.

A *new* SSH key can be generated with::

    ceph cephadm generate-key

The public portion of the SSH key can be retrieved with::

    ceph cephadm get-pub-key

The currently stored SSH key can be deleted with::

    ceph cephadm clear-key

You can make use of an existing key by directly importing it with::

    ceph config-key set mgr/cephadm/ssh_identity_key -i <key>
    ceph config-key set mgr/cephadm/ssh_identity_pub -i <pub>

You will then need to restart the mgr daemon to reload the configuration with::

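    ceph mgr fail
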
.. _cephadm-ssh-user:

Configuring a different SSH user
--------------------------------

Cephadm must be able to log into all the Ceph cluster nodes as a user that has
enough privileges to download container images, start containers, and execute
commands without prompting for a password. If you do not want to use the
"root" user (the default option in cephadm), you must provide cephadm with the
name of the user that is going to be used to perform all cephadm operations.
Use the command::

    ceph cephadm set-user <user>

Prior to running this command, the cluster SSH key needs to be added to this
user's ``authorized_keys`` file, and non-root users must have passwordless
sudo access.

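As a minimal sketch (the user name ``cephadm`` and the drop-in path are illustrative; adapt them to your distribution's conventions), passwordless sudo can be granted with an entry such as the following in ``/etc/sudoers.d/cephadm``::

    cephadm ALL=(ALL) NOPASSWD: ALL
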
Customizing the SSH configuration
---------------------------------

Cephadm generates an appropriate ``ssh_config`` file that is
used for connecting to remote hosts. This configuration looks
something like this::

    Host *
    User root
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null

There are two ways to customize this configuration for your environment:

#. Import a customized configuration file that will be stored
   by the monitor with::

      ceph cephadm set-ssh-config -i <ssh_config_file>

   To remove a customized SSH config and revert back to the default behavior::

      ceph cephadm clear-ssh-config

#. You can configure a file location for the SSH configuration file with::

      ceph config set mgr mgr/cephadm/ssh_config_file <path>

   We do *not recommend* this approach. The path name must be
   visible to *any* mgr daemon, and cephadm runs all daemons as
   containers. That means that the file must either be placed
   inside a customized container image for your deployment, or
   manually distributed to the mgr data directory
   (``/var/lib/ceph/<cluster-fsid>/mgr.<id>`` on the host, visible at
   ``/var/lib/ceph/mgr/ceph-<id>`` from inside the container).

.. _cephadm-fqdn:

Fully qualified domain names vs bare host names
===============================================

cephadm requires that the name of the host given via ``ceph orch host add``
equal the output of ``hostname`` on the remote host.

Otherwise, cephadm cannot be sure that names returned by
``ceph * metadata`` match the hosts known to cephadm. This might result
in a :ref:`cephadm-stray-host` warning.

When configuring new hosts, there are two **valid** ways to set the
``hostname`` of a host:

1. Using the bare host name. In this case:

   - ``hostname`` returns the bare host name.
   - ``hostname -f`` returns the FQDN.

2. Using the fully qualified domain name as the host name. In this case:

   - ``hostname`` returns the FQDN.
   - ``hostname -s`` returns the bare host name.

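For example, on a systemd-based distribution the bare host name can be set as follows (the name ``host1`` is illustrative)::

    hostnamectl set-hostname host1
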
Note that ``man hostname`` recommends that ``hostname`` return the bare
host name:

    The FQDN (Fully Qualified Domain Name) of the system is the
    name that the resolver(3) returns for the host name, such as
    ursula.example.com. It is usually the hostname followed by the DNS
    domain name (the part after the first dot). You can check the FQDN
    using ``hostname --fqdn`` or the domain name using ``dnsdomainname``.

    You cannot change the FQDN with hostname or dnsdomainname.

    The recommended method of setting the FQDN is to make the hostname
    be an alias for the fully qualified name using /etc/hosts, DNS, or
    NIS. For example, if the hostname was "ursula", one might have
    a line in /etc/hosts which reads

        127.0.1.1 ursula.example.com ursula

In other words, ``man hostname`` expects ``hostname`` to return the bare
host name. This in turn means that Ceph will return the bare host names
when executing ``ceph * metadata``, and that cephadm likewise requires
the bare host name when adding a host to the cluster:
``ceph orch host add <bare-name>``.

..
   TODO: This chapter needs to provide a way for users to configure
   Grafana in the dashboard, as this is right now very hard to do.