.. _orchestrator-cli-host-management:

To list hosts associated with the cluster::

    ceph orch host ls [--format yaml]

.. _cephadm-adding-hosts:
Hosts must have the :ref:`cephadm-host-requirements` installed.
Hosts without all of the necessary requirements will fail to be added to the cluster.
To add each new host to the cluster, perform two steps:
#. Install the cluster's public SSH key in the new host's root user's
   ``authorized_keys`` file::

     ssh-copy-id -f -i /etc/ceph/ceph.pub root@*<new-host>*

   For example::

     ssh-copy-id -f -i /etc/ceph/ceph.pub root@host2
     ssh-copy-id -f -i /etc/ceph/ceph.pub root@host3
#. Tell Ceph that the new node is part of the cluster::

     ceph orch host add *<newhost>* [*<ip>*] [*<label1> ...*]

   For example::

     ceph orch host add host2 10.10.0.102
     ceph orch host add host3 10.10.0.103
It is best to explicitly provide the host IP address. If an IP is
not provided, then the host name will be immediately resolved via
DNS and that IP will be used.
One or more labels can also be included to immediately label the
new host. For example, by default the ``_admin`` label will make
cephadm maintain a copy of the ``ceph.conf`` file and a
``client.admin`` keyring file in ``/etc/ceph``::

    ceph orch host add host4 10.10.0.104 --labels _admin
.. _cephadm-removing-hosts:
If the node that you want to remove is running OSDs, make sure you remove the OSDs from the node before removing it.
To remove a host from a cluster, do the following:
For all Ceph service types, except for ``node-exporter`` and ``crash``, remove
the host from the placement specification file (for example, ``cluster.yml``).
For example, if you are removing the host named host2, remove all occurrences of
``- host2`` from all ``placement:`` sections.
Remove the host from cephadm's environment::

    ceph orch host rm host2
If the host is running ``node-exporter`` and ``crash`` services, remove them by
running the following command on the host::

    cephadm rm-daemon --fsid CLUSTER_ID --name SERVICE_NAME
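For example, to remove a ``node-exporter`` daemon from host2 (daemon names take
the form ``<type>.<id>``; the host name is illustrative, and you should
substitute your cluster's actual fsid)::

    cephadm rm-daemon --fsid <cluster-fsid> --name node-exporter.host2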
.. _orchestrator-host-labels:
The orchestrator supports assigning labels to hosts. Labels
are free form and have no particular meaning by themselves; each host
can have multiple labels. They can be used to specify the placement
of daemons. See :ref:`orch-placement-by-labels`.
Labels can be added when adding a host with the ``--labels`` flag::

    ceph orch host add my_hostname --labels=my_label1
    ceph orch host add my_hostname --labels=my_label1,my_label2
To add a label to an existing host, run::

    ceph orch host label add my_hostname my_label
To remove a label, run::

    ceph orch host label rm my_hostname my_label
.. _cephadm-special-host-labels:
The following host labels have a special meaning to cephadm. All start with ``_``.
* ``_no_schedule``: *Do not schedule or deploy daemons on this host*.

  This label prevents cephadm from deploying daemons on this host. If it is added to
  an existing host that already contains Ceph daemons, it will cause cephadm to move
  those daemons elsewhere (except OSDs, which are not removed automatically).
* ``_no_autotune_memory``: *Do not autotune memory on this host*.

  This label will prevent daemon memory from being tuned even when the
  ``osd_memory_target_autotune`` or similar option is enabled for one or more
  daemons on that host.
* ``_admin``: *Distribute client.admin and ceph.conf to this host*.

  By default, an ``_admin`` label is applied to the first host in the cluster (where
  bootstrap was originally run), and the ``client.admin`` key is set to be distributed
  to that host via the ``ceph orch client-keyring ...`` function. Adding this label
  to additional hosts will normally cause cephadm to deploy config and keyring files
  in ``/etc/ceph``.
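As with any other label, the special labels above are applied with
``ceph orch host label add``. For example, to stop cephadm from scheduling
daemons on host2 (the host name is illustrative)::

    ceph orch host label add host2 _no_schedule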
Place a host in and out of maintenance mode (stops all Ceph daemons on the host)::

    ceph orch host maintenance enter <hostname> [--force]
    ceph orch host maintenance exit <hostname>

The ``--force`` flag, when entering maintenance, allows the user to bypass
warnings (but not alerts).
See also :ref:`cephadm-fqdn`.
Many hosts can be added at once using
``ceph orch apply -i`` by submitting a multi-document YAML file::
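    # A sketch of a multi-document host spec; the host names, addresses,
    # and labels below are illustrative.
    service_type: host
    hostname: node-00
    addr: 192.168.0.10
    labels:
    - example1
    - example2
    ---
    service_type: host
    hostname: node-01
    addr: 192.168.0.11
    ---
    service_type: host
    hostname: node-02
    addr: 192.168.0.12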
This can be combined with service specifications (below) to create a cluster
spec file to deploy a whole cluster in one command. See also
``cephadm bootstrap --apply-spec`` to do this during bootstrap. The cluster
SSH key must be copied to hosts prior to adding them.
Cephadm uses SSH to connect to remote hosts. SSH uses a key to authenticate
with those hosts in a secure way.
Cephadm stores an SSH key in the monitor that is used to
connect to remote hosts. When the cluster is bootstrapped, this SSH
key is generated automatically and no additional configuration
is necessary.
A *new* SSH key can be generated with::

    ceph cephadm generate-key
The public portion of the SSH key can be retrieved with::

    ceph cephadm get-pub-key
The currently stored SSH key can be deleted with::

    ceph cephadm clear-key
You can make use of an existing key by directly importing it with::

    ceph config-key set mgr/cephadm/ssh_identity_key -i <key>
    ceph config-key set mgr/cephadm/ssh_identity_pub -i <pub>
You will then need to restart the mgr daemon to reload the configuration with::
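    # one way to restart the mgr; assumes a standby mgr is available to take over
    ceph mgr fail

Failing the active mgr restarts the manager; the newly active mgr then uses the
imported key.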
Configuring a different SSH user
--------------------------------
Cephadm must be able to log into all the Ceph cluster nodes as a user
that has enough privileges to download container images, start containers,
and execute commands without prompting for a password. If you do not want
to use the "root" user (the default option in cephadm), you must provide
cephadm with the name of the user that is going to be used to perform all
cephadm operations. Use the command::

    ceph cephadm set-user <user>
Prior to running this, the cluster SSH key needs to be added to this user's
``authorized_keys`` file, and non-root users must have passwordless sudo access.
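For example, assuming a non-root user named ``cephadm-user`` (the name is
illustrative) exists on every host with passwordless sudo configured::

    ssh-copy-id -f -i /etc/ceph/ceph.pub cephadm-user@host2
    ceph cephadm set-user cephadm-user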
Customizing the SSH configuration
---------------------------------
Cephadm generates an appropriate ``ssh_config`` file that is
used for connecting to remote hosts. This configuration looks
something like this::

    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
There are two ways to customize this configuration for your environment:
#. Import a customized configuration file that will be stored
   by the monitor with::

     ceph cephadm set-ssh-config -i <ssh_config_file>

   To remove a customized SSH config and revert back to the default behavior::

     ceph cephadm clear-ssh-config
#. You can configure a file location for the SSH configuration file with::

     ceph config set mgr mgr/cephadm/ssh_config_file <path>

   We do *not recommend* this approach. The path name must be
   visible to *any* mgr daemon, and cephadm runs all daemons as
   containers. That means that the file either needs to be placed
   inside a customized container image for your deployment, or
   manually distributed to the mgr data directory
   (``/var/lib/ceph/<cluster-fsid>/mgr.<id>`` on the host, visible at
   ``/var/lib/ceph/mgr/ceph-<id>`` from inside the container).
Fully qualified domain names vs bare host names
===============================================
cephadm requires that the name of the host given via ``ceph orch host add``
equal the output of ``hostname`` on the remote host.

Otherwise, cephadm cannot be sure that names returned by
``ceph * metadata`` match the hosts known to cephadm. This might result
in a :ref:`cephadm-stray-host` warning.
When configuring new hosts, there are two **valid** ways to set the
``hostname`` of a host:

1. Using the bare host name. In this case:

   - ``hostname`` returns the bare host name.
   - ``hostname -f`` returns the FQDN.

2. Using the fully qualified domain name as the host name. In this case:

   - ``hostname`` returns the FQDN.
   - ``hostname -s`` returns the bare host name.
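For example, on a host configured the first way (the host and domain names are
illustrative)::

    $ hostname
    host2
    $ hostname -f
    host2.example.com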
Note that ``man hostname`` recommends that ``hostname`` return the bare host name:
    The FQDN (Fully Qualified Domain Name) of the system is the
    name that the resolver(3) returns for the host name, such as
    ursula.example.com. It is usually the hostname followed by the DNS
    domain name (the part after the first dot). You can check the FQDN
    using ``hostname --fqdn`` or the domain name using ``dnsdomainname``.

    You cannot change the FQDN with hostname or dnsdomainname.

    The recommended method of setting the FQDN is to make the hostname
    be an alias for the fully qualified name using /etc/hosts, DNS, or
    NIS. For example, if the hostname was "ursula", one might have
    a line in /etc/hosts which reads

        127.0.1.1 ursula.example.com ursula
This means that ``man hostname`` recommends ``hostname`` to return the bare
host name. This in turn means that Ceph will return the bare host names
when executing ``ceph * metadata``. This in turn means cephadm also
requires the bare host name when adding a host to the cluster:
``ceph orch host add <bare-name>``.
TODO: This chapter needs to provide a way for users to configure
Grafana in the dashboard, as this is right now very hard to do.