.. _orchestrator-cli-host-management:

===============
Host Management
===============

To list hosts associated with the cluster:

.. prompt:: bash #

    ceph orch host ls [--format yaml] [--host-pattern <name>] [--label <label>] [--host-status <status>]

The optional arguments ``--host-pattern``, ``--label``, and ``--host-status``
are used for filtering:

* ``--host-pattern`` is a regex that matches against hostnames and returns only
  matching hosts.
* ``--label`` returns only hosts with the given label.
* ``--host-status`` returns only hosts with the given status (currently
  "offline" or "maintenance").

Any combination of these filtering flags is valid: it is possible to filter
against name, label, and/or status simultaneously.
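
For example, to list only the offline hosts that carry a hypothetical ``rgw``
label::

    ceph orch host ls --label rgw --host-status offline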

.. _cephadm-adding-hosts:

Adding Hosts
============

Hosts must have these :ref:`cephadm-host-requirements` installed.
Hosts without all the necessary requirements will fail to be added to the cluster.

To add each new host to the cluster, perform two steps:

#. Install the cluster's public SSH key in the new host's root user's
   ``authorized_keys`` file:

   .. prompt:: bash #

      ssh-copy-id -f -i /etc/ceph/ceph.pub root@*<new-host>*

   For example:

   .. prompt:: bash #

      ssh-copy-id -f -i /etc/ceph/ceph.pub root@host2
      ssh-copy-id -f -i /etc/ceph/ceph.pub root@host3

#. Tell Ceph that the new node is part of the cluster:

   .. prompt:: bash #

      ceph orch host add *<newhost>* [*<ip>*] [*<label1> ...*]

   For example:

   .. prompt:: bash #

      ceph orch host add host2 10.10.0.102
      ceph orch host add host3 10.10.0.103

It is best to explicitly provide the host IP address. If an IP is
not provided, then the host name will be immediately resolved via
DNS and that IP will be used.

One or more labels can also be included to immediately label the
new host. For example, by default the ``_admin`` label will make
cephadm maintain a copy of the ``ceph.conf`` file and a
``client.admin`` keyring file in ``/etc/ceph``:

.. prompt:: bash #

    ceph orch host add host4 10.10.0.104 --labels _admin

.. _cephadm-removing-hosts:

Removing Hosts
==============

A host can safely be removed from the cluster after all daemons have been
removed from it.

To drain all daemons from a host, do the following:

.. prompt:: bash #

    ceph orch host drain *<host>*

The ``_no_schedule`` label will be applied to the host. See
:ref:`cephadm-special-host-labels`.

All OSDs on the host will be scheduled for removal. You can check the progress
of the OSD removal with the following command:

.. prompt:: bash #

    ceph orch osd rm status

See :ref:`cephadm-osd-removal` for more details about OSD removal.

You can check whether any daemons remain on the host with the following command:
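
.. prompt:: bash #

    ceph orch ps <host>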

Once all daemons have been removed, you can remove the host with the following
command:

.. prompt:: bash #

    ceph orch host rm <host>

Offline host removal
--------------------

If a host is offline and can not be recovered, it can still be removed from the
cluster with the following command:

.. prompt:: bash #

    ceph orch host rm <host> --offline --force

This can potentially cause data loss, as OSDs will be forcefully purged from the
cluster by calling ``osd purge-actual`` for each OSD. Any service specs that
still contain this host should be manually updated.
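
To find service specs that still reference the removed host, you can export the
current specs and inspect their placements; ``ceph orch ls --export`` prints all
service specifications as YAML::

    ceph orch ls --export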

.. _orchestrator-host-labels:

Host labels
===========

The orchestrator supports assigning labels to hosts. Labels
are free-form and have no particular meaning by themselves, and each host
can have multiple labels. They can be used to specify the placement
of daemons. See :ref:`orch-placement-by-labels`.

Labels can be added when adding a host with the ``--labels`` flag::

    ceph orch host add my_hostname --labels=my_label1
    ceph orch host add my_hostname --labels=my_label1,my_label2

To add a label to an existing host, run::

    ceph orch host label add my_hostname my_label

To remove a label, run::

    ceph orch host label rm my_hostname my_label
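
To see which hosts currently carry a given label, the same ``--label`` filter
from the listing command shown earlier can be reused::

    ceph orch host ls --label my_label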

.. _cephadm-special-host-labels:

Special host labels
===================

The following host labels have a special meaning to cephadm. All start with ``_``.

* ``_no_schedule``: *Do not schedule or deploy daemons on this host*.

  This label prevents cephadm from deploying daemons on this host. If it is added to
  an existing host that already contains Ceph daemons, it will cause cephadm to move
  those daemons elsewhere (except OSDs, which are not removed automatically).

* ``_no_autotune_memory``: *Do not autotune memory on this host*.

  This label will prevent daemon memory from being tuned even when the
  ``osd_memory_target_autotune`` or similar option is enabled for one or more
  daemons on that host.

* ``_admin``: *Distribute client.admin and ceph.conf to this host*.

  By default, an ``_admin`` label is applied to the first host in the cluster (where
  bootstrap was originally run), and the ``client.admin`` key is set to be distributed
  to that host via the ``ceph orch client-keyring ...`` function. Adding this label
  to additional hosts will normally cause cephadm to deploy config and keyring files
  in ``/etc/ceph``. Starting with versions 16.2.10 (Pacific) and 17.2.1 (Quincy), in
  addition to the default location ``/etc/ceph/``, cephadm also stores config and
  keyring files in the ``/var/lib/ceph/<fsid>/config`` directory.
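
  To see which client keyrings cephadm is currently distributing, you can list
  them with the ``client-keyring`` facility mentioned above::

      ceph orch client-keyring ls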

Maintenance Mode
================

Place a host in and out of maintenance mode (stops all Ceph daemons on host)::

    ceph orch host maintenance enter <hostname> [--force]
    ceph orch host maintenance exit <hostname>

The ``--force`` flag, when entering maintenance, allows the user to bypass
warnings (but not alerts).
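
For example, to list the hosts that are currently in maintenance mode, you can
reuse the ``--host-status`` filter from the listing command shown earlier::

    ceph orch host ls --host-status maintenance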

See also :ref:`cephadm-fqdn`.

Rescanning Host Devices
=======================

Some servers and external enclosures may not register device removal or insertion with the
kernel. In these scenarios, you'll need to perform a host rescan. A rescan is typically
non-disruptive, and can be performed with the following CLI command::

    ceph orch host rescan <hostname> [--with-summary]

The ``--with-summary`` flag provides a breakdown of the number of HBAs found and
scanned, together with any that failed::

    [ceph: root@rh9-ceph1 /]# ceph orch host rescan rh9-ceph1 --with-summary
    Ok. 2 adapters detected: 2 rescanned, 0 skipped, 0 failed (0.32s)

Creating many hosts at once
===========================

Many hosts can be added at once using
``ceph orch apply -i`` by submitting a multi-document YAML file like the
following (host names, addresses, and labels in this example are illustrative):
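
.. code-block:: yaml

    service_type: host
    hostname: node-00
    addr: 192.168.0.10
    labels:
    - example1
    - example2
    ---
    service_type: host
    hostname: node-01
    addr: 192.168.0.11
    labels:
    - grafana
    ---
    service_type: host
    hostname: node-02
    addr: 192.168.0.12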

This can be combined with service specifications (below) to create a cluster spec
file that deploys a whole cluster in one command. See ``cephadm bootstrap --apply-spec``
to do this during bootstrap. Note that the cluster SSH key must be copied to the
hosts before they are added.
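
A minimal sketch of using such a spec file during bootstrap (``--mon-ip`` and
``--apply-spec`` are ``cephadm bootstrap`` options; the file name is
illustrative)::

    cephadm bootstrap --mon-ip <mon-ip> --apply-spec cluster.yaml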

Setting the initial CRUSH location of a host
============================================

Hosts can contain a ``location`` identifier, which will instruct cephadm to
create a new CRUSH host located in the specified hierarchy, for example
(host name and address are illustrative):
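
.. code-block:: yaml

    service_type: host
    hostname: node-00
    addr: 192.168.0.10
    location:
      rack: rack1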

.. note::

  The ``location`` attribute will only affect the initial CRUSH location.
  Subsequent changes of the ``location`` property will be ignored. Also,
  removing a host will not remove any CRUSH buckets.

See also :ref:`crush_map_default_types`.

OS Tuning Profiles
==================

Cephadm can manage operating system tuning profiles that apply a set of sysctl
settings to a given set of hosts. First create a YAML spec file in the following
format (host names and values in this example are illustrative):

.. code-block:: yaml

    profile_name: 23-mon-host-profile
    placement:
      hosts:
        - mon-host-01
        - mon-host-02
    settings:
      fs.file-max: 1000000
      vm.swappiness: '13'

Then apply the tuning profile with::

    ceph orch tuned-profile apply -i <tuned-profile-file-name>

This profile will then be written to ``/etc/sysctl.d/`` on each host that
matches the given placement, and ``sysctl --system`` will be run on the host.

.. note::

  The exact filename that the profile is written to within ``/etc/sysctl.d/`` is
  ``<profile-name>-cephadm-tuned-profile.conf``, where ``<profile-name>`` is the
  ``profile_name`` setting specified in the provided YAML spec. Because sysctl
  settings are applied in lexicographical order by the filename they are
  specified in, you may want to set the ``profile_name`` in your spec so
  that it is applied before or after other conf files that may exist.

.. note::

  These settings are applied only at the host level, and are not specific
  to any particular daemon or container.

.. note::

  Applying tuned profiles is idempotent when the ``--no-overwrite`` option is
  passed. In this case, existing profiles with the same name are not overwritten.
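
For example, to re-apply a spec file without overwriting any existing profiles
of the same name::

    ceph orch tuned-profile apply -i <tuned-profile-file-name> --no-overwrite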

Viewing Profiles
----------------

To view all current profiles cephadm is managing::

    ceph orch tuned-profile ls

.. note::

  If you'd like to make modifications and re-apply a profile, passing ``--format yaml`` to the
  ``tuned-profile ls`` command will present the profiles in a format where they can be copied
  and re-applied.
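
For example::

    ceph orch tuned-profile ls --format yaml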

Removing Profiles
-----------------

If you no longer want one of the previously applied profiles, it can be removed with::

    ceph orch tuned-profile rm <profile-name>

When a profile is removed, cephadm will clean up the file it previously wrote to
``/etc/sysctl.d``.

Modifying Profiles
------------------

While you can modify a profile by simply re-applying a YAML spec with the same
profile name, you may also want to adjust a setting within a given profile, so
there are commands for modifying profile settings directly.

To add or modify a setting for an existing profile::

    ceph orch tuned-profile add-setting <profile-name> <setting-name> <value>

To remove a setting from an existing profile::

    ceph orch tuned-profile rm-setting <profile-name> <setting-name>
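
For example, to raise ``fs.file-max`` in the profile from the sample spec above
(the profile name and value here are illustrative)::

    ceph orch tuned-profile add-setting 23-mon-host-profile fs.file-max 2097152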

.. note::

  Modifying the placement will require re-applying a profile with the same name.
  Keep in mind that profiles are tracked by their name, so whenever a profile with
  the same name as an existing profile is applied, it will overwrite the old profile.

SSH Configuration
=================

Cephadm uses SSH to connect to remote hosts. SSH uses a key to authenticate
with those hosts in a secure way.

Default behavior
----------------

Cephadm stores an SSH key in the monitor that is used to
connect to remote hosts. When the cluster is bootstrapped, this SSH
key is generated automatically and no additional configuration
is necessary.

A *new* SSH key can be generated with::

    ceph cephadm generate-key

The public portion of the SSH key can be retrieved with::

    ceph cephadm get-pub-key

The currently stored SSH key can be deleted with::

    ceph cephadm clear-key

You can make use of an existing key by directly importing it with::

    ceph config-key set mgr/cephadm/ssh_identity_key -i <key>
    ceph config-key set mgr/cephadm/ssh_identity_pub -i <pub>

You will then need to restart the mgr daemon to reload the configuration with::
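
    ceph mgr fail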

.. _cephadm-ssh-user:

Configuring a different SSH user
--------------------------------

Cephadm must be able to log into all the Ceph cluster nodes as a user
that has enough privileges to download container images, start containers,
and execute commands without prompting for a password. If you do not want
to use the "root" user (the default option in cephadm), you must provide
cephadm the name of the user that is going to be used to perform all the
cephadm operations. Use the command::

    ceph cephadm set-user <user>

Prior to running this, the cluster SSH key needs to be added to this user's
``authorized_keys`` file and non-root users must have passwordless sudo access.
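
For example, a minimal sudoers entry for a hypothetical ``cephadm-user`` might
look like the following (a sketch; adapt the user name and policy to your
environment)::

    cephadm-user ALL=(root) NOPASSWD: ALL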

Customizing the SSH configuration
---------------------------------

Cephadm generates an appropriate ``ssh_config`` file that is
used for connecting to remote hosts. This configuration looks
something like this::

    Host *
      User root
      StrictHostKeyChecking no
      UserKnownHostsFile /dev/null

There are two ways to customize this configuration for your environment:

#. Import a customized configuration file that will be stored
   by the monitor with::

     ceph cephadm set-ssh-config -i <ssh_config_file>

   To remove a customized SSH config and revert back to the default behavior::

     ceph cephadm clear-ssh-config

#. You can configure a file location for the SSH configuration file with::

     ceph config set mgr mgr/cephadm/ssh_config_file <path>

   We do *not recommend* this approach. The path name must be
   visible to *any* mgr daemon, and cephadm runs all daemons as
   containers. That means that the file must either be placed
   inside a customized container image for your deployment, or
   manually distributed to the mgr data directory
   (``/var/lib/ceph/<cluster-fsid>/mgr.<id>`` on the host, visible at
   ``/var/lib/ceph/mgr/ceph-<id>`` from inside the container).

.. _cephadm-fqdn:

Fully qualified domain names vs bare host names
===============================================

.. note::

  cephadm demands that the name of the host given via ``ceph orch host add``
  equals the output of ``hostname`` on remote hosts.

  Otherwise cephadm can't be sure that names returned by
  ``ceph * metadata`` match the hosts known to cephadm. This might result
  in a :ref:`cephadm-stray-host` warning.

When configuring new hosts, there are two **valid** ways to set the
``hostname`` of a host:

1. Using the bare host name. In this case:

   - ``hostname`` returns the bare host name.
   - ``hostname -f`` returns the FQDN.

2. Using the fully qualified domain name as the host name. In this case:

   - ``hostname`` returns the FQDN.
   - ``hostname -s`` returns the bare host name.
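
For example, on systemd-based distributions a bare host name can be set with
``hostnamectl`` (the host name here is illustrative)::

    hostnamectl set-hostname host2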

Note that ``man hostname`` recommends that ``hostname`` return the bare
host name:

    The FQDN (Fully Qualified Domain Name) of the system is the
    name that the resolver(3) returns for the host name, such as
    ursula.example.com. It is usually the hostname followed by the DNS
    domain name (the part after the first dot). You can check the FQDN
    using ``hostname --fqdn`` or the domain name using ``dnsdomainname``.

    You cannot change the FQDN with hostname or dnsdomainname.

    The recommended method of setting the FQDN is to make the hostname
    be an alias for the fully qualified name using /etc/hosts, DNS, or
    NIS. For example, if the hostname was "ursula", one might have
    a line in /etc/hosts which reads

        127.0.1.1 ursula.example.com ursula

Which means, ``man hostname`` recommends that ``hostname`` return the bare
host name. This in turn means that Ceph will return the bare host names
when executing ``ceph * metadata``. This in turn means cephadm also
requires the bare host name when adding a host to the cluster:
``ceph orch host add <bare-name>``.

TODO: This chapter needs to provide a way for users to configure
Grafana in the dashboard, as this is right now very hard to do.