.. _orchestrator-cli-host-management:

===============
Host Management
===============

To list hosts associated with the cluster:

.. prompt:: bash #

    ceph orch host ls [--format yaml] [--host-pattern <name>] [--label <label>] [--host-status <status>]

where the optional arguments "host-pattern", "label", and "host-status" are used for filtering:

* "host-pattern" is a regex that is matched against hostnames; only matching hosts are returned.
* "label" returns only hosts with the given label.
* "host-status" returns only hosts with the given status (currently "offline" or "maintenance").

Any combination of these filtering flags is valid. You may filter against name, label, and/or status simultaneously.

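For example, to list only hosts carrying a given label, or only hosts in a given status (the ``mon`` label here is purely illustrative):

.. prompt:: bash #

    ceph orch host ls --label mon
    ceph orch host ls --host-status offline
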
.. _cephadm-adding-hosts:

Adding Hosts
============

Hosts must have these :ref:`cephadm-host-requirements` installed.
Hosts without all the necessary requirements will fail to be added to the cluster.

To add each new host to the cluster, perform two steps:

#. Install the cluster's public SSH key in the new host's root user's ``authorized_keys`` file:

   .. prompt:: bash #

      ssh-copy-id -f -i /etc/ceph/ceph.pub root@*<new-host>*

   For example:

   .. prompt:: bash #

      ssh-copy-id -f -i /etc/ceph/ceph.pub root@host2
      ssh-copy-id -f -i /etc/ceph/ceph.pub root@host3

#. Tell Ceph that the new node is part of the cluster:

   .. prompt:: bash #

      ceph orch host add *<newhost>* [*<ip>*] [*<label1> ...*]

   For example:

   .. prompt:: bash #

      ceph orch host add host2 10.10.0.102
      ceph orch host add host3 10.10.0.103

   It is best to explicitly provide the host IP address. If an IP is
   not provided, then the host name will be immediately resolved via
   DNS and that IP will be used.

   One or more labels can also be included to immediately label the
   new host. For example, by default the ``_admin`` label will make
   cephadm maintain a copy of the ``ceph.conf`` file and a
   ``client.admin`` keyring file in ``/etc/ceph``:

   .. prompt:: bash #

      ceph orch host add host4 10.10.0.104 --labels _admin

.. _cephadm-removing-hosts:

Removing Hosts
==============

A host can safely be removed from the cluster once all daemons are removed from it.

To drain all daemons from a host, run the following command:

.. prompt:: bash #

    ceph orch host drain *<host>*

The ``_no_schedule`` label will be applied to the host. See :ref:`cephadm-special-host-labels`.

All OSDs on the host will be scheduled for removal. You can check the progress of the OSD removal with the following command:

.. prompt:: bash #

    ceph orch osd rm status

See :ref:`cephadm-osd-removal` for more details about OSD removal.

You can check whether any daemons are still present on the host with the following command:

.. prompt:: bash #

    ceph orch ps <host>

Once all daemons have been removed, you can remove the host with the following command:

.. prompt:: bash #

    ceph orch host rm <host>

Offline host removal
--------------------

If a host is offline and cannot be recovered, it can still be removed from the cluster with the following command:

.. prompt:: bash #

    ceph orch host rm <host> --offline --force

This can potentially cause data loss, as OSDs will be forcefully purged from the cluster by calling ``osd purge-actual`` for each OSD.
Service specs that still contain this host should be manually updated.

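One way to update such specs (a minimal sketch; the file name is illustrative) is to export the current service specifications, edit the exported file so that the removed host no longer appears in any ``placement`` section, and re-apply it:

.. prompt:: bash #

    ceph orch ls --export > specs.yaml
    ceph orch apply -i specs.yaml
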
.. _orchestrator-host-labels:

Host labels
===========

The orchestrator supports assigning labels to hosts. Labels
are free form and have no particular meaning by themselves, and each host
can have multiple labels. They can be used to specify the placement
of daemons. See :ref:`orch-placement-by-labels`.

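For example, a label can later be referenced from a service specification's ``placement`` section (a minimal sketch; the ``mon`` label is an arbitrary example):

.. code-block:: yaml

    service_type: mon
    placement:
      label: mon
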
Labels can be added when adding a host with the ``--labels`` flag::

    ceph orch host add my_hostname --labels=my_label1
    ceph orch host add my_hostname --labels=my_label1,my_label2

To add a label to an existing host, run::

    ceph orch host label add my_hostname my_label

To remove a label, run::

    ceph orch host label rm my_hostname my_label

.. _cephadm-special-host-labels:

Special host labels
-------------------

The following host labels have a special meaning to cephadm. All start with ``_``.

* ``_no_schedule``: *Do not schedule or deploy daemons on this host*.

  This label prevents cephadm from deploying daemons on this host. If it is added to
  an existing host that already contains Ceph daemons, it will cause cephadm to move
  those daemons elsewhere (except OSDs, which are not removed automatically).

* ``_no_autotune_memory``: *Do not autotune memory on this host*.

  This label will prevent daemon memory from being tuned even when the
  ``osd_memory_target_autotune`` or similar option is enabled for one or more daemons
  on that host.

* ``_admin``: *Distribute client.admin and ceph.conf to this host*.

  By default, an ``_admin`` label is applied to the first host in the cluster (where
  bootstrap was originally run), and the ``client.admin`` key is set to be distributed
  to that host via the ``ceph orch client-keyring ...`` function. Adding this label
  to additional hosts will normally cause cephadm to deploy config and keyring files
  in ``/etc/ceph``.

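For example, to give an additional, already-added host admin access (the hostname here is illustrative), apply the label to it::

    ceph orch host label add host4 _admin
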
Maintenance Mode
================

Place a host in and out of maintenance mode (stops all Ceph daemons on the host)::

    ceph orch host maintenance enter <hostname> [--force]
    ceph orch host maintenance exit <hostname>

The ``--force`` flag allows the user to bypass warnings (but not alerts) when entering maintenance.

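For example, to stop the daemons on a host before a reboot and to resume them afterwards (the hostname is illustrative)::

    ceph orch host maintenance enter host2 --force
    ceph orch host maintenance exit host2
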
See also :ref:`cephadm-fqdn`.

Creating many hosts at once
===========================

Many hosts can be added at once using
``ceph orch apply -i`` by submitting a multi-document YAML file:

.. code-block:: yaml

    service_type: host
    hostname: node-00
    addr: 192.168.0.10
    labels:
    - example1
    - example2
    ---
    service_type: host
    hostname: node-01
    addr: 192.168.0.11
    labels:
    - grafana
    ---
    service_type: host
    hostname: node-02
    addr: 192.168.0.12

This can be combined with service specifications (below) to create a cluster spec
file to deploy a whole cluster in one command. See ``cephadm bootstrap --apply-spec``
for a way to do this during bootstrap. The cluster SSH key must be copied to hosts before they are added.

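Assuming the YAML above were saved to a file (the file name is illustrative), it could be submitted like so:

.. prompt:: bash #

    ceph orch apply -i hosts.yaml
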
Setting the initial CRUSH location of a host
============================================

Hosts can contain a ``location`` identifier which will instruct cephadm to
create a new CRUSH host located in the specified hierarchy.

.. code-block:: yaml

    service_type: host
    hostname: node-00
    addr: 192.168.0.10
    location:
      rack: rack1

.. note::

   The ``location`` attribute will only affect the initial CRUSH location. Subsequent
   changes of the ``location`` property will be ignored. Also, removing a host will not remove
   any CRUSH buckets.

See also :ref:`crush_map_default_types`.

SSH Configuration
=================

Cephadm uses SSH to connect to remote hosts. SSH uses a key to authenticate
with those hosts in a secure way.


Default behavior
----------------

Cephadm stores an SSH key in the monitor that is used to
connect to remote hosts. When the cluster is bootstrapped, this SSH
key is generated automatically and no additional configuration
is necessary.

A *new* SSH key can be generated with::

    ceph cephadm generate-key

The public portion of the SSH key can be retrieved with::

    ceph cephadm get-pub-key

The currently stored SSH key can be deleted with::

    ceph cephadm clear-key

You can make use of an existing key by directly importing it with::

    ceph config-key set mgr/cephadm/ssh_identity_key -i <key>
    ceph config-key set mgr/cephadm/ssh_identity_pub -i <pub>

You will then need to restart the mgr daemon to reload the configuration with::

    ceph mgr fail

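As a sketch of that import workflow (the key path and type are illustrative, not requirements), a key pair created with ``ssh-keygen`` could be imported and activated like this::

    ssh-keygen -t rsa -f /root/cephadm_key -N ""
    ceph config-key set mgr/cephadm/ssh_identity_key -i /root/cephadm_key
    ceph config-key set mgr/cephadm/ssh_identity_pub -i /root/cephadm_key.pub
    ceph mgr fail

The public portion of such a key still needs to be installed in the ``authorized_keys`` file of the SSH user on each host, as described in :ref:`cephadm-adding-hosts`.
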
.. _cephadm-ssh-user:

Configuring a different SSH user
----------------------------------

Cephadm must be able to log into all the Ceph cluster nodes as a user
that has enough privileges to download container images, start containers
and execute commands without prompting for a password. If you do not want
to use the "root" user (the default option in cephadm), you must provide
cephadm the name of the user that is going to be used to perform all the
cephadm operations. Use the command::

    ceph cephadm set-user <user>

Prior to running this, the cluster SSH key needs to be added to this user's
``authorized_keys`` file, and non-root users must have passwordless sudo access.


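For example (a sketch; the ``cephadm`` user name and the hostname are illustrative), you might install the cluster key for that user on each host and then tell cephadm to use it::

    ssh-copy-id -f -i /etc/ceph/ceph.pub cephadm@host2
    ceph cephadm set-user cephadm

The non-root user also needs an entry such as ``cephadm ALL=(ALL) NOPASSWD: ALL`` in a file under ``/etc/sudoers.d/`` on each host.

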
Customizing the SSH configuration
---------------------------------

Cephadm generates an appropriate ``ssh_config`` file that is
used for connecting to remote hosts. This configuration looks
something like this::

    Host *
    User root
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null

There are two ways to customize this configuration for your environment:

#. Import a customized configuration file that will be stored
   by the monitor with::

     ceph cephadm set-ssh-config -i <ssh_config_file>

   To remove a customized SSH config and revert back to the default behavior::

     ceph cephadm clear-ssh-config

#. You can configure a file location for the SSH configuration file with::

     ceph config set mgr mgr/cephadm/ssh_config_file <path>

   We do *not recommend* this approach. The path name must be
   visible to *any* mgr daemon, and cephadm runs all daemons as
   containers. That means that the file must either be placed
   inside a customized container image for your deployment, or
   manually distributed to the mgr data directory
   (``/var/lib/ceph/<cluster-fsid>/mgr.<id>`` on the host, visible at
   ``/var/lib/ceph/mgr/ceph-<id>`` from inside the container).

.. _cephadm-fqdn:

Fully qualified domain names vs bare host names
===============================================

.. note::

   cephadm demands that the name of the host given via ``ceph orch host add``
   equals the output of ``hostname`` on remote hosts.

   Otherwise cephadm can't be sure that names returned by
   ``ceph * metadata`` match the hosts known to cephadm. This might result
   in a :ref:`cephadm-stray-host` warning.

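For example, before adding a host you can confirm that the name you are about to register matches what the host itself reports (the hostname and IP are illustrative):

.. prompt:: bash #

    ssh root@host2 hostname
    ceph orch host add host2 10.10.0.102
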
When configuring new hosts, there are two **valid** ways to set the
``hostname`` of a host:

1. Using the bare host name. In this case:

   - ``hostname`` returns the bare host name.
   - ``hostname -f`` returns the FQDN.

2. Using the fully qualified domain name as the host name. In this case:

   - ``hostname`` returns the FQDN.
   - ``hostname -s`` returns the bare host name.

Note that ``man hostname`` recommends that ``hostname`` return the bare
host name:

    The FQDN (Fully Qualified Domain Name) of the system is the
    name that the resolver(3) returns for the host name, such as,
    ursula.example.com. It is usually the hostname followed by the DNS
    domain name (the part after the first dot). You can check the FQDN
    using ``hostname --fqdn`` or the domain name using ``dnsdomainname``.

    .. code-block:: none

       You cannot change the FQDN with hostname or dnsdomainname.

       The recommended method of setting the FQDN is to make the hostname
       be an alias for the fully qualified name using /etc/hosts, DNS, or
       NIS. For example, if the hostname was "ursula", one might have
       a line in /etc/hosts which reads

          127.0.1.1    ursula.example.com ursula

In other words, ``man hostname`` recommends that ``hostname`` return the bare
host name. This in turn means that Ceph will return the bare host names
when executing ``ceph * metadata``. This in turn means cephadm also
requires the bare host name when adding a host to the cluster:
``ceph orch host add <bare-name>``.

..
  TODO: This chapter needs to provide a way for users to configure
  Grafana in the dashboard, as this is right now very hard to do.