.. _orchestrator-cli-host-management:

===============
Host Management
===============

To list hosts associated with the cluster:

.. prompt:: bash #

    ceph orch host ls [--format yaml]

.. _cephadm-adding-hosts:

Adding Hosts
============

Hosts must meet the :ref:`cephadm-host-requirements`. Hosts that are missing
any of the necessary requirements will fail to be added to the cluster.

To add each new host to the cluster, perform two steps:

#. Install the cluster's public SSH key in the new host's root user's
   ``authorized_keys`` file:

   .. prompt:: bash #

      ssh-copy-id -f -i /etc/ceph/ceph.pub root@*<new-host>*

   For example:

   .. prompt:: bash #

      ssh-copy-id -f -i /etc/ceph/ceph.pub root@host2
      ssh-copy-id -f -i /etc/ceph/ceph.pub root@host3

#. Tell Ceph that the new node is part of the cluster:

   .. prompt:: bash #

      ceph orch host add *newhost*

   For example:

   .. prompt:: bash #

      ceph orch host add host2
      ceph orch host add host3

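
An explicit address and initial labels can also be supplied on the ``ceph
orch host add`` command line (both options are described later in this
document); whether they can be combined in a single invocation may depend on
your Ceph release. A sketch, assuming ``host2`` is reachable at the
hypothetical address ``10.10.0.102`` and an arbitrary label name:

.. prompt:: bash #

    ceph orch host add host2 10.10.0.102 --labels=my_label1
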
.. _cephadm-removing-hosts:

Removing Hosts
==============

If the host that you want to remove is running OSDs, make sure to remove the
OSDs from that host first.

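
The orchestrator can remove OSDs for you; a sketch, assuming the host carries
the hypothetical OSDs with IDs ``0`` and ``1``:

.. prompt:: bash #

    ceph orch osd rm 0 1
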
To remove a host from a cluster, do the following:

For all Ceph service types, except for ``node-exporter`` and ``crash``, remove
the host from the placement specification file (for example, ``cluster.yml``).
For example, if you are removing the host named ``host2``, remove all
occurrences of ``- host2`` from all ``placement:`` sections.

Update:

.. code-block:: yaml

    service_type: rgw
    placement:
      hosts:
        - host1
        - host2

To:

.. code-block:: yaml

    service_type: rgw
    placement:
      hosts:
        - host1

Remove the host from cephadm's environment:

.. prompt:: bash #

    ceph orch host rm host2


If the host is running ``node-exporter`` and ``crash`` services, remove them by
running the following command on the host:

.. prompt:: bash #

    cephadm rm-daemon --fsid CLUSTER_ID --name SERVICE_NAME

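
For example, to remove the ``node-exporter`` daemon from ``host2`` (daemon
names generally follow the ``<service>.<id>`` pattern; replace the FSID
placeholder with your real cluster FSID):

.. prompt:: bash #

    cephadm rm-daemon --fsid <cluster-fsid> --name node-exporter.host2
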
.. _orchestrator-host-labels:

Host labels
===========

The orchestrator supports assigning labels to hosts. Labels are free form and
have no particular meaning by themselves; each host can have multiple labels.
Labels can be used to specify the placement of daemons. See
:ref:`orch-placement-by-labels`.

Labels can be added when adding a host with the ``--labels`` flag::

    ceph orch host add my_hostname --labels=my_label1
    ceph orch host add my_hostname --labels=my_label1,my_label2

To add a label to an existing host, run::

    ceph orch host label add my_hostname my_label

To remove a label, run::

    ceph orch host label rm my_hostname my_label

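
As a sketch of how labels drive placement (see :ref:`orch-placement-by-labels`
for details; the label name ``my_label`` here is arbitrary), a service
specification can target every host that carries a given label:

.. code-block:: yaml

    service_type: crash
    placement:
      label: my_label
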
Maintenance Mode
================

To place a host in maintenance mode, or to take it out again (entering
maintenance mode stops all Ceph daemons on the host)::

    ceph orch host maintenance enter <hostname> [--force]
    ceph orch host maintenance exit <hostname>

The ``--force`` flag on ``enter`` allows the user to bypass warnings (but not
alerts).

See also :ref:`cephadm-fqdn`.

Host Specification
==================

Many hosts can be added at once using ``ceph orch apply -i`` by submitting a
multi-document YAML file::

    ---
    service_type: host
    addr: node-00
    hostname: node-00
    labels:
    - example1
    - example2
    ---
    service_type: host
    addr: node-01
    hostname: node-01
    labels:
    - grafana
    ---
    service_type: host
    addr: node-02
    hostname: node-02

This can be combined with service specifications (below) to create a cluster
spec file that deploys a whole cluster in one command. See ``cephadm bootstrap
--apply-spec`` to do this during bootstrap. Note that the cluster SSH keys must
be copied to hosts before the hosts are added.
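
For example, if the specification above is saved to a file (the name
``hosts.yml`` is arbitrary), all of the hosts can be added in one step:

.. prompt:: bash #

    ceph orch apply -i hosts.yml
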

SSH Configuration
=================

Cephadm uses SSH to connect to remote hosts. SSH uses a key to authenticate
with those hosts in a secure way.


Default behavior
----------------

Cephadm stores an SSH key in the monitor that is used to connect to remote
hosts. When the cluster is bootstrapped, this SSH key is generated
automatically and no additional configuration is necessary.

A *new* SSH key can be generated with::

    ceph cephadm generate-key

The public portion of the SSH key can be retrieved with::

    ceph cephadm get-pub-key

The currently stored SSH key can be deleted with::

    ceph cephadm clear-key

You can make use of an existing key by directly importing it with::

    ceph config-key set mgr/cephadm/ssh_identity_key -i <key>
    ceph config-key set mgr/cephadm/ssh_identity_pub -i <pub>

You will then need to restart the mgr daemon to reload the configuration with::

    ceph mgr fail

Configuring a different SSH user
--------------------------------

Cephadm must be able to log in to all of the Ceph cluster nodes as a user that
has enough privileges to download container images, start containers, and
execute commands without prompting for a password. If you do not want to use
the "root" user (the default option in cephadm), you must provide cephadm with
the name of the user that will perform all of the cephadm operations. Use the
command::

    ceph cephadm set-user <user>

Prior to running this command, the cluster SSH key needs to be added to this
user's ``authorized_keys`` file, and non-root users must have passwordless
sudo access.

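
Passwordless sudo is typically granted through a sudoers entry; a minimal
sketch, assuming the hypothetical user name ``cephadm-user``:

.. code-block:: none

    # /etc/sudoers.d/cephadm-user (assumed file name)
    cephadm-user ALL=(ALL) NOPASSWD: ALL
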

Customizing the SSH configuration
---------------------------------

Cephadm generates an appropriate ``ssh_config`` file that is used for
connecting to remote hosts. This configuration looks something like this::

    Host *
      User root
      StrictHostKeyChecking no
      UserKnownHostsFile /dev/null

There are two ways to customize this configuration for your environment:

#. Import a customized configuration file that will be stored by the monitor
   with::

       ceph cephadm set-ssh-config -i <ssh_config_file>

   To remove a customized SSH config and revert to the default behavior::

       ceph cephadm clear-ssh-config

#. You can configure a file location for the SSH configuration file with::

       ceph config set mgr mgr/cephadm/ssh_config_file <path>

   We do *not recommend* this approach. The path name must be visible to *any*
   mgr daemon, and cephadm runs all daemons as containers. That means that the
   file either needs to be placed inside a customized container image for your
   deployment, or manually distributed to the mgr data directory
   (``/var/lib/ceph/<cluster-fsid>/mgr.<id>`` on the host, visible at
   ``/var/lib/ceph/mgr/ceph-<id>`` from inside the container).

.. _cephadm-fqdn:

Fully qualified domain names vs bare host names
===============================================

cephadm has very minimal requirements when it comes to resolving host names.
When cephadm initiates an SSH connection to a remote host, the host name can
be resolved in four different ways:

- a custom ssh config resolving the name to an IP
- via an externally maintained ``/etc/hosts``
- via explicitly providing an IP address to cephadm: ``ceph orch host add <hostname> <IP>``
- automatic name resolution via DNS

Ceph itself uses the command ``hostname`` to determine the name of the
current host.

.. note::

   cephadm requires that the name of the host given via ``ceph orch host add``
   equal the output of ``hostname`` on remote hosts.

   Otherwise, cephadm cannot be sure that the host names returned by
   ``ceph * metadata`` match the hosts known to cephadm. This might result
   in a :ref:`cephadm-stray-host` warning.

When configuring new hosts, there are two **valid** ways to set the
``hostname`` of a host:

1. Using the bare host name. In this case:

   - ``hostname`` returns the bare host name.
   - ``hostname -f`` returns the FQDN.

2. Using the fully qualified domain name as the host name. In this case:

   - ``hostname`` returns the FQDN.
   - ``hostname -s`` returns the bare host name.

Note that ``man hostname`` recommends that ``hostname`` return the bare host
name:

.. code-block:: none

    The FQDN (Fully Qualified Domain Name) of the system is the
    name that the resolver(3) returns for the host name, such as
    ursula.example.com. It is usually the hostname followed by the DNS
    domain name (the part after the first dot). You can check the FQDN
    using hostname --fqdn or the domain name using dnsdomainname.

    You cannot change the FQDN with hostname or dnsdomainname.

    The recommended method of setting the FQDN is to make the hostname
    be an alias for the fully qualified name using /etc/hosts, DNS, or
    NIS. For example, if the hostname was "ursula", one might have
    a line in /etc/hosts which reads

        127.0.1.1    ursula.example.com ursula

In other words, ``man hostname`` recommends that ``hostname`` return the bare
host name. This in turn means that Ceph returns the bare host names when
executing ``ceph * metadata``, and that cephadm likewise requires the bare
host name when adding a host to the cluster: ``ceph orch host add
<bare-name>``.

..
   TODO: This chapter needs to provide a way for users to configure
   Grafana in the dashboard, as this is right now very hard to do.