1 ==================
2 Service Management
3 ==================
4
5 Service Status
6 ==============
7
8 A service is a group of daemons configured together. To see the status of one
9 of the services running in the Ceph cluster, do the following:
10
11 #. Use the command line to print a list of services.
12 #. Locate the service whose status you want to check.
13 #. Print the status of the service.
14
The following command prints a list of services known to the orchestrator. To
limit the output to a particular service, use the optional ``--service_name``
parameter. To limit the output to services of a particular type, use the
optional ``--service_type`` parameter (mon, osd, mgr, mds, rgw):
19
20 .. prompt:: bash #
21
22 ceph orch ls [--service_type type] [--service_name name] [--export] [--format f] [--refresh]
23
Discover the status of a particular service:
25
26 .. prompt:: bash #
27
ceph orch ls --service_type <type> --service_name <name> [--refresh]
29
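For example, to check the status of a single RGW service, assuming a
hypothetical service named ``rgw.myrealm.myzone``:

.. prompt:: bash #

   ceph orch ls --service_type rgw --service_name rgw.myrealm.myzone --refresh
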
To export the service specifications known to the orchestrator, run the following command:
31
32 .. prompt:: bash #
33
34 ceph orch ls --export
35
The service specifications are exported as YAML, and the exported YAML can be
used as input to the ``ceph orch apply -i`` command.
38
39 For information about retrieving the specifications of single services (including examples of commands), see :ref:`orchestrator-cli-service-spec-retrieve`.
40
41 Daemon Status
42 =============
43
A daemon is a running systemd unit that is part of a service.
45
46 To see the status of a daemon, do the following:
47
48 #. Print a list of all daemons known to the orchestrator.
49 #. Query the status of the target daemon.
50
51 First, print a list of all daemons known to the orchestrator:
52
53 .. prompt:: bash #
54
55 ceph orch ps [--hostname host] [--daemon_type type] [--service_name name] [--daemon_id id] [--format f] [--refresh]
56
Then query the status of a particular daemon (mon, osd, mds, rgw). For OSDs
the id is the numeric OSD ID; for MDS services the id is the file system name:
60
61 .. prompt:: bash #
62
63 ceph orch ps --daemon_type osd --daemon_id 0
64
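Similarly, to query an MDS daemon, assuming a hypothetical file system named
``myfs``:

.. prompt:: bash #

   ceph orch ps --daemon_type mds --daemon_id myfs
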
65 .. _orchestrator-cli-service-spec:
66
67 Service Specification
68 =====================
69
70 A *Service Specification* is a data structure that is used to specify the
71 deployment of services. Here is an example of a service specification in YAML:
72
73 .. code-block:: yaml
74
75 service_type: rgw
76 service_id: realm.zone
77 placement:
78 hosts:
79 - host1
80 - host2
81 - host3
82 unmanaged: false
83 ...
84
85 In this example, the properties of this service specification are:
86
87 * ``service_type``
88 The type of the service. Needs to be either a Ceph
89 service (``mon``, ``crash``, ``mds``, ``mgr``, ``osd`` or
90 ``rbd-mirror``), a gateway (``nfs`` or ``rgw``), part of the
91 monitoring stack (``alertmanager``, ``grafana``, ``node-exporter`` or
92 ``prometheus``) or (``container``) for custom containers.
93 * ``service_id``
94 The name of the service.
95 * ``placement``
96 See :ref:`orchestrator-cli-placement-spec`.
* ``unmanaged``
    If set to ``true``, the orchestrator will neither deploy nor remove any
    daemon associated with this service. Placement and all other properties
    will be ignored. This is useful if you want to temporarily disable
    management of this service. For cephadm, see :ref:`cephadm-spec-unmanaged`.
101
102 Each service type can have additional service-specific properties.
103
104 Service specifications of type ``mon``, ``mgr``, and the monitoring
105 types do not require a ``service_id``.
106
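For example, a minimal specification for a ``mgr`` service can omit the
``service_id`` entirely (the count shown here is only illustrative):

.. code-block:: yaml

    service_type: mgr
    placement:
      count: 2
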
A service of type ``osd`` is described in :ref:`drivegroups`.
108
109 Many service specifications can be applied at once using ``ceph orch apply -i``
110 by submitting a multi-document YAML file::
111
112 cat <<EOF | ceph orch apply -i -
113 service_type: mon
114 placement:
115 host_pattern: "mon*"
116 ---
117 service_type: mgr
118 placement:
119 host_pattern: "mgr*"
120 ---
121 service_type: osd
122 service_id: default_drive_group
123 placement:
124 host_pattern: "osd*"
125 data_devices:
126 all: true
127 EOF
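
The effect of applying one or more specifications can be previewed without
making any changes by adding ``--dry-run``; the file name below is only an
example:

.. prompt:: bash #

   ceph orch apply -i cluster.yaml --dry-run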
128
129 .. _orchestrator-cli-service-spec-retrieve:
130
131 Retrieving the running Service Specification
132 --------------------------------------------
133
If the services have been started via ``ceph orch apply ...``, then directly changing
the Service Specification is complicated. Instead, we suggest exporting the running
Service Specification with commands of the following form:
138
139 .. prompt:: bash #
140
141 ceph orch ls --service-name rgw.<realm>.<zone> --export > rgw.<realm>.<zone>.yaml
142 ceph orch ls --service-type mgr --export > mgr.yaml
143 ceph orch ls --export > cluster.yaml
144
145 The Specification can then be changed and re-applied as above.
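
For illustration, an exported ``mgr.yaml`` might look similar to the following
sketch; the exact contents depend on your cluster:

.. code-block:: yaml

    service_type: mgr
    service_name: mgr
    placement:
      count: 2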
146
147 .. _orchestrator-cli-placement-spec:
148
149 Placement Specification
150 =======================
151
152 For the orchestrator to deploy a *service*, it needs to know where to deploy
153 *daemons*, and how many to deploy. This is the role of a placement
154 specification. Placement specifications can either be passed as command line arguments
or in a YAML file.
156
157 .. note::
158
159 cephadm will not deploy daemons on hosts with the ``_no_schedule`` label; see :ref:`cephadm-special-host-labels`.
160
161 .. note::
162 The **apply** command can be confusing. For this reason, we recommend using
163 YAML specifications.
164
165 Each ``ceph orch apply <service-name>`` command supersedes the one before it.
166 If you do not use the proper syntax, you will clobber your work
167 as you go.
168
169 For example:
170
171 .. prompt:: bash #
172
173 ceph orch apply mon host1
174 ceph orch apply mon host2
175 ceph orch apply mon host3
176
This results in only one host having a monitor applied to it: host3.
178
179 (The first command creates a monitor on host1. Then the second command
180 clobbers the monitor on host1 and creates a monitor on host2. Then the
181 third command clobbers the monitor on host2 and creates a monitor on
182 host3. In this scenario, at this point, there is a monitor ONLY on
183 host3.)
184
185 To make certain that a monitor is applied to each of these three hosts,
186 run a command like this:
187
188 .. prompt:: bash #
189
190 ceph orch apply mon "host1,host2,host3"
191
192 There is another way to apply monitors to multiple hosts: a ``yaml`` file
193 can be used. Instead of using the "ceph orch apply mon" commands, run a
194 command of this form:
195
196 .. prompt:: bash #
197
198 ceph orch apply -i file.yaml
199
200 Here is a sample **file.yaml** file::
201
202 service_type: mon
203 placement:
204 hosts:
205 - host1
206 - host2
207 - host3
208
209 Explicit placements
210 -------------------
211
212 Daemons can be explicitly placed on hosts by simply specifying them:
213
214 .. prompt:: bash #
215
ceph orch apply prometheus --placement="host1 host2 host3"
217
218 Or in YAML:
219
220 .. code-block:: yaml
221
222 service_type: prometheus
223 placement:
224 hosts:
225 - host1
226 - host2
227 - host3
228
MONs and other services may require additional network information, which can
be given as part of the placement:
230
231 .. prompt:: bash #
232
ceph orch daemon add mon --placement="myhost:[v2:1.2.3.4:3300,v1:1.2.3.4:6789]=name"
234
235 where ``[v2:1.2.3.4:3300,v1:1.2.3.4:6789]`` is the network address of the monitor
236 and ``=name`` specifies the name of the new monitor.
237
238 .. _orch-placement-by-labels:
239
240 Placement by labels
241 -------------------
242
Daemon placement can be limited to hosts that match a specific label. To add
the label ``mylabel`` to the appropriate hosts, run this command:
245
246 .. prompt:: bash #
247
ceph orch host label add <hostname> mylabel
249
250 To view the current hosts and labels, run this command:
251
252 .. prompt:: bash #
253
254 ceph orch host ls
255
256 For example:
257
258 .. prompt:: bash #
259
260 ceph orch host label add host1 mylabel
261 ceph orch host label add host2 mylabel
262 ceph orch host label add host3 mylabel
263 ceph orch host ls
264
265 .. code-block:: bash
266
267 HOST ADDR LABELS STATUS
268 host1 mylabel
269 host2 mylabel
270 host3 mylabel
271 host4
272 host5
273
Now, tell cephadm to deploy daemons based on the label by running
275 this command:
276
277 .. prompt:: bash #
278
ceph orch apply prometheus --placement="label:mylabel"
280
281 Or in YAML:
282
283 .. code-block:: yaml
284
285 service_type: prometheus
286 placement:
287 label: "mylabel"
288
289 * See :ref:`orchestrator-host-labels`
290
291 Placement by pattern matching
292 -----------------------------
293
Daemons can also be placed on hosts whose names match a pattern:
295
296 .. prompt:: bash #
297
ceph orch apply prometheus --placement='myhost[1-3]'
299
300 Or in YAML:
301
302 .. code-block:: yaml
303
304 service_type: prometheus
305 placement:
306 host_pattern: "myhost[1-3]"
307
308 To place a service on *all* hosts, use ``"*"``:
309
310 .. prompt:: bash #
311
ceph orch apply node-exporter --placement='*'
313
314 Or in YAML:
315
316 .. code-block:: yaml
317
318 service_type: node-exporter
319 placement:
320 host_pattern: "*"
321
322
Changing the number of daemons
-------------------------------
325
326 By specifying ``count``, only the number of daemons specified will be created:
327
328 .. prompt:: bash #
329
ceph orch apply prometheus --placement=3
331
To deploy *daemons* on a subset of the listed hosts, combine a count with the host list:
333
334 .. prompt:: bash #
335
ceph orch apply prometheus --placement="2 host1 host2 host3"
337
If the count is greater than the number of hosts, cephadm deploys only one daemon per host:
339
340 .. prompt:: bash #
341
ceph orch apply prometheus --placement="3 host1 host2"
343
344 The command immediately above results in two Prometheus daemons.
345
346 YAML can also be used to specify limits, in the following way:
347
348 .. code-block:: yaml
349
350 service_type: prometheus
351 placement:
352 count: 3
353
YAML can also be used to combine a count limit with specific hosts:
355
356 .. code-block:: yaml
357
358 service_type: prometheus
359 placement:
360 count: 2
361 hosts:
362 - host1
363 - host2
364 - host3
365
366 Updating Service Specifications
367 ===============================
368
369 The Ceph Orchestrator maintains a declarative state of each
370 service in a ``ServiceSpec``. For certain operations, like updating
371 the RGW HTTP port, we need to update the existing
372 specification.
373
374 1. List the current ``ServiceSpec``:
375
376 .. prompt:: bash #
377
378 ceph orch ls --service_name=<service-name> --export > myservice.yaml
379
380 2. Update the yaml file:
381
382 .. prompt:: bash #
383
384 vi myservice.yaml
385
386 3. Apply the new ``ServiceSpec``:
387
388 .. prompt:: bash #
389
390 ceph orch apply -i myservice.yaml [--dry-run]
391
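Continuing the RGW HTTP port example, the edited ``myservice.yaml`` might look
similar to the following sketch. The service id and port are illustrative
only, and the exact field layout should be checked against the RGW service
documentation for your release:

.. code-block:: yaml

    service_type: rgw
    service_id: myrealm.myzone
    placement:
      count: 2
    spec:
      rgw_frontend_port: 8080
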
392 Deployment of Daemons
393 =====================
394
395 Cephadm uses a declarative state to define the layout of the cluster. This
396 state consists of a list of service specifications containing placement
397 specifications (See :ref:`orchestrator-cli-service-spec` ).
398
399 Cephadm continually compares a list of daemons actually running in the cluster
400 against the list in the service specifications. Cephadm adds new daemons and
401 removes old daemons as necessary in order to conform to the service
402 specifications.
403
404 Cephadm does the following to maintain compliance with the service
405 specifications.
406
407 Cephadm first selects a list of candidate hosts. Cephadm seeks explicit host
408 names and selects them. If cephadm finds no explicit host names, it looks for
409 label specifications. If no label is defined in the specification, cephadm
410 selects hosts based on a host pattern. If no host pattern is defined, as a last
411 resort, cephadm selects all known hosts as candidates.
412
413 Cephadm is aware of existing daemons running services and tries to avoid moving
414 them.
415
Cephadm supports the deployment of a specific number of daemons for a service.
417 Consider the following service specification:
418
419 .. code-block:: yaml
420
421 service_type: mds
service_id: myfs
423 placement:
424 count: 3
425 label: myfs
426
This service specification instructs cephadm to deploy three daemons on hosts
428 labeled ``myfs`` across the cluster.
429
430 If there are fewer than three daemons deployed on the candidate hosts, cephadm
431 randomly chooses hosts on which to deploy new daemons.
432
433 If there are more than three daemons deployed on the candidate hosts, cephadm
434 removes existing daemons.
435
436 Finally, cephadm removes daemons on hosts that are outside of the list of
437 candidate hosts.
438
439 .. note::
440
441 There is a special case that cephadm must consider.
442
443 If there are fewer hosts selected by the placement specification than
444 demanded by ``count``, cephadm will deploy only on the selected hosts.
445
446
447 .. _cephadm-spec-unmanaged:
448
449 Disabling automatic deployment of daemons
450 =========================================
451
Cephadm supports disabling the automated deployment and removal of daemons on a
per-service basis. With automatic management disabled, daemons can still be
deployed and removed manually, as described below.
454
455 Disabling automatic management of daemons
456 -----------------------------------------
457
To disable the automatic management of daemons, set ``unmanaged: true`` in the
:ref:`orchestrator-cli-service-spec` (``mgr.yaml``).
460
461 ``mgr.yaml``:
462
463 .. code-block:: yaml
464
465 service_type: mgr
466 unmanaged: true
467 placement:
468 label: mgr
469
470
471 .. prompt:: bash #
472
473 ceph orch apply -i mgr.yaml
474
475
476 .. note::
477
478 After you apply this change in the Service Specification, cephadm will no
479 longer deploy any new daemons (even if the placement specification matches
480 additional hosts).
481
482 Deploying a daemon on a host manually
483 -------------------------------------
484
485 .. note::
486
487 This workflow has a very limited use case and should only be used
488 in rare circumstances.
489
To manually deploy a daemon on a host, follow these steps:

#. Modify the service spec for the service: export the existing spec, add
   ``unmanaged: true``, and apply the modified spec.

#. Manually deploy the daemon with a command of the following form:
496
497 .. prompt:: bash #
498
499 ceph orch daemon add <daemon-type> --placement=<placement spec>
500
For example:
502
503 .. prompt:: bash #
504
505 ceph orch daemon add mgr --placement=my_host
506
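A minimal end-to-end sketch of this workflow, assuming a ``mgr`` service and a
host named ``my_host``:

.. prompt:: bash #

   ceph orch ls --service_name mgr --export > mgr.yaml
   vi mgr.yaml   # add "unmanaged: true"
   ceph orch apply -i mgr.yaml
   ceph orch daemon add mgr --placement=my_host
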
507 .. note::
508
509 Removing ``unmanaged: true`` from the service spec will
510 enable the reconciliation loop for this service and will
511 potentially lead to the removal of the daemon, depending
512 on the placement spec.
513
514 Removing a daemon from a host manually
515 --------------------------------------
516
517 To manually remove a daemon, run a command of the following form:
518
519 .. prompt:: bash #
520
521 ceph orch daemon rm <daemon name>... [--force]
522
523 For example:
524
525 .. prompt:: bash #
526
527 ceph orch daemon rm mgr.my_host.xyzxyz
528
529 .. note::
530
For managed services (``unmanaged: false``), cephadm will automatically
deploy a new daemon a few seconds later.
533
534 See also
535 --------
536
537 * See :ref:`cephadm-osd-declarative` for special handling of unmanaged OSDs.
* See also :ref:`cephadm-pause`.