==================
Service Management
==================

A service is a group of daemons configured together. See these chapters
for details on individual services:

.. toctree::
    :maxdepth: 1

    mon
    mgr
    osd
    rgw
    mds
    nfs
    iscsi
    custom-container
    monitoring
    snmp-gateway

Service Status
==============

To see the status of one of the services running in the Ceph cluster, do the
following:

#. Use the command line to print a list of services.
#. Locate the service whose status you want to check.
#. Print the status of the service.

The following command prints a list of services known to the orchestrator. To
limit the output to services of a particular type, use the optional
``--service_type`` parameter (mon, osd, mgr, mds, rgw). To limit the output to
a particular service, use the optional ``--service_name`` parameter:

.. prompt:: bash #

    ceph orch ls [--service_type type] [--service_name name] [--export] [--format f] [--refresh]

Discover the status of a particular service or daemon:

.. prompt:: bash #

    ceph orch ls --service_type type --service_name <name> [--refresh]

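For example, to check the status of a single RGW service (using a hypothetical
service name, ``rgw.myrgw``, for illustration):

.. prompt:: bash #

    ceph orch ls --service_type rgw --service_name rgw.myrgw --refresh
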
To export the service specifications known to the orchestrator, run the
following command:

.. prompt:: bash #

    ceph orch ls --export

The service specifications are exported in YAML format, and that YAML can be
used with the ``ceph orch apply -i`` command.

For information about retrieving the specifications of single services (including examples of commands), see :ref:`orchestrator-cli-service-spec-retrieve`.

Daemon Status
=============

A daemon is a running systemd unit that is part of a service.

To see the status of a daemon, do the following:

#. Print a list of all daemons known to the orchestrator.
#. Query the status of the target daemon.

First, print a list of all daemons known to the orchestrator:

.. prompt:: bash #

    ceph orch ps [--hostname host] [--daemon_type type] [--service_name name] [--daemon_id id] [--format f] [--refresh]

Then query the status of a particular service instance (mon, osd, mds, rgw).
For OSDs the id is the numeric OSD ID. For MDS services the id is the file
system name:

.. prompt:: bash #

    ceph orch ps --daemon_type osd --daemon_id 0

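The same synopsis can be used to list every daemon that belongs to one service.
For example, to list all daemons of a hypothetical RGW service named
``rgw.myrgw``:

.. prompt:: bash #

    ceph orch ps --service_name rgw.myrgw
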
.. _orchestrator-cli-service-spec:

Service Specification
=====================

A *Service Specification* is a data structure that is used to specify the
deployment of services. In addition to parameters such as `placement` or
`networks`, the user can set initial values of service configuration parameters
by means of the `config` section. For each param/value configuration pair,
cephadm calls the following command to set its value:

.. prompt:: bash #

    ceph config set <service-name> <param> <value>

cephadm raises a health warning if invalid configuration parameters are found
in the spec (`CEPHADM_INVALID_CONFIG_OPTION`) or if any error occurs while
trying to apply the new configuration option(s) (`CEPHADM_FAILED_SET_OPTION`).

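As a minimal illustration of that mapping (the option and value here are only
an example, not a recommendation), a ``mon`` spec whose ``config`` section
contains ``mon_cluster_log_to_file: true`` would cause cephadm to run roughly:

.. prompt:: bash #

    ceph config set mon mon_cluster_log_to_file true
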
Here is an example of a service specification in YAML:

.. code-block:: yaml

    service_type: rgw
    service_id: realm.zone
    placement:
      hosts:
        - host1
        - host2
        - host3
    config:
      param_1: val_1
      ...
      param_N: val_N
    unmanaged: false
    networks:
      - 192.169.142.0/24
    spec:
      # Additional service specific attributes.

In this example, the properties of this service specification are:

.. py:currentmodule:: ceph.deployment.service_spec

.. autoclass:: ServiceSpec
    :members:

Each service type can have additional service-specific properties.

Service specifications of type ``mon``, ``mgr``, and the monitoring
types do not require a ``service_id``.

A service of type ``osd`` is described in :ref:`drivegroups`.

Many service specifications can be applied at once using ``ceph orch apply -i``
by submitting a multi-document YAML file::

    cat <<EOF | ceph orch apply -i -
    service_type: mon
    placement:
      host_pattern: "mon*"
    ---
    service_type: mgr
    placement:
      host_pattern: "mgr*"
    ---
    service_type: osd
    service_id: default_drive_group
    placement:
      host_pattern: "osd*"
    data_devices:
      all: true
    EOF

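To preview what such a file would change before applying it, the same command
accepts the ``--dry-run`` flag (shown again in the update workflow below). A
minimal sketch, assuming the documents above were saved to a file named
``cluster.yaml``:

.. prompt:: bash #

    ceph orch apply -i cluster.yaml --dry-run
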
.. _orchestrator-cli-service-spec-retrieve:

Retrieving the running Service Specification
--------------------------------------------

If the services have been started via ``ceph orch apply...``, then directly
changing the Service Specification is complicated. Instead of attempting to
change it directly, we suggest exporting the running Service Specification by
following these instructions:

.. prompt:: bash #

    ceph orch ls --service-name rgw.<realm>.<zone> --export > rgw.<realm>.<zone>.yaml
    ceph orch ls --service-type mgr --export > mgr.yaml
    ceph orch ls --export > cluster.yaml

The Specification can then be changed and re-applied as above.

Updating Service Specifications
-------------------------------

The Ceph Orchestrator maintains a declarative state of each
service in a ``ServiceSpec``. For certain operations, like updating
the RGW HTTP port, we need to update the existing
specification (see the example after the steps below).

1. List the current ``ServiceSpec``:

   .. prompt:: bash #

      ceph orch ls --service_name=<service-name> --export > myservice.yaml

2. Update the YAML file:

   .. prompt:: bash #

      vi myservice.yaml

3. Apply the new ``ServiceSpec``:

   .. prompt:: bash #

      ceph orch apply -i myservice.yaml [--dry-run]

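Continuing the RGW HTTP port example, the edited ``myservice.yaml`` might look
like the following sketch. The service name, hosts, and port are assumptions
for illustration; ``rgw_frontend_port`` is the RGW-specific spec attribute that
sets the HTTP port:

.. code-block:: yaml

    service_type: rgw
    service_id: myrealm.myzone
    placement:
      hosts:
        - host1
        - host2
    spec:
      rgw_frontend_port: 8080
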
.. _orchestrator-cli-placement-spec:

Daemon Placement
================

For the orchestrator to deploy a *service*, it needs to know where to deploy
*daemons*, and how many to deploy. This is the role of a placement
specification. Placement specifications can either be passed as command line
arguments or specified in a YAML file.

.. note::

   cephadm will not deploy daemons on hosts with the ``_no_schedule`` label; see :ref:`cephadm-special-host-labels`.

.. note::
   The **apply** command can be confusing. For this reason, we recommend using
   YAML specifications.

Each ``ceph orch apply <service-name>`` command supersedes the one before it.
If you do not use the proper syntax, you will clobber your work
as you go.

For example:

.. prompt:: bash #

    ceph orch apply mon host1
    ceph orch apply mon host2
    ceph orch apply mon host3

This results in only one host having a monitor applied to it: host3.

(The first command creates a monitor on host1. Then the second command
clobbers the monitor on host1 and creates a monitor on host2. Then the
third command clobbers the monitor on host2 and creates a monitor on
host3. In this scenario, at this point, there is a monitor ONLY on
host3.)

To make certain that a monitor is applied to each of these three hosts,
run a command like this:

.. prompt:: bash #

    ceph orch apply mon "host1,host2,host3"

There is another way to apply monitors to multiple hosts: a ``yaml`` file
can be used. Instead of using the "ceph orch apply mon" commands, run a
command of this form:

.. prompt:: bash #

    ceph orch apply -i file.yaml

Here is a sample **file.yaml** file:

.. code-block:: yaml

    service_type: mon
    placement:
      hosts:
        - host1
        - host2
        - host3

Explicit placements
-------------------

Daemons can be explicitly placed on hosts by simply specifying them:

.. prompt:: bash #

    ceph orch apply prometheus --placement="host1 host2 host3"

Or in YAML:

.. code-block:: yaml

    service_type: prometheus
    placement:
      hosts:
        - host1
        - host2
        - host3

MONs and other services may require some enhanced network specifications:

.. prompt:: bash #

    ceph orch daemon add mon --placement="myhost:[v2:1.2.3.4:3300,v1:1.2.3.4:6789]=name"

where ``[v2:1.2.3.4:3300,v1:1.2.3.4:6789]`` is the network address of the monitor
and ``=name`` specifies the name of the new monitor.

.. _orch-placement-by-labels:

Placement by labels
-------------------

Daemon placement can be limited to hosts that match a specific label. To add
the label ``mylabel`` to the appropriate hosts, run this command:

.. prompt:: bash #

    ceph orch host label add *<hostname>* mylabel

To view the current hosts and labels, run this command:

.. prompt:: bash #

    ceph orch host ls

For example:

.. prompt:: bash #

    ceph orch host label add host1 mylabel
    ceph orch host label add host2 mylabel
    ceph orch host label add host3 mylabel
    ceph orch host ls

.. code-block:: bash

    HOST   ADDR   LABELS    STATUS
    host1         mylabel
    host2         mylabel
    host3         mylabel
    host4
    host5

Now, tell cephadm to deploy daemons based on the label by running
this command:

.. prompt:: bash #

    ceph orch apply prometheus --placement="label:mylabel"

Or in YAML:

.. code-block:: yaml

    service_type: prometheus
    placement:
      label: "mylabel"

* See :ref:`orchestrator-host-labels`

Placement by pattern matching
-----------------------------

Daemons can also be placed on hosts whose names match a pattern:

.. prompt:: bash #

    ceph orch apply prometheus --placement='myhost[1-3]'

Or in YAML:

.. code-block:: yaml

    service_type: prometheus
    placement:
      host_pattern: "myhost[1-3]"

To place a service on *all* hosts, use ``"*"``:

.. prompt:: bash #

    ceph orch apply node-exporter --placement='*'

Or in YAML:

.. code-block:: yaml

    service_type: node-exporter
    placement:
      host_pattern: "*"

Changing the number of daemons
------------------------------

By specifying ``count``, only the number of daemons specified will be created:

.. prompt:: bash #

    ceph orch apply prometheus --placement=3

To deploy *daemons* on a subset of hosts, specify the count:

.. prompt:: bash #

    ceph orch apply prometheus --placement="2 host1 host2 host3"

If the count is bigger than the number of hosts, cephadm deploys one per host:

.. prompt:: bash #

    ceph orch apply prometheus --placement="3 host1 host2"

The command immediately above results in two Prometheus daemons.

YAML can also be used to specify limits, in the following way:

.. code-block:: yaml

    service_type: prometheus
    placement:
      count: 3

YAML can also be used to specify limits on hosts:

.. code-block:: yaml

    service_type: prometheus
    placement:
      count: 2
      hosts:
        - host1
        - host2
        - host3

.. _cephadm_co_location:

Co-location of daemons
----------------------

Cephadm supports the deployment of multiple daemons on the same host:

.. code-block:: yaml

    service_type: rgw
    placement:
      label: rgw
      count_per_host: 2

The main reason for deploying multiple daemons per host is the additional
performance benefit of running multiple RGW and MDS daemons on the same host.

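The same ``count_per_host`` placement attribute can be used with other service
types. A minimal sketch for an MDS service (the file system name ``myfs`` and
the ``mds`` label are assumptions for illustration):

.. code-block:: yaml

    service_type: mds
    service_id: myfs
    placement:
      label: mds
      count_per_host: 2
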
See also:

* :ref:`cephadm_mgr_co_location`.
* :ref:`cephadm-rgw-designated_gateways`.

This feature was introduced in Pacific.

Algorithm description
---------------------

Cephadm's declarative state consists of a list of service specifications
containing placement specifications.

Cephadm continually compares the list of daemons actually running in the
cluster against the list in the service specifications. Cephadm adds new
daemons and removes old daemons as necessary in order to conform to the
service specifications.

Cephadm does the following to maintain compliance with the service
specifications.

Cephadm first selects a list of candidate hosts. Cephadm seeks explicit host
names and selects them. If cephadm finds no explicit host names, it looks for
label specifications. If no label is defined in the specification, cephadm
selects hosts based on a host pattern. If no host pattern is defined, as a last
resort, cephadm selects all known hosts as candidates.

Cephadm is aware of existing daemons running services and tries to avoid moving
them.

Cephadm supports the deployment of a specific number of daemons for a service.
Consider the following service specification:

.. code-block:: yaml

    service_type: mds
    service_name: myfs
    placement:
      count: 3
      label: myfs

This service specification instructs cephadm to deploy three daemons on hosts
labeled ``myfs`` across the cluster.

If there are fewer than three daemons deployed on the candidate hosts, cephadm
randomly chooses hosts on which to deploy new daemons.

If there are more than three daemons deployed on the candidate hosts, cephadm
removes existing daemons.

Finally, cephadm removes daemons on hosts that are outside of the list of
candidate hosts.

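As a worked illustration of these rules (the host names, labels, and current
daemon placement here are assumptions, not output from a real cluster):

.. code-block:: yaml

    # host1: label myfs, already runs an mds daemon  -> the daemon is kept
    # host2: label myfs, no daemon                   -> a new daemon is added
    # host3: label myfs, no daemon                   -> a new daemon is added
    # host4: no label                                -> never a candidate
    service_type: mds
    service_name: myfs
    placement:
      count: 3
      label: myfs
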
.. note::

   There is a special case that cephadm must consider.

   If there are fewer hosts selected by the placement specification than
   demanded by ``count``, cephadm will deploy only on the selected hosts.

Extra Container Arguments
=========================

.. warning::
   The arguments provided for extra container args are limited to whatever
   arguments are available for a `run` command from whichever container engine
   you are using. Providing any arguments the `run` command does not support
   (or invalid values for arguments) will cause the daemon to fail to start.

Cephadm supports providing extra miscellaneous container arguments for
specific cases when they may be necessary. For example, if a user needs
to limit the number of CPUs that their mon daemons use, they could apply
a spec like

.. code-block:: yaml

    service_type: mon
    service_name: mon
    placement:
      hosts:
        - host1
        - host2
        - host3
    extra_container_args:
      - "--cpus=2"

which would cause each mon daemon to be deployed with `--cpus=2`.

Custom Config Files
===================

Cephadm supports specifying miscellaneous config files for daemons.
To do so, users must provide both the content of the config file and the
location within the daemon's container at which it should be mounted. After
a YAML spec with custom config files is applied and cephadm redeploys the
daemons for which the config files are specified, these files will be mounted
within the daemons' containers at the specified locations.

Example service spec:

.. code-block:: yaml

    service_type: grafana
    service_name: grafana
    custom_configs:
      - mount_path: /etc/example.conf
        content: |
          setting1 = value1
          setting2 = value2
      - mount_path: /usr/share/grafana/example.cert
        content: |
          -----BEGIN PRIVATE KEY-----
          V2VyIGRhcyBsaWVzdCBpc3QgZG9vZi4gTG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFt
          ZXQsIGNvbnNldGV0dXIgc2FkaXBzY2luZyBlbGl0ciwgc2VkIGRpYW0gbm9udW15
          IGVpcm1vZCB0ZW1wb3IgaW52aWR1bnQgdXQgbGFib3JlIGV0IGRvbG9yZSBtYWdu
          YSBhbGlxdXlhbSBlcmF0LCBzZWQgZGlhbSB2b2x1cHR1YS4gQXQgdmVybyBlb3Mg
          ZXQgYWNjdXNhbSBldCBqdXN0byBkdW8=
          -----END PRIVATE KEY-----
          -----BEGIN CERTIFICATE-----
          V2VyIGRhcyBsaWVzdCBpc3QgZG9vZi4gTG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFt
          ZXQsIGNvbnNldGV0dXIgc2FkaXBzY2luZyBlbGl0ciwgc2VkIGRpYW0gbm9udW15
          IGVpcm1vZCB0ZW1wb3IgaW52aWR1bnQgdXQgbGFib3JlIGV0IGRvbG9yZSBtYWdu
          YSBhbGlxdXlhbSBlcmF0LCBzZWQgZGlhbSB2b2x1cHR1YS4gQXQgdmVybyBlb3Mg
          ZXQgYWNjdXNhbSBldCBqdXN0byBkdW8=
          -----END CERTIFICATE-----

To mount these new config files within the containers for the daemons,
redeploy the daemons for the service:

.. prompt:: bash

   ceph orch redeploy <service-name>

For example:

.. prompt:: bash

   ceph orch redeploy grafana

.. _orch-rm:

Removing a Service
==================

To remove a service, along with all daemons of that service, run:

.. prompt:: bash

   ceph orch rm <service-name>

For example:

.. prompt:: bash

   ceph orch rm rgw.myrgw

.. _cephadm-spec-unmanaged:

Disabling automatic deployment of daemons
=========================================

Cephadm supports disabling the automated deployment and removal of daemons on a
per service basis. The CLI supports two commands for this.

In order to fully remove a service, see :ref:`orch-rm`.

Disabling automatic management of daemons
-----------------------------------------

To disable the automatic management of daemons, set ``unmanaged=True`` in the
:ref:`orchestrator-cli-service-spec` (``mgr.yaml``).

``mgr.yaml``:

.. code-block:: yaml

    service_type: mgr
    unmanaged: true
    placement:
      label: mgr

.. prompt:: bash #

    ceph orch apply -i mgr.yaml

.. note::

   After you apply this change in the Service Specification, cephadm will no
   longer deploy any new daemons (even if the placement specification matches
   additional hosts).

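To resume automatic management of the service later, remove the
``unmanaged: true`` line (or set it to ``false``) and re-apply the spec. A
minimal sketch, reusing the ``mgr.yaml`` example above:

.. code-block:: yaml

    service_type: mgr
    unmanaged: false
    placement:
      label: mgr

.. prompt:: bash #

    ceph orch apply -i mgr.yaml
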
Deploying a daemon on a host manually
-------------------------------------

.. note::

   This workflow has a very limited use case and should only be used
   in rare circumstances.

To manually deploy a daemon on a host, follow these steps:

Modify the service spec for a service by getting the
existing spec, adding ``unmanaged: true``, and applying the modified spec.

Then manually deploy the daemon using the following:

.. prompt:: bash #

    ceph orch daemon add <daemon-type> --placement=<placement spec>

For example:

.. prompt:: bash #

    ceph orch daemon add mgr --placement=my_host

.. note::

   Removing ``unmanaged: true`` from the service spec will
   enable the reconciliation loop for this service and will
   potentially lead to the removal of the daemon, depending
   on the placement spec.

Removing a daemon from a host manually
--------------------------------------

To manually remove a daemon, run a command of the following form:

.. prompt:: bash #

    ceph orch daemon rm <daemon name>... [--force]

For example:

.. prompt:: bash #

    ceph orch daemon rm mgr.my_host.xyzxyz

.. note::

   For managed services (``unmanaged=False``), cephadm will automatically
   deploy a new daemon a few seconds later.

See also
--------

* See :ref:`cephadm-osd-declarative` for special handling of unmanaged OSDs.
* See also :ref:`cephadm-pause`