5 A service is a group of daemons configured together. See these chapters
6 for details on individual services:
26 To see the status of one
27 of the services running in the Ceph cluster, do the following:
29 #. Use the command line to print a list of services.
30 #. Locate the service whose status you want to check.
31 #. Print the status of the service.
The following command prints a list of services known to the orchestrator. To
limit the output to services of a particular type, use the optional
``--service_type`` parameter (mon, osd, mgr, mds, rgw). To limit the output to
a single service, use the optional ``--service_name`` parameter:
40 ceph orch ls [--service_type type] [--service_name name] [--export] [--format f] [--refresh]
42 Discover the status of a particular service or daemon:
46 ceph orch ls --service_type type --service_name <name> [--refresh]
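
For example, to check the status of a hypothetical RGW service named
``rgw.myrealm.myzone`` (the service name here is illustrative):

.. prompt:: bash #

    ceph orch ls --service_type rgw --service_name rgw.myrealm.myzone --refresh
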
To export the service specifications known to the orchestrator, run the following command.
The service specifications are exported in YAML format, and that YAML can be
used as input to the ``ceph orch apply -i`` command.
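
For example, every running specification can be exported to a single file and
later re-applied unchanged (a minimal round trip):

.. prompt:: bash #

    ceph orch ls --export > cluster.yaml
    ceph orch apply -i cluster.yaml
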
57 For information about retrieving the specifications of single services (including examples of commands), see :ref:`orchestrator-cli-service-spec-retrieve`.
A daemon is a running systemd unit that is part of a service.
64 To see the status of a daemon, do the following:
66 #. Print a list of all daemons known to the orchestrator.
67 #. Query the status of the target daemon.
69 First, print a list of all daemons known to the orchestrator:
73 ceph orch ps [--hostname host] [--daemon_type type] [--service_name name] [--daemon_id id] [--format f] [--refresh]
75 Then query the status of a particular service instance (mon, osd, mds, rgw).
76 For OSDs the id is the numeric OSD ID. For MDS services the id is the file
81 ceph orch ps --daemon_type osd --daemon_id 0
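
Daemons can also be filtered by service. For example, to list the daemons of a
hypothetical MDS service named ``mds.myfs``:

.. prompt:: bash #

    ceph orch ps --service_name mds.myfs
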
83 .. _orchestrator-cli-service-spec:
88 A *Service Specification* is a data structure that is used to specify the
89 deployment of services. In addition to parameters such as `placement` or
90 `networks`, the user can set initial values of service configuration parameters
91 by means of the `config` section. For each param/value configuration pair,
92 cephadm calls the following command to set its value:
96 ceph config set <service-name> <param> <value>
cephadm raises health warnings if invalid configuration parameters are found
in the spec (`CEPHADM_INVALID_CONFIG_OPTION`) or if any error occurs while
applying the new configuration option(s) (`CEPHADM_FAILED_SET_OPTION`).
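
For example (a minimal sketch; the option and value are illustrative), a spec
containing

.. code-block:: yaml

    service_type: mon
    config:
      mon_cluster_log_to_file: "true"

would cause cephadm to run ``ceph config set mon mon_cluster_log_to_file true``.
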
102 Here is an example of a service specification in YAML:
107 service_id: realm.zone
121 # Additional service specific attributes.
123 In this example, the properties of this service specification are:
125 .. py:currentmodule:: ceph.deployment.service_spec
127 .. autoclass:: ServiceSpec
130 Each service type can have additional service-specific properties.
132 Service specifications of type ``mon``, ``mgr``, and the monitoring
133 types do not require a ``service_id``.
A service of type ``osd`` is described in :ref:`drivegroups`.
137 Many service specifications can be applied at once using ``ceph orch apply -i``
138 by submitting a multi-document YAML file::
140 cat <<EOF | ceph orch apply -i -
150 service_id: default_drive_group
157 .. _orchestrator-cli-service-spec-retrieve:
159 Retrieving the running Service Specification
160 --------------------------------------------
If the services have been started via ``ceph orch apply ...``, then directly
changing the Service Specification is complicated. Instead of attempting to
change it directly, we suggest exporting the running Service Specification by
following these instructions:
169 ceph orch ls --service-name rgw.<realm>.<zone> --export > rgw.<realm>.<zone>.yaml
170 ceph orch ls --service-type mgr --export > mgr.yaml
171 ceph orch ls --export > cluster.yaml
173 The Specification can then be changed and re-applied as above.
175 Updating Service Specifications
176 -------------------------------
178 The Ceph Orchestrator maintains a declarative state of each
179 service in a ``ServiceSpec``. For certain operations, like updating
180 the RGW HTTP port, we need to update the existing
183 1. List the current ``ServiceSpec``:
187 ceph orch ls --service_name=<service-name> --export > myservice.yaml
189 2. Update the yaml file:
195 3. Apply the new ``ServiceSpec``:
199 ceph orch apply -i myservice.yaml [--dry-run]
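
As an illustration of steps 2 and 3 (the service id, hosts, and port below are
assumptions), an exported RGW spec might be edited to change the frontend port
before being re-applied:

.. code-block:: yaml

    service_type: rgw
    service_id: myrealm.myzone
    placement:
      hosts:
        - host1
        - host2
    spec:
      rgw_frontend_port: 8080
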
201 .. _orchestrator-cli-placement-spec:
206 For the orchestrator to deploy a *service*, it needs to know where to deploy
207 *daemons*, and how many to deploy. This is the role of a placement
208 specification. Placement specifications can either be passed as command line arguments
213 cephadm will not deploy daemons on hosts with the ``_no_schedule`` label; see :ref:`cephadm-special-host-labels`.
216 The **apply** command can be confusing. For this reason, we recommend using
219 Each ``ceph orch apply <service-name>`` command supersedes the one before it.
220 If you do not use the proper syntax, you will clobber your work
227 ceph orch apply mon host1
228 ceph orch apply mon host2
229 ceph orch apply mon host3
This results in only one host having a monitor applied to it: host3.
233 (The first command creates a monitor on host1. Then the second command
234 clobbers the monitor on host1 and creates a monitor on host2. Then the
235 third command clobbers the monitor on host2 and creates a monitor on
236 host3. In this scenario, at this point, there is a monitor ONLY on
239 To make certain that a monitor is applied to each of these three hosts,
240 run a command like this:
244 ceph orch apply mon "host1,host2,host3"
There is another way to apply monitors to multiple hosts: a YAML file can be
used. Instead of using the ``ceph orch apply mon`` commands, run a command of
this form:
252 ceph orch apply -i file.yaml
Here is a sample **file.yaml** file that achieves the same placement:
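
.. code-block:: yaml

    service_type: mon
    placement:
      hosts:
        - host1
        - host2
        - host3
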
Daemons can be explicitly placed on hosts by specifying them:
272 ceph orch apply prometheus --placement="host1 host2 host3"
278 service_type: prometheus
285 MONs and other services may require some enhanced network specifications:
289 ceph orch daemon add mon --placement="myhost:[v2:1.2.3.4:3300,v1:1.2.3.4:6789]=name"
291 where ``[v2:1.2.3.4:3300,v1:1.2.3.4:6789]`` is the network address of the monitor
292 and ``=name`` specifies the name of the new monitor.
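
A monitor can also be pinned to a plain IP address or a CIDR network on the
host (the hostname and network below are illustrative):

.. prompt:: bash #

    ceph orch daemon add mon --placement="myhost:10.1.2.0/24"
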
294 .. _orch-placement-by-labels:
Daemon placement can be limited to hosts that match a specific label. To add
the label ``mylabel`` to the appropriate hosts, run this command:
304 ceph orch host label add *<hostname>* mylabel
306 To view the current hosts and labels, run this command:
316 ceph orch host label add host1 mylabel
317 ceph orch host label add host2 mylabel
318 ceph orch host label add host3 mylabel
323 HOST ADDR LABELS STATUS
Now, tell cephadm to deploy daemons based on the label by running
335 ceph orch apply prometheus --placement="label:mylabel"
341 service_type: prometheus
345 * See :ref:`orchestrator-host-labels`
347 Placement by pattern matching
348 -----------------------------
Daemons can also be placed on hosts whose names match a pattern:
354 ceph orch apply prometheus --placement='myhost[1-3]'
360 service_type: prometheus
362 host_pattern: "myhost[1-3]"
364 To place a service on *all* hosts, use ``"*"``:
368 ceph orch apply node-exporter --placement='*'
374 service_type: node-exporter
379 Changing the number of daemons
380 ------------------------------
By specifying ``count``, only the specified number of daemons will be created:
386 ceph orch apply prometheus --placement=3
388 To deploy *daemons* on a subset of hosts, specify the count:
392 ceph orch apply prometheus --placement="2 host1 host2 host3"
If the count is bigger than the number of hosts, cephadm deploys one daemon per host:
398 ceph orch apply prometheus --placement="3 host1 host2"
400 The command immediately above results in two Prometheus daemons.
402 YAML can also be used to specify limits, in the following way:
406 service_type: prometheus
410 YAML can also be used to specify limits on hosts:
414 service_type: prometheus
422 .. _cephadm_co_location:
424 Co-location of daemons
425 ----------------------
Cephadm supports the deployment of multiple daemons on the same host. For
example (a sketch; the service id and label below are illustrative), the
following spec uses the ``count_per_host`` placement property to run two RGW
daemons on each labeled host:
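
.. code-block:: yaml

    service_type: rgw
    service_id: myrgw
    placement:
      label: rgw
      count_per_host: 2
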
The main reason for deploying multiple daemons per host is the additional
performance benefit of running multiple RGW or MDS daemons on the same host.
441 * :ref:`cephadm_mgr_co_location`.
442 * :ref:`cephadm-rgw-designated_gateways`.
444 This feature was introduced in Pacific.
446 Algorithm description
447 ---------------------
449 Cephadm's declarative state consists of a list of service specifications
450 containing placement specifications.
452 Cephadm continually compares a list of daemons actually running in the cluster
453 against the list in the service specifications. Cephadm adds new daemons and
454 removes old daemons as necessary in order to conform to the service
457 Cephadm does the following to maintain compliance with the service
460 Cephadm first selects a list of candidate hosts. Cephadm seeks explicit host
461 names and selects them. If cephadm finds no explicit host names, it looks for
462 label specifications. If no label is defined in the specification, cephadm
463 selects hosts based on a host pattern. If no host pattern is defined, as a last
464 resort, cephadm selects all known hosts as candidates.
466 Cephadm is aware of existing daemons running services and tries to avoid moving
Cephadm supports deploying a specific number of daemons for a service.
Consider the following service specification (an MDS service is assumed here
for illustration):
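
.. code-block:: yaml

    service_type: mds
    service_id: myfs
    placement:
      count: 3
      label: myfs
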
480 This service specification instructs cephadm to deploy three daemons on hosts
481 labeled ``myfs`` across the cluster.
483 If there are fewer than three daemons deployed on the candidate hosts, cephadm
484 randomly chooses hosts on which to deploy new daemons.
486 If there are more than three daemons deployed on the candidate hosts, cephadm
487 removes existing daemons.
489 Finally, cephadm removes daemons on hosts that are outside of the list of
494 There is a special case that cephadm must consider.
496 If there are fewer hosts selected by the placement specification than
497 demanded by ``count``, cephadm will deploy only on the selected hosts.
499 Extra Container Arguments
500 =========================
503 The arguments provided for extra container args are limited to whatever arguments are available for a `run` command from whichever container engine you are using. Providing any arguments the `run` command does not support (or invalid values for arguments) will cause the daemon to fail to start.
506 Cephadm supports providing extra miscellaneous container arguments for
specific cases when they may be necessary. For example, if a user needed
to limit the number of CPUs that their mon daemons use, they could apply a
spec that includes:
extra_container_args:
  - "--cpus=2"
523 which would cause each mon daemon to be deployed with `--cpus=2`.
528 Cephadm supports specifying miscellaneous config files for daemons.
529 To do so, users must provide both the content of the config file and the
location within the daemon's container at which it should be mounted. After
a YAML spec with custom config files is applied and cephadm redeploys the
affected daemons, the files are mounted within each daemon's container at the
specified location.
535 Example service spec:
539 service_type: grafana
540 service_name: grafana
542 - mount_path: /etc/example.conf
546 - mount_path: /usr/share/grafana/example.cert
548 -----BEGIN PRIVATE KEY-----
549 V2VyIGRhcyBsaWVzdCBpc3QgZG9vZi4gTG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFt
550 ZXQsIGNvbnNldGV0dXIgc2FkaXBzY2luZyBlbGl0ciwgc2VkIGRpYW0gbm9udW15
551 IGVpcm1vZCB0ZW1wb3IgaW52aWR1bnQgdXQgbGFib3JlIGV0IGRvbG9yZSBtYWdu
552 YSBhbGlxdXlhbSBlcmF0LCBzZWQgZGlhbSB2b2x1cHR1YS4gQXQgdmVybyBlb3Mg
553 ZXQgYWNjdXNhbSBldCBqdXN0byBkdW8=
554 -----END PRIVATE KEY-----
555 -----BEGIN CERTIFICATE-----
556 V2VyIGRhcyBsaWVzdCBpc3QgZG9vZi4gTG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFt
557 ZXQsIGNvbnNldGV0dXIgc2FkaXBzY2luZyBlbGl0ciwgc2VkIGRpYW0gbm9udW15
558 IGVpcm1vZCB0ZW1wb3IgaW52aWR1bnQgdXQgbGFib3JlIGV0IGRvbG9yZSBtYWdu
559 YSBhbGlxdXlhbSBlcmF0LCBzZWQgZGlhbSB2b2x1cHR1YS4gQXQgdmVybyBlb3Mg
560 ZXQgYWNjdXNhbSBldCBqdXN0byBkdW8=
561 -----END CERTIFICATE-----
To make these new config files actually get mounted within the containers
for the daemons, redeploy the affected service:
568 ceph orch redeploy <service-name>
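
For example:
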
574 ceph orch redeploy grafana
To remove a service, including the removal of all daemons of that service,
run a command of the following form:
586 ceph orch rm <service-name>
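
For example:
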
592 ceph orch rm rgw.myrgw
594 .. _cephadm-spec-unmanaged:
596 Disabling automatic deployment of daemons
597 =========================================
Cephadm supports disabling the automated deployment and removal of daemons on a
per-service basis. The CLI supports two commands for this.
602 In order to fully remove a service, see :ref:`orch-rm`.
604 Disabling automatic management of daemons
605 -----------------------------------------
To disable the automatic management of daemons, set ``unmanaged=True`` in the
:ref:`orchestrator-cli-service-spec` (``mgr.yaml``).
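
A minimal sketch of such an ``mgr.yaml`` (the placement label is an
assumption):

.. code-block:: yaml

    service_type: mgr
    unmanaged: true
    placement:
      label: mgr

Apply the spec:
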
622 ceph orch apply -i mgr.yaml
627 After you apply this change in the Service Specification, cephadm will no
628 longer deploy any new daemons (even if the placement specification matches
631 Deploying a daemon on a host manually
632 -------------------------------------
636 This workflow has a very limited use case and should only be used
637 in rare circumstances.
639 To manually deploy a daemon on a host, follow these steps:
641 Modify the service spec for a service by getting the
642 existing spec, adding ``unmanaged: true``, and applying the modified spec.
644 Then manually deploy the daemon using the following:
648 ceph orch daemon add <daemon-type> --placement=<placement spec>
654 ceph orch daemon add mgr --placement=my_host
658 Removing ``unmanaged: true`` from the service spec will
659 enable the reconciliation loop for this service and will
660 potentially lead to the removal of the daemon, depending
661 on the placement spec.
663 Removing a daemon from a host manually
664 --------------------------------------
666 To manually remove a daemon, run a command of the following form:
670 ceph orch daemon rm <daemon name>... [--force]
676 ceph orch daemon rm mgr.my_host.xyzxyz
680 For managed services (``unmanaged=False``), cephadm will automatically
681 deploy a new daemon a few seconds later.
686 * See :ref:`cephadm-osd-declarative` for special handling of unmanaged OSDs.
687 * See also :ref:`cephadm-pause`