5 A service is a group of daemons configured together. See these chapters
6 for details on individual services:
27 To see the status of one
28 of the services running in the Ceph cluster, do the following:
30 #. Use the command line to print a list of services.
31 #. Locate the service whose status you want to check.
32 #. Print the status of the service.
The following command prints a list of services known to the orchestrator. To
limit the output to services of only a particular type, use the optional
``--service_type`` parameter (mon, osd, mgr, mds, rgw). To limit the output to
a single service, use the optional ``--service_name`` parameter:
41 ceph orch ls [--service_type type] [--service_name name] [--export] [--format f] [--refresh]
43 Discover the status of a particular service or daemon:
47 ceph orch ls --service_type type --service_name <name> [--refresh]
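For example, to check the status of a hypothetical MDS service named ``mds.cephfs`` (substitute a service name from your own listing), one might run:

.. code-block:: bash

    ceph orch ls --service_type mds --service_name mds.cephfs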
To export the service specifications known to the orchestrator, use the ``--export`` flag of the ``ceph orch ls`` command.
The service specifications are exported as YAML, and that YAML can be used with the ``ceph orch apply -i`` command.
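As a rough sketch, running ``ceph orch ls --service_type mon --export`` might produce YAML similar to the following; the exact fields and values vary by service and cluster, so treat this as illustrative rather than literal output:

.. code-block:: yaml

    service_type: mon
    service_name: mon
    placement:
      count: 3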
58 For information about retrieving the specifications of single services (including examples of commands), see :ref:`orchestrator-cli-service-spec-retrieve`.
A daemon is a running systemd unit that is part of a service.
65 To see the status of a daemon, do the following:
67 #. Print a list of all daemons known to the orchestrator.
68 #. Query the status of the target daemon.
70 First, print a list of all daemons known to the orchestrator:
74 ceph orch ps [--hostname host] [--daemon_type type] [--service_name name] [--daemon_id id] [--format f] [--refresh]
Then query the status of a particular service instance (mon, osd, mds, rgw).
For OSDs the id is the numeric OSD ID. For MDS services the id is the file
system name:
82 ceph orch ps --daemon_type osd --daemon_id 0
The output of the command ``ceph orch ps`` may not reflect the current status of the daemons. By default,
the status is updated every 10 minutes. This interval can be shortened by modifying the ``mgr/cephadm/daemon_cache_timeout``
configuration variable (in seconds); for example, ``ceph config set mgr mgr/cephadm/daemon_cache_timeout 60`` reduces the refresh
interval to one minute. The information is updated every ``daemon_cache_timeout`` seconds unless the ``--refresh`` option
is used. This option triggers a request to refresh the information, which may take some time depending on the size of
the cluster. In general, the ``REFRESHED`` value indicates how recent the information displayed by ``ceph orch ps`` and
similar commands is.
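For instance, to shorten the cache interval to one minute and then force an immediate refresh (both behaviours are described in the note above):

.. code-block:: bash

    # refresh daemon information every 60 seconds instead of every 10 minutes
    ceph config set mgr mgr/cephadm/daemon_cache_timeout 60
    # ask the orchestrator to refresh the information right now
    ceph orch ps --refresh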
93 .. _orchestrator-cli-service-spec:
98 A *Service Specification* is a data structure that is used to specify the
99 deployment of services. In addition to parameters such as `placement` or
100 `networks`, the user can set initial values of service configuration parameters
101 by means of the `config` section. For each param/value configuration pair,
102 cephadm calls the following command to set its value:
106 ceph config set <service-name> <param> <value>
cephadm raises health warnings if invalid configuration parameters are
found in the spec (`CEPHADM_INVALID_CONFIG_OPTION`) or if any error occurs while
trying to apply the new configuration option(s) (`CEPHADM_FAILED_SET_OPTION`).
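As an illustration of that mapping (the service and parameter names below are placeholders, not recommendations), a spec whose ``config`` section contains a single pair would cause cephadm to run the corresponding ``ceph config set`` command:

.. code-block:: yaml

    service_type: rgw
    service_id: myrealm.myzone
    config:
      param_1: value_1   # cephadm runs: ceph config set rgw.myrealm.myzone param_1 value_1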
112 Here is an example of a service specification in YAML:
117 service_id: realm.zone
131 # Additional service specific attributes.
133 In this example, the properties of this service specification are:
135 .. py:currentmodule:: ceph.deployment.service_spec
137 .. autoclass:: ServiceSpec
140 Each service type can have additional service-specific properties.
142 Service specifications of type ``mon``, ``mgr``, and the monitoring
143 types do not require a ``service_id``.
A service of type ``osd`` is described in :ref:`drivegroups`.
147 Many service specifications can be applied at once using ``ceph orch apply -i``
148 by submitting a multi-document YAML file::
150 cat <<EOF | ceph orch apply -i -
160 service_id: default_drive_group
167 .. _orchestrator-cli-service-spec-retrieve:
169 Retrieving the running Service Specification
170 --------------------------------------------
If the services have been started via ``ceph orch apply...``, then directly changing
the Service Specification is complicated. Instead of attempting to change the
Service Specification directly, we suggest exporting the running Service Specification by
following these instructions:
179 ceph orch ls --service-name rgw.<realm>.<zone> --export > rgw.<realm>.<zone>.yaml
180 ceph orch ls --service-type mgr --export > mgr.yaml
181 ceph orch ls --export > cluster.yaml
183 The Specification can then be changed and re-applied as above.
185 Updating Service Specifications
186 -------------------------------
The Ceph Orchestrator maintains a declarative state of each
service in a ``ServiceSpec``. For certain operations, like updating
the RGW HTTP port, we need to update the existing specification, as shown in
the example after the steps below.
193 1. List the current ``ServiceSpec``:
197 ceph orch ls --service_name=<service-name> --export > myservice.yaml
2. Update the YAML file:
205 3. Apply the new ``ServiceSpec``:
209 ceph orch apply -i myservice.yaml [--dry-run]
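For the RGW HTTP port case mentioned above, the edit in step 2 would change the frontend port carried in the exported spec. The sketch below assumes a hypothetical service ``rgw.myrealm.myzone`` and uses the RGW spec's frontend port field (``rgw_frontend_port``); everything other than the changed field should be kept exactly as exported:

.. code-block:: yaml

    service_type: rgw
    service_id: myrealm.myzone
    service_name: rgw.myrealm.myzone
    placement:
      count: 2
    spec:
      rgw_frontend_port: 8080   # the updated HTTP port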
211 .. _orchestrator-cli-placement-spec:
For the orchestrator to deploy a *service*, it needs to know where to deploy
*daemons*, and how many to deploy. This is the role of a placement
specification. Placement specifications can either be passed as command line
arguments or specified in a YAML file.
223 cephadm will not deploy daemons on hosts with the ``_no_schedule`` label; see :ref:`cephadm-special-host-labels`.
The **apply** command can be confusing. For this reason, we recommend using
YAML specifications.
229 Each ``ceph orch apply <service-name>`` command supersedes the one before it.
If you do not use the proper syntax, you will clobber your work as you go.
237 ceph orch apply mon host1
238 ceph orch apply mon host2
239 ceph orch apply mon host3
This results in only one host having a monitor applied to it: host3.
243 (The first command creates a monitor on host1. Then the second command
244 clobbers the monitor on host1 and creates a monitor on host2. Then the
245 third command clobbers the monitor on host2 and creates a monitor on
host3. In this scenario, at this point, there is a monitor ONLY on host3.)
249 To make certain that a monitor is applied to each of these three hosts,
250 run a command like this:
254 ceph orch apply mon "host1,host2,host3"
There is another way to apply monitors to multiple hosts: a ``yaml`` file
can be used. Instead of using the ``ceph orch apply mon`` commands, run a
command of this form:
262 ceph orch apply -i file.yaml
Here is a sample **file.yaml** file:
278 Daemons can be explicitly placed on hosts by simply specifying them:
282 ceph orch apply prometheus --placement="host1 host2 host3"
288 service_type: prometheus
295 MONs and other services may require some enhanced network specifications:
299 ceph orch daemon add mon --placement="myhost:[v2:1.2.3.4:3300,v1:1.2.3.4:6789]=name"
301 where ``[v2:1.2.3.4:3300,v1:1.2.3.4:6789]`` is the network address of the monitor
302 and ``=name`` specifies the name of the new monitor.
304 .. _orch-placement-by-labels:
Daemon placement can be limited to hosts that match a specific label. To add
the label ``mylabel`` to the appropriate hosts, run this command:
314 ceph orch host label add *<hostname>* mylabel
316 To view the current hosts and labels, run this command:
326 ceph orch host label add host1 mylabel
327 ceph orch host label add host2 mylabel
328 ceph orch host label add host3 mylabel
333 HOST ADDR LABELS STATUS
Now, tell cephadm to deploy daemons based on the label by running this command:
345 ceph orch apply prometheus --placement="label:mylabel"
351 service_type: prometheus
355 * See :ref:`orchestrator-host-labels`
357 Placement by pattern matching
358 -----------------------------
Daemons can be placed on hosts using a host pattern as well:
364 ceph orch apply prometheus --placement='myhost[1-3]'
370 service_type: prometheus
372 host_pattern: "myhost[1-3]"
374 To place a service on *all* hosts, use ``"*"``:
378 ceph orch apply node-exporter --placement='*'
384 service_type: node-exporter
389 Changing the number of daemons
390 ------------------------------
If ``count`` is specified, only that number of daemons will be created:
396 ceph orch apply prometheus --placement=3
398 To deploy *daemons* on a subset of hosts, specify the count:
402 ceph orch apply prometheus --placement="2 host1 host2 host3"
If the count is bigger than the number of hosts, cephadm deploys one daemon per host:
408 ceph orch apply prometheus --placement="3 host1 host2"
410 The command immediately above results in two Prometheus daemons.
412 YAML can also be used to specify limits, in the following way:
416 service_type: prometheus
420 YAML can also be used to specify limits on hosts:
424 service_type: prometheus
432 .. _cephadm_co_location:
434 Co-location of daemons
435 ----------------------
437 Cephadm supports the deployment of multiple daemons on the same host:
The main reason for deploying multiple daemons per host is the additional
performance benefit of running multiple RGW or MDS daemons on the same host.
451 * :ref:`cephadm_mgr_co_location`.
452 * :ref:`cephadm-rgw-designated_gateways`.
454 This feature was introduced in Pacific.
456 Algorithm description
457 ---------------------
459 Cephadm's declarative state consists of a list of service specifications
460 containing placement specifications.
462 Cephadm continually compares a list of daemons actually running in the cluster
463 against the list in the service specifications. Cephadm adds new daemons and
removes old daemons as necessary in order to conform to the service
specifications.

Cephadm does the following to maintain compliance with the service
specifications.
470 Cephadm first selects a list of candidate hosts. Cephadm seeks explicit host
471 names and selects them. If cephadm finds no explicit host names, it looks for
472 label specifications. If no label is defined in the specification, cephadm
473 selects hosts based on a host pattern. If no host pattern is defined, as a last
474 resort, cephadm selects all known hosts as candidates.
Cephadm is aware of existing daemons running services and tries to avoid moving them.
Cephadm supports the deployment of a specific number of daemons for a service.
480 Consider the following service specification:
490 This service specification instructs cephadm to deploy three daemons on hosts
491 labeled ``myfs`` across the cluster.
493 If there are fewer than three daemons deployed on the candidate hosts, cephadm
494 randomly chooses hosts on which to deploy new daemons.
496 If there are more than three daemons deployed on the candidate hosts, cephadm
497 removes existing daemons.
Finally, cephadm removes daemons on hosts that are outside of the list of
candidate hosts.
504 There is a special case that cephadm must consider.
506 If there are fewer hosts selected by the placement specification than
507 demanded by ``count``, cephadm will deploy only on the selected hosts.
509 .. _cephadm-extra-container-args:
511 Extra Container Arguments
512 =========================
515 The arguments provided for extra container args are limited to whatever arguments are available for
516 a `run` command from whichever container engine you are using. Providing any arguments the `run`
517 command does not support (or invalid values for arguments) will cause the daemon to fail to start.
For arguments passed to the process running inside the container rather than for
the container runtime itself, see :ref:`cephadm-extra-entrypoint-args`.
Cephadm supports providing extra miscellaneous container arguments for
specific cases when they may be necessary. For example, if a user needed
to limit the number of CPUs that their mon daemons use, they could apply
a spec like the following:
539 extra_container_args:
542 which would cause each mon daemon to be deployed with `--cpus=2`.
544 Mounting Files with Extra Container Arguments
545 ---------------------------------------------
547 A common use case for extra container arguments is to mount additional
548 files within the container. However, some intuitive formats for doing
549 so can cause deployment to fail (see https://tracker.ceph.com/issues/57338).
550 The recommended syntax for mounting a file with extra container arguments is:
554 extra_container_args:
556 - "/absolute/file/path/on/host:/absolute/file/path/in/container"
562 extra_container_args:
564 - "/opt/ceph_cert/host.cert:/etc/grafana/certs/cert_file:ro"
566 .. _cephadm-extra-entrypoint-args:
568 Extra Entrypoint Arguments
569 ==========================
For arguments intended for the container runtime rather than the process inside
it, see :ref:`cephadm-extra-container-args`.
Similar to extra container args for the container runtime, Cephadm supports
appending to the args passed to the entrypoint process running
within a container. For example, to set the collector textfile directory for
the node-exporter service, one could apply a service spec like the following:
583 service_type: node-exporter
584 service_name: node-exporter
587 extra_entrypoint_args:
588 - "--collector.textfile.directory=/var/lib/node_exporter/textfile_collector2"
593 Cephadm supports specifying miscellaneous config files for daemons.
594 To do so, users must provide both the content of the config file and the
595 location within the daemon's container at which it should be mounted. After
596 applying a YAML spec with custom config files specified and having cephadm
597 redeploy the daemons for which the config files are specified, these files will
598 be mounted within the daemon's container at the specified location.
600 Example service spec:
604 service_type: grafana
605 service_name: grafana
607 - mount_path: /etc/example.conf
611 - mount_path: /usr/share/grafana/example.cert
613 -----BEGIN PRIVATE KEY-----
614 V2VyIGRhcyBsaWVzdCBpc3QgZG9vZi4gTG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFt
615 ZXQsIGNvbnNldGV0dXIgc2FkaXBzY2luZyBlbGl0ciwgc2VkIGRpYW0gbm9udW15
616 IGVpcm1vZCB0ZW1wb3IgaW52aWR1bnQgdXQgbGFib3JlIGV0IGRvbG9yZSBtYWdu
617 YSBhbGlxdXlhbSBlcmF0LCBzZWQgZGlhbSB2b2x1cHR1YS4gQXQgdmVybyBlb3Mg
618 ZXQgYWNjdXNhbSBldCBqdXN0byBkdW8=
619 -----END PRIVATE KEY-----
620 -----BEGIN CERTIFICATE-----
621 V2VyIGRhcyBsaWVzdCBpc3QgZG9vZi4gTG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFt
622 ZXQsIGNvbnNldGV0dXIgc2FkaXBzY2luZyBlbGl0ciwgc2VkIGRpYW0gbm9udW15
623 IGVpcm1vZCB0ZW1wb3IgaW52aWR1bnQgdXQgbGFib3JlIGV0IGRvbG9yZSBtYWdu
624 YSBhbGlxdXlhbSBlcmF0LCBzZWQgZGlhbSB2b2x1cHR1YS4gQXQgdmVybyBlb3Mg
625 ZXQgYWNjdXNhbSBldCBqdXN0byBkdW8=
626 -----END CERTIFICATE-----
To make these new config files actually get mounted within the
containers for the daemons, redeploy the daemons for the service:
633 ceph orch redeploy <service-name>
639 ceph orch redeploy grafana
In order to remove a service, including the removal
of all daemons of that service, run a command of this form:
651 ceph orch rm <service-name>
657 ceph orch rm rgw.myrgw
659 .. _cephadm-spec-unmanaged:
661 Disabling automatic deployment of daemons
662 =========================================
664 Cephadm supports disabling the automated deployment and removal of daemons on a
665 per service basis. The CLI supports two commands for this.
667 In order to fully remove a service, see :ref:`orch-rm`.
669 Disabling automatic management of daemons
670 -----------------------------------------
To disable the automatic management of daemons, set ``unmanaged=True`` in the
:ref:`orchestrator-cli-service-spec` (``mgr.yaml``).
687 ceph orch apply -i mgr.yaml
689 Cephadm also supports setting the unmanaged parameter to true or false
690 using the ``ceph orch set-unmanaged`` and ``ceph orch set-managed`` commands.
691 The commands take the service name (as reported in ``ceph orch ls``) as
692 the only argument. For example,
696 ceph orch set-unmanaged mon
698 would set ``unmanaged: true`` for the mon service and
702 ceph orch set-managed mon
would set ``unmanaged: false`` for the mon service.
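To confirm that the flag took effect, one option (a sketch, again using the mon service) is to export the spec and look for the ``unmanaged`` field in the output:

.. code-block:: bash

    ceph orch ls --service_name mon --export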
After you apply this change in the Service Specification, cephadm will no
longer deploy any new daemons (even if the placement specification matches
additional hosts).
The "osd" service, which is used to track OSDs that are not tied to any specific
service spec, is special and will always be marked unmanaged. Attempting
to modify it with ``ceph orch set-unmanaged`` or ``ceph orch set-managed``
will result in the message ``No service of name osd found. Check "ceph orch ls" for all known services``.
719 Deploying a daemon on a host manually
720 -------------------------------------
724 This workflow has a very limited use case and should only be used
725 in rare circumstances.
727 To manually deploy a daemon on a host, follow these steps:
729 Modify the service spec for a service by getting the
730 existing spec, adding ``unmanaged: true``, and applying the modified spec.
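A minimal sketch of that first step, assuming the target is the mgr service:

.. code-block:: bash

    ceph orch ls --service_name mgr --export > mgr.yaml
    # edit mgr.yaml and add the line "unmanaged: true"
    ceph orch apply -i mgr.yaml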
732 Then manually deploy the daemon using the following:
736 ceph orch daemon add <daemon-type> --placement=<placement spec>
742 ceph orch daemon add mgr --placement=my_host
746 Removing ``unmanaged: true`` from the service spec will
747 enable the reconciliation loop for this service and will
748 potentially lead to the removal of the daemon, depending
749 on the placement spec.
751 Removing a daemon from a host manually
752 --------------------------------------
754 To manually remove a daemon, run a command of the following form:
758 ceph orch daemon rm <daemon name>... [--force]
764 ceph orch daemon rm mgr.my_host.xyzxyz
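The daemon name expected here is the ``NAME`` reported by ``ceph orch ps``. For example (with hypothetical names), to locate and then remove a specific crash daemon:

.. code-block:: bash

    ceph orch ps --daemon_type crash
    ceph orch daemon rm crash.host1 --force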
768 For managed services (``unmanaged=False``), cephadm will automatically
769 deploy a new daemon a few seconds later.
774 * See :ref:`cephadm-osd-declarative` for special handling of unmanaged OSDs.
775 * See also :ref:`cephadm-pause`