A service is a group of daemons configured together. See these chapters
for details on individual services:

To see the status of one
of the services running in the Ceph cluster, do the following:

#. Use the command line to print a list of services.
#. Locate the service whose status you want to check.
#. Print the status of the service.

The following command prints a list of services known to the orchestrator. To
limit the output to services only on a specified host, use the optional
``--host`` parameter. To limit the output to services of only a particular
type, use the optional ``--type`` parameter (mon, osd, mgr, mds, rgw)::

    ceph orch ls [--service_type type] [--service_name name] [--export] [--format f] [--refresh]

Discover the status of a particular service or daemon::

    ceph orch ls --service_type type --service_name <name> [--refresh]

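For example, to check a single (hypothetical) RGW service named ``rgw.myrealm.myzone``::

    ceph orch ls --service_type rgw --service_name rgw.myrealm.myzone --refresh
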
To export the service specifications known to the orchestrator, run the following command.

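This uses the ``--export`` flag shown in the synopsis above::

    ceph orch ls --export
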
The service specifications are exported in YAML format, and that YAML can be
used with the ``ceph orch apply -i`` command.

For information about retrieving the specifications of single services (including examples of commands), see :ref:`orchestrator-cli-service-spec-retrieve`.

A daemon is a running systemd unit that is part of a service.

To see the status of a daemon, do the following:

#. Print a list of all daemons known to the orchestrator.
#. Query the status of the target daemon.

First, print a list of all daemons known to the orchestrator::

    ceph orch ps [--hostname host] [--daemon_type type] [--service_name name] [--daemon_id id] [--format f] [--refresh]

Then query the status of a particular service instance (mon, osd, mds, rgw).
For OSDs the id is the numeric OSD ID. For MDS services the id is the file
system name::

    ceph orch ps --daemon_type osd --daemon_id 0

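Similarly, to list the MDS daemons that serve a (hypothetical) file system named ``myfs``, filter by service name::

    ceph orch ps --daemon_type mds --service_name mds.myfs
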
.. _orchestrator-cli-service-spec:

Service Specification
=====================

A *Service Specification* is a data structure that is used to specify the
deployment of services. Here is an example of a service specification in YAML::

    service_type: rgw
    service_id: realm.zone
    placement:
      hosts:
        - host1
        - host2
        - host3
    spec:
      # Additional service specific attributes.

In this example, the properties of this service specification are:

.. py:currentmodule:: ceph.deployment.service_spec

.. autoclass:: ServiceSpec

Each service type can have additional service-specific properties.

Service specifications of type ``mon``, ``mgr``, and the monitoring
types do not require a ``service_id``.

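A manager specification, for example, can therefore be as small as this (the ``count``
shown here is only an illustration)::

    service_type: mgr
    placement:
      count: 2
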
A service of type ``osd`` is described in :ref:`drivegroups`.

Many service specifications can be applied at once using ``ceph orch apply -i``
by submitting a multi-document YAML file::

    cat <<EOF | ceph orch apply -i -
    service_type: mon
    placement:
      host_pattern: "mon*"
    ---
    service_type: osd
    service_id: default_drive_group
    placement:
      host_pattern: "osd*"
    data_devices:
      all: true
    EOF

.. _orchestrator-cli-service-spec-retrieve:

Retrieving the running Service Specification
--------------------------------------------

If the services have been started via ``ceph orch apply ...``, then directly changing
the Service Specification is complicated. Instead of attempting to change it directly,
we suggest exporting the running Service Specification by following these
instructions::

    ceph orch ls --service-name rgw.<realm>.<zone> --export > rgw.<realm>.<zone>.yaml
    ceph orch ls --service-type mgr --export > mgr.yaml
    ceph orch ls --export > cluster.yaml

The Specification can then be changed and re-applied as above.

Updating Service Specifications
-------------------------------

The Ceph Orchestrator maintains a declarative state of each
service in a ``ServiceSpec``. For certain operations, like updating
the RGW HTTP port, we need to update the existing specification:

1. List the current ``ServiceSpec``::

      ceph orch ls --service_name=<service-name> --export > myservice.yaml

2. Update the yaml file. (An example of an updated file follows this procedure.)

3. Apply the new ``ServiceSpec``::

      ceph orch apply -i myservice.yaml [--dry-run]

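For the RGW port update mentioned above, for example, the exported file of a hypothetical
service ``rgw.myrealm.myzone`` could be edited to set ``rgw_frontend_port`` before being
re-applied (the host name and port are placeholders)::

    service_type: rgw
    service_id: myrealm.myzone
    placement:
      hosts:
        - host1
    spec:
      rgw_frontend_port: 8080
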
.. _orchestrator-cli-placement-spec:

Daemon Placement
================

For the orchestrator to deploy a *service*, it needs to know where to deploy
*daemons*, and how many to deploy. This is the role of a placement
specification. Placement specifications can either be passed as command line arguments
or specified in a YAML file.

cephadm will not deploy daemons on hosts with the ``_no_schedule`` label; see :ref:`cephadm-special-host-labels`.

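For example, to keep cephadm from scheduling any daemons on a (hypothetical) host ``host5``::

    ceph orch host label add host5 _no_schedule
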
The **apply** command can be confusing. For this reason, we recommend using
YAML specifications.

Each ``ceph orch apply <service-name>`` command supersedes the one before it.
If you do not use the proper syntax, you will clobber your work. For example,
consider what happens when the following three commands are run in sequence::

    ceph orch apply mon host1
    ceph orch apply mon host2
    ceph orch apply mon host3

This results in only one host having a monitor applied to it: host3.

(The first command creates a monitor on host1. Then the second command
clobbers the monitor on host1 and creates a monitor on host2. Then the
third command clobbers the monitor on host2 and creates a monitor on
host3. At this point, there is a monitor ONLY on host3.)

To make certain that a monitor is applied to each of these three hosts,
run a command like this::

    ceph orch apply mon "host1,host2,host3"

There is another way to apply monitors to multiple hosts: a ``yaml`` file
can be used. Instead of using the ``ceph orch apply mon`` commands, run a
command of this form::

    ceph orch apply -i file.yaml

Here is a sample **file.yaml** file:

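A minimal monitor specification that places a monitor on each of the three hosts used
above might look like this::

    service_type: mon
    placement:
      hosts:
        - host1
        - host2
        - host3
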
Daemons can be explicitly placed on hosts by simply specifying them::

    ceph orch apply prometheus --placement="host1 host2 host3"

Or in YAML::

    service_type: prometheus
    placement:
      hosts:
        - host1
        - host2
        - host3

MONs and other services may require some enhanced network specifications::

    ceph orch daemon add mon --placement="myhost:[v2:1.2.3.4:3300,v1:1.2.3.4:6789]=name"

where ``[v2:1.2.3.4:3300,v1:1.2.3.4:6789]`` is the network address of the monitor
and ``=name`` specifies the name of the new monitor.

.. _orch-placement-by-labels:

Placement by labels
-------------------

Daemon placement can be limited to hosts that match a specific label. To add
the label ``mylabel`` to the appropriate hosts, run this command::

    ceph orch host label add *<hostname>* mylabel

To view the current hosts and labels, run this command:

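In its basic form, this is::

    ceph orch host ls
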
For example, to add ``mylabel`` to three hosts::

    ceph orch host label add host1 mylabel
    ceph orch host label add host2 mylabel
    ceph orch host label add host3 mylabel

The output of ``ceph orch host ls`` contains ``HOST``, ``ADDR``, ``LABELS``, and
``STATUS`` columns; after the commands above, ``mylabel`` appears in the ``LABELS``
column for host1, host2, and host3.

Now, tell cephadm to deploy daemons based on the label by running
this command::

    ceph orch apply prometheus --placement="label:mylabel"

Or in YAML::

    service_type: prometheus
    placement:
      label: "mylabel"

* See :ref:`orchestrator-host-labels`

Placement by pattern matching
-----------------------------

Daemons can be placed on hosts using a host pattern as well::

    ceph orch apply prometheus --placement='myhost[1-3]'

Or in YAML::

    service_type: prometheus
    placement:
      host_pattern: "myhost[1-3]"

To place a service on *all* hosts, use ``"*"``, as follows::

    ceph orch apply node-exporter --placement='*'

Or in YAML::

    service_type: node-exporter
    placement:
      host_pattern: "*"

Changing the number of daemons
------------------------------

By specifying ``count``, only the number of daemons specified will be created::

    ceph orch apply prometheus --placement=3

To deploy *daemons* on a subset of hosts, specify the count::

    ceph orch apply prometheus --placement="2 host1 host2 host3"

If the count is bigger than the number of hosts, cephadm deploys one per host::

    ceph orch apply prometheus --placement="3 host1 host2"

The command immediately above results in two Prometheus daemons.

YAML can also be used to specify limits, in the following way::

    service_type: prometheus
    placement:
      count: 3

YAML can also be used to specify limits on hosts::

    service_type: prometheus
    placement:
      count: 2
      hosts:
        - host1
        - host2
        - host3

.. _cephadm_co_location:

Co-location of daemons
----------------------

Cephadm supports the deployment of multiple daemons on the same host:

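This is controlled with the ``count_per_host`` placement property. As a sketch, the
following specification (the service type and label are only examples) runs two RGW
daemons on every host that carries the ``rgw`` label::

    service_type: rgw
    placement:
      label: rgw
      count_per_host: 2
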
The main reason for deploying multiple daemons per host is the additional
performance benefit of running multiple RGW and MDS daemons on the same host.

See also:

* :ref:`cephadm_mgr_co_location`.
* :ref:`cephadm-rgw-designated_gateways`.

This feature was introduced in Pacific.

Algorithm description
---------------------

Cephadm's declarative state consists of a list of service specifications
containing placement specifications.

Cephadm continually compares a list of daemons actually running in the cluster
against the list in the service specifications. Cephadm adds new daemons and
removes old daemons as necessary in order to conform to the service
specifications.

Cephadm does the following to maintain compliance with the service
specifications.

Cephadm first selects a list of candidate hosts. Cephadm seeks explicit host
names and selects them. If cephadm finds no explicit host names, it looks for
label specifications. If no label is defined in the specification, cephadm
selects hosts based on a host pattern. If no host pattern is defined, as a last
resort, cephadm selects all known hosts as candidates.

Cephadm is aware of existing daemons running services and tries to avoid moving
them.

Cephadm supports the deployment of a specific number of daemons per service.
Consider the following service specification:

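A specification of this kind might look like the following (an MDS service is used here
purely as an example; the essential pieces are the ``count`` and the ``label``)::

    service_type: mds
    service_id: myfs
    placement:
      count: 3
      label: myfs
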
This service specification instructs cephadm to deploy three daemons on hosts
labeled ``myfs`` across the cluster.

If there are fewer than three daemons deployed on the candidate hosts, cephadm
randomly chooses hosts on which to deploy new daemons.

If there are more than three daemons deployed on the candidate hosts, cephadm
removes existing daemons.

Finally, cephadm removes daemons on hosts that are outside of the list of
candidate hosts.

There is a special case that cephadm must consider.

If there are fewer hosts selected by the placement specification than
demanded by ``count``, cephadm will deploy only on the selected hosts.

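As a sketch, the following placement asks for three daemons but selects only two hosts,
so only two daemons are deployed (the host names are placeholders)::

    service_type: prometheus
    placement:
      count: 3
      hosts:
        - host1
        - host2
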
Extra Container Arguments
=========================

The arguments provided for extra container args are limited to whatever arguments are
available for a ``run`` command from whichever container engine you are using. Providing
any arguments the ``run`` command does not support (or invalid values for arguments) will
cause the daemon to fail to start.

Cephadm supports providing extra miscellaneous container arguments for
specific cases when they may be necessary. For example, if a user needed
to limit the number of CPUs their mon daemons make use of, they could apply
a spec like the following::

    service_type: mon
    placement:
      hosts:
        - host1
        - host2
        - host3
    extra_container_args:
      - "--cpus=2"

which would cause each mon daemon to be deployed with ``--cpus=2``.

.. _orch-rm:

Removing a service
==================

In order to remove a service, including the removal
of all daemons of that service, run::

    ceph orch rm <service-name>

For example::

    ceph orch rm rgw.myrgw

.. _cephadm-spec-unmanaged:

Disabling automatic deployment of daemons
=========================================

Cephadm supports disabling the automated deployment and removal of daemons on a
per-service basis. The CLI supports two commands for this.

In order to fully remove a service, see :ref:`orch-rm`.

Disabling automatic management of daemons
-----------------------------------------

To disable the automatic management of daemons, set ``unmanaged=True`` in the
:ref:`orchestrator-cli-service-spec` (``mgr.yaml``).

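For example, a ``mgr.yaml`` of roughly this form (the placement shown here is only an
illustration)::

    service_type: mgr
    unmanaged: true
    placement:
      label: mgr
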
Apply the specification::

    ceph orch apply -i mgr.yaml

After you apply this change in the Service Specification, cephadm will no
longer deploy any new daemons (even if the placement specification matches
additional hosts).

Deploying a daemon on a host manually
-------------------------------------

This workflow has a very limited use case and should only be used
in rare circumstances.

To manually deploy a daemon on a host, follow these steps:

Modify the service spec for a service by getting the
existing spec, adding ``unmanaged: true``, and applying the modified spec.

Then manually deploy the daemon using the following::

    ceph orch daemon add <daemon-type> --placement=<placement spec>

For example::

    ceph orch daemon add mgr --placement=my_host

Removing ``unmanaged: true`` from the service spec will
enable the reconciliation loop for this service and will
potentially lead to the removal of the daemon, depending
on the placement spec.

Removing a daemon from a host manually
--------------------------------------

To manually remove a daemon, run a command of the following form::

    ceph orch daemon rm <daemon name>... [--force]

For example::

    ceph orch daemon rm mgr.my_host.xyzxyz

For managed services (``unmanaged=False``), cephadm will automatically
deploy a new daemon a few seconds later.

* See :ref:`cephadm-osd-declarative` for special handling of unmanaged OSDs.
* See also :ref:`cephadm-pause`