A service is a group of daemons that are configured together. See the chapters
on the individual service types for details.

Service Status
==============

To see the status of one of the services running in the Ceph cluster, do the
following:

#. Use the command line to print a list of services.
#. Locate the service whose status you want to check.
#. Print the status of the service.

The following command prints a list of services known to the orchestrator. To
limit the output to services of a particular type, use the optional
``--service_type`` parameter (mon, osd, mgr, mds, rgw); to limit the output to
a single service, use the optional ``--service_name`` parameter::

    ceph orch ls [--service_type type] [--service_name name] [--export] [--format f] [--refresh]

Discover the status of a particular service or daemon::

    ceph orch ls --service_type type --service_name <name> [--refresh]

To export the service specifications known to the orchestrator, run
``ceph orch ls --export``. The service specifications exported with this
command are exported as YAML, and that YAML can be used with the
``ceph orch apply -i`` command.

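For example, the full set of running specifications can be saved to a file and
re-applied later (a minimal sketch; the file name ``cluster.yaml`` is
illustrative)::

    ceph orch ls --export > cluster.yaml
    ceph orch apply -i cluster.yaml
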
For information about retrieving the specifications of single services (including examples of commands), see :ref:`orchestrator-cli-service-spec-retrieve`.

Daemon Status
=============

A daemon is a systemd unit that is running and is part of a service.

To see the status of a daemon, do the following:

#. Print a list of all daemons known to the orchestrator.
#. Query the status of the target daemon.

First, print a list of all daemons known to the orchestrator::

    ceph orch ps [--hostname host] [--daemon_type type] [--service_name name] [--daemon_id id] [--format f] [--refresh]

Then query the status of a particular service instance (mon, osd, mds, rgw).
For OSDs the id is the numeric OSD ID. For MDS services the id is the file
system name::

    ceph orch ps --daemon_type osd --daemon_id 0

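The same listing can also be produced in a machine-readable form for use in
scripts. A small sketch, assuming the manager daemons are being inspected;
``yaml`` is one of the standard values accepted by ``--format``::

    ceph orch ps --daemon_type mgr --format yaml
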
.. _orchestrator-cli-service-spec:

Service Specification
=====================

A *Service Specification* is a data structure that is used to specify the
deployment of services. Here is an example of a service specification in
YAML::

    service_type: rgw
    service_id: realm.zone
    placement:
      hosts:
        - host1
        - host2
        - host3
    spec:
      # Additional service specific attributes.

In this example, the properties of this service specification are:

.. py:currentmodule:: ceph.deployment.service_spec

.. autoclass:: ServiceSpec

Each service type can have additional service-specific properties.

Service specifications of type ``mon``, ``mgr``, and the monitoring
types do not require a ``service_id``.

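For instance, a specification like the following sketch is sufficient for a
``mon`` service (hypothetical values; only ``service_type`` and a placement
are given, and no ``service_id``)::

    service_type: mon
    placement:
      count: 3
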
A service of type ``osd`` is described in :ref:`drivegroups`.

Many service specifications can be applied at once using ``ceph orch apply -i``
by submitting a multi-document YAML file::

    cat <<EOF | ceph orch apply -i -
    service_type: mon
    placement:
      host_pattern: "mon*"
    ---
    service_type: mgr
    placement:
      host_pattern: "mgr*"
    ---
    service_type: osd
    service_id: default_drive_group
    placement:
      host_pattern: "osd*"
    data_devices:
      all: true
    EOF

.. _orchestrator-cli-service-spec-retrieve:

Retrieving the running Service Specification
--------------------------------------------

If the services have been started via ``ceph orch apply...``, then directly
changing the running Service Specification is complicated. Instead of
attempting to change it directly, we suggest exporting the running Service
Specification with commands of the following form::

    ceph orch ls --service-name rgw.<realm>.<zone> --export > rgw.<realm>.<zone>.yaml
    ceph orch ls --service-type mgr --export > mgr.yaml
    ceph orch ls --export > cluster.yaml

The specification can then be changed and re-applied as described above.

Updating Service Specifications
-------------------------------

The Ceph Orchestrator maintains a declarative state of each service in a
``ServiceSpec``. For certain operations, like updating the RGW HTTP port, we
need to update the existing specification.

1. List the current ``ServiceSpec``::

    ceph orch ls --service_name=<service-name> --export > myservice.yaml

2. Update the yaml file:

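For example, to change the RGW HTTP port mentioned above, the exported file
might be edited so that the RGW-specific ``spec`` section carries the new
frontend port (a sketch with hypothetical values; the service id ``foo`` and
port ``8080`` are illustrative)::

    service_type: rgw
    service_id: foo
    placement:
      count: 2
    spec:
      rgw_frontend_port: 8080
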
3. Apply the new ``ServiceSpec``::

    ceph orch apply -i myservice.yaml [--dry-run]

.. _orchestrator-cli-placement-spec:

Placement Specification
=======================

For the orchestrator to deploy a *service*, it needs to know where to deploy
*daemons*, and how many to deploy. This is the role of a placement
specification. Placement specifications can either be passed as command line
arguments or in a YAML file.

Note that cephadm will not deploy daemons on hosts with the ``_no_schedule``
label; see :ref:`cephadm-special-host-labels`.

The **apply** command can be confusing. For this reason, we recommend using
YAML specifications.

Each ``ceph orch apply <service-name>`` command supersedes the one before it.
If you do not use the proper syntax, you will clobber your work as you go.

For example::

    ceph orch apply mon host1
    ceph orch apply mon host2
    ceph orch apply mon host3

This results in only one host having a monitor applied to it: host3.

(The first command creates a monitor on host1. Then the second command
clobbers the monitor on host1 and creates a monitor on host2. Then the
third command clobbers the monitor on host2 and creates a monitor on
host3. In this scenario, at this point, there is a monitor ONLY on host3.)

To make certain that a monitor is applied to each of these three hosts,
run a command like this::

    ceph orch apply mon "host1,host2,host3"

There is another way to apply monitors to multiple hosts: a ``yaml`` file
can be used. Instead of using the "ceph orch apply mon" commands, run a
command of this form::

    ceph orch apply -i file.yaml

Here is a sample **file.yaml** file:

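A minimal sketch of such a file, assuming the same three hosts used above::

    service_type: mon
    placement:
      hosts:
        - host1
        - host2
        - host3
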
Explicit placements
-------------------

Daemons can be explicitly placed on hosts by simply specifying them::

    ceph orch apply prometheus --placement="host1 host2 host3"

Or in YAML::

    service_type: prometheus
    placement:
      hosts:
        - host1
        - host2
        - host3

MONs and other services may require some enhanced network specifications::

    ceph orch daemon add mon --placement="myhost:[v2:1.2.3.4:3300,v1:1.2.3.4:6789]=name"

where ``[v2:1.2.3.4:3300,v1:1.2.3.4:6789]`` is the network address of the
monitor and ``=name`` specifies the name of the new monitor.

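If only the monitor's IP address needs to be pinned, the placement can also be
given in the shorter ``host:ip`` form (a sketch; the host name and address are
illustrative)::

    ceph orch daemon add mon newhost1:10.1.2.123
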
.. _orch-placement-by-labels:

Placement by labels
-------------------

Daemon placement can be limited to hosts that match a specific label. To add
the label ``mylabel`` to the appropriate hosts, run this command::

    ceph orch host label add *<hostname>* mylabel

To view the current hosts and labels, run this command::

    ceph orch host ls

For example::

    ceph orch host label add host1 mylabel
    ceph orch host label add host2 mylabel
    ceph orch host label add host3 mylabel
    ceph orch host ls

The last command then shows the labels::

    HOST   ADDR   LABELS   STATUS
    host1          mylabel
    host2          mylabel
    host3          mylabel

Now, tell cephadm to deploy daemons based on the label by running
this command::

    ceph orch apply prometheus --placement="label:mylabel"

Or in YAML::

    service_type: prometheus
    placement:
      label: "mylabel"

* See :ref:`orchestrator-host-labels`

Placement by pattern matching
-----------------------------

Daemons can be placed on hosts using a host pattern as well::

    ceph orch apply prometheus --placement='myhost[1-3]'

Or in YAML::

    service_type: prometheus
    placement:
      host_pattern: "myhost[1-3]"

To place a service on *all* hosts, use ``"*"``::

    ceph orch apply node-exporter --placement='*'

Or in YAML::

    service_type: node-exporter
    placement:
      host_pattern: "*"

Changing the number of daemons
------------------------------

By specifying ``count``, only the number of daemons specified will be created::

    ceph orch apply prometheus --placement=3

To deploy *daemons* on a subset of hosts, specify the count::

    ceph orch apply prometheus --placement="2 host1 host2 host3"

If the count is bigger than the number of hosts, cephadm deploys one daemon
per host::

    ceph orch apply prometheus --placement="3 host1 host2"

The command immediately above results in two Prometheus daemons.

YAML can also be used to specify the count::

    service_type: prometheus
    placement:
      count: 3

YAML can also be used to specify a count together with explicit hosts::

    service_type: prometheus
    placement:
      count: 2
      hosts:
        - host1
        - host2
        - host3

Algorithm description
---------------------

Cephadm's declarative state consists of a list of service specifications
containing placement specifications.

Cephadm continually compares the list of daemons actually running in the
cluster against the list in the service specifications. Cephadm adds new
daemons and removes old daemons as necessary in order to conform to the
service specifications.

Cephadm does the following to maintain compliance with the service
specifications.

Cephadm first selects a list of candidate hosts. Cephadm seeks explicit host
names and selects them. If cephadm finds no explicit host names, it looks for
label specifications. If no label is defined in the specification, cephadm
selects hosts based on a host pattern. If no host pattern is defined, as a last
resort, cephadm selects all known hosts as candidates.

Cephadm is aware of existing daemons running services and tries to avoid
moving them.

Cephadm supports deploying a specific number of daemons for a service.
Consider the following service specification:

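A sketch of such a specification (the ``count`` and ``label`` values follow
from the description below)::

    service_type: mds
    service_id: myfs
    placement:
      count: 3
      label: myfs
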
This service specification instructs cephadm to deploy three daemons on hosts
labeled ``myfs`` across the cluster.

If there are fewer than three daemons deployed on the candidate hosts, cephadm
randomly chooses hosts on which to deploy new daemons.

If there are more than three daemons deployed on the candidate hosts, cephadm
removes existing daemons.

Finally, cephadm removes daemons on hosts that are outside of the list of
candidate hosts.

There is a special case that cephadm must consider.

If there are fewer hosts selected by the placement specification than
demanded by ``count``, cephadm will deploy only on the selected hosts.

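For example, with a specification like the following sketch (hypothetical
values), only two daemons are deployed, one on each of the two named hosts,
even though ``count`` asks for five::

    service_type: prometheus
    placement:
      count: 5
      hosts:
        - host1
        - host2
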
.. _orch-rm:

Removing a service
==================

In order to remove a service, including the removal of all daemons of that
service, run a command of this form::

    ceph orch rm <service-name>

For example::

    ceph orch rm rgw.myrgw

.. _cephadm-spec-unmanaged:

Disabling automatic deployment of daemons
=========================================

Cephadm supports disabling the automated deployment and removal of daemons on a
per service basis. The CLI supports two commands for this.

In order to fully remove a service, see :ref:`orch-rm`.

Disabling automatic management of daemons
-----------------------------------------

To disable the automatic management of daemons, set ``unmanaged=True`` in the
:ref:`orchestrator-cli-service-spec` (``mgr.yaml``).

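A sketch of such a ``mgr.yaml``; the ``label: mgr`` placement is illustrative::

    service_type: mgr
    unmanaged: true
    placement:
      label: mgr
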
Apply the modified specification::

    ceph orch apply -i mgr.yaml

After you apply this change in the Service Specification, cephadm will no
longer deploy any new daemons (even if the placement specification matches
additional hosts).

Deploying a daemon on a host manually
-------------------------------------

This workflow has a very limited use case and should only be used
in rare circumstances.

To manually deploy a daemon on a host, follow these steps:

Modify the service spec for a service by getting the
existing spec, adding ``unmanaged: true``, and applying the modified spec.

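A sketch of that first step for the ``mgr`` service (the file name is
illustrative)::

    ceph orch ls --service_name=mgr --export > mgr.yaml
    # edit mgr.yaml and add "unmanaged: true", then re-apply it:
    ceph orch apply -i mgr.yaml
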
Then manually deploy the daemon using the following::

    ceph orch daemon add <daemon-type> --placement=<placement spec>

For example::

    ceph orch daemon add mgr --placement=my_host

Removing ``unmanaged: true`` from the service spec will
enable the reconciliation loop for this service and will
potentially lead to the removal of the daemon, depending
on the placement spec.

Removing a daemon from a host manually
--------------------------------------

To manually remove a daemon, run a command of the following form::

    ceph orch daemon rm <daemon name>... [--force]

For example::

    ceph orch daemon rm mgr.my_host.xyzxyz

For managed services (``unmanaged=False``), cephadm will automatically
deploy a new daemon a few seconds later.

* See :ref:`cephadm-osd-declarative` for special handling of unmanaged OSDs.
* See also :ref:`cephadm-pause`