A service is a group of daemons configured together. To see the status of one
of the services running in the Ceph cluster, do the following:

#. Use the command line to print a list of services.
#. Locate the service whose status you want to check.
#. Print the status of the service.

The following command prints a list of services known to the orchestrator. To
limit the output to services only on a specified host, use the optional
``--host`` parameter. To limit the output to services of only a particular
type, use the optional ``--service_type`` parameter (mon, osd, mgr, mds, rgw)::

    ceph orch ls [--service_type type] [--service_name name] [--export] [--format f] [--refresh]

Discover the status of a particular service or daemon::

    ceph orch ls --service_type type --service_name <name> [--refresh]
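
For example, to check a hypothetical MDS service named ``mds.myfs`` (the
service name here is only a placeholder), you might run::

    ceph orch ls --service_type mds --service_name mds.myfs --refresh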

To export the service specifications known to the orchestrator, run the
following command::

    ceph orch ls --export

The service specifications are exported in YAML format, and that YAML can be
used with the ``ceph orch apply -i`` command.

For information about retrieving the specifications of single services (including examples of commands), see :ref:`orchestrator-cli-service-spec-retrieve`.
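
A minimal round trip, assuming the exported specifications are saved in a file
named ``specs.yaml`` (the filename is arbitrary), might look like this::

    ceph orch ls --export > specs.yaml
    ceph orch apply -i specs.yaml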

Daemon Status
=============

A daemon is a systemd unit that is running and part of a service.

To see the status of a daemon, do the following:

#. Print a list of all daemons known to the orchestrator.
#. Query the status of the target daemon.

First, print a list of all daemons known to the orchestrator::

    ceph orch ps [--hostname host] [--daemon_type type] [--service_name name] [--daemon_id id] [--format f] [--refresh]

Then query the status of a particular service instance (mon, osd, mds, rgw).
For OSDs the id is the numeric OSD ID. For MDS services the id is the file
system name::

    ceph orch ps --daemon_type osd --daemon_id 0
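
MDS daemons can also be filtered by their service name. Assuming a
hypothetical file system named ``myfs`` (whose MDS service would be named
``mds.myfs``), you might run::

    ceph orch ps --service_name mds.myfs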

.. _orchestrator-cli-service-spec:

Service Specification
=====================

A *Service Specification* is a data structure that is used to specify the
deployment of services. Here is an example of a service specification in YAML::

    service_type: rgw
    service_id: realm.zone
    placement:
      hosts:
        - host1
        - host2
        - host3
    unmanaged: false

In this example, the properties of this service specification are:

* ``service_type``
    The type of the service. Needs to be either a Ceph
    service (``mon``, ``crash``, ``mds``, ``mgr``, ``osd`` or
    ``rbd-mirror``), a gateway (``nfs`` or ``rgw``), part of the
    monitoring stack (``alertmanager``, ``grafana``, ``node-exporter`` or
    ``prometheus``) or (``container``) for custom containers.
* ``service_id``
    The name of the service.
* ``placement``
    See :ref:`orchestrator-cli-placement-spec`.
* ``unmanaged``
    If set to ``true``, the orchestrator will not deploy nor remove
    any daemon associated with this service. Placement and all other properties
    will be ignored. This is useful if you do not want this service to be
    managed temporarily. For cephadm, see :ref:`cephadm-spec-unmanaged`.

Each service type can have additional service-specific properties.

Service specifications of type ``mon``, ``mgr``, and the monitoring
types do not require a ``service_id``.
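
For instance, a minimal sketch of a specification for three monitors needs no
``service_id`` (the ``count`` placement used here is explained in
:ref:`orchestrator-cli-placement-spec`)::

    service_type: mon
    placement:
      count: 3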

A service of type ``osd`` is described in :ref:`drivegroups`.

Many service specifications can be applied at once using ``ceph orch apply -i``
by submitting a multi-document YAML file::

    cat <<EOF | ceph orch apply -i -
    service_type: mon
    placement:
      host_pattern: "mon*"
    ---
    service_type: mgr
    placement:
      host_pattern: "mgr*"
    ---
    service_type: osd
    service_id: default_drive_group
    placement:
      host_pattern: "osd*"
    data_devices:
      all: true
    EOF

.. _orchestrator-cli-service-spec-retrieve:

Retrieving the running Service Specification
---------------------------------------------

If the services have been started via ``ceph orch apply...``, then directly changing
the Service Specification is complicated. Instead of attempting to directly change
the Service Specification, we suggest exporting the running Service Specification by
following these instructions::

    ceph orch ls --service-name rgw.<realm>.<zone> --export > rgw.<realm>.<zone>.yaml
    ceph orch ls --service-type mgr --export > mgr.yaml
    ceph orch ls --export > cluster.yaml

The Specification can then be changed and re-applied as above.
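
For example, after editing the exported ``mgr.yaml`` from above, the modified
specification can be re-applied with::

    ceph orch apply -i mgr.yaml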

.. _orchestrator-cli-placement-spec:

Placement Specification
=======================

For the orchestrator to deploy a *service*, it needs to know where to deploy
*daemons*, and how many to deploy. This is the role of a placement
specification. Placement specifications can either be passed as command line arguments
or specified in a YAML file.

.. note::

   cephadm will not deploy daemons on hosts with the ``_no_schedule`` label; see :ref:`cephadm-special-host-labels`.

The **apply** command can be confusing. For this reason, we recommend using
YAML specifications.

Each ``ceph orch apply <service-name>`` command supersedes the one before it.
If you do not use the proper syntax, you will clobber your work as you go.

For example::

    ceph orch apply mon host1
    ceph orch apply mon host2
    ceph orch apply mon host3

This results in only one host having a monitor applied to it: host 3.

(The first command creates a monitor on host1. Then the second command
clobbers the monitor on host1 and creates a monitor on host2. Then the
third command clobbers the monitor on host2 and creates a monitor on
host3. In this scenario, at this point, there is a monitor ONLY on
host3.)

To make certain that a monitor is applied to each of these three hosts,
run a command like this::

    ceph orch apply mon "host1,host2,host3"

There is another way to apply monitors to multiple hosts: a ``yaml`` file
can be used. Instead of using the ``ceph orch apply mon`` commands, run a
command of this form::

    ceph orch apply -i file.yaml

Here is a sample **file.yaml** file::

    service_type: mon
    placement:
      hosts:
        - host1
        - host2
        - host3

Explicit placements
-------------------

Daemons can be explicitly placed on hosts by simply specifying them::

    orch apply prometheus --placement="host1 host2 host3"

Or in YAML::

    service_type: prometheus
    placement:
      hosts:
        - host1
        - host2
        - host3

MONs and other services may require some enhanced network specifications::

    orch daemon add mon --placement="myhost:[v2:1.2.3.4:3300,v1:1.2.3.4:6789]=name"

where ``[v2:1.2.3.4:3300,v1:1.2.3.4:6789]`` is the network address of the monitor
and ``=name`` specifies the name of the new monitor.
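
A monitor can also be added by giving just the host and a plain IP address or
CIDR network. This is a sketch; ``newhost`` and the address are placeholders::

    orch daemon add mon newhost:10.1.2.123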

.. _orch-placement-by-labels:

Placement by labels
-------------------

Daemon placement can be limited to hosts that match a specific label. To add
the label ``mylabel`` to the appropriate hosts, run this command::

    ceph orch host label add *<hostname>* mylabel

To view the current hosts and labels, run this command::

    ceph orch host ls

For example::

    ceph orch host label add host1 mylabel
    ceph orch host label add host2 mylabel
    ceph orch host label add host3 mylabel
    ceph orch host ls

    HOST   ADDR   LABELS   STATUS
    host1          mylabel
    host2          mylabel
    host3          mylabel

Now, tell cephadm to deploy daemons based on the label by running
this command::

    orch apply prometheus --placement="label:mylabel"

Or in YAML::

    service_type: prometheus
    placement:
      label: "mylabel"

* See :ref:`orchestrator-host-labels`

Placement by pattern matching
-----------------------------

Daemons can also be placed on hosts that match a pattern::

    orch apply prometheus --placement='myhost[1-3]'

Or in YAML::

    service_type: prometheus
    placement:
      host_pattern: "myhost[1-3]"

To place a service on *all* hosts, use ``"*"``::

    orch apply node-exporter --placement='*'

Or in YAML::

    service_type: node-exporter
    placement:
      host_pattern: "*"

Changing the number of daemons
------------------------------

By specifying ``count``, only the number of daemons specified will be created::

    orch apply prometheus --placement=3

To deploy *daemons* on a subset of hosts, specify the count::

    orch apply prometheus --placement="2 host1 host2 host3"

If the count is bigger than the number of hosts, cephadm deploys one per host::

    orch apply prometheus --placement="3 host1 host2"

The command immediately above results in two Prometheus daemons.

YAML can also be used to specify limits, in the following way::

    service_type: prometheus
    placement:
      count: 3

YAML can also be used to specify limits on hosts::

    service_type: prometheus
    placement:
      count: 2
      hosts:
        - host1
        - host2
        - host3

Updating Service Specifications
===============================

The Ceph Orchestrator maintains a declarative state of each
service in a ``ServiceSpec``. For certain operations, like updating
the RGW HTTP port, we need to update the existing
specification.

1. List the current ``ServiceSpec``::

      ceph orch ls --service_name=<service-name> --export > myservice.yaml

2. Update the yaml file (a sketch of an updated specification follows these
   steps).

3. Apply the new ``ServiceSpec``::

      ceph orch apply -i myservice.yaml [--dry-run]
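
As a sketch of step 2, an exported RGW specification with an updated HTTP port
might look like the following. The service id, placement, and port value are
placeholders; ``rgw_frontend_port`` is the RGW spec property that controls the
HTTP port::

    service_type: rgw
    service_id: myrealm.myzone
    placement:
      count: 2
    spec:
      rgw_frontend_port: 8080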

Deployment of Daemons
=====================

Cephadm uses a declarative state to define the layout of the cluster. This
state consists of a list of service specifications containing placement
specifications (see :ref:`orchestrator-cli-service-spec`).

Cephadm continually compares a list of daemons actually running in the cluster
against the list in the service specifications. Cephadm adds new daemons and
removes old daemons as necessary in order to conform to the service
specifications.

Cephadm does the following to maintain compliance with the service
specifications.

Cephadm first selects a list of candidate hosts. Cephadm seeks explicit host
names and selects them. If cephadm finds no explicit host names, it looks for
label specifications. If no label is defined in the specification, cephadm
selects hosts based on a host pattern. If no host pattern is defined, as a last
resort, cephadm selects all known hosts as candidates.

Cephadm is aware of existing daemons running services and tries to avoid moving
them.

Cephadm supports the deployment of a specific number of daemons. Consider the
following service specification::

    service_type: mds
    service_id: myfs
    placement:
      count: 3
      label: myfs

This service specification instructs cephadm to deploy three daemons on hosts
labeled ``myfs`` across the cluster.

If there are fewer than three daemons deployed on the candidate hosts, cephadm
randomly chooses hosts on which to deploy new daemons.

If there are more than three daemons deployed on the candidate hosts, cephadm
removes existing daemons.

Finally, cephadm removes daemons on hosts that are outside of the list of
candidate hosts.

There is a special case that cephadm must consider.

If there are fewer hosts selected by the placement specification than
demanded by ``count``, cephadm will deploy only on the selected hosts.
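
For example, a placement like the following (a sketch) selects only two hosts,
so only two daemons are deployed even though ``count`` is 3::

    service_type: prometheus
    placement:
      count: 3
      hosts:
        - host1
        - host2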

.. _cephadm-spec-unmanaged:

Disabling automatic deployment of daemons
==========================================

Cephadm supports disabling the automated deployment and removal of daemons on a
per-service basis. The CLI supports two commands for this.

Disabling automatic management of daemons
------------------------------------------

To disable the automatic management of daemons, set ``unmanaged=True`` in the
:ref:`orchestrator-cli-service-spec` (``mgr.yaml``).

``mgr.yaml``::

    service_type: mgr
    unmanaged: true
    placement:
      label: mgr

Apply the specification::

    ceph orch apply -i mgr.yaml

After you apply this change in the Service Specification, cephadm will no
longer deploy any new daemons (even if the placement specification matches
additional hosts).
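
To resume automatic management later, set ``unmanaged: false`` (or remove the
line) in the specification and re-apply it::

    ceph orch apply -i mgr.yaml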

Deploying a daemon on a host manually
--------------------------------------

.. note::

   This workflow has a very limited use case and should only be used
   in rare circumstances.

To manually deploy a daemon on a host, follow these steps:

Modify the service spec for a service by getting the
existing spec, adding ``unmanaged: true``, and applying the modified spec.

Then manually deploy the daemon using the following::

    ceph orch daemon add <daemon-type> --placement=<placement spec>

For example::

    ceph orch daemon add mgr --placement=my_host

.. note::

   Removing ``unmanaged: true`` from the service spec will
   enable the reconciliation loop for this service and will
   potentially lead to the removal of the daemon, depending
   on the placement spec.

Removing a daemon from a host manually
---------------------------------------

To manually remove a daemon, run a command of the following form::

    ceph orch daemon rm <daemon name>... [--force]

For example::

    ceph orch daemon rm mgr.my_host.xyzxyz

.. note::

   For managed services (``unmanaged=False``), cephadm will automatically
   deploy a new daemon a few seconds later.
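
The exact daemon names can be taken from the output of ``ceph orch ps``. For
example, to list only the mgr daemons before removing one, you might run::

    ceph orch ps --daemon_type mgr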

* See :ref:`cephadm-osd-declarative` for special handling of unmanaged OSDs.
* See also :ref:`cephadm-pause`.