==================
Service Management
==================

A service is a group of daemons configured together. See these chapters
for details on individual services:

.. toctree::
    :maxdepth: 1

    mon
    mgr
    osd
    rgw
    mds
    nfs
    iscsi
    custom-container
    monitoring
    snmp-gateway

Service Status
==============

To see the status of one of the services running in the Ceph cluster, do the
following:

#. Use the command line to print a list of services.
#. Locate the service whose status you want to check.
#. Print the status of the service.

The following command prints a list of services known to the orchestrator. To
limit the output to services only on a specified host, use the optional
``--host`` parameter. To limit the output to services of only a particular
type, use the optional ``--type`` parameter (mon, osd, mgr, mds, rgw):

   .. prompt:: bash #

     ceph orch ls [--service_type type] [--service_name name] [--export] [--format f] [--refresh]

Discover the status of a particular service or daemon:

   .. prompt:: bash #

     ceph orch ls --service_type type --service_name <name> [--refresh]
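
For example, to check the status of a (hypothetical) RGW service named
``rgw.myrealm.myzone``, run a command of this form:

   .. prompt:: bash #

     ceph orch ls --service_type rgw --service_name rgw.myrealm.myzone --refresh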

To export the service specifications known to the orchestrator, run the
following command:

   .. prompt:: bash #

     ceph orch ls --export

The service specifications are exported in YAML format, and that YAML can be
used with the ``ceph orch apply -i`` command.

For information about retrieving the specifications of single services (including examples of commands), see :ref:`orchestrator-cli-service-spec-retrieve`.

Daemon Status
=============

A daemon is a systemd unit that is running and part of a service.

To see the status of a daemon, do the following:

#. Print a list of all daemons known to the orchestrator.
#. Query the status of the target daemon.

First, print a list of all daemons known to the orchestrator:

   .. prompt:: bash #

     ceph orch ps [--hostname host] [--daemon_type type] [--service_name name] [--daemon_id id] [--format f] [--refresh]

Then query the status of a particular service instance (mon, osd, mds, rgw).
For OSDs the id is the numeric OSD ID. For MDS services the id is the file
system name:

   .. prompt:: bash #

     ceph orch ps --daemon_type osd --daemon_id 0
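
The same command can also filter by host. For example, to list all daemons on
a (hypothetical) host named ``host1``:

   .. prompt:: bash #

     ceph orch ps --hostname host1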

.. _orchestrator-cli-service-spec:

Service Specification
=====================

A *Service Specification* is a data structure that is used to specify the
deployment of services. Here is an example of a service specification in YAML:

.. code-block:: yaml

    service_type: rgw
    service_id: realm.zone
    placement:
      hosts:
        - host1
        - host2
        - host3
    unmanaged: false
    networks:
    - 192.169.142.0/24
    spec:
      # Additional service specific attributes.

In this example, the properties of this service specification are:

.. py:currentmodule:: ceph.deployment.service_spec

.. autoclass:: ServiceSpec
   :members:

Each service type can have additional service-specific properties.

Service specifications of type ``mon``, ``mgr``, and the monitoring
types do not require a ``service_id``.
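
For example, a ``mon`` specification can be as simple as this (a sketch; the
host names are illustrative):

.. code-block:: yaml

    service_type: mon
    placement:
      hosts:
        - host1
        - host2
        - host3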

A service of type ``osd`` is described in :ref:`drivegroups`.

Many service specifications can be applied at once using ``ceph orch apply -i``
by submitting a multi-document YAML file::

    cat <<EOF | ceph orch apply -i -
    service_type: mon
    placement:
      host_pattern: "mon*"
    ---
    service_type: mgr
    placement:
      host_pattern: "mgr*"
    ---
    service_type: osd
    service_id: default_drive_group
    placement:
      host_pattern: "osd*"
    data_devices:
      all: true
    EOF

.. _orchestrator-cli-service-spec-retrieve:

Retrieving the running Service Specification
--------------------------------------------

If the services have been started via ``ceph orch apply...``, then directly changing
the Service Specification is complicated. Instead of attempting to directly change
the Service Specification, we suggest exporting the running Service Specification by
following these instructions:

   .. prompt:: bash #

     ceph orch ls --service-name rgw.<realm>.<zone> --export > rgw.<realm>.<zone>.yaml
     ceph orch ls --service-type mgr --export > mgr.yaml
     ceph orch ls --export > cluster.yaml

The Specification can then be changed and re-applied as above.
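
For example, after editing the exported ``mgr.yaml``, re-apply it:

   .. prompt:: bash #

     ceph orch apply -i mgr.yaml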

Updating Service Specifications
-------------------------------

The Ceph Orchestrator maintains a declarative state of each
service in a ``ServiceSpec``. For certain operations, like updating
the RGW HTTP port, we need to update the existing
specification.

1. List the current ``ServiceSpec``:

   .. prompt:: bash #

     ceph orch ls --service_name=<service-name> --export > myservice.yaml

2. Update the yaml file (a sketch of an edited file follows these steps):

   .. prompt:: bash #

     vi myservice.yaml

3. Apply the new ``ServiceSpec``:

   .. prompt:: bash #

     ceph orch apply -i myservice.yaml [--dry-run]
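
As a sketch of step 2, an edited ``myservice.yaml`` for an RGW service that
changes the HTTP port might look like this (the service id, hosts, and port
are illustrative):

.. code-block:: yaml

    service_type: rgw
    service_id: myrealm.myzone
    placement:
      hosts:
        - host1
        - host2
    spec:
      rgw_frontend_port: 8080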

.. _orchestrator-cli-placement-spec:

Daemon Placement
================

For the orchestrator to deploy a *service*, it needs to know where to deploy
*daemons*, and how many to deploy. This is the role of a placement
specification. Placement specifications can either be passed as command line
arguments or in a YAML file.

.. note::

   cephadm will not deploy daemons on hosts with the ``_no_schedule`` label; see :ref:`cephadm-special-host-labels`.

.. note::

   The **apply** command can be confusing. For this reason, we recommend using
   YAML specifications.

   Each ``ceph orch apply <service-name>`` command supersedes the one before it.
   If you do not use the proper syntax, you will clobber your work
   as you go.

   For example:

   .. prompt:: bash #

     ceph orch apply mon host1
     ceph orch apply mon host2
     ceph orch apply mon host3

   This results in only one host having a monitor applied to it: host3.

   (The first command creates a monitor on host1. Then the second command
   clobbers the monitor on host1 and creates a monitor on host2. Then the
   third command clobbers the monitor on host2 and creates a monitor on
   host3. In this scenario, at this point, there is a monitor ONLY on
   host3.)

   To make certain that a monitor is applied to each of these three hosts,
   run a command like this:

   .. prompt:: bash #

     ceph orch apply mon "host1,host2,host3"

   There is another way to apply monitors to multiple hosts: a ``yaml`` file
   can be used. Instead of using the "ceph orch apply mon" commands, run a
   command of this form:

   .. prompt:: bash #

     ceph orch apply -i file.yaml

   Here is a sample **file.yaml** file:

   .. code-block:: yaml

     service_type: mon
     placement:
       hosts:
         - host1
         - host2
         - host3

Explicit placements
-------------------

Daemons can be explicitly placed on hosts by simply specifying them:

   .. prompt:: bash #

     orch apply prometheus --placement="host1 host2 host3"

Or in YAML:

.. code-block:: yaml

    service_type: prometheus
    placement:
      hosts:
        - host1
        - host2
        - host3

MONs and other services may require some enhanced network specifications:

   .. prompt:: bash #

     orch daemon add mon --placement="myhost:[v2:1.2.3.4:3300,v1:1.2.3.4:6789]=name"

where ``[v2:1.2.3.4:3300,v1:1.2.3.4:6789]`` is the network address of the monitor
and ``=name`` specifies the name of the new monitor.

.. _orch-placement-by-labels:

Placement by labels
-------------------

Daemon placement can be limited to hosts that match a specific label. To apply
the label ``mylabel`` to the appropriate hosts, run this command:

   .. prompt:: bash #

     ceph orch host label add *<hostname>* mylabel

To view the current hosts and labels, run this command:

   .. prompt:: bash #

     ceph orch host ls

For example:

   .. prompt:: bash #

     ceph orch host label add host1 mylabel
     ceph orch host label add host2 mylabel
     ceph orch host label add host3 mylabel
     ceph orch host ls

   .. code-block:: bash

     HOST   ADDR   LABELS    STATUS
     host1         mylabel
     host2         mylabel
     host3         mylabel
     host4
     host5

Now, tell cephadm to deploy daemons based on the label by running
this command:

   .. prompt:: bash #

     orch apply prometheus --placement="label:mylabel"

Or in YAML:

.. code-block:: yaml

    service_type: prometheus
    placement:
      label: "mylabel"

* See :ref:`orchestrator-host-labels`

Placement by pattern matching
-----------------------------

Daemons can also be placed on hosts that match a pattern:

   .. prompt:: bash #

     orch apply prometheus --placement='myhost[1-3]'

Or in YAML:

.. code-block:: yaml

    service_type: prometheus
    placement:
      host_pattern: "myhost[1-3]"

To place a service on *all* hosts, use ``"*"``:

   .. prompt:: bash #

     orch apply node-exporter --placement='*'

Or in YAML:

.. code-block:: yaml

    service_type: node-exporter
    placement:
      host_pattern: "*"

Changing the number of daemons
------------------------------

By specifying ``count``, only the number of daemons specified will be created:

   .. prompt:: bash #

     orch apply prometheus --placement=3

To deploy *daemons* on a subset of hosts, specify the count:

   .. prompt:: bash #

     orch apply prometheus --placement="2 host1 host2 host3"

If the count is greater than the number of hosts, cephadm deploys one per host:

   .. prompt:: bash #

     orch apply prometheus --placement="3 host1 host2"

The command immediately above results in two Prometheus daemons.

YAML can also be used to specify limits, in the following way:

.. code-block:: yaml

    service_type: prometheus
    placement:
      count: 3

YAML can also be used to specify limits on hosts:

.. code-block:: yaml

    service_type: prometheus
    placement:
      count: 2
      hosts:
        - host1
        - host2
        - host3

.. _cephadm_co_location:

Co-location of daemons
----------------------

Cephadm supports the deployment of multiple daemons on the same host:

.. code-block:: yaml

    service_type: rgw
    placement:
      label: rgw
      count_per_host: 2

The main reason for deploying multiple daemons per host is the additional
performance benefit of running multiple RGW or MDS daemons on the same host.
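
MDS daemons can be co-located in the same way. As a sketch (the service id and
label are illustrative), the following spec would place two MDS daemons on each
host labeled ``mds``:

.. code-block:: yaml

    service_type: mds
    service_id: myfs
    placement:
      label: mds
      count_per_host: 2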

See also:

* :ref:`cephadm_mgr_co_location`.
* :ref:`cephadm-rgw-designated_gateways`.

This feature was introduced in Pacific.

Algorithm description
---------------------

Cephadm's declarative state consists of a list of service specifications
containing placement specifications.

Cephadm continually compares a list of daemons actually running in the cluster
against the list in the service specifications. Cephadm adds new daemons and
removes old daemons as necessary in order to conform to the service
specifications.

Cephadm does the following to maintain compliance with the service
specifications.

Cephadm first selects a list of candidate hosts. Cephadm seeks explicit host
names and selects them. If cephadm finds no explicit host names, it looks for
label specifications. If no label is defined in the specification, cephadm
selects hosts based on a host pattern. If no host pattern is defined, as a last
resort, cephadm selects all known hosts as candidates.

Cephadm is aware of existing daemons running services and tries to avoid moving
them.

Cephadm supports the deployment of a specific number of daemons.
Consider the following service specification:

.. code-block:: yaml

    service_type: mds
    service_name: myfs
    placement:
      count: 3
      label: myfs

This service specification instructs cephadm to deploy three daemons on hosts
labeled ``myfs`` across the cluster.

If there are fewer than three daemons deployed on the candidate hosts, cephadm
randomly chooses hosts on which to deploy new daemons.

If there are more than three daemons deployed on the candidate hosts, cephadm
removes existing daemons.

Finally, cephadm removes daemons on hosts that are outside of the list of
candidate hosts.

.. note::

   There is a special case that cephadm must consider.

   If there are fewer hosts selected by the placement specification than
   demanded by ``count``, cephadm will deploy only on the selected hosts.

Extra Container Arguments
=========================

.. warning::
  The arguments provided for extra container args are limited to whatever arguments are
  available for a ``run`` command from whichever container engine you are using. Providing
  any arguments the ``run`` command does not support (or invalid values for arguments) will
  cause the daemon to fail to start.

Cephadm supports providing extra miscellaneous container arguments for
specific cases when they may be necessary. For example, if a user needed
to limit the number of CPUs their mon daemons use, they could apply
a spec like the following:

.. code-block:: yaml

    service_type: mon
    service_name: mon
    placement:
      hosts:
        - host1
        - host2
        - host3
    extra_container_args:
      - "--cpus=2"

which would cause each mon daemon to be deployed with ``--cpus=2``.

.. _orch-rm:

Removing a Service
==================

To remove a service, including the removal
of all daemons of that service, run:

.. prompt:: bash

  ceph orch rm <service-name>

For example:

.. prompt:: bash

  ceph orch rm rgw.myrgw

.. _cephadm-spec-unmanaged:

Disabling automatic deployment of daemons
=========================================

Cephadm supports disabling the automated deployment and removal of daemons on a
per-service basis. The CLI supports two commands for this.

In order to fully remove a service, see :ref:`orch-rm`.

Disabling automatic management of daemons
-----------------------------------------

To disable the automatic management of daemons, set ``unmanaged=True`` in the
:ref:`orchestrator-cli-service-spec` (``mgr.yaml``).

``mgr.yaml``:

.. code-block:: yaml

    service_type: mgr
    unmanaged: true
    placement:
      label: mgr

.. prompt:: bash #

   ceph orch apply -i mgr.yaml

.. note::

  After you apply this change in the Service Specification, cephadm will no
  longer deploy any new daemons (even if the placement specification matches
  additional hosts).

Deploying a daemon on a host manually
-------------------------------------

.. note::

  This workflow has a very limited use case and should only be used
  in rare circumstances.

To manually deploy a daemon on a host, follow these steps:

Modify the service spec for a service by getting the
existing spec, adding ``unmanaged: true``, and applying the modified spec.
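
As a sketch of this step (assuming an existing ``mgr`` service whose spec is
exported to ``mgr.yaml`` and then edited by hand to add ``unmanaged: true``):

   .. prompt:: bash #

     ceph orch ls --service_name mgr --export > mgr.yaml
     # edit mgr.yaml and add "unmanaged: true"
     ceph orch apply -i mgr.yaml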

Then manually deploy the daemon using the following:

   .. prompt:: bash #

     ceph orch daemon add <daemon-type> --placement=<placement spec>

For example:

   .. prompt:: bash #

     ceph orch daemon add mgr --placement=my_host

.. note::

  Removing ``unmanaged: true`` from the service spec will
  enable the reconciliation loop for this service and will
  potentially lead to the removal of the daemon, depending
  on the placement spec.

Removing a daemon from a host manually
--------------------------------------

To manually remove a daemon, run a command of the following form:

   .. prompt:: bash #

     ceph orch daemon rm <daemon name>... [--force]

For example:

   .. prompt:: bash #

     ceph orch daemon rm mgr.my_host.xyzxyz

.. note::

  For managed services (``unmanaged=False``), cephadm will automatically
  deploy a new daemon a few seconds later.

See also
--------

* See :ref:`cephadm-osd-declarative` for special handling of unmanaged OSDs.
* See also :ref:`cephadm-pause`