==================
Service Management
==================

A service is a group of daemons configured together. See these chapters
for details on individual services:

.. toctree::
    :maxdepth: 1

    mon
    mgr
    osd
    rgw
    mds
    nfs
    iscsi
    custom-container
    monitoring
    snmp-gateway

Service Status
==============

To see the status of one of the services running in the Ceph cluster, do the
following:

#. Use the command line to print a list of services.
#. Locate the service whose status you want to check.
#. Print the status of the service.

The following command prints a list of services known to the orchestrator. To
limit the output to services of a particular type, use the optional
``--service_type`` parameter (mon, osd, mgr, mds, rgw); to limit the output to
a single service, use the optional ``--service_name`` parameter:

.. prompt:: bash #

    ceph orch ls [--service_type type] [--service_name name] [--export] [--format f] [--refresh]

Discover the status of a particular service or daemon:

.. prompt:: bash #

    ceph orch ls --service_type <type> --service_name <name> [--refresh]

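For example, to check the status of a hypothetical RGW service named
``rgw.myrealm.myzone`` (the service name here is purely illustrative), you
might run:

.. prompt:: bash #

    ceph orch ls --service_type rgw --service_name rgw.myrealm.myzone --refresh
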
To export the service specifications known to the orchestrator, run the
following command:

.. prompt:: bash #

    ceph orch ls --export

The service specifications are exported as YAML, and that YAML can be used as
input to the ``ceph orch apply -i`` command.

For information about retrieving the specifications of single services (including examples of commands), see :ref:`orchestrator-cli-service-spec-retrieve`.

Daemon Status
=============

A daemon is a running systemd unit that is part of a service.

To see the status of a daemon, do the following:

#. Print a list of all daemons known to the orchestrator.
#. Query the status of the target daemon.

First, print a list of all daemons known to the orchestrator:

.. prompt:: bash #

    ceph orch ps [--hostname host] [--daemon_type type] [--service_name name] [--daemon_id id] [--format f] [--refresh]

Then query the status of a particular service instance (mon, osd, mds, rgw).
For OSDs, the id is the numeric OSD ID. For MDS services, the id is the file
system name:

.. prompt:: bash #

    ceph orch ps --daemon_type osd --daemon_id 0

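Similarly, for an MDS daemon, following the convention above the id is the
file system name; assuming a hypothetical file system named ``myfs``, you
might run:

.. prompt:: bash #

    ceph orch ps --daemon_type mds --daemon_id myfs
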
.. _orchestrator-cli-service-spec:

Service Specification
=====================

A *Service Specification* is a data structure that is used to specify the
deployment of services. Here is an example of a service specification in YAML:

.. code-block:: yaml

    service_type: rgw
    service_id: realm.zone
    placement:
      hosts:
        - host1
        - host2
        - host3
    unmanaged: false
    networks:
    - 192.169.142.0/24
    spec:
      # Additional service specific attributes.

In this example, the properties of this service specification are:

.. py:currentmodule:: ceph.deployment.service_spec

.. autoclass:: ServiceSpec
   :members:

Each service type can have additional service-specific properties.

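For example, an RGW service specification might carry service-specific
attributes, such as the frontend port, in its ``spec`` section. This is only a
sketch; the service id and port below are illustrative, and the RGW service
page documents its service-specific attributes:

.. code-block:: yaml

    service_type: rgw
    service_id: myrealm.myzone
    placement:
      count: 2
    spec:
      rgw_frontend_port: 8080
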
Service specifications of type ``mon``, ``mgr``, and the monitoring
types do not require a ``service_id``.

A service of type ``osd`` is described in :ref:`drivegroups`.

Many service specifications can be applied at once using ``ceph orch apply -i``
by submitting a multi-document YAML file::

    cat <<EOF | ceph orch apply -i -
    service_type: mon
    placement:
      host_pattern: "mon*"
    ---
    service_type: mgr
    placement:
      host_pattern: "mgr*"
    ---
    service_type: osd
    service_id: default_drive_group
    placement:
      host_pattern: "osd*"
    data_devices:
      all: true
    EOF

.. _orchestrator-cli-service-spec-retrieve:

Retrieving the running Service Specification
--------------------------------------------

If the services have been started via ``ceph orch apply...``, then directly
changing the Service Specification is complicated. Instead, export the running
Service Specification by following these instructions:

.. prompt:: bash #

    ceph orch ls --service-name rgw.<realm>.<zone> --export > rgw.<realm>.<zone>.yaml
    ceph orch ls --service-type mgr --export > mgr.yaml
    ceph orch ls --export > cluster.yaml

The Specification can then be changed and re-applied as above.

Updating Service Specifications
-------------------------------

The Ceph Orchestrator maintains a declarative state of each
service in a ``ServiceSpec``. For certain operations, like updating
the RGW HTTP port, we need to update the existing
specification. A concrete sketch of this workflow follows the steps below.

1. List the current ``ServiceSpec``:

   .. prompt:: bash #

      ceph orch ls --service_name=<service-name> --export > myservice.yaml

2. Update the yaml file:

   .. prompt:: bash #

      vi myservice.yaml

3. Apply the new ``ServiceSpec``:

   .. prompt:: bash #

      ceph orch apply -i myservice.yaml [--dry-run]

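For example, for a hypothetical RGW service the three steps above might look
like this (a sketch; the service name is illustrative):

.. prompt:: bash #

    ceph orch ls --service_name=rgw.myrealm.myzone --export > myservice.yaml
    vi myservice.yaml   # e.g. adjust rgw_frontend_port in the spec section
    ceph orch apply -i myservice.yaml --dry-run
    ceph orch apply -i myservice.yaml
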
.. _orchestrator-cli-placement-spec:

Daemon Placement
================

For the orchestrator to deploy a *service*, it needs to know where to deploy
*daemons*, and how many to deploy. This is the role of a placement
specification. Placement specifications can either be passed as command line
arguments or in a YAML file.

.. note::

   cephadm will not deploy daemons on hosts with the ``_no_schedule`` label; see :ref:`cephadm-special-host-labels`.

.. note::
   The **apply** command can be confusing. For this reason, we recommend using
   YAML specifications.

Each ``ceph orch apply <service-name>`` command supersedes the one before it.
If you do not use the proper syntax, you will clobber your work
as you go.

For example:

.. prompt:: bash #

    ceph orch apply mon host1
    ceph orch apply mon host2
    ceph orch apply mon host3

This results in only one host having a monitor applied to it: host3.

(The first command creates a monitor on host1. Then the second command
clobbers the monitor on host1 and creates a monitor on host2. Then the
third command clobbers the monitor on host2 and creates a monitor on
host3. In this scenario, at this point, there is a monitor ONLY on
host3.)

To make certain that a monitor is applied to each of these three hosts,
run a command like this:

.. prompt:: bash #

    ceph orch apply mon "host1,host2,host3"

There is another way to apply monitors to multiple hosts: a ``yaml`` file
can be used. Instead of using the "ceph orch apply mon" commands, run a
command of this form:

.. prompt:: bash #

    ceph orch apply -i file.yaml

Here is a sample **file.yaml** file:

.. code-block:: yaml

    service_type: mon
    placement:
      hosts:
        - host1
        - host2
        - host3

Explicit placements
-------------------

Daemons can be explicitly placed on hosts by simply specifying them:

.. prompt:: bash #

    ceph orch apply prometheus --placement="host1 host2 host3"

Or in YAML:

.. code-block:: yaml

    service_type: prometheus
    placement:
      hosts:
        - host1
        - host2
        - host3

MONs and other services may require some enhanced network specifications:

.. prompt:: bash #

    ceph orch daemon add mon --placement="myhost:[v2:1.2.3.4:3300,v1:1.2.3.4:6789]=name"

where ``[v2:1.2.3.4:3300,v1:1.2.3.4:6789]`` is the network address of the monitor
and ``=name`` specifies the name of the new monitor.

.. _orch-placement-by-labels:

Placement by labels
-------------------

Daemon placement can be limited to hosts that match a specific label. To add
the label ``mylabel`` to the appropriate hosts, run this command:

.. prompt:: bash #

    ceph orch host label add *<hostname>* mylabel

To view the current hosts and labels, run this command:

.. prompt:: bash #

    ceph orch host ls

For example:

.. prompt:: bash #

    ceph orch host label add host1 mylabel
    ceph orch host label add host2 mylabel
    ceph orch host label add host3 mylabel
    ceph orch host ls

.. code-block:: bash

    HOST   ADDR   LABELS   STATUS
    host1         mylabel
    host2         mylabel
    host3         mylabel
    host4
    host5

Now, tell cephadm to deploy daemons based on the label by running
this command:

.. prompt:: bash #

    ceph orch apply prometheus --placement="label:mylabel"

Or in YAML:

.. code-block:: yaml

    service_type: prometheus
    placement:
      label: "mylabel"

* See :ref:`orchestrator-host-labels`

Placement by pattern matching
-----------------------------

Daemons can also be placed on hosts that match a host pattern:

.. prompt:: bash #

    ceph orch apply prometheus --placement='myhost[1-3]'

Or in YAML:

.. code-block:: yaml

    service_type: prometheus
    placement:
      host_pattern: "myhost[1-3]"

To place a service on *all* hosts, use ``"*"``:

.. prompt:: bash #

    ceph orch apply node-exporter --placement='*'

Or in YAML:

.. code-block:: yaml

    service_type: node-exporter
    placement:
      host_pattern: "*"


Changing the number of daemons
------------------------------

By specifying ``count``, only the number of daemons specified will be created:

.. prompt:: bash #

    ceph orch apply prometheus --placement=3

To deploy *daemons* on a subset of hosts, specify the count:

.. prompt:: bash #

    ceph orch apply prometheus --placement="2 host1 host2 host3"

If the count is bigger than the number of hosts, cephadm deploys one per host:

.. prompt:: bash #

    ceph orch apply prometheus --placement="3 host1 host2"

The command immediately above results in two Prometheus daemons.

YAML can also be used to specify limits, in the following way:

.. code-block:: yaml

    service_type: prometheus
    placement:
      count: 3

YAML can also be used to specify limits on specific hosts:

.. code-block:: yaml

    service_type: prometheus
    placement:
      count: 2
      hosts:
        - host1
        - host2
        - host3

.. _cephadm_co_location:

Co-location of daemons
----------------------

Cephadm supports the deployment of multiple daemons on the same host:

.. code-block:: yaml

    service_type: rgw
    placement:
      label: rgw
      count_per_host: 2

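An equivalent placement can also be expressed on the command line as a
placement string. The following is a sketch; the service id ``foo`` and the
port are illustrative (see :ref:`cephadm-rgw-designated_gateways` for details):

.. prompt:: bash #

    ceph orch apply rgw foo --placement="label:rgw count-per-host:2" --port=8000
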
The main reason for deploying multiple daemons per host is the additional
performance benefit of running multiple RGW or MDS daemons on the same host.

See also:

* :ref:`cephadm_mgr_co_location`.
* :ref:`cephadm-rgw-designated_gateways`.

This feature was introduced in Pacific.

Algorithm description
---------------------

Cephadm's declarative state consists of a list of service specifications
containing placement specifications.

Cephadm continually compares the list of daemons actually running in the cluster
against the list in the service specifications. Cephadm adds new daemons and
removes old daemons as necessary in order to conform to the service
specifications.

Cephadm does the following to maintain compliance with the service
specifications.

Cephadm first selects a list of candidate hosts. Cephadm seeks explicit host
names and selects them. If cephadm finds no explicit host names, it looks for
label specifications. If no label is defined in the specification, cephadm
selects hosts based on a host pattern. If no host pattern is defined, as a last
resort, cephadm selects all known hosts as candidates.

Cephadm is aware of existing daemons running services and tries to avoid moving
them.

Cephadm supports the deployment of a specific number of daemons for a service.
Consider the following service specification:

.. code-block:: yaml

    service_type: mds
    service_name: myfs
    placement:
      count: 3
      label: myfs

This service specification instructs cephadm to deploy three daemons on hosts
labeled ``myfs`` across the cluster.

If there are fewer than three daemons deployed on the candidate hosts, cephadm
randomly chooses hosts on which to deploy new daemons.

If there are more than three daemons deployed on the candidate hosts, cephadm
removes existing daemons.

Finally, cephadm removes daemons on hosts that are outside of the list of
candidate hosts.

.. note::

   There is a special case that cephadm must consider.

   If there are fewer hosts selected by the placement specification than
   demanded by ``count``, cephadm will deploy only on the selected hosts.

Extra Container Arguments
=========================

.. warning::
   The arguments provided for extra container args are limited to whatever
   arguments are available for a ``run`` command from whichever container
   engine you are using. Providing any arguments the ``run`` command does not
   support (or invalid values for arguments) will cause the daemon to fail to
   start.

Cephadm supports providing extra miscellaneous container arguments for
specific cases when they may be necessary. For example, if a user needs
to limit the number of CPUs that their mon daemons use, they could apply
a spec like the following:

.. code-block:: yaml

    service_type: mon
    service_name: mon
    placement:
      hosts:
        - host1
        - host2
        - host3
    extra_container_args:
      - "--cpus=2"

This would cause each mon daemon to be deployed with ``--cpus=2``.

.. _orch-rm:

Removing a Service
==================

To remove a service, including the removal of all of that service's daemons,
run:

.. prompt:: bash #

    ceph orch rm <service-name>

For example:

.. prompt:: bash #

    ceph orch rm rgw.myrgw

.. _cephadm-spec-unmanaged:

Disabling automatic deployment of daemons
=========================================

Cephadm supports disabling the automated deployment and removal of daemons on a
per service basis. The CLI supports two commands for this.

In order to fully remove a service, see :ref:`orch-rm`.

Disabling automatic management of daemons
-----------------------------------------

To disable the automatic management of daemons, set ``unmanaged=True`` in the
:ref:`orchestrator-cli-service-spec` (``mgr.yaml``).

``mgr.yaml``:

.. code-block:: yaml

    service_type: mgr
    unmanaged: true
    placement:
      label: mgr

.. prompt:: bash #

    ceph orch apply -i mgr.yaml

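One way to confirm that the flag has taken effect is to export the running
specification again and check for ``unmanaged: true`` in the output (a sketch,
reusing the export command shown earlier):

.. prompt:: bash #

    ceph orch ls --service-type mgr --export
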

.. note::

   After you apply this change in the Service Specification, cephadm will no
   longer deploy any new daemons (even if the placement specification matches
   additional hosts).

Deploying a daemon on a host manually
-------------------------------------

.. note::

   This workflow has a very limited use case and should only be used
   in rare circumstances.

To manually deploy a daemon on a host, follow these steps:

First, modify the service spec for the service: get the existing spec, add
``unmanaged: true``, and apply the modified spec.

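For a hypothetical ``mgr`` service, that first step might look like this (a
sketch; substitute your own service name):

.. prompt:: bash #

    ceph orch ls --service_name mgr --export > mgr.yaml
    # edit mgr.yaml and add the line "unmanaged: true"
    ceph orch apply -i mgr.yaml
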
Then manually deploy the daemon using the following:

.. prompt:: bash #

    ceph orch daemon add <daemon-type> --placement=<placement spec>

For example:

.. prompt:: bash #

    ceph orch daemon add mgr --placement=my_host

.. note::

   Removing ``unmanaged: true`` from the service spec will
   enable the reconciliation loop for this service and will
   potentially lead to the removal of the daemon, depending
   on the placement spec.

Removing a daemon from a host manually
--------------------------------------

To manually remove a daemon, run a command of the following form:

.. prompt:: bash #

    ceph orch daemon rm <daemon name>... [--force]

For example:

.. prompt:: bash #

    ceph orch daemon rm mgr.my_host.xyzxyz

.. note::

   For managed services (``unmanaged=False``), cephadm will automatically
   deploy a new daemon a few seconds later.

See also
--------

* See :ref:`cephadm-osd-declarative` for special handling of unmanaged OSDs.
* See also :ref:`cephadm-pause`