==================
Service Management
==================

A service is a group of daemons configured together. See these chapters
for details on individual services:

.. toctree::
    :maxdepth: 1

    mon
    mgr
    osd
    rgw
    mds
    nfs
    iscsi
    custom-container
    monitoring

Service Status
==============

To see the status of one of the services running in the Ceph cluster, do the
following:

#. Use the command line to print a list of services.
#. Locate the service whose status you want to check.
#. Print the status of the service.

The following command prints a list of services known to the orchestrator. To
limit the output to a particular service, use the optional ``--service_name``
parameter. To limit the output to services of a particular type, use the
optional ``--service_type`` parameter (mon, osd, mgr, mds, rgw):

.. prompt:: bash #

    ceph orch ls [--service_type type] [--service_name name] [--export] [--format f] [--refresh]

Discover the status of a particular service or daemon:

.. prompt:: bash #

    ceph orch ls --service_type <type> --service_name <name> [--refresh]

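For example, to check the status of a single hypothetical RGW service named
``rgw.myrgw``, you might run:

.. prompt:: bash #

    ceph orch ls --service_type rgw --service_name rgw.myrgw --refresh
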
To export the service specifications known to the orchestrator, run the
following command:

.. prompt:: bash #

    ceph orch ls --export

The service specifications are exported in YAML format, and this YAML can be
used with the ``ceph orch apply -i`` command.

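For example, a hypothetical round trip that exports all service specifications
to an arbitrarily named file and re-applies them later might look like this:

.. prompt:: bash #

    ceph orch ls --export > cluster.yaml
    ceph orch apply -i cluster.yaml
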
For information about retrieving the specifications of single services (including examples of commands), see :ref:`orchestrator-cli-service-spec-retrieve`.

Daemon Status
=============

A daemon is a systemd unit that is running and part of a service.

To see the status of a daemon, do the following:

#. Print a list of all daemons known to the orchestrator.
#. Query the status of the target daemon.

First, print a list of all daemons known to the orchestrator:

.. prompt:: bash #

    ceph orch ps [--hostname host] [--daemon_type type] [--service_name name] [--daemon_id id] [--format f] [--refresh]

Then query the status of a particular service instance (mon, osd, mds, rgw).
For OSDs the id is the numeric OSD ID. For MDS services the id is the file
system name:

.. prompt:: bash #

    ceph orch ps --daemon_type osd --daemon_id 0

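As an additional illustration, the daemons of an MDS service can be listed by
filtering on the service name (``mds.<fs-name>``); the file system name
``myfs`` below is hypothetical:

.. prompt:: bash #

    ceph orch ps --service_name mds.myfs
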
.. _orchestrator-cli-service-spec:

Service Specification
=====================

A *Service Specification* is a data structure that is used to specify the
deployment of services. Here is an example of a service specification in YAML:

.. code-block:: yaml

    service_type: rgw
    service_id: realm.zone
    placement:
      hosts:
        - host1
        - host2
        - host3
    unmanaged: false
    networks:
    - 192.169.142.0/24
    spec:
      # Additional service specific attributes.

In this example, the properties of this service specification are:

.. py:currentmodule:: ceph.deployment.service_spec

.. autoclass:: ServiceSpec
    :members:

Each service type can have additional service-specific properties.

Service specifications of type ``mon``, ``mgr``, and the monitoring
types do not require a ``service_id``.

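For example, a minimal specification for the ``mon`` service omits the
``service_id`` entirely (the placement shown here is only illustrative):

.. code-block:: yaml

    service_type: mon
    placement:
      count: 3
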
A service of type ``osd`` is described in :ref:`drivegroups`.

Many service specifications can be applied at once using ``ceph orch apply -i``
by submitting a multi-document YAML file::

    cat <<EOF | ceph orch apply -i -
    service_type: mon
    placement:
      host_pattern: "mon*"
    ---
    service_type: mgr
    placement:
      host_pattern: "mgr*"
    ---
    service_type: osd
    service_id: default_drive_group
    placement:
      host_pattern: "osd*"
    data_devices:
      all: true
    EOF

.. _orchestrator-cli-service-spec-retrieve:

Retrieving the running Service Specification
--------------------------------------------

If the services have been started via ``ceph orch apply...``, then directly
changing the Service Specification is complicated. Instead, we suggest
exporting the running Service Specification by following these instructions:

.. prompt:: bash #

    ceph orch ls --service-name rgw.<realm>.<zone> --export > rgw.<realm>.<zone>.yaml
    ceph orch ls --service-type mgr --export > mgr.yaml
    ceph orch ls --export > cluster.yaml

The Specification can then be changed and re-applied as above.

Updating Service Specifications
-------------------------------

The Ceph Orchestrator maintains a declarative state of each
service in a ``ServiceSpec``. For certain operations, like updating
the RGW HTTP port, we need to update the existing
specification.

1. List the current ``ServiceSpec``:

   .. prompt:: bash #

       ceph orch ls --service_name=<service-name> --export > myservice.yaml

2. Update the YAML file:

   .. prompt:: bash #

       vi myservice.yaml

3. Apply the new ``ServiceSpec``:

   .. prompt:: bash #

       ceph orch apply -i myservice.yaml [--dry-run]

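For example, to change the HTTP port of a hypothetical RGW service, the
exported YAML could be edited to set ``rgw_frontend_port`` under ``spec``
before being re-applied (the service id, hosts, and port below are
illustrative):

.. code-block:: yaml

    service_type: rgw
    service_id: myrealm.myzone
    placement:
      hosts:
        - host1
        - host2
    spec:
      rgw_frontend_port: 8080
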
.. _orchestrator-cli-placement-spec:

Daemon Placement
================

For the orchestrator to deploy a *service*, it needs to know where to deploy
*daemons*, and how many to deploy. This is the role of a placement
specification. Placement specifications can either be passed as command line
arguments or in a YAML file.

.. note::

    cephadm will not deploy daemons on hosts with the ``_no_schedule`` label; see :ref:`cephadm-special-host-labels`.

.. note::

   The **apply** command can be confusing. For this reason, we recommend using
   YAML specifications.

   Each ``ceph orch apply <service-name>`` command supersedes the one before it.
   If you do not use the proper syntax, you will clobber your work
   as you go.

   For example:

   .. prompt:: bash #

       ceph orch apply mon host1
       ceph orch apply mon host2
       ceph orch apply mon host3

   This results in only one host having a monitor applied to it: host3.

   (The first command creates a monitor on host1. Then the second command
   clobbers the monitor on host1 and creates a monitor on host2. Then the
   third command clobbers the monitor on host2 and creates a monitor on
   host3. In this scenario, at this point, there is a monitor ONLY on
   host3.)

   To make certain that a monitor is applied to each of these three hosts,
   run a command like this:

   .. prompt:: bash #

       ceph orch apply mon "host1,host2,host3"

   There is another way to apply monitors to multiple hosts: a ``yaml`` file
   can be used. Instead of using the "ceph orch apply mon" commands, run a
   command of this form:

   .. prompt:: bash #

       ceph orch apply -i file.yaml

   Here is a sample **file.yaml** file:

   .. code-block:: yaml

       service_type: mon
       placement:
         hosts:
           - host1
           - host2
           - host3

Explicit placements
-------------------

Daemons can be explicitly placed on hosts by simply specifying them:

.. prompt:: bash #

    ceph orch apply prometheus --placement="host1 host2 host3"

Or in YAML:

.. code-block:: yaml

    service_type: prometheus
    placement:
      hosts:
        - host1
        - host2
        - host3

MONs and other services may require some enhanced network specifications:

.. prompt:: bash #

    ceph orch daemon add mon --placement="myhost:[v2:1.2.3.4:3300,v1:1.2.3.4:6789]=name"

where ``[v2:1.2.3.4:3300,v1:1.2.3.4:6789]`` is the network address of the monitor
and ``=name`` specifies the name of the new monitor.

.. _orch-placement-by-labels:

Placement by labels
-------------------

Daemon placement can be limited to hosts that match a specific label. To add
the label ``mylabel`` to the appropriate hosts, run this command:

.. prompt:: bash #

    ceph orch host label add *<hostname>* mylabel

To view the current hosts and labels, run this command:

.. prompt:: bash #

    ceph orch host ls

For example:

.. prompt:: bash #

    ceph orch host label add host1 mylabel
    ceph orch host label add host2 mylabel
    ceph orch host label add host3 mylabel
    ceph orch host ls

.. code-block:: bash

    HOST   ADDR   LABELS   STATUS
    host1          mylabel
    host2          mylabel
    host3          mylabel
    host4
    host5

Now, tell cephadm to deploy daemons based on the label by running
this command:

.. prompt:: bash #

    ceph orch apply prometheus --placement="label:mylabel"

Or in YAML:

.. code-block:: yaml

    service_type: prometheus
    placement:
      label: "mylabel"

* See :ref:`orchestrator-host-labels`

Placement by pattern matching
-----------------------------

Daemons can also be placed on hosts by using a host pattern:

.. prompt:: bash #

    ceph orch apply prometheus --placement='myhost[1-3]'

Or in YAML:

.. code-block:: yaml

    service_type: prometheus
    placement:
      host_pattern: "myhost[1-3]"

To place a service on *all* hosts, use ``"*"``:

.. prompt:: bash #

    ceph orch apply node-exporter --placement='*'

Or in YAML:

.. code-block:: yaml

    service_type: node-exporter
    placement:
      host_pattern: "*"

Changing the number of daemons
------------------------------

By specifying ``count``, only the number of daemons specified will be created:

.. prompt:: bash #

    ceph orch apply prometheus --placement=3

To deploy *daemons* on a subset of hosts, specify the count:

.. prompt:: bash #

    ceph orch apply prometheus --placement="2 host1 host2 host3"

If the count is bigger than the number of hosts, cephadm deploys one per host:

.. prompt:: bash #

    ceph orch apply prometheus --placement="3 host1 host2"

The command immediately above results in two Prometheus daemons.

YAML can also be used to specify limits, in the following way:

.. code-block:: yaml

    service_type: prometheus
    placement:
      count: 3

YAML can also be used to specify limits on hosts:

.. code-block:: yaml

    service_type: prometheus
    placement:
      count: 2
      hosts:
        - host1
        - host2
        - host3

Algorithm description
---------------------

Cephadm's declarative state consists of a list of service specifications
containing placement specifications.

Cephadm continually compares a list of daemons actually running in the cluster
against the list in the service specifications. Cephadm adds new daemons and
removes old daemons as necessary in order to conform to the service
specifications.

Cephadm does the following to maintain compliance with the service
specifications.

Cephadm first selects a list of candidate hosts. Cephadm seeks explicit host
names and selects them. If cephadm finds no explicit host names, it looks for
label specifications. If no label is defined in the specification, cephadm
selects hosts based on a host pattern. If no host pattern is defined, as a last
resort, cephadm selects all known hosts as candidates.

Cephadm is aware of existing daemons running services and tries to avoid moving
them.

Cephadm supports the deployment of a specific number of daemons per service.
Consider the following service specification:

.. code-block:: yaml

    service_type: mds
    service_name: myfs
    placement:
      count: 3
      label: myfs

This service specification instructs cephadm to deploy three daemons on hosts
labeled ``myfs`` across the cluster.

If there are fewer than three daemons deployed on the candidate hosts, cephadm
randomly chooses hosts on which to deploy new daemons.

If there are more than three daemons deployed on the candidate hosts, cephadm
removes existing daemons.

Finally, cephadm removes daemons on hosts that are outside of the list of
candidate hosts.

.. note::

   There is a special case that cephadm must consider.

   If there are fewer hosts selected by the placement specification than
   demanded by ``count``, cephadm will deploy only on the selected hosts.

.. _orch-rm:

Removing a Service
==================

In order to remove a service, along with all of the daemons of that service,
run:

.. prompt:: bash

    ceph orch rm <service-name>

For example:

.. prompt:: bash

    ceph orch rm rgw.myrgw

.. _cephadm-spec-unmanaged:

Disabling automatic deployment of daemons
=========================================

Cephadm supports disabling the automated deployment and removal of daemons on a
per-service basis. The CLI supports two commands for this.

In order to fully remove a service, see :ref:`orch-rm`.

Disabling automatic management of daemons
-----------------------------------------

To disable the automatic management of daemons, set ``unmanaged=True`` in the
:ref:`orchestrator-cli-service-spec` (``mgr.yaml``).

``mgr.yaml``:

.. code-block:: yaml

    service_type: mgr
    unmanaged: true
    placement:
      label: mgr

.. prompt:: bash #

    ceph orch apply -i mgr.yaml

.. note::

   After you apply this change in the Service Specification, cephadm will no
   longer deploy any new daemons (even if the placement specification matches
   additional hosts).

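To verify that the flag is in effect, the running specification can be
re-exported; the exported YAML should include ``unmanaged: true`` (the
service name ``mgr`` matches the example above):

.. prompt:: bash #

    ceph orch ls --service_name mgr --export
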
Deploying a daemon on a host manually
-------------------------------------

.. note::

   This workflow has a very limited use case and should only be used
   in rare circumstances.

To manually deploy a daemon on a host, first modify the service spec for the
service: get the existing spec, add ``unmanaged: true``, and apply the
modified spec, as sketched below.

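For example, the spec of a hypothetical ``mgr`` service could be exported,
edited to add ``unmanaged: true``, and re-applied:

.. prompt:: bash #

    ceph orch ls --service_name mgr --export > mgr.yaml
    vi mgr.yaml
    ceph orch apply -i mgr.yaml
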
Then manually deploy the daemon using the following:

.. prompt:: bash #

    ceph orch daemon add <daemon-type> --placement=<placement spec>

For example:

.. prompt:: bash #

    ceph orch daemon add mgr --placement=my_host

.. note::

   Removing ``unmanaged: true`` from the service spec will
   enable the reconciliation loop for this service and will
   potentially lead to the removal of the daemon, depending
   on the placement spec.

Removing a daemon from a host manually
--------------------------------------

To manually remove a daemon, run a command of the following form:

.. prompt:: bash #

    ceph orch daemon rm <daemon name>... [--force]

For example:

.. prompt:: bash #

    ceph orch daemon rm mgr.my_host.xyzxyz

.. note::

   For managed services (``unmanaged=False``), cephadm will automatically
   deploy a new daemon a few seconds later.

See also
--------

* See :ref:`cephadm-osd-declarative` for special handling of unmanaged OSDs.
* See also :ref:`cephadm-pause`