1 ==================
2 Service Management
3 ==================
4
5 A service is a group of daemons configured together. See these chapters
6 for details on individual services:
7
8 .. toctree::
9 :maxdepth: 1
10
11 mon
12 mgr
13 osd
14 rgw
15 mds
16 nfs
17 iscsi
18 custom-container
19 monitoring
20 snmp-gateway
21 tracing
22
23 Service Status
24 ==============
25
26
27 To see the status of one
28 of the services running in the Ceph cluster, do the following:
29
30 #. Use the command line to print a list of services.
31 #. Locate the service whose status you want to check.
32 #. Print the status of the service.
33
The following command prints a list of services known to the orchestrator. To
limit the output to services of a particular type (mon, osd, mgr, mds, rgw),
use the optional ``--service_type`` parameter. To limit the output to a single
service, use the optional ``--service_name`` parameter:
38
39 .. prompt:: bash #
40
41 ceph orch ls [--service_type type] [--service_name name] [--export] [--format f] [--refresh]
42
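For example, to list only services of type ``mon`` (a minimal illustration
following the synopsis above):

.. prompt:: bash #

   ceph orch ls --service_type mon
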
To discover the status of a particular service, run a command of the
following form:

.. prompt:: bash #

   ceph orch ls --service_type <type> --service_name <name> [--refresh]
48
To export the service specifications known to the orchestrator, run the following command:
50
51 .. prompt:: bash #
52
53 ceph orch ls --export
54
The service specifications are exported in YAML format, and that YAML can be
used with the ``ceph orch apply -i`` command.
57
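For example, a minimal export-and-reapply round trip looks like this (the
file name is arbitrary):

.. prompt:: bash #

   ceph orch ls --export > cluster.yaml
   ceph orch apply -i cluster.yaml
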
58 For information about retrieving the specifications of single services (including examples of commands), see :ref:`orchestrator-cli-service-spec-retrieve`.
59
60 Daemon Status
61 =============
62
63 A daemon is a systemd unit that is running and part of a service.
64
65 To see the status of a daemon, do the following:
66
67 #. Print a list of all daemons known to the orchestrator.
68 #. Query the status of the target daemon.
69
70 First, print a list of all daemons known to the orchestrator:
71
72 .. prompt:: bash #
73
74 ceph orch ps [--hostname host] [--daemon_type type] [--service_name name] [--daemon_id id] [--format f] [--refresh]
75
76 Then query the status of a particular service instance (mon, osd, mds, rgw).
77 For OSDs the id is the numeric OSD ID. For MDS services the id is the file
78 system name:
79
80 .. prompt:: bash #
81
82 ceph orch ps --daemon_type osd --daemon_id 0
83
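For example, to list the daemons of an MDS service for a file system named
``myfs`` (an assumed name, used here only for illustration), filter by
service name:

.. prompt:: bash #

   ceph orch ps --service_name mds.myfs
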
.. note::
   The output of the command ``ceph orch ps`` may not reflect the current
   status of the daemons. By default, the status is updated every 10 minutes.
   This interval can be shortened by modifying the
   ``mgr/cephadm/daemon_cache_timeout`` configuration variable (in seconds).
   For example, ``ceph config set mgr mgr/cephadm/daemon_cache_timeout 60``
   reduces the refresh interval to one minute. The information is updated
   every ``daemon_cache_timeout`` seconds unless the ``--refresh`` option is
   used, which triggers an immediate refresh; this may take some time
   depending on the size of the cluster. In general, the ``REFRESHED`` column
   indicates how recent the information displayed by ``ceph orch ps`` and
   similar commands is.
92
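To force an immediate refresh of this information (which may take a moment on
a large cluster), add the ``--refresh`` option:

.. prompt:: bash #

   ceph orch ps --refresh
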
93 .. _orchestrator-cli-service-spec:
94
95 Service Specification
96 =====================
97
98 A *Service Specification* is a data structure that is used to specify the
99 deployment of services. In addition to parameters such as `placement` or
100 `networks`, the user can set initial values of service configuration parameters
101 by means of the `config` section. For each param/value configuration pair,
102 cephadm calls the following command to set its value:
103
104 .. prompt:: bash #
105
106 ceph config set <service-name> <param> <value>
107
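For example, given a service named ``rgw.realm.zone`` (as in the specification
shown below) whose ``config`` section contains ``param_1: val_1``, cephadm
would run:

.. prompt:: bash #

   ceph config set rgw.realm.zone param_1 val_1
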
cephadm raises health warnings if invalid configuration parameters are found
in the spec (``CEPHADM_INVALID_CONFIG_OPTION``) or if any error occurs while
trying to apply the new configuration option(s) (``CEPHADM_FAILED_SET_OPTION``).
111
112 Here is an example of a service specification in YAML:
113
114 .. code-block:: yaml
115
116 service_type: rgw
117 service_id: realm.zone
118 placement:
119 hosts:
120 - host1
121 - host2
122 - host3
123 config:
124 param_1: val_1
125 ...
126 param_N: val_N
127 unmanaged: false
128 networks:
129 - 192.169.142.0/24
130 spec:
131 # Additional service specific attributes.
132
133 In this example, the properties of this service specification are:
134
135 .. py:currentmodule:: ceph.deployment.service_spec
136
137 .. autoclass:: ServiceSpec
138 :members:
139
140 Each service type can have additional service-specific properties.
141
142 Service specifications of type ``mon``, ``mgr``, and the monitoring
143 types do not require a ``service_id``.
144
A service of type ``osd`` is described in :ref:`drivegroups`.
146
147 Many service specifications can be applied at once using ``ceph orch apply -i``
148 by submitting a multi-document YAML file::
149
150 cat <<EOF | ceph orch apply -i -
151 service_type: mon
152 placement:
153 host_pattern: "mon*"
154 ---
155 service_type: mgr
156 placement:
157 host_pattern: "mgr*"
158 ---
159 service_type: osd
160 service_id: default_drive_group
161 placement:
162 host_pattern: "osd*"
163 data_devices:
164 all: true
165 EOF
166
167 .. _orchestrator-cli-service-spec-retrieve:
168
169 Retrieving the running Service Specification
170 --------------------------------------------
171
If the services have been started via ``ceph orch apply ...``, then directly
changing the Service Specification is complicated. Instead, we suggest
exporting the running Service Specification with the following commands:
176
177 .. prompt:: bash #
178
179 ceph orch ls --service-name rgw.<realm>.<zone> --export > rgw.<realm>.<zone>.yaml
180 ceph orch ls --service-type mgr --export > mgr.yaml
181 ceph orch ls --export > cluster.yaml
182
183 The Specification can then be changed and re-applied as above.
184
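For example, after editing one of the exported files (placeholders as above):

.. prompt:: bash #

   ceph orch apply -i rgw.<realm>.<zone>.yaml
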
185 Updating Service Specifications
186 -------------------------------
187
188 The Ceph Orchestrator maintains a declarative state of each
189 service in a ``ServiceSpec``. For certain operations, like updating
190 the RGW HTTP port, we need to update the existing
191 specification.
192
193 1. List the current ``ServiceSpec``:
194
195 .. prompt:: bash #
196
197 ceph orch ls --service_name=<service-name> --export > myservice.yaml
198
199 2. Update the yaml file:
200
201 .. prompt:: bash #
202
203 vi myservice.yaml
204
205 3. Apply the new ``ServiceSpec``:
206
207 .. prompt:: bash #
208
209 ceph orch apply -i myservice.yaml [--dry-run]
210
211 .. _orchestrator-cli-placement-spec:
212
213 Daemon Placement
214 ================
215
216 For the orchestrator to deploy a *service*, it needs to know where to deploy
217 *daemons*, and how many to deploy. This is the role of a placement
specification. Placement specifications can be passed either as command line
arguments or in a YAML file.
220
221 .. note::
222
223 cephadm will not deploy daemons on hosts with the ``_no_schedule`` label; see :ref:`cephadm-special-host-labels`.
224
225 .. note::
226 The **apply** command can be confusing. For this reason, we recommend using
227 YAML specifications.
228
229 Each ``ceph orch apply <service-name>`` command supersedes the one before it.
230 If you do not use the proper syntax, you will clobber your work
231 as you go.
232
233 For example:
234
235 .. prompt:: bash #
236
237 ceph orch apply mon host1
238 ceph orch apply mon host2
239 ceph orch apply mon host3
240
This results in only one host having a monitor applied to it: host3.
242
243 (The first command creates a monitor on host1. Then the second command
244 clobbers the monitor on host1 and creates a monitor on host2. Then the
245 third command clobbers the monitor on host2 and creates a monitor on
246 host3. In this scenario, at this point, there is a monitor ONLY on
247 host3.)
248
249 To make certain that a monitor is applied to each of these three hosts,
250 run a command like this:
251
252 .. prompt:: bash #
253
254 ceph orch apply mon "host1,host2,host3"
255
There is another way to apply monitors to multiple hosts: a ``yaml`` file can
be used. Instead of using the ``ceph orch apply mon`` commands, run a command
of this form:
259
260 .. prompt:: bash #
261
262 ceph orch apply -i file.yaml
263
Here is a sample **file.yaml** file:
265
266 .. code-block:: yaml
267
268 service_type: mon
269 placement:
270 hosts:
271 - host1
272 - host2
273 - host3
274
275 Explicit placements
276 -------------------
277
278 Daemons can be explicitly placed on hosts by simply specifying them:
279
280 .. prompt:: bash #
281
282 ceph orch apply prometheus --placement="host1 host2 host3"
283
284 Or in YAML:
285
286 .. code-block:: yaml
287
288 service_type: prometheus
289 placement:
290 hosts:
291 - host1
292 - host2
293 - host3
294
295 MONs and other services may require some enhanced network specifications:
296
297 .. prompt:: bash #
298
299 ceph orch daemon add mon --placement="myhost:[v2:1.2.3.4:3300,v1:1.2.3.4:6789]=name"
300
301 where ``[v2:1.2.3.4:3300,v1:1.2.3.4:6789]`` is the network address of the monitor
302 and ``=name`` specifies the name of the new monitor.
303
304 .. _orch-placement-by-labels:
305
306 Placement by labels
307 -------------------
308
Daemon placement can be limited to hosts that match a specific label. To add
the label ``mylabel`` to the appropriate hosts, run this command:
311
312 .. prompt:: bash #
313
314 ceph orch host label add *<hostname>* mylabel
315
316 To view the current hosts and labels, run this command:
317
318 .. prompt:: bash #
319
320 ceph orch host ls
321
322 For example:
323
324 .. prompt:: bash #
325
326 ceph orch host label add host1 mylabel
327 ceph orch host label add host2 mylabel
328 ceph orch host label add host3 mylabel
329 ceph orch host ls
330
331 .. code-block:: bash
332
333 HOST ADDR LABELS STATUS
334 host1 mylabel
335 host2 mylabel
336 host3 mylabel
337 host4
338 host5
339
Now, tell cephadm to deploy daemons based on the label by running
this command:
342
343 .. prompt:: bash #
344
345 ceph orch apply prometheus --placement="label:mylabel"
346
347 Or in YAML:
348
349 .. code-block:: yaml
350
351 service_type: prometheus
352 placement:
353 label: "mylabel"
354
355 * See :ref:`orchestrator-host-labels`
356
357 Placement by pattern matching
358 -----------------------------
359
Daemons can also be placed on hosts whose names match a pattern:
361
362 .. prompt:: bash #
363
364 ceph orch apply prometheus --placement='myhost[1-3]'
365
366 Or in YAML:
367
368 .. code-block:: yaml
369
370 service_type: prometheus
371 placement:
372 host_pattern: "myhost[1-3]"
373
374 To place a service on *all* hosts, use ``"*"``:
375
376 .. prompt:: bash #
377
378 ceph orch apply node-exporter --placement='*'
379
380 Or in YAML:
381
382 .. code-block:: yaml
383
384 service_type: node-exporter
385 placement:
386 host_pattern: "*"
387
388
389 Changing the number of daemons
390 ------------------------------
391
392 By specifying ``count``, only the number of daemons specified will be created:
393
394 .. prompt:: bash #
395
396 ceph orch apply prometheus --placement=3
397
398 To deploy *daemons* on a subset of hosts, specify the count:
399
400 .. prompt:: bash #
401
402 ceph orch apply prometheus --placement="2 host1 host2 host3"
403
If the count is greater than the number of hosts, cephadm deploys one daemon per host:
405
406 .. prompt:: bash #
407
408 ceph orch apply prometheus --placement="3 host1 host2"
409
410 The command immediately above results in two Prometheus daemons.
411
412 YAML can also be used to specify limits, in the following way:
413
414 .. code-block:: yaml
415
416 service_type: prometheus
417 placement:
418 count: 3
419
YAML can also be used to combine a count with an explicit list of hosts; the
following example deploys two daemons on two of the three listed hosts:
421
422 .. code-block:: yaml
423
424 service_type: prometheus
425 placement:
426 count: 2
427 hosts:
428 - host1
429 - host2
430 - host3
431
432 .. _cephadm_co_location:
433
434 Co-location of daemons
435 ----------------------
436
437 Cephadm supports the deployment of multiple daemons on the same host:
438
439 .. code-block:: yaml
440
441 service_type: rgw
442 placement:
443 label: rgw
444 count_per_host: 2
445
The main reason for deploying multiple daemons per host is the additional
performance benefit of running multiple RGW or MDS daemons on the same host.
448
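An analogous spec can be used for MDS (a sketch; the service id ``myfs`` and
the ``mds`` label are assumptions used for illustration):

.. code-block:: yaml

   service_type: mds
   service_id: myfs
   placement:
     label: mds
     count_per_host: 2
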
449 See also:
450
451 * :ref:`cephadm_mgr_co_location`.
452 * :ref:`cephadm-rgw-designated_gateways`.
453
454 This feature was introduced in Pacific.
455
456 Algorithm description
457 ---------------------
458
459 Cephadm's declarative state consists of a list of service specifications
460 containing placement specifications.
461
462 Cephadm continually compares a list of daemons actually running in the cluster
463 against the list in the service specifications. Cephadm adds new daemons and
464 removes old daemons as necessary in order to conform to the service
465 specifications.
466
467 Cephadm does the following to maintain compliance with the service
468 specifications.
469
470 Cephadm first selects a list of candidate hosts. Cephadm seeks explicit host
471 names and selects them. If cephadm finds no explicit host names, it looks for
472 label specifications. If no label is defined in the specification, cephadm
473 selects hosts based on a host pattern. If no host pattern is defined, as a last
474 resort, cephadm selects all known hosts as candidates.
475
476 Cephadm is aware of existing daemons running services and tries to avoid moving
477 them.
478
Cephadm supports the deployment of a specific number of daemons for a service.
480 Consider the following service specification:
481
482 .. code-block:: yaml
483
484 service_type: mds
service_id: myfs
486 placement:
487 count: 3
488 label: myfs
489
490 This service specification instructs cephadm to deploy three daemons on hosts
491 labeled ``myfs`` across the cluster.
492
493 If there are fewer than three daemons deployed on the candidate hosts, cephadm
494 randomly chooses hosts on which to deploy new daemons.
495
496 If there are more than three daemons deployed on the candidate hosts, cephadm
497 removes existing daemons.
498
499 Finally, cephadm removes daemons on hosts that are outside of the list of
500 candidate hosts.
501
502 .. note::
503
504 There is a special case that cephadm must consider.
505
506 If there are fewer hosts selected by the placement specification than
507 demanded by ``count``, cephadm will deploy only on the selected hosts.
508
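For example, with the following spec, if only two hosts carry the ``myfs``
label (an illustrative scenario), cephadm deploys two daemons rather than
five:

.. code-block:: yaml

   service_type: mds
   service_id: myfs
   placement:
     count: 5
     label: myfs
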
509 .. _cephadm-extra-container-args:
510
511 Extra Container Arguments
512 =========================
513
514 .. warning::
   The arguments provided for extra container args are limited to the
   arguments supported by the ``run`` command of the container engine you are
   using. Providing any arguments that the ``run`` command does not support
   (or invalid values for arguments) will cause the daemon to fail to start.
518
519 .. note::
520
   For arguments passed to the process running inside the container rather
   than to the container runtime itself, see :ref:`cephadm-extra-entrypoint-args`.
523
524
Cephadm supports providing extra miscellaneous container arguments for
specific cases when they may be necessary. For example, a user who needs to
limit the number of CPUs their mon daemons use could apply a spec like the
following:
529
530 .. code-block:: yaml
531
532 service_type: mon
533 service_name: mon
534 placement:
535 hosts:
536 - host1
537 - host2
538 - host3
539 extra_container_args:
540 - "--cpus=2"
541
542 which would cause each mon daemon to be deployed with `--cpus=2`.
543
544 There are two ways to express arguments in the ``extra_container_args`` list.
545 To start, an item in the list can be a string. When passing an argument
546 as a string and the string contains spaces, Cephadm will automatically split it
547 into multiple arguments. For example, ``--cpus 2`` would become ``["--cpus",
548 "2"]`` when processed. Example:
549
550 .. code-block:: yaml
551
552 service_type: mon
553 service_name: mon
554 placement:
555 hosts:
556 - host1
557 - host2
558 - host3
559 extra_container_args:
560 - "--cpus 2"
561
562 As an alternative, an item in the list can be an object (mapping) containing
563 the required key "argument" and an optional key "split". The value associated
564 with the ``argument`` key must be a single string. The value associated with
565 the ``split`` key is a boolean value. The ``split`` key explicitly controls if
566 spaces in the argument value cause the value to be split into multiple
567 arguments. If ``split`` is true then Cephadm will automatically split the value
568 into multiple arguments. If ``split`` is false then spaces in the value will
569 be retained in the argument. The default, when ``split`` is not provided, is
570 false. Examples:
571
572 .. code-block:: yaml
573
574 service_type: mon
575 service_name: mon
576 placement:
577 hosts:
578 - tiebreaker
579 extra_container_args:
580 # No spaces, always treated as a single argument
581 - argument: "--timout=3000"
582 # Splitting explicitly disabled, one single argument
583 - argument: "--annotation=com.example.name=my favorite mon"
584 split: false
585 # Splitting explicitly enabled, will become two arguments
586 - argument: "--cpuset-cpus 1-3,7-11"
587 split: true
588 # Splitting implicitly disabled, one single argument
589 - argument: "--annotation=com.example.note=a simple example"
590
591 Mounting Files with Extra Container Arguments
592 ---------------------------------------------
593
594 A common use case for extra container arguments is to mount additional
595 files within the container. Older versions of Ceph did not support spaces
596 in arguments and therefore the examples below apply to the widest range
597 of Ceph versions.
598
599 .. code-block:: yaml
600
601 extra_container_args:
602 - "-v"
603 - "/absolute/file/path/on/host:/absolute/file/path/in/container"
604
605 For example:
606
607 .. code-block:: yaml
608
609 extra_container_args:
610 - "-v"
611 - "/opt/ceph_cert/host.cert:/etc/grafana/certs/cert_file:ro"
612
613 .. _cephadm-extra-entrypoint-args:
614
615 Extra Entrypoint Arguments
616 ==========================
617
618 .. note::
619
620 For arguments intended for the container runtime rather than the process inside
it, see :ref:`cephadm-extra-container-args`.
622
Similar to extra container args for the container runtime, Cephadm supports
appending to the arguments passed to the entrypoint process running within a
container. For example, to set the collector textfile directory for the
node-exporter service, one could apply a service spec like the following:
627
628 .. code-block:: yaml
629
630 service_type: node-exporter
631 service_name: node-exporter
632 placement:
633 host_pattern: '*'
634 extra_entrypoint_args:
635 - "--collector.textfile.directory=/var/lib/node_exporter/textfile_collector2"
636
637 There are two ways to express arguments in the ``extra_entrypoint_args`` list.
638 To start, an item in the list can be a string. When passing an argument as a
639 string and the string contains spaces, cephadm will automatically split it into
640 multiple arguments. For example, ``--debug_ms 10`` would become
641 ``["--debug_ms", "10"]`` when processed. Example:
642
643 .. code-block:: yaml
644
645 service_type: mon
646 service_name: mon
647 placement:
648 hosts:
649 - host1
650 - host2
651 - host3
652 extra_entrypoint_args:
653 - "--debug_ms 2"
654
655 As an alternative, an item in the list can be an object (mapping) containing
656 the required key "argument" and an optional key "split". The value associated
657 with the ``argument`` key must be a single string. The value associated with
658 the ``split`` key is a boolean value. The ``split`` key explicitly controls if
659 spaces in the argument value cause the value to be split into multiple
660 arguments. If ``split`` is true then cephadm will automatically split the value
661 into multiple arguments. If ``split`` is false then spaces in the value will
662 be retained in the argument. The default, when ``split`` is not provided, is
663 false. Examples:
664
665 .. code-block:: yaml
666
# A theoretical data migration service
668 service_type: pretend
669 service_name: imagine1
670 placement:
671 hosts:
672 - host1
673 extra_entrypoint_args:
674 # No spaces, always treated as a single argument
675 - argument: "--timout=30m"
676 # Splitting explicitly disabled, one single argument
677 - argument: "--import=/mnt/usb/My Documents"
678 split: false
679 # Splitting explicitly enabled, will become two arguments
680 - argument: "--tag documents"
681 split: true
682 # Splitting implicitly disabled, one single argument
683 - argument: "--title=Imported Documents"
684
685
686 Custom Config Files
687 ===================
688
689 Cephadm supports specifying miscellaneous config files for daemons.
690 To do so, users must provide both the content of the config file and the
691 location within the daemon's container at which it should be mounted. After
692 applying a YAML spec with custom config files specified and having cephadm
693 redeploy the daemons for which the config files are specified, these files will
694 be mounted within the daemon's container at the specified location.
695
696 Example service spec:
697
698 .. code-block:: yaml
699
700 service_type: grafana
701 service_name: grafana
702 custom_configs:
703 - mount_path: /etc/example.conf
704 content: |
705 setting1 = value1
706 setting2 = value2
707 - mount_path: /usr/share/grafana/example.cert
708 content: |
709 -----BEGIN PRIVATE KEY-----
710 V2VyIGRhcyBsaWVzdCBpc3QgZG9vZi4gTG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFt
711 ZXQsIGNvbnNldGV0dXIgc2FkaXBzY2luZyBlbGl0ciwgc2VkIGRpYW0gbm9udW15
712 IGVpcm1vZCB0ZW1wb3IgaW52aWR1bnQgdXQgbGFib3JlIGV0IGRvbG9yZSBtYWdu
713 YSBhbGlxdXlhbSBlcmF0LCBzZWQgZGlhbSB2b2x1cHR1YS4gQXQgdmVybyBlb3Mg
714 ZXQgYWNjdXNhbSBldCBqdXN0byBkdW8=
715 -----END PRIVATE KEY-----
716 -----BEGIN CERTIFICATE-----
717 V2VyIGRhcyBsaWVzdCBpc3QgZG9vZi4gTG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFt
718 ZXQsIGNvbnNldGV0dXIgc2FkaXBzY2luZyBlbGl0ciwgc2VkIGRpYW0gbm9udW15
719 IGVpcm1vZCB0ZW1wb3IgaW52aWR1bnQgdXQgbGFib3JlIGV0IGRvbG9yZSBtYWdu
720 YSBhbGlxdXlhbSBlcmF0LCBzZWQgZGlhbSB2b2x1cHR1YS4gQXQgdmVybyBlb3Mg
721 ZXQgYWNjdXNhbSBldCBqdXN0byBkdW8=
722 -----END CERTIFICATE-----
723
To mount these new config files within the containers for the daemons,
redeploy the daemons for the service:
726
727 .. prompt:: bash
728
729 ceph orch redeploy <service-name>
730
731 For example:
732
733 .. prompt:: bash
734
735 ceph orch redeploy grafana
736
737 .. _orch-rm:
738
739 Removing a Service
740 ==================
741
To remove a service, including all of its daemons, run:
744
745 .. prompt:: bash
746
747 ceph orch rm <service-name>
748
749 For example:
750
751 .. prompt:: bash
752
753 ceph orch rm rgw.myrgw
754
755 .. _cephadm-spec-unmanaged:
756
757 Disabling automatic deployment of daemons
758 =========================================
759
760 Cephadm supports disabling the automated deployment and removal of daemons on a
761 per service basis. The CLI supports two commands for this.
762
763 In order to fully remove a service, see :ref:`orch-rm`.
764
765 Disabling automatic management of daemons
766 -----------------------------------------
767
To disable the automatic management of daemons, set ``unmanaged=True`` in the
769 :ref:`orchestrator-cli-service-spec` (``mgr.yaml``).
770
771 ``mgr.yaml``:
772
773 .. code-block:: yaml
774
775 service_type: mgr
776 unmanaged: true
777 placement:
778 label: mgr
779
780
781 .. prompt:: bash #
782
783 ceph orch apply -i mgr.yaml
784
785 Cephadm also supports setting the unmanaged parameter to true or false
786 using the ``ceph orch set-unmanaged`` and ``ceph orch set-managed`` commands.
787 The commands take the service name (as reported in ``ceph orch ls``) as
788 the only argument. For example,
789
790 .. prompt:: bash #
791
792 ceph orch set-unmanaged mon
793
794 would set ``unmanaged: true`` for the mon service and
795
796 .. prompt:: bash #
797
798 ceph orch set-managed mon
799
would set ``unmanaged: false`` for the mon service.
801
802 .. note::
803
804 After you apply this change in the Service Specification, cephadm will no
805 longer deploy any new daemons (even if the placement specification matches
806 additional hosts).
807
808 .. note::
809
810 The "osd" service used to track OSDs that are not tied to any specific
811 service spec is special and will always be marked unmanaged. Attempting
812 to modify it with ``ceph orch set-unmanaged`` or ``ceph orch set-managed``
813 will result in a message ``No service of name osd found. Check "ceph orch ls" for all known services``
814
815 Deploying a daemon on a host manually
816 -------------------------------------
817
818 .. note::
819
820 This workflow has a very limited use case and should only be used
821 in rare circumstances.
822
823 To manually deploy a daemon on a host, follow these steps:
824
825 Modify the service spec for a service by getting the
826 existing spec, adding ``unmanaged: true``, and applying the modified spec.
827
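For example, using the ``mgr`` service as an illustration, the spec can be
exported, edited to add ``unmanaged: true``, and re-applied:

.. prompt:: bash #

   ceph orch ls --service_name mgr --export > mgr.yaml
   vi mgr.yaml
   ceph orch apply -i mgr.yaml
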
828 Then manually deploy the daemon using the following:
829
830 .. prompt:: bash #
831
832 ceph orch daemon add <daemon-type> --placement=<placement spec>
833
For example:
835
836 .. prompt:: bash #
837
838 ceph orch daemon add mgr --placement=my_host
839
840 .. note::
841
842 Removing ``unmanaged: true`` from the service spec will
843 enable the reconciliation loop for this service and will
844 potentially lead to the removal of the daemon, depending
845 on the placement spec.
846
847 Removing a daemon from a host manually
848 --------------------------------------
849
850 To manually remove a daemon, run a command of the following form:
851
852 .. prompt:: bash #
853
854 ceph orch daemon rm <daemon name>... [--force]
855
856 For example:
857
858 .. prompt:: bash #
859
860 ceph orch daemon rm mgr.my_host.xyzxyz
861
862 .. note::
863
864 For managed services (``unmanaged=False``), cephadm will automatically
865 deploy a new daemon a few seconds later.
866
867 See also
868 --------
869
870 * See :ref:`cephadm-osd-declarative` for special handling of unmanaged OSDs.
871 * See also :ref:`cephadm-pause`