==================
Cephadm Operations
==================

.. _watching_cephadm_logs:

Watching cephadm log messages
=============================

Cephadm writes logs to the ``cephadm`` cluster log channel. You can
monitor Ceph's activity in real time by reading the logs as they fill
up. Run the following command to see the logs in real time:

.. prompt:: bash #

   ceph -W cephadm

By default, this command shows info-level events and above. To see
debug-level messages as well as info-level events, run the following
commands:

.. prompt:: bash #

   ceph config set mgr mgr/cephadm/log_to_cluster_level debug
   ceph -W cephadm --watch-debug

.. warning::

   The debug messages are very verbose!

You can see recent events by running the following command:

.. prompt:: bash #

   ceph log last cephadm

These events are also logged to the ``ceph.cephadm.log`` file on
monitor hosts as well as to the monitor daemons' stderr.


.. _cephadm-logs:


Ceph daemon control
===================

Starting and stopping daemons
-----------------------------

You can stop, start, or restart a daemon with:

.. prompt:: bash #

   ceph orch daemon stop <name>
   ceph orch daemon start <name>
   ceph orch daemon restart <name>

You can also do the same for all daemons for a service with:

.. prompt:: bash #

   ceph orch stop <name>
   ceph orch start <name>
   ceph orch restart <name>

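For example, to restart one OSD daemon and then restart every daemon in an
RGW service (both names below are placeholders; use names reported by
``ceph orch ps`` and ``ceph orch ls`` in your own cluster):

.. prompt:: bash #

   ceph orch daemon restart osd.3
   ceph orch restart rgw.myrealm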

Redeploying or reconfiguring a daemon
-------------------------------------

The container for a daemon can be stopped, recreated, and restarted with
the ``redeploy`` command:

.. prompt:: bash #

   ceph orch daemon redeploy <name> [--image <image>]

A container image name can optionally be provided to force a
particular image to be used (instead of the image specified by the
``container_image`` config value).

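For example, to redeploy a hypothetical RGW daemon with an explicitly chosen
image (both the daemon name and the image tag below are placeholders):

.. prompt:: bash #

   ceph orch daemon redeploy rgw.myrealm.myhost.abcdef --image quay.io/ceph/ceph:v18
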
If only the ceph configuration needs to be regenerated, you can also
issue a ``reconfig`` command, which will rewrite the ``ceph.conf``
file but will not trigger a restart of the daemon.

.. prompt:: bash #

   ceph orch daemon reconfig <name>


Rotating a daemon's authentication key
----------------------------------------

All Ceph and gateway daemons in the cluster have a secret key that is used to connect
to and authenticate with the cluster. This key can be rotated (i.e., replaced with a
new key) with the following command:

.. prompt:: bash #

   ceph orch daemon rotate-key <name>

For MDS, OSD, and MGR daemons, this does not require a daemon restart. For other
daemons, however (e.g., RGW), the daemon may be restarted to switch to the new key.
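
For example, to rotate the key of a hypothetical RGW daemon (the daemon name
below is a placeholder; use a name reported by ``ceph orch ps``):

.. prompt:: bash #

   ceph orch daemon rotate-key rgw.myrealm.myhost.abcdef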


Ceph daemon logs
================

Logging to journald
-------------------

Ceph daemons traditionally write logs to ``/var/log/ceph``. Under cephadm,
daemons instead log to journald by default, and the logs are captured by the
container runtime environment. They are accessible via ``journalctl``.

.. note:: Prior to Quincy, ceph daemons logged to stderr.

Example of logging to journald
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

For example, to view the logs for the daemon ``mon.foo`` for a cluster
with ID ``5c5a50ae-272a-455d-99e9-32c6a013e694``, the command would be
something like:

.. prompt:: bash #

   journalctl -u ceph-5c5a50ae-272a-455d-99e9-32c6a013e694@mon.foo

This works well for normal operations when logging levels are low.
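
Standard ``journalctl`` options can be combined with this to narrow the
output. For example, to follow the same unit and limit it to recent entries
(the FSID and daemon name are the placeholders used above):

.. prompt:: bash #

   journalctl -u ceph-5c5a50ae-272a-455d-99e9-32c6a013e694@mon.foo -f --since "1 hour ago"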

Logging to files
----------------

You can also configure Ceph daemons to log to files instead of to
journald if you prefer logs to appear in files (as they did in earlier,
pre-cephadm, pre-Octopus versions of Ceph). When Ceph logs to files,
the logs appear in ``/var/log/ceph/<cluster-fsid>``. If you choose to
configure Ceph to log to files instead of to journald, remember to
configure Ceph so that it will not log to journald (the commands for
this are covered below).

Enabling logging to files
~~~~~~~~~~~~~~~~~~~~~~~~~

To enable logging to files, run the following commands:

.. prompt:: bash #

   ceph config set global log_to_file true
   ceph config set global mon_cluster_log_to_file true

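To confirm the resulting settings, you can read them back with ``ceph config
get`` (shown here for one of the two options):

.. prompt:: bash #

   ceph config get global log_to_file
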
Disabling logging to journald
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If you choose to log to files, we recommend disabling logging to journald or else
everything will be logged twice. Run the following commands to disable logging
to stderr and to journald:

.. prompt:: bash #

   ceph config set global log_to_stderr false
   ceph config set global mon_cluster_log_to_stderr false
   ceph config set global log_to_journald false
   ceph config set global mon_cluster_log_to_journald false

.. note:: You can change the default by passing ``--log-to-file`` when
   bootstrapping a new cluster.

Modifying the log retention schedule
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

By default, cephadm sets up log rotation on each host to rotate these
files. You can configure the logging retention schedule by modifying
``/etc/logrotate.d/ceph.<cluster-fsid>``.
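
The generated file uses ordinary ``logrotate`` syntax. As a rough sketch of
the kind of policy found there (the exact contents vary by release, and the
real file typically also signals daemons to reopen their logs in a
``postrotate`` script), raising ``rotate`` keeps more history:

.. code-block:: none

   /var/log/ceph/<cluster-fsid>/*.log {
       rotate 7
       daily
       compress
       missingok
       notifempty
       sharedscripts
       su root root
   }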


Data location
=============

Cephadm stores daemon data and logs in different locations than did
older, pre-cephadm (pre-Octopus) versions of Ceph:

* ``/var/log/ceph/<cluster-fsid>`` contains all cluster logs. By
  default, cephadm logs via stderr and the container runtime. These
  logs will not exist unless you have enabled logging to files as
  described in `cephadm-logs`_.
* ``/var/lib/ceph/<cluster-fsid>`` contains all cluster daemon data
  (besides logs).
* ``/var/lib/ceph/<cluster-fsid>/<daemon-name>`` contains all data for
  an individual daemon.
* ``/var/lib/ceph/<cluster-fsid>/crash`` contains crash reports for
  the cluster.
* ``/var/lib/ceph/<cluster-fsid>/removed`` contains old daemon
  data directories for stateful daemons (e.g., monitor, prometheus)
  that have been removed by cephadm.

Disk usage
----------

Because a few Ceph daemons (notably, the monitors and prometheus) store a
large amount of data in ``/var/lib/ceph``, we recommend moving this
directory to its own disk, partition, or logical volume so that it does not
fill up the root file system.
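
A minimal sketch of dedicating a logical volume to ``/var/lib/ceph`` (the
volume group, logical volume name, and size below are placeholders; do this
before bootstrapping the cluster, or stop the cluster's daemons and copy any
existing contents over first):

.. prompt:: bash #

   lvcreate -n ceph-lib -L 200G vg0
   mkfs.xfs /dev/vg0/ceph-lib
   echo '/dev/vg0/ceph-lib /var/lib/ceph xfs defaults 0 0' >> /etc/fstab
   mount /var/lib/ceph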


Health checks
=============

The cephadm module provides additional health checks to supplement the
default health checks provided by the cluster. These additional health
checks fall into two categories:

- **cephadm operations**: Health checks in this category are always
  executed when the cephadm module is active.
- **cluster configuration**: These health checks are *optional*, and
  focus on the configuration of the hosts in the cluster.

CEPHADM Operations
------------------

CEPHADM_PAUSED
~~~~~~~~~~~~~~

This indicates that cephadm background work has been paused with
``ceph orch pause``. Cephadm continues to perform passive monitoring
activities (like checking host and daemon status), but it will not
make any changes (like deploying or removing daemons).

Resume cephadm work by running the following command:

.. prompt:: bash #

   ceph orch resume
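
Whether background work is currently paused is also reflected in the
orchestrator status output, which is a quick way to confirm the state before
and after resuming:

.. prompt:: bash #

   ceph orch status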

.. _cephadm-stray-host:

CEPHADM_STRAY_HOST
~~~~~~~~~~~~~~~~~~

This indicates that one or more hosts have Ceph daemons that are
running, but are not registered as hosts managed by *cephadm*. This
means that those services cannot currently be managed by cephadm
(e.g., restarted, upgraded, included in ``ceph orch ps``).

* You can manage the host(s) by running the following command:

  .. prompt:: bash #

     ceph orch host add *<hostname>*

  .. note::

     You might need to configure SSH access to the remote host
     before this will work.

* See :ref:`cephadm-fqdn` for more information about host names and
  domain names.

* Alternatively, you can manually connect to the host and ensure that
  services on that host are removed or migrated to a host that is
  managed by *cephadm*.

* This warning can be disabled entirely by running the following
  command:

  .. prompt:: bash #

     ceph config set mgr mgr/cephadm/warn_on_stray_hosts false

CEPHADM_STRAY_DAEMON
~~~~~~~~~~~~~~~~~~~~

One or more Ceph daemons are running but are not managed by
*cephadm*. This may be because they were deployed using a different
tool, or because they were started manually. Those
services cannot currently be managed by cephadm (e.g., restarted,
upgraded, or included in ``ceph orch ps``).

* If the daemon is a stateful one (monitor or OSD), it should be adopted
  by cephadm; see :ref:`cephadm-adoption`. For stateless daemons, it is
  usually easiest to provision a new daemon with the ``ceph orch apply``
  command and then stop the unmanaged daemon (see the sketch after this
  list).

* If the stray daemon(s) are running on hosts not managed by cephadm, you
  can manage the host(s) by running the following command:

  .. prompt:: bash #

     ceph orch host add *<hostname>*

  .. note::

     You might need to configure SSH access to the remote host
     before this will work.

* See :ref:`cephadm-fqdn` for more information about host names and
  domain names.

* This warning can be disabled entirely by running the following command:

  .. prompt:: bash #

     ceph config set mgr mgr/cephadm/warn_on_stray_daemons false
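
As a sketch of the stateless-daemon case mentioned above, you might first
provision a managed replacement (the service type and placement here are
placeholders; adapt them to the stray daemon you are replacing):

.. prompt:: bash #

   ceph orch apply rgw myrealm --placement="myhost"

and then stop and disable the stray daemon's systemd unit (or its container)
on the host where it runs, so that only the cephadm-managed daemon remains.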

CEPHADM_HOST_CHECK_FAILED
~~~~~~~~~~~~~~~~~~~~~~~~~

One or more hosts have failed the basic cephadm host check, which verifies
that (1) the host is reachable and cephadm can be executed there, and (2)
that the host satisfies basic prerequisites, like a working container
runtime (podman or docker) and working time synchronization.
If this test fails, cephadm will not be able to manage services on that host.

You can manually run this check by running the following command:

.. prompt:: bash #

   ceph cephadm check-host *<hostname>*

You can remove a broken host from management by running the following command:

.. prompt:: bash #

   ceph orch host rm *<hostname>*

You can disable this health warning by running the following command:

.. prompt:: bash #

   ceph config set mgr mgr/cephadm/warn_on_failed_host_check false

Cluster Configuration Checks
----------------------------

Cephadm periodically scans each of the hosts in the cluster in order
to understand the state of the OS, disks, NICs, etc. These facts can
then be analyzed for consistency across the hosts in the cluster to
identify any configuration anomalies.

Enabling Cluster Configuration Checks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The configuration checks are an **optional** feature, and are enabled
by running the following command:

.. prompt:: bash #

   ceph config set mgr mgr/cephadm/config_checks_enabled true

States Returned by Cluster Configuration Checks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The configuration checks are triggered after each host scan (roughly every
minute). The cephadm log entries will show the current state and outcome of
the configuration checks as follows:

Disabled state (config_checks_enabled false):

.. code-block:: bash

   ALL cephadm checks are disabled, use 'ceph config set mgr mgr/cephadm/config_checks_enabled true' to enable

Enabled state (config_checks_enabled true):

.. code-block:: bash

   CEPHADM 8/8 checks enabled and executed (0 bypassed, 0 disabled). No issues detected

Managing Configuration Checks (subcommands)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The configuration checks themselves are managed through several cephadm subcommands.

To determine whether the configuration checks are enabled, run the following command:

.. prompt:: bash #

   ceph cephadm config-check status

This command returns the status of the configuration checker as either "Enabled" or "Disabled".

To list all the configuration checks and their current states, run the following command:

.. code-block:: console

   # ceph cephadm config-check ls

     NAME             HEALTHCHECK                      STATUS   DESCRIPTION
     kernel_security  CEPHADM_CHECK_KERNEL_LSM         enabled  checks SELINUX/Apparmor profiles are consistent across cluster hosts
     os_subscription  CEPHADM_CHECK_SUBSCRIPTION       enabled  checks subscription states are consistent for all cluster hosts
     public_network   CEPHADM_CHECK_PUBLIC_MEMBERSHIP  enabled  check that all hosts have a NIC on the Ceph public_network
     osd_mtu_size     CEPHADM_CHECK_MTU                enabled  check that OSD hosts share a common MTU setting
     osd_linkspeed    CEPHADM_CHECK_LINKSPEED          enabled  check that OSD hosts share a common linkspeed
     network_missing  CEPHADM_CHECK_NETWORK_MISSING    enabled  checks that the cluster/public networks defined exist on the Ceph hosts
     ceph_release     CEPHADM_CHECK_CEPH_RELEASE       enabled  check for Ceph version consistency - ceph daemons should be on the same release (unless upgrade is active)
     kernel_version   CEPHADM_CHECK_KERNEL_VERSION     enabled  checks that the MAJ.MIN of the kernel on Ceph hosts is consistent

The name of each configuration check can be used to enable or disable a
specific check by running a command of the following form:

.. prompt:: bash #

   ceph cephadm config-check disable <name>

For example:

.. prompt:: bash #

   ceph cephadm config-check disable kernel_security
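
A disabled check can be switched back on with the corresponding ``enable``
subcommand (shown here for the check disabled above):

.. prompt:: bash #

   ceph cephadm config-check enable kernel_security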

CEPHADM_CHECK_KERNEL_LSM
~~~~~~~~~~~~~~~~~~~~~~~~

Each host within the cluster is expected to operate within the same Linux
Security Module (LSM) state. For example, if the majority of the hosts are
running with SELinux in enforcing mode, any host not running in this mode is
flagged as an anomaly, and a healthcheck in a WARNING state is raised.

CEPHADM_CHECK_SUBSCRIPTION
~~~~~~~~~~~~~~~~~~~~~~~~~~

This check relates to the status of vendor subscription. This check is
performed only for hosts using RHEL, but helps to confirm that all hosts are
covered by an active subscription, which ensures that patches and updates are
available.

CEPHADM_CHECK_PUBLIC_MEMBERSHIP
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

All members of the cluster should have NICs configured on at least one of the
public network subnets. Hosts that are not on the public network will rely on
routing, which may affect performance.

CEPHADM_CHECK_MTU
~~~~~~~~~~~~~~~~~

The MTU of the NICs on OSDs can be a key factor in consistent performance. This
check examines hosts that are running OSD services to ensure that the MTU is
configured consistently within the cluster. This is determined by establishing
the MTU setting that the majority of hosts are using. Any anomalies result in a
Ceph health check.

CEPHADM_CHECK_LINKSPEED
~~~~~~~~~~~~~~~~~~~~~~~

This check is similar to the MTU check. Linkspeed consistency is a factor in
consistent cluster performance, just as the MTU of the NICs on the OSDs is.
This check determines the linkspeed shared by the majority of OSD hosts, and a
health check is run for any hosts that are set at a lower linkspeed rate.

CEPHADM_CHECK_NETWORK_MISSING
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The ``public_network`` and ``cluster_network`` settings support subnet definitions
for IPv4 and IPv6. If these settings are not found on any host in the cluster,
a health check is raised.

CEPHADM_CHECK_CEPH_RELEASE
~~~~~~~~~~~~~~~~~~~~~~~~~~

Under normal operations, all daemons in the Ceph cluster run the same Ceph
release (for example, Octopus). This check determines the active release for
each daemon, and reports any anomalies as a healthcheck. *This check is
bypassed if an upgrade process is active within the cluster.*

CEPHADM_CHECK_KERNEL_VERSION
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The OS kernel version (maj.min) is checked for consistency across the hosts.
The kernel version of the majority of the hosts is used as the basis for
identifying anomalies.

.. _client_keyrings_and_configs:

Client keyrings and configs
===========================

Cephadm can distribute copies of the ``ceph.conf`` file and client keyring
files to hosts. Starting from versions 16.2.10 (Pacific) and 17.2.1 (Quincy),
in addition to the default location ``/etc/ceph/``, cephadm also stores config
and keyring files in the ``/var/lib/ceph/<fsid>/config`` directory. It is usually
a good idea to store a copy of the config and ``client.admin`` keyring on any host
used to administer the cluster via the CLI. By default, cephadm does this for any
nodes that have the ``_admin`` label (which normally includes the bootstrap host).

.. note:: Ceph daemons will still use files on ``/etc/ceph/``. The new configuration
   location ``/var/lib/ceph/<fsid>/config`` is used by cephadm only. Having this config
   directory under the fsid helps cephadm to load the configuration associated with
   the cluster.

When a client keyring is placed under management, cephadm will:

  - build a list of target hosts based on the specified placement spec (see
    :ref:`orchestrator-cli-placement-spec`)
  - store a copy of the ``/etc/ceph/ceph.conf`` file on the specified host(s)
  - store a copy of the ``ceph.conf`` file at ``/var/lib/ceph/<fsid>/config/ceph.conf`` on the specified host(s)
  - store a copy of the ``ceph.client.admin.keyring`` file at ``/var/lib/ceph/<fsid>/config/ceph.client.admin.keyring`` on the specified host(s)
  - store a copy of the keyring file on the specified host(s)
  - update the ``ceph.conf`` file as needed (e.g., due to a change in the cluster monitors)
  - update the keyring file if the entity's key is changed (e.g., via ``ceph
    auth ...`` commands)
  - ensure that the keyring file has the specified ownership and specified mode
  - remove the keyring file when client keyring management is disabled
  - remove the keyring file from old hosts if the keyring placement spec is
    updated (as needed)

Listing Client Keyrings
-----------------------

To see the list of client keyrings that are currently under management, run the following command:

.. prompt:: bash #

   ceph orch client-keyring ls

Putting a Keyring Under Management
----------------------------------

To put a keyring under management, run a command of the following form:

.. prompt:: bash #

   ceph orch client-keyring set <entity> <placement> [--mode=<mode>] [--owner=<uid>.<gid>] [--path=<path>]

- By default, the *path* is ``/etc/ceph/client.{entity}.keyring``, which is
  where Ceph looks by default. Be careful when specifying alternate locations,
  as existing files may be overwritten.
- A placement of ``*`` (all hosts) is common.
- The mode defaults to ``0600`` and ownership to ``0:0`` (user root, group root).

For example, to create a ``client.rbd`` key and deploy it to hosts with the
``rbd-client`` label and make it group readable by uid/gid 107 (qemu), run the
following commands:

.. prompt:: bash #

   ceph auth get-or-create-key client.rbd mon 'profile rbd' mgr 'profile rbd' osd 'profile rbd pool=my_rbd_pool'
   ceph orch client-keyring set client.rbd label:rbd-client --owner 107:107 --mode 640

The resulting keyring file is:

.. code-block:: console

   -rw-r-----. 1 qemu qemu 156 Apr 21 08:47 /etc/ceph/client.client.rbd.keyring

Disabling Management of a Keyring File
---------------------------------------

To disable management of a keyring file, run a command of the following form:

.. prompt:: bash #

   ceph orch client-keyring rm <entity>

.. note::

   This deletes any keyring files for this entity that were previously written
   to cluster nodes.

.. _etc_ceph_conf_distribution:

/etc/ceph/ceph.conf
===================

Distributing ceph.conf to hosts that have no keyrings
------------------------------------------------------

It might be useful to distribute ``ceph.conf`` files to hosts without an
associated client keyring file. By default, cephadm deploys only a
``ceph.conf`` file to hosts where a client keyring is also distributed (see
above). To write config files to hosts without client keyrings, run the
following command:

.. prompt:: bash #

   ceph config set mgr mgr/cephadm/manage_etc_ceph_ceph_conf true

Using Placement Specs to specify which hosts get config files
---------------------------------------------------------------

By default, the configs are written to all hosts (i.e., those listed by ``ceph
orch host ls``). To specify which hosts get a ``ceph.conf``, run a command of
the following form:

.. prompt:: bash #

   ceph config set mgr mgr/cephadm/manage_etc_ceph_ceph_conf_hosts <placement spec>

Distributing ceph.conf to hosts tagged with bare_config
---------------------------------------------------------

For example, to distribute configs to hosts with the ``bare_config`` label, run the following command:

.. prompt:: bash #

   ceph config set mgr mgr/cephadm/manage_etc_ceph_ceph_conf_hosts label:bare_config

(See :ref:`orchestrator-cli-placement-spec` for more information about placement specs.)

Purging a cluster
=================

.. danger:: THIS OPERATION WILL DESTROY ALL DATA STORED IN THIS CLUSTER

To destroy a cluster and delete all of the data stored in it, first disable
cephadm so that all orchestration operations stop (this prevents cephadm from
deploying new daemons):

.. prompt:: bash #

   ceph mgr module disable cephadm

Then verify the FSID of the cluster:

.. prompt:: bash #

   ceph fsid

Purge Ceph daemons from all hosts in the cluster:

.. prompt:: bash #

   # For each host:
   cephadm rm-cluster --force --zap-osds --fsid <fsid>
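
Because ``cephadm rm-cluster`` only cleans up the host on which it is run, it
must be repeated on every host in the cluster. A minimal sketch, assuming
passwordless SSH access as root and placeholder host names:

.. code-block:: bash

   for host in host1 host2 host3; do
       ssh root@$host cephadm rm-cluster --force --zap-osds --fsid <fsid>
   done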