10 There is a finite set of possible health messages that a Ceph cluster can
11 raise -- these are defined as *health checks* which have unique identifiers.
13 The identifier is a terse pseudo-human-readable (i.e. like a variable name)
14 string. It is intended to enable tools (such as UIs) to make sense of
15 health checks, and present them in a way that reflects their meaning.
17 This page lists the health checks that are raised by the monitor and manager
18 daemons. In addition to these, you may also see health checks that originate
19 from MDS daemons (see :ref:`cephfs-health-messages`), and health checks
20 that are defined by ceph-mgr python modules.
31 One or more monitor daemons is currently down. The cluster requires a
32 majority (more than 1/2) of the monitors in order to function. When
33 one or more monitors are down, clients may have a harder time forming
34 their initial connection to the cluster as they may need to try more
35 addresses before they reach an operating monitor.
The down monitor daemon should generally be restarted as soon as
possible to reduce the risk of a subsequent monitor failure leading to
a service outage.
44 The clocks on the hosts running the ceph-mon monitor daemons are not
45 sufficiently well synchronized. This health alert is raised if the
46 cluster detects a clock skew greater than ``mon_clock_drift_allowed``.
48 This is best resolved by synchronizing the clocks using a tool like
49 ``ntpd`` or ``chrony``.
51 If it is impractical to keep the clocks closely synchronized, the
52 ``mon_clock_drift_allowed`` threshold can also be increased, but this
value must stay significantly below the ``mon_lease`` interval in
order for the monitor cluster to function properly.
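The skew check can be sketched as follows. This is an illustration, not Ceph source code; the defaults shown (0.05 s drift allowance, 5 s lease) are assumptions that should be verified with ``ceph config help mon_clock_drift_allowed``.

```python
# Illustrative sketch of the clock-skew health check (not Ceph source).
# A monitor's clock offset relative to the quorum is compared against
# mon_clock_drift_allowed; that threshold must stay well below mon_lease.
MON_CLOCK_DRIFT_ALLOWED = 0.05  # seconds (assumed default)
MON_LEASE = 5.0                 # seconds (assumed default)

def clock_skew_warning(offset_seconds: float,
                       allowed: float = MON_CLOCK_DRIFT_ALLOWED) -> bool:
    """Return True if the absolute clock offset exceeds the allowed drift."""
    return abs(offset_seconds) > allowed
```

Note that raising the allowance toward ``MON_LEASE`` would defeat the purpose of the check, which is why the text above recommends keeping it significantly smaller.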
59 The ``ms_bind_msgr2`` option is enabled but one or more monitors is
60 not configured to bind to a v2 port in the cluster's monmap. This
61 means that features specific to the msgr2 protocol (e.g., encryption)
62 are not available on some or all connections.
In most cases this can be corrected by issuing the command::

  ceph mon enable-msgr2
68 That command will change any monitor configured for the old default
69 port 6789 to continue to listen for v1 connections on 6789 and also
70 listen for v2 connections on the new default 3300 port.
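The resulting address pair a monitor advertises can be sketched with a small helper. This is a hypothetical illustration of the address format only; the ports (3300 for v2, 6789 for v1) are the defaults described above.

```python
# Hypothetical helper showing the v1/v2 address pair a monitor advertises
# once it listens on both protocols: v2 on the new default port 3300,
# v1 on the old default port 6789.
def msgr_addrs(ip: str) -> list:
    return [f"v2:{ip}:3300", f"v1:{ip}:6789"]
```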
If a monitor is configured to listen for v1 connections on a non-standard
port (not 6789), then the monmap will need to be modified manually.
78 One or more monitors is low on disk space. This alert triggers if the
79 available space on the file system storing the monitor database
80 (normally ``/var/lib/ceph/mon``), as a percentage, drops below
81 ``mon_data_avail_warn`` (default: 30%).
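The two disk-space alerts described here and under ``MON_DISK_CRIT`` below follow the same percentage comparison, which can be sketched as follows (an illustration, not Ceph source; the 30% and 5% defaults are taken from the text above):

```python
# Illustrative sketch (not Ceph source): MON_DISK_LOW fires when the
# percentage of free space on the monitor's data file system drops below
# mon_data_avail_warn (default 30%); MON_DISK_CRIT uses
# mon_data_avail_crit (default 5%).
def mon_disk_alert(avail_bytes, total_bytes, warn_pct=30, crit_pct=5):
    pct = 100 * avail_bytes / total_bytes
    if pct < crit_pct:
        return "MON_DISK_CRIT"
    if pct < warn_pct:
        return "MON_DISK_LOW"
    return None
```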
This may indicate that some other process or user on the system is
filling up the same file system used by the monitor. It may also
indicate that the monitor's database is large (see ``MON_DISK_BIG``
below).
88 If space cannot be freed, the monitor's data directory may need to be
89 moved to another storage device or file system (while the monitor
90 daemon is not running, of course).
96 One or more monitors is critically low on disk space. This alert
97 triggers if the available space on the file system storing the monitor
98 database (normally ``/var/lib/ceph/mon``), as a percentage, drops
99 below ``mon_data_avail_crit`` (default: 5%). See ``MON_DISK_LOW``, above.
104 The database size for one or more monitors is very large. This alert
105 triggers if the size of the monitor's database is larger than
106 ``mon_data_size_warn`` (default: 15 GiB).
108 A large database is unusual, but may not necessarily indicate a
109 problem. Monitor databases may grow in size when there are placement
110 groups that have not reached an ``active+clean`` state in a long time.
112 This may also indicate that the monitor's database is not properly
113 compacting, which has been observed with some older versions of
114 leveldb and rocksdb. Forcing a compaction with ``ceph daemon mon.<id>
115 compact`` may shrink the on-disk size.
117 This warning may also indicate that the monitor has a bug that is
118 preventing it from pruning the cluster metadata it stores. If the
119 problem persists, please report a bug.
121 The warning threshold may be adjusted with::
123 ceph config set global mon_data_size_warn <size>
125 AUTH_INSECURE_GLOBAL_ID_RECLAIM
126 _______________________________
128 One or more clients or daemons are connected to the cluster that are
129 not securely reclaiming their global_id (a unique number identifying
130 each entity in the cluster) when reconnecting to a monitor. The
131 client is being permitted to connect anyway because the
``auth_allow_insecure_global_id_reclaim`` option is set to ``true`` (which may
be necessary until all Ceph clients have been upgraded), and because the
``auth_expose_insecure_global_id_reclaim`` option is set to ``true`` (which
allows monitors to detect clients with insecure reclaim early by forcing them
to reconnect right after they first authenticate).
You can identify which client(s) are using unpatched Ceph client code with::

  ceph health detail
Clients' global_id reclaim behavior can also be seen in the
``global_id_status`` field in the dump of clients connected to an
individual monitor (``reclaim_insecure`` means the client is
unpatched and is contributing to this health alert)::
147 ceph tell mon.\* sessions
We strongly recommend that all clients in the system be upgraded to a
newer version of Ceph that correctly reclaims global_id values. Once
all clients have been updated, you can stop allowing insecure reconnections
with::
154 ceph config set mon auth_allow_insecure_global_id_reclaim false
156 If it is impractical to upgrade all clients immediately, you can silence
157 this warning temporarily with::
159 ceph health mute AUTH_INSECURE_GLOBAL_ID_RECLAIM 1w # 1 week
Although we do NOT recommend doing so, you can also disable this warning
indefinitely with::
164 ceph config set mon mon_warn_on_insecure_global_id_reclaim false
166 AUTH_INSECURE_GLOBAL_ID_RECLAIM_ALLOWED
167 _______________________________________
169 Ceph is currently configured to allow clients to reconnect to monitors using
170 an insecure process to reclaim their previous global_id because the setting
171 ``auth_allow_insecure_global_id_reclaim`` is set to ``true``. It may be necessary to
172 leave this setting enabled while existing Ceph clients are upgraded to newer
173 versions of Ceph that correctly and securely reclaim their global_id.
175 If the ``AUTH_INSECURE_GLOBAL_ID_RECLAIM`` health alert has not also been raised and
176 the ``auth_expose_insecure_global_id_reclaim`` setting has not been disabled (it is
177 on by default), then there are currently no clients connected that need to be
178 upgraded, and it is safe to disallow insecure global_id reclaim with::
180 ceph config set mon auth_allow_insecure_global_id_reclaim false
182 If there are still clients that need to be upgraded, then this alert can be
183 silenced temporarily with::
185 ceph health mute AUTH_INSECURE_GLOBAL_ID_RECLAIM_ALLOWED 1w # 1 week
Although we do NOT recommend doing so, you can also disable this warning
indefinitely with::
190 ceph config set mon mon_warn_on_insecure_global_id_reclaim_allowed false
199 All manager daemons are currently down. The cluster should normally
200 have at least one running manager (``ceph-mgr``) daemon. If no
201 manager daemon is running, the cluster's ability to monitor itself will
202 be compromised, and parts of the management API will become
203 unavailable (for example, the dashboard will not work, and most CLI
204 commands that report metrics or runtime state will block). However,
205 the cluster will still be able to perform all IO operations and
206 recover from failures.
208 The down manager daemon should generally be restarted as soon as
209 possible to ensure that the cluster can be monitored (e.g., so that
210 the ``ceph -s`` information is up to date, and/or metrics can be
211 scraped by Prometheus).
214 MGR_MODULE_DEPENDENCY
215 _____________________
217 An enabled manager module is failing its dependency check. This health check
218 should come with an explanatory message from the module about the problem.
220 For example, a module might report that a required package is not installed:
221 install the required package and restart your manager daemons.
This health check is only applied to enabled modules. If a module is
not enabled, you can see whether it is reporting dependency issues in
the output of ``ceph mgr module ls``.
A manager module has experienced an unexpected error. Typically,
this means an unhandled exception was raised from the module's ``serve``
function. The human-readable description of the error may be obscurely
worded if the exception did not provide a useful description of itself.
236 This health check may indicate a bug: please open a Ceph bug report if you
237 think you have encountered a bug.
239 If you believe the error is transient, you may restart your manager
240 daemon(s), or use `ceph mgr fail` on the active daemon to prompt
241 a failover to another daemon.
One or more OSDs are marked down. The ceph-osd daemon may have been
stopped, or peer OSDs may be unable to reach the OSD over the network.
Common causes include a stopped or crashed daemon, a down host, or a
network outage.

Verify that the host is healthy, the daemon is started, and the network is
functioning. If the daemon has crashed, the daemon log file
(``/var/log/ceph/ceph-osd.*``) may contain debugging information.
259 OSD_<crush type>_DOWN
260 _____________________
262 (e.g. OSD_HOST_DOWN, OSD_ROOT_DOWN)
All the OSDs within a particular CRUSH subtree are marked down, for example
all of the OSDs on a host.
270 An OSD is referenced in the CRUSH map hierarchy but does not exist.
272 The OSD can be removed from the CRUSH hierarchy with::
274 ceph osd crush rm osd.<id>
276 OSD_OUT_OF_ORDER_FULL
277 _____________________
The utilization thresholds for `nearfull`, `backfillfull`, `full`,
and/or `failsafe_full` are not ascending. In particular, we expect
`nearfull < backfillfull`, `backfillfull < full`, and `full <
failsafe_full`.
284 The thresholds can be adjusted with::
286 ceph osd set-nearfull-ratio <ratio>
287 ceph osd set-backfillfull-ratio <ratio>
288 ceph osd set-full-ratio <ratio>
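The required ordering is a single strict chain, which can be sketched as follows. The default ratios in the test are assumptions (commonly 0.85, 0.90, 0.95, and 0.97); verify them against your cluster with ``ceph osd dump``.

```python
# Sketch of the invariant behind OSD_OUT_OF_ORDER_FULL (not Ceph source):
# the four utilization thresholds must strictly ascend.
def ratios_in_order(nearfull, backfillfull, full, failsafe_full):
    return nearfull < backfillfull < full < failsafe_full
```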
294 One or more OSDs has exceeded the `full` threshold and is preventing
295 the cluster from servicing writes.
Utilization by pool can be checked with::

  ceph df
301 The currently defined `full` ratio can be seen with::
303 ceph osd dump | grep full_ratio
305 A short-term workaround to restore write availability is to raise the full
306 threshold by a small amount::
308 ceph osd set-full-ratio <ratio>
310 New storage should be added to the cluster by deploying more OSDs or
311 existing data should be deleted in order to free up space.
316 One or more OSDs has exceeded the `backfillfull` threshold, which will
317 prevent data from being allowed to rebalance to this device. This is
318 an early warning that rebalancing may not be able to complete and that
319 the cluster is approaching full.
Utilization by pool can be checked with::

  ceph df
328 One or more OSDs has exceeded the `nearfull` threshold. This is an early
329 warning that the cluster is approaching full.
Utilization by pool can be checked with::

  ceph df
338 One or more cluster flags of interest has been set. These flags include:
340 * *full* - the cluster is flagged as full and cannot serve writes
341 * *pauserd*, *pausewr* - paused reads or writes
342 * *noup* - OSDs are not allowed to start
343 * *nodown* - OSD failure reports are being ignored, such that the
344 monitors will not mark OSDs `down`
345 * *noin* - OSDs that were previously marked `out` will not be marked
346 back `in` when they start
* *noout* - down OSDs will not automatically be marked out after the
  configured interval
349 * *nobackfill*, *norecover*, *norebalance* - recovery or data
350 rebalancing is suspended
351 * *noscrub*, *nodeep_scrub* - scrubbing is disabled
352 * *notieragent* - cache tiering activity is suspended
With the exception of *full*, these flags can be set or cleared with::

  ceph osd set <flag>
  ceph osd unset <flag>
One or more OSDs or CRUSH {nodes, device classes} has a flag of interest set.
These flags include:

365 * *noup*: these OSDs are not allowed to start
366 * *nodown*: failure reports for these OSDs will be ignored
* *noin*: if these OSDs were previously marked `out` automatically
  after a failure, they will not be marked `in` when they start
369 * *noout*: if these OSDs are down they will not automatically be marked
370 `out` after the configured interval
These flags can be set and cleared in batch with::

  ceph osd set-group <flags> <who>
  ceph osd unset-group <flags> <who>

For example::

  ceph osd set-group noup,noout osd.0 osd.1
  ceph osd unset-group noup,noout osd.0 osd.1
  ceph osd set-group noup,noout host-foo
  ceph osd unset-group noup,noout host-foo
  ceph osd set-group noup,noout class-hdd
  ceph osd unset-group noup,noout class-hdd
389 The CRUSH map is using very old settings and should be updated. The
oldest tunables that can be used (i.e., the oldest client version that
can connect to the cluster) without triggering this health warning are
determined by the ``mon_crush_min_required_version`` config option.
393 See :ref:`crush-map-tunables` for more information.
395 OLD_CRUSH_STRAW_CALC_VERSION
396 ____________________________
398 The CRUSH map is using an older, non-optimal method for calculating
399 intermediate weight values for ``straw`` buckets.
401 The CRUSH map should be updated to use the newer method
402 (``straw_calc_version=1``). See
403 :ref:`crush-map-tunables` for more information.
405 CACHE_POOL_NO_HIT_SET
406 _____________________
408 One or more cache pools is not configured with a *hit set* to track
409 utilization, which will prevent the tiering agent from identifying
410 cold objects to flush and evict from the cache.
412 Hit sets can be configured on the cache pool with::
414 ceph osd pool set <poolname> hit_set_type <type>
415 ceph osd pool set <poolname> hit_set_period <period-in-seconds>
416 ceph osd pool set <poolname> hit_set_count <number-of-hitsets>
417 ceph osd pool set <poolname> hit_set_fpp <target-false-positive-rate>
No pre-luminous v12.y.z OSDs are running but the ``sortbitwise`` flag has not
been set.

The ``sortbitwise`` flag must be set before luminous v12.y.z or newer
OSDs can start. You can safely set the flag with::
428 ceph osd set sortbitwise
433 One or more pools has reached its quota and is no longer allowing writes.
Pool quotas and utilization can be seen with::

  ceph df detail
439 You can either raise the pool quota with::
441 ceph osd pool set-quota <poolname> max_objects <num-objects>
442 ceph osd pool set-quota <poolname> max_bytes <num-bytes>
444 or delete some existing data to reduce utilization.
449 One or more OSDs that use the BlueStore backend have been allocated
450 `db` partitions (storage space for metadata, normally on a faster
451 device) but that space has filled, such that metadata has "spilled
452 over" onto the normal slow device. This isn't necessarily an error
453 condition or even unexpected, but if the administrator's expectation
454 was that all metadata would fit on the faster device, it indicates
455 that not enough space was provided.
457 This warning can be disabled on all OSDs with::
459 ceph config set osd bluestore_warn_on_bluefs_spillover false
461 Alternatively, it can be disabled on a specific OSD with::
463 ceph config set osd.123 bluestore_warn_on_bluefs_spillover false
465 To provide more metadata space, the OSD in question could be destroyed and
466 reprovisioned. This will involve data migration and recovery.
468 It may also be possible to expand the LVM logical volume backing the
469 `db` storage. If the underlying LV has been expanded, the OSD daemon
470 needs to be stopped and BlueFS informed of the device size change with::
472 ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-$ID
474 BLUEFS_AVAILABLE_SPACE
475 ______________________
477 To check how much space is free for BlueFS do::
479 ceph daemon osd.123 bluestore bluefs available
This will output up to 3 values: `BDEV_DB free`, `BDEV_SLOW free` and
`available_from_bluestore`. `BDEV_DB` and `BDEV_SLOW` report the amount of
space that has been acquired by BlueFS and is considered free. The value
`available_from_bluestore` denotes the ability of BlueStore to relinquish
more space to BlueFS. It is normal for this value to differ from the amount
of BlueStore free space, as the BlueFS allocation unit is typically larger
than the BlueStore allocation unit. This means that only part of the
BlueStore free space will be acceptable for BlueFS.
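The effect of the larger allocation unit can be sketched numerically: each free extent only contributes whole BlueFS allocation units, so small fragments contribute nothing. This is an illustration of the rounding behavior described above, not BlueStore's actual accounting code.

```python
# Hypothetical illustration of why `available_from_bluestore` can be
# smaller than BlueStore's raw free space: each free extent is rounded
# down to a whole number of BlueFS allocation units.
def usable_for_bluefs(free_extent_lengths, alloc_unit):
    """Sum of each free extent rounded down to a multiple of alloc_unit."""
    return sum((length // alloc_unit) * alloc_unit
               for length in free_extent_lengths)
```

For example, with a 64 KiB allocation unit a 4 KiB fragment is unusable by BlueFS even though BlueStore counts it as free, which is why reducing the allocation unit can expose more space.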
If BlueFS is running low on available free space and there is little
`available_from_bluestore`, one can consider reducing the BlueFS allocation
unit size. To simulate the available space when the allocation unit is
different, run::
496 ceph daemon osd.123 bluestore bluefs available <alloc-unit-size>
498 BLUESTORE_FRAGMENTATION
499 _______________________
As BlueStore operates, free space on the underlying storage becomes
fragmented. This is normal and unavoidable, but excessive fragmentation
causes slowdown. To inspect BlueStore fragmentation, run::
505 ceph daemon osd.123 bluestore allocator score block
The score is given in the range [0, 1]:

* [0.0 .. 0.4] tiny fragmentation
* [0.4 .. 0.7] small, acceptable fragmentation
* [0.7 .. 0.9] considerable, but safe fragmentation
* [0.9 .. 1.0] severe fragmentation, which may impact the ability of BlueFS to get space from BlueStore
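The score bands above can be mirrored in a small helper for use in monitoring scripts. This is an illustration of the interpretation table only, not Ceph code.

```python
# Helper mirroring the fragmentation-score bands documented above
# (an illustration for monitoring scripts, not Ceph source).
def fragmentation_band(score):
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must be in [0, 1]")
    if score < 0.4:
        return "tiny"
    if score < 0.7:
        return "small, acceptable"
    if score < 0.9:
        return "considerable, but safe"
    return "severe"
```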
If a detailed report of free fragments is required, run::
515 ceph daemon osd.123 bluestore allocator dump block
If the OSD process is not running, fragmentation can be inspected with
``ceph-bluestore-tool``. To get the fragmentation score::
521 ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-123 --allocator block free-score
To dump detailed free chunks::
525 ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-123 --allocator block free-dump
527 BLUESTORE_LEGACY_STATFS
528 _______________________
In the Nautilus release, BlueStore tracks its internal usage
statistics on a per-pool basis, and one or more OSDs have
532 BlueStore volumes that were created prior to Nautilus. If *all* OSDs
533 are older than Nautilus, this just means that the per-pool metrics are
534 not available. However, if there is a mix of pre-Nautilus and
535 post-Nautilus OSDs, the cluster usage statistics reported by ``ceph
536 df`` will not be accurate.
The old OSDs can be updated to use the new usage tracking scheme by
stopping each OSD, running a repair operation, and then restarting it.
For example, if ``osd.123`` needed to be updated::
540 systemctl stop ceph-osd@123
541 ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-123
542 systemctl start ceph-osd@123
544 This warning can be disabled with::
546 ceph config set global bluestore_warn_on_legacy_statfs false
548 BLUESTORE_NO_PER_POOL_OMAP
549 __________________________
551 Starting with the Octopus release, BlueStore tracks omap space utilization
552 by pool, and one or more OSDs have volumes that were created prior to
Octopus. If not all OSDs are running BlueStore with the new tracking
enabled, the cluster will report an approximate value for per-pool omap
usage based on the most recent deep scrub.

The old OSDs can be updated to track by pool by stopping each OSD,
running a repair operation, and then restarting it. For example, if
``osd.123`` needed to be updated::
561 systemctl stop ceph-osd@123
562 ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-123
563 systemctl start ceph-osd@123
565 This warning can be disabled with::
567 ceph config set global bluestore_warn_on_no_per_pool_omap false
570 BLUESTORE_DISK_SIZE_MISMATCH
571 ____________________________
573 One or more OSDs using BlueStore has an internal inconsistency between the size
574 of the physical device and the metadata tracking its size. This can lead to
575 the OSD crashing in the future.
The OSDs in question should be destroyed and reprovisioned. Care should be
taken to do this one OSD at a time, and in a way that doesn't put any data at
risk. For example, if OSD ``$N`` has the error::
582 while ! ceph osd safe-to-destroy osd.$N ; do sleep 1m ; done
583 ceph osd destroy osd.$N
584 ceph-volume lvm zap /path/to/device
585 ceph-volume lvm create --osd-id $N --data /path/to/device
587 BLUESTORE_NO_COMPRESSION
588 ________________________
590 One or more OSDs is unable to load a BlueStore compression plugin.
591 This can be caused by a broken installation, in which the ``ceph-osd``
592 binary does not match the compression plugins, or a recent upgrade
593 that did not include a restart of the ``ceph-osd`` daemon.
595 Verify that the package(s) on the host running the OSD(s) in question
596 are correctly installed and that the OSD daemon(s) have been
597 restarted. If the problem persists, check the OSD log for any clues
598 as to the source of the problem.
One or more devices is expected to fail soon, where the warning
threshold is controlled by the ``mgr/devicehealth/warn_threshold``
config option.
612 This warning only applies to OSDs that are currently marked "in", so
613 the expected response to this failure is to mark the device "out" so
614 that data is migrated off of the device, and then to remove the
615 hardware from the system. Note that the marking out is normally done
616 automatically if ``mgr/devicehealth/self_heal`` is enabled based on
617 the ``mgr/devicehealth/mark_out_threshold``.
619 Device health can be checked with::
621 ceph device info <device-id>
Device life expectancy is set by a prediction model run by
the mgr or by an external tool via the command::
626 ceph device set-life-expectancy <device-id> <from> <to>
628 You can change the stored life expectancy manually, but that usually
629 doesn't accomplish anything as whatever tool originally set it will
630 probably set it again, and changing the stored value does not affect
631 the actual health of the hardware device.
One or more devices is expected to fail soon and has been marked "out"
of the cluster based on ``mgr/devicehealth/mark_out_threshold``, but it
is still participating in one or more PGs. This may be because it was
only recently marked "out" and data is still migrating, or because data
cannot be migrated off for some reason (e.g., the cluster is nearly
full, or the CRUSH hierarchy is such that there isn't another suitable
OSD to migrate the data to).
644 This message can be silenced by disabling the self heal behavior
645 (setting ``mgr/devicehealth/self_heal`` to false), by adjusting the
646 ``mgr/devicehealth/mark_out_threshold``, or by addressing what is
647 preventing data from being migrated off of the ailing device.
649 DEVICE_HEALTH_TOOMANY
650 _____________________
Too many devices are expected to fail soon, and the
``mgr/devicehealth/self_heal`` behavior is enabled, such that marking
out all of the ailing devices would exceed the cluster's
``mon_osd_min_in_ratio`` ratio, which prevents too many OSDs from being
automatically marked "out".
658 This generally indicates that too many devices in your cluster are
659 expected to fail soon and you should take action to add newer
660 (healthier) devices before too many devices fail and data is lost.
662 The health message can also be silenced by adjusting parameters like
663 ``mon_osd_min_in_ratio`` or ``mgr/devicehealth/mark_out_threshold``,
664 but be warned that this will increase the likelihood of unrecoverable
665 data loss in the cluster.
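The ratio guard described above can be sketched as follows. This is an illustration, not Ceph source code, and the 0.75 default for ``mon_osd_min_in_ratio`` is an assumption to verify with ``ceph config help mon_osd_min_in_ratio``.

```python
# Sketch of the mon_osd_min_in_ratio guard (not Ceph source): marking
# ailing OSDs "out" is refused if the fraction of "in" OSDs would fall
# below the configured minimum ratio.
def can_mark_out(num_osds, num_in, num_to_mark_out, min_in_ratio=0.75):
    return (num_in - num_to_mark_out) / num_osds >= min_in_ratio
```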
668 Data health (pools & placement groups)
669 --------------------------------------
674 Data availability is reduced, meaning that the cluster is unable to
675 service potential read or write requests for some data in the cluster.
676 Specifically, one or more PGs is in a state that does not allow IO
677 requests to be serviced. Problematic PG states include *peering*,
*stale*, *incomplete*, and the lack of *active* (if those conditions do not
clear quickly).
Detailed information about which PGs are affected is available from::

  ceph health detail
685 In most cases the root cause is that one or more OSDs is currently
686 down; see the discussion for ``OSD_DOWN`` above.
688 The state of specific problematic PGs can be queried with::
690 ceph tell <pgid> query
695 Data redundancy is reduced for some data, meaning the cluster does not
696 have the desired number of replicas for all data (for replicated
697 pools) or erasure code fragments (for erasure coded pools).
698 Specifically, one or more PGs:
700 * has the *degraded* or *undersized* flag set, meaning there are not
701 enough instances of that placement group in the cluster;
702 * has not had the *clean* flag set for some time.
Detailed information about which PGs are affected is available from::

  ceph health detail
In most cases the root cause is that one or more OSDs is currently
down; see the discussion for ``OSD_DOWN`` above.
711 The state of specific problematic PGs can be queried with::
713 ceph tell <pgid> query
719 Data redundancy may be reduced or at risk for some data due to a lack
720 of free space in the cluster. Specifically, one or more PGs has the
721 *recovery_toofull* flag set, meaning that the
722 cluster is unable to migrate or recover data because one or more OSDs
723 is above the *full* threshold.
725 See the discussion for *OSD_FULL* above for steps to resolve this condition.
730 Data redundancy may be reduced or at risk for some data due to a lack
731 of free space in the cluster. Specifically, one or more PGs has the
732 *backfill_toofull* flag set, meaning that the
733 cluster is unable to migrate or recover data because one or more OSDs
734 is above the *backfillfull* threshold.
736 See the discussion for *OSD_BACKFILLFULL* above for
737 steps to resolve this condition.
Data scrubbing has discovered some problems with data consistency in
the cluster. Specifically, one or more PGs has the *inconsistent* or
*snaptrim_error* flag set, indicating that an earlier scrub operation
found a problem, or has the *repair* flag set, meaning a repair
for such an inconsistency is currently in progress.
748 See :doc:`pg-repair` for more information.
753 Recent OSD scrubs have uncovered inconsistencies. This error is generally
754 paired with *PG_DAMAGED* (see above).
756 See :doc:`pg-repair` for more information.
When a read error occurs and another replica is available, it is used to
repair the error immediately, so that the client can get the object data.
Scrub handles errors for data at rest. In order to identify possible failing
disks that aren't seeing scrub errors, a count of read repairs is maintained.
If it exceeds the configured threshold ``mon_osd_warn_num_repaired``
(default: 10), this health warning is generated.
771 One or more pools contain large omap objects as determined by
772 ``osd_deep_scrub_large_omap_object_key_threshold`` (threshold for number of keys
773 to determine a large omap object) or
774 ``osd_deep_scrub_large_omap_object_value_sum_threshold`` (the threshold for
775 summed size (bytes) of all key values to determine a large omap object) or both.
776 More information on the object name, key count, and size in bytes can be found
777 by searching the cluster log for 'Large omap object found'. Large omap objects
778 can be caused by RGW bucket index objects that do not have automatic resharding
779 enabled. Please see :ref:`RGW Dynamic Bucket Index Resharding
780 <rgw_dynamic_bucket_index_resharding>` for more information on resharding.
782 The thresholds can be adjusted with::
784 ceph config set osd osd_deep_scrub_large_omap_object_key_threshold <keys>
785 ceph config set osd osd_deep_scrub_large_omap_object_value_sum_threshold <bytes>
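The "large omap object" condition is a simple either/or comparison against the two thresholds, which can be sketched as follows. This is an illustration, not Ceph source; the defaults used (200,000 keys and 1 GiB of summed value bytes) are assumptions to verify with ``ceph config help``.

```python
# Illustrative sketch (not Ceph source): an object is flagged as a large
# omap object if either its key count or the summed size of its values
# crosses the corresponding threshold.
def is_large_omap(key_count, value_bytes,
                  key_threshold=200_000,        # assumed default
                  bytes_threshold=1 << 30):     # assumed default: 1 GiB
    return key_count > key_threshold or value_bytes > bytes_threshold
```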
790 A cache tier pool is nearly full. Full in this context is determined
791 by the ``target_max_bytes`` and ``target_max_objects`` properties on
792 the cache pool. Once the pool reaches the target threshold, write
793 requests to the pool may block while data is flushed and evicted
from the cache, a state that normally leads to very high latencies and
poor performance.
797 The cache pool target size can be adjusted with::
799 ceph osd pool set <cache-pool-name> target_max_bytes <bytes>
800 ceph osd pool set <cache-pool-name> target_max_objects <objects>
802 Normal cache flush and evict activity may also be throttled due to reduced
803 availability or performance of the base tier, or overall cluster load.
808 The number of PGs in use in the cluster is below the configurable
809 threshold of ``mon_pg_warn_min_per_osd`` PGs per OSD. This can lead
810 to suboptimal distribution and balance of data across the OSDs in
811 the cluster, and similarly reduce overall performance.
This may be an expected condition if data pools have not yet been
created.

The PG count for existing pools can be increased or new pools can be created.
Please refer to :ref:`choosing-number-of-placement-groups` for more
information.
820 POOL_PG_NUM_NOT_POWER_OF_TWO
821 ____________________________
823 One or more pools has a ``pg_num`` value that is not a power of two.
824 Although this is not strictly incorrect, it does lead to a less
balanced distribution of data because some PGs have roughly twice as
much data as others.

This is easily corrected by setting the ``pg_num`` value for the
affected pool(s) to a nearby power of two::
831 ceph osd pool set <pool-name> pg_num <value>
833 This health warning can be disabled with::
835 ceph config set global mon_warn_on_pool_pg_num_not_power_of_two false
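Picking a "nearby power of two" can be done mechanically; a small helper (an illustration, not part of any Ceph tooling) might look like:

```python
# Illustrative helper to pick the nearest power of two for a pg_num value.
def nearest_power_of_two(n):
    if n < 1:
        raise ValueError("pg_num must be positive")
    lower = 1 << (n.bit_length() - 1)   # largest power of two <= n
    upper = lower << 1                  # smallest power of two > n
    return lower if n - lower <= upper - n else upper
```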
840 One or more pools should probably have more PGs, based on the amount
841 of data that is currently stored in the pool. This can lead to
842 suboptimal distribution and balance of data across the OSDs in the
843 cluster, and similarly reduce overall performance. This warning is
generated if the ``pg_autoscale_mode`` property on the pool is set to
``warn``.

To disable the warning, you can disable auto-scaling of PGs for the
pool entirely with::

  ceph osd pool set <pool-name> pg_autoscale_mode off
To allow the cluster to automatically adjust the number of PGs::
854 ceph osd pool set <pool-name> pg_autoscale_mode on
856 You can also manually set the number of PGs for the pool to the
857 recommended amount with::
859 ceph osd pool set <pool-name> pg_num <new-pg-num>
861 Please refer to :ref:`choosing-number-of-placement-groups` and
862 :ref:`pg-autoscaler` for more information.
867 The number of PGs in use in the cluster is above the configurable
threshold of ``mon_max_pg_per_osd`` PGs per OSD. If this threshold is
exceeded, the cluster will not allow new pools to be created, pool `pg_num`
to be increased, or pool replication to be increased (any of which would lead to
871 more PGs in the cluster). A large number of PGs can lead
872 to higher memory utilization for OSD daemons, slower peering after
873 cluster state changes (like OSD restarts, additions, or removals), and
874 higher load on the Manager and Monitor daemons.
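The quantity being compared against the threshold is total PG *replicas* divided by the number of "in" OSDs, which can be sketched as follows. This is an illustration, not Ceph source; the 250 default for ``mon_max_pg_per_osd`` is an assumption to verify with ``ceph config help mon_max_pg_per_osd``.

```python
# Illustrative sketch (not Ceph source) of the TOO_MANY_PGS comparison:
# total PG replicas across pools, divided by the number of "in" OSDs.
def pgs_per_osd(pools, num_in_osds):
    """pools: iterable of (pg_num, replica_size) tuples, one per pool."""
    total = sum(pg_num * size for pg_num, size in pools)
    return total / num_in_osds

def too_many_pgs(pools, num_in_osds, max_pg_per_osd=250):
    return pgs_per_osd(pools, num_in_osds) > max_pg_per_osd
```

This also shows why marking "out" OSDs back "in" helps: it grows the denominator without touching the PG count.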
876 The simplest way to mitigate the problem is to increase the number of
877 OSDs in the cluster by adding more hardware. Note that the OSD count
878 used for the purposes of this health check is the number of "in" OSDs,
879 so marking "out" OSDs "in" (if there are any) can also help::
881 ceph osd in <osd id(s)>
Please refer to :ref:`choosing-number-of-placement-groups` for more
information.
889 One or more pools should probably have more PGs, based on the amount
890 of data that is currently stored in the pool. This can lead to higher
891 memory utilization for OSD daemons, slower peering after cluster state
892 changes (like OSD restarts, additions, or removals), and higher load
893 on the Manager and Monitor daemons. This warning is generated if the
894 ``pg_autoscale_mode`` property on the pool is set to ``warn``.
To disable the warning, you can disable auto-scaling of PGs for the
pool entirely with::

  ceph osd pool set <pool-name> pg_autoscale_mode off
901 To allow the cluster to automatically adjust the number of PGs,::
903 ceph osd pool set <pool-name> pg_autoscale_mode on
905 You can also manually set the number of PGs for the pool to the
906 recommended amount with::
908 ceph osd pool set <pool-name> pg_num <new-pg-num>
910 Please refer to :ref:`choosing-number-of-placement-groups` and
911 :ref:`pg-autoscaler` for more information.
POOL_TARGET_SIZE_BYTES_OVERCOMMITTED
____________________________________

One or more pools have a ``target_size_bytes`` property set to
estimate the expected size of the pool, but the value(s) exceed the
total available storage (either by themselves or in combination with
other pools' actual usage).

This is usually an indication that the ``target_size_bytes`` value for
the pool is too large and should be reduced or set to zero with::

    ceph osd pool set <pool-name> target_size_bytes 0

For more information, see :ref:`specifying_pool_target_size`.
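
If you still want the autoscaler to account for the pool's expected
growth, one option is to express it as a fraction of total cluster
capacity via the ``target_size_ratio`` property instead (the value
``0.2`` below is purely illustrative)::

    ceph osd pool set <pool-name> target_size_ratio 0.2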
POOL_HAS_TARGET_SIZE_BYTES_AND_RATIO
____________________________________

One or more pools have both ``target_size_bytes`` and
``target_size_ratio`` set to estimate the expected size of the pool.
Only one of these properties should be non-zero. If both are set,
``target_size_ratio`` takes precedence and ``target_size_bytes`` is
ignored.

To reset ``target_size_bytes`` to zero::

    ceph osd pool set <pool-name> target_size_bytes 0

For more information, see :ref:`specifying_pool_target_size`.
TOO_FEW_OSDS
____________

The number of OSDs in the cluster is below the configurable
threshold of ``osd_pool_default_size``.
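
The current OSD count and the configured threshold can be checked
with::

    ceph osd stat
    ceph config get mon osd_pool_default_size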
SMALLER_PGP_NUM
_______________

One or more pools has a ``pgp_num`` value less than ``pg_num``. This
is normally an indication that the PG count was increased without
also increasing ``pgp_num``, which controls placement.

This is sometimes done deliberately to separate out the `split` step
when the PG count is adjusted from the data migration that is needed
when ``pgp_num`` is changed.

This is normally resolved by setting ``pgp_num`` to match ``pg_num``,
triggering the data migration, with::

    ceph osd pool set <pool> pgp_num <pg-num-value>
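
The current values for a pool can be compared with::

    ceph osd pool get <pool> pg_num
    ceph osd pool get <pool> pgp_num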
MANY_OBJECTS_PER_PG
___________________

One or more pools has an average number of objects per PG that is
significantly higher than the overall cluster average. The specific
threshold is controlled by the ``mon_pg_warn_max_object_skew``
configuration option.

This is usually an indication that the pool(s) containing most of the
data in the cluster have too few PGs, and/or that other pools that do
not contain as much data have too many PGs. See the discussion of
*TOO_MANY_PGS* above.

The threshold can be raised to silence the health warning by adjusting
the ``mon_pg_warn_max_object_skew`` config option on the managers.
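
For example, to raise the threshold on the managers (the value ``20``
here is purely illustrative)::

    ceph config set mgr mon_pg_warn_max_object_skew 20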
POOL_APP_NOT_ENABLED
____________________

A pool exists that contains one or more objects but has not been
tagged for use by a particular application.

Resolve this warning by labeling the pool for use by an application. For
example, if the pool is used by RBD::

    rbd pool init <poolname>

If the pool is being used by a custom application 'foo', you can also
label it via the low-level command::

    ceph osd pool application enable <poolname> foo

For more information, see :ref:`associate-pool-to-application`.
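
To see which applications, if any, are already associated with a
pool::

    ceph osd pool application get <poolname>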
POOL_FULL
_________

One or more pools has reached (or is very close to reaching) its
quota. The threshold to trigger this error condition is controlled by
the ``mon_pool_quota_crit_threshold`` configuration option.

Pool quotas can be adjusted up or down (or removed) with::

    ceph osd pool set-quota <pool> max_bytes <bytes>
    ceph osd pool set-quota <pool> max_objects <objects>

Setting the quota value to 0 will disable the quota.
POOL_NEAR_FULL
______________

One or more pools is approaching its quota. The threshold to trigger
this warning condition is controlled by the
``mon_pool_quota_warn_threshold`` configuration option.

Pool quotas can be adjusted up or down (or removed) with::

    ceph osd pool set-quota <pool> max_bytes <bytes>
    ceph osd pool set-quota <pool> max_objects <objects>

Setting the quota value to 0 will disable the quota.
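
The quotas currently configured on a pool can be inspected with::

    ceph osd pool get-quota <pool>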
OBJECT_MISPLACED
________________

One or more objects in the cluster is not stored on the node the
cluster would like it to be stored on. This is an indication that
data migration due to some recent cluster change has not yet completed.

Misplaced data is not a dangerous condition in and of itself; data
consistency is never at risk, and old copies of objects are never
removed until the desired number of new copies (in the desired
locations) are present.
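
Progress of the ongoing data migration, including the fraction of
objects still misplaced, is reported by::

    ceph status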
OBJECT_UNFOUND
______________

One or more objects in the cluster cannot be found. Specifically, the
OSDs know that a new or updated copy of an object should exist, but a
copy of that version of the object has not been found on OSDs that are
currently online.

Read or write requests to unfound objects will block.

Ideally, a down OSD that has the more recent copy of the unfound
object can be brought back online. Candidate OSDs can be identified
from the peering state for the PG(s) responsible for the unfound
object::

    ceph tell <pgid> query

If the latest copy of the object is not available, the cluster can be
told to roll back to a previous version of the object. See
:ref:`failures-osd-unfound` for more information.
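
The affected PGs are listed in ``ceph health detail``, and the unfound
objects within a given PG can be enumerated with::

    ceph pg <pgid> list_unfound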
SLOW_OPS
________

One or more OSD requests is taking a long time to process. This can
be an indication of extreme load, a slow storage device, or a software
bug.

The request queue on the OSD(s) in question can be queried with the
following command, executed from the OSD host::

    ceph daemon osd.<id> ops

A summary of the slowest recent requests can be seen with::

    ceph daemon osd.<id> dump_historic_ops

The location of an OSD can be found with::

    ceph osd find osd.<id>
PG_NOT_SCRUBBED
_______________

One or more PGs has not been scrubbed recently. PGs are normally
scrubbed every ``mon_scrub_interval`` seconds, and this warning
triggers when ``mon_warn_pg_not_scrubbed_ratio`` of the interval has
elapsed past the due date without a scrub having occurred.

PGs will not scrub if they are not flagged as *clean*, which may
happen if they are misplaced or degraded (see *PG_AVAILABILITY* and
*PG_DEGRADED* above).

You can manually initiate a scrub of a clean PG with::

    ceph pg scrub <pgid>
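
The affected PGs, together with the time they were last scrubbed, are
listed by::

    ceph health detail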
PG_NOT_DEEP_SCRUBBED
____________________

One or more PGs has not been deep scrubbed recently. PGs are normally
deep scrubbed every ``osd_deep_scrub_interval`` seconds, and this
warning triggers when ``mon_warn_pg_not_deep_scrubbed_ratio`` of the
interval has elapsed past the due date without a deep scrub having
occurred.

PGs will not (deep) scrub if they are not flagged as *clean*, which may
happen if they are misplaced or degraded (see *PG_AVAILABILITY* and
*PG_DEGRADED* above).

You can manually initiate a deep scrub of a clean PG with::

    ceph pg deep-scrub <pgid>
PG_SLOW_SNAP_TRIMMING
_____________________

The snapshot trim queue for one or more PGs has exceeded the
configured warning threshold. This indicates that either an extremely
large number of snapshots were recently deleted, or that the OSDs are
unable to trim snapshots quickly enough to keep up with the rate of
new snapshot deletions.

The warning threshold is controlled by the
``mon_osd_snap_trim_queue_warn_on`` option (default: 32768).

This warning may trigger if OSDs are under excessive load and unable
to keep up with their background work, or if the OSDs' internal
metadata database is heavily fragmented and unable to perform. It may
also indicate some other performance issue with the OSDs.

The exact size of the snapshot trim queue is reported by the
``snaptrimq_len`` field of ``ceph pg ls -f json-detail``.
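
As a rough sketch, the PGs with the longest queues could be extracted
with ``jq``. Note that the exact JSON layout of ``ceph pg ls`` output
varies between releases, so the assumption here (a top-level array of
PG objects with ``pgid`` and ``snaptrimq_len`` fields) should be
verified against your version first::

    ceph pg ls -f json-detail | \
        jq 'map(select(.snaptrimq_len > 0))
            | sort_by(-.snaptrimq_len)
            | .[] | {pgid, snaptrimq_len}'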
RECENT_CRASH
____________

One or more Ceph daemons has crashed recently, and the crash has not
yet been archived (acknowledged) by the administrator. This may
indicate a software bug, a hardware problem (e.g., a failing disk), or
some other problem.

New crashes can be listed with::

    ceph crash ls-new

Information about a specific crash can be examined with::

    ceph crash info <crash-id>

This warning can be silenced by "archiving" the crash (perhaps after
being examined by an administrator) so that it does not generate this
warning::

    ceph crash archive <crash-id>

Similarly, all new crashes can be archived with::

    ceph crash archive-all

Archived crashes will still be visible via ``ceph crash ls`` but not
``ceph crash ls-new``.

The time period for what "recent" means is controlled by the option
``mgr/crash/warn_recent_interval`` (default: two weeks).

These warnings can be disabled entirely with::

    ceph config set mgr mgr/crash/warn_recent_interval 0
TELEMETRY_CHANGED
_________________

Telemetry has been enabled, but the contents of the telemetry report
have changed since that time, so telemetry reports will not be sent.

The Ceph developers periodically revise the telemetry feature to
include new and useful information, or to remove information found to
be useless or sensitive. If any new information is included in the
report, Ceph will require the administrator to re-enable telemetry to
ensure they have an opportunity to (re)review what information will be
shared.

To review the contents of the telemetry report::

    ceph telemetry show

Note that the telemetry report consists of several optional channels
that may be independently enabled or disabled. For more information, see
:ref:`telemetry`.

To re-enable telemetry (and make this warning go away)::

    ceph telemetry on

To disable telemetry (and make this warning go away)::

    ceph telemetry off
AUTH_BAD_CAPS
_____________

One or more auth users has capabilities that cannot be parsed by the
monitor. This generally indicates that the user will not be
authorized to perform any action with one or more daemon types.

This error is most likely to occur after an upgrade if the
capabilities were set with an older version of Ceph that did not
properly validate their syntax, or if the syntax of the capabilities
has changed.

The user in question can be removed with::

    ceph auth rm <entity-name>

(This will resolve the health alert, but obviously clients will not be
able to authenticate as that user.)

Alternatively, the capabilities for the user can be updated with::

    ceph auth caps <entity-name> <daemon-type> <caps> [<daemon-type> <caps> ...]

For more information about auth capabilities, see :ref:`user-management`.
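
Before modifying anything, the user's current capabilities can be
inspected with::

    ceph auth get <entity-name>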
OSD_NO_DOWN_OUT_INTERVAL
________________________

The ``mon_osd_down_out_interval`` option is set to zero, which means
that the system will not automatically perform any repair or healing
operations after an OSD fails. Instead, an administrator (or some
other external entity) will need to manually mark down OSDs as 'out'
(i.e., via ``ceph osd out <osd-id>``) in order to trigger recovery.

This option is normally set to five or ten minutes, which is enough
time for a host to power-cycle or reboot.

This warning can be silenced by setting the
``mon_warn_on_osd_down_out_interval_zero`` option to false::

    ceph config set global mon_warn_on_osd_down_out_interval_zero false
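
Alternatively, if the interval was zeroed accidentally, the automatic
behaviour can be restored by setting it back to a non-zero value (600
seconds, i.e. ten minutes, is shown purely as an example)::

    ceph config set global mon_osd_down_out_interval 600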
DASHBOARD_DEBUG
_______________

The Dashboard debug mode is enabled. This means that if there is an
error while processing a REST API request, the HTTP error response
contains a Python traceback. This behaviour should be disabled in
production environments because such a traceback might contain and
expose sensitive information.

The debug mode can be disabled with::

    ceph dashboard debug disable
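
The current state of the debug mode can be checked with::

    ceph dashboard debug status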