1 .. _health-checks:
2
3 =============
4 Health checks
5 =============
6
7 Overview
8 ========
9
10 There is a finite set of possible health messages that a Ceph cluster can
11 raise -- these are defined as *health checks* which have unique identifiers.
12
13 The identifier is a terse pseudo-human-readable (i.e. like a variable name)
14 string. It is intended to enable tools (such as UIs) to make sense of
15 health checks, and present them in a way that reflects their meaning.
16
17 This page lists the health checks that are raised by the monitor and manager
18 daemons. In addition to these, you may also see health checks that originate
19 from MDS daemons (see :ref:`cephfs-health-messages`), and health checks
20 that are defined by ceph-mgr python modules.
21
22 Definitions
23 ===========
24
25 Monitor
26 -------
27
28 MON_DOWN
29 ________
30
31 One or more monitor daemons is currently down. The cluster requires a
32 majority (more than 1/2) of the monitors in order to function. When
33 one or more monitors are down, clients may have a harder time forming
34 their initial connection to the cluster as they may need to try more
35 addresses before they reach an operating monitor.
36
37 The down monitor daemon should generally be restarted as soon as
possible to reduce the risk of a subsequent monitor failure leading to
39 a service outage.
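
For example, to identify which monitor is down and restart it on its host
(the ``systemctl`` unit name below assumes a package-based systemd
deployment; adjust it to match how the daemon was deployed)::

    ceph health detail                      # shows which monitor is down
    systemctl restart ceph-mon@<hostname>   # run on the affected monitor's host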
40
41 MON_CLOCK_SKEW
42 ______________
43
44 The clocks on the hosts running the ceph-mon monitor daemons are not
45 sufficiently well synchronized. This health alert is raised if the
46 cluster detects a clock skew greater than ``mon_clock_drift_allowed``.
47
48 This is best resolved by synchronizing the clocks using a tool like
49 ``ntpd`` or ``chrony``.
50
51 If it is impractical to keep the clocks closely synchronized, the
52 ``mon_clock_drift_allowed`` threshold can also be increased, but this
value must stay significantly below the ``mon_lease`` interval in
order for the monitor cluster to function properly.
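
For example, to inspect the reported skew and, if necessary, raise the
threshold slightly (the 100 ms value below is purely illustrative)::

    ceph status                                       # skew is reported in the health section
    ceph time-sync-status                             # per-monitor time-sync details
    ceph config set mon mon_clock_drift_allowed 0.1   # hypothetical 100 ms threshold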
55
56 MON_MSGR2_NOT_ENABLED
57 _____________________
58
59 The ``ms_bind_msgr2`` option is enabled but one or more monitors is
60 not configured to bind to a v2 port in the cluster's monmap. This
61 means that features specific to the msgr2 protocol (e.g., encryption)
62 are not available on some or all connections.
63
64 In most cases this can be corrected by issuing the command::
65
66 ceph mon enable-msgr2
67
68 That command will change any monitor configured for the old default
69 port 6789 to continue to listen for v1 connections on 6789 and also
70 listen for v2 connections on the new default 3300 port.
71
If a monitor is configured to listen for v1 connections on a non-standard
port (not 6789), then the monmap will need to be modified manually.
73
74
75 MON_DISK_LOW
76 ____________
77
78 One or more monitors is low on disk space. This alert triggers if the
79 available space on the file system storing the monitor database
80 (normally ``/var/lib/ceph/mon``), as a percentage, drops below
81 ``mon_data_avail_warn`` (default: 30%).
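
For example, to check how much space is left on the file system holding the
monitor data (assuming the default data path)::

    df -h /var/lib/ceph/mon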
82
83 This may indicate that some other process or user on the system is
84 filling up the same file system used by the monitor. It may also
indicate that the monitor's database is large (see ``MON_DISK_BIG``
86 below).
87
88 If space cannot be freed, the monitor's data directory may need to be
89 moved to another storage device or file system (while the monitor
90 daemon is not running, of course).
91
92
93 MON_DISK_CRIT
94 _____________
95
96 One or more monitors is critically low on disk space. This alert
97 triggers if the available space on the file system storing the monitor
98 database (normally ``/var/lib/ceph/mon``), as a percentage, drops
99 below ``mon_data_avail_crit`` (default: 5%). See ``MON_DISK_LOW``, above.
100
101 MON_DISK_BIG
102 ____________
103
104 The database size for one or more monitors is very large. This alert
105 triggers if the size of the monitor's database is larger than
106 ``mon_data_size_warn`` (default: 15 GiB).
107
108 A large database is unusual, but may not necessarily indicate a
109 problem. Monitor databases may grow in size when there are placement
110 groups that have not reached an ``active+clean`` state in a long time.
111
112 This may also indicate that the monitor's database is not properly
113 compacting, which has been observed with some older versions of
114 leveldb and rocksdb. Forcing a compaction with ``ceph daemon mon.<id>
115 compact`` may shrink the on-disk size.
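
For example, assuming a monitor named ``mon.a``, a compaction can be
triggered from the host where that monitor's admin socket is available::

    ceph daemon mon.a compact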
116
117 This warning may also indicate that the monitor has a bug that is
118 preventing it from pruning the cluster metadata it stores. If the
119 problem persists, please report a bug.
120
121 The warning threshold may be adjusted with::
122
123 ceph config set global mon_data_size_warn <size>
124
125 AUTH_INSECURE_GLOBAL_ID_RECLAIM
126 _______________________________
127
128 One or more clients or daemons are connected to the cluster that are
129 not securely reclaiming their global_id (a unique number identifying
130 each entity in the cluster) when reconnecting to a monitor. The
131 client is being permitted to connect anyway because the
``auth_allow_insecure_global_id_reclaim`` option is set to ``true`` (which may
be necessary until all Ceph clients have been upgraded), and the
``auth_expose_insecure_global_id_reclaim`` option is set to ``true`` (which
135 allows monitors to detect clients with insecure reclaim early by forcing them to
136 reconnect right after they first authenticate).
137
138 You can identify which client(s) are using unpatched ceph client code with::
139
140 ceph health detail
141
Clients' global_id reclaim behavior can also be seen in the
143 ``global_id_status`` field in the dump of clients connected to an
144 individual monitor (``reclaim_insecure`` means the client is
145 unpatched and is contributing to this health alert)::
146
147 ceph tell mon.\* sessions
148
149 We strongly recommend that all clients in the system are upgraded to a
150 newer version of Ceph that correctly reclaims global_id values. Once
151 all clients have been updated, you can stop allowing insecure reconnections
152 with::
153
154 ceph config set mon auth_allow_insecure_global_id_reclaim false
155
156 If it is impractical to upgrade all clients immediately, you can silence
157 this warning temporarily with::
158
159 ceph health mute AUTH_INSECURE_GLOBAL_ID_RECLAIM 1w # 1 week
160
161 Although we do NOT recommend doing so, you can also disable this warning indefinitely
162 with::
163
164 ceph config set mon mon_warn_on_insecure_global_id_reclaim false
165
166 AUTH_INSECURE_GLOBAL_ID_RECLAIM_ALLOWED
167 _______________________________________
168
169 Ceph is currently configured to allow clients to reconnect to monitors using
170 an insecure process to reclaim their previous global_id because the setting
171 ``auth_allow_insecure_global_id_reclaim`` is set to ``true``. It may be necessary to
172 leave this setting enabled while existing Ceph clients are upgraded to newer
173 versions of Ceph that correctly and securely reclaim their global_id.
174
175 If the ``AUTH_INSECURE_GLOBAL_ID_RECLAIM`` health alert has not also been raised and
176 the ``auth_expose_insecure_global_id_reclaim`` setting has not been disabled (it is
177 on by default), then there are currently no clients connected that need to be
178 upgraded, and it is safe to disallow insecure global_id reclaim with::
179
180 ceph config set mon auth_allow_insecure_global_id_reclaim false
181
182 If there are still clients that need to be upgraded, then this alert can be
183 silenced temporarily with::
184
185 ceph health mute AUTH_INSECURE_GLOBAL_ID_RECLAIM_ALLOWED 1w # 1 week
186
187 Although we do NOT recommend doing so, you can also disable this warning indefinitely
188 with::
189
190 ceph config set mon mon_warn_on_insecure_global_id_reclaim_allowed false
191
192
193 Manager
194 -------
195
196 MGR_DOWN
197 ________
198
199 All manager daemons are currently down. The cluster should normally
200 have at least one running manager (``ceph-mgr``) daemon. If no
201 manager daemon is running, the cluster's ability to monitor itself will
202 be compromised, and parts of the management API will become
203 unavailable (for example, the dashboard will not work, and most CLI
204 commands that report metrics or runtime state will block). However,
205 the cluster will still be able to perform all IO operations and
206 recover from failures.
207
208 The down manager daemon should generally be restarted as soon as
209 possible to ensure that the cluster can be monitored (e.g., so that
210 the ``ceph -s`` information is up to date, and/or metrics can be
211 scraped by Prometheus).
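
For example, on a host running a manager daemon named ``x`` (the unit name
below assumes a package-based systemd deployment)::

    systemctl restart ceph-mgr@x
    ceph -s | grep mgr        # verify that a manager is active again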
212
213
214 MGR_MODULE_DEPENDENCY
215 _____________________
216
217 An enabled manager module is failing its dependency check. This health check
218 should come with an explanatory message from the module about the problem.
219
220 For example, a module might report that a required package is not installed:
221 install the required package and restart your manager daemons.
222
223 This health check is only applied to enabled modules. If a module is
224 not enabled, you can see whether it is reporting dependency issues in
the output of `ceph mgr module ls`.
226
227
228 MGR_MODULE_ERROR
229 ________________
230
231 A manager module has experienced an unexpected error. Typically,
232 this means an unhandled exception was raised from the module's `serve`
233 function. The human readable description of the error may be obscurely
234 worded if the exception did not provide a useful description of itself.
235
236 This health check may indicate a bug: please open a Ceph bug report if you
237 think you have encountered a bug.
238
239 If you believe the error is transient, you may restart your manager
240 daemon(s), or use `ceph mgr fail` on the active daemon to prompt
241 a failover to another daemon.
242
243
244 OSDs
245 ----
246
247 OSD_DOWN
248 ________
249
250 One or more OSDs are marked down. The ceph-osd daemon may have been
251 stopped, or peer OSDs may be unable to reach the OSD over the network.
252 Common causes include a stopped or crashed daemon, a down host, or a
253 network outage.
254
255 Verify the host is healthy, the daemon is started, and network is
256 functioning. If the daemon has crashed, the daemon log file
257 (``/var/log/ceph/ceph-osd.*``) may contain debugging information.
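
For example, to see which OSDs are down and where they sit in the CRUSH
tree, and then restart the daemon on the affected host (the OSD id and the
systemd unit name below are illustrative)::

    ceph osd tree down              # down OSDs and their position in the CRUSH tree
    systemctl status ceph-osd@123   # hypothetical OSD id; run on that OSD's host
    systemctl start ceph-osd@123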
258
259 OSD_<crush type>_DOWN
260 _____________________
261
262 (e.g. OSD_HOST_DOWN, OSD_ROOT_DOWN)
263
264 All the OSDs within a particular CRUSH subtree are marked down, for example
265 all OSDs on a host.
266
267 OSD_ORPHAN
268 __________
269
270 An OSD is referenced in the CRUSH map hierarchy but does not exist.
271
272 The OSD can be removed from the CRUSH hierarchy with::
273
274 ceph osd crush rm osd.<id>
275
276 OSD_OUT_OF_ORDER_FULL
277 _____________________
278
279 The utilization thresholds for `nearfull`, `backfillfull`, `full`,
280 and/or `failsafe_full` are not ascending. In particular, we expect
281 `nearfull < backfillfull`, `backfillfull < full`, and `full <
282 failsafe_full`.
283
284 The thresholds can be adjusted with::
285
286 ceph osd set-nearfull-ratio <ratio>
287 ceph osd set-backfillfull-ratio <ratio>
288 ceph osd set-full-ratio <ratio>
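
For example, the following values restore the expected ordering; they are
shown for illustration and happen to match the defaults::

    ceph osd set-nearfull-ratio 0.85
    ceph osd set-backfillfull-ratio 0.90
    ceph osd set-full-ratio 0.95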
289
290
291 OSD_FULL
292 ________
293
294 One or more OSDs has exceeded the `full` threshold and is preventing
295 the cluster from servicing writes.
296
297 Utilization by pool can be checked with::
298
299 ceph df
300
301 The currently defined `full` ratio can be seen with::
302
303 ceph osd dump | grep full_ratio
304
305 A short-term workaround to restore write availability is to raise the full
306 threshold by a small amount::
307
308 ceph osd set-full-ratio <ratio>
309
310 New storage should be added to the cluster by deploying more OSDs or
311 existing data should be deleted in order to free up space.
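
Per-OSD utilization can be checked with ``ceph osd df``, which helps identify
the OSD(s) that crossed the threshold. A cautious, temporary bump of the full
threshold might look like the following (the ratio is purely illustrative)::

    ceph osd df                      # per-OSD utilization and variance
    ceph osd set-full-ratio 0.97     # hypothetical, temporary value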
312
313 OSD_BACKFILLFULL
314 ________________
315
One or more OSDs has exceeded the `backfillfull` threshold, which will
prevent data from being rebalanced to this device. This is
318 an early warning that rebalancing may not be able to complete and that
319 the cluster is approaching full.
320
321 Utilization by pool can be checked with::
322
323 ceph df
324
325 OSD_NEARFULL
326 ____________
327
328 One or more OSDs has exceeded the `nearfull` threshold. This is an early
329 warning that the cluster is approaching full.
330
331 Utilization by pool can be checked with::
332
333 ceph df
334
335 OSDMAP_FLAGS
336 ____________
337
338 One or more cluster flags of interest has been set. These flags include:
339
340 * *full* - the cluster is flagged as full and cannot serve writes
341 * *pauserd*, *pausewr* - paused reads or writes
342 * *noup* - OSDs are not allowed to start
343 * *nodown* - OSD failure reports are being ignored, such that the
344 monitors will not mark OSDs `down`
345 * *noin* - OSDs that were previously marked `out` will not be marked
346 back `in` when they start
347 * *noout* - down OSDs will not automatically be marked out after the
348 configured interval
349 * *nobackfill*, *norecover*, *norebalance* - recovery or data
350 rebalancing is suspended
351 * *noscrub*, *nodeep_scrub* - scrubbing is disabled
352 * *notieragent* - cache tiering activity is suspended
353
354 With the exception of *full*, these flags can be set or cleared with::
355
356 ceph osd set <flag>
357 ceph osd unset <flag>
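
For example, a common maintenance pattern is to set ``noout`` before taking
a host down and to clear it afterwards::

    ceph osd set noout
    # ... perform maintenance / reboot the host ...
    ceph osd unset noout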
358
359 OSD_FLAGS
360 _________
361
362 One or more OSDs or CRUSH {nodes,device classes} has a flag of interest set.
363 These flags include:
364
365 * *noup*: these OSDs are not allowed to start
366 * *nodown*: failure reports for these OSDs will be ignored
367 * *noin*: if these OSDs were previously marked `out` automatically
368 after a failure, they will not be marked in when they start
369 * *noout*: if these OSDs are down they will not automatically be marked
370 `out` after the configured interval
371
372 These flags can be set and cleared in batch with::
373
374 ceph osd set-group <flags> <who>
375 ceph osd unset-group <flags> <who>
376
377 For example, ::
378
379 ceph osd set-group noup,noout osd.0 osd.1
380 ceph osd unset-group noup,noout osd.0 osd.1
381 ceph osd set-group noup,noout host-foo
382 ceph osd unset-group noup,noout host-foo
383 ceph osd set-group noup,noout class-hdd
384 ceph osd unset-group noup,noout class-hdd
385
386 OLD_CRUSH_TUNABLES
387 __________________
388
389 The CRUSH map is using very old settings and should be updated. The
oldest set of tunables that can be used (i.e., the oldest client version that
391 can connect to the cluster) without triggering this health warning is
392 determined by the ``mon_crush_min_required_version`` config option.
393 See :ref:`crush-map-tunables` for more information.
394
395 OLD_CRUSH_STRAW_CALC_VERSION
396 ____________________________
397
398 The CRUSH map is using an older, non-optimal method for calculating
399 intermediate weight values for ``straw`` buckets.
400
401 The CRUSH map should be updated to use the newer method
402 (``straw_calc_version=1``). See
403 :ref:`crush-map-tunables` for more information.
404
405 CACHE_POOL_NO_HIT_SET
406 _____________________
407
408 One or more cache pools is not configured with a *hit set* to track
409 utilization, which will prevent the tiering agent from identifying
410 cold objects to flush and evict from the cache.
411
412 Hit sets can be configured on the cache pool with::
413
414 ceph osd pool set <poolname> hit_set_type <type>
415 ceph osd pool set <poolname> hit_set_period <period-in-seconds>
416 ceph osd pool set <poolname> hit_set_count <number-of-hitsets>
417 ceph osd pool set <poolname> hit_set_fpp <target-false-positive-rate>
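
For example, for a hypothetical cache pool named ``hot-storage``, a ``bloom``
hit set is typically used (the period and count values below are illustrative)::

    ceph osd pool set hot-storage hit_set_type bloom
    ceph osd pool set hot-storage hit_set_period 3600
    ceph osd pool set hot-storage hit_set_count 4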
418
419 OSD_NO_SORTBITWISE
420 __________________
421
422 No pre-luminous v12.y.z OSDs are running but the ``sortbitwise`` flag has not
423 been set.
424
425 The ``sortbitwise`` flag must be set before luminous v12.y.z or newer
426 OSDs can start. You can safely set the flag with::
427
428 ceph osd set sortbitwise
429
430 POOL_FULL
431 _________
432
433 One or more pools has reached its quota and is no longer allowing writes.
434
435 Pool quotas and utilization can be seen with::
436
437 ceph df detail
438
439 You can either raise the pool quota with::
440
441 ceph osd pool set-quota <poolname> max_objects <num-objects>
442 ceph osd pool set-quota <poolname> max_bytes <num-bytes>
443
444 or delete some existing data to reduce utilization.
445
446 BLUEFS_SPILLOVER
447 ________________
448
449 One or more OSDs that use the BlueStore backend have been allocated
450 `db` partitions (storage space for metadata, normally on a faster
451 device) but that space has filled, such that metadata has "spilled
452 over" onto the normal slow device. This isn't necessarily an error
453 condition or even unexpected, but if the administrator's expectation
454 was that all metadata would fit on the faster device, it indicates
455 that not enough space was provided.
456
457 This warning can be disabled on all OSDs with::
458
459 ceph config set osd bluestore_warn_on_bluefs_spillover false
460
461 Alternatively, it can be disabled on a specific OSD with::
462
463 ceph config set osd.123 bluestore_warn_on_bluefs_spillover false
464
465 To provide more metadata space, the OSD in question could be destroyed and
466 reprovisioned. This will involve data migration and recovery.
467
468 It may also be possible to expand the LVM logical volume backing the
469 `db` storage. If the underlying LV has been expanded, the OSD daemon
470 needs to be stopped and BlueFS informed of the device size change with::
471
472 ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-$ID
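
A minimal sketch of the full sequence, assuming the ``db`` volume is an LVM
logical volume (the VG/LV names, size, and OSD id below are hypothetical)::

    systemctl stop ceph-osd@123
    lvextend -L +20G /dev/ceph-db-vg/db-osd-123     # hypothetical VG/LV and size
    ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-123
    systemctl start ceph-osd@123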
473
474 BLUEFS_AVAILABLE_SPACE
475 ______________________
476
477 To check how much space is free for BlueFS do::
478
479 ceph daemon osd.123 bluestore bluefs available
480
481 This will output up to 3 values: `BDEV_DB free`, `BDEV_SLOW free` and
`available_from_bluestore`. `BDEV_DB free` and `BDEV_SLOW free` report the
amount of space that has been acquired by BlueFS and is considered free. The
`available_from_bluestore` value denotes the ability of BlueStore to relinquish
more space to BlueFS. It is normal for this value to differ from the amount of
BlueStore free space, because the BlueFS allocation unit is typically larger
than the BlueStore allocation unit. This means that only part of the BlueStore
free space will be usable by BlueFS.
488
489 BLUEFS_LOW_SPACE
490 _________________
491
If BlueFS is running low on available free space and there is little
`available_from_bluestore`, consider reducing the BlueFS allocation unit size.
To simulate the available space with a different allocation unit, run::
495
496 ceph daemon osd.123 bluestore bluefs available <alloc-unit-size>
497
498 BLUESTORE_FRAGMENTATION
499 _______________________
500
As BlueStore operates, free space on the underlying storage will become
fragmented. This is normal and unavoidable, but excessive fragmentation will
cause slowdowns. To inspect the current BlueStore fragmentation, run::
504
505 ceph daemon osd.123 bluestore allocator score block
506
The score is given in the range [0, 1]:

* [0.0 .. 0.4] tiny fragmentation
* [0.4 .. 0.7] small, acceptable fragmentation
* [0.7 .. 0.9] considerable, but safe fragmentation
* [0.9 .. 1.0] severe fragmentation, which may impact BlueFS's ability to get space from BlueStore
512
If a detailed report of free fragments is required, run::
514
515 ceph daemon osd.123 bluestore allocator dump block
516
When the OSD process is not running, fragmentation can be inspected with
`ceph-bluestore-tool`.
To get the fragmentation score::
520
521 ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-123 --allocator block free-score
522
To dump the detailed free chunks::
524
525 ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-123 --allocator block free-dump
526
527 BLUESTORE_LEGACY_STATFS
528 _______________________
529
530 In the Nautilus release, BlueStore tracks its internal usage
statistics on a per-pool basis, and one or more OSDs have
532 BlueStore volumes that were created prior to Nautilus. If *all* OSDs
533 are older than Nautilus, this just means that the per-pool metrics are
534 not available. However, if there is a mix of pre-Nautilus and
535 post-Nautilus OSDs, the cluster usage statistics reported by ``ceph
536 df`` will not be accurate.
537
The old OSDs can be updated to use the new usage tracking scheme by stopping
each OSD, running a repair operation, and then restarting it. For example, if
``osd.123`` needs to be updated::
539
540 systemctl stop ceph-osd@123
541 ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-123
542 systemctl start ceph-osd@123
543
544 This warning can be disabled with::
545
546 ceph config set global bluestore_warn_on_legacy_statfs false
547
548 BLUESTORE_NO_PER_POOL_OMAP
549 __________________________
550
551 Starting with the Octopus release, BlueStore tracks omap space utilization
552 by pool, and one or more OSDs have volumes that were created prior to
Octopus. If not all OSDs are running BlueStore with the new tracking
enabled, the cluster will report an approximate value for per-pool omap usage
555 based on the most recent deep-scrub.
556
557 The old OSDs can be updated to track by pool by stopping each OSD,
running a repair operation, and then restarting it. For example, if
``osd.123`` needs to be updated::
560
561 systemctl stop ceph-osd@123
562 ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-123
563 systemctl start ceph-osd@123
564
565 This warning can be disabled with::
566
567 ceph config set global bluestore_warn_on_no_per_pool_omap false
568
569
570 BLUESTORE_DISK_SIZE_MISMATCH
571 ____________________________
572
573 One or more OSDs using BlueStore has an internal inconsistency between the size
574 of the physical device and the metadata tracking its size. This can lead to
575 the OSD crashing in the future.
576
577 The OSDs in question should be destroyed and reprovisioned. Care should be
578 taken to do this one OSD at a time, and in a way that doesn't put any data at
risk. For example, if OSD ``$N`` has the error::
580
581 ceph osd out osd.$N
582 while ! ceph osd safe-to-destroy osd.$N ; do sleep 1m ; done
583 ceph osd destroy osd.$N
584 ceph-volume lvm zap /path/to/device
585 ceph-volume lvm create --osd-id $N --data /path/to/device
586
587 BLUESTORE_NO_COMPRESSION
588 ________________________
589
590 One or more OSDs is unable to load a BlueStore compression plugin.
591 This can be caused by a broken installation, in which the ``ceph-osd``
592 binary does not match the compression plugins, or a recent upgrade
593 that did not include a restart of the ``ceph-osd`` daemon.
594
595 Verify that the package(s) on the host running the OSD(s) in question
596 are correctly installed and that the OSD daemon(s) have been
597 restarted. If the problem persists, check the OSD log for any clues
598 as to the source of the problem.
599
600
601
602 Device health
603 -------------
604
605 DEVICE_HEALTH
606 _____________
607
608 One or more devices is expected to fail soon, where the warning
609 threshold is controlled by the ``mgr/devicehealth/warn_threshold``
610 config option.
611
612 This warning only applies to OSDs that are currently marked "in", so
613 the expected response to this failure is to mark the device "out" so
614 that data is migrated off of the device, and then to remove the
615 hardware from the system. Note that the marking out is normally done
616 automatically if ``mgr/devicehealth/self_heal`` is enabled based on
617 the ``mgr/devicehealth/mark_out_threshold``.
618
619 Device health can be checked with::
620
621 ceph device info <device-id>
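
All known devices, together with any recorded life expectancy, can be listed
with::

    ceph device ls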
622
623 Device life expectancy is set by a prediction model run by
the mgr or by an external tool via the command::
625
626 ceph device set-life-expectancy <device-id> <from> <to>
627
628 You can change the stored life expectancy manually, but that usually
629 doesn't accomplish anything as whatever tool originally set it will
630 probably set it again, and changing the stored value does not affect
631 the actual health of the hardware device.
632
633 DEVICE_HEALTH_IN_USE
634 ____________________
635
636 One or more devices is expected to fail soon and has been marked "out"
637 of the cluster based on ``mgr/devicehealth/mark_out_threshold``, but it
is still participating in one or more PGs. This may be because it was
639 only recently marked "out" and data is still migrating, or because data
640 cannot be migrated off for some reason (e.g., the cluster is nearly
641 full, or the CRUSH hierarchy is such that there isn't another suitable
OSD to migrate the data to).
643
644 This message can be silenced by disabling the self heal behavior
645 (setting ``mgr/devicehealth/self_heal`` to false), by adjusting the
646 ``mgr/devicehealth/mark_out_threshold``, or by addressing what is
647 preventing data from being migrated off of the ailing device.
648
649 DEVICE_HEALTH_TOOMANY
650 _____________________
651
Too many devices are expected to fail soon and the
653 ``mgr/devicehealth/self_heal`` behavior is enabled, such that marking
out all of the ailing devices would exceed the cluster's
655 ``mon_osd_min_in_ratio`` ratio that prevents too many OSDs from being
656 automatically marked "out".
657
658 This generally indicates that too many devices in your cluster are
659 expected to fail soon and you should take action to add newer
660 (healthier) devices before too many devices fail and data is lost.
661
662 The health message can also be silenced by adjusting parameters like
663 ``mon_osd_min_in_ratio`` or ``mgr/devicehealth/mark_out_threshold``,
664 but be warned that this will increase the likelihood of unrecoverable
665 data loss in the cluster.
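
For example, to lower the ratio of OSDs that must remain "in" (the value
below is purely illustrative, and lowering it increases risk)::

    ceph config set mon mon_osd_min_in_ratio 0.7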
666
667
668 Data health (pools & placement groups)
669 --------------------------------------
670
671 PG_AVAILABILITY
672 _______________
673
674 Data availability is reduced, meaning that the cluster is unable to
675 service potential read or write requests for some data in the cluster.
676 Specifically, one or more PGs is in a state that does not allow IO
677 requests to be serviced. Problematic PG states include *peering*,
678 *stale*, *incomplete*, and the lack of *active* (if those conditions do not clear
679 quickly).
680
681 Detailed information about which PGs are affected is available from::
682
683 ceph health detail
684
685 In most cases the root cause is that one or more OSDs is currently
686 down; see the discussion for ``OSD_DOWN`` above.
687
688 The state of specific problematic PGs can be queried with::
689
690 ceph tell <pgid> query
691
692 PG_DEGRADED
693 ___________
694
695 Data redundancy is reduced for some data, meaning the cluster does not
696 have the desired number of replicas for all data (for replicated
697 pools) or erasure code fragments (for erasure coded pools).
698 Specifically, one or more PGs:
699
700 * has the *degraded* or *undersized* flag set, meaning there are not
701 enough instances of that placement group in the cluster;
702 * has not had the *clean* flag set for some time.
703
704 Detailed information about which PGs are affected is available from::
705
706 ceph health detail
707
708 In most cases the root cause is that one or more OSDs is currently
down; see the discussion for ``OSD_DOWN`` above.
710
711 The state of specific problematic PGs can be queried with::
712
713 ceph tell <pgid> query
714
715
716 PG_RECOVERY_FULL
717 ________________
718
719 Data redundancy may be reduced or at risk for some data due to a lack
720 of free space in the cluster. Specifically, one or more PGs has the
721 *recovery_toofull* flag set, meaning that the
722 cluster is unable to migrate or recover data because one or more OSDs
723 is above the *full* threshold.
724
725 See the discussion for *OSD_FULL* above for steps to resolve this condition.
726
727 PG_BACKFILL_FULL
728 ________________
729
730 Data redundancy may be reduced or at risk for some data due to a lack
731 of free space in the cluster. Specifically, one or more PGs has the
732 *backfill_toofull* flag set, meaning that the
733 cluster is unable to migrate or recover data because one or more OSDs
734 is above the *backfillfull* threshold.
735
736 See the discussion for *OSD_BACKFILLFULL* above for
737 steps to resolve this condition.
738
739 PG_DAMAGED
740 __________
741
742 Data scrubbing has discovered some problems with data consistency in
the cluster. Specifically, one or more PGs has the *inconsistent* or
*snaptrim_error* flag set, indicating that an earlier scrub operation
found a problem, or has the *repair* flag set, meaning that a repair
746 for such an inconsistency is currently in progress.
747
748 See :doc:`pg-repair` for more information.
749
750 OSD_SCRUB_ERRORS
751 ________________
752
753 Recent OSD scrubs have uncovered inconsistencies. This error is generally
754 paired with *PG_DAMAGED* (see above).
755
756 See :doc:`pg-repair` for more information.
757
758 OSD_TOO_MANY_REPAIRS
759 ____________________
760
When a read error occurs and another replica is available, it is used to repair
762 the error immediately, so that the client can get the object data. Scrub
763 handles errors for data at rest. In order to identify possible failing disks
764 that aren't seeing scrub errors, a count of read repairs is maintained. If
it exceeds the configured threshold ``mon_osd_warn_num_repaired`` (default: 10),
this health warning is generated.
767
768 LARGE_OMAP_OBJECTS
769 __________________
770
771 One or more pools contain large omap objects as determined by
772 ``osd_deep_scrub_large_omap_object_key_threshold`` (threshold for number of keys
773 to determine a large omap object) or
774 ``osd_deep_scrub_large_omap_object_value_sum_threshold`` (the threshold for
775 summed size (bytes) of all key values to determine a large omap object) or both.
776 More information on the object name, key count, and size in bytes can be found
777 by searching the cluster log for 'Large omap object found'. Large omap objects
778 can be caused by RGW bucket index objects that do not have automatic resharding
779 enabled. Please see :ref:`RGW Dynamic Bucket Index Resharding
780 <rgw_dynamic_bucket_index_resharding>` for more information on resharding.
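
For example, on a monitor host the cluster log can be searched directly (the
log path below is the usual default and may differ on your deployment)::

    grep 'Large omap object found' /var/log/ceph/ceph.log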
781
782 The thresholds can be adjusted with::
783
784 ceph config set osd osd_deep_scrub_large_omap_object_key_threshold <keys>
785 ceph config set osd osd_deep_scrub_large_omap_object_value_sum_threshold <bytes>
786
787 CACHE_POOL_NEAR_FULL
788 ____________________
789
790 A cache tier pool is nearly full. Full in this context is determined
791 by the ``target_max_bytes`` and ``target_max_objects`` properties on
792 the cache pool. Once the pool reaches the target threshold, write
793 requests to the pool may block while data is flushed and evicted
794 from the cache, a state that normally leads to very high latencies and
795 poor performance.
796
797 The cache pool target size can be adjusted with::
798
799 ceph osd pool set <cache-pool-name> target_max_bytes <bytes>
800 ceph osd pool set <cache-pool-name> target_max_objects <objects>
801
802 Normal cache flush and evict activity may also be throttled due to reduced
803 availability or performance of the base tier, or overall cluster load.
804
805 TOO_FEW_PGS
806 ___________
807
808 The number of PGs in use in the cluster is below the configurable
809 threshold of ``mon_pg_warn_min_per_osd`` PGs per OSD. This can lead
810 to suboptimal distribution and balance of data across the OSDs in
811 the cluster, and similarly reduce overall performance.
812
813 This may be an expected condition if data pools have not yet been
814 created.
815
816 The PG count for existing pools can be increased or new pools can be created.
817 Please refer to :ref:`choosing-number-of-placement-groups` for more
818 information.
819
820 POOL_PG_NUM_NOT_POWER_OF_TWO
821 ____________________________
822
823 One or more pools has a ``pg_num`` value that is not a power of two.
824 Although this is not strictly incorrect, it does lead to a less
825 balanced distribution of data because some PGs have roughly twice as
826 much data as others.
827
828 This is easily corrected by setting the ``pg_num`` value for the
829 affected pool(s) to a nearby power of two::
830
831 ceph osd pool set <pool-name> pg_num <value>
832
833 This health warning can be disabled with::
834
835 ceph config set global mon_warn_on_pool_pg_num_not_power_of_two false
836
837 POOL_TOO_FEW_PGS
838 ________________
839
840 One or more pools should probably have more PGs, based on the amount
841 of data that is currently stored in the pool. This can lead to
842 suboptimal distribution and balance of data across the OSDs in the
843 cluster, and similarly reduce overall performance. This warning is
844 generated if the ``pg_autoscale_mode`` property on the pool is set to
845 ``warn``.
846
847 To disable the warning, you can disable auto-scaling of PGs for the
848 pool entirely with::
849
850 ceph osd pool set <pool-name> pg_autoscale_mode off
851
852 To allow the cluster to automatically adjust the number of PGs,::
853
854 ceph osd pool set <pool-name> pg_autoscale_mode on
855
856 You can also manually set the number of PGs for the pool to the
857 recommended amount with::
858
859 ceph osd pool set <pool-name> pg_num <new-pg-num>
860
861 Please refer to :ref:`choosing-number-of-placement-groups` and
862 :ref:`pg-autoscaler` for more information.
863
864 TOO_MANY_PGS
865 ____________
866
867 The number of PGs in use in the cluster is above the configurable
868 threshold of ``mon_max_pg_per_osd`` PGs per OSD. If this threshold is
exceeded, the cluster will not allow new pools to be created, pool `pg_num` to
870 be increased, or pool replication to be increased (any of which would lead to
871 more PGs in the cluster). A large number of PGs can lead
872 to higher memory utilization for OSD daemons, slower peering after
873 cluster state changes (like OSD restarts, additions, or removals), and
874 higher load on the Manager and Monitor daemons.
875
876 The simplest way to mitigate the problem is to increase the number of
877 OSDs in the cluster by adding more hardware. Note that the OSD count
878 used for the purposes of this health check is the number of "in" OSDs,
879 so marking "out" OSDs "in" (if there are any) can also help::
880
881 ceph osd in <osd id(s)>
882
883 Please refer to :ref:`choosing-number-of-placement-groups` for more
884 information.
885
886 POOL_TOO_MANY_PGS
887 _________________
888
One or more pools should probably have fewer PGs, based on the amount
890 of data that is currently stored in the pool. This can lead to higher
891 memory utilization for OSD daemons, slower peering after cluster state
892 changes (like OSD restarts, additions, or removals), and higher load
893 on the Manager and Monitor daemons. This warning is generated if the
894 ``pg_autoscale_mode`` property on the pool is set to ``warn``.
895
896 To disable the warning, you can disable auto-scaling of PGs for the
897 pool entirely with::
898
899 ceph osd pool set <pool-name> pg_autoscale_mode off
900
901 To allow the cluster to automatically adjust the number of PGs,::
902
903 ceph osd pool set <pool-name> pg_autoscale_mode on
904
905 You can also manually set the number of PGs for the pool to the
906 recommended amount with::
907
908 ceph osd pool set <pool-name> pg_num <new-pg-num>
909
910 Please refer to :ref:`choosing-number-of-placement-groups` and
911 :ref:`pg-autoscaler` for more information.
912
913 POOL_TARGET_SIZE_BYTES_OVERCOMMITTED
914 ____________________________________
915
916 One or more pools have a ``target_size_bytes`` property set to
917 estimate the expected size of the pool,
918 but the value(s) exceed the total available storage (either by
919 themselves or in combination with other pools' actual usage).
920
921 This is usually an indication that the ``target_size_bytes`` value for
922 the pool is too large and should be reduced or set to zero with::
923
924 ceph osd pool set <pool-name> target_size_bytes 0
925
926 For more information, see :ref:`specifying_pool_target_size`.
927
928 POOL_HAS_TARGET_SIZE_BYTES_AND_RATIO
929 ____________________________________
930
931 One or more pools have both ``target_size_bytes`` and
932 ``target_size_ratio`` set to estimate the expected size of the pool.
933 Only one of these properties should be non-zero. If both are set,
934 ``target_size_ratio`` takes precedence and ``target_size_bytes`` is
935 ignored.
936
937 To reset ``target_size_bytes`` to zero::
938
939 ceph osd pool set <pool-name> target_size_bytes 0
940
941 For more information, see :ref:`specifying_pool_target_size`.
942
943 TOO_FEW_OSDS
944 ____________
945
946 The number of OSDs in the cluster is below the configurable
947 threshold of ``osd_pool_default_size``.
948
949 SMALLER_PGP_NUM
950 _______________
951
952 One or more pools has a ``pgp_num`` value less than ``pg_num``. This
953 is normally an indication that the PG count was increased without
also increasing ``pgp_num``, which actually governs data placement.
955
956 This is sometimes done deliberately to separate out the `split` step
957 when the PG count is adjusted from the data migration that is needed
958 when ``pgp_num`` is changed.
959
960 This is normally resolved by setting ``pgp_num`` to match ``pg_num``,
961 triggering the data migration, with::
962
963 ceph osd pool set <pool> pgp_num <pg-num-value>
964
965 MANY_OBJECTS_PER_PG
966 ___________________
967
968 One or more pools has an average number of objects per PG that is
969 significantly higher than the overall cluster average. The specific
970 threshold is controlled by the ``mon_pg_warn_max_object_skew``
971 configuration value.
972
973 This is usually an indication that the pool(s) containing most of the
974 data in the cluster have too few PGs, and/or that other pools that do
975 not contain as much data have too many PGs. See the discussion of
976 *TOO_MANY_PGS* above.
977
978 The threshold can be raised to silence the health warning by adjusting
979 the ``mon_pg_warn_max_object_skew`` config option on the managers.
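
For example, to raise the threshold (the value of ``20`` below is purely
illustrative)::

    ceph config set mgr mon_pg_warn_max_object_skew 20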
980
981
982 POOL_APP_NOT_ENABLED
983 ____________________
984
985 A pool exists that contains one or more objects but has not been
986 tagged for use by a particular application.
987
988 Resolve this warning by labeling the pool for use by an application. For
989 example, if the pool is used by RBD,::
990
991 rbd pool init <poolname>
992
If the pool is being used by a custom application 'foo', you can label it
via the low-level command::

    ceph osd pool application enable <poolname> foo
997
998 For more information, see :ref:`associate-pool-to-application`.
999
1000 POOL_FULL
1001 _________
1002
1003 One or more pools has reached (or is very close to reaching) its
1004 quota. The threshold to trigger this error condition is controlled by
1005 the ``mon_pool_quota_crit_threshold`` configuration option.
1006
1007 Pool quotas can be adjusted up or down (or removed) with::
1008
1009 ceph osd pool set-quota <pool> max_bytes <bytes>
1010 ceph osd pool set-quota <pool> max_objects <objects>
1011
1012 Setting the quota value to 0 will disable the quota.
1013
1014 POOL_NEAR_FULL
1015 ______________
1016
One or more pools is approaching its quota. The threshold to trigger
1018 this warning condition is controlled by the
1019 ``mon_pool_quota_warn_threshold`` configuration option.
1020
1021 Pool quotas can be adjusted up or down (or removed) with::
1022
1023 ceph osd pool set-quota <pool> max_bytes <bytes>
1024 ceph osd pool set-quota <pool> max_objects <objects>
1025
1026 Setting the quota value to 0 will disable the quota.
1027
1028 OBJECT_MISPLACED
1029 ________________
1030
1031 One or more objects in the cluster is not stored on the node the
1032 cluster would like it to be stored on. This is an indication that
1033 data migration due to some recent cluster change has not yet completed.
1034
1035 Misplaced data is not a dangerous condition in and of itself; data
1036 consistency is never at risk, and old copies of objects are never
1037 removed until the desired number of new copies (in the desired
1038 locations) are present.
1039
1040 OBJECT_UNFOUND
1041 ______________
1042
1043 One or more objects in the cluster cannot be found. Specifically, the
1044 OSDs know that a new or updated copy of an object should exist, but a
1045 copy of that version of the object has not been found on OSDs that are
1046 currently online.
1047
1048 Read or write requests to unfound objects will block.
1049
Ideally, a down OSD that has a more recent copy of the unfound object can
be brought back online. Candidate OSDs can be identified from the
1052 peering state for the PG(s) responsible for the unfound object::
1053
1054 ceph tell <pgid> query
1055
1056 If the latest copy of the object is not available, the cluster can be
1057 told to roll back to a previous version of the object. See
1058 :ref:`failures-osd-unfound` for more information.
1059
1060 SLOW_OPS
1061 ________
1062
1063 One or more OSD requests is taking a long time to process. This can
1064 be an indication of extreme load, a slow storage device, or a software
1065 bug.
1066
1067 The request queue on the OSD(s) in question can be queried with the
1068 following command, executed from the OSD host::
1069
1070 ceph daemon osd.<id> ops
1071
1072 A summary of the slowest recent requests can be seen with::
1073
1074 ceph daemon osd.<id> dump_historic_ops
1075
1076 The location of an OSD can be found with::
1077
1078 ceph osd find osd.<id>
1079
1080 PG_NOT_SCRUBBED
1081 _______________
1082
1083 One or more PGs has not been scrubbed recently. PGs are normally
1084 scrubbed every ``mon_scrub_interval`` seconds, and this warning
triggers when more than ``mon_warn_pg_not_scrubbed_ratio`` of the interval has
elapsed since the scrub was due without the scrub having taken place.
1087
1088 PGs will not scrub if they are not flagged as *clean*, which may
1089 happen if they are misplaced or degraded (see *PG_AVAILABILITY* and
1090 *PG_DEGRADED* above).
1091
1092 You can manually initiate a scrub of a clean PG with::
1093
1094 ceph pg scrub <pgid>
1095
1096 PG_NOT_DEEP_SCRUBBED
1097 ____________________
1098
1099 One or more PGs has not been deep scrubbed recently. PGs are normally
deep scrubbed every ``osd_deep_scrub_interval`` seconds, and this warning
triggers when more than ``mon_warn_pg_not_deep_scrubbed_ratio`` of the interval
has elapsed since the deep scrub was due without the scrub having taken place.
1103
1104 PGs will not (deep) scrub if they are not flagged as *clean*, which may
1105 happen if they are misplaced or degraded (see *PG_AVAILABILITY* and
1106 *PG_DEGRADED* above).
1107
1108 You can manually initiate a scrub of a clean PG with::
1109
1110 ceph pg deep-scrub <pgid>
1111
1112
1113 PG_SLOW_SNAP_TRIMMING
1114 _____________________
1115
1116 The snapshot trim queue for one or more PGs has exceeded the
1117 configured warning threshold. This indicates that either an extremely
1118 large number of snapshots were recently deleted, or that the OSDs are
1119 unable to trim snapshots quickly enough to keep up with the rate of
1120 new snapshot deletions.
1121
1122 The warning threshold is controlled by the
1123 ``mon_osd_snap_trim_queue_warn_on`` option (default: 32768).
1124
1125 This warning may trigger if OSDs are under excessive load and unable
1126 to keep up with their background work, or if the OSDs' internal
metadata database is heavily fragmented and performing poorly. It may
1128 also indicate some other performance issue with the OSDs.
1129
1130 The exact size of the snapshot trim queue is reported by the
1131 ``snaptrimq_len`` field of ``ceph pg ls -f json-detail``.
1132
1133
1134
1135 Miscellaneous
1136 -------------
1137
1138 RECENT_CRASH
1139 ____________
1140
1141 One or more Ceph daemons has crashed recently, and the crash has not
1142 yet been archived (acknowledged) by the administrator. This may
1143 indicate a software bug, a hardware problem (e.g., a failing disk), or
1144 some other problem.
1145
1146 New crashes can be listed with::
1147
1148 ceph crash ls-new
1149
1150 Information about a specific crash can be examined with::
1151
1152 ceph crash info <crash-id>
1153
1154 This warning can be silenced by "archiving" the crash (perhaps after
1155 being examined by an administrator) so that it does not generate this
1156 warning::
1157
1158 ceph crash archive <crash-id>
1159
1160 Similarly, all new crashes can be archived with::
1161
1162 ceph crash archive-all
1163
1164 Archived crashes will still be visible via ``ceph crash ls`` but not
1165 ``ceph crash ls-new``.
1166
1167 The time period for what "recent" means is controlled by the option
1168 ``mgr/crash/warn_recent_interval`` (default: two weeks).
1169
1170 These warnings can be disabled entirely with::
1171
ceph config set mgr mgr/crash/warn_recent_interval 0
1173
1174 TELEMETRY_CHANGED
1175 _________________
1176
1177 Telemetry has been enabled, but the contents of the telemetry report
1178 have changed since that time, so telemetry reports will not be sent.
1179
1180 The Ceph developers periodically revise the telemetry feature to
1181 include new and useful information, or to remove information found to
1182 be useless or sensitive. If any new information is included in the
1183 report, Ceph will require the administrator to re-enable telemetry to
1184 ensure they have an opportunity to (re)review what information will be
1185 shared.
1186
1187 To review the contents of the telemetry report,::
1188
1189 ceph telemetry show
1190
1191 Note that the telemetry report consists of several optional channels
1192 that may be independently enabled or disabled. For more information, see
1193 :ref:`telemetry`.
1194
1195 To re-enable telemetry (and make this warning go away),::
1196
1197 ceph telemetry on
1198
1199 To disable telemetry (and make this warning go away),::
1200
1201 ceph telemetry off
1202
1203 AUTH_BAD_CAPS
1204 _____________
1205
1206 One or more auth users has capabilities that cannot be parsed by the
1207 monitor. This generally indicates that the user will not be
1208 authorized to perform any action with one or more daemon types.
1209
This error is most likely to occur after an upgrade if the
1211 capabilities were set with an older version of Ceph that did not
1212 properly validate their syntax, or if the syntax of the capabilities
1213 has changed.
1214
1215 The user in question can be removed with::
1216
1217 ceph auth rm <entity-name>
1218
1219 (This will resolve the health alert, but obviously clients will not be
1220 able to authenticate as that user.)
1221
1222 Alternatively, the capabilities for the user can be updated with::
1223
ceph auth caps <entity-name> <daemon-type> <caps> [<daemon-type> <caps> ...]
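
For example, for a hypothetical user ``client.foo`` and pool ``foo-pool``::

    ceph auth caps client.foo mon 'allow r' osd 'allow rw pool=foo-pool'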
1225
1226 For more information about auth capabilities, see :ref:`user-management`.
1227
1228
1229 OSD_NO_DOWN_OUT_INTERVAL
1230 ________________________
1231
1232 The ``mon_osd_down_out_interval`` option is set to zero, which means
1233 that the system will not automatically perform any repair or healing
1234 operations after an OSD fails. Instead, an administrator (or some
1235 other external entity) will need to manually mark down OSDs as 'out'
1236 (i.e., via ``ceph osd out <osd-id>``) in order to trigger recovery.
1237
1238 This option is normally set to five or ten minutes--enough time for a
1239 host to power-cycle or reboot.
1240
This warning can be silenced by setting
``mon_warn_on_osd_down_out_interval_zero`` to false::

    ceph config set mon mon_warn_on_osd_down_out_interval_zero false
1245
1246 DASHBOARD_DEBUG
1247 _______________
1248
The Dashboard debug mode is enabled. This means that if there is an error
while processing a REST API request, the HTTP error response contains
a Python traceback. This behaviour should be disabled in production
environments because such a traceback might expose sensitive
information.
1254
1255 The debug mode can be disabled with::
1256
1257 ceph dashboard debug disable