=============
Health checks
=============

Overview
========

There is a finite set of possible health messages that a Ceph cluster can
raise -- these are defined as *health checks* which have unique identifiers.

The identifier is a terse pseudo-human-readable (i.e. like a variable name)
string. It is intended to enable tools (such as UIs) to make sense of
health checks, and present them in a way that reflects their meaning.

This page lists the health checks that are raised by the monitor and manager
daemons. In addition to these, you may also see health checks that originate
from MDS daemons (see :ref:`cephfs-health-messages`), and health checks
that are defined by ceph-mgr python modules.

Definitions
===========

Monitor
-------

MON_DOWN
________

One or more monitor daemons is currently down. The cluster requires a
majority (more than 1/2) of the monitors in order to function. When
one or more monitors are down, clients may have a harder time forming
their initial connection to the cluster as they may need to try more
addresses before they reach an operating monitor.

The down monitor daemon should generally be restarted as soon as
possible to reduce the risk of a subsequent monitor failure leading to
a service outage.

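For example, to check which monitors are out of quorum and restart the
affected daemon (the restart command assumes a systemd-based deployment,
where the monitor unit is typically named ``ceph-mon@<hostname>``)::

    ceph health detail
    ceph quorum_status -f json-pretty
    systemctl restart ceph-mon@<hostname>    # run on the affected monitor host
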
MON_CLOCK_SKEW
______________

The clocks on the hosts running the ceph-mon monitor daemons are not
sufficiently well synchronized. This health alert is raised if the
cluster detects a clock skew greater than ``mon_clock_drift_allowed``.

This is best resolved by synchronizing the clocks using a tool like
``ntpd`` or ``chrony``.

If it is impractical to keep the clocks closely synchronized, the
``mon_clock_drift_allowed`` threshold can also be increased, but this
value must stay significantly below the ``mon_lease`` interval in
order for the monitor cluster to function properly.

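For example, the monitors' own view of clock skew can be inspected with
``ceph time-sync-status``, and the time daemon can be checked on each
host (the ``chronyc`` and ``ntpq`` commands assume which time-sync tool
is in use)::

    ceph time-sync-status
    chronyc tracking    # if chrony is in use
    ntpq -p             # if ntpd is in use
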
MON_MSGR2_NOT_ENABLED
_____________________

The ``ms_bind_msgr2`` option is enabled but one or more monitors is
not configured to bind to a v2 port in the cluster's monmap. This
means that features specific to the msgr2 protocol (e.g., encryption)
are not available on some or all connections.

In most cases this can be corrected by issuing the command::

    ceph mon enable-msgr2

That command will change any monitor configured for the old default
port 6789 to continue to listen for v1 connections on 6789 and also
listen for v2 connections on the new default 3300 port.

If a monitor is configured to listen for v1 connections on a
non-standard port (not 6789), then the monmap will need to be modified
manually.

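The addresses each monitor is bound to (and whether a v2 address is
present) can be reviewed before and after the change with::

    ceph mon dump
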


Manager
-------

MGR_MODULE_DEPENDENCY
_____________________

An enabled manager module is failing its dependency check. This health check
should come with an explanatory message from the module about the problem.

For example, a module might report that a required package is not installed:
install the required package and restart your manager daemons.

This health check is only applied to enabled modules. If a module is
not enabled, you can see whether it is reporting dependency issues in
the output of `ceph mgr module ls`.

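For example, dependency problems reported by both enabled and disabled
modules can be reviewed with::

    ceph mgr module ls
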

MGR_MODULE_ERROR
________________

A manager module has experienced an unexpected error. Typically,
this means an unhandled exception was raised from the module's `serve`
function. The human-readable description of the error may be obscurely
worded if the exception did not provide a useful description of itself.

This health check may indicate a bug: please open a Ceph bug report if you
think you have encountered a bug.

If you believe the error is transient, you may restart your manager
daemon(s), or use `ceph mgr fail` on the active daemon to prompt
a failover to another daemon.

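For example, assuming a systemd-based deployment, either of the
following can be used to restart a manager daemon or fail over the
active one::

    systemctl restart ceph-mgr@<name>    # run on the manager host
    ceph mgr fail <mgr-name>
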

OSDs
----

OSD_DOWN
________

One or more OSDs are marked down. The ceph-osd daemon may have been
stopped, or peer OSDs may be unable to reach the OSD over the network.
Common causes include a stopped or crashed daemon, a down host, or a
network outage.

Verify the host is healthy, the daemon is started, and network is
functioning. If the daemon has crashed, the daemon log file
(``/var/log/ceph/ceph-osd.*``) may contain debugging information.

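For example, down OSDs appear in the output of ``ceph osd tree``, and
(assuming a systemd-based deployment) a stopped daemon can be restarted
on its host with::

    ceph osd tree                      # down OSDs are shown with a `down` status
    systemctl restart ceph-osd@<id>    # run on the host that carries the OSD
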
OSD_<crush type>_DOWN
_____________________

(e.g. OSD_HOST_DOWN, OSD_ROOT_DOWN)

All the OSDs within a particular CRUSH subtree are marked down, for example
all OSDs on a host.

OSD_ORPHAN
__________

An OSD is referenced in the CRUSH map hierarchy but does not exist.

The OSD can be removed from the CRUSH hierarchy with::

    ceph osd crush rm osd.<id>

OSD_OUT_OF_ORDER_FULL
_____________________

The utilization thresholds for `backfillfull`, `nearfull`, `full`,
and/or `failsafe_full` are not ascending. In particular, we expect
`backfillfull < nearfull`, `nearfull < full`, and `full <
failsafe_full`.

The thresholds can be adjusted with::

    ceph osd set-backfillfull-ratio <ratio>
    ceph osd set-nearfull-ratio <ratio>
    ceph osd set-full-ratio <ratio>


OSD_FULL
________

One or more OSDs has exceeded the `full` threshold and is preventing
the cluster from servicing writes.

Utilization by pool can be checked with::

    ceph df

The currently defined `full` ratio can be seen with::

    ceph osd dump | grep full_ratio

A short-term workaround to restore write availability is to raise the full
threshold by a small amount::

    ceph osd set-full-ratio <ratio>

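For example, to nudge the threshold from the usual default of 0.95 to
0.97 (the value shown is only illustrative; choose one appropriate for
your cluster and lower it again once space has been freed)::

    ceph osd set-full-ratio 0.97
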
New storage should be added to the cluster by deploying more OSDs or
existing data should be deleted in order to free up space.

OSD_BACKFILLFULL
________________

One or more OSDs has exceeded the `backfillfull` threshold, which will
prevent data from rebalancing to this device. This is
an early warning that rebalancing may not be able to complete and that
the cluster is approaching full.

Utilization by pool can be checked with::

    ceph df

OSD_NEARFULL
____________

One or more OSDs has exceeded the `nearfull` threshold. This is an early
warning that the cluster is approaching full.

Utilization by pool can be checked with::

    ceph df

OSDMAP_FLAGS
____________

One or more cluster flags of interest has been set. These flags include:

* *full* - the cluster is flagged as full and cannot service writes
* *pauserd*, *pausewr* - paused reads or writes
* *noup* - OSDs are not allowed to start
* *nodown* - OSD failure reports are being ignored, such that the
  monitors will not mark OSDs `down`
* *noin* - OSDs that were previously marked `out` will not be marked
  back `in` when they start
* *noout* - down OSDs will not automatically be marked out after the
  configured interval
* *nobackfill*, *norecover*, *norebalance* - recovery or data
  rebalancing is suspended
* *noscrub*, *nodeep_scrub* - scrubbing is disabled
* *notieragent* - cache tiering activity is suspended

With the exception of *full*, these flags can be set or cleared with::

    ceph osd set <flag>
    ceph osd unset <flag>

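For example, to clear a ``noout`` flag that was set during maintenance::

    ceph osd unset noout
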
OSD_FLAGS
_________

One or more OSDs has a per-OSD flag of interest set. These flags include:

* *noup*: OSD is not allowed to start
* *nodown*: failure reports for this OSD will be ignored
* *noin*: if this OSD was previously marked `out` automatically
  after a failure, it will not be marked in when it starts
* *noout*: if this OSD is down it will not automatically be marked
  `out` after the configured interval

Per-OSD flags can be set and cleared with::

    ceph osd add-<flag> <osd-id>
    ceph osd rm-<flag> <osd-id>

For example, ::

    ceph osd rm-nodown osd.123

OLD_CRUSH_TUNABLES
__________________

The CRUSH map is using very old settings and should be updated. The
oldest tunables that can be used (i.e., the oldest client version that
can connect to the cluster) without triggering this health warning is
determined by the ``mon_crush_min_required_version`` config option.
See :ref:`crush-map-tunables` for more information.

OLD_CRUSH_STRAW_CALC_VERSION
____________________________

The CRUSH map is using an older, non-optimal method for calculating
intermediate weight values for ``straw`` buckets.

The CRUSH map should be updated to use the newer method
(``straw_calc_version=1``). See
:ref:`crush-map-tunables` for more information.

CACHE_POOL_NO_HIT_SET
_____________________

One or more cache pools is not configured with a *hit set* to track
utilization, which will prevent the tiering agent from identifying
cold objects to flush and evict from the cache.

Hit sets can be configured on the cache pool with::

    ceph osd pool set <poolname> hit_set_type <type>
    ceph osd pool set <poolname> hit_set_period <period-in-seconds>
    ceph osd pool set <poolname> hit_set_count <number-of-hitsets>
    ceph osd pool set <poolname> hit_set_fpp <target-false-positive-rate>

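As an illustrative sketch only (the pool name ``hot-storage`` and the
values are placeholders; ``bloom`` is the commonly used hit set type)::

    ceph osd pool set hot-storage hit_set_type bloom
    ceph osd pool set hot-storage hit_set_period 14400
    ceph osd pool set hot-storage hit_set_count 12
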
OSD_NO_SORTBITWISE
__________________

No pre-luminous v12.y.z OSDs are running but the ``sortbitwise`` flag has not
been set.

The ``sortbitwise`` flag must be set before luminous v12.y.z or newer
OSDs can start. You can safely set the flag with::

    ceph osd set sortbitwise

POOL_FULL
_________

One or more pools has reached its quota and is no longer allowing writes.

Pool quotas and utilization can be seen with::

    ceph df detail

You can either raise the pool quota with::

    ceph osd pool set-quota <poolname> max_objects <num-objects>
    ceph osd pool set-quota <poolname> max_bytes <num-bytes>

or delete some existing data to reduce utilization.


Device health
-------------

DEVICE_HEALTH
_____________

One or more devices is expected to fail soon, where the warning
threshold is controlled by the ``mgr/devicehealth/warn_threshold``
config option.

This warning only applies to OSDs that are currently marked "in", so
the expected response to this failure is to mark the device "out" so
that data is migrated off of the device, and then to remove the
hardware from the system. Note that the marking out is normally done
automatically if ``mgr/devicehealth/self_heal`` is enabled based on
the ``mgr/devicehealth/mark_out_threshold``.

Device health can be checked with::

    ceph device info <device-id>

Device life expectancy is set by a prediction model run by
the mgr or by an external tool via the command::

    ceph device set-life-expectancy <device-id> <from> <to>

You can change the stored life expectancy manually, but that usually
doesn't accomplish anything as whatever tool originally set it will
probably set it again, and changing the stored value does not affect
the actual health of the hardware device.

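For example, the devices known to the cluster can be listed, and an
ailing device's OSD can be manually marked ``out`` so that data
migrates off of it::

    ceph device ls
    ceph osd out osd.<id>
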
DEVICE_HEALTH_IN_USE
____________________

One or more devices is expected to fail soon and has been marked "out"
of the cluster based on ``mgr/devicehealth/mark_out_threshold``, but it
is still participating in one or more PGs. This may be because it was
only recently marked "out" and data is still migrating, or because data
cannot be migrated off for some reason (e.g., the cluster is nearly
full, or the CRUSH hierarchy is such that there isn't another suitable
OSD to migrate the data to).

This message can be silenced by disabling the self heal behavior
(setting ``mgr/devicehealth/self_heal`` to false), by adjusting the
``mgr/devicehealth/mark_out_threshold``, or by addressing what is
preventing data from being migrated off of the ailing device.

DEVICE_HEALTH_TOOMANY
_____________________

Too many devices are expected to fail soon and the
``mgr/devicehealth/self_heal`` behavior is enabled, such that marking
out all of the ailing devices would exceed the cluster's
``mon_osd_min_in_ratio`` ratio that prevents too many OSDs from being
automatically marked "out".

This generally indicates that too many devices in your cluster are
expected to fail soon and you should take action to add newer
(healthier) devices before too many devices fail and data is lost.

The health message can also be silenced by adjusting parameters like
``mon_osd_min_in_ratio`` or ``mgr/devicehealth/mark_out_threshold``,
but be warned that this will increase the likelihood of unrecoverable
data loss in the cluster.

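For example, the ratio could be lowered as follows (the value is only
illustrative; weigh the warning above before relaxing this safeguard)::

    ceph config set mon mon_osd_min_in_ratio 0.70
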

Data health (pools & placement groups)
--------------------------------------

PG_AVAILABILITY
_______________

Data availability is reduced, meaning that the cluster is unable to
service potential read or write requests for some data in the cluster.
Specifically, one or more PGs is in a state that does not allow IO
requests to be serviced. Problematic PG states include *peering*,
*stale*, *incomplete*, and the lack of *active* (if those conditions do not clear
quickly).

Detailed information about which PGs are affected is available from::

    ceph health detail

In most cases the root cause is that one or more OSDs is currently
down; see the discussion for ``OSD_DOWN`` above.

The state of specific problematic PGs can be queried with::

    ceph tell <pgid> query

PG_DEGRADED
___________

Data redundancy is reduced for some data, meaning the cluster does not
have the desired number of replicas for all data (for replicated
pools) or erasure code fragments (for erasure coded pools).
Specifically, one or more PGs:

* has the *degraded* or *undersized* flag set, meaning there are not
  enough instances of that placement group in the cluster;
* has not had the *clean* flag set for some time.

Detailed information about which PGs are affected is available from::

    ceph health detail

In most cases the root cause is that one or more OSDs is currently
down; see the discussion for ``OSD_DOWN`` above.

The state of specific problematic PGs can be queried with::

    ceph tell <pgid> query


PG_DEGRADED_FULL
________________

Data redundancy may be reduced or at risk for some data due to a lack
of free space in the cluster. Specifically, one or more PGs has the
*backfill_toofull* or *recovery_toofull* flag set, meaning that the
cluster is unable to migrate or recover data because one or more OSDs
is above the *backfillfull* threshold.

See the discussion for *OSD_BACKFILLFULL* or *OSD_FULL* above for
steps to resolve this condition.

PG_DAMAGED
__________

Data scrubbing has discovered some problems with data consistency in
the cluster. Specifically, one or more PGs has the *inconsistent* or
*snaptrim_error* flag set, indicating an earlier scrub operation
found a problem, or has the *repair* flag set, meaning a repair
for such an inconsistency is currently in progress.

See :doc:`pg-repair` for more information.

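If, after reviewing :doc:`pg-repair`, a repair is deemed appropriate,
it can be initiated for a specific PG with::

    ceph pg repair <pgid>
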
OSD_SCRUB_ERRORS
________________

Recent OSD scrubs have uncovered inconsistencies. This error is generally
paired with *PG_DAMAGED* (see above).

See :doc:`pg-repair` for more information.

LARGE_OMAP_OBJECTS
__________________

One or more pools contain large omap objects as determined by
``osd_deep_scrub_large_omap_object_key_threshold`` (threshold for number of keys
to determine a large omap object) or
``osd_deep_scrub_large_omap_object_value_sum_threshold`` (the threshold for
summed size (bytes) of all key values to determine a large omap object) or both.
More information on the object name, key count, and size in bytes can be found
by searching the cluster log for 'Large omap object found'. Large omap objects
can be caused by RGW bucket index objects that do not have automatic resharding
enabled. Please see :ref:`RGW Dynamic Bucket Index Resharding
<rgw_dynamic_bucket_index_resharding>` for more information on resharding.

The thresholds can be adjusted with::

    ceph config set osd osd_deep_scrub_large_omap_object_key_threshold <keys>
    ceph config set osd osd_deep_scrub_large_omap_object_value_sum_threshold <bytes>

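For example, assuming the cluster log is written to the default
location on a monitor host, the offending objects can be located with::

    grep 'Large omap object found' /var/log/ceph/ceph.log
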
CACHE_POOL_NEAR_FULL
____________________

A cache tier pool is nearly full. Full in this context is determined
by the ``target_max_bytes`` and ``target_max_objects`` properties on
the cache pool. Once the pool reaches the target threshold, write
requests to the pool may block while data is flushed and evicted
from the cache, a state that normally leads to very high latencies and
poor performance.

The cache pool target size can be adjusted with::

    ceph osd pool set <cache-pool-name> target_max_bytes <bytes>
    ceph osd pool set <cache-pool-name> target_max_objects <objects>

Normal cache flush and evict activity may also be throttled due to reduced
availability or performance of the base tier, or overall cluster load.

TOO_FEW_PGS
___________

The number of PGs in use in the cluster is below the configurable
threshold of ``mon_pg_warn_min_per_osd`` PGs per OSD. This can lead
to suboptimal distribution and balance of data across the OSDs in
the cluster, and similarly reduce overall performance.

This may be an expected condition if data pools have not yet been
created.

The PG count for existing pools can be increased or new pools can be created.
Please refer to :ref:`choosing-number-of-placement-groups` for more
information.

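For example, assuming the pg_autoscaler is not already managing the
pool, the PG count of an existing pool can be raised with::

    ceph osd pool set <pool-name> pg_num <new-pg-num>
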
POOL_TOO_FEW_PGS
________________

One or more pools should probably have more PGs, based on the amount
of data that is currently stored in the pool. This can lead to
suboptimal distribution and balance of data across the OSDs in the
cluster, and similarly reduce overall performance. This warning is
generated if the ``pg_autoscale_mode`` property on the pool is set to
``warn``.

To disable the warning, you can disable auto-scaling of PGs for the
pool entirely with::

    ceph osd pool set <pool-name> pg_autoscale_mode off

To allow the cluster to automatically adjust the number of PGs::

    ceph osd pool set <pool-name> pg_autoscale_mode on

You can also manually set the number of PGs for the pool to the
recommended amount with::

    ceph osd pool set <pool-name> pg_num <new-pg-num>

Please refer to :ref:`choosing-number-of-placement-groups` and
:ref:`pg-autoscaler` for more information.

TOO_MANY_PGS
____________

The number of PGs in use in the cluster is above the configurable
threshold of ``mon_max_pg_per_osd`` PGs per OSD. If this threshold is
exceeded, the cluster will not allow new pools to be created, pool `pg_num` to
be increased, or pool replication to be increased (any of which would lead to
more PGs in the cluster). A large number of PGs can lead
to higher memory utilization for OSD daemons, slower peering after
cluster state changes (like OSD restarts, additions, or removals), and
higher load on the Manager and Monitor daemons.

The simplest way to mitigate the problem is to increase the number of
OSDs in the cluster by adding more hardware. Note that the OSD count
used for the purposes of this health check is the number of "in" OSDs,
so marking "out" OSDs "in" (if there are any) can also help::

    ceph osd in <osd id(s)>

Please refer to :ref:`choosing-number-of-placement-groups` for more
information.

POOL_TOO_MANY_PGS
_________________

One or more pools should probably have fewer PGs, based on the amount
of data that is currently stored in the pool. This can lead to higher
memory utilization for OSD daemons, slower peering after cluster state
changes (like OSD restarts, additions, or removals), and higher load
on the Manager and Monitor daemons. This warning is generated if the
``pg_autoscale_mode`` property on the pool is set to ``warn``.

To disable the warning, you can disable auto-scaling of PGs for the
pool entirely with::

    ceph osd pool set <pool-name> pg_autoscale_mode off

To allow the cluster to automatically adjust the number of PGs::

    ceph osd pool set <pool-name> pg_autoscale_mode on

You can also manually set the number of PGs for the pool to the
recommended amount with::

    ceph osd pool set <pool-name> pg_num <new-pg-num>

Please refer to :ref:`choosing-number-of-placement-groups` and
:ref:`pg-autoscaler` for more information.

POOL_TARGET_SIZE_RATIO_OVERCOMMITTED
____________________________________

One or more pools have a ``target_size_ratio`` property set to
estimate the expected size of the pool as a fraction of total storage,
but the value(s) exceed the total available storage (either by
themselves or in combination with other pools' actual usage).

This is usually an indication that the ``target_size_ratio`` value for
the pool is too large and should be reduced or set to zero with::

    ceph osd pool set <pool-name> target_size_ratio 0

For more information, see :ref:`specifying_pool_target_size`.

POOL_TARGET_SIZE_BYTES_OVERCOMMITTED
____________________________________

One or more pools have a ``target_size_bytes`` property set to
estimate the expected size of the pool,
but the value(s) exceed the total available storage (either by
themselves or in combination with other pools' actual usage).

This is usually an indication that the ``target_size_bytes`` value for
the pool is too large and should be reduced or set to zero with::

    ceph osd pool set <pool-name> target_size_bytes 0

For more information, see :ref:`specifying_pool_target_size`.

SMALLER_PGP_NUM
_______________

One or more pools has a ``pgp_num`` value less than ``pg_num``. This
is normally an indication that the PG count was increased without
also increasing ``pgp_num``, which controls placement.

This is sometimes done deliberately to separate out the `split` step
when the PG count is adjusted from the data migration that is needed
when ``pgp_num`` is changed.

This is normally resolved by setting ``pgp_num`` to match ``pg_num``,
triggering the data migration, with::

    ceph osd pool set <pool> pgp_num <pg-num-value>

MANY_OBJECTS_PER_PG
___________________

One or more pools has an average number of objects per PG that is
significantly higher than the overall cluster average. The specific
threshold is controlled by the ``mon_pg_warn_max_object_skew``
configuration value.

This is usually an indication that the pool(s) containing most of the
data in the cluster have too few PGs, and/or that other pools that do
not contain as much data have too many PGs. See the discussion of
*TOO_MANY_PGS* above.

The threshold can be raised to silence the health warning by adjusting
the ``mon_pg_warn_max_object_skew`` config option on the monitors.


POOL_APP_NOT_ENABLED
____________________

A pool exists that contains one or more objects but has not been
tagged for use by a particular application.

Resolve this warning by labeling the pool for use by an application. For
example, if the pool is used by RBD::

    rbd pool init <poolname>

If the pool is being used by a custom application 'foo', you can also label
via the low-level command::

    ceph osd pool application enable <poolname> foo

For more information, see :ref:`associate-pool-to-application`.

POOL_FULL
_________

One or more pools has reached (or is very close to reaching) its
quota. The threshold to trigger this error condition is controlled by
the ``mon_pool_quota_crit_threshold`` configuration option.

Pool quotas can be adjusted up or down (or removed) with::

    ceph osd pool set-quota <pool> max_bytes <bytes>
    ceph osd pool set-quota <pool> max_objects <objects>

Setting the quota value to 0 will disable the quota.

POOL_NEAR_FULL
______________

One or more pools is approaching its quota. The threshold to trigger
this warning condition is controlled by the
``mon_pool_quota_warn_threshold`` configuration option.

Pool quotas can be adjusted up or down (or removed) with::

    ceph osd pool set-quota <pool> max_bytes <bytes>
    ceph osd pool set-quota <pool> max_objects <objects>

Setting the quota value to 0 will disable the quota.

OBJECT_MISPLACED
________________

One or more objects in the cluster is not stored on the node the
cluster would like it to be stored on. This is an indication that
data migration due to some recent cluster change has not yet completed.

Misplaced data is not a dangerous condition in and of itself; data
consistency is never at risk, and old copies of objects are never
removed until the desired number of new copies (in the desired
locations) are present.

OBJECT_UNFOUND
______________

One or more objects in the cluster cannot be found. Specifically, the
OSDs know that a new or updated copy of an object should exist, but a
copy of that version of the object has not been found on OSDs that are
currently online.

Read or write requests to unfound objects will block.

Ideally, a down OSD that has the more recent copy of the unfound
object can be brought back online. Candidate OSDs can be identified from the
peering state for the PG(s) responsible for the unfound object::

    ceph tell <pgid> query

If the latest copy of the object is not available, the cluster can be
told to roll back to a previous version of the object. See
:ref:`failures-osd-unfound` for more information.

SLOW_OPS
________

One or more OSD requests is taking a long time to process. This can
be an indication of extreme load, a slow storage device, or a software
bug.

The request queue on the OSD(s) in question can be queried with the
following command, executed from the OSD host::

    ceph daemon osd.<id> ops

A summary of the slowest recent requests can be seen with::

    ceph daemon osd.<id> dump_historic_ops

The location of an OSD can be found with::

    ceph osd find osd.<id>

PG_NOT_SCRUBBED
_______________

One or more PGs has not been scrubbed recently. PGs are normally
scrubbed every ``mon_scrub_interval`` seconds, and this warning
triggers when ``mon_warn_pg_not_scrubbed_ratio`` of that interval has
elapsed after the scrub was due without the scrub taking place.

PGs will not scrub if they are not flagged as *clean*, which may
happen if they are misplaced or degraded (see *PG_AVAILABILITY* and
*PG_DEGRADED* above).

You can manually initiate a scrub of a clean PG with::

    ceph pg scrub <pgid>

PG_NOT_DEEP_SCRUBBED
____________________

One or more PGs has not been deep scrubbed recently. PGs are normally
scrubbed every ``osd_deep_scrub_interval`` seconds, and this warning
triggers when ``mon_warn_pg_not_deep_scrubbed_ratio`` of that interval
has elapsed after the scrub was due without the scrub taking place.

PGs will not (deep) scrub if they are not flagged as *clean*, which may
happen if they are misplaced or degraded (see *PG_AVAILABILITY* and
*PG_DEGRADED* above).

You can manually initiate a scrub of a clean PG with::

    ceph pg deep-scrub <pgid>