========================
 Placement Group States
========================

When checking a cluster's status (e.g., running ``ceph -w`` or ``ceph -s``),
Ceph will report on the status of the placement groups. A placement group has
one or more states. The optimum state for placement groups in the placement
group map is ``active + clean``.

*Creating*
  Ceph is still creating the placement group.

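As a quick illustration, the per-state PG counts can be pulled out of the JSON form of the status report. This is a minimal sketch: the ``summarize_pg_states`` helper and the trimmed sample snippet are illustrative, assuming the usual ``pgmap``/``pgs_by_state`` layout of ``ceph -s --format json`` output.

```python
import json

def summarize_pg_states(status_json):
    """Map each PG state combination to its PG count, given the JSON
    output of ``ceph -s --format json`` (layout assumed)."""
    pgmap = json.loads(status_json)["pgmap"]
    return {e["state_name"]: e["count"] for e in pgmap["pgs_by_state"]}

# Heavily trimmed sample of a status report (illustrative values):
sample = '''{"pgmap": {"pgs_by_state": [
    {"state_name": "active+clean", "count": 126},
    {"state_name": "active+recovery_wait+degraded", "count": 2}]}}'''

print(summarize_pg_states(sample))
# → {'active+clean': 126, 'active+recovery_wait+degraded': 2}
```

Note that a single PG reports a *combination* of states (e.g. ``active+clean``), which is why the keys above are compound strings.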
*Activating*
  The placement group is peered but not yet active.

*Active*
  Ceph will process requests to the placement group.

*Clean*
  Ceph replicated all objects in the placement group the correct number of
  times.

*Down*
  A replica with necessary data is down, so the placement group is offline.

*Scrubbing*
  Ceph is checking the placement group for inconsistencies.

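Conceptually, a scrub compares each object across the PG's replicas (a deep scrub goes as far as checksumming the data); any mismatch marks the PG *inconsistent*. A toy sketch of that digest comparison, assuming in-memory replica payloads (the ``toy_deep_scrub`` helper is hypothetical, not Ceph code):

```python
import hashlib

def toy_deep_scrub(replica_payloads):
    """Toy model of a deep scrub: hash each replica's copy of an object
    and flag any digest mismatch (hypothetical helper, not Ceph code)."""
    digests = {hashlib.sha256(p).hexdigest() for p in replica_payloads}
    return "clean" if len(digests) == 1 else "inconsistent"

print(toy_deep_scrub([b"object data", b"object data", b"object data"]))  # → clean
print(toy_deep_scrub([b"object data", b"object data", b"object dat?"]))  # → inconsistent
```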
*Degraded*
  Ceph has not replicated some objects in the placement group the correct
  number of times yet.

*Inconsistent*
  Ceph detects inconsistencies in one or more replicas of an object in the
  placement group (e.g. objects are the wrong size, objects are missing from
  one replica *after* recovery finished, etc.).

*Peering*
  The placement group is undergoing the peering process.

*Repair*
  Ceph is checking the placement group and repairing any inconsistencies it
  finds (if possible).

*Recovering*
  Ceph is migrating/synchronizing objects and their replicas.

*Forced-Recovery*
  Recovery of the placement group has been forced to high priority by the
  user.

*Recovery-wait*
  The placement group is waiting in line to start recovery.

*Recovery-toofull*
  A recovery operation is waiting because the destination OSD is over its
  full ratio.

*Recovery-unfound*
  Recovery stopped due to unfound objects.

*Backfilling*
  Ceph is scanning and synchronizing the entire contents of a placement group
  instead of inferring what contents need to be synchronized from the logs of
  recent operations. Backfill is a special case of recovery.

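The distinction from log-based recovery can be sketched as a toy decision rule (a conceptual sketch only; real peering is far more involved, and ``toy_sync_strategy`` is a hypothetical helper, not Ceph code): if a peer's last known version is still covered by the PG log, the missing operations can be replayed (recovery); if the log has been trimmed past that point, the entire PG must be rescanned (backfill).

```python
def toy_sync_strategy(log_tail_version, peer_last_version):
    """Toy rule: replay log entries (recovery) when the peer's last
    version is still within the PG log; otherwise fall back to a full
    rescan (backfill). Hypothetical helper, not Ceph code."""
    return "recovery" if peer_last_version >= log_tail_version else "backfill"

# Peer only slightly behind: the log still covers the gap.
print(toy_sync_strategy(log_tail_version=100, peer_last_version=150))  # → recovery
# Peer far behind: the log has been trimmed past its last version.
print(toy_sync_strategy(log_tail_version=100, peer_last_version=40))   # → backfill
```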
*Forced-Backfill*
  Backfill of the placement group has been forced to high priority by the
  user.

*Backfill-wait*
  The placement group is waiting in line to start backfill.

*Backfill-toofull*
  A backfill operation is waiting because the destination OSD is over its
  full ratio.

*Backfill-unfound*
  Backfill stopped due to unfound objects.

*Incomplete*
  Ceph detects that a placement group is missing information about writes
  that may have occurred, or does not have any healthy copies. If you see
  this state, try to start any failed OSDs that may contain the needed
  information. In the case of an erasure coded pool, temporarily reducing
  ``min_size`` may allow recovery.

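For context on the erasure-coded case: an object in a ``k+m`` pool can be reconstructed from any ``k`` of its ``k+m`` shards, which is why lowering ``min_size`` toward ``k`` can let an incomplete PG proceed. A small illustration of that shard-count arithmetic (``ec_recoverable`` is an illustrative helper, not Ceph code):

```python
def ec_recoverable(k, m, available_shards):
    """An erasure-coded object with k data and m coding shards can be
    rebuilt from any k surviving shards (illustrative helper)."""
    assert 0 <= available_shards <= k + m
    return available_shards >= k

# A 4+2 pool tolerates the loss of up to 2 shards per object:
print(ec_recoverable(k=4, m=2, available_shards=4))  # → True
print(ec_recoverable(k=4, m=2, available_shards=3))  # → False
```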
*Stale*
  The placement group is in an unknown state - the monitors have not received
  an update for it since the placement group mapping changed.

*Remapped*
  The placement group is temporarily mapped to a different set of OSDs from
  what CRUSH specified.

*Undersized*
  The placement group has fewer copies than the configured pool replication
  level.

*Peered*
  The placement group has peered, but cannot serve client IO due to not
  having enough copies to reach the pool's configured ``min_size`` parameter.
  Recovery may occur in this state, so the PG may heal up to ``min_size``
  eventually.
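The ``min_size`` condition above amounts to a one-line check on the PG's acting set (``can_serve_io`` is a toy helper for a replicated pool, not Ceph code):

```python
def can_serve_io(acting_set, min_size):
    """A peered PG serves client IO only once its acting set holds at
    least min_size copies (toy helper, replicated-pool view)."""
    return len(acting_set) >= min_size

# A size=3, min_size=2 pool with one OSD down still serves IO:
print(can_serve_io(acting_set=[1, 4], min_size=2))  # → True
# With two OSDs down, the PG stays 'peered' without serving IO:
print(can_serve_io(acting_set=[1], min_size=2))     # → False
```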

*Snaptrim*
  Trimming snapshots.

*Snaptrim-wait*
  Queued to trim snapshots.

*Snaptrim-error*
  Snapshot trimming stopped due to an error.