=========================
 Monitoring OSDs and PGs
=========================

High availability and high reliability require a fault-tolerant approach to
managing hardware and software issues. Ceph has no single point of failure and
can service requests for data in a "degraded" mode. Ceph's `data placement`_
introduces a layer of indirection to ensure that data doesn't bind directly to
particular OSD addresses. This means that tracking down system faults requires
finding the `placement group`_ and the underlying OSDs at the root of the
problem.

.. tip:: A fault in one part of the cluster may prevent you from accessing a
   particular object, but that doesn't mean that you cannot access other
   objects. When you run into a fault, don't panic. Just follow the steps for
   monitoring your OSDs and placement groups. Then, begin troubleshooting.

Ceph is generally self-repairing. However, when problems persist, monitoring
OSDs and placement groups will help you identify the problem.


Monitoring OSDs
===============

An OSD's status is either in the cluster (``in``) or out of the cluster
(``out``), and either up and running (``up``) or down and not running
(``down``). If an OSD is ``up``, it may be either ``in`` the cluster (you can
read and write data) or ``out`` of the cluster. If it was ``in`` the cluster
and recently moved ``out`` of the cluster, Ceph will migrate its placement
groups to other OSDs. If an OSD is ``out`` of the cluster, CRUSH will not
assign placement groups to it. If an OSD is ``down``, it should also be
``out``.

.. note:: If an OSD is ``down`` and ``in``, there is a problem and the cluster
   will not be in a healthy state.

.. ditaa::

           +----------------+        +----------------+
           |                |        |                |
           |   OSD #n In    |        |   OSD #n Up    |
           |                |        |                |
           +----------------+        +----------------+
                   ^                         ^
                   |                         |
                   |                         |
                   v                         v
           +----------------+        +----------------+
           |                |        |                |
           |   OSD #n Out   |        |   OSD #n Down  |
           |                |        |                |
           +----------------+        +----------------+

If you execute a command such as ``ceph health``, ``ceph -s`` or ``ceph -w``,
you may notice that the cluster does not always echo back ``HEALTH_OK``. Don't
panic. With respect to OSDs, you should expect that the cluster will **NOT**
echo ``HEALTH_OK`` in a few expected circumstances:

#. You haven't started the cluster yet (it won't respond).
#. You have just started or restarted the cluster and it's not ready yet,
   because the placement groups are getting created and the OSDs are in
   the process of peering.
#. You have just added or removed an OSD.
#. You have just modified your cluster map.

An important aspect of monitoring OSDs is to ensure that, when the cluster is
up and running, all OSDs that are ``in`` the cluster are ``up`` and running,
too. To see if all OSDs are running, execute:

.. prompt:: bash $

   ceph osd stat

The result should tell you the total number of OSDs (x), how many are ``up``
(y), how many are ``in`` (z), and the map epoch (eNNNN). ::

    x osds: y up, z in; epoch: eNNNN

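For example, on a hypothetical three-OSD cluster with all OSDs running, the
output might look like this (the exact formatting varies by release)::

    3 osds: 3 up (since 2h), 3 in (since 2h); epoch: e157
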
If the number of OSDs that are ``in`` the cluster is more than the number of
OSDs that are ``up``, execute the following command to identify the ``ceph-osd``
daemons that are not running:

.. prompt:: bash $

   ceph osd tree

::

    #ID CLASS WEIGHT  TYPE NAME             STATUS REWEIGHT PRI-AFF
     -1       2.00000 pool openstack
     -3       2.00000 rack dell-2950-rack-A
     -2       2.00000 host dell-2950-A1
      0   ssd 1.00000      osd.0            up     1.00000  1.00000
      1   ssd 1.00000      osd.1            down   1.00000  1.00000

.. tip:: The ability to search through a well-designed CRUSH hierarchy may help
   you troubleshoot your cluster by identifying the physical locations of
   failed OSDs faster.

If an OSD is ``down``, start it:

.. prompt:: bash $

   sudo systemctl start ceph-osd@1

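If the daemon does not start or does not stay up, inspect its status and recent
log output (a quick check, assuming a traditional systemd deployment; cephadm
clusters use differently named units):

.. prompt:: bash $

   sudo systemctl status ceph-osd@1
   sudo journalctl -u ceph-osd@1 --since "1 hour ago"
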
See `OSD Not Running`_ for problems associated with OSDs that have stopped or
won't restart.


PG Sets
=======

When CRUSH assigns placement groups to OSDs, it looks at the number of replicas
for the pool and assigns the placement group to OSDs such that each replica of
the placement group gets assigned to a different OSD. For example, if the pool
requires three replicas of a placement group, CRUSH may assign them to
``osd.1``, ``osd.2`` and ``osd.3``. CRUSH actually seeks a pseudo-random
placement that takes into account the failure domains you set in your `CRUSH
map`_, so you will rarely see placement groups assigned to nearest-neighbor
OSDs in a large cluster.

Ceph processes a client request using the **Acting Set**, which is the set of
OSDs that will actually handle the requests because they have a full and
working version of a placement group shard. The set of OSDs that should contain
a shard of a particular placement group is known as the **Up Set**, i.e. where
data is moved or copied to (or planned to be).

In some cases, an OSD in the Acting Set is ``down`` or otherwise not able to
service requests for objects in the placement group. When these situations
arise, don't panic. Common examples include:

- You added or removed an OSD. Then, CRUSH reassigned the placement group to
  other OSDs, thereby changing the composition of the Acting Set and spawning
  the migration of data with a "backfill" process.
- An OSD was ``down``, was restarted, and is now ``recovering``.
- An OSD in the Acting Set is ``down`` or unable to service requests,
  and another OSD has temporarily assumed its duties.

In most cases, the Up Set and the Acting Set are identical. When they are not,
it may indicate that Ceph is migrating the PG (it's remapped), that an OSD is
recovering, or that there is a problem (in such scenarios, Ceph usually echoes
a ``HEALTH_WARN`` state with a "stuck stale" message).

To retrieve a list of placement groups, execute:

.. prompt:: bash $

   ceph pg dump

To view which OSDs are within the Acting Set or the Up Set for a given placement
group, execute:

.. prompt:: bash $

   ceph pg map {pg-num}

The result should tell you the osdmap epoch (eNNN), the placement group number
({pg-num}), the OSDs in the Up Set (up[]), and the OSDs in the Acting Set
(acting[])::

    osdmap eNNN pg {raw-pg-num} ({pg-num}) -> up [0,1,2] acting [0,1,2]

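For example, mapping a hypothetical placement group ``1.1f`` might return::

    osdmap e537 pg 1.1f (1.1f) -> up [1,0,2] acting [1,0,2]

Here ``osd.1`` is the primary, and the Up Set matches the Acting Set, so no
data migration is in progress for this placement group.
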
.. note:: If the Up Set and Acting Set do not match, this may be an indicator
   that the cluster is rebalancing itself or of a potential problem with
   the cluster.


Peering
=======

Before you can write data to a placement group, it must be in an ``active``
state, and it **should** be in a ``clean`` state. For Ceph to determine the
current state of a placement group, the primary OSD of the placement group
(i.e., the first OSD in the Acting Set) peers with the secondary and tertiary
OSDs to establish agreement on the current state of the placement group
(assuming a pool with 3 replicas of the PG).


.. ditaa::

           +---------+     +---------+     +-------+
           |  OSD 1  |     |  OSD 2  |     | OSD 3 |
           +---------+     +---------+     +-------+
                |               |              |
                |  Request To   |              |
                |     Peer      |              |
                |-------------->|              |
                |<--------------|              |
                |    Peering                   |
                |                              |
                |         Request To           |
                |            Peer              |
                |----------------------------->|
                |<-----------------------------|
                |         Peering              |

The OSDs also report their status to the monitor. See `Configuring Monitor/OSD
Interaction`_ for details. To troubleshoot peering issues, see `Peering
Failure`_.


Monitoring Placement Group States
=================================

If you execute a command such as ``ceph health``, ``ceph -s`` or ``ceph -w``,
you may notice that the cluster does not always echo back ``HEALTH_OK``. After
you check to see if the OSDs are running, you should also check placement group
states. You should expect that the cluster will **NOT** echo ``HEALTH_OK`` in a
number of placement group peering-related circumstances:

#. You have just created a pool and placement groups haven't peered yet.
#. The placement groups are recovering.
#. You have just added an OSD to or removed an OSD from the cluster.
#. You have just modified your CRUSH map and your placement groups are migrating.
#. There is inconsistent data in different replicas of a placement group.
#. Ceph is scrubbing a placement group's replicas.
#. Ceph doesn't have enough storage capacity to complete backfilling operations.

If one of the foregoing circumstances causes Ceph to echo ``HEALTH_WARN``, don't
panic. In many cases, the cluster will recover on its own. In some cases, you
may need to take action. An important aspect of monitoring placement groups is
to ensure that, when the cluster is up and running, all placement groups are
``active``, and preferably in the ``clean`` state. To see the status of all
placement groups, execute:

.. prompt:: bash $

   ceph pg stat

The result should tell you the total number of placement groups (x), how many
placement groups are in a particular state such as ``active+clean`` (y), and the
amount of data stored (z). ::

    x pgs: y active+clean; z bytes data, aa MB used, bb GB / cc GB avail

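For example, a small healthy cluster might report something like this (the
numbers here are purely illustrative)::

    96 pgs: 96 active+clean; 12 GiB data, 25 GiB used, 575 GiB / 600 GiB avail
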
.. note:: It is common for Ceph to report multiple states for placement groups.

In addition to the placement group states, Ceph will also echo back the amount
of storage capacity used (aa), the amount of storage capacity remaining (bb),
and the total storage capacity of the cluster (cc). These numbers can be
important in a few cases:

- You are reaching your ``near full ratio`` or ``full ratio``.
- Your data is not getting distributed across the cluster due to an
  error in your CRUSH configuration.


.. topic:: Placement Group IDs

   Placement group IDs consist of the pool number (not the pool name) followed
   by a period (.) and the placement group ID, a hexadecimal number. You
   can view pool numbers and their names in the output of ``ceph osd
   lspools``. For example, the first pool created corresponds to
   pool number ``1``. A fully qualified placement group ID has the
   following form::

     {pool-num}.{pg-id}

   And it typically looks like this::

     1.1f


To retrieve a list of placement groups, execute the following:

.. prompt:: bash $

   ceph pg dump

You can also format the output in JSON and save it to a file:

.. prompt:: bash $

   ceph pg dump -o {filename} --format=json

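The JSON dump lends itself to scripted inspection. For example, the following
sketch lists each placement group's ID and state with ``jq`` (assuming ``jq``
is installed; the file name is arbitrary, and the exact JSON layout can vary
between Ceph releases):

.. prompt:: bash $

   ceph pg dump -o /tmp/pg_dump.json --format=json
   jq -r '.pg_map.pg_stats[] | "\(.pgid) \(.state)"' /tmp/pg_dump.json
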
To query a particular placement group, execute the following:

.. prompt:: bash $

   ceph pg {poolnum}.{pg-id} query

Ceph will output the query in JSON format.

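Because the output is JSON, it is easy to filter. For example, to see just the
state and the Up and Acting Sets of a hypothetical placement group ``1.1f``
(again assuming ``jq`` is available):

.. prompt:: bash $

   ceph pg 1.1f query | jq '{state, up, acting}'
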
The following subsections describe the common PG states in detail.

Creating
--------

When you create a pool, Ceph will create the number of placement groups you
specified. Ceph will echo ``creating`` when it is creating one or more
placement groups. Once they are created, the OSDs that are part of a placement
group's Acting Set will peer. Once peering is complete, the placement group
status should be ``active+clean``, which means a Ceph client can begin writing
to the placement group.

.. ditaa::

    /-----------\       /-----------\       /-----------\
    |  Creating |------>|  Peering  |------>|  Active   |
    \-----------/       \-----------/       \-----------/

Peering
-------

When Ceph is peering a placement group, Ceph is bringing the OSDs that
store the replicas of the placement group into **agreement about the state**
of the objects and metadata in the placement group. When Ceph completes peering,
this means that the OSDs that store the placement group agree about the current
state of the placement group. However, completion of the peering process does
**NOT** mean that each replica has the latest contents.

.. topic:: Authoritative History

   Ceph will **NOT** acknowledge a write operation to a client until
   all OSDs of the Acting Set persist the write operation. This practice
   ensures that at least one member of the Acting Set will have a record
   of every acknowledged write operation since the last successful
   peering operation.

   With an accurate record of each acknowledged write operation, Ceph can
   construct and disseminate a new authoritative history of the placement
   group: a complete, and fully ordered set of operations that, if performed,
   would bring an OSD's copy of a placement group up to date.


Active
------

Once Ceph completes the peering process, a placement group may become
``active``. The ``active`` state means that the data in the placement group is
generally available in the primary placement group and its replicas for read
and write operations.


Clean
-----

When a placement group is in the ``clean`` state, the primary OSD and the
replica OSDs have successfully peered and there are no stray replicas for the
placement group. Ceph has replicated all objects in the placement group the
correct number of times.


Degraded
--------

When a client writes an object to the primary OSD, the primary OSD is
responsible for writing the replicas to the replica OSDs. After the primary OSD
writes the object to storage, the placement group will remain in a ``degraded``
state until the primary OSD has received an acknowledgement from the replica
OSDs that Ceph created the replica objects successfully.

The reason a placement group can be ``active+degraded`` is that an OSD may be
``active`` even though it doesn't hold all of the objects yet. If an OSD goes
``down``, Ceph marks each placement group assigned to the OSD as ``degraded``.
The OSDs must peer again when the OSD comes back online. However, a client can
still write a new object to a ``degraded`` placement group if it is ``active``.

If an OSD is ``down`` and the ``degraded`` condition persists, Ceph may mark the
``down`` OSD as ``out`` of the cluster and remap the data from the ``down`` OSD
to another OSD. The time between being marked ``down`` and being marked ``out``
is controlled by ``mon osd down out interval``, which is set to ``600`` seconds
by default.

A placement group can also be ``degraded`` because Ceph cannot find one or more
objects that Ceph thinks should be in the placement group. While you cannot
read or write to unfound objects, you can still access all of the other objects
in the ``degraded`` placement group.

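To see which placement groups are degraded, and whether any of their objects
are unfound, ``ceph health detail`` is usually the quickest route; a per-PG
listing of unfound objects is also available (``1.1f`` is a hypothetical PG
ID):

.. prompt:: bash $

   ceph health detail
   ceph pg 1.1f list_unfound
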
Recovering
----------

Ceph was designed for fault-tolerance at a scale where hardware and software
problems are ongoing. When an OSD goes ``down``, its contents may fall behind
the current state of other replicas in its placement groups. When the OSD is
back ``up``, the contents of the placement groups must be updated to reflect
their current state. During that time period, the OSD may reflect a
``recovering`` state.

Recovery is not always trivial, because a hardware failure might cause a
cascading failure of multiple OSDs. For example, a network switch for a rack or
cabinet may fail, which can cause the OSDs of a number of host machines to fall
behind the current state of the cluster. Each one of the OSDs must recover once
the fault is resolved.

Ceph provides a number of settings to balance the resource contention between
new service requests and the need to recover data objects and restore the
placement groups to the current state. The ``osd recovery delay start`` setting
allows an OSD to restart, re-peer and even process some replay requests before
starting the recovery process. The ``osd recovery thread timeout`` sets a
thread timeout, because multiple OSDs may fail, restart and re-peer at
staggered rates. The ``osd recovery max active`` setting limits the number of
recovery requests an OSD will entertain simultaneously, to prevent the OSD from
failing to serve requests. The ``osd recovery max chunk`` setting limits the
size of the recovered data chunks, to prevent network congestion.

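You can inspect and adjust these settings at runtime through the ``ceph
config`` interface. For example (the value ``3`` is illustrative; appropriate
values depend on your hardware and workload):

.. prompt:: bash $

   ceph config get osd osd_recovery_max_active
   ceph config set osd osd_recovery_max_active 3
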
Back Filling
------------

When a new OSD joins the cluster, CRUSH will reassign placement groups from OSDs
in the cluster to the newly added OSD. Forcing the new OSD to accept the
reassigned placement groups immediately can put excessive load on the new OSD.
Backfilling the OSD with the placement groups allows this process to begin in
the background. Once backfilling is complete, the new OSD will begin serving
requests when it is ready.

During the backfill operations, you may see one of several states:
``backfill_wait`` indicates that a backfill operation is pending, but is not
underway yet; ``backfilling`` indicates that a backfill operation is underway;
and ``backfill_toofull`` indicates that a backfill operation was requested but
couldn't be completed due to insufficient storage capacity. When a placement
group cannot be backfilled, it may be considered ``incomplete``.

The ``backfill_toofull`` state may be transient. It is possible that as PGs
are moved around, space may become available. The ``backfill_toofull`` state is
similar to ``backfill_wait`` in that backfill can proceed as soon as conditions
change.

Ceph provides a number of settings to manage the load spike associated with
reassigning placement groups to an OSD (especially a new OSD). By default,
``osd_max_backfills`` sets the maximum number of concurrent backfills to and
from an OSD to 1. The ``backfill full ratio`` enables an OSD to refuse a
backfill request if the OSD is approaching its full ratio (90%, by default);
it can be changed with the ``ceph osd set-backfillfull-ratio`` command. If an
OSD refuses a backfill request, the ``osd backfill retry interval`` setting
enables the OSD to retry the request (after 30 seconds, by default). OSDs can
also set ``osd backfill scan min`` and ``osd backfill scan max`` to manage scan
intervals (64 and 512, by default).

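For example, to raise the backfill full ratio cluster-wide (the value ``0.92``
is illustrative; choose a value that leaves adequate capacity headroom):

.. prompt:: bash $

   ceph osd set-backfillfull-ratio 0.92
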

Remapped
--------

When the Acting Set that services a placement group changes, the data migrates
from the old Acting Set to the new Acting Set. It may take some time for the
new primary OSD to be able to service requests, so it may ask the old primary
to continue to service requests until the placement group migration is
complete. Once data migration completes, the mapping uses the primary OSD of
the new Acting Set.


Stale
-----

While Ceph uses heartbeats to ensure that hosts and daemons are running, the
``ceph-osd`` daemons may also get into a ``stuck`` state where they are not
reporting statistics in a timely manner (e.g., due to a temporary network
fault). By default, OSD daemons report their placement group, up thru, boot and
failure statistics every half second (i.e., ``0.5``), which is more frequent
than the heartbeat thresholds. If the **Primary OSD** of a placement group's
Acting Set fails to report to the monitor, or if other OSDs have reported the
primary OSD ``down``, the monitors will mark the placement group ``stale``.

When you start your cluster, it is common to see the ``stale`` state until
the peering process completes. After your cluster has been running for a while,
seeing placement groups in the ``stale`` state indicates that the primary OSD
for those placement groups is ``down`` or not reporting placement group
statistics to the monitor.


Identifying Troubled PGs
========================

As previously noted, a placement group is not necessarily problematic just
because its state is not ``active+clean``. Generally, Ceph's ability to
self-repair may not be working when placement groups get stuck. The stuck
states include:

- **Unclean**: Placement groups contain objects that are not replicated the
  desired number of times. They should be recovering.
- **Inactive**: Placement groups cannot process reads or writes because they
  are waiting for an OSD with the most up-to-date data to come back ``up``.
- **Stale**: Placement groups are in an unknown state, because the OSDs that
  host them have not reported to the monitor cluster in a while (as configured
  by ``mon osd report timeout``).

To identify stuck placement groups, execute the following:

.. prompt:: bash $

   ceph pg dump_stuck [unclean|inactive|stale|undersized|degraded]

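The output lists one placement group per line together with its state and the
OSDs in its Up and Acting Sets. For example (hypothetical output; the exact
column layout varies by release)::

    PG_STAT  STATE                    UP     UP_PRIMARY  ACTING  ACTING_PRIMARY
    1.5      stale+active+undersized  [0,2]  0           [0,2]   0
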
See `Placement Group Subsystem`_ for additional details. To troubleshoot
stuck placement groups, see `Troubleshooting PG Errors`_.


Finding an Object Location
==========================

To store object data in the Ceph Object Store, a Ceph client must:

#. Set an object name
#. Specify a `pool`_

The Ceph client retrieves the latest cluster map, and the CRUSH algorithm
calculates how to map the object to a `placement group`_ and then calculates
how to assign the placement group to an OSD dynamically. To find the object
location, all you need is the object name and the pool name. For example:

.. prompt:: bash $

   ceph osd map {poolname} {object-name} [namespace]

.. topic:: Exercise: Locate an Object

   As an exercise, let's create an object. Specify an object name, a path
   to a test file containing some object data and a pool name using the
   ``rados put`` command on the command line. For example:

   .. prompt:: bash $

      rados put {object-name} {file-path} --pool=data
      rados put test-object-1 testfile.txt --pool=data

   To verify that the Ceph Object Store stored the object, execute the
   following:

   .. prompt:: bash $

      rados -p data ls

   Now, identify the object location:

   .. prompt:: bash $

      ceph osd map {pool-name} {object-name}
      ceph osd map data test-object-1

   Ceph should output the object's location. For example::

      osdmap e537 pool 'data' (1) object 'test-object-1' -> pg 1.d1743484 (1.4) -> up ([0,1], p0) acting ([0,1], p0)

   To remove the test object, simply delete it using the ``rados rm``
   command. For example:

   .. prompt:: bash $

      rados rm test-object-1 --pool=data


As the cluster evolves, the object location may change dynamically. One benefit
of Ceph's dynamic rebalancing is that Ceph relieves you from having to perform
the migration manually. See the `Architecture`_ section for details.

.. _data placement: ../data-placement
.. _pool: ../pools
.. _placement group: ../placement-groups
.. _Architecture: ../../../architecture
.. _OSD Not Running: ../../troubleshooting/troubleshooting-osd#osd-not-running
.. _Troubleshooting PG Errors: ../../troubleshooting/troubleshooting-pg#troubleshooting-pg-errors
.. _Peering Failure: ../../troubleshooting/troubleshooting-pg#failures-osd-peering
.. _CRUSH map: ../crush-map
.. _Configuring Monitor/OSD Interaction: ../../configuration/mon-osd-interaction/
.. _Placement Group Subsystem: ../control#placement-group-subsystem