1======================
2 Troubleshooting OSDs
3======================
4
5Before troubleshooting your OSDs, check your monitors and network first. If
6you execute ``ceph health`` or ``ceph -s`` on the command line and Ceph returns
7a health status, it means that the monitors have a quorum.
8If you don't have a monitor quorum or if there are errors with the monitor
9status, `address the monitor issues first <../troubleshooting-mon>`_.
10Check your networks to ensure they
11are running properly, because networks may have a significant impact on OSD
12operation and performance.
13
14
15
16Obtaining Data About OSDs
17=========================
18
19A good first step in troubleshooting your OSDs is to obtain information in
20addition to the information you collected while `monitoring your OSDs`_
21(e.g., ``ceph osd tree``).
22
23
24Ceph Logs
25---------
26
27If you haven't changed the default path, you can find Ceph log files at
28``/var/log/ceph``::
29
30 ls /var/log/ceph
31
32If you don't get enough log detail, you can change your logging level. See
33`Logging and Debugging`_ for details to ensure that Ceph performs adequately
34under high logging volume.
35
36
37Admin Socket
38------------
39
Use the admin socket tool to retrieve runtime information. First, list
the sockets for your Ceph processes::
42
43 ls /var/run/ceph
44
45Then, execute the following, replacing ``{daemon-name}`` with an actual
46daemon (e.g., ``osd.0``)::
47
48 ceph daemon osd.0 help
49
50Alternatively, you can specify a ``{socket-file}`` (e.g., something in ``/var/run/ceph``)::
51
52 ceph daemon {socket-file} help
53
54
55The admin socket, among other things, allows you to:
56
57- List your configuration at runtime
58- Dump historic operations
59- Dump the operation priority queue state
60- Dump operations in flight
61- Dump perfcounters
62
63
64Display Freespace
65-----------------
66
67Filesystem issues may arise. To display your filesystem's free space, execute
68``df``. ::
69
70 df -h
71
72Execute ``df --help`` for additional usage.
73
74
75I/O Statistics
76--------------
77
78Use `iostat`_ to identify I/O-related issues. ::
79
80 iostat -x
81
82
83Diagnostic Messages
84-------------------
85
86To retrieve diagnostic messages, use ``dmesg`` with ``less``, ``more``, ``grep``
87or ``tail``. For example::
88
89 dmesg | grep scsi
90
91
92Stopping w/out Rebalancing
93==========================
94
95Periodically, you may need to perform maintenance on a subset of your cluster,
96or resolve a problem that affects a failure domain (e.g., a rack). If you do not
97want CRUSH to automatically rebalance the cluster as you stop OSDs for
98maintenance, set the cluster to ``noout`` first::
99
100 ceph osd set noout
101
102Once the cluster is set to ``noout``, you can begin stopping the OSDs within the
103failure domain that requires maintenance work. ::
104
105 stop ceph-osd id={num}
106
.. note:: Placement groups within the OSDs you stop will become ``degraded``
   while you are addressing issues within the failure domain.
109
110Once you have completed your maintenance, restart the OSDs. ::
111
112 start ceph-osd id={num}
113
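The commands above assume an Upstart-style init. On distributions that manage
Ceph daemons with systemd, the equivalent commands would typically use the
``ceph-osd@`` service template (an assumption about your init system; adjust to
match your environment)::

    sudo systemctl stop ceph-osd@{num}     # before maintenance
    sudo systemctl start ceph-osd@{num}    # after maintenance
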
114Finally, you must unset the cluster from ``noout``. ::
115
116 ceph osd unset noout
117
118
119
120.. _osd-not-running:
121
122OSD Not Running
123===============
124
125Under normal circumstances, simply restarting the ``ceph-osd`` daemon will
126allow it to rejoin the cluster and recover.
127
128An OSD Won't Start
129------------------
130
131If you start your cluster and an OSD won't start, check the following:
132
133- **Configuration File:** If you were not able to get OSDs running from
134 a new installation, check your configuration file to ensure it conforms
135 (e.g., ``host`` not ``hostname``, etc.).
136
137- **Check Paths:** Check the paths in your configuration, and the actual
138 paths themselves for data and journals. If you separate the OSD data from
139 the journal data and there are errors in your configuration file or in the
140 actual mounts, you may have trouble starting OSDs. If you want to store the
141 journal on a block device, you should partition your journal disk and assign
142 one partition per OSD.
143
144- **Check Max Threadcount:** If you have a node with a lot of OSDs, you may be
145 hitting the default maximum number of threads (e.g., usually 32k), especially
  during recovery. You can use ``sysctl`` to see whether raising the maximum
  number of threads to the maximum allowed value (i.e., 4194303) helps.
  For example::
149
150 sysctl -w kernel.pid_max=4194303
151
152 If increasing the maximum thread count resolves the issue, you can make it
153 permanent by including a ``kernel.pid_max`` setting in the
154 ``/etc/sysctl.conf`` file. For example::
155
156 kernel.pid_max = 4194303
157
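  A quick way to check whether you are approaching the limit is to compare the
  current setting against a rough count of threads in use (ordinary shell
  commands, not Ceph-specific)::

    sysctl kernel.pid_max    # show the current limit
    ps -eLf | wc -l          # approximate number of threads currently in use
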
158- **Kernel Version:** Identify the kernel version and distribution you
159 are using. Ceph uses some third party tools by default, which may be
160 buggy or may conflict with certain distributions and/or kernel
161 versions (e.g., Google perftools). Check the `OS recommendations`_
162 to ensure you have addressed any issues related to your kernel.
163
- **Segmentation Fault:** If there is a segmentation fault, turn your logging up
  (if it isn't already), and try again. If the daemon segfaults again,
  contact the ceph-devel email list and provide your Ceph configuration
  file, your monitor output and the contents of your log file(s).
168
169
170
171An OSD Failed
172-------------
173
174When a ``ceph-osd`` process dies, the monitor will learn about the failure
175from surviving ``ceph-osd`` daemons and report it via the ``ceph health``
176command::
177
178 ceph health
179 HEALTH_WARN 1/3 in osds are down
180
181Specifically, you will get a warning whenever there are ``ceph-osd``
182processes that are marked ``in`` and ``down``. You can identify which
183``ceph-osds`` are ``down`` with::
184
185 ceph health detail
186 HEALTH_WARN 1/3 in osds are down
187 osd.0 is down since epoch 23, last address 192.168.106.220:6800/11080
188
189If there is a disk
190failure or other fault preventing ``ceph-osd`` from functioning or
191restarting, an error message should be present in its log file in
192``/var/log/ceph``.
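
For example, to review the most recent log entries for a particular OSD (the
path below assumes the default ``$cluster-$name.log`` naming)::

    tail -n 200 /var/log/ceph/ceph-osd.{num}.log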
193
194If the daemon stopped because of a heartbeat failure, the underlying
195kernel file system may be unresponsive. Check ``dmesg`` output for disk
196or other kernel errors.
197
198If the problem is a software error (failed assertion or other
199unexpected error), it should be reported to the `ceph-devel`_ email list.
200
201
202No Free Drive Space
203-------------------
204
205Ceph prevents you from writing to a full OSD so that you don't lose data.
206In an operational cluster, you should receive a warning when your cluster
207is getting near its full ratio. The ``mon osd full ratio`` defaults to
208``0.95``, or 95% of capacity before it stops clients from writing data.
The ``mon osd backfillfull ratio`` defaults to ``0.90``, or 90% of
capacity, at which point it blocks backfills from starting. The
211``mon osd nearfull ratio`` defaults to ``0.85``, or 85% of capacity
212when it generates a health warning.
213
214Full cluster issues usually arise when testing how Ceph handles an OSD
215failure on a small cluster. When one node has a high percentage of the
cluster's data, the cluster can quickly exceed its nearfull and full ratios.
If you are testing how Ceph reacts to OSD failures on a small
218cluster, you should leave ample free disk space and consider temporarily
219lowering the ``mon osd full ratio``, ``mon osd backfillfull ratio`` and
220``mon osd nearfull ratio``.
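
On Luminous and later releases, these thresholds can also be adjusted at
runtime. A minimal sketch, assuming you only want to lower them temporarily
while testing (pick values that suit your cluster)::

    ceph osd set-nearfull-ratio 0.75
    ceph osd set-backfillfull-ratio 0.80
    ceph osd set-full-ratio 0.85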
221
222Full ``ceph-osds`` will be reported by ``ceph health``::
223
224 ceph health
225 HEALTH_WARN 1 nearfull osd(s)
226
227Or::
228
229 ceph health detail
230 HEALTH_ERR 1 full osd(s); 1 backfillfull osd(s); 1 nearfull osd(s)
231 osd.3 is full at 97%
232 osd.4 is backfill full at 91%
233 osd.2 is near full at 87%
234
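To see exactly how full each OSD is, inspect per-OSD utilization::

    ceph osd df
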
235The best way to deal with a full cluster is to add new ``ceph-osds``, allowing
236the cluster to redistribute data to the newly available storage.
237
If you cannot start an OSD because it is full, you may free up space by deleting
some placement group directories on the full OSD.
240
241.. important:: If you choose to delete a placement group directory on a full OSD,
242 **DO NOT** delete the same placement group directory on another full OSD, or
243 **YOU MAY LOSE DATA**. You **MUST** maintain at least one copy of your data on
244 at least one OSD.
245
246See `Monitor Config Reference`_ for additional details.
247
248
249OSDs are Slow/Unresponsive
250==========================
251
252A commonly recurring issue involves slow or unresponsive OSDs. Ensure that you
253have eliminated other troubleshooting possibilities before delving into OSD
performance issues. For example, ensure that your networks are working properly
and your OSDs are running. Check to see if OSDs are throttling recovery traffic.
256
.. tip:: Newer versions of Ceph provide better recovery handling by preventing
   recovering OSDs from using up system resources to the point that ``up`` and
   ``in`` OSDs become unavailable or otherwise slow.
260
261
262Networking Issues
263-----------------
264
265Ceph is a distributed storage system, so it depends upon networks to peer with
266OSDs, replicate objects, recover from faults and check heartbeats. Networking
267issues can cause OSD latency and flapping OSDs. See `Flapping OSDs`_ for
268details.
269
270Ensure that Ceph processes and Ceph-dependent processes are connected and/or
271listening. ::
272
273 netstat -a | grep ceph
274 netstat -l | grep ceph
275 sudo netstat -p | grep ceph
276
277Check network statistics. ::
278
279 netstat -s
280
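It can also help to verify basic connectivity and MTU settings between OSD
hosts, because mismatched MTUs on the cluster network are a common source of
heartbeat problems. A minimal sketch (``{osd-host}`` is a placeholder for a
peer OSD host; the large-packet test assumes a 9000-byte MTU)::

    ping -c 3 {osd-host}
    ping -M do -s 8972 -c 3 {osd-host}   # send non-fragmenting jumbo-sized packets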
281
282Drive Configuration
283-------------------
284
285A storage drive should only support one OSD. Sequential read and sequential
286write throughput can bottleneck if other processes share the drive, including
287journals, operating systems, monitors, other OSDs and non-Ceph processes.
288
289Ceph acknowledges writes *after* journaling, so fast SSDs are an
290attractive option to accelerate the response time--particularly when
291using the ``XFS`` or ``ext4`` filesystems. By contrast, the ``btrfs``
292filesystem can write and journal simultaneously. (Note, however, that
293we recommend against using ``btrfs`` for production deployments.)
294
295.. note:: Partitioning a drive does not change its total throughput or
296 sequential read/write limits. Running a journal in a separate partition
297 may help, but you should prefer a separate physical drive.
298
299
300Bad Sectors / Fragmented Disk
301-----------------------------
302
303Check your disks for bad sectors and fragmentation. This can cause total throughput
304to drop substantially.
305
306
307Co-resident Monitors/OSDs
308-------------------------
309
310Monitors are generally light-weight processes, but they do lots of ``fsync()``,
311which can interfere with other workloads, particularly if monitors run on the
312same drive as your OSDs. Additionally, if you run monitors on the same host as
313the OSDs, you may incur performance issues related to:
314
315- Running an older kernel (pre-3.0)
316- Running Argonaut with an old ``glibc``
317- Running a kernel with no syncfs(2) syscall.
318
In these cases, multiple OSDs running on the same host can drag each other down
by doing lots of commits. That often leads to bursty writes.
321
322
323Co-resident Processes
324---------------------
325
326Spinning up co-resident processes such as a cloud-based solution, virtual
327machines and other applications that write data to Ceph while operating on the
328same hardware as OSDs can introduce significant OSD latency. Generally, we
329recommend optimizing a host for use with Ceph and using other hosts for other
330processes. The practice of separating Ceph operations from other applications
331may help improve performance and may streamline troubleshooting and maintenance.
332
333
334Logging Levels
335--------------
336
337If you turned logging levels up to track an issue and then forgot to turn
338logging levels back down, the OSD may be putting a lot of logs onto the disk. If
339you intend to keep logging levels high, you may consider mounting a drive to the
340default path for logging (i.e., ``/var/log/ceph/$cluster-$name.log``).
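
To turn verbose logging back down at runtime without restarting daemons, one
option is to inject lower debug levels. The subsystems and levels below are
only an example; adjust them to whatever you previously raised::

    ceph tell osd.* injectargs '--debug-osd 1/5 --debug-ms 0/5'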
341
342
343Recovery Throttling
344-------------------
345
346Depending upon your configuration, Ceph may reduce recovery rates to maintain
347performance or it may increase recovery rates to the point that recovery
348impacts OSD performance. Check to see if the OSD is recovering.
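
Recovery and backfill activity is visible in the cluster status and the
per-pool statistics::

    ceph -s
    ceph osd pool stats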
349
350
351Kernel Version
352--------------
353
354Check the kernel version you are running. Older kernels may not receive
355new backports that Ceph depends upon for better performance.
356
357
358Kernel Issues with SyncFS
359-------------------------
360
361Try running one OSD per host to see if performance improves. Old kernels
362might not have a recent enough version of ``glibc`` to support ``syncfs(2)``.
363
364
365Filesystem Issues
366-----------------
367
368Currently, we recommend deploying clusters with XFS.
369
370We recommend against using btrfs or ext4. The btrfs filesystem has
371many attractive features, but bugs in the filesystem may lead to
performance issues and spurious ENOSPC errors. We do not recommend
373ext4 because xattr size limitations break our support for long object
374names (needed for RGW).
375
376For more information, see `Filesystem Recommendations`_.
377
378.. _Filesystem Recommendations: ../configuration/filesystem-recommendations
379
380
381Insufficient RAM
382----------------
383
384We recommend 1GB of RAM per OSD daemon. You may notice that during normal
385operations, the OSD only uses a fraction of that amount (e.g., 100-200MB).
386Unused RAM makes it tempting to use the excess RAM for co-resident applications,
387VMs and so forth. However, when OSDs go into recovery mode, their memory
388utilization spikes. If there is no RAM available, the OSD performance will slow
389considerably.
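
A simple way to see how much memory each OSD daemon is consuming on a host
(ordinary process accounting, not a Ceph-specific tool)::

    ps -o pid,rss,vsz,cmd -C ceph-osd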
390
391
392Old Requests or Slow Requests
393-----------------------------
394
395If a ``ceph-osd`` daemon is slow to respond to a request, it will generate log messages
396complaining about requests that are taking too long. The warning threshold
397defaults to 30 seconds, and is configurable via the ``osd op complaint time``
398option. When this happens, the cluster log will receive messages.
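
You can confirm the threshold a given OSD is using via the admin socket
(``osd.0`` here is only an example)::

    ceph daemon osd.0 config get osd_op_complaint_time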
399
Legacy versions of Ceph complain about 'old requests'::
401
402 osd.0 192.168.106.220:6800/18813 312 : [WRN] old request osd_op(client.5099.0:790 fatty_26485_object789 [write 0~4096] 2.5e54f643) v4 received at 2012-03-06 15:42:56.054801 currently waiting for sub ops
403
New versions of Ceph complain about 'slow requests'::
405
406 {date} {osd.num} [WRN] 1 slow requests, 1 included below; oldest blocked for > 30.005692 secs
407 {date} {osd.num} [WRN] slow request 30.005692 seconds old, received at {date-time}: osd_op(client.4240.0:8 benchmark_data_ceph-1_39426_object7 [write 0~4194304] 0.69848840) v4 currently waiting for subops from [610]
408
409
410Possible causes include:
411
412- A bad drive (check ``dmesg`` output)
- A bug in the kernel file system (check ``dmesg`` output)
414- An overloaded cluster (check system load, iostat, etc.)
415- A bug in the ``ceph-osd`` daemon.
416
Possible solutions:

- Remove VMs or other cloud-solution processes from Ceph hosts
- Upgrade the kernel
- Upgrade Ceph
- Restart OSDs
423
424Debugging Slow Requests
425-----------------------
426
If you run ``ceph daemon osd.<id> dump_historic_ops`` or ``dump_ops_in_flight``,
428you will see a set of operations and a list of events each operation went
429through. These are briefly described below.
430
431Events from the Messenger layer:
432
433- header_read: when the messenger first started reading the message off the wire
434- throttled: when the messenger tried to acquire memory throttle space to read
435 the message into memory
436- all_read: when the messenger finished reading the message off the wire
437- dispatched: when the messenger gave the message to the OSD
- Initiated: this is identical to header_read; the existence of both is a
  historical oddity
440
Events from the OSD as it prepares operations:
442
443- queued_for_pg: the op has been put into the queue for processing by its PG
444- reached_pg: the PG has started doing the op
- waiting for \*: the op is waiting for some other work to complete before it
446 can proceed (a new OSDMap; for its object target to scrub; for the PG to
447 finish peering; all as specified in the message)
448- started: the op has been accepted as something the OSD should actually do
449 (reasons not to do it: failed security/permission checks; out-of-date local
450 state; etc) and is now actually being performed
451- waiting for subops from: the op has been sent to replica OSDs
452
Events from the FileStore:
454
455- commit_queued_for_journal_write: the op has been given to the FileStore
456- write_thread_in_journal_buffer: the op is in the journal's buffer and waiting
457 to be persisted (as the next disk write)
458- journaled_completion_queued: the op was journaled to disk and its callback
459 queued for invocation
460
Events from the OSD after the op has been handed to the local disk:
462
463- op_commit: the op has been committed (ie, written to journal) by the
464 primary OSD
465- op_applied: The op has been write()'en to the backing FS (ie, applied in
466 memory but not flushed out to disk) on the primary
467- sub_op_applied: op_applied, but for a replica's "subop"
- sub_op_committed: op_committed, but for a replica's subop (only for EC pools)
469- sub_op_commit_rec/sub_op_apply_rec from <X>: the primary marks this when it
470 hears about the above, but for a particular replica <X>
471- commit_sent: we sent a reply back to the client (or primary OSD, for sub ops)
472
473Many of these events are seemingly redundant, but cross important boundaries in
474the internal code (such as passing data across locks into new threads).
475
476Flapping OSDs
477=============
478
479We recommend using both a public (front-end) network and a cluster (back-end)
480network so that you can better meet the capacity requirements of object
481replication. Another advantage is that you can run a cluster network such that
482it isn't connected to the internet, thereby preventing some denial of service
483attacks. When OSDs peer and check heartbeats, they use the cluster (back-end)
484network when it's available. See `Monitor/OSD Interaction`_ for details.
485
486However, if the cluster (back-end) network fails or develops significant latency
487while the public (front-end) network operates optimally, OSDs currently do not
488handle this situation well. What happens is that OSDs mark each other ``down``
489on the monitor, while marking themselves ``up``. We call this scenario
'flapping'.
491
492If something is causing OSDs to 'flap' (repeatedly getting marked ``down`` and
493then ``up`` again), you can force the monitors to stop the flapping with::
494
495 ceph osd set noup # prevent OSDs from getting marked up
496 ceph osd set nodown # prevent OSDs from getting marked down
497
498These flags are recorded in the osdmap structure::
499
500 ceph osd dump | grep flags
501 flags no-up,no-down
502
503You can clear the flags with::
504
505 ceph osd unset noup
506 ceph osd unset nodown
507
Two other flags are supported, ``noin`` and ``noout``, which prevent
booting OSDs from being marked ``in`` (i.e., having data allocated to them)
or protect OSDs from eventually being marked ``out`` (regardless of what the
current value for ``mon osd down out interval`` is).
512
513.. note:: ``noup``, ``noout``, and ``nodown`` are temporary in the
514 sense that once the flags are cleared, the action they were blocking
515 should occur shortly after. The ``noin`` flag, on the other hand,
516 prevents OSDs from being marked ``in`` on boot, and any daemons that
517 started while the flag was set will remain that way.
518
519
520
521
522
523
524.. _iostat: http://en.wikipedia.org/wiki/Iostat
525.. _Ceph Logging and Debugging: ../../configuration/ceph-conf#ceph-logging-and-debugging
526.. _Logging and Debugging: ../log-and-debug
527.. _Debugging and Logging: ../debug
528.. _Monitor/OSD Interaction: ../../configuration/mon-osd-interaction
529.. _Monitor Config Reference: ../../configuration/mon-config-ref
530.. _monitoring your OSDs: ../../operations/monitoring-osd-pg
531.. _subscribe to the ceph-devel email list: mailto:majordomo@vger.kernel.org?body=subscribe+ceph-devel
532.. _unsubscribe from the ceph-devel email list: mailto:majordomo@vger.kernel.org?body=unsubscribe+ceph-devel
533.. _subscribe to the ceph-users email list: mailto:ceph-users-join@lists.ceph.com
534.. _unsubscribe from the ceph-users email list: mailto:ceph-users-leave@lists.ceph.com
535.. _OS recommendations: ../../../start/os-recommendations
536.. _ceph-devel: ceph-devel@vger.kernel.org