5 Before troubleshooting your OSDs, check your monitors and network first. If
6 you execute ``ceph health`` or ``ceph -s`` on the command line and Ceph returns
7 a health status, it means that the monitors have a quorum.
8 If you don't have a monitor quorum or if there are errors with the monitor
9 status, `address the monitor issues first <../troubleshooting-mon>`_.
10 Check your networks to ensure they
11 are running properly, because networks may have a significant impact on OSD
12 operation and performance.
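
For example, a quick first check from a node with an admin keyring might look
like this (a minimal sketch; your cluster name and keyring locations may
differ)::

    ceph health
    ceph -s
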
16 Obtaining Data About OSDs
17 =========================
19 A good first step in troubleshooting your OSDs is to obtain information in
20 addition to the information you collected while `monitoring your OSDs`_
21 (e.g., ``ceph osd tree``).
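
For example (a quick sketch; output will vary by cluster), the following
commands summarize OSD status, the CRUSH hierarchy and the OSD map::

    ceph osd stat
    ceph osd tree
    ceph osd dump
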
If you haven't changed the default path, you can find Ceph log files at
``/var/log/ceph``.
If you don't get enough log detail, you can change your logging level. See
`Logging and Debugging`_ for details, including how to ensure that Ceph
performs adequately under high logging volume.
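
For instance, one common way to raise the logging level for a single OSD at
runtime is ``injectargs`` (``osd.0`` and the debug levels below are only
examples; see `Logging and Debugging`_ for the available subsystems)::

    ceph tell osd.0 injectargs '--debug-osd 20 --debug-ms 1'
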
40 Use the admin socket tool to retrieve runtime information. For details, list
the sockets for your Ceph processes::

    ls /var/run/ceph
45 Then, execute the following, replacing ``{daemon-name}`` with an actual
46 daemon (e.g., ``osd.0``)::
48 ceph daemon osd.0 help
50 Alternatively, you can specify a ``{socket-file}`` (e.g., something in ``/var/run/ceph``)::
52 ceph daemon {socket-file} help
55 The admin socket, among other things, allows you to:
57 - List your configuration at runtime
58 - Dump historic operations
59 - Dump the operation priority queue state
60 - Dump operations in flight
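
For example, against a running OSD these might look like the following
(``osd.0`` is just a placeholder)::

    ceph daemon osd.0 config show
    ceph daemon osd.0 dump_historic_ops
    ceph daemon osd.0 dump_ops_in_flight
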
Filesystem issues may arise. To display your filesystem's free space, execute
``df``. ::

    df -h
72 Execute ``df --help`` for additional usage.
Use `iostat`_ to identify I/O-related issues. ::

    iostat -x
86 To retrieve diagnostic messages, use ``dmesg`` with ``less``, ``more``, ``grep``
or ``tail``. For example::

    dmesg | grep scsi
92 Stopping w/out Rebalancing
93 ==========================
95 Periodically, you may need to perform maintenance on a subset of your cluster,
96 or resolve a problem that affects a failure domain (e.g., a rack). If you do not
97 want CRUSH to automatically rebalance the cluster as you stop OSDs for
maintenance, set the cluster to ``noout`` first::

    ceph osd set noout
102 Once the cluster is set to ``noout``, you can begin stopping the OSDs within the
103 failure domain that requires maintenance work. ::
105 stop ceph-osd id={num}
107 .. note:: Placement groups within the OSDs you stop will become ``degraded``
   while you are addressing issues within the failure domain.
110 Once you have completed your maintenance, restart the OSDs. ::
112 start ceph-osd id={num}
Finally, you must unset the cluster from ``noout``. ::

    ceph osd unset noout
OSD Not Running
===============

Under normal circumstances, simply restarting the ``ceph-osd`` daemon will
126 allow it to rejoin the cluster and recover.
An OSD Won't Start
------------------

If you start your cluster and an OSD won't start, check the following:
- **Configuration File:** If you were not able to get OSDs running from
  a new installation, check your configuration file to ensure it uses the
  expected settings and option names (e.g., ``host``, not ``hostname``).
137 - **Check Paths:** Check the paths in your configuration, and the actual
138 paths themselves for data and journals. If you separate the OSD data from
139 the journal data and there are errors in your configuration file or in the
140 actual mounts, you may have trouble starting OSDs. If you want to store the
141 journal on a block device, you should partition your journal disk and assign
142 one partition per OSD.
- **Check Max Threadcount:** If you have a node with a lot of OSDs, you may be
  hitting the default maximum number of threads (usually 32k), especially
  during recovery. You can use ``sysctl`` to check whether raising the maximum
  number of threads to the largest allowed value (i.e., 4194303) helps. For
  example::
150 sysctl -w kernel.pid_max=4194303
152 If increasing the maximum thread count resolves the issue, you can make it
153 permanent by including a ``kernel.pid_max`` setting in the
154 ``/etc/sysctl.conf`` file. For example::
156 kernel.pid_max = 4194303
158 - **Kernel Version:** Identify the kernel version and distribution you
159 are using. Ceph uses some third party tools by default, which may be
160 buggy or may conflict with certain distributions and/or kernel
161 versions (e.g., Google perftools). Check the `OS recommendations`_
162 to ensure you have addressed any issues related to your kernel.
- **Segment Fault:** If there is a segmentation fault, turn your logging up
  (if it isn't already) and try again. If it segfaults again,
  contact the ceph-devel email list and provide your Ceph configuration
  file, your monitor output and the contents of your log file(s).
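
If none of the checks above reveal the problem, it can help to run the failing
daemon in the foreground so that startup errors go straight to your terminal
(a rough sketch; ``0`` is a placeholder OSD id and your deployment may require
additional options such as ``--cluster``)::

    ceph-osd -i 0 -d
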
An OSD Failed
-------------

When a ``ceph-osd`` process dies, the monitor will learn about the failure
from surviving ``ceph-osd`` daemons and report it via the ``ceph health``
command::

    ceph health
    HEALTH_WARN 1/3 in osds are down
181 Specifically, you will get a warning whenever there are ``ceph-osd``
182 processes that are marked ``in`` and ``down``. You can identify which
``ceph-osds`` are ``down`` with::

    ceph health detail
    HEALTH_WARN 1/3 in osds are down
    osd.0 is down since epoch 23, last address 192.168.106.220:6800/11080
If there is a drive failure or other fault preventing ``ceph-osd`` from
functioning or restarting, an error message should be present in its log file
in ``/var/log/ceph``.
194 If the daemon stopped because of a heartbeat failure, the underlying
195 kernel file system may be unresponsive. Check ``dmesg`` output for disk
196 or other kernel errors.
198 If the problem is a software error (failed assertion or other
199 unexpected error), it should be reported to the `ceph-devel`_ email list.
No Free Drive Space
===================

Ceph prevents you from writing to a full OSD so that you don't lose data.
206 In an operational cluster, you should receive a warning when your cluster
is getting near its full ratio. The ``mon osd full ratio`` defaults to
``0.95``, or 95% of capacity, at which point it stops clients from writing data.
The ``mon osd backfillfull ratio`` defaults to ``0.90``, or 90% of
capacity, at which point it blocks backfills from starting. The
``mon osd nearfull ratio`` defaults to ``0.85``, or 85% of capacity,
at which point it generates a health warning.
214 Full cluster issues usually arise when testing how Ceph handles an OSD
215 failure on a small cluster. When one node has a high percentage of the
cluster's data, the cluster can exceed its nearfull and full ratios almost
immediately. If you are testing how Ceph reacts to OSD failures on a small
218 cluster, you should leave ample free disk space and consider temporarily
219 lowering the ``mon osd full ratio``, ``mon osd backfillfull ratio`` and
220 ``mon osd nearfull ratio``.
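
As an illustration only (the values below are deliberately low and intended
for a throwaway test cluster, not for production), the ratios can be lowered
in the ``[global]`` section of ``ceph.conf``::

    [global]
        mon osd full ratio = .80
        mon osd backfillfull ratio = .75
        mon osd nearfull ratio = .70
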
222 Full ``ceph-osds`` will be reported by ``ceph health``::
225 HEALTH_WARN 1 nearfull osd(s)
Or::

    HEALTH_ERR 1 full osd(s); 1 backfillfull osd(s); 1 nearfull osd(s)
    osd.4 is backfill full at 91%
    osd.2 is near full at 87%
235 The best way to deal with a full cluster is to add new ``ceph-osds``, allowing
236 the cluster to redistribute data to the newly available storage.
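
To see how close individual OSDs are to their ratios before and after adding
capacity, you can check overall and per-OSD utilization (``ceph osd df`` is
available in recent releases)::

    ceph df
    ceph osd df
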
If you cannot start an OSD because it is full, you may free up space by deleting
some placement group directories on the full OSD.
241 .. important:: If you choose to delete a placement group directory on a full OSD,
242 **DO NOT** delete the same placement group directory on another full OSD, or
   **YOU MAY LOSE DATA**. You **MUST** maintain at least one copy of your data on
   at least one OSD.
246 See `Monitor Config Reference`_ for additional details.
249 OSDs are Slow/Unresponsive
250 ==========================
252 A commonly recurring issue involves slow or unresponsive OSDs. Ensure that you
253 have eliminated other troubleshooting possibilities before delving into OSD
performance issues. For example, ensure that your network(s) are working properly
and your OSDs are running. Check to see if OSDs are throttling recovery traffic.
.. tip:: Newer versions of Ceph provide better recovery handling by preventing
   recovering OSDs from using up system resources to the point that ``up`` and
   ``in`` OSDs become unavailable or otherwise slow.
Networking Issues
-----------------

Ceph is a distributed storage system, so it depends upon networks to peer with
266 OSDs, replicate objects, recover from faults and check heartbeats. Networking
issues can cause OSD latency and flapping OSDs. See `Flapping OSDs`_ for
details.
Ensure that Ceph processes and Ceph-dependent processes are connected and/or
listening. ::
273 netstat -a | grep ceph
274 netstat -l | grep ceph
275 sudo netstat -p | grep ceph
Check network statistics. ::

    netstat -s
Drive Configuration
-------------------

A storage drive should only support one OSD. Sequential read and sequential
286 write throughput can bottleneck if other processes share the drive, including
287 journals, operating systems, monitors, other OSDs and non-Ceph processes.
289 Ceph acknowledges writes *after* journaling, so fast SSDs are an
290 attractive option to accelerate the response time--particularly when
291 using the ``XFS`` or ``ext4`` filesystems. By contrast, the ``btrfs``
292 filesystem can write and journal simultaneously. (Note, however, that
293 we recommend against using ``btrfs`` for production deployments.)
295 .. note:: Partitioning a drive does not change its total throughput or
296 sequential read/write limits. Running a journal in a separate partition
297 may help, but you should prefer a separate physical drive.
300 Bad Sectors / Fragmented Disk
301 -----------------------------
303 Check your disks for bad sectors and fragmentation. This can cause total throughput
304 to drop substantially.
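
For example, on a drive with SMART support you might look for reallocated or
pending sectors with ``smartctl`` (from the ``smartmontools`` package;
``/dev/sdb`` is only a placeholder device)::

    sudo smartctl -a /dev/sdb
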
307 Co-resident Monitors/OSDs
308 -------------------------
310 Monitors are generally light-weight processes, but they do lots of ``fsync()``,
311 which can interfere with other workloads, particularly if monitors run on the
312 same drive as your OSDs. Additionally, if you run monitors on the same host as
313 the OSDs, you may incur performance issues related to:
315 - Running an older kernel (pre-3.0)
316 - Running Argonaut with an old ``glibc``
- Running a kernel with no ``syncfs(2)`` syscall
319 In these cases, multiple OSDs running on the same host can drag each other down
by doing lots of commits. That often leads to bursty writes.
323 Co-resident Processes
324 ---------------------
Spinning up co-resident processes, such as cloud-based solutions, virtual
machines and other applications that write data to Ceph while operating on the
same hardware as OSDs, can introduce significant OSD latency. Generally, we
329 recommend optimizing a host for use with Ceph and using other hosts for other
330 processes. The practice of separating Ceph operations from other applications
331 may help improve performance and may streamline troubleshooting and maintenance.
Logging Levels
--------------

If you turned logging levels up to track an issue and then forgot to turn
338 logging levels back down, the OSD may be putting a lot of logs onto the disk. If
339 you intend to keep logging levels high, you may consider mounting a drive to the
340 default path for logging (i.e., ``/var/log/ceph/$cluster-$name.log``).
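
For example, to drop OSD debug logging back toward its defaults at runtime
without restarting daemons (the levels shown are the usual defaults, but
verify them against your own configuration)::

    ceph tell osd.* injectargs '--debug-osd 1/5 --debug-ms 0/5'
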
Recovery Throttling
-------------------

Depending upon your configuration, Ceph may reduce recovery rates to maintain
347 performance or it may increase recovery rates to the point that recovery
348 impacts OSD performance. Check to see if the OSD is recovering.
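
As a hedged example (these are the standard recovery-throttling options; the
values shown are conservative, not prescriptive), you can check recovery
activity and temporarily dial it down at runtime::

    ceph -s
    ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'
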
Kernel Version
--------------

Check the kernel version you are running. Older kernels may not receive
355 new backports that Ceph depends upon for better performance.
358 Kernel Issues with SyncFS
359 -------------------------
361 Try running one OSD per host to see if performance improves. Old kernels
362 might not have a recent enough version of ``glibc`` to support ``syncfs(2)``.
Filesystem Issues
-----------------

Currently, we recommend deploying clusters with XFS.
370 We recommend against using btrfs or ext4. The btrfs filesystem has
371 many attractive features, but bugs in the filesystem may lead to
performance issues and spurious ENOSPC errors. We do not recommend
373 ext4 because xattr size limitations break our support for long object
374 names (needed for RGW).
376 For more information, see `Filesystem Recommendations`_.
378 .. _Filesystem Recommendations: ../configuration/filesystem-recommendations
Insufficient RAM
----------------

We recommend 1GB of RAM per OSD daemon. You may notice that during normal
385 operations, the OSD only uses a fraction of that amount (e.g., 100-200MB).
386 Unused RAM makes it tempting to use the excess RAM for co-resident applications,
387 VMs and so forth. However, when OSDs go into recovery mode, their memory
utilization spikes. If there is no RAM available, OSD performance will slow
considerably.
392 Old Requests or Slow Requests
393 -----------------------------
395 If a ``ceph-osd`` daemon is slow to respond to a request, it will generate log messages
396 complaining about requests that are taking too long. The warning threshold
397 defaults to 30 seconds, and is configurable via the ``osd op complaint time``
398 option. When this happens, the cluster log will receive messages.
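
For example, if you want to experiment with a higher complaint threshold while
you investigate (45 seconds here is an arbitrary value, not a recommendation),
you can change it at runtime::

    ceph tell osd.* injectargs '--osd-op-complaint-time 45'
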
Legacy versions of Ceph complain about ``old requests``::
402 osd.0 192.168.106.220:6800/18813 312 : [WRN] old request osd_op(client.5099.0:790 fatty_26485_object789 [write 0~4096] 2.5e54f643) v4 received at 2012-03-06 15:42:56.054801 currently waiting for sub ops
New versions of Ceph complain about ``slow requests``::
406 {date} {osd.num} [WRN] 1 slow requests, 1 included below; oldest blocked for > 30.005692 secs
407 {date} {osd.num} [WRN] slow request 30.005692 seconds old, received at {date-time}: osd_op(client.4240.0:8 benchmark_data_ceph-1_39426_object7 [write 0~4194304] 0.69848840) v4 currently waiting for subops from [610]
410 Possible causes include:
412 - A bad drive (check ``dmesg`` output)
- A bug in the kernel file system (check ``dmesg`` output)
414 - An overloaded cluster (check system load, iostat, etc.)
415 - A bug in the ``ceph-osd`` daemon.
Possible solutions:

- Remove VMs and cloud solutions from Ceph hosts
424 Debugging Slow Requests
425 -----------------------
If you run ``ceph daemon osd.<id> dump_historic_ops`` or ``dump_ops_in_flight``,
428 you will see a set of operations and a list of events each operation went
429 through. These are briefly described below.
431 Events from the Messenger layer:
433 - header_read: when the messenger first started reading the message off the wire
434 - throttled: when the messenger tried to acquire memory throttle space to read
435 the message into memory
436 - all_read: when the messenger finished reading the message off the wire
437 - dispatched: when the messenger gave the message to the OSD
- Initiated: <This is identical to header_read. The existence of both is a
  historical oddity.>
Events from the OSD as it prepares operations:
443 - queued_for_pg: the op has been put into the queue for processing by its PG
444 - reached_pg: the PG has started doing the op
445 - waiting for \*: the op is waiting for some other work to complete before it
446 can proceed (a new OSDMap; for its object target to scrub; for the PG to
447 finish peering; all as specified in the message)
448 - started: the op has been accepted as something the OSD should actually do
449 (reasons not to do it: failed security/permission checks; out-of-date local
450 state; etc) and is now actually being performed
451 - waiting for subops from: the op has been sent to replica OSDs
Events from the FileStore:
455 - commit_queued_for_journal_write: the op has been given to the FileStore
456 - write_thread_in_journal_buffer: the op is in the journal's buffer and waiting
457 to be persisted (as the next disk write)
458 - journaled_completion_queued: the op was journaled to disk and its callback
459 queued for invocation
Events from the OSD after stuff has been given to local disk:
- op_commit: the op has been committed (i.e., written to journal) by the
  primary OSD
- op_applied: the op has been write()'en to the backing FS (i.e., applied in
  memory but not flushed out to disk) on the primary
467 - sub_op_applied: op_applied, but for a replica's "subop"
- sub_op_committed: op_commit, but for a replica's subop (only for EC pools)
469 - sub_op_commit_rec/sub_op_apply_rec from <X>: the primary marks this when it
470 hears about the above, but for a particular replica <X>
471 - commit_sent: we sent a reply back to the client (or primary OSD, for sub ops)
473 Many of these events are seemingly redundant, but cross important boundaries in
474 the internal code (such as passing data across locks into new threads).
Flapping OSDs
=============

We recommend using both a public (front-end) network and a cluster (back-end)
480 network so that you can better meet the capacity requirements of object
481 replication. Another advantage is that you can run a cluster network such that
482 it isn't connected to the internet, thereby preventing some denial of service
483 attacks. When OSDs peer and check heartbeats, they use the cluster (back-end)
484 network when it's available. See `Monitor/OSD Interaction`_ for details.
486 However, if the cluster (back-end) network fails or develops significant latency
487 while the public (front-end) network operates optimally, OSDs currently do not
488 handle this situation well. What happens is that OSDs mark each other ``down``
on the monitor, while marking themselves ``up``. We call this scenario
'flapping'.
492 If something is causing OSDs to 'flap' (repeatedly getting marked ``down`` and
493 then ``up`` again), you can force the monitors to stop the flapping with::
495 ceph osd set noup # prevent OSDs from getting marked up
496 ceph osd set nodown # prevent OSDs from getting marked down
498 These flags are recorded in the osdmap structure::
500 ceph osd dump | grep flags
503 You can clear the flags with::
    ceph osd unset noup
    ceph osd unset nodown
508 Two other flags are supported, ``noin`` and ``noout``, which prevent
509 booting OSDs from being marked ``in`` (allocated data) or protect OSDs
510 from eventually being marked ``out`` (regardless of what the current value for
511 ``mon osd down out interval`` is).
513 .. note:: ``noup``, ``noout``, and ``nodown`` are temporary in the
514 sense that once the flags are cleared, the action they were blocking
515 should occur shortly after. The ``noin`` flag, on the other hand,
516 prevents OSDs from being marked ``in`` on boot, and any daemons that
517 started while the flag was set will remain that way.
524 .. _iostat: http://en.wikipedia.org/wiki/Iostat
525 .. _Ceph Logging and Debugging: ../../configuration/ceph-conf#ceph-logging-and-debugging
526 .. _Logging and Debugging: ../log-and-debug
527 .. _Debugging and Logging: ../debug
528 .. _Monitor/OSD Interaction: ../../configuration/mon-osd-interaction
529 .. _Monitor Config Reference: ../../configuration/mon-config-ref
530 .. _monitoring your OSDs: ../../operations/monitoring-osd-pg
531 .. _subscribe to the ceph-devel email list: mailto:majordomo@vger.kernel.org?body=subscribe+ceph-devel
532 .. _unsubscribe from the ceph-devel email list: mailto:majordomo@vger.kernel.org?body=unsubscribe+ceph-devel
533 .. _subscribe to the ceph-users email list: mailto:ceph-users-join@lists.ceph.com
534 .. _unsubscribe from the ceph-users email list: mailto:ceph-users-leave@lists.ceph.com
535 .. _OS recommendations: ../../../start/os-recommendations
536 .. _ceph-devel: ceph-devel@vger.kernel.org