authentication data structure similar to a Kerberos ticket that contains a
session key for use in obtaining Ceph services. This session key is itself
encrypted with the user's permanent secret key, so that only the user can
request services from the Ceph Monitor(s). The client then uses the session key
to request its desired services from the monitor, and the monitor provides the
client with a ticket that will authenticate the client to the OSDs that actually
handle data. Ceph Monitors and OSDs share a secret, so the client can use the
ticket provided by the monitor with any OSD or metadata server in the cluster.
Like Kerberos, ``cephx`` tickets expire, so an attacker cannot use an expired
ticket or session key obtained surreptitiously. This form of authentication
prevents attackers with access to the communications medium from creating
bogus messages under another user's identity or from altering another user's
legitimate messages, as long as the user's secret key is not divulged before
it expires.
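
As a concrete illustration, the following is a minimal sketch of a client
authenticating with ``cephx`` through the Python ``rados`` (librados)
bindings; the user name, configuration file path, and keyring path are
placeholders that depend on your deployment. The ticket exchange described
above happens inside ``connect()``.

.. code-block:: python

    import rados

    # Authenticate as a named Ceph user. librados performs the cephx
    # handshake with a Ceph Monitor using the secret key stored in the
    # keyring, obtains a session key and tickets, and reuses them for
    # subsequent requests to monitors and OSDs.
    cluster = rados.Rados(
        name='client.admin',                                   # placeholder user
        conffile='/etc/ceph/ceph.conf',                        # placeholder path
        conf={'keyring': '/etc/ceph/ceph.client.admin.keyring'},
    )
    cluster.connect()   # the cephx exchange happens here
    print("authenticated; cluster fsid:", cluster.get_fsid())
    cluster.shutdown()
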
Ceph eliminates the bottleneck: Ceph's OSD Daemons AND Ceph Clients are cluster
aware. Like Ceph clients, each Ceph OSD Daemon knows about other Ceph OSD
Daemons in the cluster. This enables Ceph OSD Daemons to interact directly with
other Ceph OSD Daemons and Ceph Monitors. Additionally, it enables Ceph Clients
to interact directly with Ceph OSD Daemons.
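
To see this cluster awareness from the client side, here is a hedged sketch
(again using the Python ``rados`` bindings) that asks the monitors where a
given object maps; the pool and object names are placeholders, and the JSON
field names assume the output of ``ceph osd map``. The returned acting set
names the OSDs that the client then contacts directly for I/O.

.. code-block:: python

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()

    # Ask the monitors where an object maps. The reply identifies the
    # placement group and the acting set of OSDs; the client performs
    # reads and writes against those OSDs directly, not through a proxy.
    cmd = json.dumps({'prefix': 'osd map', 'pool': 'mypool',
                      'object': 'hello-object', 'format': 'json'})
    ret, outbuf, outs = cluster.mon_command(cmd, b'')
    if ret == 0:
        mapping = json.loads(outbuf)
        print('pg:', mapping.get('pgid'),
              'up:', mapping.get('up'),
              'acting:', mapping.get('acting'))
    cluster.shutdown()
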
The ability of Ceph Clients, Ceph Monitors and Ceph OSD Daemons to interact with
each other means that Ceph OSD Daemons can use the CPU and RAM of the Ceph nodes
to perform tasks that would otherwise bog down a centralized server. This ability
to leverage the cluster's computing power leads to several major benefits:
Ceph Client requests. If a Ceph OSD Daemon is ``down`` and ``in`` the Ceph
Storage Cluster, this status may indicate the failure of the Ceph OSD
Daemon. If a Ceph OSD Daemon is not running (e.g., it crashes), the Ceph OSD
Daemon cannot notify the Ceph Monitor that it is ``down``. The OSDs
periodically send messages to the Ceph Monitor (``MPGStats`` before
Luminous, and a new ``MOSDBeacon`` in Luminous and later releases). If the
Ceph Monitor does not see that message after a configurable period of time,
it marks the OSD ``down``. This mechanism is a failsafe, however: normally,
Ceph OSD Daemons determine whether a neighboring OSD is ``down`` and report
it to the Ceph Monitor(s), which keeps the Ceph Monitors lightweight
processes. See `Monitoring OSDs`_ and `Heartbeats`_ for additional details;
a short sketch of checking OSD status this way follows this list.

#. **Data Scrubbing:** As part of maintaining data consistency and cleanliness,
Ceph OSD Daemons can scrub objects within placement groups. That is, Ceph
OSD Daemons can compare object metadata in one placement group with its
replicas in placement groups stored on other OSDs.
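
As referenced above, the following sketch (Python ``rados`` bindings, with
placeholder paths) reads the OSD map from the monitors and prints each
daemon's ``up``/``down`` and ``in``/``out`` state; the JSON field names
assume the output of ``ceph osd dump``.

.. code-block:: python

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()

    # Fetch the current OSD map and report each OSD's status as the
    # monitors see it (up/down and in/out).
    ret, outbuf, outs = cluster.mon_command(
        json.dumps({'prefix': 'osd dump', 'format': 'json'}), b'')
    if ret == 0:
        osd_map = json.loads(outbuf)
        for osd in osd_map.get('osds', []):
            state = 'up' if osd.get('up') else 'down'
            membership = 'in' if osd.get('in') else 'out'
            print('osd.{}: {}/{}'.format(osd['osd'], state, membership))
    cluster.shutdown()
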
Pools are logical partitions for storing objects.
Ceph Clients retrieve a `Cluster Map`_ from a Ceph Monitor, and write objects to
pools. The pool's ``size`` or number of replicas, the CRUSH rule and the
number of placement groups determine how Ceph will place the data.

.. ditaa::

              | To
              v
         +--------+           +---------------+
         |  Pool  |---------->|  CRUSH Rule   |
         +--------+  Selects  +---------------+

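For example, the following minimal sketch (Python ``rados`` bindings) writes
and reads back one object; the pool name ``mypool`` and the object name are
placeholders. The cluster map is retrieved when the client connects, and
CRUSH then places the object according to the pool's settings shown above.

.. code-block:: python

    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()                       # retrieves the cluster map

    # Write an object into a pool; placement is computed with CRUSH from
    # the pool's size, rule, and placement-group settings.
    ioctx = cluster.open_ioctx('mypool')    # placeholder pool name
    ioctx.write_full('hello-object', b'hello ceph')
    print(ioctx.read('hello-object'))
    ioctx.close()
    cluster.shutdown()
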
Pools set at least the following parameters:

- Ownership/Access to Objects
- The Number of Placement Groups, and
- The CRUSH Rule to Use.
See `Set Pool Values`_ for details.
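
As a rough sketch of setting those values programmatically (the equivalent
of ``ceph osd pool set``), the following uses ``mon_command`` from the
Python ``rados`` bindings; the pool name, the target values, and the exact
JSON argument names are assumptions based on the CLI form, so treat this as
illustrative rather than authoritative.

.. code-block:: python

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()

    def set_pool_value(pool, var, val):
        # Roughly equivalent to: ceph osd pool set <pool> <var> <val>
        cmd = json.dumps({'prefix': 'osd pool set', 'pool': pool,
                          'var': var, 'val': str(val)})
        ret, outbuf, outs = cluster.mon_command(cmd, b'')
        if ret != 0:
            raise RuntimeError(outs)

    set_pool_value('mypool', 'size', 3)                   # replica count
    set_pool_value('mypool', 'pg_num', 128)               # placement groups
    set_pool_value('mypool', 'crush_rule', 'replicated_rule')
    cluster.shutdown()
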
.. _Monitoring OSDs and PGs: ../rados/operations/monitoring-osd-pg
.. _Heartbeats: ../rados/configuration/mon-osd-interaction
.. _Monitoring OSDs: ../rados/operations/monitoring-osd-pg/#monitoring-osds
.. _CRUSH - Controlled, Scalable, Decentralized Placement of Replicated Data: https://ceph.com/wp-content/uploads/2016/08/weil-crush-sc06.pdf
.. _Data Scrubbing: ../rados/configuration/osd-config-ref#scrubbing
.. _Report Peering Failure: ../rados/configuration/mon-osd-interaction#osds-report-peering-failure
.. _Troubleshooting Peering Failure: ../rados/troubleshooting/troubleshooting-pg#placement-group-down-peering-failure