Application
   More properly called a :term:`client`, an application is any
   program external to Ceph that uses a Ceph Cluster to store and
   replicate data.

:ref:`BlueStore<rados_config_storage_devices_bluestore>`
   OSD BlueStore is a storage back end used by OSD daemons, and was
   designed specifically for use with Ceph. BlueStore was introduced
   in the Ceph Kraken release. The Luminous release of Ceph promoted
   BlueStore to the default OSD back end, supplanting FileStore. As
   of the Reef release, FileStore is no longer available as a storage
   back end.

   BlueStore stores objects directly on Ceph block devices without a
   mounted file system.

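   As a minimal illustration, the object store in use by a given OSD
   can be read from its metadata (the OSD id ``0`` here is
   hypothetical):

   .. code-block:: console

      $ ceph osd metadata 0 | grep osd_objectstore
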
Bucket
   In the context of :term:`RGW`, a bucket is a group of objects. In
   a filesystem-based analogy in which objects are the counterpart of
   files, buckets are the counterpart of directories. :ref:`Multisite
   sync policies<radosgw-multisite-sync-policy>` can be set on
   buckets to provide fine-grained control of data movement from one
   zone to another zone.

   The concept of the bucket has been taken from AWS S3. See also
   `the AWS S3 page on creating buckets
   <https://docs.aws.amazon.com/AmazonS3/latest/userguide/creating-buckets-s3.html>`_
   and `the AWS S3 'Buckets Overview' page
   <https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingBucket.html>`_.

   OpenStack Swift uses the term "containers" for what RGW and AWS
   call "buckets". See `the OpenStack Storage API overview page
   <https://docs.openstack.org/swift/latest/api/object_api_v1_overview.html>`_.

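   As a sketch, assuming ``s3cmd`` has already been configured with
   credentials for an RGW endpoint, a bucket can be created and
   listed like this (the bucket name is hypothetical):

   .. code-block:: console

      $ s3cmd mb s3://my-new-bucket
      $ s3cmd ls
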
Ceph
   Ceph is a distributed network storage and file system with
   distributed metadata management and POSIX semantics.

Ceph Block Device
   Also called "RADOS Block Device" and :term:`RBD`. A software
   instrument that orchestrates the storage of block-based data in
   Ceph. Ceph Block Device splits block-based application data into
   "chunks". RADOS stores these chunks as objects. Ceph Block Device
   orchestrates the storage of those objects across the Ceph Storage
   Cluster.

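   A minimal sketch, assuming a pool named ``mypool`` exists and has
   been initialized for RBD use (the pool and image names are
   hypothetical):

   .. code-block:: console

      $ rbd create --size 4096 mypool/myimage   # 4096 MB image
      $ rbd ls mypool
      $ rbd info mypool/myimage
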
Ceph Block Storage
   One of the three kinds of storage supported by Ceph (the other two
   are object storage and file storage). Ceph Block Storage is the
   block storage "product", which refers to block-storage-related
   services and capabilities when used in conjunction with the
   collection of (1) ``librbd`` (a Python module that provides
   file-like access to :term:`RBD` images), (2) a hypervisor such as
   QEMU or Xen, and (3) a hypervisor abstraction layer such as
   ``libvirt``.

Ceph Client
   Any of the Ceph components that can access a Ceph Storage Cluster.
   This includes the Ceph Object Gateway, the Ceph Block Device, the
   Ceph File System, and their corresponding libraries. It also
   includes kernel modules and FUSEs (Filesystems in Userspace).

Ceph Client Libraries
   The collection of libraries that can be used to interact with
   components of the Ceph Cluster.

Ceph Cluster Map
   See :term:`Cluster Map`.

Ceph Dashboard
   :ref:`The Ceph Dashboard<mgr-dashboard>` is a built-in web-based
   Ceph management and monitoring application through which you can
   inspect and administer various resources within the cluster. It is
   implemented as a :ref:`ceph-manager-daemon` module.

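   As a minimal sketch against a running cluster, the dashboard
   module can be enabled and its URL discovered like this:

   .. code-block:: console

      $ ceph mgr module enable dashboard
      $ ceph mgr services
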
Ceph File System
   See :term:`CephFS`.

:ref:`CephFS<ceph-file-system>`
   The **Ceph F**\ile **S**\ystem, or CephFS, is a POSIX-compliant
   file system built on top of Ceph’s distributed object store,
   RADOS. See :ref:`CephFS Architecture <arch-cephfs>` for more
   details.

Ceph Interim Release
   See :term:`Releases`.

Ceph Kernel Modules
   The collection of kernel modules that can be used to interact with
   the Ceph Cluster (for example: ``ceph.ko``, ``rbd.ko``).

:ref:`Ceph Manager<ceph-manager-daemon>`
   The Ceph Manager daemon (``ceph-mgr``) runs alongside monitor
   daemons to provide monitoring and interfacing to external
   monitoring and management systems. Since the Luminous release
   (12.x), no Ceph cluster functions properly unless it contains a
   running ``ceph-mgr`` daemon.

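   For example, the manager map and the set of enabled manager
   modules can be inspected with (a sketch against a running
   cluster):

   .. code-block:: console

      $ ceph mgr dump
      $ ceph mgr module ls
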
Ceph Manager Dashboard
   See :term:`Ceph Dashboard`.

Ceph Metadata Server
   See :term:`MDS`.

Ceph Monitor
   A daemon that maintains a map of the state of the cluster. This
   "cluster state" includes the monitor map, the manager map, the OSD
   map, and the CRUSH map. A Ceph cluster must contain a minimum of
   three running monitors in order to be both redundant and highly
   available. Ceph monitors and the nodes on which they run are often
   referred to as "mons". See :ref:`Monitor Config Reference
   <monitor-config-reference>`.

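   For example, the monitor map and the quorum status can be
   inspected with commands like the following (a sketch against a
   running cluster):

   .. code-block:: console

      $ ceph mon dump
      $ ceph quorum_status --format json-pretty
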
Ceph Node
   A Ceph node is a unit of the Ceph Cluster that communicates with
   other nodes in the Ceph Cluster in order to replicate and
   redistribute data. All of the nodes together are called the
   :term:`Ceph Storage Cluster`. Ceph nodes include :term:`OSD`\s,
   :term:`Ceph Monitor`\s, :term:`Ceph Manager`\s, and
   :term:`MDS`\es. The term "node" is usually equivalent to "host" in
   the Ceph documentation. If you have a running Ceph Cluster, you
   can list all of the nodes in it by running the command ``ceph node
   ls all``.

:ref:`Ceph Object Gateway<object-gateway>`
   An object storage interface built on top of ``librados``. The Ceph
   Object Gateway provides a RESTful gateway between applications and
   Ceph storage clusters.

Ceph Object Storage
   See :term:`Ceph Object Store`.

Ceph Object Store
   A Ceph Object Store consists of a :term:`Ceph Storage Cluster` and
   a :term:`Ceph Object Gateway` (RGW).

:ref:`Ceph OSD<rados_configuration_storage-devices_ceph_osd>`
   Ceph **O**\bject **S**\torage **D**\aemon. The Ceph OSD software,
   which interacts with logical disks (:term:`OSD`). Around 2013,
   there was an attempt by "research and industry" (Sage's own words)
   to insist on using the term "OSD" to mean only "Object Storage
   Device", but the Ceph community has always persisted in using the
   term to mean "Object Storage Daemon", and no less an authority
   than Sage Weil himself confirmed in November 2022 that "Daemon is
   more accurate for how Ceph is built" (private correspondence
   between Zac Dover and Sage Weil).

Ceph OSD Daemon
   See :term:`Ceph OSD`.

Ceph OSD Daemons
   See :term:`Ceph OSD`.

Ceph Platform
   All Ceph software, which includes any piece of code hosted at
   `https://github.com/ceph`_.

Ceph Point Release
   See :term:`Releases`.

Ceph Project
   The aggregate term for the people, software, mission, and
   infrastructure of Ceph.

Ceph Release
   See :term:`Releases`.

Ceph Release Candidate
   See :term:`Releases`.

Ceph Stable Release
   See :term:`Releases`.

Ceph Stack
   A collection of two or more components of Ceph.

:ref:`Ceph Storage Cluster<arch-ceph-storage-cluster>`
   The collection of :term:`Ceph Monitor`\s, :term:`Ceph Manager`\s,
   :term:`Ceph Metadata Server`\s, and :term:`OSD`\s that work
   together to store and replicate data for use by applications, Ceph
   Users, and :term:`Ceph Client`\s. Ceph Storage Clusters receive
   data from :term:`Ceph Client`\s.

CephX
   The Ceph authentication protocol. CephX authenticates users and
   daemons. CephX operates like Kerberos, but it has no single point
   of failure. See the :ref:`High-availability Authentication
   section<arch_high_availability_authentication>` of the
   Architecture document and the :ref:`CephX Configuration
   Reference<rados-cephx-config-ref>`.

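   As a sketch, a CephX user with read access to the monitors and
   read/write access to a hypothetical pool named ``mypool`` can be
   created like this:

   .. code-block:: console

      $ ceph auth get-or-create client.example \
            mon 'allow r' \
            osd 'allow rw pool=mypool'
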
Client
   A client is any program external to Ceph that uses a Ceph Cluster
   to store and replicate data.

Cloud Platforms
Cloud Stacks
   Third-party cloud provisioning platforms such as OpenStack,
   CloudStack, OpenNebula, and Proxmox VE.

Cluster Map
   The set of maps consisting of the monitor map, OSD map, PG map,
   MDS map, and CRUSH map, which together report the state of the
   Ceph cluster. See :ref:`the "Cluster Map" section of the
   Architecture document<architecture_cluster_map>` for details.

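   Each of these maps can be inspected individually; for example (a
   sketch against a running cluster):

   .. code-block:: console

      $ ceph mon dump        # monitor map
      $ ceph osd dump        # OSD map
      $ ceph osd crush dump  # CRUSH map
      $ ceph fs dump         # MDS map
      $ ceph pg dump         # PG map (verbose)
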
CRUSH
   **C**\ontrolled **R**\eplication **U**\nder **S**\calable
   **H**\ashing. The algorithm that Ceph uses to compute object
   storage locations.

CRUSH rule
   The CRUSH data placement rule that applies to a particular pool or
   set of pools.

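   The rules defined in a cluster can be listed and examined with the
   following commands (``replicated_rule`` is the name of the default
   rule):

   .. code-block:: console

      $ ceph osd crush rule ls
      $ ceph osd crush rule dump replicated_rule
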
DAS
   **D**\irect-\ **A**\ttached **S**\torage. Storage that is attached
   directly to the computer accessing it, without passing through a
   network. Contrast with NAS and SAN.

:ref:`Dashboard<mgr-dashboard>`
   A built-in web-based Ceph management and monitoring application
   used to administer various aspects and objects of the cluster. The
   dashboard is implemented as a Ceph Manager module. See
   :ref:`mgr-dashboard` for more details.

Dashboard Module
   Another name for :term:`Dashboard`.

FQDN
   **F**\ully **Q**\ualified **D**\omain **N**\ame. A domain name
   that is applied to a node in a network and that specifies the
   node's exact location in the tree hierarchy of the DNS.

   In the context of Ceph cluster administration, FQDNs are often
   applied to hosts. In this documentation, the term "FQDN" is used
   mostly to distinguish between FQDNs and simpler hostnames, which
   do not specify the exact location of the host in the tree
   hierarchy of the DNS but merely name the host.

Host
   Any single machine or server in a Ceph Cluster. See :term:`Ceph
   Node`.

Hybrid OSD
   Refers to an OSD that has both HDD and SSD drives.

LVM tags
   **L**\ogical **V**\olume **M**\anager tags. Extensible metadata
   for LVM volumes and groups. They are used to store Ceph-specific
   information about devices and their relationship with OSDs.

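   The tags that ``ceph-volume`` sets on the logical volumes backing
   OSDs can be inspected with standard LVM tooling, for example:

   .. code-block:: console

      $ sudo lvs -o lv_name,lv_tags
      $ sudo ceph-volume lvm list
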
:ref:`MDS<cephfs_add_remote_mds>`
   The Ceph **M**\eta\ **D**\ata **S**\erver daemon. Also referred to
   as "ceph-mds". The Ceph metadata server daemon must be running in
   any Ceph cluster that runs the CephFS file system. The MDS stores
   all file system metadata.

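   The state of the MDS daemons serving a CephFS file system can be
   checked with (a sketch against a running cluster):

   .. code-block:: console

      $ ceph fs status
      $ ceph mds stat
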
MGR
   The Ceph manager software, which collects all the state from the
   whole cluster in one place.

MON
   The Ceph monitor software.

Node
   See :term:`Ceph Node`.

Object Storage Device
   See :term:`OSD`.

OSD
   Probably :term:`Ceph OSD`, but not necessarily. Sometimes
   (especially in older correspondence, and especially in
   documentation that is not written specifically for Ceph), "OSD"
   means "**O**\bject **S**\torage **D**\evice", which refers to a
   physical or logical storage unit (for example: LUN). The Ceph
   community has always used the term "OSD" to refer to :term:`Ceph
   OSD Daemon` despite an industry push in the mid-2010s to insist
   that "OSD" should refer to "Object Storage Device", so it is
   important to know which meaning is intended.

OSD fsid
   This is a unique identifier used to identify an OSD. It is found
   in the OSD path in a file called ``osd_fsid``. The term ``fsid``
   is used interchangeably with ``uuid``.

OSD id
   The integer that defines an OSD. It is generated by the monitors
   during the creation of each OSD.

OSD uuid
   This is the unique identifier of an OSD. This term is used
   interchangeably with ``fsid``.

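   For example, the integer ids of all OSDs can be listed, and the
   uuid of each OSD appears in the OSD map dump (the id ``0`` below
   is hypothetical):

   .. code-block:: console

      $ ceph osd ls
      $ ceph osd dump | grep osd.0
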
Period
   In the context of :term:`RGW`, a period is the configuration state
   of the :term:`Realm`. The period stores the configuration state of
   a multi-site configuration. When the period is updated, its
   "epoch" changes.

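   The current period can be examined, and configuration changes
   committed to a new period, with ``radosgw-admin``:

   .. code-block:: console

      $ radosgw-admin period get
      $ radosgw-admin period update --commit
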
Placement Groups (PGs)
   Placement groups (PGs) are subsets of each logical Ceph pool.
   Placement groups perform the function of placing objects (as a
   group) into OSDs. Ceph manages data internally at placement-group
   granularity: this scales better than would managing individual
   (and therefore more numerous) RADOS objects. A cluster that has a
   larger number of placement groups (for example, 100 per OSD) is
   better balanced than an otherwise identical cluster with a smaller
   number of placement groups.

   Ceph's internal RADOS objects are each mapped to a specific
   placement group, and each placement group belongs to exactly one
   Ceph pool.

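   The mapping of an object to its placement group and OSDs can be
   shown with ``ceph osd map`` (the pool and object names here are
   hypothetical; the object need not exist for the mapping to be
   computed):

   .. code-block:: console

      $ ceph osd map mypool myobject
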
:ref:`Pool<rados_pools>`
   A pool is a logical partition used to store objects.

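   For example, a replicated pool can be created and listed as
   follows (the name ``mypool`` is hypothetical):

   .. code-block:: console

      $ ceph osd pool create mypool
      $ ceph osd pool ls detail
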
Pools
   See :term:`Pool`.

RADOS
   **R**\eliable **A**\utonomic **D**\istributed **O**\bject
   **S**\tore. RADOS is the object store that provides a scalable
   service for variably-sized objects. The RADOS object store is the
   core component of a Ceph cluster. `This blog post from 2009
   <https://ceph.io/en/news/blog/2009/the-rados-distributed-object-store/>`_
   provides a beginner's introduction to RADOS. Readers interested in
   a deeper understanding of RADOS are directed to `RADOS: A
   Scalable, Reliable Storage Service for Petabyte-scale Storage
   Clusters <https://ceph.io/assets/pdfs/weil-rados-pdsw07.pdf>`_.

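   Objects can be written to and read from RADOS directly with the
   ``rados`` CLI, bypassing the higher-level interfaces (the pool and
   object names here are hypothetical):

   .. code-block:: console

      $ rados -p mypool put greeting /etc/hostname
      $ rados -p mypool ls
      $ rados -p mypool get greeting /tmp/greeting
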
RADOS Cluster
   A proper subset of the Ceph Cluster consisting of :term:`OSD`\s,
   :term:`Ceph Monitor`\s, and :term:`Ceph Manager`\s.

RADOS Gateway
   See :term:`RGW`.

RBD
   **R**\ADOS **B**\lock **D**\evice. See :term:`Ceph Block Device`.

:ref:`Realm<rgw-realms>`
   In the context of RADOS Gateway (RGW), a realm is a globally
   unique namespace that consists of one or more zonegroups.

Releases

   Ceph Interim Release
      A version of Ceph that has not yet been put through quality
      assurance testing. May contain new features.

   Ceph Point Release
      Any ad hoc release that includes only bug fixes and security
      fixes.

   Ceph Release
      Any distinct numbered version of Ceph.

   Ceph Release Candidate
      A major version of Ceph that has undergone initial quality
      assurance testing and is ready for beta testers.

   Ceph Stable Release
      A major version of Ceph where all features from the preceding
      interim releases have been put through quality assurance
      testing successfully.

Reliable Autonomic Distributed Object Store
   The core set of storage software which stores the user's data
   (MON+OSD). See also :term:`RADOS`.

:ref:`RGW<object-gateway>`
   **R**\ADOS **G**\ate\ **w**\ay.

   Also called "Ceph Object Gateway". The component of Ceph that
   provides a gateway to both the Amazon S3 RESTful API and the
   OpenStack Swift API.

scrubbing
   The processes by which Ceph ensures data integrity. During the
   process of scrubbing, Ceph generates a catalog of all objects in a
   placement group, then ensures that none of the objects are missing
   or mismatched by comparing each primary object against its
   replicas, which are stored across other OSDs. Any PG that is
   determined to have a copy of an object that is different from the
   other copies or is missing entirely is marked "inconsistent" (that
   is, the PG is marked "inconsistent").

   There are two kinds of scrubbing: light scrubbing and deep
   scrubbing (also called "normal scrubbing" and "deep scrubbing",
   respectively). Light scrubbing is performed daily and does nothing
   more than confirm that a given object exists and that its metadata
   is correct. Deep scrubbing is performed weekly and reads the data
   and uses checksums to ensure data integrity.

   See :ref:`Scrubbing <rados_config_scrubbing>` in the RADOS OSD
   Configuration Reference Guide and page 141 of *Mastering Ceph,
   second edition* (Fisk, Nick. 2019).

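   A scrub or deep scrub can also be triggered manually on a single
   placement group (the PG id ``1.0`` here is hypothetical):

   .. code-block:: console

      $ ceph pg scrub 1.0
      $ ceph pg deep-scrub 1.0
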
secrets
   Secrets are credentials used to perform digital authentication
   whenever privileged users must access systems that require
   authentication. Secrets can be passwords, API keys, tokens, SSH
   keys, private certificates, or encryption keys.

SDS
   **S**\oftware-**d**\efined **S**\torage.

systemd oneshot
   A systemd ``type`` where a command is defined in ``ExecStart``
   which will exit upon completion (it is not intended to daemonize).

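   A minimal illustrative unit file (the unit name and command are
   hypothetical):

   .. code-block:: ini

      # /etc/systemd/system/example-task.service
      [Unit]
      Description=Run a one-off task and exit

      [Service]
      Type=oneshot
      ExecStart=/usr/bin/logger "example oneshot task ran"
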
Teuthology
   The collection of software that performs scripted tests on Ceph.

User
   An individual or a system actor (for example, an application) that
   uses Ceph clients to interact with the :term:`Ceph Storage
   Cluster`. See :ref:`User<rados-ops-user>` and :ref:`User
   Management<user-management>`.

Zone
   In the context of :term:`RGW`, a zone is a logical group that
   consists of one or more :term:`RGW` instances. A zone's
   configuration state is stored in the :term:`period`. See
   :ref:`Zones<radosgw-zones>`.

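   The zones known to an RGW deployment can be listed and inspected
   with ``radosgw-admin`` (``default`` is the name of the zone that a
   fresh RGW deployment creates; in a multi-site setup, substitute
   your own zone name):

   .. code-block:: console

      $ radosgw-admin zone list
      $ radosgw-admin zone get --rgw-zone=default
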
.. _https://github.com/ceph: https://github.com/ceph