Application
    More properly called a :term:`client`, an application is any
    program external to Ceph that uses a Ceph Cluster to store and
    replicate data.

:ref:`BlueStore<rados_config_storage_devices_bluestore>`
    OSD BlueStore is a storage back end used by OSD daemons, and was
    designed specifically for use with Ceph. BlueStore was introduced
    in the Ceph Kraken release. Since the Ceph Luminous release
    (12.2), BlueStore has been Ceph's default and recommended storage
    back end, supplanting FileStore. Unlike :term:`filestore`,
    BlueStore stores objects directly on Ceph block devices without
    any file system interface.
Bucket
    In the context of :term:`RGW`, a bucket is a group of objects. In
    a filesystem-based analogy in which objects are the counterpart of
    files, buckets are the counterpart of directories.
    :ref:`Multisite sync policies<radosgw-multisite-sync-policy>` can
    be set on buckets to provide fine-grained control of data movement
    from one zone to another.

    The concept of the bucket has been taken from AWS S3. See also
    `the AWS S3 page on creating buckets <https://docs.aws.amazon.com/AmazonS3/latest/userguide/creating-buckets-s3.html>`_
    and `the AWS S3 'Buckets Overview' page <https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingBucket.html>`_.

    OpenStack Swift uses the term "containers" for what RGW and AWS
    call "buckets". See `the OpenStack Storage API overview page <https://docs.openstack.org/swift/latest/api/object_api_v1_overview.html>`_.
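    Because RGW exposes the S3 API, buckets can be created with any
    standard S3 client. The following minimal sketch uses ``boto3``;
    the endpoint URL and credentials are illustrative placeholders,
    not values taken from this documentation:

    .. code-block:: python

        import boto3

        # Connect to an RGW endpoint with S3 credentials issued by RGW.
        s3 = boto3.client(
            "s3",
            endpoint_url="http://rgw.example.com:8080",
            aws_access_key_id="ACCESS_KEY",
            aws_secret_access_key="SECRET_KEY",
        )

        s3.create_bucket(Bucket="example-bucket")        # the "directory"
        s3.put_object(Bucket="example-bucket",
                      Key="hello.txt", Body=b"hello")    # an object in it
        print([b["Name"] for b in s3.list_buckets()["Buckets"]])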
Ceph
    Ceph is a distributed network storage and file system with
    distributed metadata management and POSIX semantics.

Ceph Block Device
    A software instrument that orchestrates the storage of block-based
    data in Ceph. Ceph Block Device (also called "RBD", or "RADOS
    block device") splits block-based application data into "chunks".
    RADOS stores these chunks as objects. Ceph Block Device
    orchestrates the storage of those objects across the storage
    cluster. See also :term:`RBD`.
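    As an illustration, the ``rbd`` Python binding can create an image
    and write to it, and RADOS then stores the image's data as
    objects. This is only a sketch: it assumes a reachable cluster, a
    default ``/etc/ceph/ceph.conf``, and an existing pool named
    ``rbd``.

    .. code-block:: python

        import rados
        import rbd

        # Connect to the cluster (assumes a local ceph.conf and keyring).
        cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
        cluster.connect()
        ioctx = cluster.open_ioctx("rbd")      # pool that holds the image

        rbd.RBD().create(ioctx, "example-image", 1 * 1024**3)  # 1 GiB image
        image = rbd.Image(ioctx, "example-image")
        image.write(b"block data", 0)          # write at offset 0
        image.close()

        ioctx.close()
        cluster.shutdown()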
Ceph Block Storage
    One of the three kinds of storage supported by Ceph (the other two
    are object storage and file storage). Ceph Block Storage is the
    block storage "product": the block-storage-related services and
    capabilities that are available when Ceph is used in conjunction
    with the collection of (1) ``librbd`` (a library that provides
    file-like access to :term:`RBD` images), (2) a hypervisor such as
    QEMU or Xen, and (3) a hypervisor abstraction layer such as
    ``libvirt``.
Ceph Client
    Any of the Ceph components that can access a Ceph Storage Cluster.
    This includes the Ceph Object Gateway, the Ceph Block Device, the
    Ceph File System, and their corresponding libraries. It also
    includes kernel modules and FUSE (Filesystem in Userspace)
    clients.

Ceph Client Libraries
    The collection of libraries that can be used to interact with
    components of the Ceph Cluster.
Ceph Cluster Map
    See :term:`Cluster Map`.

Ceph Dashboard
    :ref:`The Ceph Dashboard<mgr-dashboard>` is a built-in web-based
    Ceph management and monitoring application through which you can
    inspect and administer various resources within the cluster. It is
    implemented as a :ref:`ceph-manager-daemon` module.
:ref:`CephFS<ceph-file-system>`
    The **Ceph F**\ile **S**\ystem, or CephFS, is a POSIX-compliant
    file system built on top of Ceph's distributed object store,
    RADOS. See :ref:`CephFS Architecture <arch-cephfs>` for more
    details.
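    Because CephFS is POSIX-compliant, ordinary file APIs work against
    a mounted CephFS path. A small sketch, assuming the file system is
    mounted at the placeholder path ``/mnt/cephfs``:

    .. code-block:: python

        from pathlib import Path

        # Standard POSIX file I/O against a hypothetical CephFS mount.
        root = Path("/mnt/cephfs")
        (root / "example-dir").mkdir(exist_ok=True)
        (root / "example-dir" / "note.txt").write_text("stored in CephFS\n")
        print((root / "example-dir" / "note.txt").read_text())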
Ceph Kernel Modules
    The collection of kernel modules that can be used to interact with
    the Ceph Cluster (for example: ``ceph.ko``, ``rbd.ko``).

:ref:`Ceph Manager<ceph-manager-daemon>`
    The Ceph Manager daemon (``ceph-mgr``) runs alongside monitor
    daemons to provide monitoring and an interface to external
    monitoring and management systems. Since the Luminous release
    (12.x), no Ceph cluster functions properly unless it contains a
    running ceph-mgr daemon.

Ceph Manager Dashboard
    See :term:`Ceph Dashboard`.
Ceph Monitor
    A daemon that maintains a map of the state of the cluster. This
    "cluster state" includes the monitor map, the manager map, the OSD
    map, and the CRUSH map. A Ceph cluster must contain a minimum of
    three running monitors in order to be both redundant and highly
    available. Ceph monitors and the nodes on which they run are often
    referred to as "mons". See :ref:`Monitor Config Reference
    <monitor-config-reference>`.
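    Clients learn the cluster state maps by querying the monitors. A
    hedged sketch using the ``rados`` Python binding's ``mon_command``
    to request the monitor quorum status (assumes a reachable cluster
    and a default ``/etc/ceph/ceph.conf``):

    .. code-block:: python

        import json
        import rados

        cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
        cluster.connect()

        # Ask the monitors for their quorum status, much like running
        # `ceph quorum_status` on the command line.
        cmd = json.dumps({"prefix": "quorum_status", "format": "json"})
        ret, out, errs = cluster.mon_command(cmd, b"")
        if ret == 0:
            print("quorum members:", json.loads(out)["quorum_names"])

        cluster.shutdown()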
Ceph Node
    A Ceph node is a unit of the Ceph Cluster that communicates with
    other nodes in the Ceph Cluster in order to replicate and
    redistribute data. All of the nodes together are called the
    :term:`Ceph Storage Cluster`. Ceph nodes include :term:`OSD`\s,
    :term:`Ceph Monitor`\s, :term:`Ceph Manager`\s, and
    :term:`MDS`\es. The term "node" is usually equivalent to "host" in
    the Ceph documentation. If you have a running Ceph Cluster, you
    can list all of the nodes in it by running the command
    ``ceph node ls all``.
:ref:`Ceph Object Gateway<object-gateway>`
    An object storage interface built on top of librados. Ceph Object
    Gateway provides a RESTful gateway between applications and Ceph
    storage clusters.

Ceph Object Storage
    See :term:`Ceph Object Store`.

Ceph Object Store
    A Ceph Object Store consists of a :term:`Ceph Storage Cluster` and
    a :term:`Ceph Object Gateway` (RGW).
:ref:`Ceph OSD<rados_configuration_storage-devices_ceph_osd>`
    Ceph **O**\bject **S**\torage **D**\aemon. The Ceph OSD software,
    which interacts with logical disks (:term:`OSD`). Around 2013,
    there was an attempt by "research and industry" (Sage's own words)
    to insist on using the term "OSD" to mean only "Object Storage
    Device", but the Ceph community has always persisted in using the
    term to mean "Object Storage Daemon", and no less an authority
    than Sage Weil himself confirmed in November of 2022 that "Daemon
    is more accurate for how Ceph is built" (private correspondence
    between Zac Dover and Sage Weil).

Ceph OSD Daemon
    See :term:`Ceph OSD`.

Ceph OSD Daemons
    See :term:`Ceph OSD`.
Ceph Platform
    All Ceph software, which includes any piece of code hosted at
    `https://github.com/ceph`_.

Ceph Point Release
    See :term:`Releases`.

Ceph Project
    The aggregate term for the people, software, mission and
    infrastructure of Ceph.

Ceph Release
    See :term:`Releases`.

Ceph Release Candidate
    See :term:`Releases`.

Ceph Stable Release
    See :term:`Releases`.

Ceph Stack
    A collection of two or more components of Ceph.
:ref:`Ceph Storage Cluster<arch-ceph-storage-cluster>`
    The collection of :term:`Ceph Monitor`\s, :term:`Ceph Manager`\s,
    :term:`Ceph Metadata Server`\s, and :term:`OSD`\s that work
    together to store and replicate data for use by applications, Ceph
    Users, and :term:`Ceph Client`\s. Ceph Storage Clusters receive
    data from :term:`Ceph Client`\s.

cephx
    The Ceph authentication protocol. Cephx operates like Kerberos,
    but it has no single point of failure.

Client
    A client is any program external to Ceph that uses a Ceph Cluster
    to store and replicate data.
Cloud Platforms
    Third-party cloud provisioning platforms such as OpenStack,
    CloudStack, OpenNebula, and Proxmox VE.

Cluster Map
    The set of maps consisting of the monitor map, OSD map, PG map,
    MDS map, and CRUSH map, which together report the state of the
    Ceph cluster. See :ref:`the "Cluster Map" section of the
    Architecture document<architecture_cluster_map>` for details.

CRUSH
    Controlled Replication Under Scalable Hashing. The algorithm that
    Ceph uses to compute object storage locations.

CRUSH rule
    The CRUSH data placement rule that applies to a particular pool or
    pools.

DAS
    **D**\irect-\ **A**\ttached **S**\torage. Storage that is attached
    directly to the computer accessing it, without passing through a
    network. Contrast with NAS and SAN.
:ref:`Dashboard<mgr-dashboard>`
    A built-in web-based Ceph management and monitoring application
    used to administer various aspects and objects of the cluster. The
    dashboard is implemented as a Ceph Manager module. See
    :ref:`mgr-dashboard` for more details.

Dashboard Module
    Another name for :term:`Dashboard`.

filestore
    A back end for OSD daemons in which a journal is needed and files
    are written to the file system.
FQDN
    **F**\ully **Q**\ualified **D**\omain **N**\ame. A domain name
    that is applied to a node in a network and that specifies the
    node's exact location in the tree hierarchy of the DNS.

    In the context of Ceph cluster administration, FQDNs are often
    applied to hosts. In this documentation, the term "FQDN" is used
    mostly to distinguish between FQDNs and relatively simpler
    hostnames, which do not specify the exact location of the host in
    the tree hierarchy of the DNS but merely name the host.
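    A quick way to see the difference on a given host is the Python
    standard library; the names shown in the comments are illustrative
    only:

    .. code-block:: python

        import socket

        print(socket.gethostname())  # short hostname, e.g. "osd-01"
        print(socket.getfqdn())      # FQDN, e.g. "osd-01.storage.example.com"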
Host
    Any single machine or server in a Ceph Cluster. See :term:`Ceph
    Node`.

LVM tags
    Extensible metadata for LVM volumes and groups. LVM tags are used
    to store Ceph-specific information about devices and their
    relationship with OSDs.

:ref:`MDS<cephfs_add_remote_mds>`
    The Ceph **M**\eta\ **D**\ata **S**\erver daemon. Also referred to
    as "ceph-mds". The Ceph Metadata Server daemon must be running in
    any Ceph cluster that runs the CephFS file system. The MDS stores
    all filesystem metadata.

MGR
    The Ceph manager software, which collects all the state from the
    whole cluster in one place.

MON
    The Ceph monitor software.

Node
    See :term:`Ceph Node`.
Object Storage Device
    See :term:`OSD`.

OSD
    Probably :term:`Ceph OSD`, but not necessarily. Sometimes
    (especially in older correspondence, and especially in
    documentation that is not written specifically for Ceph), "OSD"
    means "**O**\bject **S**\torage **D**\evice", which refers to a
    physical or logical storage unit (for example: LUN). The Ceph
    community has always used the term "OSD" to refer to the
    :term:`Ceph OSD Daemon` despite an industry push in the mid-2010s
    to insist that "OSD" should refer to "Object Storage Device", so
    it is important to know which meaning is intended.

OSD FSID
    A unique identifier used to identify an OSD. It is found in the
    OSD path in a file called ``osd_fsid``. The term ``fsid`` is used
    interchangeably with ``uuid``.

OSD ID
    The integer that defines an OSD. It is generated by the monitors
    during the creation of each OSD.

OSD UUID
    The unique identifier of an OSD. This term is used interchangeably
    with ``fsid``.
Period
    In the context of :term:`RGW`, a period is the configuration state
    of the :term:`Realm`. The period stores the configuration state of
    a multi-site configuration. When the period is updated, its epoch
    is said to have changed.

:ref:`Pool<rados_pools>`
    A pool is a logical partition used to store objects.
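    A small sketch of creating and listing pools with the ``rados``
    Python binding. It assumes a reachable cluster, a default
    ``/etc/ceph/ceph.conf``, and sufficient capabilities; the pool
    name is a placeholder:

    .. code-block:: python

        import rados

        cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
        cluster.connect()

        # Create a new logical partition for objects, if it is absent.
        if not cluster.pool_exists("example-pool"):
            cluster.create_pool("example-pool")
        print(cluster.list_pools())

        cluster.shutdown()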
RADOS
    **R**\eliable **A**\utonomic **D**\istributed **O**\bject
    **S**\tore. RADOS is the object store that provides a scalable
    service for variably-sized objects. The RADOS object store is the
    core component of a Ceph cluster. `This blog post from 2009
    <https://ceph.io/en/news/blog/2009/the-rados-distributed-object-store/>`_
    provides a beginner's introduction to RADOS. Readers interested in
    a deeper understanding of RADOS are directed to `RADOS: A
    Scalable, Reliable Storage Service for Petabyte-scale Storage
    Clusters <https://ceph.io/assets/pdfs/weil-rados-pdsw07.pdf>`_.
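    As a sketch of what storing variably-sized objects looks like to a
    client, the ``rados`` Python binding can write and read an object
    in a pool. This assumes a reachable cluster, a default
    ``/etc/ceph/ceph.conf``, and an existing pool with the placeholder
    name ``example-pool``:

    .. code-block:: python

        import rados

        cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
        cluster.connect()

        # An I/O context is bound to one pool; objects live inside pools.
        ioctx = cluster.open_ioctx("example-pool")
        ioctx.write_full("greeting", b"hello rados")  # whole-object write
        print(ioctx.read("greeting"))                 # b'hello rados'
        ioctx.remove_object("greeting")

        ioctx.close()
        cluster.shutdown()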
RADOS Cluster
    A proper subset of the Ceph Cluster consisting of :term:`OSD`\s,
    :term:`Ceph Monitor`\s, and :term:`Ceph Manager`\s.

RBD
    The block storage component of Ceph. Also called "RADOS Block
    Device" or :term:`Ceph Block Device`.
:ref:`Realm<rgw-realms>`
    In the context of RADOS Gateway (RGW), a realm is a globally
    unique namespace that consists of one or more zonegroups.
Releases

    Ceph Interim Release
        A version of Ceph that has not yet been put through quality
        assurance testing. May contain new features.

    Ceph Point Release
        Any ad hoc release that includes only bug fixes and security
        fixes.

    Ceph Release
        Any distinct numbered version of Ceph.

    Ceph Release Candidate
        A major version of Ceph that has undergone initial quality
        assurance testing and is ready for beta testers.

    Ceph Stable Release
        A major version of Ceph where all features from the preceding
        interim releases have been put through quality assurance
        testing successfully.
Reliable Autonomic Distributed Object Store
    The core set of storage software which stores the user's data
    (MON+OSD). See also :term:`RADOS`.

:ref:`RGW<object-gateway>`
    **R**\ADOS **G**\ate **W**\ay.

    The component of Ceph that provides a gateway to both the Amazon
    S3 RESTful API and the OpenStack Swift API. Also called "RADOS
    Gateway" and "Ceph Object Gateway".
secrets
    Secrets are credentials used to perform digital authentication
    whenever privileged users must access systems that require
    authentication. Secrets can be passwords, API keys, tokens, SSH
    keys, private certificates, or encryption keys.

SDS
    Software-defined storage.

systemd oneshot
    A systemd ``Type`` in which the command defined in ``ExecStart``
    exits upon completion (it is not intended to daemonize).

Teuthology
    The collection of software that performs scripted tests on Ceph.

Zone
    In the context of :term:`RGW`, a zone is a logical group that
    consists of one or more :term:`RGW` instances. A zone's
    configuration state is stored in the :term:`period`. See
    :ref:`Zones<radosgw-zones>`.
.. _https://github.com/ceph: https://github.com/ceph
.. _Cluster Map: ../architecture#cluster-map