Ceph is growing rapidly. As firms deploy Ceph, technical terms such as
"RADOS", "RBD", and "RGW" require corresponding marketing terms that explain
what each component does. The terms in this glossary are intended to
complement the existing technical terminology.

Sometimes more than one term applies to a definition. Generally, the first
term is the one consistent with Ceph's marketing, and secondary terms are
either technical terms or legacy ways of referring to Ceph systems.

Ceph Project
    The aggregate term for the people, software, mission and infrastructure
    behind Ceph.

cephx
    The Ceph authentication protocol. Cephx operates like Kerberos, but it
    has no single point of failure.

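    For example, cephx keys and their capabilities are managed with the
    ``ceph auth`` family of commands; the client name and pool below are
    placeholders:

    .. code-block:: console

        # Create (or fetch) a cephx key for a hypothetical client with
        # read access to the monitors and read/write access to one pool.
        $ ceph auth get-or-create client.example mon 'allow r' osd 'allow rw pool=example-pool'
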
Ceph
    All Ceph software, which includes any piece of code hosted at
    `https://github.com/ceph`_.

Ceph Stack
    A collection of two or more components of Ceph.

Ceph Node
Node
Host
    Any single machine or server in a Ceph System.

Ceph Storage Cluster
Ceph Object Store
RADOS
RADOS Cluster
Reliable Autonomic Distributed Object Store
    The core set of storage software which stores the user's data (MON+OSD).

Ceph Cluster Map
Cluster Map
    The set of maps comprising the monitor map, OSD map, PG map, MDS map and
    CRUSH map. See `Cluster Map`_ for details.

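    As a sketch, each of these maps can be inspected from the command line;
    the exact commands and output formats vary somewhat between releases:

    .. code-block:: console

        $ ceph mon dump         # monitor map
        $ ceph osd dump         # OSD map
        $ ceph pg dump summary  # PG map (summary)
        $ ceph fs dump          # file system / MDS map
        $ ceph osd crush dump   # CRUSH map
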
Ceph Object Storage
    The object storage "product", service or capabilities, which consists
    essentially of a Ceph Storage Cluster and a Ceph Object Gateway.

Ceph Object Gateway
RADOS Gateway
RGW
    The S3/Swift gateway component of Ceph.

Ceph Block Device
RBD
    The block storage component of Ceph.

Ceph Block Storage
    The block storage "product", service or capabilities when used in
    conjunction with ``librbd``, a hypervisor such as QEMU or Xen, and a
    hypervisor abstraction layer such as ``libvirt``.

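    As an illustrative sketch (the pool and image names below are
    placeholders), an RBD image can be created with the ``rbd`` tool and
    then consumed by QEMU through ``librbd``:

    .. code-block:: console

        # Create a 10 GiB RBD image in an existing pool.
        $ rbd create --size 10240 libvirt-pool/guest-disk

        # QEMU can access the image directly via its rbd driver.
        $ qemu-img info rbd:libvirt-pool/guest-disk
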
Ceph File System
CephFS
    The POSIX filesystem components of Ceph. Refer to
    :ref:`CephFS Architecture <arch-cephfs>` and :ref:`ceph-file-system` for
    more details.

Cloud Platforms
Cloud Stacks
    Third party cloud provisioning platforms such as OpenStack, CloudStack,
    OpenNebula, and Proxmox VE.

Object Storage Device
OSD
    A physical or logical storage unit (*e.g.*, LUN). Sometimes, Ceph users
    use the term "OSD" to refer to the :term:`Ceph OSD Daemon`, though the
    proper term is "Ceph OSD".

Ceph OSD Daemon
Ceph OSD Daemons
Ceph OSD
    The Ceph OSD software, which interacts with a logical disk
    (:term:`OSD`). Sometimes, Ceph users use the term "OSD" to refer to the
    "Ceph OSD Daemon", though the proper term is "Ceph OSD".

OSD id
    The integer that defines an OSD. It is generated by the monitors as part
    of the creation of a new OSD.

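    For example, the integer IDs of the OSDs in a cluster, and their place
    in the CRUSH hierarchy, can be listed as follows:

    .. code-block:: console

        # List the integer ID of every OSD, one per line.
        $ ceph osd ls

        # Show OSD IDs together with their hosts and CRUSH placement.
        $ ceph osd tree
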
OSD fsid
    An additional unique identifier for an OSD, found in the OSD path in a
    file called ``osd_fsid``. The term ``fsid`` is used interchangeably with
    ``uuid``.

OSD uuid
    Just like the OSD fsid, this is the OSD's unique identifier, and it is
    used interchangeably with ``fsid``.

bluestore
    OSD BlueStore is a new back end for OSD daemons (Kraken and newer
    releases). Unlike :term:`filestore`, it stores objects directly on the
    Ceph block devices without any file system interface.

filestore
    A back end for OSD daemons that requires a journal and writes files to
    the underlying file system.

Ceph Monitor
MON
    The Ceph monitor software.

Ceph Manager
MGR
    The Ceph manager software, which collects all the state from the whole
    cluster in one place.

Ceph Manager Dashboard
Ceph Dashboard
    A built-in web-based Ceph management and monitoring application to
    administer various aspects and objects of the cluster. The dashboard is
    implemented as a Ceph Manager module. See :ref:`mgr-dashboard` for more
    details.

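    As a minimal sketch, the dashboard module can be enabled and its URL
    discovered as follows (certificate and user setup are omitted here):

    .. code-block:: console

        $ ceph mgr module enable dashboard
        $ ceph mgr services
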
Ceph Metadata Server
MDS
    The Ceph metadata software.

Ceph Clients
Ceph Client
    The collection of Ceph components which can access a Ceph Storage
    Cluster. These include the Ceph Object Gateway, the Ceph Block Device,
    the Ceph File System, and their corresponding libraries, kernel modules,
    and FUSEs.

Ceph Kernel Modules
    The collection of kernel modules which can be used to interact with the
    Ceph System (e.g., ``ceph.ko``, ``rbd.ko``).

Ceph Client Libraries
    The collection of libraries that can be used to interact with components
    of the Ceph System.

Ceph Release
    Any distinct numbered version of Ceph.

Ceph Point Release
    Any ad hoc release that includes only bug or security fixes.

Ceph Interim Release
    A version of Ceph that has not yet been put through quality assurance
    testing but may contain new features.

Ceph Release Candidate
    A major version of Ceph that has undergone initial quality assurance
    testing and is ready for beta testers.

Ceph Stable Release
    A major version of Ceph in which all features from the preceding interim
    releases have been put through quality assurance testing successfully.

Ceph Test Framework
Teuthology
    The collection of software that performs scripted tests on Ceph.

CRUSH
    Controlled Replication Under Scalable Hashing. It is the algorithm Ceph
    uses to compute object storage locations.

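    For example, the placement that CRUSH computes for a given object can be
    queried without writing any data; the pool and object names below are
    placeholders:

    .. code-block:: console

        # Show the placement group and acting set of OSDs for one object.
        $ ceph osd map example-pool example-object
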
CRUSH rule
    The CRUSH data placement rule that applies to a particular pool or
    pools.

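    As a sketch, the rules known to the cluster can be listed, and a rule
    can be assigned to a pool; ``example-pool`` and ``replicated_rule`` are
    only example names:

    .. code-block:: console

        $ ceph osd crush rule ls
        $ ceph osd pool set example-pool crush_rule replicated_rule
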
Pool
Pools
    Pools are logical partitions for storing objects.

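    For instance, a pool can be created and the existing pools listed as
    follows; the pool name and placement-group count are only examples:

    .. code-block:: console

        $ ceph osd pool create example-pool 32
        $ ceph osd lspools
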
systemd oneshot
    A systemd unit ``Type`` in which the command defined in ``ExecStart``
    exits upon completion (the process is not intended to daemonize).

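    A minimal, generic sketch of such a unit (this is not an actual Ceph
    unit file):

    .. code-block:: console

        $ cat /etc/systemd/system/example-oneshot.service
        [Unit]
        Description=Example one-shot task

        [Service]
        Type=oneshot
        ExecStart=/usr/bin/true
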
LVM tags
    Extensible metadata for LVM volumes and groups. They are used to store
    Ceph-specific information about devices and their relationship with
    OSDs.

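    For example, the tags attached to logical volumes can be inspected with
    standard LVM tooling:

    .. code-block:: console

        # Show each logical volume together with its tags.
        $ lvs -o lv_name,lv_tags
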
.. _https://github.com/ceph: https://github.com/ceph
.. _Cluster Map: ../architecture#cluster-map