=============
Ceph Glossary
=============

.. glossary::

   Application
      More properly called a :term:`client`, an application is any
      program external to Ceph that uses a Ceph Cluster to store and
      replicate data.

   :ref:`BlueStore<rados_config_storage_devices_bluestore>`
      OSD BlueStore is a storage back end used by OSD daemons, and was
      designed specifically for use with Ceph. BlueStore was
      introduced in the Ceph Kraken release. Since the Ceph Luminous
      release (12.2), BlueStore has been Ceph's default and
      recommended storage back end, supplanting FileStore. Unlike
      :term:`filestore`, BlueStore stores objects directly on Ceph
      block devices without any file system interface.
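
      A quick way to confirm which back end a running OSD uses is to
      inspect its metadata (the OSD id ``0`` below is only an
      example)::

         ceph osd metadata 0 | grep osd_objectstore

      A BlueStore OSD reports ``"osd_objectstore": "bluestore"``.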

   Bucket
      In the context of :term:`RGW`, a bucket is a group of objects.
      In a filesystem-based analogy in which objects are the
      counterpart of files, buckets are the counterpart of
      directories. :ref:`Multisite sync
      policies<radosgw-multisite-sync-policy>` can be set on buckets
      to provide fine-grained control of data movement from one zone
      to another zone.

      The concept of the bucket has been taken from AWS S3. See also
      `the AWS S3 page on creating buckets <https://docs.aws.amazon.com/AmazonS3/latest/userguide/creating-buckets-s3.html>`_
      and `the AWS S3 'Buckets Overview' page <https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingBucket.html>`_.

      OpenStack Swift uses the term "containers" for what RGW and AWS
      call "buckets". See `the OpenStack Storage API overview page
      <https://docs.openstack.org/swift/latest/api/object_api_v1_overview.html>`_.
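
      On a cluster with a running RGW, ``radosgw-admin`` can list
      buckets and report per-bucket statistics (the bucket name
      ``mybucket`` is only a placeholder)::

         radosgw-admin bucket list
         radosgw-admin bucket stats --bucket=mybucket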

   Ceph
      Ceph is a distributed network storage and file system with
      distributed metadata management and POSIX semantics.

   Ceph Block Device
      A software instrument that orchestrates the storage of
      block-based data in Ceph. Ceph Block Device (also called "RBD",
      or "RADOS block device") splits block-based application data
      into "chunks". RADOS stores these chunks as objects. Ceph Block
      Device orchestrates the storage of those objects across the
      storage cluster. See also :term:`RBD`.
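
      A minimal sketch of creating and inspecting an RBD image (the
      pool and image names are placeholders, and the pool must already
      exist)::

         rbd create --size 1024 mypool/myimage
         rbd ls mypool
         rbd info mypool/myimage

      The ``--size`` value is expressed in megabytes by default.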

   Ceph Block Storage
      One of the three kinds of storage supported by Ceph (the other
      two are object storage and file storage). Ceph Block Storage is
      the block storage "product": the block-storage-related services
      and capabilities that result from using the collection of (1)
      ``librbd`` (a Python module that provides file-like access to
      :term:`RBD` images), (2) a hypervisor such as QEMU or Xen, and
      (3) a hypervisor abstraction layer such as ``libvirt``.

   Ceph Client
      Any of the Ceph components that can access a Ceph Storage
      Cluster. This includes the Ceph Object Gateway, the Ceph Block
      Device, the Ceph File System, and their corresponding libraries.
      It also includes kernel modules and FUSE (Filesystem in
      Userspace) clients.

   Ceph Client Libraries
      The collection of libraries that can be used to interact with
      components of the Ceph Cluster.

   Ceph Cluster Map
      See :term:`Cluster Map`.

   Ceph Dashboard
      :ref:`The Ceph Dashboard<mgr-dashboard>` is a built-in web-based
      Ceph management and monitoring application through which you can
      inspect and administer various resources within the cluster. It
      is implemented as a :ref:`ceph-manager-daemon` module.

   Ceph File System
      See :term:`CephFS`.

   :ref:`CephFS<ceph-file-system>`
      The **Ceph F**\ile **S**\ystem, or CephFS, is a POSIX-compliant
      file system built on top of Ceph's distributed object store,
      RADOS. See :ref:`CephFS Architecture <arch-cephfs>` for more
      details.

   Ceph Interim Release
      See :term:`Releases`.

   Ceph Kernel Modules
      The collection of kernel modules that can be used to interact
      with the Ceph Cluster (for example: ``ceph.ko``, ``rbd.ko``).

   :ref:`Ceph Manager<ceph-manager-daemon>`
      The Ceph manager daemon (ceph-mgr) is a daemon that runs
      alongside monitor daemons to provide monitoring and an interface
      to external monitoring and management systems. Since the
      Luminous release (12.x), no Ceph cluster functions properly
      unless it contains a running ceph-mgr daemon.

   Ceph Manager Dashboard
      See :term:`Ceph Dashboard`.

   Ceph Metadata Server
      See :term:`MDS`.

   Ceph Monitor
      A daemon that maintains a map of the state of the cluster. This
      "cluster state" includes the monitor map, the manager map, the
      OSD map, and the CRUSH map. A Ceph cluster must contain a
      minimum of three running monitors in order to be both redundant
      and highly available. Ceph monitors and the nodes on which they
      run are often referred to as "mon"s. See :ref:`Monitor Config
      Reference <monitor-config-reference>`.
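
      The current monitor roster and quorum can be inspected at any
      time::

         ceph mon stat
         ceph quorum_status --format json-pretty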

   Ceph Node
      A Ceph node is a unit of the Ceph Cluster that communicates with
      other nodes in the Ceph Cluster in order to replicate and
      redistribute data. All of the nodes together are called the
      :term:`Ceph Storage Cluster`. Ceph nodes include :term:`OSD`\s,
      :term:`Ceph Monitor`\s, :term:`Ceph Manager`\s, and
      :term:`MDS`\es. The term "node" is usually equivalent to "host"
      in the Ceph documentation. If you have a running Ceph Cluster,
      you can list all of the nodes in it by running the command
      ``ceph node ls all``.
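
      The listing can also be restricted to a single daemon type, for
      example::

         ceph node ls all
         ceph node ls osd
         ceph node ls mon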

   :ref:`Ceph Object Gateway<object-gateway>`
      An object storage interface built on top of librados. Ceph
      Object Gateway provides a RESTful gateway between applications
      and Ceph storage clusters.

   Ceph Object Storage
      See :term:`Ceph Object Store`.

   Ceph Object Store
      A Ceph Object Store consists of a :term:`Ceph Storage Cluster`
      and a :term:`Ceph Object Gateway` (RGW).

   :ref:`Ceph OSD<rados_configuration_storage-devices_ceph_osd>`
      Ceph **O**\bject **S**\torage **D**\aemon. The Ceph OSD
      software, which interacts with logical disks (:term:`OSD`).
      Around 2013, there was an attempt by "research and industry"
      (Sage's own words) to insist on using the term "OSD" to mean
      only "Object Storage Device", but the Ceph community has always
      persisted in using the term to mean "Object Storage Daemon", and
      no less an authority than Sage Weil himself confirmed in
      November 2022 that "Daemon is more accurate for how Ceph is
      built" (private correspondence between Zac Dover and Sage Weil,
      07 Nov 2022).

   Ceph OSD Daemon
      See :term:`Ceph OSD`.

   Ceph OSD Daemons
      See :term:`Ceph OSD`.

   Ceph Platform
      All Ceph software, which includes any piece of code hosted at
      `https://github.com/ceph`_.

   Ceph Point Release
      See :term:`Releases`.

   Ceph Project
      The aggregate term for the people, software, mission, and
      infrastructure of Ceph.

   Ceph Release
      See :term:`Releases`.

   Ceph Release Candidate
      See :term:`Releases`.

   Ceph Stable Release
      See :term:`Releases`.

   Ceph Stack
      A collection of two or more components of Ceph.

   :ref:`Ceph Storage Cluster<arch-ceph-storage-cluster>`
      The collection of :term:`Ceph Monitor`\s, :term:`Ceph
      Manager`\s, :term:`Ceph Metadata Server`\s, and :term:`OSD`\s
      that work together to store and replicate data for use by
      applications, Ceph Users, and :term:`Ceph Client`\s. Ceph
      Storage Clusters receive data from :term:`Ceph Client`\s.

   cephx
      The Ceph authentication protocol. Cephx operates like Kerberos,
      but it has no single point of failure.
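
      Cephx keys and capabilities are managed with the ``ceph auth``
      subcommands, for example::

         ceph auth ls
         ceph auth get client.admin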

   Client
      A client is any program external to Ceph that uses a Ceph
      Cluster to store and replicate data.

   Cloud Platforms
   Cloud Stacks
      Third-party cloud provisioning platforms such as OpenStack,
      CloudStack, OpenNebula, and Proxmox VE.

   Cluster Map
      The set of maps consisting of the monitor map, OSD map, PG map,
      MDS map, and CRUSH map, which together report the state of the
      Ceph cluster. See :ref:`the "Cluster Map" section of the
      Architecture document<architecture_cluster_map>` for details.
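
      Each constituent map can be dumped individually, for example::

         ceph mon dump
         ceph osd dump
         ceph fs dump
         ceph osd getcrushmap -o /tmp/crushmap

      The CRUSH map is stored in a binary format and can be decompiled
      with ``crushtool -d``.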

   CRUSH
      **C**\ontrolled **R**\eplication **U**\nder **S**\calable
      **H**\ashing. It is the algorithm Ceph uses to compute object
      storage locations.

   CRUSH rule
      The CRUSH data placement rule that applies to a particular pool
      or pools.
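
      Rules can be listed, inspected, and assigned to a pool (the pool
      name ``mypool`` is a placeholder; ``replicated_rule`` is the
      default rule)::

         ceph osd crush rule ls
         ceph osd crush rule dump replicated_rule
         ceph osd pool set mypool crush_rule replicated_rule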

   DAS
      **D**\irect-\ **A**\ttached **S**\torage. Storage that is
      attached directly to the computer accessing it, without passing
      through a network. Contrast with NAS and SAN.

   :ref:`Dashboard<mgr-dashboard>`
      A built-in web-based Ceph management and monitoring application
      to administer various aspects and objects of the cluster. The
      dashboard is implemented as a Ceph Manager module. See
      :ref:`mgr-dashboard` for more details.

   Dashboard Module
      Another name for :term:`Dashboard`.

   Dashboard Plugin
      See :term:`Dashboard Module`.

   filestore
      A back end for OSD daemons in which a journal is required and
      objects are written as files to a file system.

   FQDN
      **F**\ully **Q**\ualified **D**\omain **N**\ame. A domain name
      that is applied to a node in a network and that specifies the
      node's exact location in the tree hierarchy of the DNS.

      In the context of Ceph cluster administration, FQDNs are often
      applied to hosts. In this documentation, the term "FQDN" is
      used mostly to distinguish between FQDNs and relatively simpler
      hostnames, which do not specify the exact location of the host
      in the tree hierarchy of the DNS but merely name the host.

   Host
      Any single machine or server in a Ceph Cluster. See :term:`Ceph
      Node`.

   LVM tags
      Extensible metadata for LVM volumes and groups. LVM tags are
      used to store Ceph-specific information about devices and their
      relationships with OSDs.
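
      The tags that ``ceph-volume`` applies to an OSD's logical
      volume can be inspected directly::

         sudo ceph-volume lvm list
         sudo lvs -o lv_name,lv_tags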

   :ref:`MDS<cephfs_add_remote_mds>`
      The Ceph **M**\eta\ **D**\ata **S**\erver daemon. Also referred
      to as "ceph-mds". The Ceph metadata server daemon must be
      running in any Ceph cluster that runs the CephFS file system.
      The MDS stores all filesystem metadata.
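
      The state of the metadata servers in a cluster can be checked
      with::

         ceph mds stat
         ceph fs status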

   MGR
      The Ceph manager software, which collects all the state from
      the whole cluster in one place.

   MON
      The Ceph monitor software.

   Node
      See :term:`Ceph Node`.

   Object Storage Device
      See :term:`OSD`.

   OSD
      Probably :term:`Ceph OSD`, but not necessarily. Sometimes
      (especially in older correspondence, and especially in
      documentation that is not written specifically for Ceph), "OSD"
      means "**O**\bject **S**\torage **D**\evice", which refers to a
      physical or logical storage unit (for example: LUN). The Ceph
      community has always used the term "OSD" to refer to the
      :term:`Ceph OSD Daemon`, despite an industry push in the
      mid-2010s to insist that "OSD" should refer to "Object Storage
      Device", so it is important to know which meaning is intended.

   OSD fsid
      This is a unique identifier used to identify an OSD. It is
      found in the OSD path in a file called ``osd_fsid``. The term
      ``fsid`` is used interchangeably with ``uuid``.
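
      The fsid/uuid of every OSD in the cluster appears in the output
      of::

         ceph osd dump

      Each ``osd.N`` line in that output ends with the corresponding
      OSD's uuid.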

   OSD id
      The integer that defines an OSD. It is generated by the
      monitors during the creation of each OSD.

   OSD uuid
      This is the unique identifier of an OSD. This term is used
      interchangeably with ``fsid``.

   Period
      In the context of :term:`RGW`, a period is the configuration
      state of the :term:`Realm`. The period stores the configuration
      state of a multi-site configuration. When the period is
      updated, the "epoch" is said to have changed.

   :ref:`Pool<rados_pools>`
      A pool is a logical partition used to store objects.
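
      Pools are created and examined with the ``ceph osd pool``
      subcommands (the pool name ``mypool`` is a placeholder)::

         ceph osd pool create mypool
         ceph osd pool ls detail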

   Pools
      See :term:`pool`.

   RADOS
      **R**\eliable **A**\utonomic **D**\istributed **O**\bject
      **S**\tore. RADOS is the object store that provides a scalable
      service for variably-sized objects. The RADOS object store is
      the core component of a Ceph cluster. `This blog post from 2009
      <https://ceph.io/en/news/blog/2009/the-rados-distributed-object-store/>`_
      provides a beginner's introduction to RADOS. Readers interested
      in a deeper understanding of RADOS are directed to `RADOS: A
      Scalable, Reliable Storage Service for Petabyte-scale Storage
      Clusters <https://ceph.io/assets/pdfs/weil-rados-pdsw07.pdf>`_.

   RADOS Cluster
      A proper subset of the Ceph Cluster consisting of
      :term:`OSD`\s, :term:`Ceph Monitor`\s, and :term:`Ceph
      Manager`\s.

   RADOS Gateway
      See :term:`RGW`.

   RBD
      The block storage component of Ceph. Also called "RADOS Block
      Device" or :term:`Ceph Block Device`.

   :ref:`Realm<rgw-realms>`
      In the context of RADOS Gateway (RGW), a realm is a globally
      unique namespace that consists of one or more zonegroups.

   Releases

      Ceph Interim Release
         A version of Ceph that has not yet been put through quality
         assurance testing. May contain new features.

      Ceph Point Release
         Any ad hoc release that includes only bug fixes and security
         fixes.

      Ceph Release
         Any distinct numbered version of Ceph.

      Ceph Release Candidate
         A major version of Ceph that has undergone initial quality
         assurance testing and is ready for beta testers.

      Ceph Stable Release
         A major version of Ceph where all features from the
         preceding interim releases have been put through quality
         assurance testing successfully.

   Reliable Autonomic Distributed Object Store
      The core set of storage software which stores the user's data
      (MON+OSD). See also :term:`RADOS`.

   :ref:`RGW<object-gateway>`
      **R**\ADOS **G**\ate **W**\ay.

      The component of Ceph that provides a gateway to both the
      Amazon S3 RESTful API and the OpenStack Swift API. Also called
      "RADOS Gateway" and "Ceph Object Gateway".

   secrets
      Secrets are credentials used to perform digital authentication
      whenever privileged users must access systems that require
      authentication. Secrets can be passwords, API keys, tokens, SSH
      keys, private certificates, or encryption keys.

   SDS
      Software-defined storage.

   systemd oneshot
      A systemd ``Type`` in which the command defined in ``ExecStart``
      exits upon completion (that is, it is not intended to
      daemonize).
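
      A minimal sketch of such a unit (the description and command
      are placeholders)::

         [Unit]
         Description=Run a one-off maintenance task

         [Service]
         Type=oneshot
         ExecStart=/usr/bin/true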

   Teuthology
      The collection of software that performs scripted tests on Ceph.

   Zone
      In the context of :term:`RGW`, a zone is a logical group that
      consists of one or more :term:`RGW` instances. A zone's
      configuration state is stored in the :term:`period`. See
      :ref:`Zones<radosgw-zones>`.

.. _https://github.com/ceph: https://github.com/ceph