===============
 Ceph Glossary
===============

Ceph is growing rapidly. As firms deploy Ceph, technical terms such as
"RADOS", "RBD", "RGW", and so forth require corresponding marketing terms
that explain what each component does. The terms in this glossary are
intended to complement the existing technical terminology.

Sometimes more than one term applies to a definition. Generally, the first
term reflects a term consistent with Ceph's marketing, and secondary terms
reflect either technical terms or legacy ways of referring to Ceph systems.


.. glossary::

   Ceph Project
      The aggregate term for the people, software, mission, and
      infrastructure of Ceph.

   cephx
      The Ceph authentication protocol. Cephx operates like Kerberos, but
      it has no single point of failure.

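
      As an illustrative sketch (the option names are real; ``cephx`` is
      also their default value), the protocol is enabled or disabled
      through the ``auth_*`` settings in ``ceph.conf``::

         [global]
         auth_cluster_required = cephx
         auth_service_required = cephx
         auth_client_required  = cephx
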
   Ceph
   Ceph Platform
      All Ceph software, which includes any piece of code hosted at
      `https://github.com/ceph`_.

   Ceph System
   Ceph Stack
      A collection of two or more components of Ceph.

   Ceph Node
   Node
   Host
      Any single machine or server in a Ceph System.

   Ceph Storage Cluster
   Ceph Object Store
   RADOS
   RADOS Cluster
   Reliable Autonomic Distributed Object Store
      The core set of storage software which stores the user's data
      (MON+OSD).

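
      As a minimal, illustrative example (the pool and object names are
      hypothetical), the ``rados`` CLI talks directly to the Ceph Storage
      Cluster::

         rados -p mypool put hello ./hello.txt   # store an object
         rados -p mypool ls                      # list objects in the pool
         rados -p mypool get hello /tmp/hello    # read the object back
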
   Ceph Cluster Map
   Cluster Map
      The set of maps comprising the monitor map, OSD map, PG map, MDS map,
      and CRUSH map. See `Cluster Map`_ for details.

   Ceph Object Storage
      The object storage "product", service, or capabilities, which consists
      essentially of a Ceph Storage Cluster and a Ceph Object Gateway.

   Ceph Object Gateway
   RADOS Gateway
   RGW
      The S3/Swift gateway component of Ceph.

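
      For example, a gateway user for the S3 API can be created with the
      ``radosgw-admin`` tool (the uid and display name here are
      hypothetical)::

         radosgw-admin user create --uid=demo --display-name="Demo User"

      The command prints the new user's S3 access and secret keys.
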
   Ceph Block Device
   RBD
      The block storage component of Ceph.

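
      As a brief sketch (the pool and image names are hypothetical), the
      ``rbd`` CLI manages block device images::

         rbd create --size 1024 mypool/myimage   # 1024 MB image
         rbd ls mypool                           # list images in the pool
         rbd map mypool/myimage                  # expose it as /dev/rbd*
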
   Ceph Block Storage
      The block storage "product," service, or capabilities when used in
      conjunction with ``librbd``, a hypervisor such as QEMU or Xen, and a
      hypervisor abstraction layer such as ``libvirt``.

   Ceph File System
   CephFS
   Ceph FS
      The POSIX filesystem components of Ceph. Refer to
      :ref:`CephFS Architecture <arch-cephfs>` and :ref:`ceph-file-system`
      for more details.

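
      For instance, a CephFS filesystem can be mounted with the kernel
      client or with ``ceph-fuse`` (the monitor address and mount point
      below are placeholders; depending on the release, the secret may need
      to be passed explicitly rather than read from the local keyring)::

         mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs -o name=admin
         # or, with the FUSE client:
         ceph-fuse /mnt/cephfs
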
   Cloud Platforms
   Cloud Stacks
      Third party cloud provisioning platforms such as OpenStack, CloudStack,
      OpenNebula, Proxmox VE, etc.

   Object Storage Device
   OSD
      A physical or logical storage unit (*e.g.*, LUN). Sometimes, Ceph
      users use the term "OSD" to refer to the :term:`Ceph OSD Daemon`,
      though the proper term is "Ceph OSD".

   Ceph OSD Daemon
   Ceph OSD Daemons
   Ceph OSD
      The Ceph OSD software, which interacts with a logical disk
      (:term:`OSD`). Sometimes, Ceph users use the term "OSD" to refer to
      the "Ceph OSD Daemon", though the proper term is "Ceph OSD".

   OSD id
      The integer that identifies an OSD. It is generated by the monitors
      as part of the creation of a new OSD.

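
      For example, the ids of all OSDs in a cluster can be listed with
      ``ceph osd ls`` (the output shown is illustrative)::

         $ ceph osd ls
         0
         1
         2
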
   OSD fsid
      A unique identifier used to further improve the uniqueness of an OSD.
      It is found in the OSD path in a file called ``osd_fsid``. The term
      ``fsid`` is used interchangeably with ``uuid``.

   OSD uuid
      Just like the OSD fsid, this is the OSD's unique identifier and is
      used interchangeably with ``fsid``.

   bluestore
      OSD BlueStore is a new back end for OSD daemons (kraken and newer
      versions). Unlike :term:`filestore`, it stores objects directly on
      Ceph block devices without any file system interface.

   filestore
      A back end for OSD daemons, in which a journal is needed and files
      are written to the filesystem.

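
      As an illustrative sketch (the device paths are hypothetical), an OSD
      backed by either store can be set up with the ``ceph-volume`` tool::

         ceph-volume lvm create --bluestore --data /dev/sdb
         # or, for the older filestore back end:
         ceph-volume lvm create --filestore --data /dev/sdc1 --journal /dev/sdc2
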
   Ceph Monitor
   MON
      The Ceph monitor software.

   Ceph Manager
   MGR
      The Ceph manager software, which collects all the state from the
      whole cluster in one place.

   Ceph Manager Dashboard
   Ceph Dashboard
   Dashboard Module
   Dashboard Plugin
   Dashboard
      A built-in web-based Ceph management and monitoring application to
      administer various aspects and objects of the cluster. The dashboard
      is implemented as a Ceph Manager module. See :ref:`mgr-dashboard` for
      more details.

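
      For example, since the dashboard is a manager module, it is enabled
      like any other module (the second command, which generates a
      self-signed TLS certificate, may not exist in very old releases)::

         ceph mgr module enable dashboard
         ceph dashboard create-self-signed-cert
         ceph mgr services     # prints the dashboard URL
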
   Ceph Metadata Server
   MDS
      The Ceph metadata software.

   Ceph Clients
   Ceph Client
      The collection of Ceph components which can access a Ceph Storage
      Cluster. These include the Ceph Object Gateway, the Ceph Block
      Device, the Ceph File System, and their corresponding libraries,
      kernel modules, and FUSEs.

   Ceph Kernel Modules
      The collection of kernel modules which can be used to interact with
      the Ceph System (e.g., ``ceph.ko``, ``rbd.ko``).

   Ceph Client Libraries
      The collection of libraries that can be used to interact with
      components of the Ceph System.

   Ceph Release
      Any distinct numbered version of Ceph.

   Ceph Point Release
      Any ad hoc release that includes only bug or security fixes.

   Ceph Interim Release
      A version of Ceph that has not yet been put through quality assurance
      testing but may contain new features.

   Ceph Release Candidate
      A major version of Ceph that has undergone initial quality assurance
      testing and is ready for beta testers.

   Ceph Stable Release
      A major version of Ceph in which all features from the preceding
      interim releases have been put through quality assurance testing
      successfully.

   Ceph Test Framework
   Teuthology
      The collection of software that performs scripted tests on Ceph.

   CRUSH
      Controlled Replication Under Scalable Hashing. It is the algorithm
      Ceph uses to compute object storage locations.

   CRUSH rule
      The CRUSH data placement rule that applies to one or more particular
      pools.

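
      For example, a replicated rule that spreads data across hosts under
      the ``default`` CRUSH root can be created as follows (the rule name
      is hypothetical)::

         ceph osd crush rule create-replicated fast-hosts default host
         ceph osd crush rule ls     # list existing rules
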
   Pool
   Pools
      Pools are logical partitions for storing objects.

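
      As a brief, illustrative example (the pool name and placement group
      count are hypothetical), a pool is created and listed with::

         ceph osd pool create mypool 64
         ceph osd lspools
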
   systemd oneshot
      A systemd ``Type`` in which the command defined in ``ExecStart`` runs
      to completion and then exits (it is not intended to daemonize).

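
      A minimal illustrative unit file (the command is a stand-in for any
      one-time task)::

         [Unit]
         Description=Example one-time task

         [Service]
         Type=oneshot
         ExecStart=/usr/bin/logger "one-time task ran"
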
   LVM tags
      Extensible metadata for LVM volumes and groups. It is used to store
      Ceph-specific information about devices and their relationship with
      OSDs.
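
      For example, the tags that ``ceph-volume`` attaches to an OSD's
      logical volume can be inspected with the standard LVM command below
      (the tag values shown are illustrative)::

         lvs -o lv_name,lv_tags
         # typical tags: ceph.osd_id=0,ceph.osd_fsid=...,ceph.cluster_name=ceph
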

.. _https://github.com/ceph: https://github.com/ceph
.. _Cluster Map: ../architecture#cluster-map