===============
 Ceph Glossary
===============

Ceph is growing rapidly. As firms deploy Ceph, technical terms such as
"RADOS," "RBD," "RGW," and so forth require corresponding marketing terms
that explain what each component does. The terms in this glossary are
intended to complement the existing technical terminology.

Sometimes more than one term applies to a definition. Generally, the first
term is the one consistent with Ceph's marketing, and secondary terms are
either technical terms or legacy ways of referring to Ceph systems.


.. glossary::

    Ceph Project
        The aggregate term for the people, software, mission and
        infrastructure of Ceph.

    cephx
        The Ceph authentication protocol. Cephx operates like Kerberos,
        but it has no single point of failure.
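
        Cephx is enabled by default. As a minimal sketch, the following
        ``ceph.conf`` settings require cephx authentication for daemons
        and clients alike (verify the option names against your
        release)::

            [global]
            auth_cluster_required = cephx
            auth_service_required = cephx
            auth_client_required = cephx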

    Ceph
    Ceph Platform
        All Ceph software, which includes any piece of code hosted at
        `http://github.com/ceph`_.

    Ceph System
    Ceph Stack
        A collection of two or more components of Ceph.

    Ceph Node
    Node
    Host
        Any single machine or server in a Ceph System.

    Ceph Storage Cluster
    Ceph Object Store
    RADOS
    RADOS Cluster
    Reliable Autonomic Distributed Object Store
        The core set of storage software which stores the user's data
        (MON+OSD).
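
        Clients read and write objects in the cluster through
        ``librados``. A minimal sketch using the Python binding,
        assuming a pool named ``mypool`` already exists::

            import rados

            # Connect using the default configuration file and keyring.
            cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
            cluster.connect()

            # Write an object into the pool, then read it back.
            ioctx = cluster.open_ioctx('mypool')
            ioctx.write_full('hello-object', b'stored in RADOS')
            print(ioctx.read('hello-object'))

            ioctx.close()
            cluster.shutdown()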

    Ceph Cluster Map
    cluster map
        The set of maps comprising the monitor map, OSD map, PG map,
        MDS map and CRUSH map. See `Cluster Map`_ for details.
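
        Each of these maps can be inspected with the ``ceph`` CLI, for
        example (a non-exhaustive sketch)::

            ceph mon dump          # monitor map
            ceph osd dump          # OSD map
            ceph pg dump           # PG map
            ceph fs dump           # MDS/filesystem map
            ceph osd crush dump    # CRUSH map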

    Ceph Object Storage
        The object storage "product," service or capabilities, which
        consists essentially of a Ceph Storage Cluster and a Ceph Object
        Gateway.

    Ceph Object Gateway
    RADOS Gateway
    RGW
        The S3/Swift gateway component of Ceph.
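
        Applications talk to the gateway through the S3 or Swift REST
        APIs. As a sketch, an S3-style user can be created with
        ``radosgw-admin`` (the uid and display name are placeholders)::

            radosgw-admin user create --uid=johndoe --display-name="John Doe"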

    Ceph Block Device
    RBD
        The block storage component of Ceph.
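
        As a sketch, the ``rbd`` CLI can create an image and map it to a
        kernel block device (pool and image names are placeholders; the
        size is in megabytes)::

            rbd create mypool/myimage --size 1024
            sudo rbd map mypool/myimage    # exposes /dev/rbd*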

    Ceph Block Storage
        The block storage "product," service or capabilities when used
        in conjunction with ``librbd``, a hypervisor such as QEMU or
        Xen, and a hypervisor abstraction layer such as ``libvirt``.
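
        As an illustrative sketch, a ``libvirt`` domain can attach an
        RBD image as a virtio disk with a definition along these lines
        (pool, image and monitor host are placeholders; cephx
        authentication is omitted)::

            <disk type='network' device='disk'>
              <driver name='qemu' type='raw'/>
              <source protocol='rbd' name='mypool/myimage'>
                <host name='mon-host' port='6789'/>
              </source>
              <target dev='vda' bus='virtio'/>
            </disk>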

    Ceph Filesystem
    CephFS
    Ceph FS
        The POSIX filesystem components of Ceph.
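
        A CephFS filesystem can be mounted with the kernel client or
        ``ceph-fuse``. A sketch of a kernel mount (the monitor address
        and secret file are placeholders)::

            sudo mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs \
                -o name=admin,secretfile=/etc/ceph/admin.secret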

    Cloud Platforms
    Cloud Stacks
        Third party cloud provisioning platforms such as OpenStack,
        CloudStack, OpenNebula, Proxmox, etc.

    Object Storage Device
    OSD
        A physical or logical storage unit (*e.g.*, LUN). Sometimes,
        Ceph users use the term "OSD" to refer to
        :term:`Ceph OSD Daemon`, though the proper term is "Ceph OSD".

    Ceph OSD Daemon
    Ceph OSD Daemons
    Ceph OSD
        The Ceph OSD software, which interacts with a logical disk
        (:term:`OSD`). Sometimes, Ceph users use the term "OSD" to refer
        to "Ceph OSD Daemon", though the proper term is "Ceph OSD".

    OSD id
        The integer that identifies an OSD. It is generated by the
        monitors as part of the creation of a new OSD.
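
        For example, registering a new OSD with the monitors allocates
        and prints the next free id (a sketch; newer releases may offer
        other provisioning commands)::

            ceph osd create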

    OSD fsid
        A unique identifier that further identifies an OSD. It is found
        in the OSD path in a file called ``osd_fsid``. The term ``fsid``
        is used interchangeably with ``uuid``.

    OSD uuid
        Just like the OSD fsid, this is the OSD's unique identifier, and
        the term is used interchangeably with ``fsid``.

    bluestore
        BlueStore is a new back end for the OSD daemons (Kraken and
        newer releases). Unlike :term:`filestore`, it stores objects
        directly on raw block devices, without any file system
        interface.

    filestore
        A back end for OSD daemons in which objects are written as files
        on a file system, and a journal is required.
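
        As a sketch, ``ceph-volume`` can create an OSD with either back
        end (the device paths are placeholders)::

            ceph-volume lvm create --bluestore --data /dev/sdb
            ceph-volume lvm create --filestore --data /dev/sdc \
                --journal /dev/sdd1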

    Ceph Monitor
    MON
        The Ceph monitor software.

    Ceph Manager
    MGR
        The Ceph manager software, which collects all the state from the
        whole cluster in one place.
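
        The manager also hosts optional modules. A sketch of listing and
        enabling them (module availability varies by release)::

            ceph mgr module ls
            ceph mgr module enable dashboard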

    Ceph Metadata Server
    MDS
        The Ceph metadata software.

    Ceph Clients
    Ceph Client
        The collection of Ceph components which can access a Ceph
        Storage Cluster. These include the Ceph Object Gateway, the Ceph
        Block Device, the Ceph Filesystem, and their corresponding
        libraries, kernel modules, and FUSE clients.

    Ceph Kernel Modules
        The collection of kernel modules which can be used to interact
        with the Ceph System (e.g., ``ceph.ko``, ``rbd.ko``).

    Ceph Client Libraries
        The collection of libraries that can be used to interact with
        components of the Ceph System.

    Ceph Release
        Any distinct numbered version of Ceph.

    Ceph Point Release
        Any ad hoc release that includes only bug or security fixes.

    Ceph Interim Release
        Versions of Ceph that have not yet been put through quality
        assurance testing, but may contain new features.

    Ceph Release Candidate
        A major version of Ceph that has undergone initial quality
        assurance testing and is ready for beta testers.

    Ceph Stable Release
        A major version of Ceph where all features from the preceding
        interim releases have been put through quality assurance testing
        successfully.

    Ceph Test Framework
    Teuthology
        The collection of software that performs scripted tests on Ceph.

    CRUSH
        Controlled Replication Under Scalable Hashing. It is the
        algorithm Ceph uses to compute object storage locations.

    ruleset
        A set of CRUSH data placement rules that applies to one or more
        pools.
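
        A sketch of listing the CRUSH rules and assigning one to a pool
        (the names are placeholders; older releases use the
        ``crush_ruleset`` pool option instead)::

            ceph osd crush rule ls
            ceph osd pool set mypool crush_rule replicated_rule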

    Pool
    Pools
        Pools are logical partitions for storing objects.
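
        As a sketch, a replicated pool with 128 placement groups can be
        created with (the pool name and PG count are placeholders)::

            ceph osd pool create mypool 128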

    systemd oneshot
        A systemd ``Type`` where the command defined in ``ExecStart``
        exits upon completion (it is not intended to daemonize).
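
        A minimal sketch of a oneshot unit file (the description and
        command are placeholders)::

            [Unit]
            Description=Example one-shot task

            [Service]
            Type=oneshot
            ExecStart=/usr/bin/logger "one-shot task ran"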

    LVM tags
        Extensible metadata for LVM volumes and groups. They are used to
        store Ceph-specific information about devices and their
        relationship with OSDs.
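
        As a sketch, the tags that ``ceph-volume`` attaches to a logical
        volume (such as ``ceph.osd_id`` and ``ceph.osd_fsid``) can be
        listed with standard LVM tooling::

            sudo lvs -o lv_name,lv_tags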

.. _http://github.com/ceph: http://github.com/ceph
.. _Cluster Map: ../architecture#cluster-map