===============
 Ceph Glossary
===============

Ceph is growing rapidly. As firms deploy Ceph, technical terms such as
"RADOS", "RBD", "RGW" and so forth require corresponding marketing terms
that explain what each component does. The terms in this glossary are
intended to complement the existing technical terminology.

Sometimes more than one term applies to a definition. Generally, the first
term reflects a term consistent with Ceph's marketing, and secondary terms
reflect either technical terms or legacy ways of referring to Ceph systems.

.. glossary::

    Ceph Project
        The aggregate term for the people, software, mission and infrastructure
        of Ceph.

    cephx
        The Ceph authentication protocol. Cephx operates like Kerberos, but it
        has no single point of failure.

    Ceph
    Ceph Platform
        All Ceph software, which includes any piece of code hosted at
        `https://github.com/ceph`_.

    Ceph System
    Ceph Stack
        A collection of two or more components of Ceph.

    Ceph Node
    Node
    Host
        Any single machine or server in a Ceph System.

    Ceph Storage Cluster
    Ceph Object Store
    RADOS
    RADOS Cluster
    Reliable Autonomic Distributed Object Store
        The core set of storage software which stores the user's data (MON+OSD).

    Ceph Cluster Map
    Cluster Map
        The set of maps comprising the monitor map, OSD map, PG map, MDS map and
        CRUSH map. See `Cluster Map`_ for details.

    Ceph Object Storage
        The object storage "product", service or capabilities, which consists
        essentially of a Ceph Storage Cluster and a Ceph Object Gateway.

    Ceph Object Gateway
    RADOS Gateway
    RGW
        The S3/Swift gateway component of Ceph.

    Ceph Block Device
    RBD
        The block storage component of Ceph.

    Ceph Block Storage
        The block storage "product," service or capabilities when used in
        conjunction with ``librbd``, a hypervisor such as QEMU or Xen, and a
        hypervisor abstraction layer such as ``libvirt``.

    Ceph File System
    CephFS
    Ceph FS
        The POSIX filesystem components of Ceph. See
        :ref:`CephFS Architecture <arch-cephfs>` and :ref:`ceph-file-system`
        for more details.

    Cloud Platforms
    Cloud Stacks
        Third party cloud provisioning platforms such as OpenStack, CloudStack,
        OpenNebula, Proxmox VE, etc.

    Object Storage Device
    OSD
        A physical or logical storage unit (*e.g.*, LUN).
        Sometimes, Ceph users use the term "OSD" to refer to
        :term:`Ceph OSD Daemon`, though the proper term is "Ceph OSD".

    Ceph OSD Daemon
    Ceph OSD Daemons
    Ceph OSD
        The Ceph OSD software, which interacts with a logical
        disk (:term:`OSD`). Sometimes, Ceph users use the term "OSD" to refer
        to "Ceph OSD Daemon", though the proper term is "Ceph OSD".

    OSD id
        The integer that defines an OSD. It is generated by the monitors as part
        of the creation of a new OSD.

    OSD fsid
        A unique identifier that further distinguishes an OSD; it is found in
        the OSD path in a file called ``osd_fsid``. The term ``fsid`` is used
        interchangeably with ``uuid``.

    OSD uuid
        Just like the OSD fsid, this is the OSD's unique identifier; it is used
        interchangeably with ``fsid``.

    bluestore
        BlueStore is a back end for OSD daemons (Kraken and newer releases).
        Unlike :term:`filestore`, it stores objects directly on the Ceph block
        devices without any file system interface.

    filestore
        A back end for OSD daemons, in which a journal is needed and files are
        written to the filesystem.

    Ceph Monitor
    MON
        The Ceph monitor software.

    Ceph Manager
    MGR
        The Ceph manager software, which collects all the state from the whole
        cluster in one place.

    Ceph Manager Dashboard
    Ceph Dashboard
    Dashboard Module
    Dashboard Plugin
    Dashboard
        A built-in web-based Ceph management and monitoring application to
        administer various aspects and objects of the cluster. The dashboard is
        implemented as a Ceph Manager module. See :ref:`mgr-dashboard` for more
        details.

    Ceph Metadata Server
    MDS
        The Ceph metadata software.

    Ceph Clients
    Ceph Client
        The collection of Ceph components which can access a Ceph Storage
        Cluster. These include the Ceph Object Gateway, the Ceph Block Device,
        the Ceph File System, and their corresponding libraries, kernel modules,
        and FUSEs.

    Ceph Kernel Modules
        The collection of kernel modules which can be used to interact with the
        Ceph System (e.g., ``ceph.ko``, ``rbd.ko``).

    Ceph Client Libraries
        The collection of libraries that can be used to interact with components
        of the Ceph System.

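        As an example, a minimal sketch using the librados Python binding
        (the ``rados`` module); it assumes a reachable cluster, a readable
        ``/etc/ceph/ceph.conf``, and a valid client keyring:

        .. code-block:: python

            import rados

            # Connect with settings from the standard configuration file.
            cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
            cluster.connect()
            try:
                print("cluster fsid:", cluster.get_fsid())
                print("pools:", cluster.list_pools())
            finally:
                cluster.shutdown()
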
    Ceph Release
        Any distinct numbered version of Ceph.

    Ceph Point Release
        Any ad-hoc release that includes only bug or security fixes.

    Ceph Interim Release
        Versions of Ceph that have not yet been put through quality assurance
        testing, but may contain new features.

    Ceph Release Candidate
        A major version of Ceph that has undergone initial quality assurance
        testing and is ready for beta testers.

    Ceph Stable Release
        A major version of Ceph where all features from the preceding interim
        releases have been put through quality assurance testing successfully.

    Ceph Test Framework
    Teuthology
        The collection of software that performs scripted tests on Ceph.

    CRUSH
        Controlled Replication Under Scalable Hashing. It is the algorithm
        Ceph uses to compute object storage locations.

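        The sketch below is not CRUSH itself; it only illustrates the idea
        CRUSH shares with rendezvous-style hashing: every client can compute
        an object's placement independently from a hash, so no central lookup
        table is required. All names and the hash choice are illustrative:

        .. code-block:: python

            import hashlib

            def place(object_name: str, osds: list, replicas: int = 3) -> list:
                """Rank OSDs by a hash of (object, osd); take the top `replicas`."""
                def score(osd):
                    key = "{}:{}".format(object_name, osd).encode()
                    return int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
                return sorted(osds, key=score, reverse=True)[:replicas]

            # Any client computes the same answer with no directory service.
            print(place("my-object", osds=[0, 1, 2, 3, 4]))
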
    CRUSH rule
        The CRUSH data placement rule that applies to one or more particular
        pools.

    Pool
    Pools
        Pools are logical partitions for storing objects.

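        A minimal sketch of pool usage with the librados Python binding,
        assuming a reachable cluster and sufficient client capabilities; the
        pool and object names are illustrative:

        .. code-block:: python

            import rados

            cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
            cluster.connect()
            try:
                if not cluster.pool_exists('example-pool'):
                    cluster.create_pool('example-pool')
                ioctx = cluster.open_ioctx('example-pool')  # I/O bound to one pool
                try:
                    ioctx.write_full('greeting', b'hello')  # store an object
                    print(ioctx.read('greeting'))           # b'hello'
                finally:
                    ioctx.close()
            finally:
                cluster.shutdown()
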
    systemd oneshot
        A systemd ``Type`` in which the command defined in ``ExecStart`` runs
        to completion and then exits (it is not intended to daemonize).

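        For illustration, a minimal (hypothetical) oneshot unit; the
        description and command are placeholders:

        .. code-block:: ini

            [Unit]
            Description=Example one-time task

            [Service]
            # Runs once to completion; systemd does not expect a resident daemon.
            Type=oneshot
            ExecStart=/usr/bin/true
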
    LVM tags
        Extensible metadata for LVM volumes and groups. They are used to store
        Ceph-specific information about devices and their relationship with
        OSDs.

.. _https://github.com/ceph: https://github.com/ceph
.. _Cluster Map: ../architecture#cluster-map