[[chapter_pveceph]]
ifdef::manvolnum[]
pveceph(1)
==========
:pve-toplevel:

NAME
----

pveceph - Manage Ceph Services on Proxmox VE Nodes

SYNOPSIS
--------

include::pveceph.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
Manage Ceph Services on Proxmox VE Nodes
========================================
:pve-toplevel:
endif::manvolnum[]

[thumbnail="screenshot/gui-ceph-status.png"]

{pve} unifies your compute and storage systems, i.e. you can use the same
physical nodes within a cluster for both computing (processing VMs and
containers) and replicated storage. The traditional silos of compute and
storage resources can be wrapped up into a single hyper-converged appliance.
Separate storage networks (SANs) and connections via network attached storage
(NAS) disappear. With the integration of Ceph, an open source software-defined
storage platform, {pve} has the ability to run and manage Ceph storage directly
on the hypervisor nodes.

Ceph is a distributed object store and file system designed to provide
excellent performance, reliability and scalability.

.Some advantages of Ceph on {pve} are:
- Easy setup and management with CLI and GUI support
- Thin provisioning
- Snapshot support
- Self healing
- Scalable to the exabyte level
- Set up pools with different performance and redundancy characteristics
- Data is replicated, making it fault tolerant
- Runs on economical commodity hardware
- No need for hardware RAID controllers
- Open source

For small to medium-sized deployments, it is possible to install a Ceph server
for RADOS Block Devices (RBD) directly on your {pve} cluster nodes, see
xref:ceph_rados_block_devices[Ceph RADOS Block Devices (RBD)]. Recent
hardware has plenty of CPU power and RAM, so running storage services
and VMs on the same node is possible.

To simplify management, we provide 'pveceph' - a tool to install and
manage {ceph} services on {pve} nodes.

.Ceph consists of a couple of daemons footnote:[Ceph intro http://docs.ceph.com/docs/master/start/intro/], for use as RBD storage:
- Ceph Monitor (ceph-mon)
- Ceph Manager (ceph-mgr)
- Ceph OSD (ceph-osd; Object Storage Daemon)

TIP: We recommend getting familiar with the Ceph vocabulary.
footnote:[Ceph glossary http://docs.ceph.com/docs/luminous/glossary]


Precondition
------------

To build a Proxmox Ceph cluster, there should be at least three (preferably
identical) servers for the setup.

A 10Gb network, exclusively used for Ceph, is recommended. A meshed network
setup is also an option if there are no 10Gb switches available; see our wiki
article footnote:[Full Mesh Network for Ceph {webwiki-url}Full_Mesh_Network_for_Ceph_Server].

Also check the recommendations from
http://docs.ceph.com/docs/luminous/start/hardware-recommendations/[Ceph's website].

.Avoid RAID
As Ceph handles data object redundancy and multiple parallel writes to disks
(OSDs) on its own, using a RAID controller normally doesn't improve
performance or availability. On the contrary, Ceph is designed to handle whole
disks on its own, without any abstraction in between. RAID controllers are not
designed for the Ceph use case and may complicate things and sometimes even
reduce performance, as their write and caching algorithms may interfere with
Ceph's own.

WARNING: Avoid RAID controllers; use a host bus adapter (HBA) instead.


[[pve_ceph_install]]
Installation of Ceph Packages
-----------------------------

On each node run the installation script as follows:

[source,bash]
----
pveceph install
----

This sets up an `apt` package repository in
`/etc/apt/sources.list.d/ceph.list` and installs the required software.
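
The exact repository entry depends on the Ceph release being set up; if in
doubt, you can review what the script configured with, for example:

[source,bash]
----
cat /etc/apt/sources.list.d/ceph.list
----
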
Creating initial Ceph configuration
-----------------------------------

[thumbnail="screenshot/gui-ceph-config.png"]

After installation of the packages, you need to create an initial Ceph
configuration on just one node, based on your network (`10.10.10.0/24`
in the following example) dedicated to Ceph:

[source,bash]
----
pveceph init --network 10.10.10.0/24
----

This creates an initial configuration at `/etc/pve/ceph.conf`. That file is
automatically distributed to all {pve} nodes by using
xref:chapter_pmxcfs[pmxcfs]. The command also creates a symbolic link
at `/etc/ceph/ceph.conf` pointing to that file, so you can simply run
Ceph commands without needing to specify a configuration file.
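
Thanks to this symlink, plain Ceph commands work without an explicit
configuration argument. For example, once the first monitor has been created
(see the following section), you can check the cluster state from any node:

[source,bash]
----
ceph -s
----
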
[[pve_ceph_monitors]]
Creating Ceph Monitors
----------------------

[thumbnail="screenshot/gui-ceph-monitor.png"]

The Ceph Monitor (MON)
footnote:[Ceph Monitor http://docs.ceph.com/docs/luminous/start/intro/]
maintains a master copy of the cluster map. For high availability, you need to
have at least 3 monitors.

On each node where you want to place a monitor (three monitors are
recommended), create one by using the 'Ceph -> Monitor' tab in the GUI or run:

[source,bash]
----
pveceph createmon
----

This will also install the needed Ceph Manager ('ceph-mgr') by default. If you
do not want to install a manager, specify the '-exclude-manager' option.
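
For example, to create a monitor without the bundled manager (assuming you
plan to run managers only on selected nodes):

[source,bash]
----
pveceph createmon -exclude-manager
----
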
[[pve_ceph_manager]]
Creating Ceph Manager
---------------------

The Manager daemon runs alongside the monitors, providing an interface for
monitoring the cluster. Since the Ceph Luminous release, the
ceph-mgr footnote:[Ceph Manager http://docs.ceph.com/docs/luminous/mgr/] daemon
is required. During monitor installation, the Ceph Manager will be installed as
well.

NOTE: It is recommended to install the Ceph Manager on the monitor nodes. For
high availability, install more than one manager.

[source,bash]
----
pveceph createmgr
----


[[pve_ceph_osds]]
Creating Ceph OSDs
------------------

[thumbnail="screenshot/gui-ceph-osd-status.png"]

You can create an OSD via the GUI, or via the CLI as follows:

[source,bash]
----
pveceph createosd /dev/sd[X]
----

TIP: We recommend a Ceph cluster size of at least 12 OSDs, distributed evenly
among your (at least three) nodes, i.e. 4 OSDs on each node.

If the disk was in use before (e.g. for ZFS, RAID or as an OSD), the following
commands should be sufficient to remove the partition table, boot sector and
any other OSD leftovers:

[source,bash]
----
dd if=/dev/zero of=/dev/sd[X] bs=1M count=200
ceph-disk zap /dev/sd[X]
----

WARNING: The above commands will destroy data on the disk!

Ceph Bluestore
~~~~~~~~~~~~~~

Starting with the Ceph Kraken release, a new Ceph OSD storage type was
introduced, the so-called Bluestore
footnote:[Ceph Bluestore http://ceph.com/community/new-luminous-bluestore/].
This is the default when creating OSDs in Ceph Luminous.

[source,bash]
----
pveceph createosd /dev/sd[X]
----

NOTE: In order to select a disk in the GUI, to be more failsafe, the disk needs
to have a GPT footnoteref:[GPT, GPT partition table
https://en.wikipedia.org/wiki/GUID_Partition_Table] partition table. You can
create this with `gdisk /dev/sd(x)`. If there is no GPT, you cannot select the
disk as DB/WAL.

If you want to use a separate DB/WAL device for your OSDs, you can specify it
through the '-journal_dev' option. The WAL is placed with the DB, if not
specified separately.

[source,bash]
----
pveceph createosd /dev/sd[X] -journal_dev /dev/sd[Y]
----

NOTE: The DB stores BlueStore's internal metadata, and the WAL is BlueStore's
internal journal or write-ahead log. It is recommended to use a fast SSD or
NVRAM for better performance.


Ceph Filestore
~~~~~~~~~~~~~~
Until Ceph Luminous, Filestore was used as the storage type for Ceph OSDs. It
can still be used and might give better performance in small setups, when
backed by an NVMe SSD or similar.

[source,bash]
----
pveceph createosd /dev/sd[X] -bluestore 0
----

NOTE: In order to select a disk in the GUI, the disk needs to have a
GPT footnoteref:[GPT] partition table. You can
create this with `gdisk /dev/sd(x)`. If there is no GPT, you cannot select the
disk as journal. Currently the journal size is fixed to 5 GB.

If you want to use a dedicated SSD journal disk:

[source,bash]
----
pveceph createosd /dev/sd[X] -journal_dev /dev/sd[Y] -bluestore 0
----

Example: Use /dev/sdf as the data disk (4TB) and /dev/sdb as the dedicated SSD
journal disk.

[source,bash]
----
pveceph createosd /dev/sdf -journal_dev /dev/sdb -bluestore 0
----

This partitions the disk (data and journal partition), creates
filesystems and starts the OSD. Afterwards it is running and fully
functional.

NOTE: This command refuses to initialize a disk when it detects existing data.
So if you want to overwrite a disk, you should remove the existing data first.
You can do that using: 'ceph-disk zap /dev/sd[X]'

You can create OSDs containing both journal and data partitions, or you can
place the journal on a dedicated SSD. Using an SSD journal disk is highly
recommended to achieve good performance.


[[pve_ceph_pools]]
Creating Ceph Pools
-------------------

[thumbnail="screenshot/gui-ceph-pools.png"]

A pool is a logical group for storing objects. It holds **P**lacement
**G**roups (`PG`, `pg_num`), a collection of objects.

When no options are given, we set a default of **128 PGs**, a **size of 3
replicas** and a **min_size of 2 replicas** for serving objects in a degraded
state.

NOTE: The default number of PGs works for 2-5 disks. Ceph throws a
'HEALTH_WARN' if you have too few or too many PGs in your cluster.

It is advised to calculate the PG number depending on your setup; you can find
the formula and the PG calculator footnote:[PG calculator
http://ceph.com/pgcalc/] online. While PGs can be increased later on, they can
never be decreased.
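
As a rough sketch of the calculation behind the PG calculator (targeting about
100 PGs per OSD, then rounding up to the next power of two):

----
pg_num = (number of OSDs * 100) / pool size
       = (12 * 100) / 3 = 400  -> next power of two: 512
----

The numbers above (12 OSDs, size 3) are just an example; verify the result
against the online calculator for your concrete setup.
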
You can create pools through the command line or in the GUI on each PVE host
under **Ceph -> Pools**.

[source,bash]
----
pveceph createpool <name>
----

If you would also like to automatically get a storage definition for your
pool, activate the checkbox "Add storages" in the GUI, or use the command line
option '--add_storages' at pool creation.
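
For example, to create a pool named `vm-pool` (a hypothetical name) and add a
matching storage definition in one step:

[source,bash]
----
pveceph createpool vm-pool --add_storages
----
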
Further information on Ceph pool handling can be found in the Ceph pool
operation footnote:[Ceph pool operation
http://docs.ceph.com/docs/luminous/rados/operations/pools/]
manual.

Ceph CRUSH & device classes
---------------------------
The foundation of Ceph is its algorithm, **C**ontrolled **R**eplication
**U**nder **S**calable **H**ashing
(CRUSH footnote:[CRUSH https://ceph.com/wp-content/uploads/2016/08/weil-crush-sc06.pdf]).

CRUSH calculates where to store data and where to retrieve it from; this has
the advantage that no central index service is needed. CRUSH works with a map
of OSDs, buckets (device locations) and rulesets (data replication) for pools.

NOTE: Further information can be found in the Ceph documentation, under the
section CRUSH map footnote:[CRUSH map http://docs.ceph.com/docs/luminous/rados/operations/crush-map/].

This map can be altered to reflect different replication hierarchies. The
object replicas can be separated (e.g. into failure domains), while
maintaining the desired distribution.

A common use case is to use different classes of disks for different Ceph
pools. For this reason, Ceph introduced device classes with Luminous, to
accommodate the need for easy ruleset generation.

The device classes can be seen in the 'ceph osd tree' output. These classes
represent their own root bucket, which can be seen with the below command.

[source, bash]
----
ceph osd crush tree --show-shadow
----

Example output from the above command:

[source, bash]
----
ID  CLASS WEIGHT  TYPE NAME
-16 nvme  2.18307 root default~nvme
-13 nvme  0.72769     host sumi1~nvme
 12 nvme  0.72769         osd.12
-14 nvme  0.72769     host sumi2~nvme
 13 nvme  0.72769         osd.13
-15 nvme  0.72769     host sumi3~nvme
 14 nvme  0.72769         osd.14
 -1       7.70544 root default
 -3       2.56848     host sumi1
 12 nvme  0.72769         osd.12
 -5       2.56848     host sumi2
 13 nvme  0.72769         osd.13
 -7       2.56848     host sumi3
 14 nvme  0.72769         osd.14
----

To let a pool distribute its objects only on a specific device class, you need
to create a ruleset with the specific class first.

[source, bash]
----
ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <class>
----

[frame="none",grid="none", align="left", cols="30%,70%"]
|===
|<rule-name>|name of the rule, to connect with a pool (seen in GUI & CLI)
|<root>|which crush root it should belong to (default ceph root "default")
|<failure-domain>|at which failure-domain the objects should be distributed (usually host)
|<class>|what type of OSD backing store to use (e.g. nvme, ssd, hdd)
|===
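
For instance, a rule that keeps replicas on NVMe-backed OSDs, spread across
hosts in the default root (the rule name `nvme-only` is just an example):

[source, bash]
----
ceph osd crush rule create-replicated nvme-only default host nvme
----
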
Once the rule is in the CRUSH map, you can tell a pool to use the ruleset.

[source, bash]
----
ceph osd pool set <pool-name> crush_rule <rule-name>
----

TIP: If the pool already contains objects, all of these have to be moved
accordingly. Depending on your setup this may introduce a big performance hit on
your cluster. As an alternative, you can create a new pool and move disks
separately.

Ceph Client
-----------

[thumbnail="screenshot/gui-ceph-log.png"]

You can then configure {pve} to use such pools to store VM or
Container images. Simply use the GUI to add a new `RBD` storage (see
section xref:ceph_rados_block_devices[Ceph RADOS Block Devices (RBD)]).

You also need to copy the keyring to a predefined location for an external
Ceph cluster. If Ceph is installed on the Proxmox nodes themselves, this will
be done automatically.

NOTE: The file name needs to be `<storage_id>` + `.keyring`, where
`<storage_id>` is the expression after 'rbd:' in `/etc/pve/storage.cfg`, which
is `my-ceph-storage` in the following example:

[source,bash]
----
mkdir /etc/pve/priv/ceph
cp /etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/my-ceph-storage.keyring
----
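
For reference, the matching entry in `/etc/pve/storage.cfg` for an external
cluster might look roughly like the following sketch (the monitor addresses
and pool name are placeholders for your setup):

----
rbd: my-ceph-storage
        monhost 10.10.10.1 10.10.10.2 10.10.10.3
        pool rbd
        content images
----
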
[[pveceph_fs]]
CephFS
------

Ceph also provides a filesystem, running on top of the same object storage as
RADOS block devices do. A **M**eta**d**ata **S**erver (`MDS`) is used to map
the RADOS backed objects to files and directories, making it possible to
provide a POSIX-compliant replicated filesystem. This allows one to have a
clustered, highly available, shared filesystem in an easy way if Ceph is
already used. Its Metadata Servers guarantee that files get balanced out over
the whole Ceph cluster; this way, even high load will not overload a single
host, which can be an issue with traditional shared filesystem approaches,
like `NFS`, for example.

{pve} supports both using an existing xref:storage_cephfs[CephFS as storage]
to save backups, ISO files or container templates, and creating a
hyper-converged CephFS itself.


[[pveceph_fs_mds]]
Metadata Server (MDS)
~~~~~~~~~~~~~~~~~~~~~

CephFS needs at least one Metadata Server to be configured and running in
order to work. One can simply create one through the {pve} web GUI's `Node ->
CephFS` panel or on the command line with:

----
pveceph mds create
----

Multiple metadata servers can be created in a cluster, but with the default
settings, only one can be active at any time. If an MDS, or its node, becomes
unresponsive (or crashes), another `standby` MDS will get promoted to `active`.
One can speed up the hand-over between the active and a standby MDS by using
the 'hotstandby' parameter option on creation, or if you have already created
it, you may set/add:

----
mds standby replay = true
----

in the respective MDS section of ceph.conf. With this enabled, this specific
MDS will always poll the active one, so that it can take over faster, as it is
in a `warm' state. But naturally, the active polling will cause some additional
performance impact on your system and the active `MDS`.
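
For example, to create an MDS with this hot-standby behavior enabled right
away (using the 'hotstandby' option referenced above as a CLI flag):

----
pveceph mds create --hotstandby
----
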
Multiple Active MDS
^^^^^^^^^^^^^^^^^^^

Since Luminous (12.2.x) you can also have multiple active metadata servers
running, but this is normally only useful if you have a high count of parallel
clients, as otherwise the `MDS` is seldom the bottleneck. If you want to set
this up, please refer to the Ceph documentation. footnote:[Configuring
multiple active MDS daemons http://docs.ceph.com/docs/mimic/cephfs/multimds/]
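
As a pointer only (see the linked documentation for the caveats), the number
of active MDS daemons for a filesystem is raised with a command along these
lines, where `cephfs` is the filesystem name used below:

----
ceph fs set cephfs max_mds 2
----
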
[[pveceph_fs_create]]
Create a CephFS
~~~~~~~~~~~~~~~

With {pve}'s CephFS integration, you can create a CephFS easily over the
Web GUI, the CLI or an external API interface. Some prerequisites are required
for this to work:

.Prerequisites for a successful CephFS setup:
- xref:pve_ceph_install[Install Ceph packages] - if this was already done some
  time ago, you might want to rerun it on an up-to-date system to ensure that
  all CephFS related packages get installed, too.
- xref:pve_ceph_monitors[Setup Monitors]
- xref:pve_ceph_osds[Setup your OSDs]
- xref:pveceph_fs_mds[Setup at least one MDS]

After this was all checked and done, you can simply create a CephFS through
either the Web GUI's `Node -> CephFS` panel or the command line tool
`pveceph`, for example with:

----
pveceph fs create --pg_num 128 --add-storage
----

This creates a CephFS named `'cephfs'' using a pool for its data named
`'cephfs_data'' with `128` placement groups and a pool for its metadata named
`'cephfs_metadata'' with one quarter of the data pool's placement groups (`32`).
Check the xref:pve_ceph_pools[{pve} managed Ceph pool chapter] or visit the
Ceph documentation for more information regarding a fitting placement group
number (`pg_num`) for your setup footnote:[Ceph Placement Groups
http://docs.ceph.com/docs/mimic/rados/operations/placement-groups/].
Additionally, the `'--add-storage'' parameter will add the CephFS to the {pve}
storage configuration after it was created successfully.
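
Afterwards, you can verify that the filesystem exists and that an MDS has
entered the active state with plain Ceph commands, for example:

----
ceph fs ls
ceph mds stat
----
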
Destroy CephFS
~~~~~~~~~~~~~~

WARNING: Destroying a CephFS will render all of its data unusable; this cannot
be undone!

If you really want to destroy an existing CephFS, you first need to stop, or
destroy, all metadata servers (`MDS`). You can destroy them either over the
Web GUI or the command line interface, with:

----
pveceph mds destroy NAME
----
on each {pve} node hosting an MDS daemon.

Then, you can remove (destroy) the CephFS by issuing:

----
ceph fs rm NAME --yes-i-really-mean-it
----
on a single node hosting Ceph. After this, you may want to remove the created
data and metadata pools; this can be done either over the Web GUI or the CLI
with:

----
pveceph pool destroy NAME
----

ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]