1[[chapter_pveceph]]
2ifdef::manvolnum[]
3pveceph(1)
4==========
5:pve-toplevel:
6
7NAME
8----
9
10pveceph - Manage Ceph Services on Proxmox VE Nodes
11
12SYNOPSIS
13--------
14
15include::pveceph.1-synopsis.adoc[]
16
17DESCRIPTION
18-----------
19endif::manvolnum[]
20ifndef::manvolnum[]
21Deploy Hyper-Converged Ceph Cluster
22===================================
23:pve-toplevel:
24endif::manvolnum[]
25
26[thumbnail="screenshot/gui-ceph-status.png"]
27
28{pve} unifies your compute and storage systems, that is, you can use the same
29physical nodes within a cluster for both computing (processing VMs and
30containers) and replicated storage. The traditional silos of compute and
31storage resources can be wrapped up into a single hyper-converged appliance.
32Separate storage networks (SANs) and connections via network attached storage
33(NAS) disappear. With the integration of Ceph, an open source software-defined
34storage platform, {pve} has the ability to run and manage Ceph storage directly
35on the hypervisor nodes.
36
37Ceph is a distributed object store and file system designed to provide
38excellent performance, reliability and scalability.
39
40.Some advantages of Ceph on {pve} are:
41- Easy setup and management via CLI and GUI
42- Thin provisioning
43- Snapshot support
44- Self healing
45- Scalable to the exabyte level
46- Setup pools with different performance and redundancy characteristics
47- Data is replicated, making it fault tolerant
48- Runs on commodity hardware
49- No need for hardware RAID controllers
50- Open source
51
52For small to medium-sized deployments, it is possible to install a Ceph server for
53RADOS Block Devices (RBD) directly on your {pve} cluster nodes (see
54xref:ceph_rados_block_devices[Ceph RADOS Block Devices (RBD)]). Recent
55hardware has a lot of CPU power and RAM, so running storage services
56and VMs on the same node is possible.
57
58To simplify management, we provide 'pveceph' - a tool for installing and
59managing {ceph} services on {pve} nodes.
60
61.Ceph consists of multiple Daemons, for use as an RBD storage:
62- Ceph Monitor (ceph-mon)
63- Ceph Manager (ceph-mgr)
64- Ceph OSD (ceph-osd; Object Storage Daemon)
65
TIP: We highly recommend that you get familiar with Ceph
67footnote:[Ceph intro {cephdocs-url}/start/intro/],
68its architecture
69footnote:[Ceph architecture {cephdocs-url}/architecture/]
70and vocabulary
71footnote:[Ceph glossary {cephdocs-url}/glossary].
72
73
74Precondition
75------------
76
To build a hyper-converged Proxmox + Ceph Cluster, you must use at least
three (preferably identical) servers for the setup.
79
80Check also the recommendations from
81{cephdocs-url}/start/hardware-recommendations/[Ceph's website].
82
83.CPU
84A high CPU core frequency reduces latency and should be preferred. As a simple
85rule of thumb, you should assign a CPU core (or thread) to each Ceph service to
86provide enough resources for stable and durable Ceph performance.
87
88.Memory
89Especially in a hyper-converged setup, the memory consumption needs to be
90carefully monitored. In addition to the predicted memory usage of virtual
91machines and containers, you must also account for having enough memory
92available for Ceph to provide excellent and stable performance.
93
As a rule of thumb, for roughly **1 TiB of data, 1 GiB of memory** will be used
by an OSD, especially during recovery, rebalancing or backfilling.
96
The daemon itself will use additional memory. By default, the Bluestore backend
of the daemon requires **3-5 GiB of memory** (adjustable). In contrast, the
legacy Filestore backend uses the OS page cache, and its memory consumption is
generally related to the number of PGs of an OSD daemon.
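
For example, with the default Bluestore backend, the per-OSD memory target can
be adjusted through Ceph's centralized configuration. The `osd_memory_target`
option and the `ceph config` commands shown below are standard Ceph tooling
rather than part of 'pveceph'; check the documentation of your Ceph release
before changing them:

[source,bash]
----
# Raise the per-OSD memory target to 6 GiB (value is given in bytes)
ceph config set osd osd_memory_target 6442450944
# Check the value currently in effect for a specific OSD
ceph config get osd.0 osd_memory_target
----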
101
102.Network
We recommend a network bandwidth of at least 10 GbE, used exclusively for
Ceph. A meshed network setup
footnote:[Full Mesh Network for Ceph {webwiki-url}Full_Mesh_Network_for_Ceph_Server]
is also an option if there are no 10 GbE switches available.
107
108The volume of traffic, especially during recovery, will interfere with other
109services on the same network and may even break the {pve} cluster stack.
110
111Furthermore, you should estimate your bandwidth needs. While one HDD might not
112saturate a 1 Gb link, multiple HDD OSDs per node can, and modern NVMe SSDs will
113even saturate 10 Gbps of bandwidth quickly. Deploying a network capable of even
114more bandwidth will ensure that this isn't your bottleneck and won't be anytime
115soon. 25, 40 or even 100 Gbps are possible.
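
As a rough, illustrative calculation (the per-device throughput figures below
are assumptions, not measurements), you can estimate the required bandwidth
from the number of OSDs per node and their sustained throughput:

[source,bash]
----
# 6 HDD OSDs per node at an assumed ~150 MB/s each:
echo $((6 * 150 * 8)) Mbit/s    # 7200 Mbit/s (~7.2 Gbps) - saturates a 1 Gbps link
# 2 NVMe OSDs per node at an assumed ~2000 MB/s each:
echo $((2 * 2000 * 8)) Mbit/s   # 32000 Mbit/s (~32 Gbps) - saturates a 10 Gbps link
----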
116
117.Disks
118When planning the size of your Ceph cluster, it is important to take the
119recovery time into consideration. Especially with small clusters, recovery
might take a long time. It is recommended that you use SSDs instead of HDDs in small
121setups to reduce recovery time, minimizing the likelihood of a subsequent
122failure event during recovery.
123
In general, SSDs will provide more IOPS than spinning disks. With this in mind,
and in addition to the higher cost, it may make sense to implement a
126xref:pve_ceph_device_classes[class based] separation of pools. Another way to
127speed up OSDs is to use a faster disk as a journal or
128DB/**W**rite-**A**head-**L**og device, see xref:pve_ceph_osds[creating Ceph
129OSDs]. If a faster disk is used for multiple OSDs, a proper balance between OSD
130and WAL / DB (or journal) disk must be selected, otherwise the faster disk
131becomes the bottleneck for all linked OSDs.
132
Aside from the disk type, Ceph performs best with an evenly sized and evenly
distributed number of disks per node. For example, 4 x 500 GB disks within each
node are better than a mixed setup with a single 1 TB and three 250 GB disks.
136
137You also need to balance OSD count and single OSD capacity. More capacity
138allows you to increase storage density, but it also means that a single OSD
139failure forces Ceph to recover more data at once.
140
141.Avoid RAID
142As Ceph handles data object redundancy and multiple parallel writes to disks
143(OSDs) on its own, using a RAID controller normally doesn’t improve
144performance or availability. On the contrary, Ceph is designed to handle whole
disks on its own, without any abstraction in between. RAID controllers are not
146designed for the Ceph workload and may complicate things and sometimes even
reduce performance, as their write and caching algorithms may interfere with
those of Ceph.
149
WARNING: Avoid RAID controllers. Use a host bus adapter (HBA) instead.
151
NOTE: The above recommendations should be seen as rough guidance for choosing
hardware. It is still essential to adapt them to your specific needs. You
should test your setup and monitor health and performance continuously.
155
156[[pve_ceph_install_wizard]]
157Initial Ceph Installation & Configuration
158-----------------------------------------
159
160[thumbnail="screenshot/gui-node-ceph-install.png"]
161
With {pve} you have the benefit of an easy-to-use installation wizard
163for Ceph. Click on one of your cluster nodes and navigate to the Ceph
164section in the menu tree. If Ceph is not already installed, you will see a
165prompt offering to do so.
166
The wizard is divided into multiple sections, each of which needs to
finish successfully in order to use Ceph. After starting the installation,
169the wizard will download and install all the required packages from {pve}'s Ceph
170repository.
171
172After finishing the first step, you will need to create a configuration.
173This step is only needed once per cluster, as this configuration is distributed
174automatically to all remaining cluster members through {pve}'s clustered
175xref:chapter_pmxcfs[configuration file system (pmxcfs)].
176
177The configuration step includes the following settings:
178
179* *Public Network:* You can set up a dedicated network for Ceph. This
180setting is required. Separating your Ceph traffic is highly recommended.
Otherwise, it could cause trouble with other latency-dependent services;
for example, cluster communication may decrease Ceph's performance.
183
184[thumbnail="screenshot/gui-node-ceph-install-wizard-step2.png"]
185
186* *Cluster Network:* As an optional step, you can go even further and
187separate the xref:pve_ceph_osds[OSD] replication & heartbeat traffic
188as well. This will relieve the public network and could lead to
189significant performance improvements, especially in large clusters.
190
There are two more options, which are considered advanced and should
therefore only be changed if you know what you are doing.
193
194* *Number of replicas*: Defines how often an object is replicated
195* *Minimum replicas*: Defines the minimum number of required replicas
196for I/O to be marked as complete.
197
198Additionally, you need to choose your first monitor node. This step is required.
199
200That's it. You should now see a success page as the last step, with further
201instructions on how to proceed. Your system is now ready to start using Ceph.
202To get started, you will need to create some additional xref:pve_ceph_monitors[monitors],
203xref:pve_ceph_osds[OSDs] and at least one xref:pve_ceph_pools[pool].
204
205The rest of this chapter will guide you through getting the most out of
206your {pve} based Ceph setup. This includes the aforementioned tips and
207more, such as xref:pveceph_fs[CephFS], which is a helpful addition to your
208new Ceph cluster.
209
210[[pve_ceph_install]]
211Installation of Ceph Packages
212-----------------------------
213Use the {pve} Ceph installation wizard (recommended) or run the following
214command on each node:
215
216[source,bash]
217----
218pveceph install
219----
220
221This sets up an `apt` package repository in
222`/etc/apt/sources.list.d/ceph.list` and installs the required software.
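
You can verify the result afterwards, for example by checking the configured
repository and the installed Ceph version:

[source,bash]
----
cat /etc/apt/sources.list.d/ceph.list
ceph --version
----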
223
224
225Create initial Ceph configuration
226---------------------------------
227
228[thumbnail="screenshot/gui-ceph-config.png"]
229
230Use the {pve} Ceph installation wizard (recommended) or run the
231following command on one node:
232
233[source,bash]
234----
235pveceph init --network 10.10.10.0/24
236----
237
238This creates an initial configuration at `/etc/pve/ceph.conf` with a
239dedicated network for Ceph. This file is automatically distributed to
240all {pve} nodes, using xref:chapter_pmxcfs[pmxcfs]. The command also
241creates a symbolic link at `/etc/ceph/ceph.conf`, which points to that file.
242Thus, you can simply run Ceph commands without the need to specify a
243configuration file.
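
A quick way to confirm this is, for example, to check that the symbolic link
is in place on a node:

[source,bash]
----
# the link should point to /etc/pve/ceph.conf
ls -l /etc/ceph/ceph.conf
----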
244
245
246[[pve_ceph_monitors]]
247Ceph Monitor
------------
249The Ceph Monitor (MON)
250footnote:[Ceph Monitor {cephdocs-url}/start/intro/]
251maintains a master copy of the cluster map. For high availability, you need at
252least 3 monitors. One monitor will already be installed if you
253used the installation wizard. You won't need more than 3 monitors, as long
254as your cluster is small to medium-sized. Only really large clusters will
255require more than this.
256
257
258[[pveceph_create_mon]]
259Create Monitors
260~~~~~~~~~~~~~~~
261
262[thumbnail="screenshot/gui-ceph-monitor.png"]
263
264On each node where you want to place a monitor (three monitors are recommended),
265create one by using the 'Ceph -> Monitor' tab in the GUI or run:
266
267
268[source,bash]
269----
270pveceph mon create
271----
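
To verify that the new monitor has joined the quorum, you can, for example,
query the monitor map with a standard Ceph command:

[source,bash]
----
ceph mon stat
----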
272
273[[pveceph_destroy_mon]]
274Destroy Monitors
275~~~~~~~~~~~~~~~~
276
277To remove a Ceph Monitor via the GUI, first select a node in the tree view and
278go to the **Ceph -> Monitor** panel. Select the MON and click the **Destroy**
279button.
280
281To remove a Ceph Monitor via the CLI, first connect to the node on which the MON
282is running. Then execute the following command:
283[source,bash]
284----
285pveceph mon destroy
286----
287
288NOTE: At least three Monitors are needed for quorum.
289
290
291[[pve_ceph_manager]]
292Ceph Manager
293------------
294
295The Manager daemon runs alongside the monitors. It provides an interface to
296monitor the cluster. Since the release of Ceph luminous, at least one ceph-mgr
297footnote:[Ceph Manager {cephdocs-url}/mgr/] daemon is
298required.
299
300[[pveceph_create_mgr]]
301Create Manager
302~~~~~~~~~~~~~~
303
304Multiple Managers can be installed, but only one Manager is active at any given
305time.
306
307[source,bash]
308----
309pveceph mgr create
310----
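
To check which Manager is currently active, you can, for example, run the
standard Ceph command:

[source,bash]
----
ceph mgr stat
----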
311
NOTE: It is recommended to install the Ceph Manager on the monitor nodes. For
high availability, install more than one manager.
314
315
316[[pveceph_destroy_mgr]]
317Destroy Manager
318~~~~~~~~~~~~~~~
319
320To remove a Ceph Manager via the GUI, first select a node in the tree view and
321go to the **Ceph -> Monitor** panel. Select the Manager and click the
322**Destroy** button.
323
To remove a Ceph Manager via the CLI, first connect to the node on which the
325Manager is running. Then execute the following command:
326[source,bash]
327----
328pveceph mgr destroy
329----
330
NOTE: While a manager is not a hard dependency, it is crucial for a Ceph cluster,
as it handles important features like PG autoscaling, device health monitoring,
telemetry and more.
334
335[[pve_ceph_osds]]
336Ceph OSDs
337---------
338Ceph **O**bject **S**torage **D**aemons store objects for Ceph over the
339network. It is recommended to use one OSD per physical disk.
340
341NOTE: By default an object is 4 MiB in size.
342
343[[pve_ceph_osd_create]]
344Create OSDs
345~~~~~~~~~~~
346
347[thumbnail="screenshot/gui-ceph-osd-status.png"]
348
349You can create an OSD either via the {pve} web-interface or via the CLI using
350`pveceph`. For example:
351
352[source,bash]
353----
354pveceph osd create /dev/sd[X]
355----
356
357TIP: We recommend a Ceph cluster with at least three nodes and at least 12
358OSDs, evenly distributed among the nodes.
359
If the disk was in use before (for example, for ZFS or as an OSD), you first need
361to zap all traces of that usage. To remove the partition table, boot sector and
362any other OSD leftover, you can use the following command:
363
364[source,bash]
365----
366ceph-volume lvm zap /dev/sd[X] --destroy
367----
368
369WARNING: The above command will destroy all data on the disk!
370
371.Ceph Bluestore
372
Starting with the Ceph Kraken release, a new Ceph OSD storage type called
Bluestore
footnote:[Ceph Bluestore https://ceph.com/community/new-luminous-bluestore/]
was introduced. It has been the default when creating OSDs since Ceph Luminous.
377
378[source,bash]
379----
380pveceph osd create /dev/sd[X]
381----
382
383.Block.db and block.wal
384
385If you want to use a separate DB/WAL device for your OSDs, you can specify it
386through the '-db_dev' and '-wal_dev' options. The WAL is placed with the DB, if
387not specified separately.
388
389[source,bash]
390----
391pveceph osd create /dev/sd[X] -db_dev /dev/sd[Y] -wal_dev /dev/sd[Z]
392----
393
394You can directly choose the size of those with the '-db_size' and '-wal_size'
395parameters respectively. If they are not given, the following values (in order)
396will be used:
397
398* bluestore_block_{db,wal}_size from Ceph configuration...
399** ... database, section 'osd'
400** ... database, section 'global'
401** ... file, section 'osd'
402** ... file, section 'global'
403* 10% (DB)/1% (WAL) of OSD size
404
405NOTE: The DB stores BlueStore’s internal metadata, and the WAL is BlueStore’s
406internal journal or write-ahead log. It is recommended to use a fast SSD or
407NVRAM for better performance.
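
To check whether such sizes are already set in the Ceph configuration database
(the first fallbacks listed above), you can, for example, query them with
standard Ceph commands:

[source,bash]
----
ceph config get osd bluestore_block_db_size
ceph config get osd bluestore_block_wal_size
----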
408
409
410.Ceph Filestore
411
412Before Ceph Luminous, Filestore was used as the default storage type for Ceph OSDs.
413Starting with Ceph Nautilus, {pve} does not support creating such OSDs with
414'pveceph' anymore. If you still want to create filestore OSDs, use
415'ceph-volume' directly.
416
417[source,bash]
418----
419ceph-volume lvm create --filestore --data /dev/sd[X] --journal /dev/sd[Y]
420----
421
422[[pve_ceph_osd_destroy]]
423Destroy OSDs
424~~~~~~~~~~~~
425
426To remove an OSD via the GUI, first select a {PVE} node in the tree view and go
427to the **Ceph -> OSD** panel. Then select the OSD to destroy and click the **OUT**
428button. Once the OSD status has changed from `in` to `out`, click the **STOP**
429button. Finally, after the status has changed from `up` to `down`, select
430**Destroy** from the `More` drop-down menu.
431
To remove an OSD via the CLI, run the following commands.
433
434[source,bash]
435----
436ceph osd out <ID>
437systemctl stop ceph-osd@<ID>.service
438----
439
440NOTE: The first command instructs Ceph not to include the OSD in the data
distribution. The second command stops the OSD service. Up to this point, no
data is lost.
443
444The following command destroys the OSD. Specify the '-cleanup' option to
445additionally destroy the partition table.
446
447[source,bash]
448----
449pveceph osd destroy <ID>
450----
451
452WARNING: The above command will destroy all data on the disk!
453
454
455[[pve_ceph_pools]]
456Ceph Pools
457----------
458A pool is a logical group for storing objects. It holds a collection of objects,
459known as **P**lacement **G**roups (`PG`, `pg_num`).
460
461
462Create and Edit Pools
463~~~~~~~~~~~~~~~~~~~~~
464
465You can create pools from the command line or the web-interface of any {pve}
466host under **Ceph -> Pools**.
467
468[thumbnail="screenshot/gui-ceph-pools.png"]
469
470When no options are given, we set a default of **128 PGs**, a **size of 3
471replicas** and a **min_size of 2 replicas**, to ensure no data loss occurs if
472any OSD fails.
473
474WARNING: **Do not set a min_size of 1**. A replicated pool with min_size of 1
475allows I/O on an object when it has only 1 replica, which could lead to data
476loss, incomplete PGs or unfound objects.
477
478It is advised that you calculate the PG number based on your setup. You can
479find the formula and the PG calculator footnote:[PG calculator
480https://ceph.com/pgcalc/] online. From Ceph Nautilus onward, you can change the
481number of PGs footnoteref:[placement_groups,Placement Groups
482{cephdocs-url}/rados/operations/placement-groups/] after the setup.
483
484In addition to manual adjustment, the PG autoscaler
485footnoteref:[autoscaler,Automated Scaling
486{cephdocs-url}/rados/operations/placement-groups/#automated-scaling] can
487automatically scale the PG count for a pool in the background.
488
489.Example for creating a pool over the CLI
490[source,bash]
491----
492pveceph pool create <name> --add_storages
493----
494
495TIP: If you would also like to automatically define a storage for your
496pool, keep the `Add as Storage' checkbox checked in the web-interface, or use the
497command line option '--add_storages' at pool creation.
498
499.Base Options
500Name:: The name of the pool. This must be unique and can't be changed afterwards.
501Size:: The number of replicas per object. Ceph always tries to have this many
502copies of an object. Default: `3`.
503PG Autoscale Mode:: The automatic PG scaling mode footnoteref:[autoscaler] of
504the pool. If set to `warn`, it produces a warning message when a pool
505has a non-optimal PG count. Default: `warn`.
506Add as Storage:: Configure a VM or container storage using the new pool.
507Default: `true` (only visible on creation).
508
509.Advanced Options
510Min. Size:: The minimum number of replicas per object. Ceph will reject I/O on
the pool if a PG has fewer than this many replicas. Default: `2`.
512Crush Rule:: The rule to use for mapping object placement in the cluster. These
513rules define how data is placed within the cluster. See
514xref:pve_ceph_device_classes[Ceph CRUSH & device classes] for information on
515device-based rules.
516# of PGs:: The number of placement groups footnoteref:[placement_groups] that
517the pool should have at the beginning. Default: `128`.
Target Size Ratio:: The ratio of data that is expected in the pool. The PG
autoscaler uses the ratio relative to the ratios of other pools. It takes
precedence over the `target size` if both are set.
521Target Size:: The estimated amount of data expected in the pool. The PG
522autoscaler uses this size to estimate the optimal PG count.
523Min. # of PGs:: The minimum number of placement groups. This setting is used to
524fine-tune the lower bound of the PG count for that pool. The PG autoscaler
525will not merge PGs below this threshold.
526
527Further information on Ceph pool handling can be found in the Ceph pool
528operation footnote:[Ceph pool operation
529{cephdocs-url}/rados/operations/pools/]
530manual.
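
For example, the expected pool size can also be hinted to the PG autoscaler
from the CLI with standard Ceph commands (the pool name and values below are
placeholders):

[source,bash]
----
# Expect this pool to hold roughly half of the cluster's data
ceph osd pool set <pool-name> target_size_ratio 0.5
# Or give an absolute estimate instead, here 10 TiB
ceph osd pool set <pool-name> target_size_bytes 10T
----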
531
532
533Destroy Pools
534~~~~~~~~~~~~~
535
536To destroy a pool via the GUI, select a node in the tree view and go to the
537**Ceph -> Pools** panel. Select the pool to destroy and click the **Destroy**
538button. To confirm the destruction of the pool, you need to enter the pool name.
539
Run the following command to destroy a pool. Specify the '-remove_storages'
option to also remove the associated storage.
542
543[source,bash]
544----
545pveceph pool destroy <name>
546----
547
548NOTE: Pool deletion runs in the background and can take some time.
549You will notice the data usage in the cluster decreasing throughout this
550process.
551
552
553PG Autoscaler
554~~~~~~~~~~~~~
555
556The PG autoscaler allows the cluster to consider the amount of (expected) data
557stored in each pool and to choose the appropriate pg_num values automatically.
558
559You may need to activate the PG autoscaler module before adjustments can take
560effect.
561
562[source,bash]
563----
564ceph mgr module enable pg_autoscaler
565----
566
567The autoscaler is configured on a per pool basis and has the following modes:
568
569[horizontal]
570warn:: A health warning is issued if the suggested `pg_num` value differs too
571much from the current value.
572on:: The `pg_num` is adjusted automatically with no need for any manual
573interaction.
574off:: No automatic `pg_num` adjustments are made, and no warning will be issued
575if the PG count is not optimal.
576
577The scaling factor can be adjusted to facilitate future data storage with the
578`target_size`, `target_size_ratio` and the `pg_num_min` options.
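
For example, to let the autoscaler manage a specific pool and to review its
current recommendations (standard Ceph commands; the pool name is a
placeholder):

[source,bash]
----
ceph osd pool set <pool-name> pg_autoscale_mode on
ceph osd pool autoscale-status
----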
579
580WARNING: By default, the autoscaler considers tuning the PG count of a pool if
581it is off by a factor of 3. This will lead to a considerable shift in data
582placement and might introduce a high load on the cluster.
583
584You can find a more in-depth introduction to the PG autoscaler on Ceph's Blog -
585https://ceph.io/rados/new-in-nautilus-pg-merging-and-autotuning/[New in
586Nautilus: PG merging and autotuning].
587
588
589[[pve_ceph_device_classes]]
590Ceph CRUSH & device classes
591---------------------------
592The footnote:[CRUSH
593https://ceph.com/wp-content/uploads/2016/08/weil-crush-sc06.pdf] (**C**ontrolled
594**R**eplication **U**nder **S**calable **H**ashing) algorithm is at the
595foundation of Ceph.
596
597CRUSH calculates where to store and retrieve data from. This has the
598advantage that no central indexing service is needed. CRUSH works using a map of
599OSDs, buckets (device locations) and rulesets (data replication) for pools.
600
601NOTE: Further information can be found in the Ceph documentation, under the
602section CRUSH map footnote:[CRUSH map {cephdocs-url}/rados/operations/crush-map/].
603
604This map can be altered to reflect different replication hierarchies. The object
replicas can be separated (e.g. failure domains), while maintaining the desired
606distribution.
607
608A common configuration is to use different classes of disks for different Ceph
609pools. For this reason, Ceph introduced device classes with luminous, to
610accommodate the need for easy ruleset generation.
611
The device classes can be seen in the 'ceph osd tree' output. Each class is
represented by its own root bucket, which can be seen with the command below.
614
615[source, bash]
616----
617ceph osd crush tree --show-shadow
618----
619
Example output from the above command:
621
622[source, bash]
623----
624ID CLASS WEIGHT TYPE NAME
625-16 nvme 2.18307 root default~nvme
626-13 nvme 0.72769 host sumi1~nvme
627 12 nvme 0.72769 osd.12
628-14 nvme 0.72769 host sumi2~nvme
629 13 nvme 0.72769 osd.13
630-15 nvme 0.72769 host sumi3~nvme
631 14 nvme 0.72769 osd.14
632 -1 7.70544 root default
633 -3 2.56848 host sumi1
634 12 nvme 0.72769 osd.12
635 -5 2.56848 host sumi2
636 13 nvme 0.72769 osd.13
637 -7 2.56848 host sumi3
638 14 nvme 0.72769 osd.14
639----
640
641To instruct a pool to only distribute objects on a specific device class, you
642first need to create a ruleset for the device class:
643
644[source, bash]
645----
646ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <class>
647----
648
649[frame="none",grid="none", align="left", cols="30%,70%"]
650|===
651|<rule-name>|name of the rule, to connect with a pool (seen in GUI & CLI)
652|<root>|which crush root it should belong to (default ceph root "default")
653|<failure-domain>|at which failure-domain the objects should be distributed (usually host)
|<class>|what type of OSD backing store to use (e.g. nvme, ssd, hdd)
655|===
656
657Once the rule is in the CRUSH map, you can tell a pool to use the ruleset.
658
659[source, bash]
660----
661ceph osd pool set <pool-name> crush_rule <rule-name>
662----
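
Putting both steps together, a hypothetical example that restricts a pool to
SSD-backed OSDs, with replicas distributed across hosts, could look like this
(the rule and pool names below are placeholders):

[source, bash]
----
ceph osd crush rule create-replicated ssd-only default host ssd
ceph osd pool set vm-ssd-pool crush_rule ssd-only
----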
663
664TIP: If the pool already contains objects, these must be moved accordingly.
Depending on your setup, this may introduce a significant performance impact on your
666cluster. As an alternative, you can create a new pool and move disks separately.
667
668
669Ceph Client
670-----------
671
672[thumbnail="screenshot/gui-ceph-log.png"]
673
674Following the setup from the previous sections, you can configure {pve} to use
675such pools to store VM and Container images. Simply use the GUI to add a new
676`RBD` storage (see section xref:ceph_rados_block_devices[Ceph RADOS Block
677Devices (RBD)]).
678
679You also need to copy the keyring to a predefined location for an external Ceph
cluster. If Ceph is installed on the Proxmox nodes themselves, then this will be
681done automatically.
682
NOTE: The filename needs to be `<storage_id>` + `.keyring`, where `<storage_id>` is
684the expression after 'rbd:' in `/etc/pve/storage.cfg`. In the following example,
685`my-ceph-storage` is the `<storage_id>`:
686
687[source,bash]
688----
689mkdir /etc/pve/priv/ceph
690cp /etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/my-ceph-storage.keyring
691----
692
693[[pveceph_fs]]
694CephFS
695------
696
697Ceph also provides a filesystem, which runs on top of the same object storage as
698RADOS block devices do. A **M**eta**d**ata **S**erver (`MDS`) is used to map the
699RADOS backed objects to files and directories, allowing Ceph to provide a
700POSIX-compliant, replicated filesystem. This allows you to easily configure a
701clustered, highly available, shared filesystem. Ceph's Metadata Servers
702guarantee that files are evenly distributed over the entire Ceph cluster. As a
703result, even cases of high load will not overwhelm a single host, which can be
704an issue with traditional shared filesystem approaches, for example `NFS`.
705
706[thumbnail="screenshot/gui-node-ceph-cephfs-panel.png"]
707
708{pve} supports both creating a hyper-converged CephFS and using an existing
709xref:storage_cephfs[CephFS as storage] to save backups, ISO files, and container
710templates.
711
712
713[[pveceph_fs_mds]]
714Metadata Server (MDS)
715~~~~~~~~~~~~~~~~~~~~~
716
717CephFS needs at least one Metadata Server to be configured and running, in order
718to function. You can create an MDS through the {pve} web GUI's `Node
719-> CephFS` panel or from the command line with:
720
721----
722pveceph mds create
723----
724
725Multiple metadata servers can be created in a cluster, but with the default
726settings, only one can be active at a time. If an MDS or its node becomes
727unresponsive (or crashes), another `standby` MDS will get promoted to `active`.
You can speed up the handover between the active and standby MDS by using
the 'hotstandby' parameter on creation or, if you have already created the MDS,
by setting/adding:
731
732----
733mds standby replay = true
734----
735
736in the respective MDS section of `/etc/pve/ceph.conf`. With this enabled, the
737specified MDS will remain in a `warm` state, polling the active one, so that it
738can take over faster in case of any issues.
739
740NOTE: This active polling will have an additional performance impact on your
741system and the active `MDS`.
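
As mentioned above, the hot-standby behavior can also be requested directly at
creation time; a minimal sketch, assuming the parameter is passed as a boolean
flag to 'pveceph mds create':

----
pveceph mds create --hotstandby
----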
742
743.Multiple Active MDS
744
745Since Luminous (12.2.x) you can have multiple active metadata servers
running at once, but this is normally only useful if you have a large number of
clients running in parallel. Otherwise, the `MDS` is rarely the bottleneck in a
748system. If you want to set this up, please refer to the Ceph documentation.
749footnote:[Configuring multiple active MDS daemons
750{cephdocs-url}/cephfs/multimds/]
751
752[[pveceph_fs_create]]
753Create CephFS
754~~~~~~~~~~~~~
755
756With {pve}'s integration of CephFS, you can easily create a CephFS using the
757web interface, CLI or an external API interface. Some prerequisites are required
758for this to work:
759
760.Prerequisites for a successful CephFS setup:
761- xref:pve_ceph_install[Install Ceph packages] - if this was already done some
762time ago, you may want to rerun it on an up-to-date system to
763ensure that all CephFS related packages get installed.
764- xref:pve_ceph_monitors[Setup Monitors]
- xref:pve_ceph_osds[Setup your OSDs]
766- xref:pveceph_fs_mds[Setup at least one MDS]
767
768After this is complete, you can simply create a CephFS through
769either the Web GUI's `Node -> CephFS` panel or the command line tool `pveceph`,
770for example:
771
772----
773pveceph fs create --pg_num 128 --add-storage
774----
775
776This creates a CephFS named 'cephfs', using a pool for its data named
777'cephfs_data' with '128' placement groups and a pool for its metadata named
778'cephfs_metadata' with one quarter of the data pool's placement groups (`32`).
779Check the xref:pve_ceph_pools[{pve} managed Ceph pool chapter] or visit the
780Ceph documentation for more information regarding an appropriate placement group
781number (`pg_num`) for your setup footnoteref:[placement_groups].
782Additionally, the '--add-storage' parameter will add the CephFS to the {pve}
783storage configuration after it has been created successfully.
784
785Destroy CephFS
786~~~~~~~~~~~~~~
787
788WARNING: Destroying a CephFS will render all of its data unusable. This cannot be
789undone!
790
791If you really want to destroy an existing CephFS, you first need to stop or
destroy all metadata servers (`MDS`). You can destroy them either via the web
793interface or via the command line interface, by issuing
794
795----
796pveceph mds destroy NAME
797----
798on each {pve} node hosting an MDS daemon.
799
800Then, you can remove (destroy) the CephFS by issuing
801
802----
803ceph fs rm NAME --yes-i-really-mean-it
804----
on a single node hosting Ceph. After this, you may want to remove the created
data and metadata pools. This can be done either via the Web GUI or the CLI
with:
808
809----
810pveceph pool destroy NAME
811----
812
813
814Ceph maintenance
815----------------
816
817Replace OSDs
818~~~~~~~~~~~~
819
820One of the most common maintenance tasks in Ceph is to replace the disk of an
821OSD. If a disk is already in a failed state, then you can go ahead and run
822through the steps in xref:pve_ceph_osd_destroy[Destroy OSDs]. Ceph will recreate
823those copies on the remaining OSDs if possible. This rebalancing will start as
824soon as an OSD failure is detected or an OSD was actively stopped.
825
826NOTE: With the default size/min_size (3/2) of a pool, recovery only starts when
827`size + 1` nodes are available. The reason for this is that the Ceph object
828balancer xref:pve_ceph_device_classes[CRUSH] defaults to a full node as
829`failure domain'.
830
831To replace a functioning disk from the GUI, go through the steps in
832xref:pve_ceph_osd_destroy[Destroy OSDs]. The only addition is to wait until
833the cluster shows 'HEALTH_OK' before stopping the OSD to destroy it.
834
835On the command line, use the following commands:
836
837----
838ceph osd out osd.<id>
839----
840
You can check whether the OSD can be safely removed with the command below.
842
843----
844ceph osd safe-to-destroy osd.<id>
845----
846
847Once the above check tells you that it is safe to remove the OSD, you can
848continue with the following commands:
849
850----
851systemctl stop ceph-osd@<id>.service
852pveceph osd destroy <id>
853----
854
855Replace the old disk with the new one and use the same procedure as described
856in xref:pve_ceph_osd_create[Create OSDs].
857
858Trim/Discard
859~~~~~~~~~~~~
860
861It is good practice to run 'fstrim' (discard) regularly on VMs and containers.
862This releases data blocks that the filesystem isn’t using anymore. It reduces
863data usage and resource load. Most modern operating systems issue such discard
864commands to their disks regularly. You only need to ensure that the Virtual
865Machines enable the xref:qm_hard_disk_discard[disk discard option].
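
Inside a guest, this can, for example, be triggered manually or through a
periodic timer (the `fstrim.timer` unit is shipped by most modern
distributions, but verify that your guest OS provides it):

----
# trim all mounted filesystems that support discard (run inside the guest)
fstrim -av
# or enable the periodic systemd timer, if the guest OS ships it
systemctl enable --now fstrim.timer
----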
866
867[[pveceph_scrub]]
868Scrub & Deep Scrub
869~~~~~~~~~~~~~~~~~~
870
871Ceph ensures data integrity by 'scrubbing' placement groups. Ceph checks every
object in a PG for its health. There are two forms of scrubbing: daily
cheap metadata checks and weekly deep data checks. The weekly deep scrub reads
874the objects and uses checksums to ensure data integrity. If a running scrub
875interferes with business (performance) needs, you can adjust the time when
876scrubs footnote:[Ceph scrubbing {cephdocs-url}/rados/configuration/osd-config-ref/#scrubbing]
877are executed.
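
If needed, scrubs can, for example, be restricted to a nightly window. The
`osd_scrub_begin_hour`/`osd_scrub_end_hour` options and the `ceph config`
commands below are standard Ceph tooling; check the scrubbing reference above
for the options available in your release:

----
ceph config set osd osd_scrub_begin_hour 22
ceph config set osd osd_scrub_end_hour 6
----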
878
879
880Ceph Monitoring and Troubleshooting
881-----------------------------------
882
883It is important to continuously monitor the health of a Ceph deployment from the
884beginning, either by using the Ceph tools or by accessing
885the status through the {pve} link:api-viewer/index.html[API].
886
887The following Ceph commands can be used to see if the cluster is healthy
888('HEALTH_OK'), if there are warnings ('HEALTH_WARN'), or even errors
889('HEALTH_ERR'). If the cluster is in an unhealthy state, the status commands
890below will also give you an overview of the current events and actions to take.
891
892----
893# single time output
894pve# ceph -s
895# continuously output status changes (press CTRL+C to stop)
896pve# ceph -w
897----
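
If the status shows warnings or errors, `ceph health detail` gives a more
verbose explanation of each issue, for example:

----
# verbose listing of all current health issues
pve# ceph health detail
----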
898
899To get a more detailed view, every Ceph service has a log file under
900`/var/log/ceph/`. If more detail is required, the log level can be
901adjusted footnote:[Ceph log and debugging {cephdocs-url}/rados/troubleshooting/log-and-debug/].
902
903You can find more information about troubleshooting
904footnote:[Ceph troubleshooting {cephdocs-url}/rados/troubleshooting/]
905a Ceph cluster on the official website.
906
907
908ifdef::manvolnum[]
909include::pve-copyright.adoc[]
910endif::manvolnum[]