[[chapter_pveceph]]
ifdef::manvolnum[]
pveceph(1)
==========
:pve-toplevel:

NAME
----

pveceph - Manage Ceph Services on Proxmox VE Nodes

SYNOPSIS
--------

include::pveceph.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
Deploy Hyper-Converged Ceph Cluster
===================================
:pve-toplevel:
endif::manvolnum[]

[thumbnail="screenshot/gui-ceph-status-dashboard.png"]

{pve} unifies your compute and storage systems, that is, you can use the same
physical nodes within a cluster for both computing (processing VMs and
containers) and replicated storage. The traditional silos of compute and
storage resources can be wrapped up into a single hyper-converged appliance.
Separate storage networks (SANs) and connections via network attached storage
(NAS) disappear. With the integration of Ceph, an open source software-defined
storage platform, {pve} has the ability to run and manage Ceph storage directly
on the hypervisor nodes.

Ceph is a distributed object store and file system designed to provide
excellent performance, reliability and scalability.

.Some advantages of Ceph on {pve} are:
- Easy setup and management via CLI and GUI
- Thin provisioning
- Snapshot support
- Self healing
- Scalable to the exabyte level
- Setup pools with different performance and redundancy characteristics
- Data is replicated, making it fault tolerant
- Runs on commodity hardware
- No need for hardware RAID controllers
- Open source

For small to medium-sized deployments, it is possible to install a Ceph server for
RADOS Block Devices (RBD) directly on your {pve} cluster nodes (see
xref:ceph_rados_block_devices[Ceph RADOS Block Devices (RBD)]). Recent
hardware has a lot of CPU power and RAM, so running storage services
and VMs on the same node is possible.

To simplify management, we provide 'pveceph' - a tool for installing and
managing {ceph} services on {pve} nodes.

.Ceph consists of multiple Daemons, for use as an RBD storage:
- Ceph Monitor (ceph-mon)
- Ceph Manager (ceph-mgr)
- Ceph OSD (ceph-osd; Object Storage Daemon)
TIP: We highly recommend that you get familiar with Ceph
footnote:[Ceph intro {cephdocs-url}/start/intro/],
its architecture
footnote:[Ceph architecture {cephdocs-url}/architecture/]
and vocabulary
footnote:[Ceph glossary {cephdocs-url}/glossary].


Precondition
------------

To build a hyper-converged Proxmox + Ceph cluster, you must use at least
three (preferably identical) servers for the setup.

Also check the recommendations from
{cephdocs-url}/start/hardware-recommendations/[Ceph's website].

.CPU
A high CPU core frequency reduces latency and should be preferred. As a simple
rule of thumb, you should assign a CPU core (or thread) to each Ceph service to
provide enough resources for stable and durable Ceph performance.

.Memory
Especially in a hyper-converged setup, the memory consumption needs to be
carefully monitored. In addition to the predicted memory usage of virtual
machines and containers, you must also account for having enough memory
available for Ceph to provide excellent and stable performance.

As a rule of thumb, an OSD will use roughly **1 GiB of memory for every 1 TiB
of data**, especially during recovery, rebalancing or backfilling.

The daemon itself will use additional memory. By default, the Bluestore backend
of the daemon requires **3-5 GiB of memory** (adjustable). In contrast, the
legacy Filestore backend uses the OS page cache, and its memory consumption is
generally related to the number of PGs of an OSD daemon.

.Network
We recommend a network bandwidth of at least 10 GbE, used exclusively for
Ceph. A meshed network setup
footnote:[Full Mesh Network for Ceph {webwiki-url}Full_Mesh_Network_for_Ceph_Server]
is also an option if there are no 10 GbE switches available.

The volume of traffic, especially during recovery, will interfere with other
services on the same network and may even break the {pve} cluster stack.

Furthermore, you should estimate your bandwidth needs. While one HDD might not
saturate a 1 Gb link, multiple HDD OSDs per node can, and modern NVMe SSDs will
quickly saturate even 10 Gbps of bandwidth. Deploying a network capable of even
more bandwidth will ensure that this isn't your bottleneck and won't be anytime
soon. 25, 40 or even 100 Gbps are possible.

.Disks
When planning the size of your Ceph cluster, it is important to take the
recovery time into consideration. Especially with small clusters, recovery
can take a long time. It is recommended that you use SSDs instead of HDDs in
small setups to reduce recovery time, minimizing the likelihood of a subsequent
failure event during recovery.

In general, SSDs will provide more IOPS than spinning disks. With this
performance difference in mind, as well as the higher cost of SSDs, it may make
sense to implement a xref:pve_ceph_device_classes[class based] separation of
pools. Another way to speed up OSDs is to use a faster disk as a journal or
DB/**W**rite-**A**head-**L**og device; see
xref:pve_ceph_osds[creating Ceph OSDs].
If a faster disk is used for multiple OSDs, a proper balance between the number
of OSDs and the WAL / DB (or journal) disk must be selected, otherwise the
faster disk becomes the bottleneck for all linked OSDs.

Aside from the disk type, Ceph performs best with an evenly sized and
distributed number of disks per node. For example, 4 x 500 GB disks within each
node are better than a mixed setup with a single 1 TB and three 250 GB disks.

You also need to balance OSD count and single OSD capacity. More capacity
allows you to increase storage density, but it also means that a single OSD
failure forces Ceph to recover more data at once.

.Avoid RAID
As Ceph handles data object redundancy and multiple parallel writes to disks
(OSDs) on its own, using a RAID controller normally doesn't improve
performance or availability. On the contrary, Ceph is designed to handle whole
disks on its own, without any abstraction in between. RAID controllers are not
designed for the Ceph workload and may complicate things and sometimes even
reduce performance, as their write and caching algorithms may interfere with
the ones from Ceph.

WARNING: Avoid RAID controllers. Use a host bus adapter (HBA) instead.

NOTE: The above recommendations should be seen as rough guidance for choosing
hardware. Therefore, it is still essential to adapt them to your specific needs.
You should test your setup and monitor health and performance continuously.

[[pve_ceph_install_wizard]]
Initial Ceph Installation & Configuration
-----------------------------------------

Using the Web-based Wizard
~~~~~~~~~~~~~~~~~~~~~~~~~~

[thumbnail="screenshot/gui-node-ceph-install.png"]

With {pve} you have the benefit of an easy-to-use installation wizard
for Ceph. Click on one of your cluster nodes and navigate to the Ceph
section in the menu tree. If Ceph is not already installed, you will see a
prompt offering to do so.

The wizard is divided into multiple sections, each of which needs to
finish successfully in order to use Ceph.

First you need to choose which Ceph version you want to install. Prefer the one
from your other nodes, or the newest one if this is the first node on which you
are installing Ceph.

After starting the installation, the wizard will download and install all the
required packages from {pve}'s Ceph repository.
[thumbnail="screenshot/gui-node-ceph-install-wizard-step0.png"]

After finishing the installation step, you will need to create a configuration.
This step is only needed once per cluster, as this configuration is distributed
automatically to all remaining cluster members through {pve}'s clustered
xref:chapter_pmxcfs[configuration file system (pmxcfs)].

The configuration step includes the following settings:

* *Public Network:* You can set up a dedicated network for Ceph. This
setting is required. Separating your Ceph traffic is highly recommended,
because otherwise it could cause trouble with other latency-dependent services;
for example, cluster communication may decrease Ceph's performance.

[thumbnail="screenshot/gui-node-ceph-install-wizard-step2.png"]

* *Cluster Network:* As an optional step, you can go even further and
separate the xref:pve_ceph_osds[OSD] replication & heartbeat traffic
as well. This will relieve the public network and could lead to
significant performance improvements, especially in large clusters.

You have two more options which are considered advanced and should therefore
only be changed if you know what you are doing.

* *Number of replicas*: Defines how often an object is replicated.
* *Minimum replicas*: Defines the minimum number of required replicas
for I/O to be marked as complete.

Additionally, you need to choose your first monitor node. This step is required.

That's it. You should now see a success page as the last step, with further
instructions on how to proceed. Your system is now ready to start using Ceph.
To get started, you will need to create some additional xref:pve_ceph_monitors[monitors],
xref:pve_ceph_osds[OSDs] and at least one xref:pve_ceph_pools[pool].

The rest of this chapter will guide you through getting the most out of
your {pve} based Ceph setup. This includes the aforementioned tips and
more, such as xref:pveceph_fs[CephFS], which is a helpful addition to your
new Ceph cluster.

[[pve_ceph_install]]
CLI Installation of Ceph Packages
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

As an alternative to the recommended {pve} Ceph installation wizard available
in the web-interface, you can use the following CLI command on each node:

[source,bash]
----
pveceph install
----

This sets up an `apt` package repository in
`/etc/apt/sources.list.d/ceph.list` and installs the required software.
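
If you need a specific Ceph release, recent `pveceph` versions let you select
it at installation time; a minimal sketch, assuming the `--version` parameter
and the release name are valid for your {pve} installation:

[source,bash]
----
# install a specific Ceph release (release name is only an example)
pveceph install --version quincy
----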


Initial Ceph Configuration via CLI
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Use the {pve} Ceph installation wizard (recommended) or run the
following command on one node:

[source,bash]
----
pveceph init --network 10.10.10.0/24
----

This creates an initial configuration at `/etc/pve/ceph.conf` with a
dedicated network for Ceph. This file is automatically distributed to
all {pve} nodes, using xref:chapter_pmxcfs[pmxcfs]. The command also
creates a symbolic link at `/etc/ceph/ceph.conf`, which points to that file.
Thus, you can simply run Ceph commands without the need to specify a
configuration file.
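
To verify the result, you can inspect the generated file and the symbolic
link; a quick check using the paths from above:

[source,bash]
----
# show the cluster-wide Ceph configuration managed by pmxcfs
cat /etc/pve/ceph.conf
# confirm that the symlink points to the pmxcfs-backed file
ls -l /etc/ceph/ceph.conf
----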


[[pve_ceph_monitors]]
Ceph Monitor
------------

[thumbnail="screenshot/gui-ceph-monitor.png"]

The Ceph Monitor (MON)
footnote:[Ceph Monitor {cephdocs-url}/start/intro/]
maintains a master copy of the cluster map. For high availability, you need at
least 3 monitors. One monitor will already be installed if you
used the installation wizard. You won't need more than 3 monitors, as long
as your cluster is small to medium-sized. Only really large clusters will
require more than this.

[[pveceph_create_mon]]
Create Monitors
~~~~~~~~~~~~~~~

On each node where you want to place a monitor (three monitors are recommended),
create one by using the 'Ceph -> Monitor' tab in the GUI or run:


[source,bash]
----
pveceph mon create
----
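
If the node has multiple addresses within the configured Ceph network, you can
pick one explicitly; a sketch, assuming the `--mon-address` option of
`pveceph mon create` and an example address:

[source,bash]
----
# bind the new monitor to a specific address (address is an example)
pveceph mon create --mon-address 10.10.10.2
----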

[[pveceph_destroy_mon]]
Destroy Monitors
~~~~~~~~~~~~~~~~

To remove a Ceph Monitor via the GUI, first select a node in the tree view and
go to the **Ceph -> Monitor** panel. Select the MON and click the **Destroy**
button.

To remove a Ceph Monitor via the CLI, first connect to the node on which the MON
is running. Then execute the following command:
[source,bash]
----
pveceph mon destroy
----

NOTE: At least three Monitors are needed for quorum.
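
After adding or removing monitors, you can verify the monitor map and quorum
state with Ceph's own tooling, for example:

[source,bash]
----
# show the number of monitors, the quorum members and the current leader
ceph mon stat
----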


[[pve_ceph_manager]]
Ceph Manager
------------

The Manager daemon runs alongside the monitors. It provides an interface to
monitor the cluster. Since the release of Ceph Luminous, at least one ceph-mgr
footnote:[Ceph Manager {cephdocs-url}/mgr/] daemon is
required.

[[pveceph_create_mgr]]
Create Manager
~~~~~~~~~~~~~~

Multiple Managers can be installed, but only one Manager is active at any given
time.

[source,bash]
----
pveceph mgr create
----

NOTE: It is recommended to install the Ceph Manager on the monitor nodes. For
high availability, install more than one manager.

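You can check which manager is currently active, for example with:

[source,bash]
----
# print the currently active manager daemon
ceph mgr stat
----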

[[pveceph_destroy_mgr]]
Destroy Manager
~~~~~~~~~~~~~~~

To remove a Ceph Manager via the GUI, first select a node in the tree view and
go to the **Ceph -> Monitor** panel. Select the Manager and click the
**Destroy** button.

To remove a Ceph Manager via the CLI, first connect to the node on which the
Manager is running. Then execute the following command:
[source,bash]
----
pveceph mgr destroy
----

NOTE: While a manager is not a hard dependency, it is crucial for a Ceph cluster,
as it handles important features like PG-autoscaling, device health monitoring,
telemetry and more.

[[pve_ceph_osds]]
Ceph OSDs
---------

[thumbnail="screenshot/gui-ceph-osd-status.png"]

Ceph **O**bject **S**torage **D**aemons store objects for Ceph over the
network. It is recommended to use one OSD per physical disk.

[[pve_ceph_osd_create]]
Create OSDs
~~~~~~~~~~~

You can create an OSD either via the {pve} web-interface or via the CLI using
`pveceph`. For example:

[source,bash]
----
pveceph osd create /dev/sd[X]
----

TIP: We recommend a Ceph cluster with at least three nodes and at least 12
OSDs, evenly distributed among the nodes.

If the disk was in use before (for example, for ZFS or as an OSD), you first
need to zap all traces of that usage. To remove the partition table, boot
sector and any other OSD leftover, you can use the following command:

[source,bash]
----
ceph-volume lvm zap /dev/sd[X] --destroy
----

WARNING: The above command will destroy all data on the disk!
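
Before zapping, it can help to double-check which device is which; a short
sketch using standard tools:

[source,bash]
----
# list block devices with their size and current usage
lsblk
# show LVM volumes that ceph-volume has created on this node
ceph-volume lvm list
----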

.Ceph Bluestore

Starting with the Ceph Kraken release, a new Ceph OSD storage type called
Bluestore footnote:[Ceph Bluestore https://ceph.com/community/new-luminous-bluestore/]
was introduced. This is the default when creating OSDs since Ceph Luminous.

[source,bash]
----
pveceph osd create /dev/sd[X]
----

.Block.db and block.wal

If you want to use a separate DB/WAL device for your OSDs, you can specify it
through the '-db_dev' and '-wal_dev' options. The WAL is placed with the DB, if
not specified separately.

[source,bash]
----
pveceph osd create /dev/sd[X] -db_dev /dev/sd[Y] -wal_dev /dev/sd[Z]
----

You can directly choose the sizes of these with the '-db_size' and '-wal_size'
parameters respectively. If they are not given, the following values (in order)
will be used:

* bluestore_block_{db,wal}_size from Ceph configuration...
** ... database, section 'osd'
** ... database, section 'global'
** ... file, section 'osd'
** ... file, section 'global'
* 10% (DB)/1% (WAL) of OSD size

NOTE: The DB stores BlueStore's internal metadata, and the WAL is BlueStore's
internal journal or write-ahead log. It is recommended to use a fast SSD or
NVRAM for better performance.
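
For example, to give the DB a fixed size instead of relying on the defaults
above; a sketch, assuming sizes are interpreted in GiB as in current `pveceph`
versions:

[source,bash]
----
# create an OSD with a 60 GiB block.db on a separate device (values are examples)
pveceph osd create /dev/sd[X] -db_dev /dev/sd[Y] -db_size 60
----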

.Ceph Filestore

Before Ceph Luminous, Filestore was used as the default storage type for Ceph OSDs.
Starting with Ceph Nautilus, {pve} does not support creating such OSDs with
'pveceph' anymore. If you still want to create filestore OSDs, use
'ceph-volume' directly.

[source,bash]
----
ceph-volume lvm create --filestore --data /dev/sd[X] --journal /dev/sd[Y]
----

[[pve_ceph_osd_destroy]]
Destroy OSDs
~~~~~~~~~~~~

To remove an OSD via the GUI, first select a {PVE} node in the tree view and go
to the **Ceph -> OSD** panel. Then select the OSD to destroy and click the **OUT**
button. Once the OSD status has changed from `in` to `out`, click the **STOP**
button. Finally, after the status has changed from `up` to `down`, select
**Destroy** from the `More` drop-down menu.

To remove an OSD via the CLI, run the following commands.

[source,bash]
----
ceph osd out <ID>
systemctl stop ceph-osd@<ID>.service
----

NOTE: The first command instructs Ceph not to include the OSD in the data
distribution. The second command stops the OSD service. Up to this point, no
data is lost.

The following command destroys the OSD. Specify the '-cleanup' option to
additionally destroy the partition table.

[source,bash]
----
pveceph osd destroy <ID>
----

WARNING: The above command will destroy all data on the disk!


[[pve_ceph_pools]]
Ceph Pools
----------

[thumbnail="screenshot/gui-ceph-pools.png"]

A pool is a logical group for storing objects. It holds a collection of objects,
known as **P**lacement **G**roups (`PG`, `pg_num`).


Create and Edit Pools
~~~~~~~~~~~~~~~~~~~~~

You can create and edit pools from the command line or the web-interface of any
{pve} host under **Ceph -> Pools**.

When no options are given, we set a default of **128 PGs**, a **size of 3
replicas** and a **min_size of 2 replicas**, to ensure no data loss occurs if
any OSD fails.

WARNING: **Do not set a min_size of 1**. A replicated pool with min_size of 1
allows I/O on an object when it has only 1 replica, which could lead to data
loss, incomplete PGs or unfound objects.

It is advised that you either enable the PG-Autoscaler or calculate the PG
number based on your setup. You can find the formula and the PG calculator
footnote:[PG calculator https://ceph.com/pgcalc/] online. From Ceph Nautilus
onward, you can change the number of PGs
footnoteref:[placement_groups,Placement Groups
{cephdocs-url}/rados/operations/placement-groups/] after the setup.

The PG autoscaler footnoteref:[autoscaler,Automated Scaling
{cephdocs-url}/rados/operations/placement-groups/#automated-scaling] can
automatically scale the PG count for a pool in the background. Setting the
`Target Size` or `Target Ratio` advanced parameters helps the PG-Autoscaler to
make better decisions.

.Example for creating a pool over the CLI
[source,bash]
----
pveceph pool create <name> --add_storages
----

TIP: If you would also like to automatically define a storage for your
pool, keep the `Add as Storage' checkbox checked in the web-interface, or use the
command line option '--add_storages' at pool creation.

Pool Options
^^^^^^^^^^^^

[thumbnail="screenshot/gui-ceph-pool-create.png"]

The following options are available on pool creation, and partially also when
editing a pool.

Name:: The name of the pool. This must be unique and can't be changed afterwards.
Size:: The number of replicas per object. Ceph always tries to have this many
copies of an object. Default: `3`.
PG Autoscale Mode:: The automatic PG scaling mode footnoteref:[autoscaler] of
the pool. If set to `warn`, it produces a warning message when a pool
has a non-optimal PG count. Default: `warn`.
Add as Storage:: Configure a VM or container storage using the new pool.
Default: `true` (only visible on creation).

.Advanced Options
Min. Size:: The minimum number of replicas per object. Ceph will reject I/O on
the pool if a PG has less than this many replicas. Default: `2`.
Crush Rule:: The rule to use for mapping object placement in the cluster. These
rules define how data is placed within the cluster. See
xref:pve_ceph_device_classes[Ceph CRUSH & device classes] for information on
device-based rules.
# of PGs:: The number of placement groups footnoteref:[placement_groups] that
the pool should have at the beginning. Default: `128`.
Target Ratio:: The ratio of data that is expected in the pool. The PG
autoscaler uses the ratio relative to other ratio sets. It takes precedence
over the `target size` if both are set.
Target Size:: The estimated amount of data expected in the pool. The PG
autoscaler uses this size to estimate the optimal PG count.
Min. # of PGs:: The minimum number of placement groups. This setting is used to
fine-tune the lower bound of the PG count for that pool. The PG autoscaler
will not merge PGs below this threshold.

Further information on Ceph pool handling can be found in the Ceph pool
operation footnote:[Ceph pool operation
{cephdocs-url}/rados/operations/pools/]
manual.
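
Most of these options can also be adjusted later on the command line; a sketch,
assuming your {pve} release ships the `pveceph pool set` subcommand with these
parameters:

[source,bash]
----
# change the replica counts of an existing pool (values are examples)
pveceph pool set <name> --size 3 --min_size 2
----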


Destroy Pools
~~~~~~~~~~~~~

To destroy a pool via the GUI, select a node in the tree view and go to the
**Ceph -> Pools** panel. Select the pool to destroy and click the **Destroy**
button. To confirm the destruction of the pool, you need to enter the pool name.

Run the following command to destroy a pool. Specify the '-remove_storages'
option to also remove the associated storage.

[source,bash]
----
pveceph pool destroy <name>
----

NOTE: Pool deletion runs in the background and can take some time.
You will notice the data usage in the cluster decreasing throughout this
process.


PG Autoscaler
~~~~~~~~~~~~~

The PG autoscaler allows the cluster to consider the amount of (expected) data
stored in each pool and to choose the appropriate pg_num values automatically.
It has been available since Ceph Nautilus.

You may need to activate the PG autoscaler module before adjustments can take
effect.

[source,bash]
----
ceph mgr module enable pg_autoscaler
----

The autoscaler is configured on a per-pool basis and has the following modes:

[horizontal]
warn:: A health warning is issued if the suggested `pg_num` value differs too
much from the current value.
on:: The `pg_num` is adjusted automatically, with no need for any manual
interaction.
off:: No automatic `pg_num` adjustments are made, and no warning will be issued
if the PG count is not optimal.

The scaling factor can be adjusted to facilitate future data storage with the
`target_size`, `target_size_ratio` and the `pg_num_min` options.
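
For example, to switch a single pool to fully automatic scaling and give the
autoscaler a hint about the share of cluster data it is expected to hold:

[source,bash]
----
# let the autoscaler adjust pg_num for this pool on its own
ceph osd pool set <pool-name> pg_autoscale_mode on
# declare that the pool is expected to hold roughly half of the data
ceph osd pool set <pool-name> target_size_ratio 0.5
----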

WARNING: By default, the autoscaler considers tuning the PG count of a pool if
it is off by a factor of 3. This will lead to a considerable shift in data
placement and might introduce a high load on the cluster.

You can find a more in-depth introduction to the PG autoscaler on Ceph's Blog -
https://ceph.io/rados/new-in-nautilus-pg-merging-and-autotuning/[New in
Nautilus: PG merging and autotuning].


[[pve_ceph_device_classes]]
Ceph CRUSH & device classes
---------------------------

[thumbnail="screenshot/gui-ceph-config.png"]

The CRUSH (**C**ontrolled **R**eplication **U**nder **S**calable **H**ashing)
algorithm footnote:[CRUSH
https://ceph.com/wp-content/uploads/2016/08/weil-crush-sc06.pdf] is at the
foundation of Ceph.

CRUSH calculates where to store and retrieve data from. This has the
advantage that no central indexing service is needed. CRUSH works using a map of
OSDs, buckets (device locations) and rulesets (data replication) for pools.

NOTE: Further information can be found in the Ceph documentation, under the
section CRUSH map footnote:[CRUSH map {cephdocs-url}/rados/operations/crush-map/].

This map can be altered to reflect different replication hierarchies. The object
replicas can be separated (e.g. by failure domains), while maintaining the
desired distribution.

A common configuration is to use different classes of disks for different Ceph
pools. For this reason, Ceph introduced device classes with Luminous, to
accommodate the need for easy ruleset generation.

The device classes can be seen in the 'ceph osd tree' output. These classes
represent their own root bucket, which can be seen with the below command.

[source, bash]
----
ceph osd crush tree --show-shadow
----

Example output from the above command:

[source, bash]
----
ID  CLASS WEIGHT  TYPE NAME
-16 nvme 2.18307 root default~nvme
-13 nvme 0.72769     host sumi1~nvme
 12 nvme 0.72769         osd.12
-14 nvme 0.72769     host sumi2~nvme
 13 nvme 0.72769         osd.13
-15 nvme 0.72769     host sumi3~nvme
 14 nvme 0.72769         osd.14
 -1      7.70544 root default
 -3      2.56848     host sumi1
 12 nvme 0.72769         osd.12
 -5      2.56848     host sumi2
 13 nvme 0.72769         osd.13
 -7      2.56848     host sumi3
 14 nvme 0.72769         osd.14
----

To instruct a pool to only distribute objects on a specific device class, you
first need to create a ruleset for the device class:

[source, bash]
----
ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <class>
----

[frame="none",grid="none", align="left", cols="30%,70%"]
|===
|<rule-name>|name of the rule, to connect with a pool (seen in GUI & CLI)
|<root>|which crush root it should belong to (default ceph root "default")
|<failure-domain>|at which failure-domain the objects should be distributed (usually host)
|<class>|what type of OSD backing store to use (e.g. nvme, ssd, hdd)
|===

Once the rule is in the CRUSH map, you can tell a pool to use the ruleset.

[source, bash]
----
ceph osd pool set <pool-name> crush_rule <rule-name>
----

TIP: If the pool already contains objects, these must be moved accordingly.
Depending on your setup, this may introduce a big performance impact on your
cluster. As an alternative, you can create a new pool and move disks separately.


Ceph Client
-----------

[thumbnail="screenshot/gui-ceph-log.png"]

Following the setup from the previous sections, you can configure {pve} to use
such pools to store VM and Container images. Simply use the GUI to add a new
`RBD` storage (see section
xref:ceph_rados_block_devices[Ceph RADOS Block Devices (RBD)]).

You also need to copy the keyring to a predefined location for an external Ceph
cluster. If Ceph is installed on the Proxmox nodes themselves, then this will be
done automatically.

NOTE: The filename needs to be `<storage_id>` + `.keyring`, where `<storage_id>` is
the expression after 'rbd:' in `/etc/pve/storage.cfg`. In the following example,
`my-ceph-storage` is the `<storage_id>`:

[source,bash]
----
mkdir /etc/pve/priv/ceph
cp /etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/my-ceph-storage.keyring
----

[[pveceph_fs]]
CephFS
------

Ceph also provides a filesystem, which runs on top of the same object storage as
RADOS block devices do. A **M**eta**d**ata **S**erver (`MDS`) is used to map the
RADOS backed objects to files and directories, allowing Ceph to provide a
POSIX-compliant, replicated filesystem. This allows you to easily configure a
clustered, highly available, shared filesystem. Ceph's Metadata Servers
guarantee that files are evenly distributed over the entire Ceph cluster. As a
result, even cases of high load will not overwhelm a single host, which can be
an issue with traditional shared filesystem approaches, for example `NFS`.

[thumbnail="screenshot/gui-node-ceph-cephfs-panel.png"]

{pve} supports both creating a hyper-converged CephFS and using an existing
xref:storage_cephfs[CephFS as storage] to save backups, ISO files, and container
templates.


[[pveceph_fs_mds]]
Metadata Server (MDS)
~~~~~~~~~~~~~~~~~~~~~

CephFS needs at least one Metadata Server to be configured and running, in order
to function. You can create an MDS through the {pve} web GUI's `Node
-> CephFS` panel or from the command line with:

----
pveceph mds create
----

Multiple metadata servers can be created in a cluster, but with the default
settings, only one can be active at a time. If an MDS or its node becomes
unresponsive (or crashes), another `standby` MDS will get promoted to `active`.
You can speed up the handover between the active and standby MDS by using
the 'hotstandby' parameter on creation, or, if you have already created it,
you may set/add:

----
mds standby replay = true
----

in the respective MDS section of `/etc/pve/ceph.conf`. With this enabled, the
specified MDS will remain in a `warm` state, polling the active one, so that it
can take over faster in case of any issues.

NOTE: This active polling will have an additional performance impact on your
system and the active `MDS`.
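
A sketch for creating an MDS directly in hot-standby mode, assuming the
'hotstandby' parameter of `pveceph mds create`:

----
pveceph mds create --hotstandby
----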

.Multiple Active MDS

Since Luminous (12.2.x), you can have multiple active metadata servers
running at once, but this is normally only useful if you have a large number of
clients running in parallel. Otherwise the `MDS` is rarely the bottleneck in a
system. If you want to set this up, please refer to the Ceph documentation.
footnote:[Configuring multiple active MDS daemons
{cephdocs-url}/cephfs/multimds/]

[[pveceph_fs_create]]
Create CephFS
~~~~~~~~~~~~~

With {pve}'s integration of CephFS, you can easily create a CephFS using the
web interface, CLI or an external API interface. Some prerequisites are required
for this to work:

.Prerequisites for a successful CephFS setup:
- xref:pve_ceph_install[Install Ceph packages] - if this was already done some
time ago, you may want to rerun it on an up-to-date system to
ensure that all CephFS related packages get installed.
- xref:pve_ceph_monitors[Setup Monitors]
- xref:pve_ceph_osds[Setup your OSDs]
- xref:pveceph_fs_mds[Setup at least one MDS]

After this is complete, you can simply create a CephFS through
either the Web GUI's `Node -> CephFS` panel or the command line tool `pveceph`,
for example:

----
pveceph fs create --pg_num 128 --add-storage
----

This creates a CephFS named 'cephfs', using a pool for its data named
'cephfs_data' with '128' placement groups and a pool for its metadata named
'cephfs_metadata' with one quarter of the data pool's placement groups (`32`).
Check the xref:pve_ceph_pools[{pve} managed Ceph pool chapter] or visit the
Ceph documentation for more information regarding an appropriate placement group
number (`pg_num`) for your setup footnoteref:[placement_groups].
Additionally, the '--add-storage' parameter will add the CephFS to the {pve}
storage configuration after it has been created successfully.

Destroy CephFS
~~~~~~~~~~~~~~

WARNING: Destroying a CephFS will render all of its data unusable. This cannot be
undone!

If you really want to destroy an existing CephFS, you first need to stop or
destroy all metadata servers (`MDS`). You can destroy them either via the web
interface or via the command line interface, by issuing

----
pveceph mds destroy NAME
----
on each {pve} node hosting an MDS daemon.

Then, you can remove (destroy) the CephFS by issuing

----
ceph fs rm NAME --yes-i-really-mean-it
----
on a single node hosting Ceph. After this, you may want to remove the created
data and metadata pools; this can be done either over the Web GUI or the CLI
with:

----
pveceph pool destroy NAME
----


Ceph Maintenance
----------------

Replace OSDs
~~~~~~~~~~~~

One of the most common maintenance tasks in Ceph is to replace the disk of an
OSD. If a disk is already in a failed state, then you can go ahead and run
through the steps in xref:pve_ceph_osd_destroy[Destroy OSDs]. Ceph will recreate
the lost copies on the remaining OSDs if possible. This rebalancing will start
as soon as an OSD failure is detected or an OSD was actively stopped.

NOTE: With the default size/min_size (3/2) of a pool, recovery only starts when
`size + 1` nodes are available. The reason for this is that the Ceph object
balancer xref:pve_ceph_device_classes[CRUSH] defaults to a full node as
`failure domain'.

To replace a functioning disk from the GUI, go through the steps in
xref:pve_ceph_osd_destroy[Destroy OSDs]. The only addition is to wait until
the cluster shows 'HEALTH_OK' before stopping the OSD to destroy it.

On the command line, use the following commands:

----
ceph osd out osd.<id>
----

You can check whether the OSD can be safely removed with the command below.

----
ceph osd safe-to-destroy osd.<id>
----

Once the above check tells you that it is safe to remove the OSD, you can
continue with the following commands:

----
systemctl stop ceph-osd@<id>.service
pveceph osd destroy <id>
----

Replace the old disk with the new one and use the same procedure as described
in xref:pve_ceph_osd_create[Create OSDs].

Trim/Discard
~~~~~~~~~~~~

It is good practice to run 'fstrim' (discard) regularly on VMs and containers.
This releases data blocks that the filesystem isn't using anymore. It reduces
data usage and resource load. Most modern operating systems issue such discard
commands to their disks regularly. You only need to ensure that the Virtual
Machines enable the xref:qm_hard_disk_discard[disk discard option].
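
Inside a guest, you can also trigger a trim of all mounted filesystems manually,
for example:

----
# run inside the VM or container; trims all supported, mounted filesystems
fstrim -av
----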

[[pveceph_scrub]]
Scrub & Deep Scrub
~~~~~~~~~~~~~~~~~~

Ceph ensures data integrity by 'scrubbing' placement groups. Ceph checks every
object in a PG for its health. There are two forms of scrubbing: daily, cheap
metadata checks and weekly, deep data checks. The weekly deep scrub reads
the objects and uses checksums to ensure data integrity. If a running scrub
interferes with business (performance) needs, you can adjust the time when
scrubs footnote:[Ceph scrubbing {cephdocs-url}/rados/configuration/osd-config-ref/#scrubbing]
are executed.
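
For example, to restrict scrubbing to off-peak hours; a sketch, assuming the
standard `osd_scrub_begin_hour`/`osd_scrub_end_hour` options and the
`ceph config` interface of recent Ceph releases:

----
# only start scrubs between 22:00 and 06:00 (hours are examples)
ceph config set osd osd_scrub_begin_hour 22
ceph config set osd osd_scrub_end_hour 6
----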


Ceph Monitoring and Troubleshooting
-----------------------------------

It is important to continuously monitor the health of a Ceph deployment from the
beginning, either by using the Ceph tools or by accessing
the status through the {pve} link:api-viewer/index.html[API].

The following Ceph commands can be used to see if the cluster is healthy
('HEALTH_OK'), if there are warnings ('HEALTH_WARN'), or even errors
('HEALTH_ERR'). If the cluster is in an unhealthy state, the status commands
below will also give you an overview of the current events and actions to take.

----
# single time output
pve# ceph -s
# continuously output status changes (press CTRL+C to stop)
pve# ceph -w
----

To get a more detailed view, every Ceph service has a log file under
`/var/log/ceph/`. If more detail is required, the log level can be
adjusted footnote:[Ceph log and debugging {cephdocs-url}/rados/troubleshooting/log-and-debug/].

You can find more information about troubleshooting
footnote:[Ceph troubleshooting {cephdocs-url}/rados/troubleshooting/]
a Ceph cluster on the official website.


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]