1 [[chapter_pveceph]]
2 ifdef::manvolnum[]
3 pveceph(1)
4 ==========
5 :pve-toplevel:
6
7 NAME
8 ----
9
10 pveceph - Manage Ceph Services on Proxmox VE Nodes
11
12 SYNOPSIS
13 --------
14
15 include::pveceph.1-synopsis.adoc[]
16
17 DESCRIPTION
18 -----------
19 endif::manvolnum[]
20 ifndef::manvolnum[]
21 Deploy Hyper-Converged Ceph Cluster
22 ===================================
23 :pve-toplevel:
24 endif::manvolnum[]
25
26 [thumbnail="screenshot/gui-ceph-status-dashboard.png"]
27
28 {pve} unifies your compute and storage systems, that is, you can use the same
29 physical nodes within a cluster for both computing (processing VMs and
30 containers) and replicated storage. The traditional silos of compute and
31 storage resources can be wrapped up into a single hyper-converged appliance.
32 Separate storage networks (SANs) and connections via network attached storage
33 (NAS) disappear. With the integration of Ceph, an open source software-defined
34 storage platform, {pve} has the ability to run and manage Ceph storage directly
35 on the hypervisor nodes.
36
37 Ceph is a distributed object store and file system designed to provide
38 excellent performance, reliability and scalability.
39
40 .Some advantages of Ceph on {pve} are:
41 - Easy setup and management via CLI and GUI
42 - Thin provisioning
43 - Snapshot support
44 - Self healing
45 - Scalable to the exabyte level
46 - Setup pools with different performance and redundancy characteristics
47 - Data is replicated, making it fault tolerant
48 - Runs on commodity hardware
49 - No need for hardware RAID controllers
50 - Open source
51
52 For small to medium-sized deployments, it is possible to install a Ceph server for
53 RADOS Block Devices (RBD) directly on your {pve} cluster nodes (see
54 xref:ceph_rados_block_devices[Ceph RADOS Block Devices (RBD)]). Recent
55 hardware has a lot of CPU power and RAM, so running storage services
56 and VMs on the same node is possible.
57
58 To simplify management, we provide 'pveceph' - a tool for installing and
59 managing {ceph} services on {pve} nodes.
60
61 .Ceph consists of multiple Daemons, for use as an RBD storage:
62 - Ceph Monitor (ceph-mon)
63 - Ceph Manager (ceph-mgr)
64 - Ceph OSD (ceph-osd; Object Storage Daemon)
65
TIP: We highly recommend getting familiar with Ceph
67 footnote:[Ceph intro {cephdocs-url}/start/intro/],
68 its architecture
69 footnote:[Ceph architecture {cephdocs-url}/architecture/]
70 and vocabulary
71 footnote:[Ceph glossary {cephdocs-url}/glossary].
72
73
74 Precondition
75 ------------
76
To build a hyper-converged Proxmox + Ceph Cluster, you must use at least
three (preferably identical) servers for the setup.
79
80 Check also the recommendations from
81 {cephdocs-url}/start/hardware-recommendations/[Ceph's website].
82
83 .CPU
84 A high CPU core frequency reduces latency and should be preferred. As a simple
85 rule of thumb, you should assign a CPU core (or thread) to each Ceph service to
86 provide enough resources for stable and durable Ceph performance.
87
88 .Memory
89 Especially in a hyper-converged setup, the memory consumption needs to be
90 carefully monitored. In addition to the predicted memory usage of virtual
91 machines and containers, you must also account for having enough memory
92 available for Ceph to provide excellent and stable performance.
93
As a rule of thumb, for roughly **1 TiB of data, 1 GiB of memory** will be used
by an OSD, especially during recovery, re-balancing or backfilling.
96
The daemon itself will use additional memory. The BlueStore backend of the
daemon requires by default **3-5 GiB of memory** (adjustable). In contrast, the
legacy Filestore backend uses the OS page cache, and its memory consumption is
generally related to the number of PGs of an OSD daemon.
101
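If you need to adjust the BlueStore memory limit mentioned above, it maps to
Ceph's `osd_memory_target` option. A minimal sketch of checking and raising it
cluster-wide (the 6 GiB value is only an illustration):

[source,bash]
----
# show the current BlueStore memory target (in bytes)
ceph config get osd osd_memory_target

# raise it to 6 GiB for all OSDs (example value, adjust to your hardware)
ceph config set osd osd_memory_target 6442450944
----
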
102 .Network
We recommend a network bandwidth of at least 10 Gbps, used
104 exclusively for Ceph. A meshed network setup
105 footnote:[Full Mesh Network for Ceph {webwiki-url}Full_Mesh_Network_for_Ceph_Server]
106 is also an option if there are no 10 GbE switches available.
107
108 The volume of traffic, especially during recovery, will interfere with other
109 services on the same network and may even break the {pve} cluster stack.
110
111 Furthermore, you should estimate your bandwidth needs. While one HDD might not
112 saturate a 1 Gb link, multiple HDD OSDs per node can, and modern NVMe SSDs will
113 even saturate 10 Gbps of bandwidth quickly. Deploying a network capable of even
114 more bandwidth will ensure that this isn't your bottleneck and won't be anytime
115 soon. 25, 40 or even 100 Gbps are possible.
116
117 .Disks
118 When planning the size of your Ceph cluster, it is important to take the
119 recovery time into consideration. Especially with small clusters, recovery
120 might take long. It is recommended that you use SSDs instead of HDDs in small
121 setups to reduce recovery time, minimizing the likelihood of a subsequent
122 failure event during recovery.
123
124 In general, SSDs will provide more IOPS than spinning disks. With this in mind,
125 in addition to the higher cost, it may make sense to implement a
126 xref:pve_ceph_device_classes[class based] separation of pools. Another way to
127 speed up OSDs is to use a faster disk as a journal or
128 DB/**W**rite-**A**head-**L**og device, see
129 xref:pve_ceph_osds[creating Ceph OSDs].
130 If a faster disk is used for multiple OSDs, a proper balance between OSD
131 and WAL / DB (or journal) disk must be selected, otherwise the faster disk
132 becomes the bottleneck for all linked OSDs.
133
134 Aside from the disk type, Ceph performs best with an even sized and distributed
135 amount of disks per node. For example, 4 x 500 GB disks within each node is
better than a mixed setup with a single 1 TB and three 250 GB disks.
137
138 You also need to balance OSD count and single OSD capacity. More capacity
139 allows you to increase storage density, but it also means that a single OSD
140 failure forces Ceph to recover more data at once.
141
142 .Avoid RAID
143 As Ceph handles data object redundancy and multiple parallel writes to disks
144 (OSDs) on its own, using a RAID controller normally doesn’t improve
145 performance or availability. On the contrary, Ceph is designed to handle whole
disks on its own, without any abstraction in between. RAID controllers are not
147 designed for the Ceph workload and may complicate things and sometimes even
148 reduce performance, as their write and caching algorithms may interfere with
149 the ones from Ceph.
150
WARNING: Avoid RAID controllers. Use a host bus adapter (HBA) instead.
152
NOTE: The above recommendations should be seen as rough guidance for choosing
hardware. It is still essential to adapt them to your specific needs.
155 You should test your setup and monitor health and performance continuously.
156
157 [[pve_ceph_install_wizard]]
158 Initial Ceph Installation & Configuration
159 -----------------------------------------
160
161 Using the Web-based Wizard
162 ~~~~~~~~~~~~~~~~~~~~~~~~~~
163
164 [thumbnail="screenshot/gui-node-ceph-install.png"]
165
166 With {pve} you have the benefit of an easy to use installation wizard
167 for Ceph. Click on one of your cluster nodes and navigate to the Ceph
168 section in the menu tree. If Ceph is not already installed, you will see a
169 prompt offering to do so.
170
The wizard is divided into multiple sections, each of which needs to
finish successfully in order to use Ceph.
173
First you need to choose which Ceph version you want to install. Prefer the one
from your other nodes, or the newest if this is the first node you install
Ceph on.
177
178 After starting the installation, the wizard will download and install all the
179 required packages from {pve}'s Ceph repository.
180 [thumbnail="screenshot/gui-node-ceph-install-wizard-step0.png"]
181
182 After finishing the installation step, you will need to create a configuration.
183 This step is only needed once per cluster, as this configuration is distributed
184 automatically to all remaining cluster members through {pve}'s clustered
185 xref:chapter_pmxcfs[configuration file system (pmxcfs)].
186
187 The configuration step includes the following settings:
188
189 * *Public Network:* You can set up a dedicated network for Ceph. This
190 setting is required. Separating your Ceph traffic is highly recommended.
Otherwise, it could cause trouble with other latency-dependent services;
for example, cluster communication may decrease Ceph's performance.
193
194 [thumbnail="screenshot/gui-node-ceph-install-wizard-step2.png"]
195
196 * *Cluster Network:* As an optional step, you can go even further and
197 separate the xref:pve_ceph_osds[OSD] replication & heartbeat traffic
198 as well. This will relieve the public network and could lead to
199 significant performance improvements, especially in large clusters.
200
You have two more options which are considered advanced and therefore
should only be changed if you know what you are doing.
203
204 * *Number of replicas*: Defines how often an object is replicated
205 * *Minimum replicas*: Defines the minimum number of required replicas
206 for I/O to be marked as complete.
207
208 Additionally, you need to choose your first monitor node. This step is required.
209
210 That's it. You should now see a success page as the last step, with further
211 instructions on how to proceed. Your system is now ready to start using Ceph.
212 To get started, you will need to create some additional xref:pve_ceph_monitors[monitors],
213 xref:pve_ceph_osds[OSDs] and at least one xref:pve_ceph_pools[pool].
214
215 The rest of this chapter will guide you through getting the most out of
216 your {pve} based Ceph setup. This includes the aforementioned tips and
217 more, such as xref:pveceph_fs[CephFS], which is a helpful addition to your
218 new Ceph cluster.
219
220 [[pve_ceph_install]]
221 CLI Installation of Ceph Packages
222 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
223
As an alternative to the recommended {pve} Ceph installation wizard available
225 in the web-interface, you can use the following CLI command on each node:
226
227 [source,bash]
228 ----
229 pveceph install
230 ----
231
232 This sets up an `apt` package repository in
233 `/etc/apt/sources.list.d/ceph.list` and installs the required software.
234
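The repository and Ceph version selection offered by the wizard are also
available on the command line; as a sketch, assuming a {pve} release where
`pveceph install` supports the `--repository` and `--version` options:

[source,bash]
----
# example: use the no-subscription repository and the Reef release
pveceph install --repository no-subscription --version reef
----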
235
236 Initial Ceph configuration via CLI
237 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
238
239 Use the {pve} Ceph installation wizard (recommended) or run the
240 following command on one node:
241
242 [source,bash]
243 ----
244 pveceph init --network 10.10.10.0/24
245 ----
246
247 This creates an initial configuration at `/etc/pve/ceph.conf` with a
248 dedicated network for Ceph. This file is automatically distributed to
249 all {pve} nodes, using xref:chapter_pmxcfs[pmxcfs]. The command also
250 creates a symbolic link at `/etc/ceph/ceph.conf`, which points to that file.
251 Thus, you can simply run Ceph commands without the need to specify a
252 configuration file.
253
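If you also want a separate network for OSD replication and heartbeat traffic
(the 'Cluster Network' from the wizard), the initialization can be sketched as
follows, assuming your `pveceph` version supports the `--cluster-network`
option (subnets are examples):

[source,bash]
----
pveceph init --network 10.10.10.0/24 --cluster-network 10.10.20.0/24
----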
254
255 [[pve_ceph_monitors]]
256 Ceph Monitor
------------
258
259 [thumbnail="screenshot/gui-ceph-monitor.png"]
260
261 The Ceph Monitor (MON)
262 footnote:[Ceph Monitor {cephdocs-url}/start/intro/]
263 maintains a master copy of the cluster map. For high availability, you need at
264 least 3 monitors. One monitor will already be installed if you
265 used the installation wizard. You won't need more than 3 monitors, as long
266 as your cluster is small to medium-sized. Only really large clusters will
267 require more than this.
268
269 [[pveceph_create_mon]]
270 Create Monitors
271 ~~~~~~~~~~~~~~~
272
273 On each node where you want to place a monitor (three monitors are recommended),
274 create one by using the 'Ceph -> Monitor' tab in the GUI or run:
275
276
277 [source,bash]
278 ----
279 pveceph mon create
280 ----
281
282 [[pveceph_destroy_mon]]
283 Destroy Monitors
284 ~~~~~~~~~~~~~~~~
285
286 To remove a Ceph Monitor via the GUI, first select a node in the tree view and
287 go to the **Ceph -> Monitor** panel. Select the MON and click the **Destroy**
288 button.
289
290 To remove a Ceph Monitor via the CLI, first connect to the node on which the MON
291 is running. Then execute the following command:
292 [source,bash]
293 ----
294 pveceph mon destroy
295 ----
296
297 NOTE: At least three Monitors are needed for quorum.
298
299
300 [[pve_ceph_manager]]
301 Ceph Manager
302 ------------
303
304 The Manager daemon runs alongside the monitors. It provides an interface to
monitor the cluster. Since the release of Ceph Luminous, at least one ceph-mgr
306 footnote:[Ceph Manager {cephdocs-url}/mgr/] daemon is
307 required.
308
309 [[pveceph_create_mgr]]
310 Create Manager
311 ~~~~~~~~~~~~~~
312
313 Multiple Managers can be installed, but only one Manager is active at any given
314 time.
315
316 [source,bash]
317 ----
318 pveceph mgr create
319 ----
320
321 NOTE: It is recommended to install the Ceph Manager on the monitor nodes. For
high availability, install more than one manager.
323
324
325 [[pveceph_destroy_mgr]]
326 Destroy Manager
327 ~~~~~~~~~~~~~~~
328
329 To remove a Ceph Manager via the GUI, first select a node in the tree view and
330 go to the **Ceph -> Monitor** panel. Select the Manager and click the
331 **Destroy** button.
332
To remove a Ceph Manager via the CLI, first connect to the node on which the
334 Manager is running. Then execute the following command:
335 [source,bash]
336 ----
337 pveceph mgr destroy
338 ----
339
NOTE: While a manager is not a hard dependency, it is crucial for a Ceph cluster,
341 as it handles important features like PG-autoscaling, device health monitoring,
342 telemetry and more.
343
344 [[pve_ceph_osds]]
345 Ceph OSDs
346 ---------
347
348 [thumbnail="screenshot/gui-ceph-osd-status.png"]
349
350 Ceph **O**bject **S**torage **D**aemons store objects for Ceph over the
351 network. It is recommended to use one OSD per physical disk.
352
353 [[pve_ceph_osd_create]]
354 Create OSDs
355 ~~~~~~~~~~~
356
357 You can create an OSD either via the {pve} web-interface or via the CLI using
358 `pveceph`. For example:
359
360 [source,bash]
361 ----
362 pveceph osd create /dev/sd[X]
363 ----
364
365 TIP: We recommend a Ceph cluster with at least three nodes and at least 12
366 OSDs, evenly distributed among the nodes.
367
368 If the disk was in use before (for example, for ZFS or as an OSD) you first need
369 to zap all traces of that usage. To remove the partition table, boot sector and
370 any other OSD leftover, you can use the following command:
371
372 [source,bash]
373 ----
374 ceph-volume lvm zap /dev/sd[X] --destroy
375 ----
376
377 WARNING: The above command will destroy all data on the disk!
378
379 .Ceph Bluestore
380
381 Starting with the Ceph Kraken release, a new Ceph OSD storage type was
382 introduced called Bluestore
383 footnote:[Ceph Bluestore https://ceph.com/community/new-luminous-bluestore/].
384 This is the default when creating OSDs since Ceph Luminous.
385
386 [source,bash]
387 ----
388 pveceph osd create /dev/sd[X]
389 ----
390
391 .Block.db and block.wal
392
393 If you want to use a separate DB/WAL device for your OSDs, you can specify it
394 through the '-db_dev' and '-wal_dev' options. The WAL is placed with the DB, if
395 not specified separately.
396
397 [source,bash]
398 ----
399 pveceph osd create /dev/sd[X] -db_dev /dev/sd[Y] -wal_dev /dev/sd[Z]
400 ----
401
402 You can directly choose the size of those with the '-db_size' and '-wal_size'
403 parameters respectively. If they are not given, the following values (in order)
404 will be used:
405
406 * bluestore_block_{db,wal}_size from Ceph configuration...
407 ** ... database, section 'osd'
408 ** ... database, section 'global'
409 ** ... file, section 'osd'
410 ** ... file, section 'global'
411 * 10% (DB)/1% (WAL) of OSD size
412
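For example, a hypothetical invocation that pins both sizes explicitly
(assuming the sizes are given in GiB; the device names are placeholders):

[source,bash]
----
# 60 GiB DB and 2 GiB WAL on separate, faster devices (illustrative values)
pveceph osd create /dev/sd[X] -db_dev /dev/sd[Y] -db_size 60 -wal_dev /dev/sd[Z] -wal_size 2
----
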
413 NOTE: The DB stores BlueStore’s internal metadata, and the WAL is BlueStore’s
414 internal journal or write-ahead log. It is recommended to use a fast SSD or
415 NVRAM for better performance.
416
417 .Ceph Filestore
418
419 Before Ceph Luminous, Filestore was used as the default storage type for Ceph OSDs.
420 Starting with Ceph Nautilus, {pve} does not support creating such OSDs with
421 'pveceph' anymore. If you still want to create filestore OSDs, use
422 'ceph-volume' directly.
423
424 [source,bash]
425 ----
426 ceph-volume lvm create --filestore --data /dev/sd[X] --journal /dev/sd[Y]
427 ----
428
429 [[pve_ceph_osd_destroy]]
430 Destroy OSDs
431 ~~~~~~~~~~~~
432
433 To remove an OSD via the GUI, first select a {PVE} node in the tree view and go
434 to the **Ceph -> OSD** panel. Then select the OSD to destroy and click the **OUT**
435 button. Once the OSD status has changed from `in` to `out`, click the **STOP**
436 button. Finally, after the status has changed from `up` to `down`, select
437 **Destroy** from the `More` drop-down menu.
438
439 To remove an OSD via the CLI run the following commands.
440
441 [source,bash]
442 ----
443 ceph osd out <ID>
444 systemctl stop ceph-osd@<ID>.service
445 ----
446
447 NOTE: The first command instructs Ceph not to include the OSD in the data
448 distribution. The second command stops the OSD service. Until this time, no
449 data is lost.
450
451 The following command destroys the OSD. Specify the '-cleanup' option to
452 additionally destroy the partition table.
453
454 [source,bash]
455 ----
456 pveceph osd destroy <ID>
457 ----
458
459 WARNING: The above command will destroy all data on the disk!
460
461
462 [[pve_ceph_pools]]
463 Ceph Pools
464 ----------
465
466 [thumbnail="screenshot/gui-ceph-pools.png"]
467
468 A pool is a logical group for storing objects. It holds a collection of objects,
469 known as **P**lacement **G**roups (`PG`, `pg_num`).
470
471
472 Create and Edit Pools
473 ~~~~~~~~~~~~~~~~~~~~~
474
475 You can create and edit pools from the command line or the web-interface of any
476 {pve} host under **Ceph -> Pools**.
477
478 When no options are given, we set a default of **128 PGs**, a **size of 3
479 replicas** and a **min_size of 2 replicas**, to ensure no data loss occurs if
480 any OSD fails.
481
482 WARNING: **Do not set a min_size of 1**. A replicated pool with min_size of 1
483 allows I/O on an object when it has only 1 replica, which could lead to data
484 loss, incomplete PGs or unfound objects.
485
486 It is advised that you either enable the PG-Autoscaler or calculate the PG
487 number based on your setup. You can find the formula and the PG calculator
488 footnote:[PG calculator https://web.archive.org/web/20210301111112/http://ceph.com/pgcalc/] online. From Ceph Nautilus
489 onward, you can change the number of PGs
490 footnoteref:[placement_groups,Placement Groups
491 {cephdocs-url}/rados/operations/placement-groups/] after the setup.
492
493 The PG autoscaler footnoteref:[autoscaler,Automated Scaling
494 {cephdocs-url}/rados/operations/placement-groups/#automated-scaling] can
495 automatically scale the PG count for a pool in the background. Setting the
496 `Target Size` or `Target Ratio` advanced parameters helps the PG-Autoscaler to
497 make better decisions.
498
499 .Example for creating a pool over the CLI
500 [source,bash]
501 ----
502 pveceph pool create <pool-name> --add_storages
503 ----
504
505 TIP: If you would also like to automatically define a storage for your
506 pool, keep the `Add as Storage' checkbox checked in the web-interface, or use the
507 command line option '--add_storages' at pool creation.
508
509 Pool Options
510 ^^^^^^^^^^^^
511
512 [thumbnail="screenshot/gui-ceph-pool-create.png"]
513
514 The following options are available on pool creation, and partially also when
515 editing a pool.
516
517 Name:: The name of the pool. This must be unique and can't be changed afterwards.
518 Size:: The number of replicas per object. Ceph always tries to have this many
519 copies of an object. Default: `3`.
520 PG Autoscale Mode:: The automatic PG scaling mode footnoteref:[autoscaler] of
521 the pool. If set to `warn`, it produces a warning message when a pool
522 has a non-optimal PG count. Default: `warn`.
523 Add as Storage:: Configure a VM or container storage using the new pool.
524 Default: `true` (only visible on creation).
525
526 .Advanced Options
527 Min. Size:: The minimum number of replicas per object. Ceph will reject I/O on
528 the pool if a PG has less than this many replicas. Default: `2`.
529 Crush Rule:: The rule to use for mapping object placement in the cluster. These
530 rules define how data is placed within the cluster. See
531 xref:pve_ceph_device_classes[Ceph CRUSH & device classes] for information on
532 device-based rules.
533 # of PGs:: The number of placement groups footnoteref:[placement_groups] that
534 the pool should have at the beginning. Default: `128`.
535 Target Ratio:: The ratio of data that is expected in the pool. The PG
536 autoscaler uses the ratio relative to other ratio sets. It takes precedence
537 over the `target size` if both are set.
538 Target Size:: The estimated amount of data expected in the pool. The PG
539 autoscaler uses this size to estimate the optimal PG count.
540 Min. # of PGs:: The minimum number of placement groups. This setting is used to
541 fine-tune the lower bound of the PG count for that pool. The PG autoscaler
542 will not merge PGs below this threshold.
543
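Most of these options can also be changed after pool creation. As a sketch,
assuming your `pveceph` version provides the `pool set` subcommand with these
option names:

[source,bash]
----
# let the autoscaler manage the PG count and hint at the pool's expected
# share of the cluster data (illustrative values)
pveceph pool set <pool-name> --pg_autoscale_mode on --target_size_ratio 0.5
----
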
544 Further information on Ceph pool handling can be found in the Ceph pool
545 operation footnote:[Ceph pool operation
546 {cephdocs-url}/rados/operations/pools/]
547 manual.
548
549
550 [[pve_ceph_ec_pools]]
551 Erasure Coded Pools
552 ~~~~~~~~~~~~~~~~~~~
553
Erasure coding (EC) is a form of `forward error correction' code that allows
recovery from a certain amount of data loss. Erasure coded pools can offer
556 more usable space compared to replicated pools, but they do that for the price
557 of performance.
558
559 For comparison: in classic, replicated pools, multiple replicas of the data
are stored (`size`), while in erasure coded pools, data is split into `k` data
561 chunks with additional `m` coding (checking) chunks. Those coding chunks can be
562 used to recreate data should data chunks be missing.
563
564 The number of coding chunks, `m`, defines how many OSDs can be lost without
565 losing any data. The total amount of objects stored is `k + m`.
566
567 Creating EC Pools
568 ^^^^^^^^^^^^^^^^^
569
570 Erasure coded (EC) pools can be created with the `pveceph` CLI tooling.
Planning an EC pool needs to account for the fact that they work differently
572 than replicated pools.
573
574 The default `min_size` of an EC pool depends on the `m` parameter. If `m = 1`,
575 the `min_size` of the EC pool will be `k`. The `min_size` will be `k + 1` if
576 `m > 1`. The Ceph documentation recommends a conservative `min_size` of `k + 2`
577 footnote:[Ceph Erasure Coded Pool Recovery
578 {cephdocs-url}/rados/operations/erasure-code/#erasure-coded-pool-recovery].
579
If there are fewer than `min_size` OSDs available, any IO to the pool will be
581 blocked until there are enough OSDs available again.
582
583 NOTE: When planning an erasure coded pool, keep an eye on the `min_size` as it
584 defines how many OSDs need to be available. Otherwise, IO will be blocked.
585
586 For example, an EC pool with `k = 2` and `m = 1` will have `size = 3`,
587 `min_size = 2` and will stay operational if one OSD fails. If the pool is
588 configured with `k = 2`, `m = 2`, it will have a `size = 4` and `min_size = 3`
589 and stay operational if one OSD is lost.
590
591 To create a new EC pool, run the following command:
592
593 [source,bash]
594 ----
595 pveceph pool create <pool-name> --erasure-coding k=2,m=1
596 ----
597
598 Optional parameters are `failure-domain` and `device-class`. If you
599 need to change any EC profile settings used by the pool, you will have to
600 create a new pool with a new profile.
601
602 This will create a new EC pool plus the needed replicated pool to store the RBD
603 omap and other metadata. In the end, there will be a `<pool name>-data` and
`<pool name>-metadata` pool. The default behavior is to create a matching storage
605 configuration as well. If that behavior is not wanted, you can disable it by
606 providing the `--add_storages 0` parameter. When configuring the storage
607 configuration manually, keep in mind that the `data-pool` parameter needs to be
608 set. Only then will the EC pool be used to store the data objects. For example:
609
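[source,bash]
----
pvesm add rbd <storage-name> --pool <pool-name>-metadata --data-pool <pool-name>-data
----

In this sketch, the replicated metadata pool is passed as `--pool` and the
erasure coded data pool as `--data-pool`; the same command is shown again in
the section about adding EC pools as storage below.
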
610 NOTE: The optional parameters `--size`, `--min_size` and `--crush_rule` will be
611 used for the replicated metadata pool, but not for the erasure coded data pool.
612 If you need to change the `min_size` on the data pool, you can do it later.
613 The `size` and `crush_rule` parameters cannot be changed on erasure coded
614 pools.
615
616 If there is a need to further customize the EC profile, you can do so by
617 creating it with the Ceph tools directly footnote:[Ceph Erasure Code Profile
618 {cephdocs-url}/rados/operations/erasure-code/#erasure-code-profiles], and
619 specify the profile to use with the `profile` parameter.
620
621 For example:
622 [source,bash]
623 ----
624 pveceph pool create <pool-name> --erasure-coding profile=<profile-name>
625 ----
626
627 Adding EC Pools as Storage
628 ^^^^^^^^^^^^^^^^^^^^^^^^^^
629
630 You can add an already existing EC pool as storage to {pve}. It works the same
631 way as adding an `RBD` pool but requires the extra `data-pool` option.
632
633 [source,bash]
634 ----
635 pvesm add rbd <storage-name> --pool <replicated-pool> --data-pool <ec-pool>
636 ----
637
638 TIP: Do not forget to add the `keyring` and `monhost` option for any external
639 ceph clusters, not managed by the local {pve} cluster.
640
641 Destroy Pools
642 ~~~~~~~~~~~~~
643
644 To destroy a pool via the GUI, select a node in the tree view and go to the
645 **Ceph -> Pools** panel. Select the pool to destroy and click the **Destroy**
646 button. To confirm the destruction of the pool, you need to enter the pool name.
647
Run the following command to destroy a pool. Specify the '-remove_storages'
option to also remove the associated storage.
650
651 [source,bash]
652 ----
653 pveceph pool destroy <name>
654 ----
655
656 NOTE: Pool deletion runs in the background and can take some time.
657 You will notice the data usage in the cluster decreasing throughout this
658 process.
659
660
661 PG Autoscaler
662 ~~~~~~~~~~~~~
663
664 The PG autoscaler allows the cluster to consider the amount of (expected) data
665 stored in each pool and to choose the appropriate pg_num values automatically.
666 It is available since Ceph Nautilus.
667
668 You may need to activate the PG autoscaler module before adjustments can take
669 effect.
670
671 [source,bash]
672 ----
673 ceph mgr module enable pg_autoscaler
674 ----
675
676 The autoscaler is configured on a per pool basis and has the following modes:
677
678 [horizontal]
679 warn:: A health warning is issued if the suggested `pg_num` value differs too
680 much from the current value.
681 on:: The `pg_num` is adjusted automatically with no need for any manual
682 interaction.
683 off:: No automatic `pg_num` adjustments are made, and no warning will be issued
684 if the PG count is not optimal.
685
686 The scaling factor can be adjusted to facilitate future data storage with the
687 `target_size`, `target_size_ratio` and the `pg_num_min` options.
688
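These options can be set per pool with the standard Ceph tooling. A minimal
sketch (pool name and values are placeholders):

[source,bash]
----
# turn the autoscaler on for a pool and hint at its expected share of the
# total cluster data
ceph osd pool set <pool-name> pg_autoscale_mode on
ceph osd pool set <pool-name> target_size_ratio 0.5
----
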
689 WARNING: By default, the autoscaler considers tuning the PG count of a pool if
690 it is off by a factor of 3. This will lead to a considerable shift in data
691 placement and might introduce a high load on the cluster.
692
693 You can find a more in-depth introduction to the PG autoscaler on Ceph's Blog -
694 https://ceph.io/rados/new-in-nautilus-pg-merging-and-autotuning/[New in
695 Nautilus: PG merging and autotuning].
696
697
698 [[pve_ceph_device_classes]]
699 Ceph CRUSH & device classes
700 ---------------------------
701
702 [thumbnail="screenshot/gui-ceph-config.png"]
703
The **C**ontrolled **R**eplication **U**nder **S**calable **H**ashing (CRUSH)
footnote:[CRUSH https://ceph.com/wp-content/uploads/2016/08/weil-crush-sc06.pdf]
algorithm is at the foundation of Ceph.
708
709 CRUSH calculates where to store and retrieve data from. This has the
710 advantage that no central indexing service is needed. CRUSH works using a map of
711 OSDs, buckets (device locations) and rulesets (data replication) for pools.
712
713 NOTE: Further information can be found in the Ceph documentation, under the
714 section CRUSH map footnote:[CRUSH map {cephdocs-url}/rados/operations/crush-map/].
715
716 This map can be altered to reflect different replication hierarchies. The object
717 replicas can be separated (e.g., failure domains), while maintaining the desired
718 distribution.
719
720 A common configuration is to use different classes of disks for different Ceph
pools. For this reason, Ceph introduced device classes with Luminous, to
722 accommodate the need for easy ruleset generation.
723
724 The device classes can be seen in the 'ceph osd tree' output. These classes
725 represent their own root bucket, which can be seen with the below command.
726
727 [source, bash]
728 ----
729 ceph osd crush tree --show-shadow
730 ----
731
Example output from the above command:
733
734 [source, bash]
735 ----
736 ID CLASS WEIGHT TYPE NAME
737 -16 nvme 2.18307 root default~nvme
738 -13 nvme 0.72769 host sumi1~nvme
739 12 nvme 0.72769 osd.12
740 -14 nvme 0.72769 host sumi2~nvme
741 13 nvme 0.72769 osd.13
742 -15 nvme 0.72769 host sumi3~nvme
743 14 nvme 0.72769 osd.14
744 -1 7.70544 root default
745 -3 2.56848 host sumi1
746 12 nvme 0.72769 osd.12
747 -5 2.56848 host sumi2
748 13 nvme 0.72769 osd.13
749 -7 2.56848 host sumi3
750 14 nvme 0.72769 osd.14
751 ----
752
753 To instruct a pool to only distribute objects on a specific device class, you
754 first need to create a ruleset for the device class:
755
756 [source, bash]
757 ----
758 ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <class>
759 ----
760
761 [frame="none",grid="none", align="left", cols="30%,70%"]
762 |===
763 |<rule-name>|name of the rule, to connect with a pool (seen in GUI & CLI)
764 |<root>|which crush root it should belong to (default ceph root "default")
765 |<failure-domain>|at which failure-domain the objects should be distributed (usually host)
766 |<class>|what type of OSD backing store to use (e.g., nvme, ssd, hdd)
767 |===
768
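For example, a hypothetical rule that keeps a pool on SSD-backed OSDs,
distributed across hosts, could be created like this:

[source, bash]
----
ceph osd crush rule create-replicated ssd_rule default host ssd
----
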
769 Once the rule is in the CRUSH map, you can tell a pool to use the ruleset.
770
771 [source, bash]
772 ----
773 ceph osd pool set <pool-name> crush_rule <rule-name>
774 ----
775
776 TIP: If the pool already contains objects, these must be moved accordingly.
777 Depending on your setup, this may introduce a big performance impact on your
778 cluster. As an alternative, you can create a new pool and move disks separately.
779
780
781 Ceph Client
782 -----------
783
784 [thumbnail="screenshot/gui-ceph-log.png"]
785
786 Following the setup from the previous sections, you can configure {pve} to use
787 such pools to store VM and Container images. Simply use the GUI to add a new
788 `RBD` storage (see section
789 xref:ceph_rados_block_devices[Ceph RADOS Block Devices (RBD)]).
790
791 You also need to copy the keyring to a predefined location for an external Ceph
cluster. If Ceph is installed on the Proxmox nodes themselves, then this will be
793 done automatically.
794
NOTE: The filename needs to be `<storage_id>` + `.keyring`, where `<storage_id>` is
796 the expression after 'rbd:' in `/etc/pve/storage.cfg`. In the following example,
797 `my-ceph-storage` is the `<storage_id>`:
798
799 [source,bash]
800 ----
801 mkdir /etc/pve/priv/ceph
802 cp /etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/my-ceph-storage.keyring
803 ----
804
805 [[pveceph_fs]]
806 CephFS
807 ------
808
809 Ceph also provides a filesystem, which runs on top of the same object storage as
810 RADOS block devices do. A **M**eta**d**ata **S**erver (`MDS`) is used to map the
811 RADOS backed objects to files and directories, allowing Ceph to provide a
812 POSIX-compliant, replicated filesystem. This allows you to easily configure a
813 clustered, highly available, shared filesystem. Ceph's Metadata Servers
814 guarantee that files are evenly distributed over the entire Ceph cluster. As a
815 result, even cases of high load will not overwhelm a single host, which can be
816 an issue with traditional shared filesystem approaches, for example `NFS`.
817
818 [thumbnail="screenshot/gui-node-ceph-cephfs-panel.png"]
819
820 {pve} supports both creating a hyper-converged CephFS and using an existing
821 xref:storage_cephfs[CephFS as storage] to save backups, ISO files, and container
822 templates.
823
824
825 [[pveceph_fs_mds]]
826 Metadata Server (MDS)
827 ~~~~~~~~~~~~~~~~~~~~~
828
829 CephFS needs at least one Metadata Server to be configured and running, in order
830 to function. You can create an MDS through the {pve} web GUI's `Node
831 -> CephFS` panel or from the command line with:
832
833 ----
834 pveceph mds create
835 ----
836
837 Multiple metadata servers can be created in a cluster, but with the default
838 settings, only one can be active at a time. If an MDS or its node becomes
839 unresponsive (or crashes), another `standby` MDS will get promoted to `active`.
840 You can speed up the handover between the active and standby MDS by using
841 the 'hotstandby' parameter option on creation, or if you have already created it
842 you may set/add:
843
844 ----
845 mds standby replay = true
846 ----
847
848 in the respective MDS section of `/etc/pve/ceph.conf`. With this enabled, the
849 specified MDS will remain in a `warm` state, polling the active one, so that it
850 can take over faster in case of any issues.
851
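As a sketch, an additional MDS can be created directly in hot-standby mode,
assuming the `--hotstandby` flag of `pveceph mds create`:

----
pveceph mds create --hotstandby
----
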
852 NOTE: This active polling will have an additional performance impact on your
853 system and the active `MDS`.
854
855 .Multiple Active MDS
856
857 Since Luminous (12.2.x) you can have multiple active metadata servers
858 running at once, but this is normally only useful if you have a high amount of
859 clients running in parallel. Otherwise the `MDS` is rarely the bottleneck in a
860 system. If you want to set this up, please refer to the Ceph documentation.
861 footnote:[Configuring multiple active MDS daemons
862 {cephdocs-url}/cephfs/multimds/]
863
864 [[pveceph_fs_create]]
865 Create CephFS
866 ~~~~~~~~~~~~~
867
868 With {pve}'s integration of CephFS, you can easily create a CephFS using the
869 web interface, CLI or an external API interface. Some prerequisites are required
870 for this to work:
871
872 .Prerequisites for a successful CephFS setup:
873 - xref:pve_ceph_install[Install Ceph packages] - if this was already done some
874 time ago, you may want to rerun it on an up-to-date system to
875 ensure that all CephFS related packages get installed.
876 - xref:pve_ceph_monitors[Setup Monitors]
- xref:pve_ceph_osds[Setup your OSDs]
878 - xref:pveceph_fs_mds[Setup at least one MDS]
879
880 After this is complete, you can simply create a CephFS through
881 either the Web GUI's `Node -> CephFS` panel or the command line tool `pveceph`,
882 for example:
883
884 ----
885 pveceph fs create --pg_num 128 --add-storage
886 ----
887
888 This creates a CephFS named 'cephfs', using a pool for its data named
889 'cephfs_data' with '128' placement groups and a pool for its metadata named
890 'cephfs_metadata' with one quarter of the data pool's placement groups (`32`).
891 Check the xref:pve_ceph_pools[{pve} managed Ceph pool chapter] or visit the
892 Ceph documentation for more information regarding an appropriate placement group
893 number (`pg_num`) for your setup footnoteref:[placement_groups].
894 Additionally, the '--add-storage' parameter will add the CephFS to the {pve}
895 storage configuration after it has been created successfully.
896
897 Destroy CephFS
898 ~~~~~~~~~~~~~~
899
900 WARNING: Destroying a CephFS will render all of its data unusable. This cannot be
901 undone!
902
903 To completely and gracefully remove a CephFS, the following steps are
904 necessary:
905
906 * Disconnect every non-{PVE} client (e.g. unmount the CephFS in guests).
907 * Disable all related CephFS {PVE} storage entries (to prevent it from being
908 automatically mounted).
909 * Remove all used resources from guests (e.g. ISOs) that are on the CephFS you
910 want to destroy.
911 * Unmount the CephFS storages on all cluster nodes manually with
912 +
913 ----
914 umount /mnt/pve/<STORAGE-NAME>
915 ----
916 +
917 Where `<STORAGE-NAME>` is the name of the CephFS storage in your {PVE}.
918
919 * Now make sure that no metadata server (`MDS`) is running for that CephFS,
920 either by stopping or destroying them. This can be done through the web
interface or via the command line interface; for the latter, you would issue
922 the following command:
923 +
924 ----
925 pveceph stop --service mds.NAME
926 ----
927 +
928 to stop them, or
929 +
930 ----
931 pveceph mds destroy NAME
932 ----
933 +
934 to destroy them.
935 +
936 Note that standby servers will automatically be promoted to active when an
937 active `MDS` is stopped or removed, so it is best to first stop all standby
938 servers.
939
940 * Now you can destroy the CephFS with
941 +
942 ----
943 pveceph fs destroy NAME --remove-storages --remove-pools
944 ----
945 +
This will automatically destroy the underlying Ceph pools as well as remove
the storages from the {pve} configuration.
948
949 After these steps, the CephFS should be completely removed and if you have
950 other CephFS instances, the stopped metadata servers can be started again
951 to act as standbys.
952
953 Ceph maintenance
954 ----------------
955
956 Replace OSDs
957 ~~~~~~~~~~~~
958
959 One of the most common maintenance tasks in Ceph is to replace the disk of an
960 OSD. If a disk is already in a failed state, then you can go ahead and run
961 through the steps in xref:pve_ceph_osd_destroy[Destroy OSDs]. Ceph will recreate
962 those copies on the remaining OSDs if possible. This rebalancing will start as
963 soon as an OSD failure is detected or an OSD was actively stopped.
964
965 NOTE: With the default size/min_size (3/2) of a pool, recovery only starts when
966 `size + 1` nodes are available. The reason for this is that the Ceph object
967 balancer xref:pve_ceph_device_classes[CRUSH] defaults to a full node as
968 `failure domain'.
969
970 To replace a functioning disk from the GUI, go through the steps in
971 xref:pve_ceph_osd_destroy[Destroy OSDs]. The only addition is to wait until
972 the cluster shows 'HEALTH_OK' before stopping the OSD to destroy it.
973
974 On the command line, use the following commands:
975
976 ----
977 ceph osd out osd.<id>
978 ----
979
980 You can check with the command below if the OSD can be safely removed.
981
982 ----
983 ceph osd safe-to-destroy osd.<id>
984 ----
985
986 Once the above check tells you that it is safe to remove the OSD, you can
987 continue with the following commands:
988
989 ----
990 systemctl stop ceph-osd@<id>.service
991 pveceph osd destroy <id>
992 ----
993
994 Replace the old disk with the new one and use the same procedure as described
995 in xref:pve_ceph_osd_create[Create OSDs].
996
997 Trim/Discard
998 ~~~~~~~~~~~~
999
1000 It is good practice to run 'fstrim' (discard) regularly on VMs and containers.
1001 This releases data blocks that the filesystem isn’t using anymore. It reduces
1002 data usage and resource load. Most modern operating systems issue such discard
1003 commands to their disks regularly. You only need to ensure that the Virtual
1004 Machines enable the xref:qm_hard_disk_discard[disk discard option].
1005
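A trim can also be triggered manually. The commands below are a sketch; the
VM variant requires the QEMU guest agent to be installed and enabled in the
guest:

----
# trim a container's filesystems from the host
pct fstrim <vmid>

# trim inside a VM through the QEMU guest agent
qm guest cmd <vmid> fstrim
----
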
1006 [[pveceph_scrub]]
1007 Scrub & Deep Scrub
1008 ~~~~~~~~~~~~~~~~~~
1009
1010 Ceph ensures data integrity by 'scrubbing' placement groups. Ceph checks every
object in a PG for its health. There are two forms of scrubbing: daily
1012 cheap metadata checks and weekly deep data checks. The weekly deep scrub reads
1013 the objects and uses checksums to ensure data integrity. If a running scrub
1014 interferes with business (performance) needs, you can adjust the time when
1015 scrubs footnote:[Ceph scrubbing {cephdocs-url}/rados/configuration/osd-config-ref/#scrubbing]
1016 are executed.
1017
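For example, scrubs can be confined to a nightly window; the option names are
standard Ceph settings, the hours are only placeholders:

----
# only start scrubs between 23:00 and 06:00
ceph config set osd osd_scrub_begin_hour 23
ceph config set osd osd_scrub_end_hour 6
----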
1018
1019 Ceph Monitoring and Troubleshooting
1020 -----------------------------------
1021
1022 It is important to continuously monitor the health of a Ceph deployment from the
1023 beginning, either by using the Ceph tools or by accessing
1024 the status through the {pve} link:api-viewer/index.html[API].
1025
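For example, the same status information can be queried through the API from
any node, sketched here with the `pvesh` CLI (replace `<node>` with the name
of one of your nodes):

----
pvesh get /nodes/<node>/ceph/status
----
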
1026 The following Ceph commands can be used to see if the cluster is healthy
1027 ('HEALTH_OK'), if there are warnings ('HEALTH_WARN'), or even errors
1028 ('HEALTH_ERR'). If the cluster is in an unhealthy state, the status commands
1029 below will also give you an overview of the current events and actions to take.
1030
1031 ----
1032 # single time output
1033 pve# ceph -s
1034 # continuously output status changes (press CTRL+C to stop)
1035 pve# ceph -w
1036 ----
1037
1038 To get a more detailed view, every Ceph service has a log file under
1039 `/var/log/ceph/`. If more detail is required, the log level can be
1040 adjusted footnote:[Ceph log and debugging {cephdocs-url}/rados/troubleshooting/log-and-debug/].
1041
1042 You can find more information about troubleshooting
1043 footnote:[Ceph troubleshooting {cephdocs-url}/rados/troubleshooting/]
1044 a Ceph cluster on the official website.
1045
1046
1047 ifdef::manvolnum[]
1048 include::pve-copyright.adoc[]
1049 endif::manvolnum[]