1 [[chapter_pveceph]]
2 ifdef::manvolnum[]
3 pveceph(1)
4 ==========
5 :pve-toplevel:
6
7 NAME
8 ----
9
10 pveceph - Manage Ceph Services on Proxmox VE Nodes
11
12 SYNOPSIS
13 --------
14
15 include::pveceph.1-synopsis.adoc[]
16
17 DESCRIPTION
18 -----------
19 endif::manvolnum[]
20 ifndef::manvolnum[]
21 Deploy Hyper-Converged Ceph Cluster
22 ===================================
23 :pve-toplevel:
24 endif::manvolnum[]
25
26 [thumbnail="screenshot/gui-ceph-status.png"]
27
28 {pve} unifies your compute and storage systems, i.e. you can use the same
29 physical nodes within a cluster for both computing (processing VMs and
30 containers) and replicated storage. The traditional silos of compute and
31 storage resources can be wrapped up into a single hyper-converged appliance.
Separate storage networks (SANs) and connections via network attached storage
33 (NAS) disappear. With the integration of Ceph, an open source software-defined
34 storage platform, {pve} has the ability to run and manage Ceph storage directly
35 on the hypervisor nodes.
36
37 Ceph is a distributed object store and file system designed to provide
38 excellent performance, reliability and scalability.
39
40 .Some advantages of Ceph on {pve} are:
41 - Easy setup and management with CLI and GUI support
42 - Thin provisioning
43 - Snapshots support
44 - Self healing
45 - Scalable to the exabyte level
46 - Setup pools with different performance and redundancy characteristics
47 - Data is replicated, making it fault tolerant
48 - Runs on economical commodity hardware
49 - No need for hardware RAID controllers
50 - Open source
51
52 For small to mid sized deployments, it is possible to install a Ceph server for
53 RADOS Block Devices (RBD) directly on your {pve} cluster nodes, see
54 xref:ceph_rados_block_devices[Ceph RADOS Block Devices (RBD)]. Recent
55 hardware has plenty of CPU power and RAM, so running storage services
56 and VMs on the same node is possible.
57
58 To simplify management, we provide 'pveceph' - a tool to install and
59 manage {ceph} services on {pve} nodes.
60
61 .Ceph consists of a couple of Daemons, for use as a RBD storage:
62 - Ceph Monitor (ceph-mon)
63 - Ceph Manager (ceph-mgr)
64 - Ceph OSD (ceph-osd; Object Storage Daemon)
65
66 TIP: We highly recommend to get familiar with Ceph
67 footnote:[Ceph intro {cephdocs-url}/start/intro/],
68 its architecture
69 footnote:[Ceph architecture {cephdocs-url}/architecture/]
70 and vocabulary
71 footnote:[Ceph glossary {cephdocs-url}/glossary].
72
73
74 Precondition
75 ------------
76
To build a hyper-converged Proxmox + Ceph Cluster, there should be at least
three (preferably identical) servers for the setup.
79
80 Check also the recommendations from
81 {cephdocs-url}/start/hardware-recommendations/[Ceph's website].
82
83 .CPU
A higher CPU core frequency reduces latency and should be preferred. As a
simple rule of thumb, you should assign a CPU core (or thread) to each Ceph
service to provide enough resources for stable and durable Ceph performance.
87
88 .Memory
89 Especially in a hyper-converged setup, the memory consumption needs to be
90 carefully monitored. In addition to the intended workload from virtual machines
91 and containers, Ceph needs enough memory available to provide excellent and
92 stable performance.
93
As a rule of thumb, for roughly **1 TiB of data, 1 GiB of memory** will be used
by an OSD, especially during recovery, rebalancing or backfilling.
96
The OSD daemon itself will use additional memory. The BlueStore backend of the
daemon requires **3-5 GiB of memory** by default (adjustable). In contrast, the
legacy Filestore backend uses the OS page cache, and its memory consumption is
generally related to the number of PGs per OSD daemon.
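
If you have memory to spare, the BlueStore memory target can be raised. A
minimal sketch, assuming the centralized Ceph configuration database is in use
(available in recent Ceph releases); the 6 GiB value is purely an example:

[source,bash]
----
# raise the BlueStore memory target for all OSDs to ~6 GiB (example value)
ceph config set osd osd_memory_target 6442450944
----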
101
102 .Network
We recommend a network bandwidth of at least 10 Gbps, used exclusively for
Ceph. A meshed network setup
footnote:[Full Mesh Network for Ceph {webwiki-url}Full_Mesh_Network_for_Ceph_Server]
is also an option if there are no 10 Gbps switches available.
107
108 The volume of traffic, especially during recovery, will interfere with other
109 services on the same network and may even break the {pve} cluster stack.
110
Further, estimate your bandwidth needs. While one HDD might not saturate a
1 Gbps link, multiple HDD OSDs per node already can, and modern NVMe SSDs will
quickly saturate 10 Gbps of bandwidth. Deploying a network capable of even more
bandwidth will ensure that it isn't your bottleneck and won't be anytime soon;
25, 40 or even 100 Gbps are possible.
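
As a rough, illustrative calculation (assumed numbers): a single SATA SSD OSD
that sustains around 500 MB/s corresponds to roughly 4 Gbps on the wire, so
three or four such OSDs per node can already saturate a single 10 Gbps link
during recovery or backfill, before any client traffic is taken into account.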
116
117 .Disks
118 When planning the size of your Ceph cluster, it is important to take the
119 recovery time into consideration. Especially with small clusters, the recovery
might take a long time. It is recommended that you use SSDs instead of HDDs in small
121 setups to reduce recovery time, minimizing the likelihood of a subsequent
122 failure event during recovery.
123
In general, SSDs will provide more IOPS than spinning disks. This fact and the
higher cost may make a xref:pve_ceph_device_classes[class based] separation of
pools appealing. Another possibility to speed up OSDs is to use a faster disk
as a journal or DB/**W**rite-**A**head-**L**og device, see
xref:pve_ceph_osds[creating Ceph OSDs]. If a faster disk is used for multiple
OSDs, a proper balance between OSD count and WAL / DB (or journal) disk
capacity must be selected, otherwise the faster disk becomes the bottleneck
for all linked OSDs.
131
Aside from the disk type, Ceph performs best with an evenly sized and evenly
distributed amount of disks per node. For example, 4 x 500 GB disks within each
node are better than a mixed setup with a single 1 TB and three 250 GB disks.
135
One also needs to balance OSD count and single OSD capacity. More capacity
allows you to increase storage density, but it also means that a single OSD
failure forces Ceph to recover more data at once.
139
140 .Avoid RAID
As Ceph handles data object redundancy and multiple parallel writes to disks
(OSDs) on its own, using a RAID controller normally doesn't improve
performance or availability. On the contrary, Ceph is designed to handle whole
disks on its own, without any abstraction in between. RAID controllers are not
designed for the Ceph use case and may complicate things and sometimes even
reduce performance, as their write and caching algorithms may interfere with
the ones from Ceph.
148
WARNING: Avoid RAID controllers; use a host bus adapter (HBA) instead.
150
NOTE: The above recommendations should be seen as rough guidance for choosing
hardware. Therefore, it is still essential to adapt them to your specific
needs, test your setup and monitor health and performance continuously.
154
155 [[pve_ceph_install_wizard]]
156 Initial Ceph installation & configuration
157 -----------------------------------------
158
159 [thumbnail="screenshot/gui-node-ceph-install.png"]
160
With {pve} you have the benefit of an easy-to-use installation wizard
for Ceph. Click on one of your cluster nodes and navigate to the Ceph
section in the menu tree. If Ceph is not already installed, you will be
offered the option to do so now.
165
The wizard is divided into different sections, each of which needs to be
finished successfully in order to use Ceph. After starting the installation,
the wizard will download and install all the required packages from {pve}'s
Ceph repository.
170
171 After finishing the first step, you will need to create a configuration.
172 This step is only needed once per cluster, as this configuration is distributed
173 automatically to all remaining cluster members through {pve}'s clustered
174 xref:chapter_pmxcfs[configuration file system (pmxcfs)].
175
176 The configuration step includes the following settings:
177
* *Public Network:* You should set up a dedicated network for Ceph; this
setting is required. Separating your Ceph traffic is highly recommended,
because otherwise it could cause problems with other latency-dependent
services, for example, cluster communication, and may decrease Ceph's
performance.
182
183 [thumbnail="screenshot/gui-node-ceph-install-wizard-step2.png"]
184
185 * *Cluster Network:* As an optional step you can go even further and
186 separate the xref:pve_ceph_osds[OSD] replication & heartbeat traffic
187 as well. This will relieve the public network and could lead to
188 significant performance improvements especially in big clusters.
189
You have two more options which are considered advanced and therefore should
only be changed if you are an expert.
192
* *Number of replicas*: Defines how often an object is replicated.
194 * *Minimum replicas*: Defines the minimum number of required replicas
195 for I/O to be marked as complete.
196
Additionally, you need to choose your first monitor node; this is required.
198
That's it, you should see a success page as the last step, with further
instructions on how to proceed. You are now prepared to start using Ceph, even
though you will still need to create additional xref:pve_ceph_monitors[monitors],
create some xref:pve_ceph_osds[OSDs] and at least one xref:pve_ceph_pools[pool].
203
The rest of this chapter will guide you through getting the most out of your
{pve}-based Ceph setup. This includes the aforementioned tasks and more, such
as xref:pveceph_fs[CephFS], which is a very handy addition to your new Ceph
cluster.
208
209 [[pve_ceph_install]]
210 Installation of Ceph Packages
211 -----------------------------
Use the {pve} Ceph installation wizard (recommended) or run the following
213 command on each node:
214
215 [source,bash]
216 ----
217 pveceph install
218 ----
219
220 This sets up an `apt` package repository in
221 `/etc/apt/sources.list.d/ceph.list` and installs the required software.
222
223
224 Create initial Ceph configuration
225 ---------------------------------
226
227 [thumbnail="screenshot/gui-ceph-config.png"]
228
229 Use the {pve} Ceph installation wizard (recommended) or run the
230 following command on one node:
231
232 [source,bash]
233 ----
234 pveceph init --network 10.10.10.0/24
235 ----
236
This creates an initial configuration at `/etc/pve/ceph.conf` with a
dedicated network for Ceph. That file is automatically distributed to
all {pve} nodes by using xref:chapter_pmxcfs[pmxcfs]. The command also
creates a symbolic link at `/etc/ceph/ceph.conf` pointing to that file,
so you can simply run Ceph commands without the need to specify a
configuration file.
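
If you also want a separate network for OSD replication (the cluster network
described in the wizard section above), it can be passed at initialization
time as well. A minimal sketch; the option name and the subnets are
assumptions used for illustration, check the pveceph(1) man page for the exact
syntax:

[source,bash]
----
# public network for clients/monitors, separate network for OSD replication
pveceph init --network 10.10.10.0/24 --cluster-network 10.10.20.0/24
----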
243
244
245 [[pve_ceph_monitors]]
246 Ceph Monitor
247 -----------
248 The Ceph Monitor (MON)
249 footnote:[Ceph Monitor {cephdocs-url}/start/intro/]
maintains a master copy of the cluster map. For high availability, you need at
least 3 monitors. One monitor will already have been installed if you used the
installation wizard. You won't need more than 3 monitors, as long as your
cluster is small to mid-sized; only really large clusters will need more than
that.
255
256
257 [[pveceph_create_mon]]
258 Create Monitors
259 ~~~~~~~~~~~~~~~
260
261 [thumbnail="screenshot/gui-ceph-monitor.png"]
262
263 On each node where you want to place a monitor (three monitors are recommended),
create it by using the 'Ceph -> Monitor' tab in the GUI or run:
265
266
267 [source,bash]
268 ----
269 pveceph mon create
270 ----
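
To verify that the new monitor has joined the cluster, you can afterwards check
the monitor map and quorum; a quick sketch using standard Ceph tooling:

[source,bash]
----
# list all monitors and the current quorum
ceph mon stat
----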
271
272 [[pveceph_destroy_mon]]
273 Destroy Monitors
274 ~~~~~~~~~~~~~~~~
275
276 To remove a Ceph Monitor via the GUI first select a node in the tree view and
277 go to the **Ceph -> Monitor** panel. Select the MON and click the **Destroy**
278 button.
279
280 To remove a Ceph Monitor via the CLI first connect to the node on which the MON
281 is running. Then execute the following command:
282 [source,bash]
283 ----
284 pveceph mon destroy
285 ----
286
287 NOTE: At least three Monitors are needed for quorum.
288
289
290 [[pve_ceph_manager]]
291 Ceph Manager
292 ------------
The Manager daemon runs alongside the monitors. It provides an interface to
monitor the cluster. Since the Ceph Luminous release, at least one ceph-mgr
footnote:[Ceph Manager {cephdocs-url}/mgr/] daemon is
required.
297
298 [[pveceph_create_mgr]]
299 Create Manager
300 ~~~~~~~~~~~~~~
301
302 Multiple Managers can be installed, but at any time only one Manager is active.
303
304 [source,bash]
305 ----
306 pveceph mgr create
307 ----
308
NOTE: It is recommended to install the Ceph Manager on the monitor nodes. For
high availability, install more than one manager.
311
312
313 [[pveceph_destroy_mgr]]
314 Destroy Manager
315 ~~~~~~~~~~~~~~~
316
317 To remove a Ceph Manager via the GUI first select a node in the tree view and
318 go to the **Ceph -> Monitor** panel. Select the Manager and click the
319 **Destroy** button.
320
To remove a Ceph Manager via the CLI, first connect to the node on which the
322 Manager is running. Then execute the following command:
323 [source,bash]
324 ----
325 pveceph mgr destroy
326 ----
327
328 NOTE: A Ceph cluster can function without a Manager, but certain functions like
329 the cluster status or usage require a running Manager.
330
331
332 [[pve_ceph_osds]]
333 Ceph OSDs
334 ---------
Ceph **O**bject **S**torage **D**aemons store objects for Ceph over the
336 network. It is recommended to use one OSD per physical disk.
337
338 NOTE: By default an object is 4 MiB in size.
339
340 [[pve_ceph_osd_create]]
341 Create OSDs
342 ~~~~~~~~~~~
343
344 [thumbnail="screenshot/gui-ceph-osd-status.png"]
345
346 You can create an OSD either via the {pve} web-interface, or via CLI using
347 `pveceph`. For example:
348
349 [source,bash]
350 ----
351 pveceph osd create /dev/sd[X]
352 ----
353
TIP: We recommend a Ceph cluster with at least three nodes and at least 12
355 OSDs, evenly distributed among the nodes.
356
If the disk was in use before (for example, for ZFS or as an OSD), you need to
first zap all traces of that usage. To remove the partition table, boot
sector and any other OSD leftovers, you can use the following command:
360
361 [source,bash]
362 ----
363 ceph-volume lvm zap /dev/sd[X] --destroy
364 ----
365
366 WARNING: The above command will destroy all data on the disk!
367
368 .Ceph Bluestore
369
Starting with the Ceph Kraken release, a new Ceph OSD storage type was
introduced: the so-called BlueStore
footnote:[Ceph Bluestore https://ceph.com/community/new-luminous-bluestore/].
This is the default when creating OSDs since Ceph Luminous.
374
375 [source,bash]
376 ----
377 pveceph osd create /dev/sd[X]
378 ----
379
380 .Block.db and block.wal
381
382 If you want to use a separate DB/WAL device for your OSDs, you can specify it
383 through the '-db_dev' and '-wal_dev' options. The WAL is placed with the DB, if
384 not specified separately.
385
386 [source,bash]
387 ----
388 pveceph osd create /dev/sd[X] -db_dev /dev/sd[Y] -wal_dev /dev/sd[Z]
389 ----
390
You can directly choose the size for those with the '-db_size' and '-wal_size'
parameters respectively. If they are not given, the following values (in order)
will be used (see the example after this list):
394
395 * bluestore_block_{db,wal}_size from ceph configuration...
396 ** ... database, section 'osd'
397 ** ... database, section 'global'
398 ** ... file, section 'osd'
399 ** ... file, section 'global'
400 * 10% (DB)/1% (WAL) of OSD size
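
For example, to pin the DB size explicitly instead of relying on the fallback
values above, something like the following could be used. This is a sketch;
the device names are placeholders and the '-db_size' value (in GiB) is an
assumption chosen purely for illustration:

[source,bash]
----
# create an OSD with a 64 GiB block.db on a separate, faster device (example values)
pveceph osd create /dev/sd[X] -db_dev /dev/sd[Y] -db_size 64
----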
401
402 NOTE: The DB stores BlueStore’s internal metadata and the WAL is BlueStore’s
403 internal journal or write-ahead log. It is recommended to use a fast SSD or
404 NVRAM for better performance.
405
406
407 .Ceph Filestore
408
Before Ceph Luminous, Filestore was used as the default storage type for Ceph OSDs.
410 Starting with Ceph Nautilus, {pve} does not support creating such OSDs with
411 'pveceph' anymore. If you still want to create filestore OSDs, use
412 'ceph-volume' directly.
413
414 [source,bash]
415 ----
416 ceph-volume lvm create --filestore --data /dev/sd[X] --journal /dev/sd[Y]
417 ----
418
419 [[pve_ceph_osd_destroy]]
420 Destroy OSDs
421 ~~~~~~~~~~~~
422
To remove an OSD via the GUI, first select a {PVE} node in the tree view and go
to the **Ceph -> OSD** panel. Select the OSD to destroy. Next, click the **OUT**
button. Once the OSD status has changed from `in` to `out`, click the **STOP**
button. As soon as the status has changed from `up` to `down`, select **Destroy**
from the `More` drop-down menu.
428
429 To remove an OSD via the CLI run the following commands.
430 [source,bash]
431 ----
432 ceph osd out <ID>
433 systemctl stop ceph-osd@<ID>.service
434 ----
NOTE: The first command instructs Ceph not to include the OSD in the data
distribution. The second command stops the OSD service. Up to this point, no
data is lost.
438
439 The following command destroys the OSD. Specify the '-cleanup' option to
440 additionally destroy the partition table.
441 [source,bash]
442 ----
443 pveceph osd destroy <ID>
444 ----
445 WARNING: The above command will destroy data on the disk!
446
447
448 [[pve_ceph_pools]]
449 Ceph Pools
450 ----------
451 A pool is a logical group for storing objects. It holds **P**lacement
452 **G**roups (`PG`, `pg_num`), a collection of objects.
453
454
455 Create and Edit Pools
456 ~~~~~~~~~~~~~~~~~~~~~
457
458 [thumbnail="screenshot/gui-ceph-pools.png"]
459
460 When no options are given, we set a default of **128 PGs**, a **size of 3
461 replicas** and a **min_size of 2 replicas** for serving objects in a degraded
462 state.
463
464 NOTE: The default number of PGs works for 2-5 disks. Ceph throws a
465 'HEALTH_WARNING' if you have too few or too many PGs in your cluster.
466
467 WARNING: **Do not set a min_size of 1**. A replicated pool with min_size of 1
468 allows I/O on an object when it has only 1 replica which could lead to data
469 loss, incomplete PGs or unfound objects.
470
471 It is advised that you calculate the PG number based on your setup. You can
472 find the formula and the PG calculator footnote:[PG calculator
473 https://ceph.com/pgcalc/] online. From Ceph Nautilus onward, you can change the
474 number of PGs footnoteref:[placement_groups,Placement Groups
475 {cephdocs-url}/rados/operations/placement-groups/] after the setup.
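
As an illustrative calculation, using the commonly cited target of roughly 100
PGs per OSD from the PG calculator: with 12 OSDs and a single replicated pool
of size 3, the suggested value is 12 x 100 / 3 = 400, which is then rounded to
the nearest power of two, giving a `pg_num` of `512`.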
476
477 In addition to manual adjustment, the PG autoscaler
478 footnoteref:[autoscaler,Automated Scaling
479 {cephdocs-url}/rados/operations/placement-groups/#automated-scaling] can
480 automatically scale the PG count for a pool in the background.
481
You can create pools through the command line or the GUI on each PVE host
under **Ceph -> Pools**.
484
485 [source,bash]
486 ----
487 pveceph pool create <name>
488 ----
489
If you would also like to automatically get a storage definition for your
pool, mark the checkbox "Add storages" in the GUI or use the command-line
option '--add_storages' at pool creation.
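
A minimal sketch of a non-default pool creation on the CLI; the pool name is a
placeholder and the exact option names should be checked against the
pveceph(1) man page, as they are assumed here to mirror the settings listed
below:

[source,bash]
----
# create a pool with an explicit PG count and add it as a {pve} storage (example values)
pveceph pool create mypool --pg_num 256 --size 3 --min_size 2 --add_storages
----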
493
494 .Base Options
495 Name:: The name of the pool. This must be unique and can't be changed afterwards.
496 Size:: The number of replicas per object. Ceph always tries to have this many
497 copies of an object. Default: `3`.
498 PG Autoscale Mode:: The automatic PG scaling mode footnoteref:[autoscaler] of
499 the pool. If set to `warn`, it produces a warning message when a pool
500 has a non-optimal PG count. Default: `warn`.
501 Add as Storage:: Configure a VM or container storage using the new pool.
502 Default: `true` (only visible on creation).
503
504 .Advanced Options
505 Min. Size:: The minimum number of replicas per object. Ceph will reject I/O on
506 the pool if a PG has less than this many replicas. Default: `2`.
507 Crush Rule:: The rule to use for mapping object placement in the cluster. These
508 rules define how data is placed within the cluster. See
509 xref:pve_ceph_device_classes[Ceph CRUSH & device classes] for information on
510 device-based rules.
511 # of PGs:: The number of placement groups footnoteref:[placement_groups] that
512 the pool should have at the beginning. Default: `128`.
513 Target Size Ratio:: The ratio of data that is expected in the pool. The PG
514 autoscaler uses the ratio relative to other ratio sets. It takes precedence
515 over the `target size` if both are set.
516 Target Size:: The estimated amount of data expected in the pool. The PG
517 autoscaler uses this size to estimate the optimal PG count.
518 Min. # of PGs:: The minimum number of placement groups. This setting is used to
519 fine-tune the lower bound of the PG count for that pool. The PG autoscaler
520 will not merge PGs below this threshold.
521
522 Further information on Ceph pool handling can be found in the Ceph pool
523 operation footnote:[Ceph pool operation
524 {cephdocs-url}/rados/operations/pools/]
525 manual.
526
527
528 Destroy Pools
529 ~~~~~~~~~~~~~
530
531 To destroy a pool via the GUI select a node in the tree view and go to the
532 **Ceph -> Pools** panel. Select the pool to destroy and click the **Destroy**
533 button. To confirm the destruction of the pool you need to enter the pool name.
534
Run the following command to destroy a pool. Specify the '-remove_storages'
option to also remove the associated storage.
537 [source,bash]
538 ----
539 pveceph pool destroy <name>
540 ----
541
542 NOTE: Deleting the data of a pool is a background task and can take some time.
543 You will notice that the data usage in the cluster is decreasing.
544
545
546 PG Autoscaler
547 ~~~~~~~~~~~~~
548
549 The PG autoscaler allows the cluster to consider the amount of (expected) data
550 stored in each pool and to choose the appropriate pg_num values automatically.
551
552 You may need to activate the PG autoscaler module before adjustments can take
553 effect.
554 [source,bash]
555 ----
556 ceph mgr module enable pg_autoscaler
557 ----
558
559 The autoscaler is configured on a per pool basis and has the following modes:
560
561 [horizontal]
562 warn:: A health warning is issued if the suggested `pg_num` value differs too
563 much from the current value.
564 on:: The `pg_num` is adjusted automatically with no need for any manual
565 interaction.
566 off:: No automatic `pg_num` adjustments are made, and no warning will be issued
567 if the PG count is far from optimal.
568
569 The scaling factor can be adjusted to facilitate future data storage, with the
570 `target_size`, `target_size_ratio` and the `pg_num_min` options.
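
A minimal sketch of configuring the autoscaler for a single pool on the CLI;
the pool name and the ratio are illustrative assumptions:

[source,bash]
----
# let the autoscaler manage the pool and tell it to expect ~50% of the cluster's data
ceph osd pool set mypool pg_autoscale_mode on
ceph osd pool set mypool target_size_ratio 0.5

# review the autoscaler's current suggestions for all pools
ceph osd pool autoscale-status
----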
571
572 WARNING: By default, the autoscaler considers tuning the PG count of a pool if
573 it is off by a factor of 3. This will lead to a considerable shift in data
574 placement and might introduce a high load on the cluster.
575
576 You can find a more in-depth introduction to the PG autoscaler on Ceph's Blog -
577 https://ceph.io/rados/new-in-nautilus-pg-merging-and-autotuning/[New in
578 Nautilus: PG merging and autotuning].
579
580
581 [[pve_ceph_device_classes]]
582 Ceph CRUSH & device classes
583 ---------------------------
584 The foundation of Ceph is its algorithm, **C**ontrolled **R**eplication
585 **U**nder **S**calable **H**ashing
586 (CRUSH footnote:[CRUSH https://ceph.com/wp-content/uploads/2016/08/weil-crush-sc06.pdf]).
587
CRUSH calculates where to store and retrieve data from; this has the
advantage that no central indexing service is needed. CRUSH works with a map of
OSDs, buckets (device locations) and rulesets (data replication) for pools.
591
592 NOTE: Further information can be found in the Ceph documentation, under the
593 section CRUSH map footnote:[CRUSH map {cephdocs-url}/rados/operations/crush-map/].
594
This map can be altered to reflect different replication hierarchies. The
object replicas can be separated (e.g., across failure domains), while
maintaining the desired distribution.
598
A common use case is to use different classes of disks for different Ceph
pools. For this reason, Ceph introduced device classes with Luminous, to
accommodate the need for easy ruleset generation.
602
603 The device classes can be seen in the 'ceph osd tree' output. These classes
604 represent their own root bucket, which can be seen with the below command.
605
606 [source, bash]
607 ----
608 ceph osd crush tree --show-shadow
609 ----
610
Example output from the above command:
612
613 [source, bash]
614 ----
615 ID CLASS WEIGHT TYPE NAME
616 -16 nvme 2.18307 root default~nvme
617 -13 nvme 0.72769 host sumi1~nvme
618 12 nvme 0.72769 osd.12
619 -14 nvme 0.72769 host sumi2~nvme
620 13 nvme 0.72769 osd.13
621 -15 nvme 0.72769 host sumi3~nvme
622 14 nvme 0.72769 osd.14
623 -1 7.70544 root default
624 -3 2.56848 host sumi1
625 12 nvme 0.72769 osd.12
626 -5 2.56848 host sumi2
627 13 nvme 0.72769 osd.13
628 -7 2.56848 host sumi3
629 14 nvme 0.72769 osd.14
630 ----
631
To let a pool distribute its objects only on a specific device class, you
first need to create a ruleset for that class.
634
635 [source, bash]
636 ----
637 ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <class>
638 ----
639
640 [frame="none",grid="none", align="left", cols="30%,70%"]
641 |===
642 |<rule-name>|name of the rule, to connect with a pool (seen in GUI & CLI)
643 |<root>|which crush root it should belong to (default ceph root "default")
644 |<failure-domain>|at which failure-domain the objects should be distributed (usually host)
|<class>|what type of OSD backing store to use (e.g., nvme, ssd, hdd)
646 |===
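
For example, a rule that restricts a pool to NVMe-backed OSDs, with `host` as
the failure domain, could look like this (the rule name is an illustrative
assumption):

[source, bash]
----
# create a replicated rule that only selects OSDs of device class 'nvme'
ceph osd crush rule create-replicated replicated_nvme default host nvme
----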
647
648 Once the rule is in the CRUSH map, you can tell a pool to use the ruleset.
649
650 [source, bash]
651 ----
652 ceph osd pool set <pool-name> crush_rule <rule-name>
653 ----
654
655 TIP: If the pool already contains objects, all of these have to be moved
656 accordingly. Depending on your setup this may introduce a big performance hit
657 on your cluster. As an alternative, you can create a new pool and move disks
658 separately.
659
660
661 Ceph Client
662 -----------
663
664 [thumbnail="screenshot/gui-ceph-log.png"]
665
You can then configure {pve} to use such pools to store VM or
Container images. Simply use the GUI to add a new `RBD` storage (see
section xref:ceph_rados_block_devices[Ceph RADOS Block Devices (RBD)]).
669
You also need to copy the keyring to a predefined location for an external
Ceph cluster. If Ceph is installed on the Proxmox nodes themselves, then this
will be done automatically.
673
NOTE: The file name needs to be `<storage_id>` + `.keyring`, where
`<storage_id>` is the expression after 'rbd:' in `/etc/pve/storage.cfg`, which
is `my-ceph-storage` in the following example:
677
678 [source,bash]
679 ----
680 mkdir /etc/pve/priv/ceph
681 cp /etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/my-ceph-storage.keyring
682 ----
683
684 [[pveceph_fs]]
685 CephFS
686 ------
687
Ceph also provides a filesystem, running on top of the same object storage as
RADOS block devices do. A **M**eta**d**ata **S**erver (`MDS`) is used to map
the RADOS backed objects to files and directories, allowing Ceph to provide a
POSIX-compliant, replicated filesystem. This allows you to easily have a
clustered, highly available, shared filesystem if Ceph is already used. Its
Metadata Servers guarantee that files are evenly distributed over the whole
Ceph cluster. This way, even high load will not overload a single host, which
can be an issue with traditional shared filesystem approaches, for example
`NFS`.
697
698 [thumbnail="screenshot/gui-node-ceph-cephfs-panel.png"]
699
{pve} supports both creating a hyper-converged CephFS and using an existing
xref:storage_cephfs[CephFS as storage] to save backups, ISO files or container
templates.
703
704
705 [[pveceph_fs_mds]]
706 Metadata Server (MDS)
707 ~~~~~~~~~~~~~~~~~~~~~
708
CephFS needs at least one Metadata Server to be configured and running in
order to function. You can simply create one through the {pve} web GUI's
`Node -> CephFS` panel or on the command line with:
712
713 ----
714 pveceph mds create
715 ----
716
Multiple metadata servers can be created in a cluster, but with the default
settings only one can be active at any time. If an MDS, or its node, becomes
unresponsive (or crashes), another `standby` MDS will get promoted to `active`.
You can speed up the hand-over between the active and a standby MDS by using
the 'hotstandby' parameter option on creation, or, if you have already created
it, you may set/add:
723
724 ----
725 mds standby replay = true
726 ----
727
in the respective MDS section of ceph.conf. With this enabled, the specific MDS
will always poll the active one, so that it can take over faster, as it is in a
`warm` state. But naturally, the active polling will cause some additional
performance impact on your system and the active `MDS`.
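
A minimal sketch of creating a hot-standby MDS directly on the CLI; the exact
flag name is an assumption here and should be checked against the pveceph(1)
man page:

[source,bash]
----
# create an additional MDS that follows the active one in standby-replay mode
pveceph mds create --hotstandby
----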
732
733 .Multiple Active MDS
734
Since Luminous (12.2.x) you can also have multiple active metadata servers
running, but this is normally only useful for a high number of parallel
clients, as otherwise the `MDS` is seldom the bottleneck. If you want to set
this up, please refer to the Ceph documentation. footnote:[Configuring multiple
active MDS daemons {cephdocs-url}/cephfs/multimds/]
740
741 [[pveceph_fs_create]]
742 Create CephFS
743 ~~~~~~~~~~~~~
744
With {pve}'s CephFS integration, you can easily create a CephFS via the Web
GUI, the CLI or an external API interface. Some prerequisites are required
for this to work:
748
749 .Prerequisites for a successful CephFS setup:
- xref:pve_ceph_install[Install Ceph packages]. If this was already done some
time ago, you might want to rerun it on an up-to-date system to ensure that
all CephFS related packages get installed.
- xref:pve_ceph_monitors[Setup Monitors]
- xref:pve_ceph_osds[Setup your OSDs]
- xref:pveceph_fs_mds[Setup at least one MDS]
756
After this is all checked and done, you can simply create a CephFS through
either the Web GUI's `Node -> CephFS` panel or the command-line tool `pveceph`,
for example with:
760
761 ----
762 pveceph fs create --pg_num 128 --add-storage
763 ----
764
This creates a CephFS named `'cephfs'' using a pool for its data named
`'cephfs_data'' with `128` placement groups and a pool for its metadata named
`'cephfs_metadata'' with one quarter of the data pool's placement groups (`32`).
768 Check the xref:pve_ceph_pools[{pve} managed Ceph pool chapter] or visit the
769 Ceph documentation for more information regarding a fitting placement group
770 number (`pg_num`) for your setup footnoteref:[placement_groups].
771 Additionally, the `'--add-storage'' parameter will add the CephFS to the {pve}
772 storage configuration after it has been created successfully.
773
774 Destroy CephFS
775 ~~~~~~~~~~~~~~
776
WARNING: Destroying a CephFS will render all of its data unusable. This cannot
be undone!
779
If you really want to destroy an existing CephFS, you first need to stop or
destroy all metadata servers (`MDS`). You can destroy them either via the Web
GUI or the command-line interface, with:
783
784 ----
785 pveceph mds destroy NAME
786 ----
on each {pve} node hosting an MDS daemon.
788
Then, you can remove (destroy) the CephFS by issuing:
790
791 ----
792 ceph fs rm NAME --yes-i-really-mean-it
793 ----
on a single node hosting Ceph. After this, you may want to remove the created
data and metadata pools; this can be done either via the Web GUI or the CLI
with:
797
798 ----
799 pveceph pool destroy NAME
800 ----
801
802
803 Ceph maintenance
804 ----------------
805
806 Replace OSDs
807 ~~~~~~~~~~~~
808
809 One of the common maintenance tasks in Ceph is to replace a disk of an OSD. If
810 a disk is already in a failed state, then you can go ahead and run through the
steps in xref:pve_ceph_osd_destroy[Destroy OSDs]. Ceph will recreate the lost
copies on the remaining OSDs if possible. This rebalancing will start as soon
as an OSD failure is detected or an OSD was actively stopped.
814
815 NOTE: With the default size/min_size (3/2) of a pool, recovery only starts when
816 `size + 1` nodes are available. The reason for this is that the Ceph object
817 balancer xref:pve_ceph_device_classes[CRUSH] defaults to a full node as
818 `failure domain'.
819
820 To replace a still functioning disk, on the GUI go through the steps in
821 xref:pve_ceph_osd_destroy[Destroy OSDs]. The only addition is to wait until
822 the cluster shows 'HEALTH_OK' before stopping the OSD to destroy it.
823
824 On the command line use the following commands.
825 ----
826 ceph osd out osd.<id>
827 ----
828
829 You can check with the command below if the OSD can be safely removed.
830 ----
831 ceph osd safe-to-destroy osd.<id>
832 ----
833
Once the above check tells you that it is safe to remove the OSD, you can
continue with the following commands:
836 ----
837 systemctl stop ceph-osd@<id>.service
838 pveceph osd destroy <id>
839 ----
840
841 Replace the old disk with the new one and use the same procedure as described
842 in xref:pve_ceph_osd_create[Create OSDs].
843
844 Trim/Discard
845 ~~~~~~~~~~~~
846 It is a good measure to run 'fstrim' (discard) regularly on VMs or containers.
847 This releases data blocks that the filesystem isn’t using anymore. It reduces
848 data usage and resource load. Most modern operating systems issue such discard
849 commands to their disks regularly. You only need to ensure that the Virtual
850 Machines enable the xref:qm_hard_disk_discard[disk discard option].
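
For containers, {pve} also ships a helper to trigger a trim from the host. A
minimal sketch; the container ID `100` is a placeholder:

[source,bash]
----
# trim the mounted filesystems of container 100 from the host
pct fstrim 100
----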
851
852 [[pveceph_scrub]]
853 Scrub & Deep Scrub
854 ~~~~~~~~~~~~~~~~~~
855 Ceph ensures data integrity by 'scrubbing' placement groups. Ceph checks every
object in a PG for its health. There are two forms of scrubbing: daily, cheap
metadata checks and weekly, deep data checks. The weekly deep scrub reads
858 the objects and uses checksums to ensure data integrity. If a running scrub
859 interferes with business (performance) needs, you can adjust the time when
860 scrubs footnote:[Ceph scrubbing {cephdocs-url}/rados/configuration/osd-config-ref/#scrubbing]
861 are executed.
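
If scrubs need to be moved out of business hours, the scrub window can, for
instance, be restricted via the Ceph configuration database. A sketch, assuming
an off-peak window from 22:00 to 06:00; the options used are the standard OSD
scrub settings:

[source,bash]
----
# only allow (deep-)scrubbing between 22:00 and 06:00 (example window)
ceph config set osd osd_scrub_begin_hour 22
ceph config set osd osd_scrub_end_hour 6
----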
862
863
864 Ceph monitoring and troubleshooting
865 -----------------------------------
A good start is to continuously monitor the Ceph health from the start of the
initial deployment, either through the Ceph tools themselves, or by accessing
the status through the {pve} link:api-viewer/index.html[API].
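
For example, the node-local Ceph status can also be queried from the CLI via
the API client. A sketch; the node name `pve1` is a placeholder and the exact
path can be looked up in the API viewer:

[source,bash]
----
# query the Ceph status of node 'pve1' through the {pve} API
pvesh get /nodes/pve1/ceph/status
----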
869
The following Ceph commands can be used to see if the cluster is healthy
('HEALTH_OK'), if there are warnings ('HEALTH_WARN'), or even errors
('HEALTH_ERR'). If the cluster is in an unhealthy state, the status commands
below will also give you an overview of the current events and actions to take.
874
875 ----
876 # single time output
877 pve# ceph -s
878 # continuously output status changes (press CTRL+C to stop)
879 pve# ceph -w
880 ----
881
To get a more detailed view, every Ceph service has a log file under
`/var/log/ceph/`. If there is not enough detail, the log level can be
adjusted footnote:[Ceph log and debugging {cephdocs-url}/rados/troubleshooting/log-and-debug/].
885
886 You can find more information about troubleshooting
887 footnote:[Ceph troubleshooting {cephdocs-url}/rados/troubleshooting/]
888 a Ceph cluster on the official website.
889
890
891 ifdef::manvolnum[]
892 include::pve-copyright.adoc[]
893 endif::manvolnum[]