[[chapter_pveceph]]
ifdef::manvolnum[]
pveceph(1)
==========
:pve-toplevel:

NAME
----

pveceph - Manage Ceph Services on Proxmox VE Nodes

SYNOPSIS
--------

include::pveceph.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
Deploy Hyper-Converged Ceph Cluster
===================================
:pve-toplevel:

Introduction
------------
endif::manvolnum[]

[thumbnail="screenshot/gui-ceph-status-dashboard.png"]

{pve} unifies your compute and storage systems, that is, you can use the same
physical nodes within a cluster for both computing (processing VMs and
containers) and replicated storage. The traditional silos of compute and
storage resources can be wrapped up into a single hyper-converged appliance.
Separate storage networks (SANs) and connections via network attached storage
(NAS) disappear. With the integration of Ceph, an open source software-defined
storage platform, {pve} has the ability to run and manage Ceph storage directly
on the hypervisor nodes.

Ceph is a distributed object store and file system designed to provide
excellent performance, reliability and scalability.

.Some advantages of Ceph on {pve} are:
- Easy setup and management via CLI and GUI
- Thin provisioning
- Snapshot support
- Self healing
- Scalable to the exabyte level
- Provides block, file system, and object storage
- Setup pools with different performance and redundancy characteristics
- Data is replicated, making it fault tolerant
- Runs on commodity hardware
- No need for hardware RAID controllers
- Open source

For small to medium-sized deployments, it is possible to install a Ceph server
for using RADOS Block Devices (RBD) or CephFS directly on your {pve} cluster
nodes (see xref:ceph_rados_block_devices[Ceph RADOS Block Devices (RBD)]).
Recent hardware has a lot of CPU power and RAM, so running storage services and
virtual guests on the same node is possible.

To simplify management, {pve} provides native integration to install and
manage {ceph} services on {pve} nodes, either via the built-in web interface or
using the 'pveceph' command-line tool.


Terminology
-----------

// TODO: extend and also describe basic architecture here.
.Ceph consists of multiple Daemons, for use as RBD storage:
- Ceph Monitor (ceph-mon, or MON)
- Ceph Manager (ceph-mgr, or MGR)
- Ceph Metadata Service (ceph-mds, or MDS)
- Ceph Object Storage Daemon (ceph-osd, or OSD)

TIP: We highly recommend getting familiar with Ceph
footnote:[Ceph intro {cephdocs-url}/start/intro/],
its architecture
footnote:[Ceph architecture {cephdocs-url}/architecture/]
and vocabulary
footnote:[Ceph glossary {cephdocs-url}/glossary].


Recommendations for a Healthy Ceph Cluster
------------------------------------------

To build a hyper-converged Proxmox + Ceph Cluster, you must use at least three
(preferably identical) servers for the setup.

Check also the recommendations from
{cephdocs-url}/start/hardware-recommendations/[Ceph's website].

NOTE: The recommendations below should be seen as rough guidance for choosing
hardware. Therefore, it is still essential to adapt them to your specific needs.
You should test your setup and monitor health and performance continuously.

.CPU
Ceph services can be classified into two categories:

* Intensive CPU usage, benefiting from high CPU base frequencies and multiple
  cores. Members of that category are:
** Object Storage Daemon (OSD) services
** Meta Data Service (MDS) used for CephFS
* Moderate CPU usage, not needing multiple CPU cores. These are:
** Monitor (MON) services
** Manager (MGR) services

As a simple rule of thumb, you should assign at least one CPU core (or thread)
to each Ceph service to provide the minimum resources required for stable and
durable Ceph performance.

For example, if you plan to run a Ceph monitor, a Ceph manager and 6 Ceph OSD
services on a node, you should reserve 8 CPU cores purely for Ceph when
targeting basic and stable performance.

Note that OSD CPU usage depends mostly on the disks' performance. The higher
the possible IOPS (**IO** **O**perations per **S**econd) of a disk, the more CPU
can be utilized by an OSD service.
For modern enterprise SSDs, such as NVMe drives that can permanently sustain a
high IOPS load of over 100'000 with sub-millisecond latency, each OSD can use
multiple CPU threads. For example, four to six utilized CPU threads per
NVMe-backed OSD are likely for very high performance disks.

.Memory
Especially in a hyper-converged setup, the memory consumption needs to be
carefully planned out and monitored. In addition to the predicted memory usage
of virtual machines and containers, you must also account for having enough
memory available for Ceph to provide excellent and stable performance.

As a rule of thumb, for roughly **1 TiB of data, 1 GiB of memory** will be used
by an OSD. While the usage might be less under normal conditions, it will use
the most during critical operations like recovery, re-balancing or backfilling.
That means that you should avoid maxing out your available memory during
normal operation, but rather leave some headroom to cope with outages.

The OSD service itself will use additional memory. The Ceph BlueStore backend of
the daemon requires by default **3-5 GiB of memory** (adjustable).

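As a hypothetical worked example: a node with four 4 TiB OSDs would, by this
rule of thumb, use roughly 4 x 4 TiB x 1 GiB/TiB = 16 GiB of memory for the
stored data, plus roughly 4 x 3-5 GiB for the BlueStore daemons themselves. So
you should plan on the order of 30 GiB or more for Ceph alone on that node, in
addition to the memory needed for guests and the {pve} host itself.
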
.Network
We recommend a network bandwidth of at least 10 Gbps, or more, to be used
exclusively for Ceph traffic. A meshed network setup
footnote:[Full Mesh Network for Ceph {webwiki-url}Full_Mesh_Network_for_Ceph_Server]
is also an option for three to five node clusters, if there are no 10+ Gbps
switches available.

[IMPORTANT]
The volume of traffic, especially during recovery, will interfere with other
services on the same network. In particular, the latency-sensitive {pve}
corosync cluster stack can be affected, resulting in possible loss of cluster
quorum. Moving the Ceph traffic to dedicated and physically separated networks
will avoid such interference, not only for corosync, but also for the networking
services provided by any virtual guests.

For estimating your bandwidth needs, you need to take the performance of your
disks into account. While a single HDD might not saturate a 1 Gbps link, multiple
HDD OSDs per node can already saturate 10 Gbps.
If modern NVMe-attached SSDs are used, a single one can already saturate 10 Gbps
of bandwidth, or more. For such high-performance setups we recommend at least
25 Gbps, while even 40 Gbps or 100+ Gbps might be required to utilize the full
performance potential of the underlying disks.

If unsure, we recommend using three (physically) separate networks for
high-performance setups:

* one very high bandwidth (25+ Gbps) network for Ceph (internal) cluster
  traffic.
* one high bandwidth (10+ Gbps) network for Ceph (public) traffic between the
  Ceph servers and Ceph clients. Depending on your needs, this can also be used
  to host the virtual guest traffic and the VM live-migration traffic.
* one medium bandwidth (1 Gbps) network used exclusively for the latency-sensitive
  corosync cluster communication.

.Disks
When planning the size of your Ceph cluster, it is important to take the
recovery time into consideration. Especially with small clusters, recovery
might take long. It is recommended that you use SSDs instead of HDDs in small
setups to reduce recovery time, minimizing the likelihood of a subsequent
failure event during recovery.

In general, SSDs will provide more IOPS than spinning disks. With this in mind,
in addition to the higher cost, it may make sense to implement a
xref:pve_ceph_device_classes[class based] separation of pools. Another way to
speed up OSDs is to use a faster disk as a journal or
DB/**W**rite-**A**head-**L**og device, see
xref:pve_ceph_osds[creating Ceph OSDs].
If a faster disk is used for multiple OSDs, a proper balance between OSDs and
the WAL / DB (or journal) disk must be selected, otherwise the faster disk
becomes the bottleneck for all linked OSDs.

Aside from the disk type, Ceph performs best with an evenly sized and evenly
distributed number of disks per node. For example, 4 x 500 GB disks within each
node are better than a mixed setup with a single 1 TB and three 250 GB disks.

You also need to balance OSD count and single OSD capacity. More capacity
allows you to increase storage density, but it also means that a single OSD
failure forces Ceph to recover more data at once.

.Avoid RAID
As Ceph handles data object redundancy and multiple parallel writes to disks
(OSDs) on its own, using a RAID controller normally doesn't improve
performance or availability. On the contrary, Ceph is designed to handle whole
disks on its own, without any abstraction in between. RAID controllers are not
designed for the Ceph workload and may complicate things and sometimes even
reduce performance, as their write and caching algorithms may interfere with
the ones from Ceph.

WARNING: Avoid RAID controllers. Use a host bus adapter (HBA) instead.

[[pve_ceph_install_wizard]]
Initial Ceph Installation & Configuration
-----------------------------------------

Using the Web-based Wizard
~~~~~~~~~~~~~~~~~~~~~~~~~~

[thumbnail="screenshot/gui-node-ceph-install.png"]

With {pve} you have the benefit of an easy-to-use installation wizard
for Ceph. Click on one of your cluster nodes and navigate to the Ceph
section in the menu tree. If Ceph is not already installed, you will see a
prompt offering to do so.

The wizard is divided into multiple sections, each of which needs to
finish successfully in order to use Ceph.

First you need to choose which Ceph version you want to install. Prefer the one
from your other nodes, or the newest if this is the first node on which you are
installing Ceph.

After starting the installation, the wizard will download and install all the
required packages from {pve}'s Ceph repository.
[thumbnail="screenshot/gui-node-ceph-install-wizard-step0.png"]

After finishing the installation step, you will need to create a configuration.
This step is only needed once per cluster, as this configuration is distributed
automatically to all remaining cluster members through {pve}'s clustered
xref:chapter_pmxcfs[configuration file system (pmxcfs)].

The configuration step includes the following settings:

[[pve_ceph_wizard_networks]]

* *Public Network:* This network will be used for public storage communication
  (e.g., for virtual machines using a Ceph RBD backed disk, or a CephFS mount),
  and communication between the different Ceph services. This setting is
  required.
  +
  Separating your Ceph traffic from the {pve} cluster communication (corosync),
  and possibly the front-facing (public) networks of your virtual guests, is
  highly recommended. Otherwise, Ceph's high-bandwidth IO traffic could cause
  interference with other low-latency dependent services.

[thumbnail="screenshot/gui-node-ceph-install-wizard-step2.png"]

* *Cluster Network:* Specify to separate the xref:pve_ceph_osds[OSD] replication
  and heartbeat traffic as well. This setting is optional.
  +
  Using a physically separated network is recommended, as it will relieve the
  Ceph public and the virtual guests' network, while also providing significant
  Ceph performance improvements.
  +
  The Ceph cluster network can be configured and moved to another physically
  separated network at a later time.

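Both settings correspond to the standard Ceph network options. As a rough
sketch only (the configuration file is generated for you and the subnets here
are placeholders), the relevant part of `/etc/pve/ceph.conf` could then contain
something like:

----
[global]
     cluster_network = 10.10.20.0/24
     public_network = 10.10.10.0/24
----
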
You have two more options which are considered advanced and therefore should
only be changed if you know what you are doing.

* *Number of replicas*: Defines how often an object is replicated.
* *Minimum replicas*: Defines the minimum number of required replicas for I/O to
  be marked as complete.

Additionally, you need to choose your first monitor node. This step is required.

That's it. You should now see a success page as the last step, with further
instructions on how to proceed. Your system is now ready to start using Ceph.
To get started, you will need to create some additional xref:pve_ceph_monitors[monitors],
xref:pve_ceph_osds[OSDs] and at least one xref:pve_ceph_pools[pool].

The rest of this chapter will guide you through getting the most out of
your {pve} based Ceph setup. This includes the aforementioned tips and
more, such as xref:pveceph_fs[CephFS], which is a helpful addition to your
new Ceph cluster.

[[pve_ceph_install]]
CLI Installation of Ceph Packages
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

As an alternative to the recommended {pve} Ceph installation wizard available
in the web interface, you can use the following CLI command on each node:

[source,bash]
----
pveceph install
----

This sets up an `apt` package repository in
`/etc/apt/sources.list.d/ceph.list` and installs the required software.


Initial Ceph Configuration via CLI
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Use the {pve} Ceph installation wizard (recommended) or run the
following command on one node:

[source,bash]
----
pveceph init --network 10.10.10.0/24
----

This creates an initial configuration at `/etc/pve/ceph.conf` with a
dedicated network for Ceph. This file is automatically distributed to
all {pve} nodes, using xref:chapter_pmxcfs[pmxcfs]. The command also
creates a symbolic link at `/etc/ceph/ceph.conf`, which points to that file.
Thus, you can simply run Ceph commands without the need to specify a
configuration file.


[[pve_ceph_monitors]]
Ceph Monitor
------------

[thumbnail="screenshot/gui-ceph-monitor.png"]

The Ceph Monitor (MON)
footnote:[Ceph Monitor {cephdocs-url}/start/intro/]
maintains a master copy of the cluster map. For high availability, you need at
least 3 monitors. One monitor will already be installed if you
used the installation wizard. You won't need more than 3 monitors, as long
as your cluster is small to medium-sized. Only really large clusters will
require more than this.

[[pveceph_create_mon]]
Create Monitors
~~~~~~~~~~~~~~~

On each node where you want to place a monitor (three monitors are recommended),
create one by using the 'Ceph -> Monitor' tab in the GUI or run:

[source,bash]
----
pveceph mon create
----

[[pveceph_destroy_mon]]
Destroy Monitors
~~~~~~~~~~~~~~~~

To remove a Ceph Monitor via the GUI, first select a node in the tree view and
go to the **Ceph -> Monitor** panel. Select the MON and click the **Destroy**
button.

To remove a Ceph Monitor via the CLI, first connect to the node on which the MON
is running. Then execute the following command:
[source,bash]
----
pveceph mon destroy
----

NOTE: At least three Monitors are needed for quorum.


[[pve_ceph_manager]]
Ceph Manager
------------

The Manager daemon runs alongside the monitors. It provides an interface to
monitor the cluster. Since the release of Ceph Luminous, at least one ceph-mgr
footnote:[Ceph Manager {cephdocs-url}/mgr/] daemon is
required.

[[pveceph_create_mgr]]
Create Manager
~~~~~~~~~~~~~~

Multiple Managers can be installed, but only one Manager is active at any given
time.

[source,bash]
----
pveceph mgr create
----

NOTE: It is recommended to install the Ceph Manager on the monitor nodes. For
high availability, install more than one manager.


[[pveceph_destroy_mgr]]
Destroy Manager
~~~~~~~~~~~~~~~

To remove a Ceph Manager via the GUI, first select a node in the tree view and
go to the **Ceph -> Monitor** panel. Select the Manager and click the
**Destroy** button.

To remove a Ceph Manager via the CLI, first connect to the node on which the
Manager is running. Then execute the following command:
[source,bash]
----
pveceph mgr destroy
----

NOTE: While a manager is not a hard dependency, it is crucial for a Ceph cluster,
as it handles important features like PG-autoscaling, device health monitoring,
telemetry and more.

[[pve_ceph_osds]]
Ceph OSDs
---------

[thumbnail="screenshot/gui-ceph-osd-status.png"]

Ceph **O**bject **S**torage **D**aemons store objects for Ceph over the
network. It is recommended to use one OSD per physical disk.

[[pve_ceph_osd_create]]
Create OSDs
~~~~~~~~~~~

You can create an OSD either via the {pve} web interface or via the CLI using
`pveceph`. For example:

[source,bash]
----
pveceph osd create /dev/sd[X]
----

TIP: We recommend a Ceph cluster with at least three nodes and at least 12
OSDs, evenly distributed among the nodes.

If the disk was in use before (for example, for ZFS or as an OSD), you first need
to zap all traces of that usage. To remove the partition table, boot sector and
any other OSD leftover, you can use the following command:

[source,bash]
----
ceph-volume lvm zap /dev/sd[X] --destroy
----

WARNING: The above command will destroy all data on the disk!

.Ceph Bluestore

Starting with the Ceph Kraken release, a new Ceph OSD storage type was
introduced called Bluestore
footnote:[Ceph Bluestore https://ceph.com/community/new-luminous-bluestore/].
This is the default when creating OSDs since Ceph Luminous.

[source,bash]
----
pveceph osd create /dev/sd[X]
----

.Block.db and block.wal

If you want to use a separate DB/WAL device for your OSDs, you can specify it
through the '-db_dev' and '-wal_dev' options. The WAL is placed with the DB, if
not specified separately.

[source,bash]
----
pveceph osd create /dev/sd[X] -db_dev /dev/sd[Y] -wal_dev /dev/sd[Z]
----

You can directly choose the size of those with the '-db_size' and '-wal_size'
parameters respectively. If they are not given, the following values (in order)
will be used:

* bluestore_block_{db,wal}_size from Ceph configuration...
** ... database, section 'osd'
** ... database, section 'global'
** ... file, section 'osd'
** ... file, section 'global'
* 10% (DB)/1% (WAL) of OSD size

NOTE: The DB stores BlueStore's internal metadata, and the WAL is BlueStore's
internal journal or write-ahead log. It is recommended to use a fast SSD or
NVRAM for better performance.

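As an illustrative example, combining the options above to create an OSD with a
dedicated DB/WAL device and explicitly chosen sizes could look like the
following (the values are placeholders; check `pveceph osd create --help` for
the exact unit and limits the size parameters expect on your version):

[source,bash]
----
pveceph osd create /dev/sd[X] -db_dev /dev/sd[Y] -db_size 100 -wal_dev /dev/sd[Z] -wal_size 10
----
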
.Ceph Filestore

Before Ceph Luminous, Filestore was used as the default storage type for Ceph OSDs.
Starting with Ceph Nautilus, {pve} does not support creating such OSDs with
'pveceph' anymore. If you still want to create filestore OSDs, use
'ceph-volume' directly.

[source,bash]
----
ceph-volume lvm create --filestore --data /dev/sd[X] --journal /dev/sd[Y]
----

[[pve_ceph_osd_destroy]]
Destroy OSDs
~~~~~~~~~~~~

To remove an OSD via the GUI, first select a {PVE} node in the tree view and go
to the **Ceph -> OSD** panel. Then select the OSD to destroy and click the **OUT**
button. Once the OSD status has changed from `in` to `out`, click the **STOP**
button. Finally, after the status has changed from `up` to `down`, select
**Destroy** from the `More` drop-down menu.

To remove an OSD via the CLI, run the following commands.

[source,bash]
----
ceph osd out <ID>
systemctl stop ceph-osd@<ID>.service
----

NOTE: The first command instructs Ceph not to include the OSD in the data
distribution. The second command stops the OSD service. Until this time, no
data is lost.

The following command destroys the OSD. Specify the '-cleanup' option to
additionally destroy the partition table.

[source,bash]
----
pveceph osd destroy <ID>
----

WARNING: The above command will destroy all data on the disk!


[[pve_ceph_pools]]
Ceph Pools
----------

[thumbnail="screenshot/gui-ceph-pools.png"]

A pool is a logical group for storing objects. It holds a collection of objects,
known as **P**lacement **G**roups (`PG`, `pg_num`).


Create and Edit Pools
~~~~~~~~~~~~~~~~~~~~~

You can create and edit pools from the command line or the web interface of any
{pve} host under **Ceph -> Pools**.

When no options are given, we set a default of **128 PGs**, a **size of 3
replicas** and a **min_size of 2 replicas**, to ensure no data loss occurs if
any OSD fails.

WARNING: **Do not set a min_size of 1**. A replicated pool with min_size of 1
allows I/O on an object when it has only 1 replica, which could lead to data
loss, incomplete PGs or unfound objects.

It is advised that you either enable the PG-Autoscaler or calculate the PG
number based on your setup. You can find the formula and the PG calculator
footnote:[PG calculator https://web.archive.org/web/20210301111112/http://ceph.com/pgcalc/] online. From Ceph Nautilus
onward, you can change the number of PGs
footnoteref:[placement_groups,Placement Groups
{cephdocs-url}/rados/operations/placement-groups/] after the setup.

The PG autoscaler footnoteref:[autoscaler,Automated Scaling
{cephdocs-url}/rados/operations/placement-groups/#automated-scaling] can
automatically scale the PG count for a pool in the background. Setting the
`Target Size` or `Target Ratio` advanced parameters helps the PG-Autoscaler to
make better decisions.

.Example for creating a pool over the CLI
[source,bash]
----
pveceph pool create <pool-name> --add_storages
----

TIP: If you would also like to automatically define a storage for your
pool, keep the `Add as Storage' checkbox checked in the web interface, or use the
command-line option '--add_storages' at pool creation.

Pool Options
^^^^^^^^^^^^

[thumbnail="screenshot/gui-ceph-pool-create.png"]

The following options are available on pool creation, and partially also when
editing a pool.

Name:: The name of the pool. This must be unique and can't be changed afterwards.
Size:: The number of replicas per object. Ceph always tries to have this many
copies of an object. Default: `3`.
PG Autoscale Mode:: The automatic PG scaling mode footnoteref:[autoscaler] of
the pool. If set to `warn`, it produces a warning message when a pool
has a non-optimal PG count. Default: `warn`.
Add as Storage:: Configure a VM or container storage using the new pool.
Default: `true` (only visible on creation).

.Advanced Options
Min. Size:: The minimum number of replicas per object. Ceph will reject I/O on
the pool if a PG has fewer than this many replicas. Default: `2`.
Crush Rule:: The rule to use for mapping object placement in the cluster. These
rules define how data is placed within the cluster. See
xref:pve_ceph_device_classes[Ceph CRUSH & device classes] for information on
device-based rules.
# of PGs:: The number of placement groups footnoteref:[placement_groups] that
the pool should have at the beginning. Default: `128`.
Target Ratio:: The ratio of data that is expected in the pool. The PG
autoscaler uses the ratio relative to other ratio sets. It takes precedence
over the `target size` if both are set.
Target Size:: The estimated amount of data expected in the pool. The PG
autoscaler uses this size to estimate the optimal PG count.
Min. # of PGs:: The minimum number of placement groups. This setting is used to
fine-tune the lower bound of the PG count for that pool. The PG autoscaler
will not merge PGs below this threshold.

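Several of these options can also be set directly on the CLI at creation time
by combining the corresponding `pveceph pool create` parameters, for example
(illustrative values only; see `pveceph pool create --help` for all options):

[source,bash]
----
pveceph pool create <pool-name> --size 3 --min_size 2 --pg_autoscale_mode on --add_storages
----
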
Further information on Ceph pool handling can be found in the Ceph pool
operation footnote:[Ceph pool operation
{cephdocs-url}/rados/operations/pools/]
manual.


[[pve_ceph_ec_pools]]
Erasure Coded Pools
~~~~~~~~~~~~~~~~~~~

Erasure coding (EC) is a form of `forward error correction' code that allows
recovery from a certain amount of data loss. Erasure coded pools can offer
more usable space compared to replicated pools, but they do that for the price
of performance.

For comparison: in classic, replicated pools, multiple replicas of the data
are stored (`size`), while in an erasure coded pool, data is split into `k` data
chunks with additional `m` coding (checking) chunks. Those coding chunks can be
used to recreate data should data chunks be missing.

The number of coding chunks, `m`, defines how many OSDs can be lost without
losing any data. The total number of chunks stored per object is `k + m`.

Creating EC Pools
^^^^^^^^^^^^^^^^^

Erasure coded (EC) pools can be created with the `pveceph` CLI tooling.
Planning an EC pool needs to account for the fact that they work differently
than replicated pools.

The default `min_size` of an EC pool depends on the `m` parameter. If `m = 1`,
the `min_size` of the EC pool will be `k`. The `min_size` will be `k + 1` if
`m > 1`. The Ceph documentation recommends a conservative `min_size` of `k + 2`
footnote:[Ceph Erasure Coded Pool Recovery
{cephdocs-url}/rados/operations/erasure-code/#erasure-coded-pool-recovery].

If there are fewer than `min_size` OSDs available, any IO to the pool will be
blocked until there are enough OSDs available again.

NOTE: When planning an erasure coded pool, keep an eye on the `min_size`, as it
defines how many OSDs need to be available. Otherwise, IO will be blocked.

For example, an EC pool with `k = 2` and `m = 1` will have `size = 3`,
`min_size = 2` and will stay operational if one OSD fails. If the pool is
configured with `k = 2`, `m = 2`, it will have a `size = 4` and `min_size = 3`
and stay operational if one OSD is lost.

To create a new EC pool, run the following command:

[source,bash]
----
pveceph pool create <pool-name> --erasure-coding k=2,m=1
----

Optional parameters are `failure-domain` and `device-class`. If you
need to change any EC profile settings used by the pool, you will have to
create a new pool with a new profile.

This will create a new EC pool plus the needed replicated pool to store the RBD
omap and other metadata. In the end, there will be a `<pool name>-data` and
`<pool name>-metadata` pool. The default behavior is to create a matching storage
configuration as well. If that behavior is not wanted, you can disable it by
providing the `--add_storages 0` parameter. When configuring the storage
configuration manually, keep in mind that the `data-pool` parameter needs to be
set. Only then will the EC pool be used to store the data objects; see the
storage example further below.

NOTE: The optional parameters `--size`, `--min_size` and `--crush_rule` will be
used for the replicated metadata pool, but not for the erasure coded data pool.
If you need to change the `min_size` on the data pool, you can do it later.
The `size` and `crush_rule` parameters cannot be changed on erasure coded
pools.

If there is a need to further customize the EC profile, you can do so by
creating it with the Ceph tools directly footnote:[Ceph Erasure Code Profile
{cephdocs-url}/rados/operations/erasure-code/#erasure-code-profiles], and
specify the profile to use with the `profile` parameter.

For example:
[source,bash]
----
pveceph pool create <pool-name> --erasure-coding profile=<profile-name>
----

Adding EC Pools as Storage
^^^^^^^^^^^^^^^^^^^^^^^^^^

You can add an already existing EC pool as storage to {pve}. It works the same
way as adding an `RBD` pool but requires the extra `data-pool` option.

[source,bash]
----
pvesm add rbd <storage-name> --pool <replicated-pool> --data-pool <ec-pool>
----

TIP: Do not forget to add the `keyring` and `monhost` options for any external
Ceph clusters not managed by the local {pve} cluster.

Destroy Pools
~~~~~~~~~~~~~

To destroy a pool via the GUI, select a node in the tree view and go to the
**Ceph -> Pools** panel. Select the pool to destroy and click the **Destroy**
button. To confirm the destruction of the pool, you need to enter the pool name.

Run the following command to destroy a pool. Specify the '-remove_storages'
option to also remove the associated storage.

[source,bash]
----
pveceph pool destroy <name>
----

NOTE: Pool deletion runs in the background and can take some time.
You will notice the data usage in the cluster decreasing throughout this
process.


PG Autoscaler
~~~~~~~~~~~~~

The PG autoscaler allows the cluster to consider the amount of (expected) data
stored in each pool and to choose the appropriate pg_num values automatically.
It is available since Ceph Nautilus.

You may need to activate the PG autoscaler module before adjustments can take
effect.

[source,bash]
----
ceph mgr module enable pg_autoscaler
----

The autoscaler is configured on a per pool basis and has the following modes:

[horizontal]
warn:: A health warning is issued if the suggested `pg_num` value differs too
much from the current value.
on:: The `pg_num` is adjusted automatically with no need for any manual
interaction.
off:: No automatic `pg_num` adjustments are made, and no warning will be issued
if the PG count is not optimal.

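For example, to switch a single existing pool to fully automatic scaling with
the Ceph tooling directly (assuming a pool named `<pool-name>`), you can run:

[source,bash]
----
ceph osd pool set <pool-name> pg_autoscale_mode on
----
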
40e6c806 750The scaling factor can be adjusted to facilitate future data storage with the
47d62c84
DW
751`target_size`, `target_size_ratio` and the `pg_num_min` options.
752
753WARNING: By default, the autoscaler considers tuning the PG count of a pool if
754it is off by a factor of 3. This will lead to a considerable shift in data
755placement and might introduce a high load on the cluster.
756
757You can find a more in-depth introduction to the PG autoscaler on Ceph's Blog -
758https://ceph.io/rados/new-in-nautilus-pg-merging-and-autotuning/[New in
759Nautilus: PG merging and autotuning].
760
761
[[pve_ceph_device_classes]]
Ceph CRUSH & device classes
---------------------------

[thumbnail="screenshot/gui-ceph-config.png"]

The CRUSH (**C**ontrolled **R**eplication **U**nder **S**calable **H**ashing)
algorithm footnote:[CRUSH
https://ceph.com/wp-content/uploads/2016/08/weil-crush-sc06.pdf] is at the
foundation of Ceph.

CRUSH calculates where to store and retrieve data from. This has the
advantage that no central indexing service is needed. CRUSH works using a map of
OSDs, buckets (device locations) and rulesets (data replication) for pools.

NOTE: Further information can be found in the Ceph documentation, under the
section CRUSH map footnote:[CRUSH map {cephdocs-url}/rados/operations/crush-map/].

This map can be altered to reflect different replication hierarchies. The object
replicas can be separated (e.g., failure domains), while maintaining the desired
distribution.

A common configuration is to use different classes of disks for different Ceph
pools. For this reason, Ceph introduced device classes with Luminous, to
accommodate the need for easy ruleset generation.

The device classes can be seen in the 'ceph osd tree' output. These classes
represent their own root bucket, which can be seen with the below command.

[source, bash]
----
ceph osd crush tree --show-shadow
----

Example output from the above command:

[source, bash]
----
ID  CLASS WEIGHT  TYPE NAME
-16 nvme  2.18307 root default~nvme
-13 nvme  0.72769     host sumi1~nvme
 12 nvme  0.72769         osd.12
-14 nvme  0.72769     host sumi2~nvme
 13 nvme  0.72769         osd.13
-15 nvme  0.72769     host sumi3~nvme
 14 nvme  0.72769         osd.14
 -1       7.70544 root default
 -3       2.56848     host sumi1
 12 nvme  0.72769         osd.12
 -5       2.56848     host sumi2
 13 nvme  0.72769         osd.13
 -7       2.56848     host sumi3
 14 nvme  0.72769         osd.14
----

To instruct a pool to only distribute objects on a specific device class, you
first need to create a ruleset for the device class:

[source, bash]
----
ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <class>
----

[frame="none",grid="none", align="left", cols="30%,70%"]
|===
|<rule-name>|name of the rule, to connect with a pool (seen in GUI & CLI)
|<root>|which crush root it should belong to (default Ceph root "default")
|<failure-domain>|at which failure-domain the objects should be distributed (usually host)
|<class>|what type of OSD backing store to use (e.g., nvme, ssd, hdd)
|===

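For example, a hypothetical rule that restricts placement to NVMe-backed OSDs
in the default root, with `host` as the failure domain, could be created like
this:

[source, bash]
----
ceph osd crush rule create-replicated nvme-only default host nvme
----
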
Once the rule is in the CRUSH map, you can tell a pool to use the ruleset.

[source, bash]
----
ceph osd pool set <pool-name> crush_rule <rule-name>
----

TIP: If the pool already contains objects, these must be moved accordingly.
Depending on your setup, this may introduce a big performance impact on your
cluster. As an alternative, you can create a new pool and move disks separately.


Ceph Client
-----------

[thumbnail="screenshot/gui-ceph-log.png"]

Following the setup from the previous sections, you can configure {pve} to use
such pools to store VM and Container images. Simply use the GUI to add a new
`RBD` storage (see section
xref:ceph_rados_block_devices[Ceph RADOS Block Devices (RBD)]).

You also need to copy the keyring to a predefined location for an external Ceph
cluster. If Ceph is installed on the Proxmox nodes themselves, then this will be
done automatically.

NOTE: The filename needs to be `<storage_id>` + `.keyring`, where `<storage_id>` is
the expression after 'rbd:' in `/etc/pve/storage.cfg`. In the following example,
`my-ceph-storage` is the `<storage_id>`:

[source,bash]
----
mkdir /etc/pve/priv/ceph
cp /etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/my-ceph-storage.keyring
----

[[pveceph_fs]]
CephFS
------

Ceph also provides a filesystem, which runs on top of the same object storage as
RADOS block devices do. A **M**eta**d**ata **S**erver (`MDS`) is used to map the
RADOS backed objects to files and directories, allowing Ceph to provide a
POSIX-compliant, replicated filesystem. This allows you to easily configure a
clustered, highly available, shared filesystem. Ceph's Metadata Servers
guarantee that files are evenly distributed over the entire Ceph cluster. As a
result, even cases of high load will not overwhelm a single host, which can be
an issue with traditional shared filesystem approaches, for example `NFS`.

[thumbnail="screenshot/gui-node-ceph-cephfs-panel.png"]

{pve} supports both creating a hyper-converged CephFS and using an existing
xref:storage_cephfs[CephFS as storage] to save backups, ISO files, and container
templates.


[[pveceph_fs_mds]]
Metadata Server (MDS)
~~~~~~~~~~~~~~~~~~~~~

CephFS needs at least one Metadata Server to be configured and running, in order
to function. You can create an MDS through the {pve} web GUI's `Node
-> CephFS` panel or from the command line with:

----
pveceph mds create
----

Multiple metadata servers can be created in a cluster, but with the default
settings, only one can be active at a time. If an MDS or its node becomes
unresponsive (or crashes), another `standby` MDS will get promoted to `active`.
You can speed up the handover between the active and standby MDS by using
the 'hotstandby' parameter on creation, or, if you have already created it,
you may set/add:

----
mds standby replay = true
----

in the respective MDS section of `/etc/pve/ceph.conf`. With this enabled, the
specified MDS will remain in a `warm` state, polling the active one, so that it
can take over faster in case of any issues.

NOTE: This active polling will have an additional performance impact on your
system and the active `MDS`.

.Multiple Active MDS

Since Luminous (12.2.x) you can have multiple active metadata servers
running at once, but this is normally only useful if you have a high number of
clients running in parallel. Otherwise the `MDS` is rarely the bottleneck in a
system. If you want to set this up, please refer to the Ceph documentation
footnote:[Configuring multiple active MDS daemons
{cephdocs-url}/cephfs/multimds/].

[[pveceph_fs_create]]
Create CephFS
~~~~~~~~~~~~~

With {pve}'s integration of CephFS, you can easily create a CephFS using the
web interface, CLI or an external API interface. Some prerequisites are required
for this to work:

.Prerequisites for a successful CephFS setup:
- xref:pve_ceph_install[Install Ceph packages] - if this was already done some
time ago, you may want to rerun it on an up-to-date system to
ensure that all CephFS related packages get installed.
- xref:pve_ceph_monitors[Setup Monitors]
- xref:pve_ceph_osds[Setup your OSDs]
- xref:pveceph_fs_mds[Setup at least one MDS]

After this is complete, you can simply create a CephFS through
either the Web GUI's `Node -> CephFS` panel or the command-line tool `pveceph`,
for example:

----
pveceph fs create --pg_num 128 --add-storage
----

This creates a CephFS named 'cephfs', using a pool for its data named
'cephfs_data' with '128' placement groups and a pool for its metadata named
'cephfs_metadata' with one quarter of the data pool's placement groups (`32`).
Check the xref:pve_ceph_pools[{pve} managed Ceph pool chapter] or visit the
Ceph documentation for more information regarding an appropriate placement group
number (`pg_num`) for your setup footnoteref:[placement_groups].
Additionally, the '--add-storage' parameter will add the CephFS to the {pve}
storage configuration after it has been created successfully.

Destroy CephFS
~~~~~~~~~~~~~~

WARNING: Destroying a CephFS will render all of its data unusable. This cannot be
undone!

To completely and gracefully remove a CephFS, the following steps are
necessary:

* Disconnect every non-{PVE} client (e.g. unmount the CephFS in guests).
* Disable all related CephFS {PVE} storage entries (to prevent it from being
  automatically mounted).
* Remove all used resources from guests (e.g. ISOs) that are on the CephFS you
  want to destroy.
* Unmount the CephFS storages on all cluster nodes manually with
+
----
umount /mnt/pve/<STORAGE-NAME>
----
+
Where `<STORAGE-NAME>` is the name of the CephFS storage in your {PVE}.

* Now make sure that no metadata server (`MDS`) is running for that CephFS,
  either by stopping or destroying them. This can be done through the web
  interface or via the command-line interface; for the latter, you would issue
  the following command:
+
----
pveceph stop --service mds.NAME
----
+
to stop them, or
+
----
pveceph mds destroy NAME
----
+
to destroy them.
+
Note that standby servers will automatically be promoted to active when an
active `MDS` is stopped or removed, so it is best to first stop all standby
servers.

* Now you can destroy the CephFS with
+
----
pveceph fs destroy NAME --remove-storages --remove-pools
----
+
This will automatically destroy the underlying Ceph pools as well as remove
the storages from the {pve} configuration.

After these steps, the CephFS should be completely removed and, if you have
other CephFS instances, the stopped metadata servers can be started again
to act as standbys.

Ceph Maintenance
----------------

Replace OSDs
~~~~~~~~~~~~

One of the most common maintenance tasks in Ceph is to replace the disk of an
OSD. If a disk is already in a failed state, then you can go ahead and run
through the steps in xref:pve_ceph_osd_destroy[Destroy OSDs]. Ceph will recreate
the lost copies on the remaining OSDs if possible. This rebalancing will start as
soon as an OSD failure is detected or an OSD was actively stopped.

NOTE: With the default size/min_size (3/2) of a pool, recovery only starts when
`size + 1` nodes are available. The reason for this is that the Ceph object
balancer xref:pve_ceph_device_classes[CRUSH] defaults to a full node as
`failure domain'.

To replace a functioning disk from the GUI, go through the steps in
xref:pve_ceph_osd_destroy[Destroy OSDs]. The only addition is to wait until
the cluster shows 'HEALTH_OK' before stopping the OSD to destroy it.

On the command line, use the following commands:

----
ceph osd out osd.<id>
----

You can check with the command below if the OSD can be safely removed.

----
ceph osd safe-to-destroy osd.<id>
----

Once the above check tells you that it is safe to remove the OSD, you can
continue with the following commands:

----
systemctl stop ceph-osd@<id>.service
pveceph osd destroy <id>
----

Replace the old disk with the new one and use the same procedure as described
in xref:pve_ceph_osd_create[Create OSDs].

Trim/Discard
~~~~~~~~~~~~

It is good practice to run 'fstrim' (discard) regularly on VMs and containers.
This releases data blocks that the filesystem isn't using anymore. It reduces
data usage and resource load. Most modern operating systems issue such discard
commands to their disks regularly. You only need to ensure that the Virtual
Machines enable the xref:qm_hard_disk_discard[disk discard option].

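A minimal example, run inside a Linux guest, which trims all mounted
filesystems that support discard and prints how much was released:

----
fstrim -av
----
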
[[pveceph_scrub]]
Scrub & Deep Scrub
~~~~~~~~~~~~~~~~~~

Ceph ensures data integrity by 'scrubbing' placement groups. Ceph checks every
object in a PG for its health. There are two forms of scrubbing: daily
cheap metadata checks and weekly deep data checks. The weekly deep scrub reads
the objects and uses checksums to ensure data integrity. If a running scrub
interferes with business (performance) needs, you can adjust the time when
scrubs footnote:[Ceph scrubbing {cephdocs-url}/rados/configuration/osd-config-ref/#scrubbing]
are executed.


Ceph Monitoring and Troubleshooting
-----------------------------------

It is important to continuously monitor the health of a Ceph deployment from the
beginning, either by using the Ceph tools or by accessing
the status through the {pve} link:api-viewer/index.html[API].

The following Ceph commands can be used to see if the cluster is healthy
('HEALTH_OK'), if there are warnings ('HEALTH_WARN'), or even errors
('HEALTH_ERR'). If the cluster is in an unhealthy state, the status commands
below will also give you an overview of the current events and actions to take.

----
# single time output
pve# ceph -s
# continuously output status changes (press CTRL+C to stop)
pve# ceph -w
----

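If warnings or errors are reported, the following command lists each health
check with a more detailed message, which is usually a good starting point for
troubleshooting:

----
pve# ceph health detail
----
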
40e6c806
DW
1102To get a more detailed view, every Ceph service has a log file under
1103`/var/log/ceph/`. If more detail is required, the log level can be
b46a49ed 1104adjusted footnote:[Ceph log and debugging {cephdocs-url}/rados/troubleshooting/log-and-debug/].
6ff32926
AA
1105
1106You can find more information about troubleshooting
b46a49ed 1107footnote:[Ceph troubleshooting {cephdocs-url}/rados/troubleshooting/]
620d6725 1108a Ceph cluster on the official website.
6ff32926
AA
1109
1110
0840a663
DM
1111ifdef::manvolnum[]
1112include::pve-copyright.adoc[]
1113endif::manvolnum[]