[[chapter_pveceph]]
ifdef::manvolnum[]
pveceph(1)
==========
:pve-toplevel:

NAME
----

pveceph - Manage Ceph Services on Proxmox VE Nodes

SYNOPSIS
--------

include::pveceph.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
Deploy Hyper-Converged Ceph Cluster
===================================
:pve-toplevel:
endif::manvolnum[]

[thumbnail="screenshot/gui-ceph-status.png"]

{pve} unifies your compute and storage systems, that is, you can use the same
physical nodes within a cluster for both computing (processing VMs and
containers) and replicated storage. The traditional silos of compute and
storage resources can be wrapped up into a single hyper-converged appliance.
Separate storage networks (SANs) and connections via network attached storage
(NAS) disappear. With the integration of Ceph, an open source software-defined
storage platform, {pve} has the ability to run and manage Ceph storage directly
on the hypervisor nodes.

Ceph is a distributed object store and file system designed to provide
excellent performance, reliability and scalability.

.Some advantages of Ceph on {pve} are:
- Easy setup and management via CLI and GUI
- Thin provisioning
- Snapshot support
- Self healing
- Scalable to the exabyte level
- Set up pools with different performance and redundancy characteristics
- Data is replicated, making it fault tolerant
- Runs on commodity hardware
- No need for hardware RAID controllers
- Open source

For small to medium-sized deployments, it is possible to install a Ceph server for
RADOS Block Devices (RBD) directly on your {pve} cluster nodes (see
xref:ceph_rados_block_devices[Ceph RADOS Block Devices (RBD)]). Recent
hardware has a lot of CPU power and RAM, so running storage services
and VMs on the same node is possible.

To simplify management, we provide 'pveceph' - a tool for installing and
managing {ceph} services on {pve} nodes.

.Ceph consists of multiple Daemons, for use as RBD storage:
- Ceph Monitor (ceph-mon)
- Ceph Manager (ceph-mgr)
- Ceph OSD (ceph-osd; Object Storage Daemon)

TIP: We highly recommend that you get familiar with Ceph
footnote:[Ceph intro {cephdocs-url}/start/intro/],
its architecture
footnote:[Ceph architecture {cephdocs-url}/architecture/]
and vocabulary
footnote:[Ceph glossary {cephdocs-url}/glossary].


Precondition
------------

To build a hyper-converged Proxmox + Ceph Cluster, you must use at least
three (preferably identical) servers for the setup.

Also check the recommendations from
{cephdocs-url}/start/hardware-recommendations/[Ceph's website].

.CPU
A high CPU core frequency reduces latency and should be preferred. As a simple
rule of thumb, you should assign a CPU core (or thread) to each Ceph service to
provide enough resources for stable and durable Ceph performance.

.Memory
Especially in a hyper-converged setup, the memory consumption needs to be
carefully monitored. In addition to the predicted memory usage of virtual
machines and containers, you must also account for having enough memory
available for Ceph to provide excellent and stable performance.

As a rule of thumb, for roughly **1 TiB of data, 1 GiB of memory** will be used
by an OSD, especially during recovery, rebalancing or backfilling. For example,
an OSD backing 4 TiB of data should be expected to use around 4 GiB of memory.

The daemon itself will use additional memory. The Bluestore backend of the
daemon requires by default **3-5 GiB of memory** (adjustable). In contrast, the
legacy Filestore backend uses the OS page cache and the memory consumption is
generally related to the number of PGs of an OSD daemon.

.Network
We recommend a network bandwidth of at least 10 Gbps, used exclusively for
Ceph. A meshed network setup
footnote:[Full Mesh Network for Ceph {webwiki-url}Full_Mesh_Network_for_Ceph_Server]
is also an option if there are no 10 Gbps switches available.

The volume of traffic, especially during recovery, will interfere with other
services on the same network and may even break the {pve} cluster stack.

Furthermore, you should estimate your bandwidth needs. While one HDD might not
saturate a 1 Gbps link, multiple HDD OSDs per node can, and modern NVMe SSDs will
even saturate 10 Gbps of bandwidth quickly. Deploying a network capable of even
more bandwidth will ensure that this isn't your bottleneck and won't be anytime
soon. 25, 40 or even 100 Gbps are possible.

.Disks
When planning the size of your Ceph cluster, it is important to take the
recovery time into consideration. Especially with small clusters, recovery
can take a long time. It is recommended that you use SSDs instead of HDDs in small
setups to reduce recovery time, minimizing the likelihood of a subsequent
failure event during recovery.

In general, SSDs will provide more IOPS than spinning disks. With this in mind,
in addition to the higher cost, it may make sense to implement a
xref:pve_ceph_device_classes[class based] separation of pools. Another way to
speed up OSDs is to use a faster disk as a journal or
DB/**W**rite-**A**head-**L**og device, see xref:pve_ceph_osds[creating Ceph
OSDs]. If a faster disk is used for multiple OSDs, a proper balance between the
OSD and WAL / DB (or journal) disk must be selected, otherwise the faster disk
becomes the bottleneck for all linked OSDs.

Aside from the disk type, Ceph performs best with an evenly sized and evenly
distributed amount of disks per node. For example, 4 x 500 GB disks within each
node are better than a mixed setup with a single 1 TB and three 250 GB disks.

You also need to balance OSD count and single OSD capacity. More capacity
allows you to increase storage density, but it also means that a single OSD
failure forces Ceph to recover more data at once.

.Avoid RAID
As Ceph handles data object redundancy and multiple parallel writes to disks
(OSDs) on its own, using a RAID controller normally doesn't improve
performance or availability. On the contrary, Ceph is designed to handle whole
disks on its own, without any abstraction in between. RAID controllers are not
designed for the Ceph workload and may complicate things and sometimes even
reduce performance, as their write and caching algorithms may interfere with
the ones from Ceph.

WARNING: Avoid RAID controllers. Use a host bus adapter (HBA) instead.

NOTE: The above recommendations should be seen as rough guidance for choosing
hardware. Therefore, it is still essential to adapt them to your specific needs.
You should test your setup and monitor health and performance continuously.

[[pve_ceph_install_wizard]]
Initial Ceph Installation & Configuration
-----------------------------------------

[thumbnail="screenshot/gui-node-ceph-install.png"]

With {pve} you have the benefit of an easy to use installation wizard
for Ceph. Click on one of your cluster nodes and navigate to the Ceph
section in the menu tree. If Ceph is not already installed, you will see a
prompt offering to do so.

The wizard is divided into multiple sections, each of which needs to
finish successfully, in order to use Ceph. After starting the installation,
the wizard will download and install all the required packages from {pve}'s Ceph
repository.

After finishing the first step, you will need to create a configuration.
This step is only needed once per cluster, as this configuration is distributed
automatically to all remaining cluster members through {pve}'s clustered
xref:chapter_pmxcfs[configuration file system (pmxcfs)].

The configuration step includes the following settings:

* *Public Network:* You can set up a dedicated network for Ceph. This
setting is required. Separating your Ceph traffic is highly recommended.
Otherwise, it could cause trouble with other latency-dependent services,
for example, cluster communication may decrease Ceph's performance.

[thumbnail="screenshot/gui-node-ceph-install-wizard-step2.png"]

* *Cluster Network:* As an optional step, you can go even further and
separate the xref:pve_ceph_osds[OSD] replication & heartbeat traffic
as well. This will relieve the public network and could lead to
significant performance improvements, especially in large clusters.

You have two more options which are considered advanced and therefore
should only be changed if you know what you are doing.

* *Number of replicas*: Defines how often an object is replicated.
* *Minimum replicas*: Defines the minimum number of required replicas
for I/O to be marked as complete.

Additionally, you need to choose your first monitor node. This step is required.

That's it. You should now see a success page as the last step, with further
instructions on how to proceed. Your system is now ready to start using Ceph.
To get started, you will need to create some additional xref:pve_ceph_monitors[monitors],
xref:pve_ceph_osds[OSDs] and at least one xref:pve_ceph_pools[pool].

The rest of this chapter will guide you through getting the most out of
your {pve} based Ceph setup. This includes the aforementioned tips and
more, such as xref:pveceph_fs[CephFS], which is a helpful addition to your
new Ceph cluster.

[[pve_ceph_install]]
Installation of Ceph Packages
-----------------------------
Use the {pve} Ceph installation wizard (recommended) or run the following
command on each node:

[source,bash]
----
pveceph install
----

This sets up an `apt` package repository in
`/etc/apt/sources.list.d/ceph.list` and installs the required software.


Create initial Ceph configuration
---------------------------------

[thumbnail="screenshot/gui-ceph-config.png"]

Use the {pve} Ceph installation wizard (recommended) or run the
following command on one node:

[source,bash]
----
pveceph init --network 10.10.10.0/24
----
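
If you also want a separate cluster network for OSD replication traffic, it can
be passed at initialization time. A minimal sketch, assuming your `pveceph`
version provides the '--cluster-network' option and using placeholder subnets:

[source,bash]
----
pveceph init --network 10.10.10.0/24 --cluster-network 10.10.20.0/24
----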

This creates an initial configuration at `/etc/pve/ceph.conf` with a
dedicated network for Ceph. This file is automatically distributed to
all {pve} nodes, using xref:chapter_pmxcfs[pmxcfs]. The command also
creates a symbolic link at `/etc/ceph/ceph.conf`, which points to that file.
Thus, you can simply run Ceph commands without the need to specify a
configuration file.


[[pve_ceph_monitors]]
Ceph Monitor
-----------
The Ceph Monitor (MON)
footnote:[Ceph Monitor {cephdocs-url}/start/intro/]
maintains a master copy of the cluster map. For high availability, you need at
least 3 monitors. One monitor will already be installed if you
used the installation wizard. You won't need more than 3 monitors, as long
as your cluster is small to medium-sized. Only really large clusters will
require more than this.


[[pveceph_create_mon]]
Create Monitors
~~~~~~~~~~~~~~~

[thumbnail="screenshot/gui-ceph-monitor.png"]

On each node where you want to place a monitor (three monitors are recommended),
create one by using the 'Ceph -> Monitor' tab in the GUI or run:


[source,bash]
----
pveceph mon create
----

[[pveceph_destroy_mon]]
Destroy Monitors
~~~~~~~~~~~~~~~~

To remove a Ceph Monitor via the GUI, first select a node in the tree view and
go to the **Ceph -> Monitor** panel. Select the MON and click the **Destroy**
button.

To remove a Ceph Monitor via the CLI, first connect to the node on which the MON
is running. Then execute the following command:
[source,bash]
----
pveceph mon destroy
----

NOTE: At least three Monitors are needed for quorum.
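
After removing a monitor, it can be reassuring to confirm that the remaining
MONs still form a quorum, using the standard Ceph status tooling:

[source,bash]
----
# prints the monitor map and the current quorum members
ceph mon stat
----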


[[pve_ceph_manager]]
Ceph Manager
------------

The Manager daemon runs alongside the monitors. It provides an interface to
monitor the cluster. Since the release of Ceph Luminous, at least one ceph-mgr
footnote:[Ceph Manager {cephdocs-url}/mgr/] daemon is
required.

[[pveceph_create_mgr]]
Create Manager
~~~~~~~~~~~~~~

Multiple Managers can be installed, but only one Manager is active at any given
time.

[source,bash]
----
pveceph mgr create
----

NOTE: It is recommended to install the Ceph Manager on the monitor nodes. For
high availability, install more than one manager.


[[pveceph_destroy_mgr]]
Destroy Manager
~~~~~~~~~~~~~~~

To remove a Ceph Manager via the GUI, first select a node in the tree view and
go to the **Ceph -> Monitor** panel. Select the Manager and click the
**Destroy** button.

To remove a Ceph Manager via the CLI, first connect to the node on which the
Manager is running. Then execute the following command:
[source,bash]
----
pveceph mgr destroy
----

NOTE: While a manager is not a hard dependency, it is crucial for a Ceph cluster,
as it handles important features like PG-autoscaling, device health monitoring,
telemetry and more.

[[pve_ceph_osds]]
Ceph OSDs
---------
Ceph **O**bject **S**torage **D**aemons store objects for Ceph over the
network. It is recommended to use one OSD per physical disk.

NOTE: By default, an object is 4 MiB in size.

[[pve_ceph_osd_create]]
Create OSDs
~~~~~~~~~~~

[thumbnail="screenshot/gui-ceph-osd-status.png"]

You can create an OSD either via the {pve} web-interface or via the CLI using
`pveceph`. For example:

[source,bash]
----
pveceph osd create /dev/sd[X]
----

TIP: We recommend a Ceph cluster with at least three nodes and at least 12
OSDs, evenly distributed among the nodes.

If the disk was in use before (for example, for ZFS or as an OSD) you first need
to zap all traces of that usage. To remove the partition table, boot sector and
any other OSD leftover, you can use the following command:

[source,bash]
----
ceph-volume lvm zap /dev/sd[X] --destroy
----

WARNING: The above command will destroy all data on the disk!

.Ceph Bluestore

Starting with the Ceph Kraken release, a new Ceph OSD storage type called
Bluestore
footnote:[Ceph Bluestore https://ceph.com/community/new-luminous-bluestore/]
was introduced. This is the default when creating OSDs since Ceph Luminous.

[source,bash]
----
pveceph osd create /dev/sd[X]
----

.Block.db and block.wal

If you want to use a separate DB/WAL device for your OSDs, you can specify it
through the '-db_dev' and '-wal_dev' options. The WAL is placed with the DB, if
not specified separately.

[source,bash]
----
pveceph osd create /dev/sd[X] -db_dev /dev/sd[Y] -wal_dev /dev/sd[Z]
----

You can directly choose the size of those with the '-db_size' and '-wal_size'
parameters respectively. If they are not given, the following values (in order)
will be used:

* bluestore_block_{db,wal}_size from Ceph configuration...
** ... database, section 'osd'
** ... database, section 'global'
** ... file, section 'osd'
** ... file, section 'global'
* 10% (DB)/1% (WAL) of OSD size

NOTE: The DB stores BlueStore's internal metadata, and the WAL is BlueStore's
internal journal or write-ahead log. It is recommended to use a fast SSD or
NVRAM for better performance.
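
For example, to place only the DB on a faster device with an explicit size, a
call might look like the following sketch (the 100 GiB value is purely
illustrative; '-db_size' is expected to take a size in GiB):

[source,bash]
----
pveceph osd create /dev/sd[X] -db_dev /dev/sd[Y] -db_size 100
----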


.Ceph Filestore

Before Ceph Luminous, Filestore was used as the default storage type for Ceph OSDs.
Starting with Ceph Nautilus, {pve} does not support creating such OSDs with
'pveceph' anymore. If you still want to create filestore OSDs, use
'ceph-volume' directly.

[source,bash]
----
ceph-volume lvm create --filestore --data /dev/sd[X] --journal /dev/sd[Y]
----

[[pve_ceph_osd_destroy]]
Destroy OSDs
~~~~~~~~~~~~

To remove an OSD via the GUI, first select a {PVE} node in the tree view and go
to the **Ceph -> OSD** panel. Then select the OSD to destroy and click the **OUT**
button. Once the OSD status has changed from `in` to `out`, click the **STOP**
button. Finally, after the status has changed from `up` to `down`, select
**Destroy** from the `More` drop-down menu.

To remove an OSD via the CLI, run the following commands.

[source,bash]
----
ceph osd out <ID>
systemctl stop ceph-osd@<ID>.service
----

NOTE: The first command instructs Ceph not to include the OSD in the data
distribution. The second command stops the OSD service. Up to this point, no
data is lost.

The following command destroys the OSD. Specify the '-cleanup' option to
additionally destroy the partition table.

[source,bash]
----
pveceph osd destroy <ID>
----

WARNING: The above command will destroy all data on the disk!


[[pve_ceph_pools]]
Ceph Pools
----------
A pool is a logical group for storing objects. It holds a collection of objects,
known as **P**lacement **G**roups (`PG`, `pg_num`).


Create and Edit Pools
~~~~~~~~~~~~~~~~~~~~~

You can create pools from the command line or the web-interface of any {pve}
host under **Ceph -> Pools**.

[thumbnail="screenshot/gui-ceph-pools.png"]

When no options are given, we set a default of **128 PGs**, a **size of 3
replicas** and a **min_size of 2 replicas**, to ensure no data loss occurs if
any OSD fails.

WARNING: **Do not set a min_size of 1**. A replicated pool with a min_size of 1
allows I/O on an object when it has only 1 replica, which could lead to data
loss, incomplete PGs or unfound objects.

It is advised that you calculate the PG number based on your setup. You can
find the formula and the PG calculator footnote:[PG calculator
https://ceph.com/pgcalc/] online. From Ceph Nautilus onward, you can change the
number of PGs footnoteref:[placement_groups,Placement Groups
{cephdocs-url}/rados/operations/placement-groups/] after the setup.

In addition to manual adjustment, the PG autoscaler
footnoteref:[autoscaler,Automated Scaling
{cephdocs-url}/rados/operations/placement-groups/#automated-scaling] can
automatically scale the PG count for a pool in the background.

.Example for creating a pool over the CLI
[source,bash]
----
pveceph pool create <name> --add_storages
----

TIP: If you would also like to automatically define a storage for your
pool, keep the `Add as Storage' checkbox checked in the web-interface, or use the
command line option '--add_storages' at pool creation.

.Base Options
Name:: The name of the pool. This must be unique and can't be changed afterwards.
Size:: The number of replicas per object. Ceph always tries to have this many
copies of an object. Default: `3`.
PG Autoscale Mode:: The automatic PG scaling mode footnoteref:[autoscaler] of
the pool. If set to `warn`, it produces a warning message when a pool
has a non-optimal PG count. Default: `warn`.
Add as Storage:: Configure a VM or container storage using the new pool.
Default: `true` (only visible on creation).

.Advanced Options
Min. Size:: The minimum number of replicas per object. Ceph will reject I/O on
the pool if a PG has less than this many replicas. Default: `2`.
Crush Rule:: The rule to use for mapping object placement in the cluster. These
rules define how data is placed within the cluster. See
xref:pve_ceph_device_classes[Ceph CRUSH & device classes] for information on
device-based rules.
# of PGs:: The number of placement groups footnoteref:[placement_groups] that
the pool should have at the beginning. Default: `128`.
Target Size Ratio:: The ratio of data that is expected in the pool. The PG
autoscaler uses the ratio relative to other ratio sets. It takes precedence
over the `target size` if both are set.
Target Size:: The estimated amount of data expected in the pool. The PG
autoscaler uses this size to estimate the optimal PG count.
Min. # of PGs:: The minimum number of placement groups. This setting is used to
fine-tune the lower bound of the PG count for that pool. The PG autoscaler
will not merge PGs below this threshold.
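
Most of these options can also be given on the command line at creation time. A
sketch, assuming your `pveceph` version exposes them as shown (the values are
only examples):

[source,bash]
----
pveceph pool create <name> --size 3 --min_size 2 --pg_num 128 --add_storages
----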

Further information on Ceph pool handling can be found in the Ceph pool
operation footnote:[Ceph pool operation
{cephdocs-url}/rados/operations/pools/]
manual.


Destroy Pools
~~~~~~~~~~~~~

To destroy a pool via the GUI, select a node in the tree view and go to the
**Ceph -> Pools** panel. Select the pool to destroy and click the **Destroy**
button. To confirm the destruction of the pool, you need to enter the pool name.

Run the following command to destroy a pool. Specify the '-remove_storages'
option to also remove the associated storage.

[source,bash]
----
pveceph pool destroy <name>
----

NOTE: Pool deletion runs in the background and can take some time.
You will notice the data usage in the cluster decreasing throughout this
process.


PG Autoscaler
~~~~~~~~~~~~~

The PG autoscaler allows the cluster to consider the amount of (expected) data
stored in each pool and to choose the appropriate pg_num values automatically.

You may need to activate the PG autoscaler module before adjustments can take
effect.

[source,bash]
----
ceph mgr module enable pg_autoscaler
----

The autoscaler is configured on a per pool basis and has the following modes:

[horizontal]
warn:: A health warning is issued if the suggested `pg_num` value differs too
much from the current value.
on:: The `pg_num` is adjusted automatically with no need for any manual
interaction.
off:: No automatic `pg_num` adjustments are made, and no warning will be issued
if the PG count is not optimal.

The scaling factor can be adjusted to facilitate future data storage with the
`target_size`, `target_size_ratio` and the `pg_num_min` options.
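
For example, to switch a pool to fully automatic scaling and hint that it is
expected to hold about half of the cluster's data, you could run the following
(pool name and ratio are placeholders):

[source,bash]
----
ceph osd pool set <pool-name> pg_autoscale_mode on
ceph osd pool set <pool-name> target_size_ratio 0.5
----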

WARNING: By default, the autoscaler considers tuning the PG count of a pool if
it is off by a factor of 3. This will lead to a considerable shift in data
placement and might introduce a high load on the cluster.

You can find a more in-depth introduction to the PG autoscaler on Ceph's Blog -
https://ceph.io/rados/new-in-nautilus-pg-merging-and-autotuning/[New in
Nautilus: PG merging and autotuning].


[[pve_ceph_device_classes]]
Ceph CRUSH & device classes
---------------------------
The footnote:[CRUSH
https://ceph.com/wp-content/uploads/2016/08/weil-crush-sc06.pdf] (**C**ontrolled
**R**eplication **U**nder **S**calable **H**ashing) algorithm is at the
foundation of Ceph.

CRUSH calculates where to store and retrieve data from. This has the
advantage that no central indexing service is needed. CRUSH works using a map of
OSDs, buckets (device locations) and rulesets (data replication) for pools.

NOTE: Further information can be found in the Ceph documentation, under the
section CRUSH map footnote:[CRUSH map {cephdocs-url}/rados/operations/crush-map/].

This map can be altered to reflect different replication hierarchies. The object
replicas can be separated (e.g. failure domains), while maintaining the desired
distribution.

A common configuration is to use different classes of disks for different Ceph
pools. For this reason, Ceph introduced device classes with Luminous, to
accommodate the need for easy ruleset generation.

The device classes can be seen in the 'ceph osd tree' output. These classes
represent their own root bucket, which can be seen with the below command.

[source, bash]
----
ceph osd crush tree --show-shadow
----

Example output from the above command:

[source, bash]
----
ID CLASS WEIGHT TYPE NAME
-16 nvme 2.18307 root default~nvme
-13 nvme 0.72769 host sumi1~nvme
 12 nvme 0.72769 osd.12
-14 nvme 0.72769 host sumi2~nvme
 13 nvme 0.72769 osd.13
-15 nvme 0.72769 host sumi3~nvme
 14 nvme 0.72769 osd.14
 -1 7.70544 root default
 -3 2.56848 host sumi1
 12 nvme 0.72769 osd.12
 -5 2.56848 host sumi2
 13 nvme 0.72769 osd.13
 -7 2.56848 host sumi3
 14 nvme 0.72769 osd.14
----

To instruct a pool to only distribute objects on a specific device class, you
first need to create a ruleset for the device class:

[source, bash]
----
ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <class>
----

[frame="none",grid="none", align="left", cols="30%,70%"]
|===
|<rule-name>|name of the rule, to connect with a pool (seen in GUI & CLI)
|<root>|which crush root it should belong to (default ceph root "default")
|<failure-domain>|at which failure-domain the objects should be distributed (usually host)
|<class>|what type of OSD backing store to use (e.g. nvme, ssd, hdd)
|===

Once the rule is in the CRUSH map, you can tell a pool to use the ruleset.

[source, bash]
----
ceph osd pool set <pool-name> crush_rule <rule-name>
----

TIP: If the pool already contains objects, these must be moved accordingly.
Depending on your setup, this may introduce a big performance impact on your
cluster. As an alternative, you can create a new pool and move disks separately.
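
As a concrete sketch, restricting a pool to NVMe-backed OSDs could combine the
two commands above as follows (the rule name 'nvme-only' and the pool name
'vm-pool' are placeholders):

[source, bash]
----
ceph osd crush rule create-replicated nvme-only default host nvme
ceph osd pool set vm-pool crush_rule nvme-only
----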


Ceph Client
-----------

[thumbnail="screenshot/gui-ceph-log.png"]

Following the setup from the previous sections, you can configure {pve} to use
such pools to store VM and Container images. Simply use the GUI to add a new
`RBD` storage (see section xref:ceph_rados_block_devices[Ceph RADOS Block
Devices (RBD)]).

You also need to copy the keyring to a predefined location for an external Ceph
cluster. If Ceph is installed on the Proxmox nodes themselves, then this will be
done automatically.

NOTE: The filename needs to be `<storage_id>.keyring`, where `<storage_id>` is
the expression after 'rbd:' in `/etc/pve/storage.cfg`. In the following example,
`my-ceph-storage` is the `<storage_id>`:

[source,bash]
----
mkdir /etc/pve/priv/ceph
cp /etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/my-ceph-storage.keyring
----
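
For reference, the matching storage definition in `/etc/pve/storage.cfg` might
look like the following sketch (the monitor addresses and pool name are
assumptions for an external cluster):

----
rbd: my-ceph-storage
        content images
        krbd 0
        monhost 10.10.10.11 10.10.10.12 10.10.10.13
        pool rbd
        username admin
----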

[[pveceph_fs]]
CephFS
------

Ceph also provides a filesystem, which runs on top of the same object storage as
RADOS block devices do. A **M**eta**d**ata **S**erver (`MDS`) is used to map the
RADOS backed objects to files and directories, allowing Ceph to provide a
POSIX-compliant, replicated filesystem. This allows you to easily configure a
clustered, highly available, shared filesystem. Ceph's Metadata Servers
guarantee that files are evenly distributed over the entire Ceph cluster. As a
result, even cases of high load will not overwhelm a single host, which can be
an issue with traditional shared filesystem approaches, for example `NFS`.

[thumbnail="screenshot/gui-node-ceph-cephfs-panel.png"]

{pve} supports both creating a hyper-converged CephFS and using an existing
xref:storage_cephfs[CephFS as storage] to save backups, ISO files, and container
templates.


[[pveceph_fs_mds]]
Metadata Server (MDS)
~~~~~~~~~~~~~~~~~~~~~

CephFS needs at least one Metadata Server to be configured and running, in order
to function. You can create an MDS through the {pve} web GUI's `Node
-> CephFS` panel or from the command line with:

----
pveceph mds create
----

Multiple metadata servers can be created in a cluster, but with the default
settings, only one can be active at a time. If an MDS or its node becomes
unresponsive (or crashes), another `standby` MDS will get promoted to `active`.
You can speed up the handover between the active and standby MDS by using
the 'hotstandby' parameter option on creation, or if you have already created it
you may set/add:

----
mds standby replay = true
----

in the respective MDS section of `/etc/pve/ceph.conf`. With this enabled, the
specified MDS will remain in a `warm` state, polling the active one, so that it
can take over faster in case of any issues.

NOTE: This active polling will have an additional performance impact on your
system and the active `MDS`.

.Multiple Active MDS

Since Luminous (12.2.x) you can have multiple active metadata servers
running at once, but this is normally only useful if you have a high amount of
clients running in parallel. Otherwise the `MDS` is rarely the bottleneck in a
system. If you want to set this up, please refer to the Ceph documentation.
footnote:[Configuring multiple active MDS daemons
{cephdocs-url}/cephfs/multimds/]

[[pveceph_fs_create]]
Create CephFS
~~~~~~~~~~~~~

With {pve}'s integration of CephFS, you can easily create a CephFS using the
web interface, CLI or an external API interface. Some prerequisites are required
for this to work:

.Prerequisites for a successful CephFS setup:
- xref:pve_ceph_install[Install Ceph packages] - if this was already done some
time ago, you may want to rerun it on an up-to-date system to
ensure that all CephFS related packages get installed.
- xref:pve_ceph_monitors[Setup Monitors]
- xref:pve_ceph_osds[Setup your OSDs]
- xref:pveceph_fs_mds[Setup at least one MDS]

After this is complete, you can simply create a CephFS through
either the Web GUI's `Node -> CephFS` panel or the command line tool `pveceph`,
for example:

----
pveceph fs create --pg_num 128 --add-storage
----

This creates a CephFS named 'cephfs', using a pool for its data named
'cephfs_data' with '128' placement groups and a pool for its metadata named
'cephfs_metadata' with one quarter of the data pool's placement groups (`32`).
Check the xref:pve_ceph_pools[{pve} managed Ceph pool chapter] or visit the
Ceph documentation for more information regarding an appropriate placement group
number (`pg_num`) for your setup footnoteref:[placement_groups].
Additionally, the '--add-storage' parameter will add the CephFS to the {pve}
storage configuration after it has been created successfully.

Destroy CephFS
~~~~~~~~~~~~~~

WARNING: Destroying a CephFS will render all of its data unusable. This cannot be
undone!

If you really want to destroy an existing CephFS, you first need to stop or
destroy all metadata servers (`MDS`). You can destroy them either via the web
interface or via the command line interface, by issuing

----
pveceph mds destroy NAME
----
on each {pve} node hosting an MDS daemon.

Then, you can remove (destroy) the CephFS by issuing

----
ceph fs rm NAME --yes-i-really-mean-it
----
on a single node hosting Ceph. After this, you may want to remove the created
data and metadata pools; this can be done either over the Web GUI or the CLI
with:

----
pveceph pool destroy NAME
----


Ceph maintenance
----------------

Replace OSDs
~~~~~~~~~~~~

One of the most common maintenance tasks in Ceph is to replace the disk of an
OSD. If a disk is already in a failed state, then you can go ahead and run
through the steps in xref:pve_ceph_osd_destroy[Destroy OSDs]. Ceph will recreate
those copies on the remaining OSDs if possible. This rebalancing will start as
soon as an OSD failure is detected or an OSD was actively stopped.

NOTE: With the default size/min_size (3/2) of a pool, recovery only starts when
`size + 1` nodes are available. The reason for this is that the Ceph object
balancer xref:pve_ceph_device_classes[CRUSH] defaults to a full node as
`failure domain'.

To replace a functioning disk from the GUI, go through the steps in
xref:pve_ceph_osd_destroy[Destroy OSDs]. The only addition is to wait until
the cluster shows 'HEALTH_OK' before stopping the OSD to destroy it.

On the command line, use the following commands:

----
ceph osd out osd.<id>
----

You can check with the command below if the OSD can be safely removed.

----
ceph osd safe-to-destroy osd.<id>
----

Once the above check tells you that it is safe to remove the OSD, you can
continue with the following commands:

----
systemctl stop ceph-osd@<id>.service
pveceph osd destroy <id>
----

Replace the old disk with the new one and use the same procedure as described
in xref:pve_ceph_osd_create[Create OSDs].

Trim/Discard
~~~~~~~~~~~~

It is good practice to run 'fstrim' (discard) regularly on VMs and containers.
This releases data blocks that the filesystem isn't using anymore. It reduces
data usage and resource load. Most modern operating systems issue such discard
commands to their disks regularly. You only need to ensure that the Virtual
Machines enable the xref:qm_hard_disk_discard[disk discard option].
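
A trim can also be triggered manually. Inside a VM guest, `fstrim -av` trims all
mounted filesystems that support discard; for containers, the host-side
`pct fstrim` command can be used (the VMID `100` is a placeholder):

----
# inside a VM guest
fstrim -av
# on the {pve} host, for a container
pct fstrim 100
----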

[[pveceph_scrub]]
Scrub & Deep Scrub
~~~~~~~~~~~~~~~~~~

Ceph ensures data integrity by 'scrubbing' placement groups. Ceph checks every
object in a PG for its health. There are two forms of scrubbing: daily
cheap metadata checks and weekly deep data checks. The weekly deep scrub reads
the objects and uses checksums to ensure data integrity. If a running scrub
interferes with business (performance) needs, you can adjust the time when
scrubs footnote:[Ceph scrubbing {cephdocs-url}/rados/configuration/osd-config-ref/#scrubbing]
are executed.
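
For example, to restrict regular scrubbing to off-peak hours, you can set a
scrub window cluster-wide. A sketch, using the `osd_scrub_begin_hour` and
`osd_scrub_end_hour` options of recent Ceph releases (the 22:00-06:00 window is
just an example):

----
ceph config set osd osd_scrub_begin_hour 22
ceph config set osd osd_scrub_end_hour 6
----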


Ceph Monitoring and Troubleshooting
-----------------------------------

It is important to continuously monitor the health of a Ceph deployment from the
beginning, either by using the Ceph tools or by accessing
the status through the {pve} link:api-viewer/index.html[API].

The following Ceph commands can be used to see if the cluster is healthy
('HEALTH_OK'), if there are warnings ('HEALTH_WARN'), or even errors
('HEALTH_ERR'). If the cluster is in an unhealthy state, the status commands
below will also give you an overview of the current events and actions to take.

----
# single time output
pve# ceph -s
# continuously output status changes (press CTRL+C to stop)
pve# ceph -w
----
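
If a warning or error is reported, `ceph health detail` lists the individual
health checks with more context:

----
pve# ceph health detail
----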

To get a more detailed view, every Ceph service has a log file under
`/var/log/ceph/`. If more detail is required, the log level can be
adjusted footnote:[Ceph log and debugging {cephdocs-url}/rados/troubleshooting/log-and-debug/].

You can find more information about troubleshooting
footnote:[Ceph troubleshooting {cephdocs-url}/rados/troubleshooting/]
a Ceph cluster on the official website.


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]