:pve-toplevel:
endif::manvolnum[]
-[thumbnail="screenshot/gui-ceph-status.png"]
+[thumbnail="screenshot/gui-ceph-status-dashboard.png"]
{pve} unifies your compute and storage systems, that is, you can use the same
physical nodes within a cluster for both computing (processing VMs and
available for Ceph to provide excellent and stable performance.
As a rule of thumb, for roughly **1 TiB of data, 1 GiB of memory** will be used
-by an OSD. Especially during recovery, rebalancing or backfilling.
+by an OSD, especially during recovery, re-balancing or backfilling.
The daemon itself will use additional memory. The Bluestore backend of the
daemon requires by default **3-5 GiB of memory** (adjustable). In contrast, the
setups to reduce recovery time, minimizing the likelihood of a subsequent
failure event during recovery.
-In general SSDs will provide more IOPs than spinning disks. With this in mind,
+In general, SSDs will provide more IOPS than spinning disks. With this in mind,
in addition to the higher cost, it may make sense to implement a
xref:pve_ceph_device_classes[class based] separation of pools. Another way to
speed up OSDs is to use a faster disk as a journal or
After starting the installation, the wizard will download and install all the
required packages from {pve}'s Ceph repository.
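+
+The package installation can also be done from the shell of each node; a
+minimal CLI equivalent is the `pveceph install` command (shown here without
+any options, see the `pveceph` manual page for the available options):
+
+[source,bash]
+----
+# install the Ceph packages from the Proxmox Ceph repository
+pveceph install
+----
+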
+[thumbnail="screenshot/gui-node-ceph-install-wizard-step0.png"]
After finishing the installation step, you will need to create a configuration.
This step is only needed once per cluster, as this configuration is distributed
[[pve_ceph_pools]]
Ceph Pools
----------
+
+[thumbnail="screenshot/gui-ceph-pools.png"]
+
A pool is a logical group for storing objects. It holds a collection of objects,
known as **P**lacement **G**roups (`PG`, `pg_num`).
You can create and edit pools from the command line or the web-interface of any
{pve} host under **Ceph -> Pools**.
-[thumbnail="screenshot/gui-ceph-pools.png"]
-
When no options are given, we set a default of **128 PGs**, a **size of 3
replicas** and a **min_size of 2 replicas**, to ensure no data loss occurs if
any OSD fails.
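+
+For example, a pool using these defaults could be created from the CLI roughly
+as follows (`mypool` is just a placeholder name; the option names below match
+the `pveceph` CLI, see the `pveceph` manual page for the full list):
+
+[source,bash]
+----
+# create a replicated pool with 128 PGs, 3 replicas and a min_size of 2
+pveceph pool create mypool --size 3 --min_size 2 --pg_num 128
+----
+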
It is advised that you either enable the PG-Autoscaler or calculate the PG
number based on your setup. You can find the formula and the PG calculator
-footnote:[PG calculator https://ceph.com/pgcalc/] online. From Ceph Nautilus
+footnote:[PG calculator https://web.archive.org/web/20210301111112/http://ceph.com/pgcalc/] online. From Ceph Nautilus
onward, you can change the number of PGs
footnoteref:[placement_groups,Placement Groups
{cephdocs-url}/rados/operations/placement-groups/] after the setup.
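+
+For example, assuming an existing pool named `mypool` (placeholder), the PG
+count can be adjusted, or the autoscaler enabled, with the standard Ceph
+tooling:
+
+[source,bash]
+----
+# let the autoscaler manage pg_num for this pool
+ceph osd pool set mypool pg_autoscale_mode on
+# or set an explicit PG count
+ceph osd pool set mypool pg_num 256
+----
+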
Pool Options
^^^^^^^^^^^^
+[thumbnail="screenshot/gui-ceph-pool-create.png"]
+
The following options are available on pool creation, and partially also when
editing a pool.
section CRUSH map footnote:[CRUSH map {cephdocs-url}/rados/operations/crush-map/].
This map can be altered to reflect different replication hierarchies. The object
-replicas can be separated (eg. failure domains), while maintaining the desired
+replicas can be separated (e.g., failure domains), while maintaining the desired
distribution.
A common configuration is to use different classes of disks for different Ceph
|<rule-name>|name of the rule, to connect with a pool (seen in GUI & CLI)
|<root>|which crush root it should belong to (default ceph root "default")
|<failure-domain>|at which failure-domain the objects should be distributed (usually host)
-|<class>|what type of OSD backing store to use (eg. nvme, ssd, hdd)
+|<class>|what type of OSD backing store to use (e.g., nvme, ssd, hdd)
|===
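+
+For example, a rule that keeps all replicas on hosts backed by SSD OSDs could
+be created along these lines (`replicated_ssd` is an example name; the command
+takes the parameters listed above, in that order):
+
+[source,bash]
+----
+# create-replicated <rule-name> <root> <failure-domain> <class>
+ceph osd crush rule create-replicated replicated_ssd default host ssd
+----
+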
Once the rule is in the CRUSH map, you can tell a pool to use the ruleset.
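+For example, with the placeholder names used above:
+
+[source,bash]
+----
+ceph osd pool set mypool crush_rule replicated_ssd
+----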