[thumbnail="screenshot/gui-ceph-pools.png"]
A pool is a logical group for storing objects. It holds **P**lacement
-**G**roups (PG), a collection of objects.
+**G**roups (`PG`, `pg_num`), a collection of objects.
-When no options are given, we set a
-default of **64 PGs**, a **size of 3 replicas** and a **min_size of 2 replicas**
-for serving objects in a degraded state.
+When no options are given, we set a default of **128 PGs**, a **size of 3
+replicas** and a **min_size of 2 replicas** for serving objects in a degraded
+state.
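+For example, a pool with these options set explicitly could be created from
+the command line roughly as follows ('mypool' is only a placeholder name; the
+options mirror the `pveceph` API parameters, and older releases use the
+`pveceph createpool` form instead):
+----
+pveceph pool create mypool --pg_num 128 --size 3 --min_size 2
+----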
-NOTE: The default number of PGs works for 2-6 disks. Ceph throws a
-"HEALTH_WARNING" if you have too few or too many PGs in your cluster.
+NOTE: The default number of PGs works for 2-5 disks. Ceph throws a
+'HEALTH_WARN' if you have too few or too many PGs in your cluster.
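+To check whether the current number of PGs triggers such a warning, the
+cluster health and a pool's PG count can be queried with the standard Ceph
+CLI (again, 'mypool' is only a placeholder name):
+----
+ceph health detail
+ceph osd pool get mypool pg_num
+----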
It is advised to calculate the PG number depending on your setup; you can find
the formula and the PG calculator footnote:[PG calculator
highly available shared filesystem in an easy way if Ceph is already used. Its
Metadata Servers guarantee that files get balanced out over the whole Ceph
cluster; this way, even high load will not overload a single host, which can be
-be an issue with traditional shared filesystem approaches, like `NFS`, for
+an issue with traditional shared filesystem approaches, like `NFS`, for
example.
{pve} supports both, using an existing xref:storage_cephfs[CephFS as storage]
in the respective MDS section of `ceph.conf`. With this enabled, this specific MDS
will always poll the active one, so that it can take over faster as it is in a
-`warm' state. But naturally, the active polling will cause some additional
+`warm` state. But naturally, the active polling will cause some additional
performance impact on your system and active `MDS`.
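+Whether a standby MDS is actually running in this warm (standby-replay) state
+can be verified with the standard Ceph CLI, for example:
+----
+ceph mds stat
+ceph fs status
+----
+`ceph fs status` lists each rank with its state (for example 'active' or
+'standby-replay') as well as the remaining standby daemons.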
Multiple Active MDS
Destroy CephFS
~~~~~~~~~~~~~~
-WARN: Destroying a CephFS will render all its data unusable, this cannot be
+WARNING: Destroying a CephFS will render all its data unusable. This cannot be
undone!
If you really want to destroy an existing CephFS, you first need to stop, or
Then, you can remove (destroy) the CephFS by issuing:
----
-ceph rm fs NAME --yes-i-really-mean-it
+ceph fs rm NAME --yes-i-really-mean-it
----
on a single node hosting Ceph. After this, you may want to remove the created
data and metadata pools; this can be done either over the Web GUI or the CLI.
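+A minimal CLI sketch, assuming the default pool names `cephfs_data` and
+`cephfs_metadata` that {pve} uses when creating a CephFS (adjust these to your
+actual pool names):
+----
+pveceph pool destroy cephfs_data
+pveceph pool destroy cephfs_metadata
+----
+On older releases without the `pveceph pool` subcommand, the equivalent
+`pveceph destroypool` form can be used instead.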