NOTE: The default number of PGs works for 2-5 disks. Ceph throws a
'HEALTH_WARN' if you have too few or too many PGs in your cluster.
WARNING: **Do not set a min_size of 1**. A replicated pool with min_size of 1
allows I/O on an object when it has only 1 replica, which could lead to data
loss, incomplete PGs or unfound objects.

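For example, on a three-way replicated pool the usual choice is a min_size of
2. A minimal sketch of checking and setting this, assuming an example pool
named 'mypool':

----
# Show the current min_size of the pool
ceph osd pool get mypool min_size

# Require at least two available replicas before allowing I/O
ceph osd pool set mypool min_size 2
----
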
It is advised to calculate the PG number depending on your setup; you can find
the formula and the PG calculator footnote:[PG calculator
https://ceph.com/pgcalc/] online. From Ceph Nautilus onwards it is possible to
change the number of PGs after setup.
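As a rough illustration of the commonly cited rule of thumb (a target of about
100 PGs per OSD, divided by the pool's replica size, rounded up to the next
power of two); the OSD count, pool name and resulting value below are example
assumptions only:

----
# Example: 12 OSDs, 3-way replication
# (12 * 100) / 3 = 400 -> next power of two = 512

# From Nautilus onwards, the PG count of an existing pool can be changed:
ceph osd pool set mypool pg_num 512
----
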
Ceph monitoring and troubleshooting
-----------------------------------
It is good practice to continuously monitor the Ceph health from the start of
the initial deployment, either through the Ceph tools themselves or by
accessing the status through the {pve} link:api-viewer/index.html[API].
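A minimal sketch of both approaches; the node name 'pve1' is an assumed
example:

----
# Overall cluster health and details on any current warnings
ceph -s
ceph health detail

# The same status through the Proxmox VE API, here queried via pvesh
# (replace 'pve1' with the name of one of your nodes)
pvesh get /nodes/pve1/ceph/status
----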