footnote:[Ceph Manager https://docs.ceph.com/docs/{ceph_codename}/mgr/] daemon is
required.

[[pveceph_create_mgr]]
Create Manager
~~~~~~~~~~~~~~
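The Manager daemon can be created via the GUI or on the command line; a minimal sketch using the `pveceph` tool (run as root on the target node):

```shell
# create a Ceph Manager daemon on the local node
pveceph mgr create
```

Multiple Managers can be created for high availability; only one is active at a time, the others act as standbys.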
Ceph maintenance
----------------

Replace OSDs
~~~~~~~~~~~~

One of the common maintenance tasks in Ceph is to replace a disk of an OSD. If
a disk is already in a failed state, then you can go ahead and run through the
steps in xref:pve_ceph_osd_destroy[Destroy OSDs]. Ceph will recreate those
copies on the remaining OSDs if possible. This rebalancing will start as soon
as an OSD failure is detected or an OSD was actively stopped.

NOTE: With the default size/min_size (3/2) of a pool, recovery only starts when
`size + 1` nodes are available. The reason for this is that the Ceph object
balancer xref:pve_ceph_device_classes[CRUSH] defaults to a full node as
`failure domain`.

To replace a still functioning disk, go through the steps in
xref:pve_ceph_osd_destroy[Destroy OSDs] on the GUI. The only addition is to
wait until the cluster shows 'HEALTH_OK' before stopping the OSD to destroy
it.

Replace the old disk with the new one and use the same procedure as described
in xref:pve_ceph_osd_create[Create OSDs].
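On the command line, the replacement roughly follows these steps; the OSD ID `4` and the device path `/dev/sdX` below are placeholders for illustration:

```shell
# take the OSD out of the cluster and stop its daemon
ceph osd out osd.4
systemctl stop ceph-osd@4.service

# remove the OSD from the cluster (see Destroy OSDs for details)
pveceph osd destroy 4

# after physically swapping the disk, create a new OSD on it
pveceph osd create /dev/sdX
```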

Trim/Discard
~~~~~~~~~~~~

It is a good measure to run 'fstrim' (discard) regularly on VMs or containers.
This releases data blocks that the filesystem isn’t using anymore. It reduces
data usage and resource load. Most modern operating systems issue such discard
commands to their disks regularly. You only need to ensure that the Virtual
Machines enable the xref:qm_hard_disk_discard[disk discard option].

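For example (the container ID `100` is a placeholder), assuming the guest's virtual disks have the discard option enabled:

```shell
# inside a VM guest: trim all mounted filesystems that support discard
fstrim -av

# for a container, fstrim can be run from the Proxmox VE host
pct fstrim 100
```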
[[pveceph_scrub]]
Scrub & Deep Scrub
~~~~~~~~~~~~~~~~~~
Ceph ensures data integrity by 'scrubbing' placement groups. Ceph checks every
object in a PG for its health. There are two forms of Scrubbing, daily
cheap metadata checks and weekly deep data checks. The weekly deep scrub reads
the objects and uses checksums to ensure data integrity. If a running scrub
interferes with business (performance) needs, you can adjust the time when
scrubs footnote:[Ceph scrubbing
https://docs.ceph.com/docs/{ceph_codename}/rados/configuration/osd-config-ref/#scrubbing]
are executed.
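For instance, to confine scrubbing to a nightly window, the begin/end hours can be set cluster-wide; a sketch with illustrative values, assuming a Ceph release with the `ceph config` interface:

```shell
# allow scrubs only between 22:00 and 06:00
ceph config set osd osd_scrub_begin_hour 22
ceph config set osd osd_scrub_end_hour 6
```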