-----------
endif::manvolnum[]
ifndef::manvolnum[]
-Manage Ceph Services on Proxmox VE Nodes
-========================================
+Deploy Hyper-Converged Ceph Cluster
+===================================
:pve-toplevel:
endif::manvolnum[]
Further, estimate your bandwidth needs. While one HDD might not saturate a 1 Gb
link, multiple HDD OSDs per node can, and modern NVMe SSDs will even saturate
-10 Gbps of bandwidth quickly. Deploying a network capable of even more bandwith
+10 Gbps of bandwidth quickly. Deploying a network capable of even more bandwidth
-will ensure that it isn't your bottleneck and won't be anytime soon, 25, 40 or
-even 100 GBps are possible.
+will ensure that it isn't your bottleneck and won't be anytime soon; 25, 40 or
+even 100 Gbps are possible.
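The sizing argument above can be sketched as a quick back-of-envelope check.
The per-device throughput figures below are assumptions for illustration
(roughly 100 MB/s for an HDD, 3000 MB/s for an NVMe SSD), not measured values:

```python
def saturates(link_gbps: float, osd_mb_s: float, n_osds: int) -> bool:
    """Return True if the aggregate OSD throughput exceeds the link capacity."""
    link_mb_s = link_gbps * 1000 / 8  # convert Gb/s to MB/s
    return n_osds * osd_mb_s > link_mb_s

# One HDD on a 1 Gbps link: 100 MB/s < 125 MB/s, so the link holds.
print(saturates(1, 100, 1))   # False
# Three HDD OSDs per node on the same link: 300 MB/s > 125 MB/s.
print(saturates(1, 100, 3))   # True
# A single NVMe SSD already exceeds a 10 Gbps link (1250 MB/s).
print(saturates(10, 3000, 1)) # True
```

Adjust the throughput figures to your actual hardware before drawing
conclusions from this sketch.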
----
You can directly choose the size for those with the '-db_size' and '-wal_size'
-paremeters respectively. If they are not given the following values (in order)
+parameters respectively. If they are not given the following values (in order)
will be used:
* bluestore_block_{db,wal}_size from ceph configuration...
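As a sketch of how these parameters are passed, an OSD with a separate DB
device and explicit DB/WAL sizes could be created as follows; the device
paths and the size values are placeholders, not recommendations:

```shell
# Create an OSD on /dev/sd[X] with its RocksDB on /dev/sd[Y],
# overriding the automatic sizing (values are examples only).
pveceph osd create /dev/sd[X] -db_dev /dev/sd[Y] -db_size 128 -wal_size 8
```

If `-db_size` and `-wal_size` are omitted, the fallback values listed above
are used in order.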
~~~~~~~~~~~~
It is a good measure to run 'fstrim' (discard) regularly on VMs or containers.
This releases data blocks that the filesystem isn't using anymore. It reduces
-data usage and the resource load. Most modern operating systems issue such
-discard commands to their disks regurarly. You only need to ensure that the
-Virtual Machines enable the xref:qm_hard_disk_discard[disk discard option].
+data usage and resource load. Most modern operating systems issue such discard
+commands to their disks regularly. You only need to ensure that the Virtual
+Machines enable the xref:qm_hard_disk_discard[disk discard option].
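A minimal sketch of both steps, assuming a VM with ID 100 whose disk
`vm-100-disk-0` lives on a storage named `local-ceph` (both hypothetical):

```shell
# On the Proxmox VE host: enable the discard option on the VM's disk,
# so trims issued by the guest actually release blocks on the backing pool.
qm set 100 --scsi0 local-ceph:vm-100-disk-0,discard=on

# Inside the guest: trim all mounted filesystems once by hand ...
fstrim -av
# ... or rely on the periodic timer most modern distributions ship.
systemctl enable --now fstrim.timer
```

With `discard=on` set and the timer active, no further manual intervention
should be needed.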
[[pveceph_scrub]]
Scrub & Deep Scrub