-is also an option if there are no 10 GbE switches available.
-
-The volume of traffic, especially during recovery, will interfere with other
-services on the same network and may even break the {pve} cluster stack.
-
-Furthermore, you should estimate your bandwidth needs. While one HDD might not
-saturate a 1 Gb link, multiple HDD OSDs per node can, and modern NVMe SSDs will
-even saturate 10 Gbps of bandwidth quickly. Deploying a network capable of even
-more bandwidth will ensure that this isn't your bottleneck and won't be anytime
-soon. 25, 40 or even 100 Gbps are possible.
+is also an option for three to five node clusters if there are no 10+ Gbps
+switches available.
+
+[IMPORTANT]
+The volume of traffic, especially during recovery, will interfere with other
+services on the same network. In particular, the latency-sensitive {pve}
+corosync cluster stack can be affected, possibly resulting in loss of cluster
+quorum. Moving the Ceph traffic to dedicated and physically separated networks
+avoids such interference, not only for corosync, but also for the networking
+services provided by any virtual guests.
+
+To estimate your bandwidth needs, you need to take the performance of your
+disks into account. While a single HDD might not saturate a 1 Gbps link,
+multiple HDD OSDs per node can already saturate 10 Gbps. If modern
+NVMe-attached SSDs are used, a single one can already saturate 10 Gbps of
+bandwidth, or more. For such high-performance setups we recommend at least
+25 Gbps, while even 40 Gbps or 100+ Gbps might be required to utilize the full
+performance potential of the underlying disks.
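+
+As a rough back-of-the-envelope sketch (the per-disk throughput figures below
+are illustrative assumptions, not measurements of any specific model):
+
+----
+6 HDD OSDs  x ~200 MB/s = ~1200 MB/s = ~9.6 Gbps -> saturates a 10 Gbps link
+2 NVMe OSDs x ~3 GB/s   =    ~6 GB/s =  ~48 Gbps -> calls for 25-100 Gbps
+----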
+
+If unsure, we recommend using three physically separated networks for
+high-performance setups (a configuration sketch follows the list):
+* one very high bandwidth (25+ Gbps) network for Ceph (internal) cluster
+ traffic.
+* one high bandwidth (10+ Gbps) network for the Ceph (public) traffic between
+  the Ceph servers and Ceph clients. Depending on your needs, this can also be
+  used to host the virtual guest traffic and the VM live-migration traffic.
+* one medium bandwidth (1 Gbps) network, used exclusively for the
+  latency-sensitive corosync cluster communication.
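+
+As a minimal sketch of how such a split maps to Ceph's configuration, the
+following `/etc/pve/ceph.conf` excerpt assigns one subnet to the public and
+one to the internal cluster network. The subnets are hypothetical
+placeholders, adjust them to your environment:
+
+----
+[global]
+    # Ceph public traffic (servers <-> clients), hypothetical 10+ Gbps subnet
+    public_network  = 10.10.10.0/24
+    # Ceph internal traffic (replication, recovery), hypothetical 25+ Gbps subnet
+    cluster_network = 10.10.20.0/24
+----
+
+With this split, OSD replication and recovery traffic stays on the
+`cluster_network`, while client I/O uses the `public_network`; corosync should
+run on its own, third network that appears in neither of these options.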