X-Git-Url: https://git.proxmox.com/?a=blobdiff_plain;f=pveceph.adoc;h=20e1883e3b92223b4fe109973fc4ab55ab5eaabc;hb=de7763697d0fbe551d959861b378bba11638313e;hp=8eca373ed6fa66c3dc601fa34b1ccc924822e859;hpb=86be506d85245885787acfdddd6b23f51031b1f0;p=pve-docs.git

diff --git a/pveceph.adoc b/pveceph.adoc
index 8eca373..20e1883 100644
--- a/pveceph.adoc
+++ b/pveceph.adoc
@@ -82,7 +82,7 @@ http://docs.ceph.com/docs/luminous/start/hardware-recommendations/[Ceph's websit
 
 .Avoid RAID
 As Ceph handles data object redundancy and multiple parallel writes to disks
-(OSDs) on its own, using a RAID controller normally doesn’t improves
+(OSDs) on its own, using a RAID controller normally doesn’t improve
 performance or availability. On the contrary, Ceph is designed to handle
 whole disks on it's own, without any abstraction in between. RAID controller
 are not designed for the Ceph use case and may complicate things and sometimes even
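
Not part of the patch itself, but as a hedged illustration of the paragraph being corrected above (Ceph wants the whole raw disk, with no RAID volume in between): on a Proxmox VE node of roughly this documentation's era, an OSD is created directly on the block device. The device name `/dev/sdX` is a placeholder, and the exact subcommand spelling (`pveceph createosd` here vs. the later `pveceph osd create`) depends on the Proxmox VE release.

[source,bash]
----
# Wipe any leftover partition/RAID metadata first (destructive to /dev/sdX!)
ceph-volume lvm zap /dev/sdX --destroy

# Hand the whole raw disk to Ceph, with no RAID controller abstraction in between
pveceph createosd /dev/sdX
----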