X-Git-Url: https://git.proxmox.com/?a=blobdiff_plain;f=ceph%2Fdoc%2Fcephfs%2Fcreatefs.rst;h=4a282e562fe3c79c41e461bdec60c9db86093045;hb=05a536ef04248702f72713fd2fe81cb055624784;hp=59706d1d2dc8c503b73931eee0dd4296c299ca2c;hpb=ab27109dd2e88c6e1082a346b3be8444697297c6;p=ceph.git

diff --git a/ceph/doc/cephfs/createfs.rst b/ceph/doc/cephfs/createfs.rst
index 59706d1d2..4a282e562 100644
--- a/ceph/doc/cephfs/createfs.rst
+++ b/ceph/doc/cephfs/createfs.rst
@@ -15,6 +15,10 @@ There are important considerations when planning these pools:
 - We recommend the fastest feasible low-latency storage devices (NVMe, Optane,
   or at the very least SAS/SATA SSD) for the metadata pool, as this will
   directly affect the latency of client file system operations.
+- We strongly suggest that the CephFS metadata pool be provisioned on dedicated
+  SSD / NVMe OSDs. This ensures that high client workload does not adversely
+  impact metadata operations. See :ref:`device_classes` to configure pools this
+  way.
 - The data pool used to create the file system is the "default" data pool and
   the location for storing all inode backtrace information, which is used for hard
   link management and disaster recovery. For this reason, all CephFS inodes
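
Note on the guidance added by this hunk: the :ref:`device_classes` reference points
at CRUSH device classes. Below is a minimal sketch (not part of the patch) of
pinning the metadata pool to SSD-class OSDs, assuming the example pool names
``cephfs_metadata`` / ``cephfs_data``; the rule name ``replicated_ssd`` and the PG
counts are illustrative::

    # Replicated CRUSH rule that selects only OSDs with the "ssd" device class,
    # using host as the failure domain.
    ceph osd crush rule create-replicated replicated_ssd default host ssd

    # Create the pools, then assign the SSD-only rule to the metadata pool.
    ceph osd pool create cephfs_data 128
    ceph osd pool create cephfs_metadata 64
    ceph osd pool set cephfs_metadata crush_rule replicated_ssd

    # Create the file system on these pools.
    ceph fs new cephfs cephfs_metadata cephfs_data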