X-Git-Url: https://git.proxmox.com/?p=pve-docs.git;a=blobdiff_plain;f=pveceph.adoc;h=c90a92e3c49b3820837e758a40953910c54ccb5e;hp=68399ad12ab4979976dc504928bbbf208c93713c;hb=3580eb1361b66a533e26727935d03861d8580df9;hpb=fa9b4ee121019cb4649a1c10c5d681d5a7e15d45

diff --git a/pveceph.adoc b/pveceph.adoc
index 68399ad..c90a92e 100644
--- a/pveceph.adoc
+++ b/pveceph.adoc
@@ -427,7 +427,7 @@ POSIX-compliant replicated filesystem. This allows one to have a clustered
 highly available shared filesystem in an easy way if ceph is already used. Its
 Metadata Servers guarantee that files get balanced out over the whole Ceph
 cluster, this way even high load will not overload a single host, which can be
-be an issue with traditional shared filesystem approaches, like `NFS`, for
+an issue with traditional shared filesystem approaches, like `NFS`, for
 example.
 
 {pve} supports both, using an existing xref:storage_cephfs[CephFS as storage])
@@ -460,7 +460,7 @@ mds standby replay = true
 
 in the ceph.conf respective MDS section. With this enabled, this specific MDS
 will always poll the active one, so that it can take over faster as it is in a
-`warm' state. But naturally, the active polling will cause some additional
+`warm` state. But naturally, the active polling will cause some additional
 performance impact on your system and active `MDS`.
 
 Multiple Active MDS
@@ -524,7 +524,7 @@ on each {pve} node hosting a MDS daemon. Then, you can remove (destroy) CephFS
 by issuing a:
 
 ----
-ceph rm fs NAME --yes-i-really-mean-it
+ceph fs rm NAME --yes-i-really-mean-it
 ----
 on a single node hosting Ceph. After this you may want to remove the created
 data and metadata pools, this can be done either over the Web GUI or the CLI
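
For reference, the `mds standby replay` option touched by the second hunk is set in the respective MDS section of `ceph.conf`, as the surrounding text describes. A minimal sketch of such a section, assuming a hypothetical daemon ID `pve-node1`:

----
# ceph.conf -- the daemon ID below is illustrative
[mds.pve-node1]
mds standby replay = true
----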
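The third hunk corrects the removal command from `ceph rm fs` to `ceph fs rm`. A usage sketch of the corrected invocation, assuming a filesystem named `cephfs` (every MDS daemon serving it must already be stopped or destroyed on each node, per the surrounding section):

----
# run on a single node hosting Ceph; the filesystem name is illustrative
ceph fs rm cephfs --yes-i-really-mean-it
----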