X-Git-Url: https://git.proxmox.com/?p=pve-docs.git;a=blobdiff_plain;f=pveceph.adoc;h=72210f3db6c9d0e3cf3039abf33ee953b1999ffa;hp=38c7a8591b5a136399cda7f9a198072a68ab0f2e;hb=ca8c30096d94e360c94cdb0496bd57373b92a144;hpb=352c803f9ea1deab939bfd3e9705c3d923597726

diff --git a/pveceph.adoc b/pveceph.adoc
index 38c7a85..72210f3 100644
--- a/pveceph.adoc
+++ b/pveceph.adoc
@@ -318,8 +318,7 @@ This is the default when creating OSDs since Ceph Luminous.
 pveceph createosd /dev/sd[X]
 ----
 
-Block.db and block.wal
-^^^^^^^^^^^^^^^^^^^^^^
+.Block.db and block.wal
 
 If you want to use a separate DB/WAL device for your OSDs, you can specify it
 through the '-db_dev' and '-wal_dev' options. The WAL is placed with the DB, if not
@@ -515,6 +514,8 @@ cluster, this way even high load will not overload a single host, which can be
 an issue with traditional shared filesystem approaches, like `NFS`, for
 example.
 
+[thumbnail="screenshot/gui-node-ceph-cephfs-panel.png"]
+
 {pve} supports both, using an existing xref:storage_cephfs[CephFS as storage]
 to save backups, ISO files or container templates and creating a
 hyper-converged CephFS itself.
@@ -548,8 +549,7 @@ will always poll the active one, so that it can take over faster as it is in a
 `warm` state. But naturally, the active polling will cause some additional
 performance impact on your system and active `MDS`.
 
-Multiple Active MDS
-^^^^^^^^^^^^^^^^^^^
+.Multiple Active MDS
 
 Since Luminous (12.2.x) you can also have multiple active metadata servers
 running, but this is normally only useful for a high count on parallel clients,
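
For the renamed '.Block.db and block.wal' paragraph in the first hunk, a minimal sketch of the invocation it describes. The device names are placeholders, not taken from the diff: assume /dev/sdb is the data disk and /dev/nvme0n1 the faster device intended for the DB; as the paragraph notes, the WAL lands on the DB device when no separate '-wal_dev' is given.

[source,bash]
----
# Hypothetical devices: /dev/sdb = OSD data disk, /dev/nvme0n1 = DB device.
# Without -wal_dev, the WAL is placed together with the DB.
pveceph createosd /dev/sdb -db_dev /dev/nvme0n1
----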
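For the '.Multiple Active MDS' paragraph in the last hunk, a sketch of how an additional active rank is typically enabled on a Luminous (12.2.x) or later cluster. This uses the upstream `ceph fs set` command and assumes a CephFS named `cephfs` with at least one spare standby MDS; neither the command nor the filesystem name appears in the diff above.

[source,bash]
----
# Assumes a CephFS named "cephfs" and a spare standby MDS to fill the new rank.
ceph fs set cephfs max_mds 2
----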