From: Matthias Heiserer
Date: Wed, 9 Nov 2022 11:58:21 +0000 (+0100)
Subject: consistently capitalize Ceph
X-Git-Url: https://git.proxmox.com/?a=commitdiff_plain;h=f226da0ef46e0002ac08471482f046e06b9c0ed6;p=pve-docs.git

consistently capitalize Ceph

Signed-off-by: Matthias Heiserer
Signed-off-by: Thomas Lamprecht
---

diff --git a/hyper-converged-infrastructure.adoc b/hyper-converged-infrastructure.adoc
index ee9f185..4616392 100644
--- a/hyper-converged-infrastructure.adoc
+++ b/hyper-converged-infrastructure.adoc
@@ -48,9 +48,9 @@ Hyper-Converged Infrastructure: Storage
 infrastructure. You can, for example, deploy and manage the following two
 storage technologies by using the web interface only:
 
-- *ceph*: a both self-healing and self-managing shared, reliable and highly
+- *Ceph*: a both self-healing and self-managing shared, reliable and highly
   scalable storage system. Checkout
-  xref:chapter_pveceph[how to manage ceph services on {pve} nodes]
+  xref:chapter_pveceph[how to manage Ceph services on {pve} nodes]
 
 - *ZFS*: a combined file system and logical volume manager with extensive
   protection against data corruption, various RAID modes, fast and cheap
diff --git a/pve-storage-rbd.adoc b/pve-storage-rbd.adoc
index 5f8619a..5fe558a 100644
--- a/pve-storage-rbd.adoc
+++ b/pve-storage-rbd.adoc
@@ -109,9 +109,9 @@ management, see the Ceph docs.footnoteref:[cephusermgmt,{cephdocs-url}/rados/ope
 
 Ceph client configuration (optional)
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-Connecting to an external ceph storage doesn't always allow setting
+Connecting to an external Ceph storage doesn't always allow setting
 client-specific options in the config DB on the external cluster. You can add a
-`ceph.conf` beside the ceph keyring to change the ceph client configuration for
+`ceph.conf` beside the Ceph keyring to change the Ceph client configuration for
 the storage. The ceph.conf needs to have the same name as the storage.
diff --git a/pveceph.adoc b/pveceph.adoc
index 54fb214..fdd4cf6 100644
--- a/pveceph.adoc
+++ b/pveceph.adoc
@@ -636,7 +636,7 @@ pvesm add rbd --pool --data-pool
 ----
 
 TIP: Do not forget to add the `keyring` and `monhost` option for any external
-ceph clusters, not managed by the local {pve} cluster.
+Ceph clusters, not managed by the local {pve} cluster.
 
 Destroy Pools
 ~~~~~~~~~~~~~
@@ -761,7 +761,7 @@ ceph osd crush rule create-replicated
 |name of the rule, to connect with a pool (seen in GUI & CLI)
-||which crush root it should belong to (default ceph root "default")
+||which crush root it should belong to (default Ceph root "default")
 ||at which failure-domain the objects should be distributed (usually host)
 ||what type of OSD backing store to use (e.g., nvme, ssd, hdd)
 |===
@@ -943,7 +943,7 @@ servers.
 pveceph fs destroy NAME --remove-storages --remove-pools
 ----
 +
-This will automatically destroy the underlying ceph pools as well as remove
+This will automatically destroy the underlying Ceph pools as well as remove
 the storages from pve config.
 
 After these steps, the CephFS should be completely removed and if you have