infrastructure. You can, for example, deploy and manage the following two
storage technologies using only the web interface:
-- *ceph*: a both self-healing and self-managing shared, reliable and highly
+- *Ceph*: a self-healing and self-managing, shared, reliable, and highly
scalable storage system. Check out
- xref:chapter_pveceph[how to manage ceph services on {pve} nodes]
+ xref:chapter_pveceph[how to manage Ceph services on {pve} nodes]
- *ZFS*: a combined file system and logical volume manager with extensive
protection against data corruption, various RAID modes, fast and cheap
Ceph client configuration (optional)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Connecting to an external ceph storage doesn't always allow setting
+Connecting to an external Ceph storage doesn't always allow setting
client-specific options in the config DB on the external cluster. You can add a
-`ceph.conf` beside the ceph keyring to change the ceph client configuration for
+`ceph.conf` beside the Ceph keyring to change the Ceph client configuration for
the storage.
The `ceph.conf` file needs to have the same name as the storage.
----
priv/ceph/<STORAGE_ID>.conf
----
TIP: Do not forget to add the `keyring` and `monhost` option for any external
-ceph clusters, not managed by the local {pve} cluster.
+Ceph clusters not managed by the local {pve} cluster.
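
For illustration, setting up such an external RBD storage from the CLI could
look like the following sketch. The storage ID `ext-rbd`, the monitor
addresses, the keyring path, and the `rbd_cache` setting are made-up values
for the example, not defaults from this manual:

----
# add an external RBD storage, passing the monhost and keyring options
pvesm add rbd ext-rbd --pool rbd --content images \
    --monhost "10.1.1.20 10.1.1.21 10.1.1.22" \
    --keyring /root/ext-cluster.keyring

# optional client settings: the file name must match the storage ID
cat > /etc/pve/priv/ceph/ext-rbd.conf <<'EOF'
[client]
rbd_cache = true
EOF
----

Because the configuration file is named after the storage ID, several external
clusters can carry distinct client settings side by side.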
Destroy Pools
~~~~~~~~~~~~~
[frame="none",grid="none", align="left", cols="30%,70%"]
|===
|<rule-name>|name of the rule, used when assigning it to a pool (seen in GUI & CLI)
-|<root>|which crush root it should belong to (default ceph root "default")
+|<root>|which CRUSH root it should belong to (default: Ceph root "default")
|<failure-domain>|across which failure domain the objects should be distributed (usually `host`)
|<class>|what type of OSD backing store to use (e.g., `nvme`, `ssd`, `hdd`)
|===
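
These placeholders correspond to the arguments of Ceph's
`ceph osd crush rule create-replicated` command. As a sketch, a rule
restricted to SSD-backed OSDs could be created and assigned like this (the
rule name `ssd-only` and pool name `vm-pool` are made-up examples):

----
# create a replicated rule: root "default", one replica per host, SSD class only
ceph osd crush rule create-replicated ssd-only default host ssd

# point an existing pool at the new rule
ceph osd pool set vm-pool crush_rule ssd-only
----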
pveceph fs destroy NAME --remove-storages --remove-pools
----
+
-This will automatically destroy the underlying ceph pools as well as remove
+This will automatically destroy the underlying Ceph pools as well as remove
the storages from the {pve} config.
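
As an optional sanity check, the standard listing commands can confirm the
cleanup afterwards:

----
pveceph pool ls   # the CephFS data and metadata pools should no longer be listed
pvesm status      # nor should the matching storage entry
----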
After these steps, the CephFS should be completely removed, and if you have