Erasure coded (EC) pools can offer more usable space compared to replicated
pools, but they do so at the price of performance.
-For comparision: in classic, replicated pools, multiple replicas of the data
+For comparison: in classic, replicated pools, multiple replicas of the data
are stored (`size`), while in an erasure coded pool, data is split into `k` data
chunks with `m` additional coding (checking) chunks. Those coding chunks can be
used to recreate data should data chunks be missing.
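For example, with `k = 4` and `m = 2`, each object is stored as 6 chunks, any
4 of which are enough to reconstruct it. The raw space overhead is
`(k + m) / k = 1.5`, compared to an overhead of 3 for a replicated pool with
`size = 3`, while still tolerating the loss of any 2 chunks.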
Creating EC Pools
^^^^^^^^^^^^^^^^^
-You can create erasuce coded (EC) through using the `pveceph` CLI tooling. As
-EC code work different than replicated pools, planning a setup and the pool
-parameters used needs to adapt.
+Erasure coded (EC) pools can be created with the `pveceph` CLI tooling.
+Planning an EC pool needs to account for the fact that EC pools work
+differently than replicated pools.
The default `min_size` of an EC pool depends on the `m` parameter. If `m = 1`,
the `min_size` of the EC pool will be `k`. The `min_size` will be `k + 1` if
`m > 1`.
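For example, a pool with 2 data and 1 coding chunk (and thus a `min_size` of
2) could be created like this, assuming a `pveceph` version that supports the
`--erasure-coding` option; the pool name is a placeholder:

[source,bash]
----
pveceph pool create <pool-name> --erasure-coding k=2,m=1
----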
Adding EC Pools as Storage
^^^^^^^^^^^^^^^^^^^^^^^^^^
-You can also add an already existing EC pool as storage to {pve}, it works the
-same as adding any `RBD` pool but requires to pass the extra `data-pool`
-option.
+You can add an already existing EC pool as storage to {pve}. It works the same
+way as adding an `RBD` pool but requires the extra `data-pool` option.
[source,bash]
----
# all names are placeholders
pvesm add rbd <storage-name> --pool <replicated-pool> --data-pool <ec-pool>
----
TIP: Do not forget to add the `keyring` and `monhost` option for any external
-ceph clusters, not managed by the local {pve} cluster.
+Ceph clusters, not managed by the local {pve} cluster.
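For an external cluster, this might look like the following sketch (the
monitor addresses, keyring path, and all pool/storage names are placeholders):

[source,bash]
----
pvesm add rbd <storage-name> --pool <replicated-pool> --data-pool <ec-pool> \
    --monhost "10.1.1.20 10.1.1.21 10.1.1.22" --keyring /root/rbd.keyring
----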
Destroy Pools
~~~~~~~~~~~~~
[frame="none",grid="none", align="left", cols="30%,70%"]
|===
|<rule-name>|name of the rule, used to connect it with a pool (seen in GUI & CLI)
-|<root>|which crush root it should belong to (default ceph root "default")
+|<root>|which crush root it should belong to (default Ceph root "default")
|<failure-domain>|at which failure-domain the objects should be distributed (usually host)
|<class>|what type of OSD backing store to use (e.g., nvme, ssd, hdd)
|===
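These parameters correspond to the Ceph command for creating a replicated
CRUSH rule, for example:

[source,bash]
----
ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <class>
----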
[source,bash]
----
pveceph fs destroy NAME --remove-storages --remove-pools
----
+
-This will automatically destroy the underlying ceph pools as well as remove
+This will automatically destroy the underlying Ceph pools as well as remove
the storages from the {pve} config.
After these steps, the CephFS should be completely removed and if you have
other CephFS instances, the stopped metadata servers can be started again to
act as standbys.