.Example for creating a pool over the CLI
[source,bash]
----
pveceph pool create <pool-name> --add_storages
----
TIP: If you would also like to automatically define a storage for your
[[pve_ceph_ec_pools]]
Erasure Coded Pools
~~~~~~~~~~~~~~~~~~~
Erasure coding (EC) is a form of `forward error correction' codes that allows
recovery from a certain amount of data loss. Erasure coded pools can offer
more usable space compared to replicated pools, but they do so at the price
of performance.

For comparison: in classic, replicated pools, multiple replicas of the data
are stored (`size`), while in an erasure coded pool, data is split into `k`
data chunks with `m` additional coding (checking) chunks. These coding chunks
can be used to recreate data should data chunks be missing.

The number of coding chunks, `m`, defines how many OSDs can be lost without
losing any data. The total number of chunks stored is `k + m`.
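
To illustrate the trade-off, the usable fraction of raw space can be compared
between the two pool types with a small shell calculation. This is only a
minimal sketch; the `k`, `m` and `size` values are illustrative assumptions:

[source,bash]
----
# Usable fraction of raw space:
#   erasure coded pool: k / (k + m)
#   replicated pool:    1 / size
k=4 m=2 size=3
ec_fraction=$(awk -v k="$k" -v m="$m" 'BEGIN { printf "%.2f", k / (k + m) }')
rep_fraction=$(awk -v s="$size" 'BEGIN { printf "%.2f", 1 / s }')
echo "EC k=$k,m=$m usable fraction: $ec_fraction"           # 0.67
echo "replicated size=$size usable fraction: $rep_fraction" # 0.33
----
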
+
Creating EC Pools
^^^^^^^^^^^^^^^^^

Erasure coded (EC) pools can be created with the `pveceph` CLI tooling.
Planning an EC pool needs to account for the fact that they work differently
than replicated pools.
The default `min_size` of an EC pool depends on the `m` parameter. If `m = 1`,
the `min_size` of the EC pool will be `k`. The `min_size` will be `k + 1` if
[source,bash]
----
pveceph pool create <pool-name> --erasure-coding k=2,m=1
----
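
The default `min_size` rule described above can be sketched as a small helper
function. This is a sketch of the rule only (assuming `m >= 1`); Ceph derives
the actual value when the pool is created:

[source,bash]
----
# default min_size of an EC pool: k if m = 1, otherwise k + 1
ec_min_size() {
    local k=$1 m=$2
    if [ "$m" -eq 1 ]; then
        echo "$k"
    else
        echo $((k + 1))
    fi
}
ec_min_size 2 1   # -> 2
ec_min_size 4 2   # -> 5
----
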
Optional parameters are `failure-domain` and `device-class`. If you
The `size` and `crush_rule` parameters cannot be changed on erasure coded
pools.
If there is a need to further customize the EC profile, you can do so by
creating it with the Ceph tools directly footnote:[Ceph Erasure Code Profile
{cephdocs-url}/rados/operations/erasure-code/#erasure-code-profiles], and
For example:
[source,bash]
----
pveceph pool create <pool-name> --erasure-coding profile=<profile-name>
----

Adding EC Pools as Storage
^^^^^^^^^^^^^^^^^^^^^^^^^^

You can add an already existing EC pool as storage to {pve}. It works the
same way as adding an `RBD` pool but requires the extra `data-pool` option.

[source,bash]
----
pvesm add rbd <storage-name> --pool <replicated-pool> --data-pool <ec-pool>
----

TIP: Do not forget to add the `keyring` and `monhost` options for any external
Ceph clusters not managed by the local {pve} cluster.

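
For an external cluster, the same command can be extended with the connection
details. A hedged sketch; the monitor addresses and the keyring path below
are assumptions and must be adapted to the actual cluster:

[source,bash]
----
# hypothetical external Ceph cluster; adapt monitors and keyring path
pvesm add rbd <storage-name> --pool <replicated-pool> --data-pool <ec-pool> \
    --monhost "192.0.2.1 192.0.2.2 192.0.2.3" \
    --keyring /root/rbd.keyring
----
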
Destroy Pools
~~~~~~~~~~~~~
[frame="none",grid="none", align="left", cols="30%,70%"]
|===
|<rule-name>|name of the rule, to connect with a pool (seen in GUI & CLI)
|<root>|which crush root it should belong to (default Ceph root "default")
|<failure-domain>|at which failure-domain the objects should be distributed (usually host)
|<class>|what type of OSD backing store to use (e.g., nvme, ssd, hdd)
|===
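
The parameters in the table map onto the Ceph command for creating a
replicated CRUSH rule; a sketch using the placeholders from the table:

[source,bash]
----
# replace the placeholders with the values described above
ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <class>
----
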
pveceph fs destroy NAME --remove-storages --remove-pools
----

This will automatically destroy the underlying Ceph pools as well as remove
the storages from the {pve} configuration.
After these steps, the CephFS should be completely removed and if you have