Number of placement groups for the backing data pool. The metadata pool will use a quarter of this.
+*pveceph fs destroy* `<name>` `[OPTIONS]`
+
+Destroy a Ceph filesystem
+
+`<name>`: `<string>` ::
+
+The Ceph filesystem name.
+
+`--remove-pools` `<boolean>` ('default =' `0`)::
+
+Remove data and metadata pools configured for this fs.
+
+`--remove-storages` `<boolean>` ('default =' `0`)::
+
+Remove all pveceph-managed storages configured for this fs.
+
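The options above combine into a full teardown. A hypothetical invocation (the filesystem name `myfs` is a placeholder):

```shell
# Destroy the CephFS 'myfs', also dropping its data and metadata
# pools and any pveceph-managed storage entries that reference it.
pveceph fs destroy myfs --remove-pools 1 --remove-storages 1
```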
*pveceph help* `[OPTIONS]`
Get help about specified command.
Use the test, not the main repository. Use with care!
-`--version` `<octopus | pacific>` ('default =' `pacific`)::
+`--version` `<octopus | pacific | quincy>` ('default =' `pacific`)::
Ceph version to install.
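With the newly added release value, installation might look like this (a sketch; run on the node to be upgraded):

```shell
# Install Ceph Quincy packages on this node instead of the
# default Pacific release.
pveceph install --version quincy
```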
*pveceph pool create* `<name>` `[OPTIONS]`
-Create POOL
+Create Ceph pool
`<name>`: `<string>` ::
The name of the pool. It must be unique.
-`--add_storages` `<boolean>` ::
+`--add_storages` `<boolean>` ('default =' `0; for erasure coded pools: 1`)::
Configure VM and CT storage using the new pool.
The rule to use for mapping object placement in the cluster.
+`--erasure-coding` `k=<integer> ,m=<integer> [,device-class=<class>] [,failure-domain=<domain>] [,profile=<profile>]` ::
+
+Create an erasure coded pool for RBD with an accompanying replicated pool for metadata storage. With EC, the common Ceph options 'size', 'min_size' and 'crush_rule' will be applied to the metadata pool.
+
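A sketch of creating such a pool (the pool name and the k/m values are illustrative, not recommendations):

```shell
# 2 data chunks + 1 coding chunk per object; a replicated
# companion pool is created alongside for RBD metadata.
pveceph pool create ecpool --erasure-coding k=2,m=1
```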
`--min_size` `<integer> (1 - 7)` ('default =' `2`)::
Minimum number of replicas per object
If true, destroys pool even if in use
+`--remove_ecprofile` `<boolean>` ('default =' `1`)::
+
+Remove the erasure code profile. Defaults to true, if applicable.
+
`--remove_storages` `<boolean>` ('default =' `0`)::
Remove all pveceph-managed storages configured for this pool
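Assuming the force option described above is spelled `--force`, a hypothetical cleanup invocation (`mypool` is a placeholder):

```shell
# Destroy 'mypool' even if it is still in use, and remove the
# pveceph-managed storage entries configured for it.
pveceph pool destroy mypool --force 1 --remove_storages 1
```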