Number of placement groups for the backing data pool. The metadata pool will use a quarter of this.
+*pveceph fs destroy* `<name>` `[OPTIONS]`
+
+Destroy a Ceph filesystem.
+
+`<name>`: `<string>` ::
+
+The Ceph filesystem name.
+
+`--remove-pools` `<boolean>` ('default =' `0`)::
+
+Remove data and metadata pools configured for this fs.
+
+`--remove-storages` `<boolean>` ('default =' `0`)::
+
+Remove all pveceph-managed storages configured for this fs.
+
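For example, a hypothetical invocation that removes a CephFS named `cephfs` together with its data/metadata pools and any pveceph-managed storage entries (the fs name is an assumption; this is destructive and irreversible):

```shell
# Assumes a CephFS called 'cephfs' exists and its MDS has been stopped.
# Both --remove-pools and --remove-storages default to 0, so pools and
# storage definitions are kept unless set explicitly.
pveceph fs destroy cephfs --remove-pools 1 --remove-storages 1
```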
*pveceph help* `[OPTIONS]`
Get help about specified command.
Allow experimental versions. Use with care!
-`--version` `<luminous | nautilus | octopus>` ('default =' `nautilus`)::
+`--test-repository` `<boolean>` ('default =' `0`)::
+
+Use the test, not the main repository. Use with care!
+
+`--version` `<octopus | pacific>` ('default =' `pacific`)::
Ceph version to install.
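As an illustration, the defaults spelled out explicitly (these flag values match the defaults documented above):

```shell
# Installs Ceph Pacific from the main (non-test) repository.
pveceph install --version pacific --test-repository 0
```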
`--mon-address` `<string>` ::
-Overwrites autodetected monitor IP address. Must be in the public network of ceph.
+Overwrites autodetected monitor IP address(es). Must be in the public network(s) of Ceph.
`--monid` `[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?` ::
Block device name.
+`--crush-device-class` `<string>` ::
+
+Set the device class of the OSD in the CRUSH map.
+
`--db_dev` `<string>` ::
Block device name for block.db.
-`--db_size` `<number> (1 - N)` ('default =' `bluestore_block_db_size or 10% of OSD size`)::
+`--db_dev_size` `<number> (1 - N)` ('default =' `bluestore_block_db_size or 10% of OSD size`)::
Size in GiB for block.db.
+
Block device name for block.wal.
-`--wal_size` `<number> (0.5 - N)` ('default =' `bluestore_block_wal_size or 1% of OSD size`)::
+`--wal_dev_size` `<number> (0.5 - N)` ('default =' `bluestore_block_wal_size or 1% of OSD size`)::
Size in GiB for block.wal.
+
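A hedged example of creating an OSD with the options above (device paths are placeholders for your hardware): it puts `block.db` on a separate NVMe device with an explicit size and tags the OSD with a device class.

```shell
# /dev/sdb and /dev/nvme0n1 are example paths; adjust to your setup.
# Gives block.db 20 GiB instead of the 10%-of-OSD default, and tags the
# OSD as 'ssd' in the CRUSH map.
pveceph osd create /dev/sdb --db_dev /dev/nvme0n1 --db_dev_size 20 \
    --crush-device-class ssd
```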
Configure VM and CT storage using the new pool.
-`--application` `<cephfs | rbd | rgw>` ::
+`--application` `<cephfs | rbd | rgw>` ('default =' `rbd`)::
-The application of the pool, 'rbd' by default.
+The application of the pool.
`--crush_rule` `<string>` ::
Minimum number of replicas per object
-`--pg_num` `<integer> (8 - 32768)` ('default =' `128`)::
+`--pg_autoscale_mode` `<off | on | warn>` ('default =' `warn`)::
+
+The automatic PG scaling mode of the pool.
+
+`--pg_num` `<integer> (1 - 32768)` ('default =' `128`)::
Number of placement groups.
+`--pg_num_min` `<integer> (-N - 32768)` ::
+
+Minimum number of placement groups.
+
`--size` `<integer> (1 - 7)` ('default =' `3`)::
Number of replicas per object
+`--target_size` `^(\d+(\.\d+)?)([KMGT])?$` ::
+
+The estimated target size of the pool for the PG autoscaler.
+
+`--target_size_ratio` `<number>` ::
+
+The estimated target ratio of the pool for the PG autoscaler.
+
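The `--pg_num` default of 128 is only a starting point. A common rule of thumb from the Ceph documentation is (OSDs × 100) / replica size, rounded up to the next power of two; the OSD count and size below are example values, not anything prescribed by this page:

```shell
# Rule-of-thumb PG count: (OSDs * 100) / size, rounded up to a power of two.
# 12 OSDs and a replica size of 3 are assumptions for illustration.
osds=12
size=3
target=$(( osds * 100 / size ))
pg=1
while [ "$pg" -lt "$target" ]; do
    pg=$(( pg * 2 ))
done
echo "$pg"
```

The result could then be passed as `--pg_num`, or the decision left to the autoscaler via `--pg_autoscale_mode on`.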
*pveceph pool destroy* `<name>` `[OPTIONS]`
Destroy pool
Remove all pveceph-managed storages configured for this pool
+*pveceph pool get* `<name>` `[OPTIONS]` `[FORMAT_OPTIONS]`
+
+List pool settings.
+
+`<name>`: `<string>` ::
+
+The name of the pool. It must be unique.
+
+`--verbose` `<boolean>` ('default =' `0`)::
+
+If enabled, will display additional data (e.g. statistics).
+
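For instance, to inspect a pool named `mypool` (an assumed name) including its statistics:

```shell
# Lists the settings of 'mypool'; --verbose adds statistics and other
# extra data to the output.
pveceph pool get mypool --verbose 1
```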
*pveceph pool ls* `[FORMAT_OPTIONS]`
List all pools.
+*pveceph pool set* `<name>` `[OPTIONS]`
+
+Change pool settings.
+
+`<name>`: `<string>` ::
+
+The name of the pool. It must be unique.
+
+`--application` `<cephfs | rbd | rgw>` ::
+
+The application of the pool.
+
+`--crush_rule` `<string>` ::
+
+The rule to use for mapping object placement in the cluster.
+
+`--min_size` `<integer> (1 - 7)` ::
+
+Minimum number of replicas per object
+
+`--pg_autoscale_mode` `<off | on | warn>` ::
+
+The automatic PG scaling mode of the pool.
+
+`--pg_num` `<integer> (1 - 32768)` ::
+
+Number of placement groups.
+
+`--pg_num_min` `<integer> (-N - 32768)` ::
+
+Minimum number of placement groups.
+
+`--size` `<integer> (1 - 7)` ::
+
+Number of replicas per object
+
+`--target_size` `^(\d+(\.\d+)?)([KMGT])?$` ::
+
+The estimated target size of the pool for the PG autoscaler.
+
+`--target_size_ratio` `<number>` ::
+
+The estimated target ratio of the pool for the PG autoscaler.
+
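A hypothetical use of these options: hand an existing pool over to the PG autoscaler and hint at its expected share of cluster capacity (pool name and ratio are assumptions):

```shell
# Lets the autoscaler manage 'mypool', hinting that it is expected to
# hold about 20% of the cluster's capacity.
pveceph pool set mypool --pg_autoscale_mode on --target_size_ratio 0.2
```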
*pveceph purge* `[OPTIONS]`
Destroy ceph related data and configuration files.
*pveceph status*
-Get ceph status.
+Get Ceph status.
*pveceph stop* `[OPTIONS]`