X-Git-Url: https://git.proxmox.com/?a=blobdiff_plain;f=pveceph.1-synopsis.adoc;h=28f48f8f49d11ec58b3bdb583c1a9a4311844d3e;hb=3c0b507a66165c2d30d8f8971187b0199e7e237e;hp=6d15c851392d0771a4e2bb42bb839958633447b2;hpb=739d4d64c2b193e81e5680352f18850a20c7e5ff;p=pve-docs.git

diff --git a/pveceph.1-synopsis.adoc b/pveceph.1-synopsis.adoc
index 6d15c85..28f48f8 100644
--- a/pveceph.1-synopsis.adoc
+++ b/pveceph.1-synopsis.adoc
@@ -48,6 +48,22 @@ The ceph filesystem name.
 
 Number of placement groups for the backing data pool. The metadata pool will use a quarter of this.
 
+*pveceph fs destroy* `<name>` `[OPTIONS]`
+
+Destroy a Ceph filesystem
+
+`<name>`: `<string>` ::
+
+The ceph filesystem name.
+
+`--remove-pools` `<boolean>` ('default =' `0`)::
+
+Remove data and metadata pools configured for this fs.
+
+`--remove-storages` `<boolean>` ('default =' `0`)::
+
+Remove all pveceph-managed storages configured for this fs.
+
 *pveceph help* `[OPTIONS]`
 
 Get help about specified command.
@@ -102,7 +118,11 @@ Install ceph related packages.
 
 Allow experimental versions. Use with care!
 
-`--version` `` ('default =' `nautilus`)::
+`--test-repository` `<boolean>` ('default =' `0`)::
+
+Use the test, not the main repository. Use with care!
+
+`--version` `` ('default =' `pacific`)::
 
 Ceph version to install.
 
@@ -152,7 +172,7 @@ Create Ceph Monitor and Manager
 
 `--mon-address` `<string>` ::
 
-Overwrites autodetected monitor IP address. Must be in the public network of ceph.
+Overwrites autodetected monitor IP address(es). Must be in the public network(s) of Ceph.
 
 `--monid` `[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?` ::
 
@@ -182,7 +202,7 @@ Set the device class of the OSD in crush.
 
 Block device name for block.db.
 
-`--db_size` `<number> (1 - N)` ('default =' `bluestore_block_db_size or 10% of OSD size`)::
+`--db_dev_size` `<number> (1 - N)` ('default =' `bluestore_block_db_size or 10% of OSD size`)::
 
 Size in GiB for block.db.
 +
@@ -196,7 +216,7 @@ Enables encryption of the OSD.
 
 Block device name for block.wal.
 
-`--wal_size` `<number> (0.5 - N)` ('default =' `bluestore_block_wal_size or 1% of OSD size`)::
+`--wal_dev_size` `<number> (0.5 - N)` ('default =' `bluestore_block_wal_size or 1% of OSD size`)::
 
 Size in GiB for block.wal.
 +
@@ -216,36 +236,56 @@ If set, we remove partition table entries.
 
 *pveceph pool create* `<name>` `[OPTIONS]`
 
-Create POOL
+Create Ceph pool
 
 `<name>`: `<string>` ::
 
 The name of the pool. It must be unique.
 
-`--add_storages` `<boolean>` ::
+`--add_storages` `<boolean>` ('default =' `0; for erasure coded pools: 1`)::
 
 Configure VM and CT storage using the new pool.
 
-`--application` `<cephfs | rbd | rgw>` ::
+`--application` `<cephfs | rbd | rgw>` ('default =' `rbd`)::
 
-The application of the pool, 'rbd' by default.
+The application of the pool.
 
 `--crush_rule` `<string>` ::
 
 The rule to use for mapping object placement in the cluster.
 
+`--erasure-coding` `k=<integer> ,m=<integer> [,device-class=] [,failure-domain=] [,profile=]` ::
+
+Create an erasure coded pool for RBD with an accompanying replicated pool for metadata storage. With EC, the common ceph options 'size', 'min_size' and 'crush_rule' will be applied to the metadata pool.
+
 `--min_size` `<integer> (1 - 7)` ('default =' `2`)::
 
 Minimum number of replicas per object
 
-`--pg_num` `<integer> (8 - 32768)` ('default =' `128`)::
+`--pg_autoscale_mode` `<off | on | warn>` ('default =' `warn`)::
+
+The automatic PG scaling mode of the pool.
+
+`--pg_num` `<integer> (1 - 32768)` ('default =' `128`)::
 
 Number of placement groups.
 
+`--pg_num_min` `<integer> (-N - 32768)` ::
+
+Minimal number of placement groups.
+
 `--size` `<integer> (1 - 7)` ('default =' `3`)::
 
 Number of replicas per object
 
+`--target_size` `^(\d+(\.\d+)?)([KMGT])?$` ::
+
+The estimated target size of the pool for the PG autoscaler.
+
+`--target_size_ratio` `<number>` ::
+
+The estimated target ratio of the pool for the PG autoscaler.
+
 *pveceph pool destroy* `<name>` `[OPTIONS]`
 
 Destroy pool
@@ -258,13 +298,74 @@ The name of the pool. It must be unique.
 
 If true, destroys pool even if in use
 
+`--remove_ecprofile` `<boolean>` ('default =' `1`)::
+
+Remove the erasure code profile. Defaults to true, if applicable.
+
 `--remove_storages` `<boolean>` ('default =' `0`)::
 
 Remove all pveceph-managed storages configured for this pool
 
+*pveceph pool get* `<name>` `[OPTIONS]` `[FORMAT_OPTIONS]`
+
+Show the current pool status.
+
+`<name>`: `<string>` ::
+
+The name of the pool. It must be unique.
+
+`--verbose` `<boolean>` ('default =' `0`)::
+
+If enabled, will display additional data (e.g. statistics).
+
 *pveceph pool ls* `[FORMAT_OPTIONS]`
 
-List all pools.
+List all pools and their settings (which are settable by the POST/PUT
+endpoints).
+
+*pveceph pool set* `<name>` `[OPTIONS]`
+
+Change POOL settings
+
+`<name>`: `<string>` ::
+
+The name of the pool. It must be unique.
+
+`--application` `<cephfs | rbd | rgw>` ::
+
+The application of the pool.
+
+`--crush_rule` `<string>` ::
+
+The rule to use for mapping object placement in the cluster.
+
+`--min_size` `<integer> (1 - 7)` ::
+
+Minimum number of replicas per object
+
+`--pg_autoscale_mode` `<off | on | warn>` ::
+
+The automatic PG scaling mode of the pool.
+
+`--pg_num` `<integer> (1 - 32768)` ::
+
+Number of placement groups.
+
+`--pg_num_min` `<integer> (-N - 32768)` ::
+
+Minimal number of placement groups.
+
+`--size` `<integer> (1 - 7)` ::
+
+Number of replicas per object
+
+`--target_size` `^(\d+(\.\d+)?)([KMGT])?$` ::
+
+The estimated target size of the pool for the PG autoscaler.
+
+`--target_size_ratio` `<number>` ::
+
+The estimated target ratio of the pool for the PG autoscaler.
 
 *pveceph purge* `[OPTIONS]`
 
@@ -288,7 +389,7 @@ Ceph service name.
 
 *pveceph status*
 
-Get ceph status.
+Get Ceph Status.
 
 *pveceph stop* `[OPTIONS]`
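Taken together, the subcommands this patch adds or changes compose into a short workflow. A hedged sketch follows: the pool name `ecpool` and filesystem name `cephfs` are illustrative, the flags come from the synopsis hunks above, and the commands only run on a Proxmox VE node with Ceph set up.

```shell
# Create an erasure-coded RBD pool with k=2 data chunks and m=1 coding chunk.
# A replicated metadata pool is created alongside it; per the synopsis,
# --add_storages defaults to 1 for erasure coded pools.
pveceph pool create ecpool --erasure-coding k=2,m=1 --pg_autoscale_mode on

# Inspect the pool's current settings, including statistics.
pveceph pool get ecpool --verbose 1

# Give the PG autoscaler a sizing hint on an existing pool.
pveceph pool set ecpool --target_size_ratio 0.2

# Tear down a CephFS instance together with its backing pools and the
# pveceph-managed storage entries (both destructive; defaults are 0).
pveceph fs destroy cephfs --remove-pools 1 --remove-storages 1
```

Note that `--remove-pools` and `--remove-storages` (and likewise `pveceph pool destroy --remove_storages`) delete data and storage configuration irreversibly, which is presumably why they default to off.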