Number of placement groups for the backing data pool. The metadata pool will use a quarter of this.
+*pveceph fs destroy* `<name>` `[OPTIONS]`
+
+Destroy a Ceph filesystem
+
+`<name>`: `<string>` ::
+
+The ceph filesystem name.
+
+`--remove-pools` `<boolean>` ('default =' `0`)::
+
+Remove data and metadata pools configured for this fs.
+
+`--remove-storages` `<boolean>` ('default =' `0`)::
+
+Remove all pveceph-managed storages configured for this fs.
+
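For example, to remove a filesystem together with its backing pools and any matching storage entries (the filesystem name `cephfs` here is a placeholder), one might run:

```shell
# Destroy the CephFS named 'cephfs', including its data/metadata pools
# and any pveceph-managed storage definitions that reference it.
pveceph fs destroy cephfs --remove-pools 1 --remove-storages 1
```

Both options default to `0`, so without them only the filesystem itself is destroyed and the pools remain.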
*pveceph help* `[OPTIONS]`
Get help about specified command.
`--disable_cephx` `<boolean>` ('default =' `0`)::
-Disable cephx authentification.
+Disable cephx authentication.
+
WARNING: cephx is a security feature protecting against man-in-the-middle attacks. Only consider disabling cephx if your network is private!
Placement group bits, used to specify the default number of placement groups.
+
-NOTE: 'osd pool default pg num' does not work for default pools.
+Deprecated in recent Ceph versions.
`--size` `<integer> (1 - 7)` ('default =' `3`)::
Install ceph related packages.
-`--version` `<luminous>` ::
+`--allow-experimental` `<boolean>` ('default =' `0`)::
+
+Allow experimental versions. Use with care!
+
+`--repository` `<enterprise | no-subscription | test>` ('default =' `enterprise`)::
-no description available
+Ceph repository to use.
+
+`--version` `<quincy | reef>` ('default =' `quincy`)::
+
+Ceph version to install.
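Combining these options, a hypothetical installation from the no-subscription repository could look like:

```shell
# Install Ceph Reef packages from the no-subscription repository
# instead of the default enterprise repository.
pveceph install --repository no-subscription --version reef
```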
*pveceph lspools*
Create Ceph Monitor and Manager
-`--exclude-manager` `<boolean>` ('default =' `0`)::
+`--mon-address` `<string>` ::
-When set, only a monitor will be created.
+Overwrites autodetected monitor IP address(es). Must be in the public network(s) of Ceph.
-`--id` `[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?` ::
+`--monid` `[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?` ::
The ID for the monitor, when omitted the same as the nodename
-`--mon-address` `<string>` ::
-
-Overwrites autodetected monitor IP address. Must be in the public network of ceph.
-
-*pveceph mon destroy* `<monid>` `[OPTIONS]`
+*pveceph mon destroy* `<monid>`
Destroy Ceph Monitor and Manager.
Monitor ID
-`--exclude-manager` `<boolean>` ('default =' `0`)::
-
-When set, removes only the monitor, not the manager
-
*pveceph osd create* `<dev>` `[OPTIONS]`
Create OSD
Block device name.
-`--bluestore` `<boolean>` ('default =' `1`)::
+`--crush-device-class` `<string>` ::
+
+Set the device class of the OSD in crush.
+
+`--db_dev` `<string>` ::
-Use bluestore instead of filestore. This is the default.
+Block device name for block.db.
-`--fstype` `<ext4 | xfs>` ('default =' `xfs`)::
+`--db_dev_size` `<number> (1 - N)` ('default =' `bluestore_block_db_size or 10% of OSD size`)::
-File system type (filestore only).
+Size in GiB for block.db.
++
+NOTE: Requires option(s): `db_dev`
+
+`--encrypted` `<boolean>` ('default =' `0`)::
+
+Enables encryption of the OSD.
-`--journal_dev` `<string>` ::
+`--osds-per-device` `<integer> (1 - N)` ::
-Block device name for journal (filestore) or block.db (bluestore).
+OSD services per physical device. Only useful for fast NVMe devices to utilize their performance better.
`--wal_dev` `<string>` ::
-Block device name for block.wal (bluestore only).
+Block device name for block.wal.
+
+`--wal_dev_size` `<number> (0.5 - N)` ('default =' `bluestore_block_wal_size or 1% of OSD size`)::
+
+Size in GiB for block.wal.
++
+NOTE: Requires option(s): `wal_dev`
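Putting these options together, a sketch of creating an encrypted OSD with a separate DB device (both device paths are placeholders) might be:

```shell
# Create an encrypted OSD on /dev/sdb, placing block.db on an NVMe
# device with an explicit 60 GiB size (--db_dev_size requires --db_dev).
pveceph osd create /dev/sdb --encrypted 1 \
    --db_dev /dev/nvme0n1 --db_dev_size 60
```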
*pveceph osd destroy* `<osdid>` `[OPTIONS]`
If set, we remove partition table entries.
+*pveceph osd details* `<osdid>` `[OPTIONS]` `[FORMAT_OPTIONS]`
+
+Get OSD details.
+
+`<osdid>`: `<string>` ::
+
+ID of the OSD
+
+`--verbose` `<boolean>` ('default =' `0`)::
+
+Print verbose information, same as json-pretty output format.
+
*pveceph pool create* `<name>` `[OPTIONS]`
-Create POOL
+Create Ceph pool
`<name>`: `<string>` ::
The name of the pool. It must be unique.
-`--add_storages` `<boolean>` ::
+`--add_storages` `<boolean>` ('default =' `0; for erasure coded pools: 1`)::
Configure VM and CT storage using the new pool.
-`--application` `<cephfs | rbd | rgw>` ::
+`--application` `<cephfs | rbd | rgw>` ('default =' `rbd`)::
-The application of the pool, 'rbd' by default.
+The application of the pool.
`--crush_rule` `<string>` ::
The rule to use for mapping object placement in the cluster.
+`--erasure-coding` `k=<integer> ,m=<integer> [,device-class=<class>] [,failure-domain=<domain>] [,profile=<profile>]` ::
+
+Create an erasure coded pool for RBD with an accompanying replicated pool for metadata storage. With EC, the common Ceph options 'size', 'min_size' and 'crush_rule' are applied to the metadata pool.
+
`--min_size` `<integer> (1 - 7)` ('default =' `2`)::
Minimum number of replicas per object
-`--pg_num` `<integer> (8 - 32768)` ('default =' `128`)::
+`--pg_autoscale_mode` `<off | on | warn>` ('default =' `warn`)::
+
+The automatic PG scaling mode of the pool.
+
+`--pg_num` `<integer> (1 - 32768)` ('default =' `128`)::
Number of placement groups.
+`--pg_num_min` `<integer> (-N - 32768)` ::
+
+Minimal number of placement groups.
+
`--size` `<integer> (1 - 7)` ('default =' `3`)::
Number of replicas per object
+`--target_size` `^(\d+(\.\d+)?)([KMGT])?$` ::
+
+The estimated target size of the pool for the PG autoscaler.
+
+`--target_size_ratio` `<number>` ::
+
+The estimated target ratio of the pool for the PG autoscaler.
+
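As an illustration (the pool name is a placeholder), a replicated pool with PG autoscaling enabled and matching storage entries could be created with:

```shell
# Create a 3/2 replicated pool, let the PG autoscaler manage pg_num,
# and add VM/CT storage definitions for the new pool.
pveceph pool create vm-pool --size 3 --min_size 2 \
    --pg_autoscale_mode on --add_storages 1
```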
*pveceph pool destroy* `<name>` `[OPTIONS]`
Destroy pool
If true, destroys pool even if in use
+`--remove_ecprofile` `<boolean>` ('default =' `1`)::
+
+Remove the erasure code profile. Defaults to true, if applicable.
+
`--remove_storages` `<boolean>` ('default =' `0`)::
Remove all pveceph-managed storages configured for this pool
-*pveceph pool ls*
+*pveceph pool get* `<name>` `[OPTIONS]` `[FORMAT_OPTIONS]`
+
+Show the current pool status.
+
+`<name>`: `<string>` ::
+
+The name of the pool. It must be unique.
+
+`--verbose` `<boolean>` ('default =' `0`)::
+
+If enabled, will display additional data (e.g. statistics).
+
+*pveceph pool ls* `[FORMAT_OPTIONS]`
+
+List all pools and their settings (which are settable by the POST/PUT
+endpoints).
-List all pools.
+*pveceph pool set* `<name>` `[OPTIONS]`
-*pveceph purge*
+Change Ceph pool settings
+
+`<name>`: `<string>` ::
+
+The name of the pool. It must be unique.
+
+`--application` `<cephfs | rbd | rgw>` ::
+
+The application of the pool.
+
+`--crush_rule` `<string>` ::
+
+The rule to use for mapping object placement in the cluster.
+
+`--min_size` `<integer> (1 - 7)` ::
+
+Minimum number of replicas per object
+
+`--pg_autoscale_mode` `<off | on | warn>` ::
+
+The automatic PG scaling mode of the pool.
+
+`--pg_num` `<integer> (1 - 32768)` ::
+
+Number of placement groups.
+
+`--pg_num_min` `<integer> (-N - 32768)` ::
+
+Minimal number of placement groups.
+
+`--size` `<integer> (1 - 7)` ::
+
+Number of replicas per object
+
+`--target_size` `^(\d+(\.\d+)?)([KMGT])?$` ::
+
+The estimated target size of the pool for the PG autoscaler.
+
+`--target_size_ratio` `<number>` ::
+
+The estimated target ratio of the pool for the PG autoscaler.
+
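For instance, to adjust the autoscaler hints on an existing pool (the pool name is again a placeholder):

```shell
# Hint the autoscaler that this pool is expected to hold ~20% of the
# cluster's capacity, and enable automatic PG scaling.
pveceph pool set vm-pool --pg_autoscale_mode on --target_size_ratio 0.2
```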
+*pveceph purge* `[OPTIONS]`
Destroy ceph related data and configuration files.
-*pveceph start* `[<service>]`
+`--crash` `<boolean>` ::
+
+Additionally purge Ceph crash logs, /var/lib/ceph/crash.
+
+`--logs` `<boolean>` ::
+
+Additionally purge Ceph logs, /var/log/ceph.
+
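A hypothetical full cleanup after all Ceph services on the node have been removed could be:

```shell
# Remove remaining Ceph data and configuration, additionally purging
# log files and crash dumps.
pveceph purge --logs 1 --crash 1
```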
+*pveceph start* `[OPTIONS]`
Start ceph services.
-`<service>`: `(ceph|mon|mds|osd|mgr)\.[A-Za-z0-9\-]{1,32}` ('default =' `ceph.target`)::
+`--service` `(ceph|mon|mds|osd|mgr)(\.[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?)?` ('default =' `ceph.target`)::
Ceph service name.
*pveceph status*
-Get ceph status.
+Get Ceph status.
-*pveceph stop* `[<service>]`
+*pveceph stop* `[OPTIONS]`
Stop ceph services.
-`<service>`: `(ceph|mon|mds|osd|mgr)\.[A-Za-z0-9\-]{1,32}` ('default =' `ceph.target`)::
+`--service` `(ceph|mon|mds|osd|mgr)(\.[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?)?` ('default =' `ceph.target`)::
Ceph service name.
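For example, restarting a single monitor instance instead of the whole `ceph.target` (the instance name `mon.pve1` is a placeholder for `mon.<nodename>`) might look like:

```shell
# Stop and start one specific monitor service.
pveceph stop --service mon.pve1
pveceph start --service mon.pve1
```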