*pveceph* `<COMMAND> [ARGS] [OPTIONS]`
-*pveceph createmgr* `[OPTIONS]`
+*pveceph createmgr*
-Create Ceph Manager
+An alias for 'pveceph mgr create'.
-`--id` `[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?` ::
+*pveceph createmon*
-The ID for the manager, when omitted the same as the nodename
+An alias for 'pveceph mon create'.
-*pveceph createmon* `[OPTIONS]`
+*pveceph createosd*
-Create Ceph Monitor and Manager
+An alias for 'pveceph osd create'.
-`--exclude-manager` `<boolean>` ('default =' `0`)::
+*pveceph createpool*
-When set, only a monitor will be created.
+An alias for 'pveceph pool create'.
-`--id` `[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?` ::
+*pveceph destroymgr*
-The ID for the monitor, when omitted the same as the nodename
+An alias for 'pveceph mgr destroy'.
-`--mon-address` `<string>` ::
+*pveceph destroymon*
-Overwrites autodetected monitor IP address. Must be in the public network of ceph.
+An alias for 'pveceph mon destroy'.
-*pveceph createosd* `<dev>` `[OPTIONS]`
+*pveceph destroyosd*
-Create OSD
+An alias for 'pveceph osd destroy'.
-`<dev>`: `<string>` ::
+*pveceph destroypool*
-Block device name.
+An alias for 'pveceph pool destroy'.
-`--bluestore` `<boolean>` ('default =' `1`)::
+*pveceph fs create* `[OPTIONS]`
-Use bluestore instead of filestore. This is the default.
+Create a Ceph filesystem
-`--fstype` `<ext4 | xfs>` ('default =' `xfs`)::
+`--add-storage` `<boolean>` ('default =' `0`)::
-File system type (filestore only).
+Configure the created CephFS as storage for this cluster.
-`--journal_dev` `<string>` ::
+`--name` `<string>` ('default =' `cephfs`)::
-Block device name for journal (filestore) or block.db (bluestore).
+The ceph filesystem name.
-`--wal_dev` `<string>` ::
+`--pg_num` `<integer> (8 - 32768)` ('default =' `128`)::
-Block device name for block.wal (bluestore only).
+Number of placement groups for the backing data pool. The metadata pool will use a quarter of this.
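As a minimal sketch, the filesystem could be created and registered as storage in one call (the `pg_num` value here is an example, not a recommendation):

```shell
# Create a CephFS named "cephfs" backed by a 64-PG data pool
# (the metadata pool gets a quarter of that) and add it as storage.
pveceph fs create --name cephfs --pg_num 64 --add-storage 1
```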
-*pveceph createpool* `<name>` `[OPTIONS]`
+*pveceph help* `[OPTIONS]`
-Create POOL
+Get help about specified command.
-`<name>`: `<string>` ::
+`--extra-args` `<array>` ::
-The name of the pool. It must be unique.
+Shows help for a specific command.
-`--add_storages` `<boolean>` ::
+`--verbose` `<boolean>` ::
-Configure VM and CT storages using the new pool.
+Verbose output format.
-`--application` `<cephfs | rbd | rgw>` ::
+*pveceph init* `[OPTIONS]`
-The application of the pool, 'rbd' by default.
+Create initial ceph default configuration and setup symlinks.
-`--crush_rule` `<string>` ::
+`--cluster-network` `<string>` ::
-The rule to use for mapping object placement in the cluster.
+Declare a separate cluster network; OSDs will route heartbeat, object replication and recovery traffic over it.
++
+NOTE: Requires option(s): `network`
+
+`--disable_cephx` `<boolean>` ('default =' `0`)::
+
+Disable cephx authentication.
++
+WARNING: cephx is a security feature protecting against man-in-the-middle attacks. Only consider disabling cephx if your network is private!
`--min_size` `<integer> (1 - 7)` ('default =' `2`)::
-Minimum number of replicas per object
+Minimum number of available replicas per object to allow I/O
-`--pg_num` `<integer> (8 - 32768)` ('default =' `64`)::
+`--network` `<string>` ::
-Number of placement groups.
+Use specific network for all ceph-related traffic
+
+`--pg_bits` `<integer> (6 - 14)` ('default =' `6`)::
+
+Placement group bits, used to specify the default number of placement groups.
++
+NOTE: 'osd pool default pg num' does not work for default pools.
`--size` `<integer> (1 - 7)` ('default =' `3`)::
-Number of replicas per object
+Targeted number of replicas per object
+
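A typical invocation separates public and cluster traffic; both subnets below are placeholders, and note that `--cluster-network` is only valid together with `--network`:

```shell
# Write the initial ceph.conf with a dedicated replication network.
pveceph init --network 10.10.10.0/24 --cluster-network 10.10.20.0/24
```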
+*pveceph install* `[OPTIONS]`
+
+Install ceph-related packages.
+
+`--version` `<luminous | nautilus>` ('default =' `nautilus`)::
+
+Ceph version to install.
+
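For example, to pin the release explicitly rather than relying on the default:

```shell
# Install the Nautilus packages (the current default).
pveceph install --version nautilus
```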
+*pveceph lspools*
+
+An alias for 'pveceph pool ls'.
+
+*pveceph mds create* `[OPTIONS]`
+
+Create Ceph Metadata Server (MDS)
-*pveceph destroymgr* `<id>`
+`--hotstandby` `<boolean>` ('default =' `0`)::
+
+Determines whether a ceph-mds daemon should poll and replay the log of an active MDS. Faster switch on MDS failure, but needs more idle resources.
+
+`--name` `[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?` ('default =' `nodename`)::
+
+The ID for the mds; when omitted, the same as the nodename
+
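A sketch of a hot-standby deployment (the MDS name falls back to the nodename when `--name` is omitted):

```shell
# Create an MDS that replays the active MDS's log for faster failover.
pveceph mds create --hotstandby 1
```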
+*pveceph mds destroy* `<name>`
+
+Destroy Ceph Metadata Server
+
+`<name>`: `[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?` ::
+
+The name (ID) of the mds
+
+*pveceph mgr create* `[OPTIONS]`
+
+Create Ceph Manager
+
+`--id` `[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?` ::
+
+The ID for the manager; when omitted, the same as the nodename
+
+*pveceph mgr destroy* `<id>`
Destroy Ceph Manager.
The ID of the manager
-*pveceph destroymon* `<monid>` `[OPTIONS]`
+*pveceph mon create* `[OPTIONS]`
+
+Create Ceph Monitor and Manager
+
+`--mon-address` `<string>` ::
+
+Overwrites autodetected monitor IP address. Must be in the public network of ceph.
+
+`--monid` `[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?` ::
+
+The ID for the monitor; when omitted, the same as the nodename
+
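For instance, with an explicit ID and address (both placeholders; the address must lie in ceph's public network):

```shell
# Create a monitor, overriding the autodetected IP.
pveceph mon create --monid mon0 --mon-address 10.10.10.11
```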
+*pveceph mon destroy* `<monid>`
Destroy Ceph Monitor and Manager.
Monitor ID
-`--exclude-manager` `<boolean>` ('default =' `0`)::
+*pveceph osd create* `<dev>` `[OPTIONS]`
-When set, removes only the monitor, not the manager
+Create OSD
-*pveceph destroyosd* `<osdid>` `[OPTIONS]`
+`<dev>`: `<string>` ::
-Destroy OSD
+Block device name.
-`<osdid>`: `<integer>` ::
+`--db_dev` `<string>` ::
-OSD ID
+Block device name for block.db.
-`--cleanup` `<boolean>` ('default =' `0`)::
+`--db_size` `<number> (1 - N)` ('default =' `bluestore_block_db_size or 10% of OSD size`)::
-If set, we remove partition table entries.
+Size in GiB for block.db.
++
+NOTE: Requires option(s): `db_dev`
-*pveceph destroypool* `<name>` `[OPTIONS]`
+`--encrypted` `<boolean>` ('default =' `0`)::
-Destroy pool
+Enables encryption of the OSD.
-`<name>`: `<string>` ::
+`--wal_dev` `<string>` ::
-The name of the pool. It must be unique.
+Block device name for block.wal.
-`--force` `<boolean>` ('default =' `0`)::
+`--wal_size` `<number> (0.5 - N)` ('default =' `bluestore_block_wal_size or 1% of OSD size`)::
-If true, destroys pool even if in use
+Size in GiB for block.wal.
++
+NOTE: Requires option(s): `wal_dev`
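As a sketch, placing block.db on a faster device (device paths and the size are placeholders; `--db_size` is only valid together with `--db_dev`):

```shell
# Create a BlueStore OSD on /dev/sdb with a 30 GiB block.db on NVMe.
pveceph osd create /dev/sdb --db_dev /dev/nvme0n1 --db_size 30
```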
-`--remove_storages` `<boolean>` ('default =' `0`)::
+*pveceph osd destroy* `<osdid>` `[OPTIONS]`
-Remove all pveceph-managed storages configured for this pool
+Destroy OSD
-*pveceph help* `[OPTIONS]`
+`<osdid>`: `<integer>` ::
-Get help about specified command.
+OSD ID
-`--extra-args` `<array>` ::
+`--cleanup` `<boolean>` ('default =' `0`)::
-Shows help for a specific command
+If set, we remove partition table entries.
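For example, assuming OSD 7 (a placeholder ID) has already been stopped and marked out:

```shell
# Remove the OSD and clean up its partition table entries.
pveceph osd destroy 7 --cleanup 1
```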
-`--verbose` `<boolean>` ::
+*pveceph pool create* `<name>` `[OPTIONS]`
-Verbose output format.
+Create POOL
-*pveceph init* `[OPTIONS]`
+`<name>`: `<string>` ::
-Create initial ceph default configuration and setup symlinks.
+The name of the pool. It must be unique.
-`--disable_cephx` `<boolean>` ('default =' `0`)::
+`--add_storages` `<boolean>` ::
-Disable cephx authentification.
-+
-WARNING: cephx is a security feature protecting against man-in-the-middle attacks. Only consider disabling cephx if your network is private!
+Configure VM and CT storage using the new pool.
-`--min_size` `<integer> (1 - 7)` ('default =' `2`)::
+`--application` `<cephfs | rbd | rgw>` ::
-Minimum number of available replicas per object to allow I/O
+The application of the pool, 'rbd' by default.
-`--network` `<string>` ::
+`--crush_rule` `<string>` ::
-Use specific network for all ceph related traffic
+The rule to use for mapping object placement in the cluster.
-`--pg_bits` `<integer> (6 - 14)` ('default =' `6`)::
+`--min_size` `<integer> (1 - 7)` ('default =' `2`)::
-Placement group bits, used to specify the default number of placement groups.
-+
-NOTE: 'osd pool default pg num' does not work for default pools.
+Minimum number of replicas per object
+
+`--pg_num` `<integer> (8 - 32768)` ('default =' `128`)::
+
+Number of placement groups.
`--size` `<integer> (1 - 7)` ('default =' `3`)::
-Targeted number of replicas per object
+Number of replicas per object
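A common replicated setup might look like this (the pool name and PG count are examples; defaults of `--size 3` and `--min_size 2` are shown explicitly):

```shell
# Create a 3/2 replicated RBD pool and configure it as VM/CT storage.
pveceph pool create vm-pool --size 3 --min_size 2 --pg_num 128 \
    --application rbd --add_storages 1
```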
-*pveceph install* `[OPTIONS]`
+*pveceph pool destroy* `<name>` `[OPTIONS]`
-Install ceph related packages.
+Destroy pool
-`--version` `<luminous>` ::
+`<name>`: `<string>` ::
-no description available
+The name of the pool. It must be unique.
-*pveceph lspools*
+`--force` `<boolean>` ('default =' `0`)::
+
+If true, destroys the pool even if it is in use
+
+`--remove_storages` `<boolean>` ('default =' `0`)::
+
+Remove all pveceph-managed storages configured for this pool
+
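Tearing the example pool back down, including any pveceph-managed storage entries built on it (the pool name is a placeholder):

```shell
# Force-remove the pool and its storage configuration.
pveceph pool destroy vm-pool --force 1 --remove_storages 1
```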
+*pveceph pool ls*
List all pools.
Start ceph services.
-`<service>`: `(mon|mds|osd|mgr)\.[A-Za-z0-9\-]{1,32}` ::
+`<service>`: `(ceph|mon|mds|osd|mgr)\.[A-Za-z0-9\-]{1,32}` ('default =' `ceph.target`)::
Ceph service name.
Stop ceph services.
-`<service>`: `(mon|mds|osd|mgr)\.[A-Za-z0-9\-]{1,32}` ::
+`<service>`: `(ceph|mon|mds|osd|mgr)\.[A-Za-z0-9\-]{1,32}` ('default =' `ceph.target`)::
Ceph service name.
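Without an argument, both commands now act on `ceph.target` (i.e. all local ceph services); a single daemon can still be addressed by its service name, as in this sketch:

```shell
# Restart one OSD daemon (placeholder ID) rather than every service.
pveceph stop osd.3
pveceph start osd.3
```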