*pveceph* `<COMMAND> [ARGS] [OPTIONS]`
-*pveceph createmgr* `[OPTIONS]`
+*pveceph createmgr*
-Create Ceph Manager
+An alias for 'pveceph mgr create'.
-`--id` `[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?` ::
+*pveceph createmon*
-The ID for the manager, when omitted the same as the nodename
+An alias for 'pveceph mon create'.
+*pveceph createosd*
+An alias for 'pveceph osd create'.
-*pveceph createmon* `[OPTIONS]`
+*pveceph createpool*
-Create Ceph Monitor and Manager
+An alias for 'pveceph pool create'.
-`--exclude-manager` `<boolean>` ('default =' `0`)::
+*pveceph destroymgr*
-When set, only a monitor will be created.
+An alias for 'pveceph mgr destroy'.
-`--id` `[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?` ::
+*pveceph destroymon*
-The ID for the monitor, when omitted the same as the nodename
+An alias for 'pveceph mon destroy'.
+*pveceph destroyosd*
+An alias for 'pveceph osd destroy'.
+*pveceph destroypool*
-*pveceph createosd* `<dev>` `[OPTIONS]`
+An alias for 'pveceph pool destroy'.
-Create OSD
+*pveceph fs create* `[OPTIONS]`
-`<dev>`: `<string>` ::
+Create a Ceph filesystem
-Block device name.
+`--add-storage` `<boolean>` ('default =' `0`)::
-`--bluestore` `<boolean>` ('default =' `0`)::
+Configure the created CephFS as storage for this cluster.
-Use bluestore instead of filestore.
+`--name` `<string>` ('default =' `cephfs`)::
-`--fstype` `<btrfs | ext4 | xfs>` ('default =' `xfs`)::
+The ceph filesystem name.
-File system type (filestore only).
+`--pg_num` `<integer> (8 - 32768)` ('default =' `128`)::
-`--journal_dev` `<string>` ::
+Number of placement groups for the backing data pool. The metadata pool will use a quarter of this.
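A minimal invocation might look like the following sketch; the pool sizing is an assumption, so pick `--pg_num` to suit your OSD count:

```shell
# Sketch: create a CephFS named "cephfs" with a 128-PG backing data
# pool and register it as a storage entry for this cluster. The
# metadata pool is sized automatically at a quarter of --pg_num
# (here: 32 PGs).
pveceph fs create --name cephfs --pg_num 128 --add-storage 1
```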
-Block device name for journal (filestore) or block.db (bluestore).
+*pveceph help* `[OPTIONS]`
-`--wal_dev` `<string>` ::
+Get help about specified command.
-Block device name for block.wal (bluestore only).
+`--extra-args` `<array>` ::
+Shows help for a specific command
+`--verbose` `<boolean>` ::
+Verbose output format.
-*pveceph createpool* `<name>` `[OPTIONS]`
+*pveceph init* `[OPTIONS]`
-Create POOL
+Create initial ceph default configuration and set up symlinks.
-`<name>`: `<string>` ::
+`--cluster-network` `<string>` ::
-The name of the pool. It must be unique.
+Declare a separate cluster network. OSDs will route heartbeat, object replication and recovery traffic over it.
++
+NOTE: Requires option(s): `network`
-`--add_storages` `<boolean>` ::
+`--disable_cephx` `<boolean>` ('default =' `0`)::
-Configure VM and CT storages using the new pool.
+Disable cephx authentication.
++
+WARNING: cephx is a security feature protecting against man-in-the-middle attacks. Only consider disabling cephx if your network is private!
-`--application` `<cephfs | rbd | rgw>` ::
+`--min_size` `<integer> (1 - 7)` ('default =' `2`)::
-The application of the pool, 'rbd' by default.
+Minimum number of available replicas per object to allow I/O
-`--crush_rule` `<string>` ::
+`--network` `<string>` ::
-The rule to use for mapping object placement in the cluster.
+Use specific network for all ceph related traffic
-`--min_size` `<integer> (1 - 7)` ('default =' `2`)::
+`--pg_bits` `<integer> (6 - 14)` ('default =' `6`)::
-Minimum number of replicas per object
+Placement group bits, used to specify the default number of placement groups.
++
+NOTE: 'osd pool default pg num' does not work for default pools.
-`--pg_num` `<integer> (8 - 32768)` ('default =' `64`)::
+`--size` `<integer> (1 - 7)` ('default =' `3`)::
-Number of placement groups.
+Targeted number of replicas per object
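As a sketch with hypothetical subnets, one public network for all Ceph traffic plus a dedicated cluster network for replication:

```shell
# 10.10.10.0/24 and 10.10.20.0/24 are placeholder subnets; adjust to
# your environment. Note that --cluster-network requires --network
# to be set as well.
pveceph init --network 10.10.10.0/24 --cluster-network 10.10.20.0/24
```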
-`--size` `<integer> (1 - 7)` ('default =' `3`)::
+*pveceph install* `[OPTIONS]`
-Number of replicas per object
+Install ceph related packages.
+`--version` `<luminous | nautilus>` ('default =' `nautilus`)::
+Ceph version to install.
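Run on each node that should provide Ceph services, for example:

```shell
# Installs the Ceph Nautilus packages on the local node (nautilus is
# also the default when --version is omitted).
pveceph install --version nautilus
```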
-*pveceph destroymgr* `<id>`
+*pveceph lspools*
-Destroy Ceph Manager.
+An alias for 'pveceph pool ls'.
-`<id>`: `[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?` ::
+*pveceph mds create* `[OPTIONS]`
-The ID of the manager
+Create Ceph Metadata Server (MDS)
+`--hotstandby` `<boolean>` ('default =' `0`)::
+Determines whether a ceph-mds daemon should poll and replay the log of an active MDS. Faster switch on MDS failure, but needs more idle resources.
-*pveceph destroymon* `<monid>` `[OPTIONS]`
+`--name` `[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?` ('default =' `nodename`)::
-Destroy Ceph Monitor and Manager.
+The ID for the mds, when omitted the same as the nodename
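A sketch with a hypothetical ID (`mds1`); omit `--name` to use the node name:

```shell
# Create an MDS with an explicit ID. --hotstandby 1 makes the daemon
# follow the active MDS's log for faster failover, at the cost of
# extra idle resources.
pveceph mds create --name mds1 --hotstandby 1
```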
-`<monid>`: `[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?` ::
+*pveceph mds destroy* `<name>`
-Monitor ID
+Destroy Ceph Metadata Server
-`--exclude-manager` `<boolean>` ('default =' `0`)::
+`<name>`: `[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?` ::
-When set, removes only the monitor, not the manager
+The name (ID) of the mds
+*pveceph mgr create* `[OPTIONS]`
+Create Ceph Manager
+`--id` `[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?` ::
-*pveceph destroyosd* `<osdid>` `[OPTIONS]`
+The ID for the manager, when omitted the same as the nodename
-Destroy OSD
+*pveceph mgr destroy* `<id>`
-`<osdid>`: `<integer>` ::
+Destroy Ceph Manager.
-OSD ID
+`<id>`: `[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?` ::
-`--cleanup` `<boolean>` ('default =' `0`)::
+The ID of the manager
-If set, we remove partition table entries.
+*pveceph mon create* `[OPTIONS]`
+Create Ceph Monitor and Manager
+`--mon-address` `<string>` ::
+Overwrites autodetected monitor IP address. Must be in the public network of ceph.
-*pveceph destroypool* `<name>` `[OPTIONS]`
+`--monid` `[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?` ::
-Destroy pool
+The ID for the monitor, when omitted the same as the nodename
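For example, pinning the monitor to a specific address (the address below is a placeholder):

```shell
# 10.10.10.11 is a hypothetical address; it must lie inside the Ceph
# public network configured via 'pveceph init --network'.
pveceph mon create --mon-address 10.10.10.11
```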
-`<name>`: `<string>` ::
+*pveceph mon destroy* `<monid>`
-The name of the pool. It must be unique.
+Destroy Ceph Monitor and Manager.
-`--force` `<boolean>` ('default =' `0`)::
+`<monid>`: `[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?` ::
-If true, destroys pool even if in use
+Monitor ID
-`--remove_storages` `<boolean>` ('default =' `0`)::
+*pveceph osd create* `<dev>` `[OPTIONS]`
-Remove all pveceph-managed storages configured for this pool
+Create OSD
+`<dev>`: `<string>` ::
+Block device name.
+`--db_dev` `<string>` ::
-*pveceph help* `[<cmd>]` `[OPTIONS]`
+Block device name for block.db.
-Get help about specified command.
+`--db_size` `<number> (1 - N)` ('default =' `bluestore_block_db_size or 10% of OSD size`)::
-`<cmd>`: `<string>` ::
+Size in GiB for block.db.
++
+NOTE: Requires option(s): `db_dev`
-Command name
+`--encrypted` `<boolean>` ('default =' `0`)::
-`--verbose` `<boolean>` ::
+Enables encryption of the OSD.
-Verbose output format.
+`--wal_dev` `<string>` ::
+Block device name for block.wal.
+`--wal_size` `<number> (0.5 - N)` ('default =' `bluestore_block_wal_size or 1% of OSD size`)::
+Size in GiB for block.wal.
++
+NOTE: Requires option(s): `wal_dev`
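A sketch with placeholder device names, putting block.db on a faster device:

```shell
# /dev/sdb and /dev/nvme0n1 are placeholders for your data and DB
# devices. --db_size is in GiB; left unset it defaults to 10% of the
# OSD size (60 GiB for a 600 GiB OSD).
pveceph osd create /dev/sdb --db_dev /dev/nvme0n1 --db_size 60
```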
-*pveceph init* `[OPTIONS]`
+*pveceph osd destroy* `<osdid>` `[OPTIONS]`
-Create initial ceph default configuration and setup symlinks.
+Destroy OSD
-`--disable_cephx` `<boolean>` ('default =' `0`)::
+`<osdid>`: `<integer>` ::
-Disable cephx authentification.
-+
-WARNING: cephx is a security feature protecting against man-in-the-middle attacks. Only consider disabling cephx if your network is private!
+OSD ID
-`--min_size` `<integer> (1 - 7)` ('default =' `2`)::
+`--cleanup` `<boolean>` ('default =' `0`)::
-Minimum number of available replicas per object to allow I/O
+If set, partition table entries are removed.
-`--network` `<string>` ::
+*pveceph pool create* `<name>` `[OPTIONS]`
-Use specific network for all ceph related traffic
+Create POOL
-`--pg_bits` `<integer> (6 - 14)` ('default =' `6`)::
+`<name>`: `<string>` ::
-Placement group bits, used to specify the default number of placement groups.
-+
-NOTE: 'osd pool default pg num' does not work for default pools.
+The name of the pool. It must be unique.
-`--size` `<integer> (1 - 7)` ('default =' `3`)::
+`--add_storages` `<boolean>` ::
-Targeted number of replicas per object
+Configure VM and CT storage using the new pool.
+`--application` `<cephfs | rbd | rgw>` ::
+The application of the pool, 'rbd' by default.
+`--crush_rule` `<string>` ::
-*pveceph install* `[OPTIONS]`
+The rule to use for mapping object placement in the cluster.
-Install ceph related packages.
+`--min_size` `<integer> (1 - 7)` ('default =' `2`)::
-`--version` `<luminous>` ::
+Minimum number of replicas per object
-no description available
+`--pg_num` `<integer> (8 - 32768)` ('default =' `128`)::
+Number of placement groups.
+`--size` `<integer> (1 - 7)` ('default =' `3`)::
+Number of replicas per object
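For example, with a hypothetical pool name and the replica defaults spelled out:

```shell
# "vm-pool" is a placeholder name. size/min_size of 3/2 are the
# defaults, shown explicitly; --add_storages 1 also registers the
# pool as VM and CT storage.
pveceph pool create vm-pool --size 3 --min_size 2 --pg_num 128 --add_storages 1
```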
-*pveceph lspools*
+*pveceph pool destroy* `<name>` `[OPTIONS]`
-List all pools.
+Destroy pool
+`<name>`: `<string>` ::
+The name of the pool. It must be unique.
+`--force` `<boolean>` ('default =' `0`)::
-*pveceph purge*
+If true, destroys pool even if in use
-Destroy ceph related data and configuration files.
+`--remove_storages` `<boolean>` ('default =' `0`)::
+Remove all pveceph-managed storages configured for this pool
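Continuing the hypothetical example above, removal could look like:

```shell
# Remove the placeholder "vm-pool" together with any pveceph-managed
# storage entries referencing it; --force 1 would be needed if the
# pool were still in use.
pveceph pool destroy vm-pool --remove_storages 1
```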
+
+*pveceph pool ls*
+List all pools.
+*pveceph purge*
+
+Destroy ceph related data and configuration files.
*pveceph start* `[<service>]`
Start ceph services.
-`<service>`: `(mon|mds|osd|mgr)\.[A-Za-z0-9\-]{1,32}` ::
+`<service>`: `(ceph|mon|mds|osd|mgr)\.[A-Za-z0-9\-]{1,32}` ('default =' `ceph.target`)::
Ceph service name.
-
-
*pveceph status*
Get ceph status.
-
-
*pveceph stop* `[<service>]`
Stop ceph services.
-`<service>`: `(mon|mds|osd|mgr)\.[A-Za-z0-9\-]{1,32}` ::
+`<service>`: `(ceph|mon|mds|osd|mgr)\.[A-Za-z0-9\-]{1,32}` ('default =' `ceph.target`)::
Ceph service name.
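For example, restarting a single monitor daemon (`pve1` is a hypothetical monitor ID):

```shell
# The <service> argument must match (ceph|mon|mds|osd|mgr).<id>.
# With no argument, ceph.target (all local Ceph services) is acted on.
pveceph stop mon.pve1
pveceph start mon.pve1
```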
-
-