*pveceph* `<COMMAND> [ARGS] [OPTIONS]`

*pveceph createmgr*

An alias for 'pveceph mgr create'.

*pveceph createmon*

An alias for 'pveceph mon create'.

*pveceph createosd*

An alias for 'pveceph osd create'.

*pveceph createpool*

An alias for 'pveceph pool create'.

*pveceph destroymgr*

An alias for 'pveceph mgr destroy'.

*pveceph destroymon*

An alias for 'pveceph mon destroy'.

*pveceph destroyosd*

An alias for 'pveceph osd destroy'.

*pveceph destroypool*

An alias for 'pveceph pool destroy'.

*pveceph fs create* `[OPTIONS]`

Create a Ceph filesystem

`--add-storage` `<boolean>` ('default =' `0`)::

Configure the created CephFS as storage for this cluster.

`--name` `<string>` ('default =' `cephfs`)::

The ceph filesystem name.

`--pg_num` `<integer> (8 - 32768)` ('default =' `128`)::

Number of placement groups for the backing data pool. The metadata pool will use a quarter of this.

*pveceph fs destroy* `<name>` `[OPTIONS]`

Destroy a Ceph filesystem

`<name>`: `<string>` ::

The ceph filesystem name.

`--remove-pools` `<boolean>` ('default =' `0`)::

Remove data and metadata pools configured for this fs.

`--remove-storages` `<boolean>` ('default =' `0`)::

Remove all pveceph-managed storages configured for this fs.

*pveceph help* `[OPTIONS]`

Get help about specified command.

`--extra-args` `<array>` ::

Shows help for a specific command

`--verbose` `<boolean>` ::

Verbose output format.

*pveceph init* `[OPTIONS]`

Create initial ceph default configuration and setup symlinks.

`--cluster-network` `<string>` ::

Declare a separate cluster network; OSDs will route heartbeat, object replication and recovery traffic over it.
+
NOTE: Requires option(s): `network`

`--disable_cephx` `<boolean>` ('default =' `0`)::

Disable cephx authentication.
+
WARNING: cephx is a security feature protecting against man-in-the-middle attacks. Only consider disabling cephx if your network is private!

`--min_size` `<integer> (1 - 7)` ('default =' `2`)::

Minimum number of available replicas per object to allow I/O

`--network` `<string>` ::

Use a specific network for all Ceph-related traffic

`--pg_bits` `<integer> (6 - 14)` ('default =' `6`)::

Placement group bits, used to specify the default number of placement groups.
+
Deprecated. This setting was deprecated in recent Ceph versions.

`--size` `<integer> (1 - 7)` ('default =' `3`)::

Targeted number of replicas per object
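For orientation, the init options above are typically combined as follows. This is an illustrative sketch, not part of the generated synopsis; the subnets are placeholders for your own public and cluster networks:

----
# write the initial /etc/pve/ceph.conf and set up the symlinks, using a
# dedicated public network and a separate replication/heartbeat network
pveceph init --network 10.10.10.0/24 --cluster-network 10.10.20.0/24
----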
*pveceph install* `[OPTIONS]`

Install ceph related packages.

`--allow-experimental` `<boolean>` ('default =' `0`)::

Allow experimental versions. Use with care!

`--repository` `<enterprise | no-subscription | test>` ('default =' `enterprise`)::

Ceph repository to use.

`--version` `` ('default =' `quincy`)::

Ceph version to install.

*pveceph lspools*

An alias for 'pveceph pool ls'.

*pveceph mds create* `[OPTIONS]`

Create Ceph Metadata Server (MDS)

`--hotstandby` `<boolean>` ('default =' `0`)::

Determines whether a ceph-mds daemon should poll and replay the log of an active MDS. Faster switch on MDS failure, but needs more idle resources.

`--name` `[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?` ('default =' `nodename`)::

The ID for the mds; when omitted, the same as the nodename

*pveceph mds destroy* `<name>`

Destroy Ceph Metadata Server

`<name>`: `[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?` ::

The name (ID) of the mds

*pveceph mgr create* `[OPTIONS]`

Create Ceph Manager

`--id` `[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?` ::

The ID for the manager; when omitted, the same as the nodename

*pveceph mgr destroy* `<id>`

Destroy Ceph Manager.

`<id>`: `[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?` ::

The ID of the manager

*pveceph mon create* `[OPTIONS]`

Create Ceph Monitor and Manager

`--mon-address` `<string>` ::

Overwrites autodetected monitor IP address(es). Must be in the public network(s) of Ceph.

`--monid` `[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?` ::

The ID for the monitor; when omitted, the same as the nodename

*pveceph mon destroy* `<monid>`

Destroy Ceph Monitor and Manager.

`<monid>`: `[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?` ::

Monitor ID

*pveceph osd create* `<dev>` `[OPTIONS]`

Create OSD

`<dev>`: `<string>` ::

Block device name.

`--crush-device-class` `<string>` ::

Set the device class of the OSD in CRUSH.

`--db_dev` `<string>` ::

Block device name for block.db.

`--db_dev_size` `<number> (1 - N)` ('default =' `bluestore_block_db_size or 10% of OSD size`)::

Size in GiB for block.db.
+
NOTE: Requires option(s): `db_dev`

`--encrypted` `<boolean>` ('default =' `0`)::

Enables encryption of the OSD.

`--osds-per-device` `<integer> (1 - N)` ::

OSD services per physical device. Only useful for fast NVMe devices to utilize their performance better.

`--wal_dev` `<string>` ::

Block device name for block.wal.

`--wal_dev_size` `<number> (0.5 - N)` ('default =' `bluestore_block_wal_size or 1% of OSD size`)::

Size in GiB for block.wal.
+
NOTE: Requires option(s): `wal_dev`

*pveceph osd destroy* `<osdid>` `[OPTIONS]`

Destroy OSD

`<osdid>`: `<integer>` ::

OSD ID

`--cleanup` `<boolean>` ('default =' `0`)::

If set, we remove partition table entries.

*pveceph osd details* `<osdid>` `[OPTIONS]` `[FORMAT_OPTIONS]`

Get OSD details.

`<osdid>`: `<string>` ::

ID of the OSD

`--verbose` `<boolean>` ('default =' `0`)::

Print verbose information, same as json-pretty output format.
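To show how the OSD options combine in practice, a hedged example follows; the device paths, the 60 GiB DB size and the OSD ID are placeholders, not values from this synopsis:

----
# create an OSD on /dev/sdb, placing its block.db on a faster NVMe
# device with an explicit size (--db_dev_size requires --db_dev)
pveceph osd create /dev/sdb --db_dev /dev/nvme0n1 --db_dev_size 60

# inspect the newly created OSD
pveceph osd details 0 --verbose 1
----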
*pveceph pool create* `<name>` `[OPTIONS]`

Create Ceph pool

`<name>`: `<string>` ::

The name of the pool. It must be unique.

`--add_storages` `<boolean>` ('default =' `0; for erasure coded pools: 1`)::

Configure VM and CT storage using the new pool.

`--application` `<cephfs | rbd | rgw>` ('default =' `rbd`)::

The application of the pool.

`--crush_rule` `<string>` ::

The rule to use for mapping object placement in the cluster.

`--erasure-coding` `k=<integer> ,m=<integer> [,device-class=<class>] [,failure-domain=<domain>] [,profile=<profile>]` ::

Create an erasure coded pool for RBD with an accompanying replicated pool for metadata storage. With EC, the common Ceph options 'size', 'min_size' and 'crush_rule' will be applied to the metadata pool.

`--min_size` `<integer> (1 - 7)` ('default =' `2`)::

Minimum number of replicas per object

`--pg_autoscale_mode` `<off | on | warn>` ('default =' `warn`)::

The automatic PG scaling mode of the pool.

`--pg_num` `<integer> (1 - 32768)` ('default =' `128`)::

Number of placement groups.

`--pg_num_min` `<integer> (-N - 32768)` ::

Minimal number of placement groups.

`--size` `<integer> (1 - 7)` ('default =' `3`)::

Number of replicas per object

`--target_size` `^(\d+(\.\d+)?)([KMGT])?$` ::

The estimated target size of the pool for the PG autoscaler.

`--target_size_ratio` `<number>` ::

The estimated target ratio of the pool for the PG autoscaler.

*pveceph pool destroy* `<name>` `[OPTIONS]`

Destroy pool

`<name>`: `<string>` ::

The name of the pool. It must be unique.

`--force` `<boolean>` ('default =' `0`)::

If true, destroys pool even if in use

`--remove_ecprofile` `<boolean>` ('default =' `1`)::

Remove the erasure code profile. Defaults to true, if applicable.

`--remove_storages` `<boolean>` ('default =' `0`)::

Remove all pveceph-managed storages configured for this pool

*pveceph pool get* `<name>` `[OPTIONS]` `[FORMAT_OPTIONS]`

Show the current pool status.

`<name>`: `<string>` ::

The name of the pool. It must be unique.

`--verbose` `<boolean>` ('default =' `0`)::

If enabled, will display additional data (e.g. statistics).

*pveceph pool ls* `[FORMAT_OPTIONS]`

List all pools and their settings (which are settable by the POST/PUT
endpoints).

*pveceph pool set* `<name>` `[OPTIONS]`

Change POOL settings

`<name>`: `<string>` ::

The name of the pool. It must be unique.

`--application` `<cephfs | rbd | rgw>` ::

The application of the pool.

`--crush_rule` `<string>` ::

The rule to use for mapping object placement in the cluster.

`--min_size` `<integer> (1 - 7)` ::

Minimum number of replicas per object

`--pg_autoscale_mode` `<off | on | warn>` ::

The automatic PG scaling mode of the pool.

`--pg_num` `<integer> (1 - 32768)` ::

Number of placement groups.

`--pg_num_min` `<integer> (-N - 32768)` ::

Minimal number of placement groups.

`--size` `<integer> (1 - 7)` ::

Number of replicas per object

`--target_size` `^(\d+(\.\d+)?)([KMGT])?$` ::

The estimated target size of the pool for the PG autoscaler.

`--target_size_ratio` `<number>` ::

The estimated target ratio of the pool for the PG autoscaler.
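As a usage sketch for the pool commands (the pool name 'vmdata' and the ratio value are illustrative placeholders, not defaults):

----
# replicated pool for guest disks, registered as PVE storage right away
pveceph pool create vmdata --size 3 --min_size 2 --add_storages 1

# later, give the PG autoscaler a hint about the pool's expected share
pveceph pool set vmdata --target_size_ratio 0.3
----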
*pveceph purge* `[OPTIONS]`

Destroy ceph related data and configuration files.

`--crash` `<boolean>` ::

Additionally purge Ceph crash logs, /var/lib/ceph/crash.

`--logs` `<boolean>` ::

Additionally purge Ceph logs, /var/log/ceph.

*pveceph start* `[OPTIONS]`

Start ceph services.

`--service` `(ceph|mon|mds|osd|mgr)(\.[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?)?` ('default =' `ceph.target`)::

Ceph service name.

*pveceph status*

Get Ceph Status.

*pveceph stop* `[OPTIONS]`

Stop ceph services.

`--service` `(ceph|mon|mds|osd|mgr)(\.[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?)?` ('default =' `ceph.target`)::

Ceph service name.
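The `--service` values follow the `(ceph|mon|mds|osd|mgr)[.<id>]` pattern shown above; for instance (the OSD ID is a placeholder):

----
# restart a single OSD service, then check overall cluster health
pveceph stop --service osd.2
pveceph start --service osd.2
pveceph status
----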