X-Git-Url: https://git.proxmox.com/?a=blobdiff_plain;f=pveceph.1-synopsis.adoc;h=9514e3e62b73f90686bd6391a8d46a6cfde91eb5;hb=8d5c645cf745305c5c8fb9706543881f584c7491;hp=0859989880d30699721936eace427cc41d1a66ea;hpb=013dc89ffce47b8c55412c016a508205768b4fd6;p=pve-docs.git

diff --git a/pveceph.1-synopsis.adoc b/pveceph.1-synopsis.adoc
index 0859989..9514e3e 100644
--- a/pveceph.1-synopsis.adoc
+++ b/pveceph.1-synopsis.adoc
@@ -1,69 +1,212 @@
 *pveceph* ` [ARGS] [OPTIONS]`
 
+*pveceph createmgr*
+
+An alias for 'pveceph mgr create'.
+
 *pveceph createmon*
 
-Create Ceph Monitor
+An alias for 'pveceph mon create'.
 
+*pveceph createosd*
+
+An alias for 'pveceph osd create'.
+
+*pveceph createpool*
+
-*pveceph createosd* `` `[OPTIONS]`
+An alias for 'pveceph pool create'.
 
-Create OSD
+*pveceph destroymgr*
 
-``: `` ::
+An alias for 'pveceph mgr destroy'.
 
-Block device name.
+*pveceph destroymon*
 
-`-fstype` `` ('default =' `xfs`)::
+An alias for 'pveceph mon destroy'.
 
-File system type.
+*pveceph destroyosd*
 
-`-journal_dev` `` ::
+An alias for 'pveceph osd destroy'.
 
-Block device name for journal.
+*pveceph destroypool*
+
+An alias for 'pveceph pool destroy'.
+
+*pveceph fs create* `[OPTIONS]`
+
+Create a Ceph filesystem
 
-*pveceph createpool* `` `[OPTIONS]`
+`--add-storage` `` ('default =' `0`)::
 
-Create POOL
+Configure the created CephFS as storage for this cluster.
 
-``: `` ::
+`--name` `` ('default =' `cephfs`)::
 
-The name of the pool. It must be unique.
+The ceph filesystem name.
 
-`-crush_ruleset` ` (0 - 32768)` ('default =' `0`)::
+`--pg_num` ` (8 - 32768)` ('default =' `128`)::
 
-The ruleset to use for mapping object placement in the cluster.
+Number of placement groups for the backing data pool. The metadata pool will use a quarter of this.
 
-`-min_size` ` (1 - 3)` ('default =' `1`)::
+*pveceph help* `[OPTIONS]`
 
-Minimum number of replicas per object
+Get help about specified command.
 
-`-pg_num` ` (8 - 32768)` ('default =' `64`)::
+`--extra-args` `` ::
 
-Number of placement groups.
+Shows help for a specific command
 
-`-size` ` (1 - 3)` ('default =' `2`)::
+`--verbose` `` ::
 
-Number of replicas per object
+Verbose output format.
+
+*pveceph init* `[OPTIONS]`
+
+Create initial ceph default configuration and set up symlinks.
+
+`--cluster-network` `` ::
+
+Declare a separate cluster network; OSDs will route heartbeat, object replication and recovery traffic over it.
++
+NOTE: Requires option(s): `network`
+
+`--disable_cephx` `` ('default =' `0`)::
+
+Disable cephx authentication.
++
+WARNING: cephx is a security feature protecting against man-in-the-middle attacks. Only consider disabling cephx if your network is private!
+
+`--min_size` ` (1 - 7)` ('default =' `2`)::
+
+Minimum number of available replicas per object to allow I/O
+
+`--network` `` ::
+
+Use specific network for all ceph related traffic
+
+`--pg_bits` ` (6 - 14)` ('default =' `6`)::
+
+Placement group bits, used to specify the default number of placement groups.
++
+NOTE: 'osd pool default pg num' does not work for default pools.
+
+`--size` ` (1 - 7)` ('default =' `3`)::
+
+Targeted number of replicas per object
+
+*pveceph install* `[OPTIONS]`
+
+Install ceph related packages.
 
-*pveceph destroymon* ``
+`--allow-experimental` `` ('default =' `0`)::
 
-Destroy Ceph monitor.
+Allow experimental versions. Use with care!
 
-``: `` ::
+`--test-repository` `` ('default =' `0`)::
+
+Use the test, not the main repository. Use with care!
+
+`--version` `` ('default =' `pacific`)::
+
+Ceph version to install.
+
+*pveceph lspools*
+
+An alias for 'pveceph pool ls'.
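
For orientation, a minimal bootstrap sketch using the commands documented above (an editor's illustration, not part of the upstream diff; the 10.10.10.0/24 and 10.10.20.0/24 networks are example values, and `pacific` is simply the documented default):

----
# install the Ceph packages on this node
pveceph install --version pacific
# write the initial ceph.conf; public and (optional) cluster network are example subnets
pveceph init --network 10.10.10.0/24 --cluster-network 10.10.20.0/24
# once monitors, OSDs and an MDS exist (see the commands below),
# a CephFS can be created and registered as storage
pveceph fs create --name cephfs --add-storage 1
----
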
+
+*pveceph mds create* `[OPTIONS]`
+
+Create Ceph Metadata Server (MDS)
+
+`--hotstandby` `` ('default =' `0`)::
+
+Determines whether a ceph-mds daemon should poll and replay the log of an active MDS. Faster switch on MDS failure, but needs more idle resources.
+
+`--name` `[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?` ('default =' `nodename`)::
+
+The ID for the mds, when omitted the same as the nodename
+
+*pveceph mds destroy* ``
+
+Destroy Ceph Metadata Server
+
+``: `[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?` ::
+
+The name (ID) of the mds
+
+*pveceph mgr create* `[OPTIONS]`
+
+Create Ceph Manager
+
+`--id` `[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?` ::
+
+The ID for the manager, when omitted the same as the nodename
+
+*pveceph mgr destroy* ``
+
+Destroy Ceph Manager.
+
+``: `[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?` ::
+
+The ID of the manager
+
+*pveceph mon create* `[OPTIONS]`
+
+Create Ceph Monitor and Manager
+
+`--mon-address` `` ::
+
+Overwrites autodetected monitor IP address(es). Must be in the public network(s) of Ceph.
+
+`--monid` `[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?` ::
+
+The ID for the monitor, when omitted the same as the nodename
+
+*pveceph mon destroy* ``
+
+Destroy Ceph Monitor and Manager.
+
+``: `[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?` ::
 
 Monitor ID
 
+*pveceph osd create* `` `[OPTIONS]`
+
+Create OSD
+
+``: `` ::
+
+Block device name.
+
+`--crush-device-class` `` ::
+
+Set the device class of the OSD in crush.
+
+`--db_dev` `` ::
+
+Block device name for block.db.
+
+`--db_dev_size` ` (1 - N)` ('default =' `bluestore_block_db_size or 10% of OSD size`)::
+
+Size in GiB for block.db.
++
+NOTE: Requires option(s): `db_dev`
+
+`--encrypted` `` ('default =' `0`)::
+
+Enables encryption of the OSD.
 
-*pveceph destroyosd* `` `[OPTIONS]`
+`--wal_dev` `` ::
+
+Block device name for block.wal.
+
+`--wal_dev_size` ` (0.5 - N)` ('default =' `bluestore_block_wal_size or 1% of OSD size`)::
+
+Size in GiB for block.wal.
++
+NOTE: Requires option(s): `wal_dev`
+
+*pveceph osd destroy* `` `[OPTIONS]`
 
 Destroy OSD
 
@@ -71,113 +214,164 @@ Destroy OSD
 
 OSD ID
 
-`-cleanup` `` ('default =' `0`)::
+`--cleanup` `` ('default =' `0`)::
 
 If set, we remove partition table entries.
 
+*pveceph pool create* `` `[OPTIONS]`
-
-
-*pveceph destroypool* `` `[OPTIONS]`
-
-Destroy pool
+Create POOL
 
 ``: `` ::
 
 The name of the pool. It must be unique.
 
-`-force` `` ('default =' `0`)::
+`--add_storages` `` ::
 
-If true, destroys pool even if in use
+Configure VM and CT storage using the new pool.
+
+`--application` `` ('default =' `rbd`)::
+
+The application of the pool.
+
+`--crush_rule` `` ::
 
-*pveceph help* `[]` `[OPTIONS]`
+The rule to use for mapping object placement in the cluster.
 
-Get help about specified command.
+`--min_size` ` (1 - 7)` ('default =' `2`)::
 
-``: `` ::
+Minimum number of replicas per object
 
-Command name
+`--pg_autoscale_mode` `` ('default =' `warn`)::
 
-`-verbose` `` ::
+The automatic PG scaling mode of the pool.
 
-Verbose output format.
+`--pg_num` ` (1 - 32768)` ('default =' `128`)::
+
+Number of placement groups.
+
+`--pg_num_min` ` (-N - 32768)` ::
+
+Minimal number of placement groups.
 
-*pveceph init* `[OPTIONS]`
+`--size` ` (1 - 7)` ('default =' `3`)::
 
-Create initial ceph default configuration and setup symlinks.
+Number of replicas per object
 
-`-network` `` ::
+`--target_size` `^(\d+(\.\d+)?)([KMGT])?$` ::
 
-Use specific network for all ceph related traffic
+The estimated target size of the pool for the PG autoscaler.
-`-pg_bits` ` (6 - 14)` ('default =' `6`)::
+`--target_size_ratio` `` ::
 
-Placement group bits, used to specify the default number of placement groups.
-+
-NOTE: 'osd pool default pg num' does not work for default pools.
+The estimated target ratio of the pool for the PG autoscaler.
 
-`-size` ` (1 - 3)` ('default =' `2`)::
+*pveceph pool destroy* `` `[OPTIONS]`
 
-Number of replicas per object
+Destroy pool
+
+``: `` ::
+
+The name of the pool. It must be unique.
+
+`--force` `` ('default =' `0`)::
 
-*pveceph install* `[OPTIONS]`
+If true, destroys pool even if in use
 
-Install ceph related packages.
+`--remove_storages` `` ('default =' `0`)::
 
-`-version` `` ::
+Remove all pveceph-managed storages configured for this pool
 
-no description available
+*pveceph pool get* `` `[OPTIONS]` `[FORMAT_OPTIONS]`
+
+List pool settings.
+
+``: `` ::
+
+The name of the pool. It must be unique.
 
-*pveceph lspools*
+`--verbose` `` ('default =' `0`)::
+
+If enabled, will display additional data (e.g. statistics).
+
+*pveceph pool ls* `[FORMAT_OPTIONS]`
 
 List all pools.
 
+*pveceph pool set* `` `[OPTIONS]`
+
+Change POOL settings
+
+``: `` ::
+
+The name of the pool. It must be unique.
 
-*pveceph purge*
+`--application` `` ::
 
-Destroy ceph related data and configuration files.
+The application of the pool.
+
+`--crush_rule` `` ::
+
+The rule to use for mapping object placement in the cluster.
+
+`--min_size` ` (1 - 7)` ::
 
-*pveceph start* `[]`
+Minimum number of replicas per object
 
-Start ceph services.
+`--pg_autoscale_mode` `` ::
 
-``: `(mon|mds|osd)\.[A-Za-z0-9]{1,32}` ::
+The automatic PG scaling mode of the pool.
 
-Ceph service name.
+`--pg_num` ` (1 - 32768)` ::
+
+Number of placement groups.
+
+`--pg_num_min` ` (-N - 32768)` ::
 
-*pveceph status*
+Minimal number of placement groups.
 
-Get ceph status.
+`--size` ` (1 - 7)` ::
+
+Number of replicas per object
+
+`--target_size` `^(\d+(\.\d+)?)([KMGT])?$` ::
 
-*pveceph stop* `[]`
+The estimated target size of the pool for the PG autoscaler.
 
-Stop ceph services.
+`--target_size_ratio` `` ::
 
-``: `(mon|mds|osd)\.[A-Za-z0-9]{1,32}` ::
+The estimated target ratio of the pool for the PG autoscaler.
+
+*pveceph purge* `[OPTIONS]`
+
+Destroy ceph related data and configuration files.
+
+`--crash` `` ::
+
+Additionally purge Ceph crash logs, /var/lib/ceph/crash.
+
+`--logs` `` ::
+
+Additionally purge Ceph logs, /var/log/ceph.
+
+*pveceph start* `[OPTIONS]`
+
+Start ceph services.
+
+`--service` `(ceph|mon|mds|osd|mgr)(\.[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?)?` ('default =' `ceph.target`)::
 
 Ceph service name.
 
+*pveceph status*
+
+Get Ceph Status.
+
+*pveceph stop* `[OPTIONS]`
+
+Stop ceph services.
+
+`--service` `(ceph|mon|mds|osd|mgr)(\.[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?)?` ('default =' `ceph.target`)::
+
+Ceph service name.
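
As a closing illustration of the monitor, OSD and pool commands above (an editor's sketch, not part of the upstream diff; the device `/dev/sdb`, the pool name `vm-pool` and the service name `osd.0` are placeholders):

----
pveceph mon create                          # monitor ID defaults to the nodename
pveceph mgr create
pveceph osd create /dev/sdb --encrypted 1   # /dev/sdb is an example block device
pveceph pool create vm-pool --pg_autoscale_mode on --add_storages 1
pveceph pool get vm-pool --verbose 1        # show settings plus statistics
pveceph stop --service osd.0                # osd.0 is an example service name
pveceph start --service osd.0
pveceph status
----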