*pveceph* `<COMMAND> [ARGS] [OPTIONS]`

*pveceph createmgr* `[OPTIONS]`

Create Ceph Manager.

`--id` `[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?` ::

The ID for the manager; when omitted, the same as the nodename.

*pveceph createmon* `[OPTIONS]`

Create Ceph Monitor and Manager.

`--exclude-manager` `<boolean>` ('default =' `0`)::

When set, only a monitor will be created.

`--id` `[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?` ::

The ID for the monitor; when omitted, the same as the nodename.
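For example, following the synopsis above (the ID `mon2` is illustrative):

```shell
# Create a monitor plus manager, using the node's name as the ID
pveceph createmon

# Create only a monitor, with an explicit ID
pveceph createmon --exclude-manager 1 --id mon2
```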
*pveceph createosd* `<dev>` `[OPTIONS]`

Create Ceph OSD.

`<dev>`: `<string>` ::

Block device name.

`--bluestore` `<boolean>` ('default =' `0`)::

Use bluestore instead of filestore.

`--fstype` `<btrfs | ext4 | xfs>` ('default =' `xfs`)::

File system type (filestore only).

`--journal_dev` `<string>` ::

Block device name for journal (filestore) or block.db (bluestore).

`--wal_dev` `<string>` ::

Block device name for block.wal (bluestore only).
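For example, using the options above (the device names are illustrative and must match your hardware):

```shell
# Filestore OSD (default) on /dev/sdb with its journal on /dev/sdc
pveceph createosd /dev/sdb --journal_dev /dev/sdc

# Bluestore OSD with block.db and block.wal on separate devices
pveceph createosd /dev/sdb --bluestore 1 --journal_dev /dev/sdc --wal_dev /dev/sdd
```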
*pveceph createpool* `<name>` `[OPTIONS]`

Create Ceph pool.

`<name>`: `<string>` ::

The name of the pool. It must be unique.

`--add_storages` `<boolean>` ::

Configure VM and CT storage using the new pool.

`--application` `<cephfs | rbd | rgw>` ::

The application of the pool, 'rbd' by default.

`--crush_rule` `<string>` ::

The rule to use for mapping object placement in the cluster.

`--min_size` `<integer> (1 - 7)` ('default =' `2`)::

Minimum number of replicas per object.

`--pg_num` `<integer> (8 - 32768)` ('default =' `64`)::

Number of placement groups.

`--size` `<integer> (1 - 7)` ('default =' `3`)::

Number of replicas per object.
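For example, combining the options above (the pool name `vmpool` is illustrative):

```shell
# Replicated pool with 128 placement groups, registered as VM/CT storage
pveceph createpool vmpool --pg_num 128 --add_storages 1
```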
*pveceph destroymgr* `<id>`

Destroy Ceph Manager.

`<id>`: `[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?` ::

The ID of the manager.
*pveceph destroymon* `<monid>` `[OPTIONS]`

Destroy Ceph Monitor and Manager.

`<monid>`: `[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?` ::

The ID of the monitor.

`--exclude-manager` `<boolean>` ('default =' `0`)::

When set, removes only the monitor, not the manager.
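For example, following the synopsis above (the monitor ID `mon1` is illustrative):

```shell
# Remove the monitor but keep the manager running on this node
pveceph destroymon mon1 --exclude-manager 1
```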
*pveceph destroyosd* `<osdid>` `[OPTIONS]`

Destroy Ceph OSD.

`<osdid>`: `<integer>` ::

OSD ID.

`--cleanup` `<boolean>` ('default =' `0`)::

If set, also remove the partition table entries.
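For example, using the options above (the OSD ID `2` is illustrative):

```shell
# Remove OSD 2 and clean up its partition table entries
pveceph destroyosd 2 --cleanup 1
```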
*pveceph destroypool* `<name>` `[OPTIONS]`

Destroy Ceph pool.

`<name>`: `<string>` ::

The name of the pool. It must be unique.

`--force` `<boolean>` ('default =' `0`)::

If true, destroys the pool even if it is in use.

`--remove_storages` `<boolean>` ('default =' `0`)::

Remove all pveceph-managed storages configured for this pool.
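For example, combining the options above (the pool name `vmpool` is illustrative):

```shell
# Destroy a pool that is still in use and drop its storage definitions
pveceph destroypool vmpool --force 1 --remove_storages 1
```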
*pveceph help* `[<cmd>]` `[OPTIONS]`

Get help about specified command.

`<cmd>`: `<string>` ::

Command name.

`--verbose` `<boolean>` ::

Verbose output format.
*pveceph init* `[OPTIONS]`

Create initial ceph default configuration and setup symlinks.

`--disable_cephx` `<boolean>` ('default =' `0`)::

Disable cephx authentication.

WARNING: cephx is a security feature protecting against man-in-the-middle attacks. Only consider disabling cephx if your network is private!

`--min_size` `<integer> (1 - 7)` ('default =' `2`)::

Minimum number of available replicas per object to allow I/O.

`--network` `<string>` ::

Use specific network for all ceph related traffic.

`--pg_bits` `<integer> (6 - 14)` ('default =' `6`)::

Placement group bits, used to specify the default number of placement groups.

NOTE: 'osd pool default pg num' does not work for default pools.

`--size` `<integer> (1 - 7)` ('default =' `3`)::

Targeted number of replicas per object.
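For example, using the options above (the network address is illustrative):

```shell
# Initialize ceph with a dedicated cluster network and 3/2 replication
pveceph init --network 10.10.10.0/24 --size 3 --min_size 2
```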
*pveceph install* `[OPTIONS]`

Install ceph related packages.

`--version` `<luminous>` ::

no description available
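For example, selecting the only listed version:

```shell
# Install the luminous packages on this node
pveceph install --version luminous
```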
*pveceph purge*

Destroy ceph related data and configuration files.
*pveceph start* `[<service>]`

Start ceph services.

`<service>`: `(mon|mds|osd|mgr)\.[A-Za-z0-9\-]{1,32}` ::

Ceph service name.
*pveceph stop* `[<service>]`

Stop ceph services.

`<service>`: `(mon|mds|osd|mgr)\.[A-Za-z0-9\-]{1,32}` ::

Ceph service name.
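For example, matching the service-name pattern above (the names `mon.node1` and `osd.0` are illustrative):

```shell
# Start a specific monitor, then stop a specific OSD
pveceph start mon.node1
pveceph stop osd.0
```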