*pveceph* `<COMMAND> [ARGS] [OPTIONS]`
*pveceph createmgr*

An alias for 'pveceph mgr create'.

*pveceph createmon*

An alias for 'pveceph mon create'.

*pveceph createosd*

An alias for 'pveceph osd create'.

*pveceph createpool*

An alias for 'pveceph pool create'.

*pveceph destroymgr*

An alias for 'pveceph mgr destroy'.

*pveceph destroymon*

An alias for 'pveceph mon destroy'.

*pveceph destroyosd*

An alias for 'pveceph osd destroy'.

*pveceph destroypool*

An alias for 'pveceph pool destroy'.
*pveceph fs create* `[OPTIONS]`

Create a Ceph filesystem.

`--add-storage` `<boolean>` ('default =' `0`)::

Configure the created CephFS as storage for this cluster.

`--name` `<string>` ('default =' `cephfs`)::

The ceph filesystem name.

`--pg_num` `<integer> (8 - 32768)` ('default =' `128`)::

Number of placement groups for the backing data pool. The metadata pool will use a quarter of this.
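A hypothetical invocation sketching the options above (the filesystem name and `pg_num` value are illustrative, not recommendations):

```shell
# Create a CephFS named 'cephfs' with 64 PGs for the backing data pool
# (the metadata pool gets a quarter of that), and register it as
# storage in this cluster.
pveceph fs create --name cephfs --pg_num 64 --add-storage 1
```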
*pveceph fs destroy* `<name>` `[OPTIONS]`

Destroy a Ceph filesystem.

`<name>`: `<string>` ::

The ceph filesystem name.

`--remove-pools` `<boolean>` ('default =' `0`)::

Remove data and metadata pools configured for this fs.

`--remove-storages` `<boolean>` ('default =' `0`)::

Remove all pveceph-managed storages configured for this fs.
*pveceph help* `[OPTIONS]`

Get help about the specified command.

`--extra-args` `<array>` ::

Shows help for a specific command.

`--verbose` `<boolean>` ::

Verbose output format.
*pveceph init* `[OPTIONS]`

Create initial ceph default configuration and setup symlinks.

`--cluster-network` `<string>` ::

Declare a separate cluster network. OSDs will route heartbeat, object replication and recovery traffic over it.

NOTE: Requires option(s): `network`

`--disable_cephx` `<boolean>` ('default =' `0`)::

Disable cephx authentication.

WARNING: cephx is a security feature protecting against man-in-the-middle attacks. Only consider disabling cephx if your network is private!

`--min_size` `<integer> (1 - 7)` ('default =' `2`)::

Minimum number of available replicas per object to allow I/O.

`--network` `<string>` ::

Use a specific network for all ceph related traffic.

`--pg_bits` `<integer> (6 - 14)` ('default =' `6`)::

Placement group bits, used to specify the default number of placement groups.

NOTE: 'osd pool default pg num' does not work for default pools.

`--size` `<integer> (1 - 7)` ('default =' `3`)::

Targeted number of replicas per object.
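A sketch of an initialization with separate public and cluster networks (both CIDRs below are placeholders for your own subnets):

```shell
# Write the initial ceph.conf, binding ceph traffic to a public network
# and routing replication/heartbeat traffic over a dedicated cluster
# network (placeholder subnets).
pveceph init --network 10.10.10.0/24 --cluster-network 10.10.20.0/24
```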
*pveceph install* `[OPTIONS]`

Install ceph related packages.

`--allow-experimental` `<boolean>` ('default =' `0`)::

Allow experimental versions. Use with care!

`--test-repository` `<boolean>` ('default =' `0`)::

Use the test repository instead of the main one. Use with care!

`--version` `<octopus | pacific | quincy>` ('default =' `pacific`)::

Ceph version to install.
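For example, to install a specific release rather than the default:

```shell
# Install the Ceph packages for the 'quincy' release from the main
# repository (the version shown is just an example).
pveceph install --version quincy
```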
*pveceph lspools*

An alias for 'pveceph pool ls'.
*pveceph mds create* `[OPTIONS]`

Create Ceph Metadata Server (MDS).

`--hotstandby` `<boolean>` ('default =' `0`)::

Determines whether a ceph-mds daemon should poll and replay the log of an active MDS. Faster switch-over on MDS failure, but needs more idle resources.

`--name` `[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?` ('default =' `nodename`)::

The ID for the mds; when omitted, the nodename is used.
*pveceph mds destroy* `<name>`

Destroy Ceph Metadata Server.

`<name>`: `[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?` ::

The name (ID) of the mds.
*pveceph mgr create* `[OPTIONS]`

Create Ceph Manager.

`--id` `[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?` ::

The ID for the manager; when omitted, the nodename is used.

*pveceph mgr destroy* `<id>`

Destroy Ceph Manager.

`<id>`: `[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?` ::

The ID of the manager.
*pveceph mon create* `[OPTIONS]`

Create Ceph Monitor and Manager.

`--mon-address` `<string>` ::

Overwrites autodetected monitor IP address(es). Must be in the public network(s) of Ceph.

`--monid` `[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?` ::

The ID for the monitor; when omitted, the nodename is used.

*pveceph mon destroy* `<monid>`

Destroy Ceph Monitor and Manager.

`<monid>`: `[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?` ::

The ID of the monitor.
*pveceph osd create* `<dev>` `[OPTIONS]`

Create OSD.

`<dev>`: `<string>` ::

Block device name.

`--crush-device-class` `<string>` ::

Set the device class of the OSD in crush.

`--db_dev` `<string>` ::

Block device name for block.db.

`--db_dev_size` `<number> (1 - N)` ('default =' `bluestore_block_db_size or 10% of OSD size`)::

Size in GiB for block.db.

NOTE: Requires option(s): `db_dev`

`--encrypted` `<boolean>` ('default =' `0`)::

Enables encryption of the OSD.

`--wal_dev` `<string>` ::

Block device name for block.wal.

`--wal_dev_size` `<number> (0.5 - N)` ('default =' `bluestore_block_wal_size or 1% of OSD size`)::

Size in GiB for block.wal.

NOTE: Requires option(s): `wal_dev`
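A hypothetical example combining several of these options (all device paths and the size are placeholders for your own hardware):

```shell
# Create an encrypted OSD on /dev/sdX, placing its block.db on a
# separate (e.g. faster NVMe) device with an explicit 20 GiB size.
pveceph osd create /dev/sdX --encrypted 1 \
    --db_dev /dev/nvme0n1 --db_dev_size 20
```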
*pveceph osd destroy* `<osdid>` `[OPTIONS]`

Destroy OSD.

`<osdid>`: `<integer>` ::

OSD ID.

`--cleanup` `<boolean>` ('default =' `0`)::

If set, remove the partition table entries.
*pveceph pool create* `<name>` `[OPTIONS]`

Create Ceph pool.

`<name>`: `<string>` ::

The name of the pool. It must be unique.

`--add_storages` `<boolean>` ('default =' `0; for erasure coded pools: 1`)::

Configure VM and CT storage using the new pool.

`--application` `<cephfs | rbd | rgw>` ('default =' `rbd`)::

The application of the pool.

`--crush_rule` `<string>` ::

The rule to use for mapping object placement in the cluster.

`--erasure-coding` `k=<integer> ,m=<integer> [,device-class=<class>] [,failure-domain=<domain>] [,profile=<profile>]` ::

Create an erasure coded pool for RBD with an accompanying replicated pool for metadata storage. With EC, the common ceph options 'size', 'min_size' and 'crush_rule' will be applied to the metadata pool.

`--min_size` `<integer> (1 - 7)` ('default =' `2`)::

Minimum number of replicas per object.

`--pg_autoscale_mode` `<off | on | warn>` ('default =' `warn`)::

The automatic PG scaling mode of the pool.

`--pg_num` `<integer> (1 - 32768)` ('default =' `128`)::

Number of placement groups.

`--pg_num_min` `<integer> (-N - 32768)` ::

Minimal number of placement groups.

`--size` `<integer> (1 - 7)` ('default =' `3`)::

Number of replicas per object.

`--target_size` `^(\d+(\.\d+)?)([KMGT])?$` ::

The estimated target size of the pool for the PG autoscaler.

`--target_size_ratio` `<number>` ::

The estimated target ratio of the pool for the PG autoscaler.
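Two hypothetical invocations, one replicated and one erasure coded (pool names and the k/m values are illustrative):

```shell
# Replicated pool with 3 copies, 2 required for I/O, and the PG
# autoscaler enabled.
pveceph pool create vmpool --size 3 --min_size 2 --pg_autoscale_mode on

# Erasure coded pool with 4 data and 2 coding chunks; a replicated
# metadata pool is created alongside it automatically.
pveceph pool create ecpool --erasure-coding k=4,m=2
```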
*pveceph pool destroy* `<name>` `[OPTIONS]`

Destroy pool.

`<name>`: `<string>` ::

The name of the pool. It must be unique.

`--force` `<boolean>` ('default =' `0`)::

If true, destroys the pool even if it is in use.

`--remove_ecprofile` `<boolean>` ('default =' `1`)::

Remove the erasure code profile. Defaults to true, if applicable.

`--remove_storages` `<boolean>` ('default =' `0`)::

Remove all pveceph-managed storages configured for this pool.
*pveceph pool get* `<name>` `[OPTIONS]` `[FORMAT_OPTIONS]`

Show the current pool status.

`<name>`: `<string>` ::

The name of the pool. It must be unique.

`--verbose` `<boolean>` ('default =' `0`)::

If enabled, will display additional data (e.g. statistics).
*pveceph pool ls* `[FORMAT_OPTIONS]`

List all pools and their settings (which are settable by the POST/PUT endpoints).
*pveceph pool set* `<name>` `[OPTIONS]`

Change POOL settings.

`<name>`: `<string>` ::

The name of the pool. It must be unique.

`--application` `<cephfs | rbd | rgw>` ::

The application of the pool.

`--crush_rule` `<string>` ::

The rule to use for mapping object placement in the cluster.

`--min_size` `<integer> (1 - 7)` ::

Minimum number of replicas per object.

`--pg_autoscale_mode` `<off | on | warn>` ::

The automatic PG scaling mode of the pool.

`--pg_num` `<integer> (1 - 32768)` ::

Number of placement groups.

`--pg_num_min` `<integer> (-N - 32768)` ::

Minimal number of placement groups.

`--size` `<integer> (1 - 7)` ::

Number of replicas per object.

`--target_size` `^(\d+(\.\d+)?)([KMGT])?$` ::

The estimated target size of the pool for the PG autoscaler.

`--target_size_ratio` `<number>` ::

The estimated target ratio of the pool for the PG autoscaler.
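For instance, to adjust an existing pool after creation (the pool name and target size are placeholders):

```shell
# Enable the PG autoscaler on an existing pool and give it a 1 TiB
# target size hint so it can pre-scale the PG count.
pveceph pool set vmpool --pg_autoscale_mode on --target_size 1T
```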
*pveceph purge* `[OPTIONS]`

Destroy ceph related data and configuration files.

`--crash` `<boolean>` ::

Additionally purge Ceph crash logs, /var/lib/ceph/crash.

`--logs` `<boolean>` ::

Additionally purge Ceph logs, /var/log/ceph.
*pveceph start* `[OPTIONS]`

Start ceph services.

`--service` `(ceph|mon|mds|osd|mgr)(\.[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?)?` ('default =' `ceph.target`)::

Ceph service name.

*pveceph stop* `[OPTIONS]`

Stop ceph services.

`--service` `(ceph|mon|mds|osd|mgr)(\.[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?)?` ('default =' `ceph.target`)::

Ceph service name.
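The `--service` pattern allows restarting a single daemon instead of the whole `ceph.target`, for example (the OSD id is a placeholder):

```shell
# Stop and start only one OSD daemon, leaving the rest of the
# cluster's services untouched.
pveceph stop --service osd.3
pveceph start --service osd.3
```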