*pveceph* `<COMMAND> [ARGS] [OPTIONS]`

*pveceph createmgr* `[OPTIONS]`

Create Ceph Manager.

`--id` `[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?` ::

The ID for the manager; when omitted, it defaults to the nodename.
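
A minimal invocation sketch; the manager ID is illustrative:

----
# "mgr1" is a made-up ID; omit --id to fall back to the nodename
pveceph createmgr --id mgr1
----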

*pveceph createmon* `[OPTIONS]`

Create Ceph Monitor and Manager.

`--exclude-manager` `<boolean>` ('default =' `0`)::

When set, only a monitor will be created.

`--id` `[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?` ::

The ID for the monitor; when omitted, it defaults to the nodename.
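
A sketch creating only a monitor, skipping the bundled manager:

----
# plain "pveceph createmon" would also create a manager
pveceph createmon --exclude-manager 1
----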

*pveceph createosd* `<dev>` `[OPTIONS]`

Create OSD.

`<dev>`: `<string>` ::

Block device name.

`--bluestore` `<boolean>` ('default =' `0`)::

Use bluestore instead of filestore.

`--fstype` `<btrfs | ext4 | xfs>` ('default =' `xfs`)::

File system type (filestore only).

`--journal_dev` `<string>` ::

Block device name for journal (filestore) or block.db (bluestore).

`--wal_dev` `<string>` ::

Block device name for block.wal (bluestore only).
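
A bluestore OSD sketch; both device paths are illustrative and assume a
spare data disk plus a faster device for block.db and block.wal:

----
# /dev/sdb and /dev/nvme0n1 are placeholder device names
pveceph createosd /dev/sdb --bluestore 1 --journal_dev /dev/nvme0n1 --wal_dev /dev/nvme0n1
----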

*pveceph createpool* `<name>` `[OPTIONS]`

Create pool.

`<name>`: `<string>` ::

The name of the pool. It must be unique.

`--add_storages` `<boolean>` ::

Configure VM and CT storages using the new pool.

`--application` `<cephfs | rbd | rgw>` ::

The application of the pool, 'rbd' by default.

`--crush_rule` `<string>` ::

The rule to use for mapping object placement in the cluster.

`--min_size` `<integer> (1 - 7)` ('default =' `2`)::

Minimum number of replicas per object.

`--pg_num` `<integer> (8 - 32768)` ('default =' `64`)::

Number of placement groups.

`--size` `<integer> (1 - 7)` ('default =' `3`)::

Number of replicas per object.
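
A sketch creating a replicated 3/2 pool and registering it as VM and CT
storage; the pool name and pg_num value are illustrative:

----
# "vmpool" is a placeholder name; a suitable pg_num depends on the OSD count
pveceph createpool vmpool --size 3 --min_size 2 --pg_num 128 --add_storages 1
----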

*pveceph destroymgr* `<id>`

Destroy Ceph Manager.

`<id>`: `[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?` ::

The ID of the manager.

*pveceph destroymon* `<monid>` `[OPTIONS]`

Destroy Ceph Monitor and Manager.

`<monid>`: `[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?` ::

Monitor ID.

`--exclude-manager` `<boolean>` ('default =' `0`)::

When set, removes only the monitor, not the manager.
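
A sketch removing a monitor while keeping its manager; the monitor ID is
illustrative:

----
# "node1" is a placeholder monitor ID
pveceph destroymon node1 --exclude-manager 1
----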

*pveceph destroyosd* `<osdid>` `[OPTIONS]`

Destroy OSD.

`<osdid>`: `<integer>` ::

OSD ID.

`--cleanup` `<boolean>` ('default =' `0`)::

If set, the partition table entries are removed as well.
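
A sketch destroying an OSD and cleaning up its partition table entries;
the OSD ID is illustrative:

----
# "2" is a placeholder OSD ID
pveceph destroyosd 2 --cleanup 1
----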

*pveceph destroypool* `<name>` `[OPTIONS]`

Destroy pool.

`<name>`: `<string>` ::

The name of the pool. It must be unique.

`--force` `<boolean>` ('default =' `0`)::

If true, destroys the pool even if it is in use.

`--remove_storages` `<boolean>` ('default =' `0`)::

Remove all pveceph-managed storages configured for this pool.
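
A sketch destroying a pool together with the storage definitions that
reference it; the pool name is illustrative:

----
# "vmpool" is a placeholder name
pveceph destroypool vmpool --remove_storages 1
----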

*pveceph help* `[<cmd>]` `[OPTIONS]`

Get help about the specified command.

`<cmd>`: `<string>` ::

Command name.

`--verbose` `<boolean>` ::

Verbose output format.
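
A sketch printing the verbose help for one subcommand; "createosd" stands
in for any command name:

----
pveceph help createosd --verbose 1
----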

*pveceph init* `[OPTIONS]`

Create the initial ceph default configuration and set up symlinks.

`--disable_cephx` `<boolean>` ('default =' `0`)::

Disable cephx authentication.
+
WARNING: cephx is a security feature protecting against man-in-the-middle attacks. Only consider disabling cephx if your network is private!

`--min_size` `<integer> (1 - 7)` ('default =' `2`)::

Minimum number of available replicas per object to allow I/O.

`--network` `<string>` ::

Use a specific network for all ceph-related traffic.

`--pg_bits` `<integer> (6 - 14)` ('default =' `6`)::

Placement group bits, used to specify the default number of placement groups.
+
NOTE: 'osd pool default pg num' does not work for default pools.

`--size` `<integer> (1 - 7)` ('default =' `3`)::

Targeted number of replicas per object.
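
An initialization sketch pinning all ceph traffic to a dedicated network;
the CIDR is illustrative:

----
# 10.10.10.0/24 is a placeholder cluster network
pveceph init --network 10.10.10.0/24 --size 3 --min_size 2
----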

*pveceph install* `[OPTIONS]`

Install ceph-related packages.

`--version` `<luminous>` ::

The ceph version to install.
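
A sketch installing the only version listed above:

----
pveceph install --version luminous
----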

*pveceph lspools*

List all pools.


*pveceph purge*

Destroy ceph-related data and configuration files.

*pveceph start* `[<service>]`

Start ceph services.

`<service>`: `(mon|mds|osd|mgr)\.[A-Za-z0-9\-]{1,32}` ::

Ceph service name.
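
A sketch starting a single daemon; the service name is illustrative but
must match the pattern above:

----
# "osd.0" is a placeholder service name
pveceph start osd.0
----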

*pveceph status*

Get ceph status.

*pveceph stop* `[<service>]`

Stop ceph services.

`<service>`: `(mon|mds|osd|mgr)\.[A-Za-z0-9\-]{1,32}` ::

Ceph service name.
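
The matching stop sketch, again with an illustrative service name:

----
pveceph stop osd.0
----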