*pveceph* `<COMMAND> [ARGS] [OPTIONS]`

*pveceph createmgr*

An alias for 'pveceph mgr create'.

*pveceph createmon*

An alias for 'pveceph mon create'.

*pveceph createosd*

An alias for 'pveceph osd create'.

*pveceph createpool*

An alias for 'pveceph pool create'.

*pveceph destroymgr*

An alias for 'pveceph mgr destroy'.

*pveceph destroymon*

An alias for 'pveceph mon destroy'.

*pveceph destroyosd*

An alias for 'pveceph osd destroy'.

*pveceph destroypool*

An alias for 'pveceph pool destroy'.

*pveceph fs create* `[OPTIONS]`

Create a Ceph filesystem.

`--add-storage` `<boolean>` ('default =' `0`)::

Configure the created CephFS as storage for this cluster.

`--name` `<string>` ('default =' `cephfs`)::

The ceph filesystem name.

`--pg_num` `<integer> (8 - 32768)` ('default =' `128`)::

Number of placement groups for the backing data pool. The metadata pool will use a quarter of this.

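A typical invocation might look like the following sketch; the `pg_num` value is illustrative, not a sizing recommendation:

```shell
# Create a CephFS named "cephfs" and register it as cluster storage.
# --pg_num 64 is an example value; size it for your cluster.
pveceph fs create --name cephfs --add-storage 1 --pg_num 64
```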
*pveceph help* `[OPTIONS]`

Get help about the specified command.

`--extra-args` `<array>` ::

Shows help for a specific command.

`--verbose` `<boolean>` ::

Verbose output format.

*pveceph init* `[OPTIONS]`

Create initial ceph default configuration and setup symlinks.

`--cluster-network` `<string>` ::

Declare a separate cluster network; OSDs will route heartbeat, object replication and recovery traffic over it.
+
NOTE: Requires option(s): `network`

`--disable_cephx` `<boolean>` ('default =' `0`)::

Disable cephx authentication.
+
WARNING: cephx is a security feature protecting against man-in-the-middle attacks. Only consider disabling cephx if your network is private!

`--min_size` `<integer> (1 - 7)` ('default =' `2`)::

Minimum number of available replicas per object to allow I/O.

`--network` `<string>` ::

Use a specific network for all ceph related traffic.

`--pg_bits` `<integer> (6 - 14)` ('default =' `6`)::

Placement group bits, used to specify the default number of placement groups.
+
NOTE: 'osd pool default pg num' does not work for default pools.

`--size` `<integer> (1 - 7)` ('default =' `3`)::

Targeted number of replicas per object.

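As a sketch, initializing with a dedicated public network and a separate cluster network could look like this; both subnets are example values:

```shell
# Use 10.10.10.0/24 (example) for all ceph traffic, and route OSD
# heartbeat/replication/recovery over a separate 10.10.20.0/24 network.
pveceph init --network 10.10.10.0/24 --cluster-network 10.10.20.0/24
```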
*pveceph install* `[OPTIONS]`

Install ceph related packages.

`--version` `<luminous>` ::

no description available

*pveceph lspools*

An alias for 'pveceph pool ls'.

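Per the synopsis above, `luminous` is the only accepted value for `--version`, so an invocation would be:

```shell
# Install the ceph packages for the luminous release.
pveceph install --version luminous
```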
*pveceph mds create* `[OPTIONS]`

Create Ceph Metadata Server (MDS).

`--hotstandby` `<boolean>` ('default =' `0`)::

Determines whether a ceph-mds daemon should poll and replay the log of an active MDS. Faster switch on MDS failure, but needs more idle resources.

`--name` `[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?` ('default =' `nodename`)::

The ID for the mds; when omitted, the same as the nodename.

*pveceph mds destroy* `<name>`

Destroy Ceph Metadata Server.

`<name>`: `[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?` ::

The name (ID) of the mds.

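For example (the MDS ID `mds1` is hypothetical; it defaults to the nodename when omitted):

```shell
# Create an MDS with a hot standby for faster failover,
# then remove it again by its ID.
pveceph mds create --name mds1 --hotstandby 1
pveceph mds destroy mds1
```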
*pveceph mgr create* `[OPTIONS]`

Create Ceph Manager.

`--id` `[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?` ::

The ID for the manager; when omitted, the same as the nodename.

*pveceph mgr destroy* `<id>`

Destroy Ceph Manager.

`<id>`: `[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?` ::

The ID of the manager.

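A minimal sketch; `node1` is an example manager ID, not a value from the synopsis:

```shell
# Create a manager using the nodename as its ID, and remove one by ID.
pveceph mgr create
pveceph mgr destroy node1
```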
*pveceph mon create* `[OPTIONS]`

Create Ceph Monitor and Manager.

`--exclude-manager` `<boolean>` ('default =' `0`)::

When set, only a monitor will be created.

`--id` `[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?` ::

The ID for the monitor; when omitted, the same as the nodename.

`--mon-address` `<string>` ::

Overwrites the autodetected monitor IP address. Must be in the public network of ceph.

*pveceph mon destroy* `<monid>` `[OPTIONS]`

Destroy Ceph Monitor and Manager.

`<monid>`: `[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?` ::

Monitor ID

`--exclude-manager` `<boolean>` ('default =' `0`)::

When set, removes only the monitor, not the manager.

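An illustrative invocation; the address is an example within a hypothetical ceph public network:

```shell
# Create a monitor (and manager) with an explicit address.
pveceph mon create --mon-address 10.10.10.11
# Create only the monitor, skipping the manager:
pveceph mon create --exclude-manager 1
```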
*pveceph osd create* `<dev>` `[OPTIONS]`

Create OSD.

`<dev>`: `<string>` ::

Block device name.

`--bluestore` `<boolean>` ('default =' `1`)::

Use bluestore instead of filestore. This is the default.

`--fstype` `<ext4 | xfs>` ('default =' `xfs`)::

File system type (filestore only).

`--journal_dev` `<string>` ::

Block device name for journal (filestore) or block.db (bluestore).

`--wal_dev` `<string>` ::

Block device name for block.wal (bluestore only).

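Two sketches combining the options above; all device names are examples:

```shell
# Bluestore OSD (the default) on /dev/sdb, placing block.db and
# block.wal on a faster example device.
pveceph osd create /dev/sdb --journal_dev /dev/nvme0n1 --wal_dev /dev/nvme0n1
# Filestore OSD with an xfs filesystem instead:
pveceph osd create /dev/sdc --bluestore 0 --fstype xfs
```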
*pveceph osd destroy* `<osdid>` `[OPTIONS]`

Destroy OSD.

`<osdid>`: `<integer>` ::

OSD ID

`--cleanup` `<boolean>` ('default =' `0`)::

If set, we remove partition table entries.

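For example, with `2` as a hypothetical OSD ID:

```shell
# Destroy OSD 2 and also remove its partition table entries.
pveceph osd destroy 2 --cleanup 1
```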
*pveceph pool create* `<name>` `[OPTIONS]`

Create pool.

`<name>`: `<string>` ::

The name of the pool. It must be unique.

`--add_storages` `<boolean>` ::

Configure VM and CT storage using the new pool.

`--application` `<cephfs | rbd | rgw>` ::

The application of the pool, 'rbd' by default.

`--crush_rule` `<string>` ::

The rule to use for mapping object placement in the cluster.

`--min_size` `<integer> (1 - 7)` ('default =' `2`)::

Minimum number of replicas per object.

`--pg_num` `<integer> (8 - 32768)` ('default =' `128`)::

Number of placement groups.

`--size` `<integer> (1 - 7)` ('default =' `3`)::

Number of replicas per object.

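A sketch of a replicated pool for VM disks; the pool name `vmpool` is hypothetical and the sizes merely restate the defaults:

```shell
# 3 replicas per object, I/O still allowed with 2 available,
# registered as VM/CT storage.
pveceph pool create vmpool --size 3 --min_size 2 --application rbd --add_storages 1
```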
*pveceph pool destroy* `<name>` `[OPTIONS]`

Destroy pool.

`<name>`: `<string>` ::

The name of the pool. It must be unique.

`--force` `<boolean>` ('default =' `0`)::

If true, destroys the pool even if it is in use.

`--remove_storages` `<boolean>` ('default =' `0`)::

Remove all pveceph-managed storages configured for this pool.

*pveceph pool ls*

List all pools.

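Continuing with a hypothetical pool name:

```shell
# Remove the example pool "vmpool" together with any pveceph-managed
# storage entries referencing it; --force overrides the in-use check.
pveceph pool destroy vmpool --remove_storages 1 --force 1
```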
*pveceph purge*

Destroy ceph related data and configuration files.

*pveceph start* `[<service>]`

Start ceph services.

`<service>`: `(mon|mds|osd|mgr)\.[A-Za-z0-9\-]{1,32}` ('default =' `ceph.target`)::

Ceph service name.

*pveceph status*

Get ceph status.

*pveceph stop* `[<service>]`

Stop ceph services.

`<service>`: `(mon|mds|osd|mgr)\.[A-Za-z0-9\-]{1,32}` ('default =' `ceph.target`)::

Ceph service name.
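
The service control commands above can be combined like this; `osd.2` is an example service name matching the pattern `(mon|mds|osd|mgr)\.[A-Za-z0-9\-]{1,32}`:

```shell
# Restart a single OSD service, then check the cluster state.
pveceph stop osd.2
pveceph start osd.2
pveceph status
# With no service argument, start/stop act on ceph.target:
pveceph start
```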