*pveceph* `<COMMAND> [ARGS] [OPTIONS]`


*pveceph createmgr* `[OPTIONS]`

Create Ceph Manager

`--id` `[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?` ::

The ID for the manager; when omitted, the nodename is used.



*pveceph createmon* `[OPTIONS]`

Create Ceph Monitor and Manager

`--exclude-manager` `<boolean>` ('default =' `0`)::

When set, only a monitor will be created.

`--id` `[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?` ::

The ID for the monitor; when omitted, the nodename is used.

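When scripting around `pveceph createmgr`/`createmon`, the `--id` pattern above can be checked with plain `grep -E`. This is a sketch, not part of pveceph itself: the sample IDs are invented, `^`/`$` anchors are added because `grep` matches substrings, and the hyphen is moved to the end of the bracket expression so it needs no escaping.

```shell
# Validate candidate --id values against the documented pattern.
# Sample IDs are illustrative only.
pattern='^[a-zA-Z0-9]([a-zA-Z0-9-]*[a-zA-Z0-9])?$'
for id in node1 ceph-mgr-a -leading trailing-; do
    if printf '%s\n' "$id" | grep -qE "$pattern"; then
        echo "$id: valid"
    else
        echo "$id: invalid"
    fi
done
```

IDs must start and end with an alphanumeric character, so `-leading` and `trailing-` are rejected.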
*pveceph createosd* `<dev>` `[OPTIONS]`

Create OSD

`<dev>`: `<string>` ::

Block device name.

`--bluestore` `<boolean>` ('default =' `0`)::

Use bluestore instead of filestore.

`--fstype` `<btrfs | ext4 | xfs>` ('default =' `xfs`)::

File system type (filestore only).

`--journal_dev` `<string>` ::

Block device name for journal (filestore) or block.db (bluestore).

`--wal_dev` `<string>` ::

Block device name for block.wal (bluestore only).

*pveceph createpool* `<name>` `[OPTIONS]`

Create pool

`<name>`: `<string>` ::

The name of the pool. It must be unique.

`--add_storages` `<boolean>` ::

Configure VM and CT storages using the new pool.

`--application` `<cephfs | rbd | rgw>` ::

The application of the pool, 'rbd' by default.

`--crush_rule` `<string>` ::

The rule to use for mapping object placement in the cluster.

`--min_size` `<integer> (1 - 7)` ('default =' `2`)::

Minimum number of replicas per object.

`--pg_num` `<integer> (8 - 32768)` ('default =' `64`)::

Number of placement groups.

`--size` `<integer> (1 - 7)` ('default =' `3`)::

Number of replicas per object.

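As a rough aid for choosing `--pg_num`, the commonly cited Ceph rule of thumb targets about 100 placement groups per OSD, divided by the pool's `size`, rounded up to a power of two and kept within the 8 - 32768 range the option accepts. This sketch is not pveceph functionality, and `osds=6` is an invented example value:

```shell
# Rule-of-thumb pg_num suggestion: (osds * 100 / size), rounded up to
# the next power of two, clamped to pveceph's 8-32768 range.
# 'osds=6' is an invented example value.
osds=6
size=3
target=$(( osds * 100 / size ))
pg=8
while [ "$pg" -lt "$target" ] && [ "$pg" -lt 32768 ]; do
    pg=$(( pg * 2 ))
done
echo "suggested --pg_num: $pg"    # prints: suggested --pg_num: 256
```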
*pveceph destroymgr* `<id>`

Destroy Ceph Manager.

`<id>`: `[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?` ::

The ID of the manager.



*pveceph destroymon* `<monid>` `[OPTIONS]`

Destroy Ceph Monitor and Manager.

`<monid>`: `[a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?` ::

Monitor ID

`--exclude-manager` `<boolean>` ('default =' `0`)::

When set, removes only the monitor, not the manager.

*pveceph destroyosd* `<osdid>` `[OPTIONS]`

Destroy OSD

`<osdid>`: `<integer>` ::

OSD ID

`--cleanup` `<boolean>` ('default =' `0`)::

If set, partition table entries are removed as well.

*pveceph destroypool* `<name>` `[OPTIONS]`

Destroy pool

`<name>`: `<string>` ::

The name of the pool. It must be unique.

`--force` `<boolean>` ('default =' `0`)::

If true, destroys the pool even if it is in use.

`--remove_storages` `<boolean>` ('default =' `0`)::

Remove all pveceph-managed storages configured for this pool.

*pveceph help* `[<cmd>]` `[OPTIONS]`

Get help about the specified command.

`<cmd>`: `<string>` ::

Command name

`--verbose` `<boolean>` ::

Verbose output format.

*pveceph init* `[OPTIONS]`

Create initial Ceph default configuration and set up symlinks.

`--disable_cephx` `<boolean>` ('default =' `0`)::

Disable cephx authentication.
+
WARNING: cephx is a security feature protecting against man-in-the-middle attacks. Only consider disabling cephx if your network is private!

`--min_size` `<integer> (1 - 7)` ('default =' `2`)::

Minimum number of available replicas per object to allow I/O.

`--network` `<string>` ::

Use a specific network for all Ceph-related traffic.

`--pg_bits` `<integer> (6 - 14)` ('default =' `6`)::

Placement group bits, used to specify the default number of placement groups.
+
NOTE: 'osd pool default pg num' does not work for default pools.

`--size` `<integer> (1 - 7)` ('default =' `3`)::

Targeted number of replicas per object.

*pveceph install* `[OPTIONS]`

Install Ceph-related packages.

`--version` `<luminous>` ::

The Ceph release to install; 'luminous' is the only accepted value.

*pveceph lspools*

List all pools.



*pveceph purge*

Destroy Ceph-related data and configuration files.

*pveceph start* `[<service>]`

Start Ceph services.

`<service>`: `(mon|mds|osd|mgr)\.[A-Za-z0-9\-]{1,32}` ::

Ceph service name.

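The documented `<service>` pattern can likewise be checked with `grep -E` before passing a name to `pveceph start` or `pveceph stop`. A sketch, with invented service names, anchors added, and the hyphen moved to the end of the bracket expression:

```shell
# Check service arguments against the documented pattern.
# The service names below are invented examples.
pattern='^(mon|mds|osd|mgr)\.[A-Za-z0-9-]{1,32}$'
for svc in mon.node1 osd.0 rgw.node1; do
    printf '%s\n' "$svc" | grep -qE "$pattern" \
        && echo "$svc: matches" || echo "$svc: no match"
done
```

Only `mon`, `mds`, `osd`, and `mgr` services are accepted, so `rgw.node1` does not match.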
*pveceph status*

Get Ceph status.

*pveceph stop* `[<service>]`

Stop Ceph services.

`<service>`: `(mon|mds|osd|mgr)\.[A-Za-z0-9\-]{1,32}` ::

Ceph service name.