[[chapter_pveceph]]
ifdef::manvolnum[]
pveceph(1)
==========
:pve-toplevel:

NAME
----

pveceph - Manage Ceph Services on Proxmox VE Nodes

SYNOPSIS
--------

include::pveceph.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
Deploy Hyper-Converged Ceph Cluster
===================================
:pve-toplevel:
endif::manvolnum[]

[thumbnail="screenshot/gui-ceph-status.png"]

{pve} unifies your compute and storage systems, i.e., you can use the same
physical nodes within a cluster for both computing (processing VMs and
containers) and replicated storage. The traditional silos of compute and
storage resources can be wrapped up into a single hyper-converged appliance.
Separate storage networks (SANs) and connections via network attached storage
(NAS) disappear. With the integration of Ceph, an open source software-defined
storage platform, {pve} has the ability to run and manage Ceph storage directly
on the hypervisor nodes.

Ceph is a distributed object store and file system designed to provide
excellent performance, reliability and scalability.

.Some advantages of Ceph on {pve} are:
- Easy setup and management with CLI and GUI support
- Thin provisioning
- Snapshot support
- Self healing
- Scalable to the exabyte level
- Setup pools with different performance and redundancy characteristics
- Data is replicated, making it fault tolerant
- Runs on economical commodity hardware
- No need for hardware RAID controllers
- Open source

For small to mid-sized deployments, it is possible to install a Ceph server for
RADOS Block Devices (RBD) directly on your {pve} cluster nodes (see
xref:ceph_rados_block_devices[Ceph RADOS Block Devices (RBD)]). Recent
hardware has plenty of CPU power and RAM, so running storage services
and VMs on the same node is possible.

To simplify management, we provide 'pveceph' - a tool to install and
manage {ceph} services on {pve} nodes.

.Ceph consists of multiple Daemons, for use as an RBD storage:
- Ceph Monitor (ceph-mon)
- Ceph Manager (ceph-mgr)
- Ceph OSD (ceph-osd; Object Storage Daemon)

TIP: We highly recommend getting familiar with Ceph
footnote:[Ceph intro {cephdocs-url}/start/intro/],
its architecture
footnote:[Ceph architecture {cephdocs-url}/architecture/]
and vocabulary
footnote:[Ceph glossary {cephdocs-url}/glossary].


Precondition
------------

To build a hyper-converged Proxmox + Ceph Cluster, there should be at least
three (preferably) identical servers for the setup.

Also check the recommendations from
{cephdocs-url}/start/hardware-recommendations/[Ceph's website].

.CPU
A higher CPU core frequency reduces latency and should be preferred. As a
simple rule of thumb, you should assign a CPU core (or thread) to each Ceph
service to provide enough resources for stable and durable Ceph performance.

.Memory
Especially in a hyper-converged setup, the memory consumption needs to be
carefully monitored. In addition to the intended workload from virtual machines
and containers, Ceph needs enough memory available to provide excellent and
stable performance.

As a rule of thumb, for roughly **1 TiB of data, 1 GiB of memory** will be used
by an OSD, especially during recovery, rebalancing or backfilling.

The daemon itself will use additional memory. The BlueStore backend of the
daemon requires by default **3-5 GiB of memory** (adjustable). In contrast, the
legacy Filestore backend uses the OS page cache, and its memory consumption is
generally related to the number of PGs of an OSD daemon.
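If the default memory target does not fit your hardware, it can be tuned
through Ceph's configuration database. The following is only a sketch; the
4 GiB value is an example, not a recommendation:

[source,bash]
----
# set the BlueStore memory target for all OSDs to 4 GiB (example value)
ceph config set osd osd_memory_target 4294967296
----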

.Network
We recommend a network bandwidth of at least 10 GbE, used exclusively for
Ceph. A meshed network setup
footnote:[Full Mesh Network for Ceph {webwiki-url}Full_Mesh_Network_for_Ceph_Server]
is also an option if there are no 10 GbE switches available.

The volume of traffic, especially during recovery, will interfere with other
services on the same network and may even break the {pve} cluster stack.

Furthermore, you should estimate your bandwidth needs. While one HDD might not
saturate a 1 Gb link, multiple HDD OSDs per node can, and modern NVMe SSDs will
even saturate 10 Gbps of bandwidth quickly. Deploying a network capable of even
more bandwidth will ensure that it isn't your bottleneck and won't be anytime
soon; 25, 40 or even 100 Gbps are possible.

.Disks
When planning the size of your Ceph cluster, it is important to take the
recovery time into consideration. Especially with small clusters, recovery
can take a long time. It is recommended that you use SSDs instead of HDDs in
small setups to reduce recovery time, minimizing the likelihood of a subsequent
failure event during recovery.

In general, SSDs will provide more IOPS than spinning disks. This fact and the
higher cost may make a xref:pve_ceph_device_classes[class based] separation of
pools appealing. Another possibility to speed up OSDs is to use a faster disk
as journal or DB/**W**rite-**A**head-**L**og device, see
xref:pve_ceph_osds[creating Ceph OSDs]. If a faster disk is used for multiple
OSDs, a proper balance between OSD count and WAL / DB (or journal) disk size
must be selected, otherwise the faster disk becomes the bottleneck for all
linked OSDs.

Aside from the disk type, Ceph performs best with an evenly sized and evenly
distributed amount of disks per node. For example, 4 x 500 GB disks within each
node is better than a mixed setup with a single 1 TB and three 250 GB disks.

You also need to balance OSD count and single OSD capacity. More capacity
allows you to increase storage density, but it also means that a single OSD
failure forces Ceph to recover more data at once.

.Avoid RAID
As Ceph handles data object redundancy and multiple parallel writes to disks
(OSDs) on its own, using a RAID controller normally doesn't improve
performance or availability. On the contrary, Ceph is designed to handle whole
disks on its own, without any abstraction in between. RAID controllers are not
designed for the Ceph use case and may complicate things and sometimes even
reduce performance, as their write and caching algorithms may interfere with
the ones from Ceph.

WARNING: Avoid RAID controllers; use a host bus adapter (HBA) instead.

NOTE: The above recommendations should be seen as rough guidance for choosing
hardware. Therefore, it is still essential to adapt them to your specific needs.
Test your setup and monitor health and performance continuously.

[[pve_ceph_install_wizard]]
Initial Ceph installation & configuration
-----------------------------------------

[thumbnail="screenshot/gui-node-ceph-install.png"]

With {pve} you have the benefit of an easy to use installation wizard
for Ceph. Click on one of your cluster nodes and navigate to the Ceph
section in the menu tree. If Ceph is not already installed, you will be
offered to do so now.

The wizard is divided into different sections, where each needs to be
finished successfully in order to use Ceph. After starting the installation,
the wizard will download and install all the required packages from {pve}'s
Ceph repository.

After finishing the first step, you will need to create a configuration.
This step is only needed once per cluster, as this configuration is distributed
automatically to all remaining cluster members through {pve}'s clustered
xref:chapter_pmxcfs[configuration file system (pmxcfs)].

The configuration step includes the following settings:

* *Public Network:* You should set up a dedicated network for Ceph; this
setting is required. Separating your Ceph traffic is highly recommended,
because otherwise it could cause trouble with other latency dependent services,
for example, cluster communication, and may decrease Ceph's performance.

[thumbnail="screenshot/gui-node-ceph-install-wizard-step2.png"]

* *Cluster Network:* As an optional step, you can go even further and
separate the xref:pve_ceph_osds[OSD] replication & heartbeat traffic
as well. This will relieve the public network and could lead to
significant performance improvements, especially in big clusters.

You have two more options which are considered advanced and therefore
should only be changed if you are an expert.

* *Number of replicas*: Defines how often an object is replicated.
* *Minimum replicas*: Defines the minimum number of required replicas
for I/O to be marked as complete.

Additionally, you need to choose your first monitor node; this is required.

That's it. You should see a success page as the last step, with further
instructions on how to proceed. You are now ready to start using Ceph, even
though you will still need to create additional xref:pve_ceph_monitors[monitors],
create some xref:pve_ceph_osds[OSDs] and at least one xref:pve_ceph_pools[pool].

The rest of this chapter will guide you on how to get the most out of
your {pve} based Ceph setup. This includes the aforementioned topics and
more, such as xref:pveceph_fs[CephFS], which is a very handy addition to your
new Ceph cluster.

[[pve_ceph_install]]
Installation of Ceph Packages
-----------------------------
Use the {pve} Ceph installation wizard (recommended) or run the following
command on each node:

[source,bash]
----
pveceph install
----

This sets up an `apt` package repository in
`/etc/apt/sources.list.d/ceph.list` and installs the required software.


Create initial Ceph configuration
---------------------------------

[thumbnail="screenshot/gui-ceph-config.png"]

Use the {pve} Ceph installation wizard (recommended) or run the
following command on one node:

[source,bash]
----
pveceph init --network 10.10.10.0/24
----

This creates an initial configuration at `/etc/pve/ceph.conf` with a
dedicated network for Ceph. That file is automatically distributed to
all {pve} nodes, using xref:chapter_pmxcfs[pmxcfs]. The command also
creates a symbolic link at `/etc/ceph/ceph.conf`, which points to that file.
Thus, you can simply run Ceph commands without the need to specify a
configuration file.

[[pve_ceph_monitors]]
Ceph Monitor
------------
The Ceph Monitor (MON)
footnote:[Ceph Monitor {cephdocs-url}/start/intro/]
maintains a master copy of the cluster map. For high availability, you need at
least 3 monitors. One monitor will already be installed if you
used the installation wizard. You won't need more than 3 monitors, as long
as your cluster is small to mid-sized; only really large clusters will
need more than that.


[[pveceph_create_mon]]
Create Monitors
~~~~~~~~~~~~~~~

[thumbnail="screenshot/gui-ceph-monitor.png"]

On each node where you want to place a monitor (three monitors are recommended),
create one by using the 'Ceph -> Monitor' tab in the GUI or run:

[source,bash]
----
pveceph mon create
----

[[pveceph_destroy_mon]]
Destroy Monitors
~~~~~~~~~~~~~~~~

To remove a Ceph Monitor via the GUI, first select a node in the tree view and
go to the **Ceph -> Monitor** panel. Select the MON and click the **Destroy**
button.

To remove a Ceph Monitor via the CLI, first connect to the node on which the
MON is running. Then execute the following command:
[source,bash]
----
pveceph mon destroy
----

NOTE: At least three Monitors are needed for quorum.


[[pve_ceph_manager]]
Ceph Manager
------------
The Manager daemon runs alongside the monitors. It provides an interface to
monitor the cluster. Since the Ceph Luminous release, at least one ceph-mgr
footnote:[Ceph Manager {cephdocs-url}/mgr/] daemon is
required.

[[pveceph_create_mgr]]
Create Manager
~~~~~~~~~~~~~~

Multiple Managers can be installed, but only one Manager is active at any given
time.

[source,bash]
----
pveceph mgr create
----

NOTE: It is recommended to install the Ceph Manager on the monitor nodes. For
high availability, install more than one manager.

[[pveceph_destroy_mgr]]
Destroy Manager
~~~~~~~~~~~~~~~

To remove a Ceph Manager via the GUI, first select a node in the tree view and
go to the **Ceph -> Monitor** panel. Select the Manager and click the
**Destroy** button.

To remove a Ceph Manager via the CLI, first connect to the node on which the
Manager is running. Then execute the following command:
[source,bash]
----
pveceph mgr destroy
----

NOTE: A Ceph cluster can function without a Manager, but certain functions like
the cluster status or usage require a running Manager.

[[pve_ceph_osds]]
Ceph OSDs
---------
Ceph **O**bject **S**torage **D**aemons store objects for Ceph over the
network. It is recommended to use one OSD per physical disk.

NOTE: By default an object is 4 MiB in size.

[[pve_ceph_osd_create]]
Create OSDs
~~~~~~~~~~~

[thumbnail="screenshot/gui-ceph-osd-status.png"]

You can create an OSD via the GUI or via the CLI as follows:

[source,bash]
----
pveceph osd create /dev/sd[X]
----

TIP: We recommend a Ceph cluster with at least 12 OSDs, distributed
evenly among at least three nodes (4 OSDs on each node).

If the disk was in use before (for example, for ZFS, RAID or as an OSD), you
first need to remove the partition table, boot sector and any other OSD
leftover. The following command should be sufficient:

[source,bash]
----
ceph-volume lvm zap /dev/sd[X] --destroy
----

WARNING: The above command will destroy all data on the disk!

.Ceph Bluestore

Starting with the Ceph Kraken release, a new Ceph OSD storage type was
introduced, the so-called Bluestore
footnote:[Ceph Bluestore https://ceph.com/community/new-luminous-bluestore/].
This is the default when creating OSDs since Ceph Luminous.

[source,bash]
----
pveceph osd create /dev/sd[X]
----

.Block.db and block.wal

If you want to use a separate DB/WAL device for your OSDs, you can specify it
through the '-db_dev' and '-wal_dev' options. The WAL is placed with the DB, if
it is not specified separately.

[source,bash]
----
pveceph osd create /dev/sd[X] -db_dev /dev/sd[Y] -wal_dev /dev/sd[Z]
----

You can directly choose the size of those with the '-db_size' and '-wal_size'
parameters respectively. If they are not given, the following values (in order)
will be used:

* bluestore_block_{db,wal}_size from Ceph configuration...
** ... database, section 'osd'
** ... database, section 'global'
** ... file, section 'osd'
** ... file, section 'global'
* 10% (DB) / 1% (WAL) of OSD size
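
If you prefer to define these fallback sizes cluster-wide, rather than passing
'-db_size' on every call, they can be stored in Ceph's configuration database.
This is only a sketch; the 60 GiB value is an arbitrary example:

[source,bash]
----
# example: default DB size of 60 GiB for newly created OSDs
ceph config set osd bluestore_block_db_size 64424509440
----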

NOTE: The DB stores BlueStore's internal metadata, and the WAL is BlueStore's
internal journal or write-ahead log. It is recommended to use a fast SSD or
NVRAM for better performance.


.Ceph Filestore

Before Ceph Luminous, Filestore was used as the default storage type for Ceph
OSDs. Starting with Ceph Nautilus, {pve} does not support creating such OSDs
with 'pveceph' anymore. If you still want to create Filestore OSDs, use
'ceph-volume' directly.

[source,bash]
----
ceph-volume lvm create --filestore --data /dev/sd[X] --journal /dev/sd[Y]
----

[[pve_ceph_osd_destroy]]
Destroy OSDs
~~~~~~~~~~~~

To remove an OSD via the GUI, first select a {PVE} node in the tree view and go
to the **Ceph -> OSD** panel. Select the OSD to destroy and click the **OUT**
button. Once the OSD status has changed from `in` to `out`, click the **STOP**
button. As soon as the status has changed from `up` to `down`, select
**Destroy** from the `More` drop-down menu.

To remove an OSD via the CLI run the following commands.
[source,bash]
----
ceph osd out <ID>
systemctl stop ceph-osd@<ID>.service
----
NOTE: The first command instructs Ceph not to include the OSD in the data
distribution. The second command stops the OSD service. Until this time, no
data is lost.

The following command destroys the OSD. Specify the '-cleanup' option to
additionally destroy the partition table.
[source,bash]
----
pveceph osd destroy <ID>
----
WARNING: The above command will destroy all data on the disk!

[[pve_ceph_pools]]
Ceph Pools
----------
A pool is a logical group for storing objects. It holds **P**lacement
**G**roups (`PG`, `pg_num`), a collection of objects.


Create and edit Pools
~~~~~~~~~~~~~~~~~~~~~

[thumbnail="screenshot/gui-ceph-pools.png"]

When no options are given, we set a default of **128 PGs**, a **size of 3
replicas** and a **min_size of 2 replicas** for serving objects in a degraded
state.

NOTE: The default number of PGs works for 2-5 disks. Ceph throws a
'HEALTH_WARNING' if you have too few or too many PGs in your cluster.

WARNING: **Do not set a min_size of 1**. A replicated pool with a min_size of 1
allows I/O on an object when it has only 1 replica, which could lead to data
loss, incomplete PGs or unfound objects.

It is advised that you calculate the PG number based on your setup. You can
find the formula and the PG calculator footnote:[PG calculator
https://ceph.com/pgcalc/] online. From Ceph Nautilus onward, you can change the
number of PGs footnoteref:[placement_groups,Placement Groups
{cephdocs-url}/rados/operations/placement-groups/] after the setup.

In addition to manual adjustment, the PG autoscaler
footnoteref:[autoscaler,Automated Scaling
{cephdocs-url}/rados/operations/placement-groups/#automated-scaling] can
automatically scale the PG count for a pool in the background.

You can create pools through the command line or on the GUI of each PVE host,
under **Ceph -> Pools**.

[source,bash]
----
pveceph pool create <name>
----

If you would like to automatically also get a storage definition for your pool,
mark the checkbox "Add storages" in the GUI, or use the command line option
'--add_storages' at pool creation.
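
As a sketch, a pool can also be created entirely on the command line with
explicit options; the pool name and values below are examples only (see
`pveceph pool create --help` for the options available in your version):

[source,bash]
----
# example: 3/2 replicated pool with 128 PGs, added as a {pve} storage
pveceph pool create vm-pool --size 3 --min_size 2 --pg_num 128 --add_storages
----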

.Base Options
Name:: The name of the pool. This must be unique and can't be changed afterwards.
Size:: The number of replicas per object. Ceph always tries to have this many
copies of an object. Default: `3`.
PG Autoscale Mode:: The automatic PG scaling mode footnoteref:[autoscaler] of
the pool. If set to `warn`, it produces a warning message when a pool
has a non-optimal PG count. Default: `warn`.
Add as Storage:: Configure a VM or container storage using the new pool.
Default: `true` (only visible on creation).

.Advanced Options
Min. Size:: The minimum number of replicas per object. Ceph will reject I/O on
the pool if a PG has less than this many replicas. Default: `2`.
Crush Rule:: The rule to use for mapping object placement in the cluster. These
rules define how data is placed within the cluster. See
xref:pve_ceph_device_classes[Ceph CRUSH & device classes] for information on
device-based rules.
# of PGs:: The number of placement groups footnoteref:[placement_groups] that
the pool should have at the beginning. Default: `128`.
Target Size:: The estimated amount of data expected in the pool. The PG
autoscaler uses this size to estimate the optimal PG count.
Target Size Ratio:: The ratio of data that is expected in the pool. The PG
autoscaler uses the ratio relative to other pools with a ratio set. It takes
precedence over the `target size` if both are set.
Min. # of PGs:: The minimum number of placement groups. This setting is used to
fine-tune the lower bound of the PG count for that pool. The PG autoscaler
will not merge PGs below this threshold.

Further information on Ceph pool handling can be found in the Ceph pool
operation footnote:[Ceph pool operation
{cephdocs-url}/rados/operations/pools/]
manual.


Destroy Pools
~~~~~~~~~~~~~

To destroy a pool via the GUI, select a node in the tree view and go to the
**Ceph -> Pools** panel. Select the pool to destroy and click the **Destroy**
button. To confirm the destruction of the pool, you need to enter the pool name.

Run the following command to destroy a pool. Specify the '-remove_storages'
option to also remove the associated storage.
[source,bash]
----
pveceph pool destroy <name>
----

NOTE: Deleting the data of a pool is a background task and can take some time.
You will notice that the data usage in the cluster is decreasing.


PG Autoscaler
~~~~~~~~~~~~~

The PG autoscaler allows the cluster to consider the amount of (expected) data
stored in each pool and to choose the appropriate pg_num values automatically.

You may need to activate the PG autoscaler module before adjustments can take
effect.
[source,bash]
----
ceph mgr module enable pg_autoscaler
----

The autoscaler is configured on a per pool basis and has the following modes:

[horizontal]
warn:: A health warning is issued if the suggested `pg_num` value differs too
much from the current value.
on:: The `pg_num` is adjusted automatically with no need for any manual
interaction.
off:: No automatic `pg_num` adjustments are made, and no warning will be issued
if the PG count is far from optimal.

The scaling factor can be adjusted to facilitate future data storage, with the
`target_size`, `target_size_ratio` and the `pg_num_min` options.
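
A minimal sketch of steering the autoscaler for a single pool; the pool name
and the ratio below are made-up examples:

[source,bash]
----
# let the autoscaler adjust pg_num for this pool automatically
ceph osd pool set vm-pool pg_autoscale_mode on
# hint that this pool is expected to hold about 20% of the cluster's data
ceph osd pool set vm-pool target_size_ratio 0.2
----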

WARNING: By default, the autoscaler considers tuning the PG count of a pool if
it is off by a factor of 3. This will lead to a considerable shift in data
placement and might introduce a high load on the cluster.

You can find a more in-depth introduction to the PG autoscaler on Ceph's Blog -
https://ceph.io/rados/new-in-nautilus-pg-merging-and-autotuning/[New in
Nautilus: PG merging and autotuning].


[[pve_ceph_device_classes]]
Ceph CRUSH & device classes
---------------------------
The foundation of Ceph is its algorithm, **C**ontrolled **R**eplication
**U**nder **S**calable **H**ashing
(CRUSH footnote:[CRUSH https://ceph.com/wp-content/uploads/2016/08/weil-crush-sc06.pdf]).

CRUSH calculates where to store data to and retrieve it from. This has the
advantage that no central index service is needed. CRUSH works with a map of
OSDs, buckets (device locations) and rulesets (data replication) for pools.

NOTE: Further information can be found in the Ceph documentation, under the
section CRUSH map footnote:[CRUSH map {cephdocs-url}/rados/operations/crush-map/].

This map can be altered to reflect different replication hierarchies. The object
replicas can be separated (e.g., across failure domains), while maintaining the
desired distribution.

A common use case is to use different classes of disks for different Ceph pools.
For this reason, Ceph introduced device classes with Luminous, to
accommodate the need for easy ruleset generation.

The device classes can be seen in the 'ceph osd tree' output. These classes
represent their own root bucket, which can be seen with the command below.

[source, bash]
----
ceph osd crush tree --show-shadow
----

Example output from the above command:

[source, bash]
----
ID  CLASS WEIGHT  TYPE NAME
-16 nvme 2.18307 root default~nvme
-13 nvme 0.72769     host sumi1~nvme
 12 nvme 0.72769         osd.12
-14 nvme 0.72769     host sumi2~nvme
 13 nvme 0.72769         osd.13
-15 nvme 0.72769     host sumi3~nvme
 14 nvme 0.72769         osd.14
 -1      7.70544 root default
 -3      2.56848     host sumi1
 12 nvme 0.72769         osd.12
 -5      2.56848     host sumi2
 13 nvme 0.72769         osd.13
 -7      2.56848     host sumi3
 14 nvme 0.72769         osd.14
----

To let a pool distribute its objects only on a specific device class, you first
need to create a ruleset for that class:

[source, bash]
----
ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <class>
----

[frame="none",grid="none", align="left", cols="30%,70%"]
|===
|<rule-name>|name of the rule, to connect with a pool (seen in GUI & CLI)
|<root>|which crush root it should belong to (default Ceph root "default")
|<failure-domain>|at which failure domain the objects should be distributed (usually host)
|<class>|what type of OSD backing store to use (e.g., nvme, ssd, hdd)
|===

Once the rule is in the CRUSH map, you can tell a pool to use the ruleset.

[source, bash]
----
ceph osd pool set <pool-name> crush_rule <rule-name>
----
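
As a concrete, hypothetical example (the rule and pool names are made up), a
pool called `vm-pool` could be pinned to SSD-backed OSDs like this:

[source, bash]
----
# create a replicated rule that only selects OSDs of class "ssd"
ceph osd crush rule create-replicated ssd-only default host ssd
# let the pool use that rule
ceph osd pool set vm-pool crush_rule ssd-only
----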

TIP: If the pool already contains objects, all of these have to be moved
accordingly. Depending on your setup, this may introduce a big performance hit
on your cluster. As an alternative, you can create a new pool and move disks
separately.

Ceph Client
-----------

[thumbnail="screenshot/gui-ceph-log.png"]

You can then configure {pve} to use such pools to store VM or
Container images. Simply use the GUI to add a new `RBD` storage (see
section xref:ceph_rados_block_devices[Ceph RADOS Block Devices (RBD)]).

You also need to copy the keyring to a predefined location for an external Ceph
cluster. If Ceph is installed on the Proxmox nodes itself, then this will be
done automatically.

NOTE: The file name needs to be `<storage_id>` + `.keyring`, where `<storage_id>`
is the expression after 'rbd:' in `/etc/pve/storage.cfg`. In the following
example, it is `my-ceph-storage`:

[source,bash]
----
mkdir /etc/pve/priv/ceph
cp /etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/my-ceph-storage.keyring
----

[[pveceph_fs]]
CephFS
------

Ceph also provides a filesystem, which runs on top of the same object storage
as RADOS block devices do. A **M**eta**d**ata **S**erver (`MDS`) is used to map
the RADOS backed objects to files and directories, allowing Ceph to provide a
POSIX-compliant, replicated filesystem. This allows you to easily have a
clustered, highly available, shared filesystem, if Ceph is already in use. Its
Metadata Servers guarantee that files are evenly distributed over the whole
Ceph cluster. This way, even high load will not overload a single host, which
can be an issue with traditional shared filesystem approaches, for example
`NFS`.

[thumbnail="screenshot/gui-node-ceph-cephfs-panel.png"]

{pve} supports both creating a hyper-converged CephFS and using an existing
xref:storage_cephfs[CephFS as storage] to save backups, ISO files or container
templates.


[[pveceph_fs_mds]]
Metadata Server (MDS)
~~~~~~~~~~~~~~~~~~~~~

CephFS needs at least one Metadata Server to be configured and running, in
order to function. You can simply create one through the {pve} web GUI's
`Node -> CephFS` panel or on the command line with:

----
pveceph mds create
----

Multiple metadata servers can be created in a cluster, but with the default
settings, only one can be active at any time. If an MDS, or its node, becomes
unresponsive (or crashes), another `standby` MDS will get promoted to `active`.
You can speed up the hand-over between the active and a standby MDS by using
the 'hotstandby' parameter option on creation, or if you have already created
it, you may set/add:

----
mds standby replay = true
----

in the respective MDS section of ceph.conf. With this enabled, that specific
MDS will always poll the active one, so that it can take over faster, as it is
in a `warm` state. But naturally, the active polling will cause some additional
performance impact on your system and the active `MDS`.
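
For example, assuming the 'hotstandby' option is available in your `pveceph`
version, a standby MDS that immediately follows the active one could be created
with:

----
pveceph mds create --hotstandby
----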

.Multiple Active MDS

Since Luminous (12.2.x) you can also have multiple active metadata servers
running at once, but this is normally only useful for a high number of parallel
clients, as otherwise the `MDS` is seldom the bottleneck. If you want to set
this up, please refer to the Ceph documentation. footnote:[Configuring multiple
active MDS daemons {cephdocs-url}/cephfs/multimds/]

[[pveceph_fs_create]]
Create CephFS
~~~~~~~~~~~~~

With {pve}'s integration of CephFS, you can easily create a CephFS via the
Web GUI, the CLI or an external API interface. Some prerequisites are required
for this to work:

.Prerequisites for a successful CephFS setup:
- xref:pve_ceph_install[Install Ceph packages] - if this was already done some
time ago, you may want to rerun it on an up-to-date system to ensure that
all CephFS related packages get installed.
- xref:pve_ceph_monitors[Setup Monitors]
- xref:pve_ceph_osds[Setup your OSDs]
- xref:pveceph_fs_mds[Setup at least one MDS]

After this is complete, you can simply create a CephFS through either the Web
GUI's `Node -> CephFS` panel or the command line tool `pveceph`, for example
with:

----
pveceph fs create --pg_num 128 --add-storage
----

This creates a CephFS named 'cephfs', using a pool for its data named
'cephfs_data' with `128` placement groups and a pool for its metadata named
'cephfs_metadata' with one quarter of the data pool's placement groups (`32`).
Check the xref:pve_ceph_pools[{pve} managed Ceph pool chapter] or visit the
Ceph documentation for more information regarding an appropriate placement
group number (`pg_num`) for your setup footnoteref:[placement_groups].
Additionally, the '--add-storage' parameter will add the CephFS to the {pve}
storage configuration after it has been created successfully.

Destroy CephFS
~~~~~~~~~~~~~~

WARNING: Destroying a CephFS will render all of its data unusable. This cannot
be undone!

If you really want to destroy an existing CephFS, you first need to stop or
destroy all metadata servers (`MDS`). You can destroy them either via the Web
GUI or via the command line interface, with:

----
pveceph mds destroy NAME
----
on each {pve} node hosting an MDS daemon.

Then, you can remove (destroy) the CephFS by issuing

----
ceph fs rm NAME --yes-i-really-mean-it
----
on a single node hosting Ceph. After this, you may want to remove the created
data and metadata pools. This can be done either via the Web GUI or via the
CLI, with:

----
pveceph pool destroy NAME
----


Ceph maintenance
----------------

Replace OSDs
~~~~~~~~~~~~

One of the most common maintenance tasks in Ceph is to replace the disk of an
OSD. If a disk is already in a failed state, then you can go ahead and run
through the steps in xref:pve_ceph_osd_destroy[Destroy OSDs]. Ceph will
recreate those copies on the remaining OSDs if possible. This rebalancing will
start as soon as an OSD failure is detected or an OSD was actively stopped.
814 | `size + 1` nodes are available. The reason for this is that the Ceph object | |
815 | balancer xref:pve_ceph_device_classes[CRUSH] defaults to a full node as | |
816 | `failure domain'. | |
081cb761 AA |
817 | |
818 | To replace a still functioning disk, on the GUI go through the steps in | |
819 | xref:pve_ceph_osd_destroy[Destroy OSDs]. The only addition is to wait until | |
820 | the cluster shows 'HEALTH_OK' before stopping the OSD to destroy it. | |
821 | ||
822 | On the command line use the following commands. | |
823 | ---- | |
824 | ceph osd out osd.<id> | |
825 | ---- | |
826 | ||
827 | You can check with the command below if the OSD can be safely removed. | |
828 | ---- | |
829 | ceph osd safe-to-destroy osd.<id> | |
830 | ---- | |
831 | ||
832 | Once the above check tells you that it is save to remove the OSD, you can | |
833 | continue with following commands. | |
834 | ---- | |
835 | systemctl stop ceph-osd@<id>.service | |
836 | pveceph osd destroy <id> | |
837 | ---- | |
838 | ||
839 | Replace the old disk with the new one and use the same procedure as described | |
840 | in xref:pve_ceph_osd_create[Create OSDs]. | |
841 | ||
835f322d TL |
842 | Trim/Discard |
843 | ~~~~~~~~~~~~ | |
081cb761 AA |
844 | It is a good measure to run 'fstrim' (discard) regularly on VMs or containers. |
845 | This releases data blocks that the filesystem isn’t using anymore. It reduces | |
c78cd2b6 AA |
846 | data usage and resource load. Most modern operating systems issue such discard |
847 | commands to their disks regularly. You only need to ensure that the Virtual | |
848 | Machines enable the xref:qm_hard_disk_discard[disk discard option]. | |
081cb761 | 849 | |
c998bdf2 | 850 | [[pveceph_scrub]] |
081cb761 AA |
851 | Scrub & Deep Scrub |
852 | ~~~~~~~~~~~~~~~~~~ | |
853 | Ceph ensures data integrity by 'scrubbing' placement groups. Ceph checks every | |
854 | object in a PG for its health. There are two forms of Scrubbing, daily | |
b16f8c5f TL |
855 | cheap metadata checks and weekly deep data checks. The weekly deep scrub reads |
856 | the objects and uses checksums to ensure data integrity. If a running scrub | |
857 | interferes with business (performance) needs, you can adjust the time when | |
b46a49ed | 858 | scrubs footnote:[Ceph scrubbing {cephdocs-url}/rados/configuration/osd-config-ref/#scrubbing] |
081cb761 AA |
859 | are executed. |
860 | ||
861 | ||
10df14fb TL |
862 | Ceph monitoring and troubleshooting |
863 | ----------------------------------- | |
b07ed53e | 864 | A good start is to continuously monitor the ceph health from the start of |
10df14fb TL |
865 | initial deployment. Either through the ceph tools itself, but also by accessing |
866 | the status through the {pve} link:api-viewer/index.html[API]. | |
6ff32926 | 867 | |
10df14fb TL |
868 | The following ceph commands below can be used to see if the cluster is healthy |
869 | ('HEALTH_OK'), if there are warnings ('HEALTH_WARN'), or even errors | |
870 | ('HEALTH_ERR'). If the cluster is in an unhealthy state the status commands | |
620d6725 | 871 | below will also give you an overview of the current events and actions to take. |
6ff32926 AA |
872 | |
873 | ---- | |
10df14fb TL |
874 | # single time output |
875 | pve# ceph -s | |
876 | # continuously output status changes (press CTRL+C to stop) | |
877 | pve# ceph -w | |
6ff32926 AA |
878 | ---- |
879 | ||
880 | To get a more detailed view, every ceph service has a log file under | |
881 | `/var/log/ceph/` and if there is not enough detail, the log level can be | |
b46a49ed | 882 | adjusted footnote:[Ceph log and debugging {cephdocs-url}/rados/troubleshooting/log-and-debug/]. |
6ff32926 AA |
883 | |
884 | You can find more information about troubleshooting | |
b46a49ed | 885 | footnote:[Ceph troubleshooting {cephdocs-url}/rados/troubleshooting/] |
620d6725 | 886 | a Ceph cluster on the official website. |
6ff32926 AA |
887 | |
888 | ||
0840a663 DM |
889 | ifdef::manvolnum[] |
890 | include::pve-copyright.adoc[] | |
891 | endif::manvolnum[] |