[[chapter_pveceph]]
ifdef::manvolnum[]
pveceph(1)
==========
:pve-toplevel:

NAME
----

pveceph - Manage Ceph Services on Proxmox VE Nodes

SYNOPSIS
--------

include::pveceph.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
Deploy Hyper-Converged Ceph Cluster
===================================
:pve-toplevel:
endif::manvolnum[]

[thumbnail="screenshot/gui-ceph-status.png"]

{pve} unifies your compute and storage systems, that is, you can use the same
physical nodes within a cluster for both computing (processing VMs and
containers) and replicated storage. The traditional silos of compute and
storage resources can be wrapped up into a single hyper-converged appliance.
Separate storage networks (SANs) and connections via network attached storage
(NAS) disappear. With the integration of Ceph, an open source software-defined
storage platform, {pve} has the ability to run and manage Ceph storage directly
on the hypervisor nodes.

Ceph is a distributed object store and file system designed to provide
excellent performance, reliability and scalability.

.Some advantages of Ceph on {pve} are:
- Easy setup and management via CLI and GUI
- Thin provisioning
- Snapshot support
- Self healing
- Scalable to the exabyte level
- Setup pools with different performance and redundancy characteristics
- Data is replicated, making it fault tolerant
- Runs on commodity hardware
- No need for hardware RAID controllers
- Open source

For small to medium-sized deployments, it is possible to install a Ceph server for
RADOS Block Devices (RBD) directly on your {pve} cluster nodes (see
xref:ceph_rados_block_devices[Ceph RADOS Block Devices (RBD)]). Recent
hardware has a lot of CPU power and RAM, so running storage services
and VMs on the same node is possible.

To simplify management, we provide 'pveceph' - a tool for installing and
managing {ceph} services on {pve} nodes.

.Ceph consists of multiple daemons, for use as RBD storage:
- Ceph Monitor (ceph-mon)
- Ceph Manager (ceph-mgr)
- Ceph OSD (ceph-osd; Object Storage Daemon)

TIP: We highly recommend getting familiar with Ceph
footnote:[Ceph intro {cephdocs-url}/start/intro/],
its architecture
footnote:[Ceph architecture {cephdocs-url}/architecture/]
and vocabulary
footnote:[Ceph glossary {cephdocs-url}/glossary].


Precondition
------------

To build a hyper-converged Proxmox + Ceph Cluster, you need at least
three, preferably identical, servers for the setup.

Check also the recommendations from
{cephdocs-url}/start/hardware-recommendations/[Ceph's website].

.CPU
A high CPU core frequency reduces latency and should be preferred. As a simple
rule of thumb, you should assign a CPU core (or thread) to each Ceph service to
provide enough resources for stable and durable Ceph performance.

.Memory
Especially in a hyper-converged setup, the memory consumption needs to be
carefully monitored. In addition to the predicted memory usage of virtual
machines and containers, you must also account for having enough memory
available for Ceph to provide excellent and stable performance.

As a rule of thumb, an OSD will use roughly **1 GiB of memory for every 1 TiB
of data**, especially during recovery, rebalancing or backfilling.

The daemon itself will use additional memory. The Bluestore backend of the
daemon requires by default **3-5 GiB of memory** (adjustable). In contrast, the
legacy Filestore backend uses the OS page cache, and its memory consumption is
generally related to the number of PGs of an OSD daemon.

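If you need to adjust the BlueStore memory budget per OSD, the
`osd_memory_target` option can be set cluster-wide. The value below (6 GiB) is
only an illustrative assumption; pick one that fits your hardware:

[source,bash]
----
# example only: raise the BlueStore memory target to 6 GiB per OSD
ceph config set osd osd_memory_target 6442450944
----
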
.Network
We recommend a network bandwidth of at least 10 Gbps, used exclusively for
Ceph. A meshed network setup
footnote:[Full Mesh Network for Ceph {webwiki-url}Full_Mesh_Network_for_Ceph_Server]
is also an option if there are no 10 GbE switches available.

The volume of traffic, especially during recovery, will interfere with other
services on the same network and may even break the {pve} cluster stack.

Furthermore, you should estimate your bandwidth needs. While one HDD might not
saturate a 1 Gbps link, multiple HDD OSDs per node can, and modern NVMe SSDs
will quickly saturate 10 Gbps of bandwidth. Deploying a network capable of even
more bandwidth will ensure that this isn't your bottleneck and won't be anytime
soon. 25, 40 or even 100 Gbps are possible.

.Disks
When planning the size of your Ceph cluster, it is important to take the
recovery time into consideration. Especially with small clusters, recovery
might take long. It is recommended that you use SSDs instead of HDDs in small
setups to reduce recovery time, minimizing the likelihood of a subsequent
failure event during recovery.

In general, SSDs will provide more IOPS than spinning disks. With this in mind,
in addition to the higher cost, it may make sense to implement a
xref:pve_ceph_device_classes[class based] separation of pools. Another way to
speed up OSDs is to use a faster disk as a journal or
DB/**W**rite-**A**head-**L**og device, see xref:pve_ceph_osds[creating Ceph
OSDs]. If a faster disk is used for multiple OSDs, a proper balance between the
OSD count and the WAL / DB (or journal) disk must be selected, otherwise the
faster disk becomes the bottleneck for all linked OSDs.

Aside from the disk type, Ceph performs best with evenly sized and evenly
distributed disks per node. For example, 4 x 500 GB disks within each node are
better than a mixed setup with a single 1 TB and three 250 GB disks.

You also need to balance OSD count and single OSD capacity. More capacity
allows you to increase storage density, but it also means that a single OSD
failure forces Ceph to recover more data at once.

.Avoid RAID
As Ceph handles data object redundancy and multiple parallel writes to disks
(OSDs) on its own, using a RAID controller normally doesn’t improve
performance or availability. On the contrary, Ceph is designed to handle whole
disks on its own, without any abstraction in between. RAID controllers are not
designed for the Ceph workload and may complicate things and sometimes even
reduce performance, as their write and caching algorithms may interfere with
the ones from Ceph.

WARNING: Avoid RAID controllers. Use a host bus adapter (HBA) instead.

NOTE: The above recommendations should be seen as rough guidance for choosing
hardware. Therefore, it is still essential to adapt them to your specific needs.
You should test your setup and monitor health and performance continuously.

[[pve_ceph_install_wizard]]
Initial Ceph Installation & Configuration
-----------------------------------------

[thumbnail="screenshot/gui-node-ceph-install.png"]

With {pve} you have the benefit of an easy to use installation wizard
for Ceph. Click on one of your cluster nodes and navigate to the Ceph
section in the menu tree. If Ceph is not already installed, you will see a
prompt offering to do so.

The wizard is divided into multiple sections, each of which needs to
finish successfully, in order to use Ceph. After starting the installation,
the wizard will download and install all the required packages from {pve}'s Ceph
repository.

After finishing the first step, you will need to create a configuration.
This step is only needed once per cluster, as this configuration is distributed
automatically to all remaining cluster members through {pve}'s clustered
xref:chapter_pmxcfs[configuration file system (pmxcfs)].

The configuration step includes the following settings:

* *Public Network:* You can set up a dedicated network for Ceph. This
setting is required. Separating your Ceph traffic is highly recommended.
Otherwise, it could cause trouble with other latency-dependent services,
for example cluster communication, and may decrease Ceph's performance.

[thumbnail="screenshot/gui-node-ceph-install-wizard-step2.png"]

* *Cluster Network:* As an optional step, you can go even further and
separate the xref:pve_ceph_osds[OSD] replication & heartbeat traffic
as well. This will relieve the public network and could lead to
significant performance improvements, especially in large clusters.

You have two more options which are considered advanced and therefore
should only be changed if you know what you are doing.

* *Number of replicas*: Defines how often an object is replicated.
* *Minimum replicas*: Defines the minimum number of required replicas
for I/O to be marked as complete.

Additionally, you need to choose your first monitor node. This step is required.

That's it. You should now see a success page as the last step, with further
instructions on how to proceed. Your system is now ready to start using Ceph.
To get started, you will need to create some additional xref:pve_ceph_monitors[monitors],
xref:pve_ceph_osds[OSDs] and at least one xref:pve_ceph_pools[pool].

The rest of this chapter will guide you through getting the most out of
your {pve} based Ceph setup. This includes the aforementioned tips and
more, such as xref:pveceph_fs[CephFS], which is a helpful addition to your
new Ceph cluster.

[[pve_ceph_install]]
Installation of Ceph Packages
-----------------------------
Use the {pve} Ceph installation wizard (recommended) or run the following
command on each node:

[source,bash]
----
pveceph install
----

This sets up an `apt` package repository in
`/etc/apt/sources.list.d/ceph.list` and installs the required software.


Create initial Ceph configuration
---------------------------------

[thumbnail="screenshot/gui-ceph-config.png"]

Use the {pve} Ceph installation wizard (recommended) or run the
following command on one node:

[source,bash]
----
pveceph init --network 10.10.10.0/24
----

This creates an initial configuration at `/etc/pve/ceph.conf` with a
dedicated network for Ceph. This file is automatically distributed to
all {pve} nodes, using xref:chapter_pmxcfs[pmxcfs]. The command also
creates a symbolic link at `/etc/ceph/ceph.conf`, which points to that file.
Thus, you can simply run Ceph commands without the need to specify a
configuration file.
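
For illustration only, the generated `/etc/pve/ceph.conf` will then contain the
chosen network in its `[global]` section, roughly like this (the exact contents
depend on your setup and Ceph version):

----
[global]
     public_network = 10.10.10.0/24
----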


[[pve_ceph_monitors]]
Ceph Monitor
------------
The Ceph Monitor (MON)
footnote:[Ceph Monitor {cephdocs-url}/start/intro/]
maintains a master copy of the cluster map. For high availability, you need at
least 3 monitors. One monitor will already be installed if you
used the installation wizard. You won't need more than 3 monitors, as long
as your cluster is small to medium-sized. Only really large clusters will
require more than this.


[[pveceph_create_mon]]
Create Monitors
~~~~~~~~~~~~~~~

[thumbnail="screenshot/gui-ceph-monitor.png"]

On each node where you want to place a monitor (three monitors are recommended),
create one by using the 'Ceph -> Monitor' tab in the GUI or run:


[source,bash]
----
pveceph mon create
----

[[pveceph_destroy_mon]]
Destroy Monitors
~~~~~~~~~~~~~~~~

To remove a Ceph Monitor via the GUI, first select a node in the tree view and
go to the **Ceph -> Monitor** panel. Select the MON and click the **Destroy**
button.

To remove a Ceph Monitor via the CLI, first connect to the node on which the MON
is running. Then execute the following command:
[source,bash]
----
pveceph mon destroy
----

NOTE: At least three Monitors are needed for quorum.

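To verify that the remaining monitors still form a healthy quorum, you can, for
example, check the monitor status (both commands are standard Ceph tooling):

[source,bash]
----
# brief monitor summary including quorum membership
ceph mon stat
# detailed quorum information
ceph quorum_status --format json-pretty
----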

[[pve_ceph_manager]]
Ceph Manager
------------

The Manager daemon runs alongside the monitors. It provides an interface to
monitor the cluster. Since the release of Ceph Luminous, at least one ceph-mgr
footnote:[Ceph Manager {cephdocs-url}/mgr/] daemon is
required.

[[pveceph_create_mgr]]
Create Manager
~~~~~~~~~~~~~~

Multiple Managers can be installed, but only one Manager is active at any given
time.

[source,bash]
----
pveceph mgr create
----

NOTE: It is recommended to install the Ceph Manager on the monitor nodes. For
high availability, install more than one Manager.


[[pveceph_destroy_mgr]]
Destroy Manager
~~~~~~~~~~~~~~~

To remove a Ceph Manager via the GUI, first select a node in the tree view and
go to the **Ceph -> Monitor** panel. Select the Manager and click the
**Destroy** button.

To remove a Ceph Manager via the CLI, first connect to the node on which the
Manager is running. Then execute the following command:
[source,bash]
----
pveceph mgr destroy
----

NOTE: While a Manager is not a hard dependency, it is crucial for a Ceph cluster,
as it handles important features like PG autoscaling, device health monitoring,
telemetry and more.

[[pve_ceph_osds]]
Ceph OSDs
---------
Ceph **O**bject **S**torage **D**aemons store objects for Ceph over the
network. It is recommended to use one OSD per physical disk.

NOTE: By default, an object is 4 MiB in size.

[[pve_ceph_osd_create]]
Create OSDs
~~~~~~~~~~~

[thumbnail="screenshot/gui-ceph-osd-status.png"]

You can create an OSD either via the {pve} web-interface or via the CLI using
`pveceph`. For example:

[source,bash]
----
pveceph osd create /dev/sd[X]
----

TIP: We recommend a Ceph cluster with at least three nodes and at least 12
OSDs, evenly distributed among the nodes.

If the disk was in use before (for example, for ZFS or as an OSD), you first need
to zap all traces of that usage. To remove the partition table, boot sector and
any other OSD leftover, you can use the following command:

[source,bash]
----
ceph-volume lvm zap /dev/sd[X] --destroy
----

WARNING: The above command will destroy all data on the disk!

.Ceph Bluestore

Starting with the Ceph Kraken release, a new Ceph OSD storage type was
introduced called Bluestore
footnote:[Ceph Bluestore https://ceph.com/community/new-luminous-bluestore/].
This is the default when creating OSDs since Ceph Luminous.

[source,bash]
----
pveceph osd create /dev/sd[X]
----

.Block.db and block.wal

If you want to use a separate DB/WAL device for your OSDs, you can specify it
through the '-db_dev' and '-wal_dev' options. The WAL is placed with the DB, if
not specified separately.

[source,bash]
----
pveceph osd create /dev/sd[X] -db_dev /dev/sd[Y] -wal_dev /dev/sd[Z]
----

You can directly choose the size of those with the '-db_size' and '-wal_size'
parameters respectively. If they are not given, the following values (in order)
will be used:

* bluestore_block_{db,wal}_size from Ceph configuration...
** ... database, section 'osd'
** ... database, section 'global'
** ... file, section 'osd'
** ... file, section 'global'
* 10% (DB)/1% (WAL) of OSD size

NOTE: The DB stores BlueStore’s internal metadata, and the WAL is BlueStore’s
internal journal or write-ahead log. It is recommended to use a fast SSD or
NVRAM for better performance.

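If you prefer to set these defaults via the Ceph configuration database instead
of passing sizes on every OSD creation, you can, for example, set the DB size
there (the 60 GiB value below is only an illustrative assumption):

[source,bash]
----
# example only: default block.db size of 60 GiB for newly created OSDs
ceph config set osd bluestore_block_db_size 64424509440
----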

.Ceph Filestore

Before Ceph Luminous, Filestore was used as the default storage type for Ceph OSDs.
Starting with Ceph Nautilus, {pve} does not support creating such OSDs with
'pveceph' anymore. If you still want to create filestore OSDs, use
'ceph-volume' directly.

[source,bash]
----
ceph-volume lvm create --filestore --data /dev/sd[X] --journal /dev/sd[Y]
----

[[pve_ceph_osd_destroy]]
Destroy OSDs
~~~~~~~~~~~~

To remove an OSD via the GUI, first select a {pve} node in the tree view and go
to the **Ceph -> OSD** panel. Then select the OSD to destroy and click the **OUT**
button. Once the OSD status has changed from `in` to `out`, click the **STOP**
button. Finally, after the status has changed from `up` to `down`, select
**Destroy** from the `More` drop-down menu.

To remove an OSD via the CLI, run the following commands.

[source,bash]
----
ceph osd out <ID>
systemctl stop ceph-osd@<ID>.service
----

NOTE: The first command instructs Ceph not to include the OSD in the data
distribution. The second command stops the OSD service. Up until this point, no
data is lost.

The following command destroys the OSD. Specify the '-cleanup' option to
additionally destroy the partition table.

[source,bash]
----
pveceph osd destroy <ID>
----

WARNING: The above command will destroy all data on the disk!


[[pve_ceph_pools]]
Ceph Pools
----------
A pool is a logical group for storing objects. It holds a collection of objects,
known as **P**lacement **G**roups (`PG`, `pg_num`).


Create and Edit Pools
~~~~~~~~~~~~~~~~~~~~~

You can create pools from the command line or the web-interface of any {pve}
host under **Ceph -> Pools**.

[thumbnail="screenshot/gui-ceph-pools.png"]

When no options are given, we set a default of **128 PGs**, a **size of 3
replicas** and a **min_size of 2 replicas**, to ensure no data loss occurs if
any OSD fails.

WARNING: **Do not set a min_size of 1**. A replicated pool with min_size of 1
allows I/O on an object when it has only 1 replica, which could lead to data
loss, incomplete PGs or unfound objects.

It is advised that you calculate the PG number based on your setup. You can
find the formula and the PG calculator footnote:[PG calculator
https://ceph.com/pgcalc/] online. From Ceph Nautilus onward, you can change the
number of PGs footnoteref:[placement_groups,Placement Groups
{cephdocs-url}/rados/operations/placement-groups/] after the setup.
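
As a rough rule of thumb (an assumption to be verified against the calculator),
the PG count per pool is often estimated as `(number of OSDs * 100) / size`,
rounded to the nearest power of two. For example, with 12 OSDs and a replica
size of 3, this gives 400, so you would typically pick 256 or 512 PGs.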

In addition to manual adjustment, the PG autoscaler
footnoteref:[autoscaler,Automated Scaling
{cephdocs-url}/rados/operations/placement-groups/#automated-scaling] can
automatically scale the PG count for a pool in the background.

.Example for creating a pool over the CLI
[source,bash]
----
pveceph pool create <name> --add_storages
----

TIP: If you would also like to automatically define a storage for your
pool, keep the `Add as Storage' checkbox checked in the web-interface, or use the
command line option '--add_storages' at pool creation.

.Base Options
Name:: The name of the pool. This must be unique and can't be changed afterwards.
Size:: The number of replicas per object. Ceph always tries to have this many
copies of an object. Default: `3`.
PG Autoscale Mode:: The automatic PG scaling mode footnoteref:[autoscaler] of
the pool. If set to `warn`, it produces a warning message when a pool
has a non-optimal PG count. Default: `warn`.
Add as Storage:: Configure a VM or container storage using the new pool.
Default: `true` (only visible on creation).

.Advanced Options
Min. Size:: The minimum number of replicas per object. Ceph will reject I/O on
the pool if a PG has fewer than this many replicas. Default: `2`.
Crush Rule:: The rule to use for mapping object placement in the cluster. These
rules define how data is placed within the cluster. See
xref:pve_ceph_device_classes[Ceph CRUSH & device classes] for information on
device-based rules.
# of PGs:: The number of placement groups footnoteref:[placement_groups] that
the pool should have at the beginning. Default: `128`.
Target Size Ratio:: The ratio of data that is expected in the pool. The PG
autoscaler uses the ratio relative to other ratio sets. It takes precedence
over the `target size` if both are set.
Target Size:: The estimated amount of data expected in the pool. The PG
autoscaler uses this size to estimate the optimal PG count.
Min. # of PGs:: The minimum number of placement groups. This setting is used to
fine-tune the lower bound of the PG count for that pool. The PG autoscaler
will not merge PGs below this threshold.

Further information on Ceph pool handling can be found in the Ceph pool
operation footnote:[Ceph pool operation
{cephdocs-url}/rados/operations/pools/]
manual.


Destroy Pools
~~~~~~~~~~~~~

To destroy a pool via the GUI, select a node in the tree view and go to the
**Ceph -> Pools** panel. Select the pool to destroy and click the **Destroy**
button. To confirm the destruction of the pool, you need to enter the pool name.

Run the following command to destroy a pool. Specify the '-remove_storages'
option to also remove the associated storage.

[source,bash]
----
pveceph pool destroy <name>
----

NOTE: Pool deletion runs in the background and can take some time.
You will notice the data usage in the cluster decreasing throughout this
process.


PG Autoscaler
~~~~~~~~~~~~~

The PG autoscaler allows the cluster to consider the amount of (expected) data
stored in each pool and to choose the appropriate pg_num values automatically.

You may need to activate the PG autoscaler module before adjustments can take
effect.

[source,bash]
----
ceph mgr module enable pg_autoscaler
----

The autoscaler is configured on a per pool basis and has the following modes:

[horizontal]
warn:: A health warning is issued if the suggested `pg_num` value differs too
much from the current value.
on:: The `pg_num` is adjusted automatically with no need for any manual
interaction.
off:: No automatic `pg_num` adjustments are made, and no warning will be issued
if the PG count is not optimal.

The scaling factor can be adjusted to facilitate future data storage with the
`target_size`, `target_size_ratio` and the `pg_num_min` options.
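
For example, to enable the autoscaler for a (hypothetical) pool named `vm-pool`
and tell it that this pool is expected to hold roughly 40% of the cluster's
data, you could run:

[source,bash]
----
ceph osd pool set vm-pool pg_autoscale_mode on
ceph osd pool set vm-pool target_size_ratio 0.4
----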

WARNING: By default, the autoscaler considers tuning the PG count of a pool if
it is off by a factor of 3. This will lead to a considerable shift in data
placement and might introduce a high load on the cluster.

You can find a more in-depth introduction to the PG autoscaler on Ceph's Blog -
https://ceph.io/rados/new-in-nautilus-pg-merging-and-autotuning/[New in
Nautilus: PG merging and autotuning].


[[pve_ceph_device_classes]]
Ceph CRUSH & device classes
---------------------------
The footnote:[CRUSH
https://ceph.com/wp-content/uploads/2016/08/weil-crush-sc06.pdf] (**C**ontrolled
**R**eplication **U**nder **S**calable **H**ashing) algorithm is at the
foundation of Ceph.

CRUSH calculates where to store and retrieve data from. This has the
advantage that no central indexing service is needed. CRUSH works using a map of
OSDs, buckets (device locations) and rulesets (data replication) for pools.

NOTE: Further information can be found in the Ceph documentation, under the
section CRUSH map footnote:[CRUSH map {cephdocs-url}/rados/operations/crush-map/].

This map can be altered to reflect different replication hierarchies. The object
replicas can be separated (e.g., failure domains), while maintaining the desired
distribution.

A common configuration is to use different classes of disks for different Ceph
pools. For this reason, Ceph introduced device classes with Luminous, to
accommodate the need for easy ruleset generation.

The device classes can be seen in the 'ceph osd tree' output. These classes
represent their own root bucket, which can be seen with the below command.

[source, bash]
----
ceph osd crush tree --show-shadow
----

Example output from the above command:

[source, bash]
----
ID  CLASS WEIGHT  TYPE NAME
-16 nvme 2.18307 root default~nvme
-13 nvme 0.72769     host sumi1~nvme
 12 nvme 0.72769         osd.12
-14 nvme 0.72769     host sumi2~nvme
 13 nvme 0.72769         osd.13
-15 nvme 0.72769     host sumi3~nvme
 14 nvme 0.72769         osd.14
 -1      7.70544 root default
 -3      2.56848     host sumi1
 12 nvme 0.72769         osd.12
 -5      2.56848     host sumi2
 13 nvme 0.72769         osd.13
 -7      2.56848     host sumi3
 14 nvme 0.72769         osd.14
----

To instruct a pool to only distribute objects on a specific device class, you
first need to create a ruleset for the device class:

[source, bash]
----
ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <class>
----

[frame="none",grid="none", align="left", cols="30%,70%"]
|===
|<rule-name>|name of the rule, to connect with a pool (seen in GUI & CLI)
|<root>|which crush root it should belong to (default Ceph root "default")
|<failure-domain>|at which failure-domain the objects should be distributed (usually host)
|<class>|what type of OSD backing store to use (e.g., nvme, ssd, hdd)
|===

Once the rule is in the CRUSH map, you can tell a pool to use the ruleset.

[source, bash]
----
ceph osd pool set <pool-name> crush_rule <rule-name>
----
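
For example, assuming the NVMe-backed OSDs from the example output above and a
hypothetical pool named `fast-pool`, the two steps could look like this:

[source, bash]
----
# create a replicated rule that only selects OSDs of device class "nvme"
ceph osd crush rule create-replicated nvme-only default host nvme
# assign the rule to the (hypothetical) pool
ceph osd pool set fast-pool crush_rule nvme-only
----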

TIP: If the pool already contains objects, these must be moved accordingly.
Depending on your setup, this may introduce a big performance impact on your
cluster. As an alternative, you can create a new pool and move disks separately.


Ceph Client
-----------

[thumbnail="screenshot/gui-ceph-log.png"]

Following the setup from the previous sections, you can configure {pve} to use
such pools to store VM and Container images. Simply use the GUI to add a new
`RBD` storage (see section xref:ceph_rados_block_devices[Ceph RADOS Block
Devices (RBD)]).

You also need to copy the keyring to a predefined location for an external Ceph
cluster. If Ceph is installed on the {pve} nodes themselves, then this will be
done automatically.

NOTE: The filename needs to be `<storage_id>` + `.keyring`, where `<storage_id>` is
the expression after 'rbd:' in `/etc/pve/storage.cfg`. In the following example,
`my-ceph-storage` is the `<storage_id>`:

[source,bash]
----
mkdir /etc/pve/priv/ceph
cp /etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/my-ceph-storage.keyring
----

[[pveceph_fs]]
CephFS
------

Ceph also provides a filesystem, which runs on top of the same object storage as
RADOS block devices do. A **M**eta**d**ata **S**erver (`MDS`) is used to map the
RADOS backed objects to files and directories, allowing Ceph to provide a
POSIX-compliant, replicated filesystem. This allows you to easily configure a
clustered, highly available, shared filesystem. Ceph's Metadata Servers
guarantee that files are evenly distributed over the entire Ceph cluster. As a
result, even cases of high load will not overwhelm a single host, which can be
an issue with traditional shared filesystem approaches, for example `NFS`.

[thumbnail="screenshot/gui-node-ceph-cephfs-panel.png"]

{pve} supports both creating a hyper-converged CephFS and using an existing
xref:storage_cephfs[CephFS as storage] to save backups, ISO files, and container
templates.


[[pveceph_fs_mds]]
Metadata Server (MDS)
~~~~~~~~~~~~~~~~~~~~~

CephFS needs at least one Metadata Server to be configured and running, in order
to function. You can create an MDS through the {pve} web GUI's `Node
-> CephFS` panel or from the command line with:

----
pveceph mds create
----

Multiple metadata servers can be created in a cluster, but with the default
settings, only one can be active at a time. If an MDS or its node becomes
unresponsive (or crashes), another `standby` MDS will get promoted to `active`.
You can speed up the handover between the active and standby MDS by using
the 'hotstandby' parameter on creation, or, if you have already created it,
you may set/add:

----
mds standby replay = true
----

in the respective MDS section of `/etc/pve/ceph.conf`. With this enabled, the
specified MDS will remain in a `warm` state, polling the active one, so that it
can take over faster in case of any issues.

NOTE: This active polling will have an additional performance impact on your
system and the active `MDS`.

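As a sketch, assuming you want a newly created MDS to act as a hot standby right
away, the creation command can be invoked with the corresponding option (check
`pveceph mds create --help` for the exact syntax of your version):

----
pveceph mds create --hotstandby
----
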
.Multiple Active MDS

Since Luminous (12.2.x) you can have multiple active metadata servers
running at once, but this is normally only useful if you have a large number of
clients running in parallel. Otherwise the `MDS` is rarely the bottleneck in a
system. If you want to set this up, please refer to the Ceph documentation.
footnote:[Configuring multiple active MDS daemons
{cephdocs-url}/cephfs/multimds/]

[[pveceph_fs_create]]
Create CephFS
~~~~~~~~~~~~~

With {pve}'s integration of CephFS, you can easily create a CephFS using the
web interface, CLI or an external API interface. Some prerequisites are required
for this to work:

.Prerequisites for a successful CephFS setup:
- xref:pve_ceph_install[Install Ceph packages] - if this was already done some
time ago, you may want to rerun it on an up-to-date system to
ensure that all CephFS related packages get installed.
- xref:pve_ceph_monitors[Setup Monitors]
- xref:pve_ceph_osds[Setup your OSDs]
- xref:pveceph_fs_mds[Setup at least one MDS]

After this is complete, you can simply create a CephFS through
either the Web GUI's `Node -> CephFS` panel or the command line tool `pveceph`,
for example:

----
pveceph fs create --pg_num 128 --add-storage
----

This creates a CephFS named 'cephfs', using a pool for its data named
'cephfs_data' with '128' placement groups and a pool for its metadata named
'cephfs_metadata' with one quarter of the data pool's placement groups (`32`).
Check the xref:pve_ceph_pools[{pve} managed Ceph pool chapter] or visit the
Ceph documentation for more information regarding an appropriate placement group
number (`pg_num`) for your setup footnoteref:[placement_groups].
Additionally, the '--add-storage' parameter will add the CephFS to the {pve}
storage configuration after it has been created successfully.

Destroy CephFS
~~~~~~~~~~~~~~

WARNING: Destroying a CephFS will render all of its data unusable. This cannot be
undone!

If you really want to destroy an existing CephFS, you first need to stop or
destroy all metadata servers (`MDS`). You can destroy them either via the web
interface or via the command line interface, by issuing

----
pveceph mds destroy NAME
----
on each {pve} node hosting an MDS daemon.

Then, you can remove (destroy) the CephFS by issuing

----
ceph fs rm NAME --yes-i-really-mean-it
----
on a single node hosting Ceph. After this, you may want to remove the created
data and metadata pools; this can be done either over the Web GUI or the CLI
with:

----
pveceph pool destroy NAME
----


Ceph maintenance
----------------

Replace OSDs
~~~~~~~~~~~~

One of the most common maintenance tasks in Ceph is to replace the disk of an
OSD. If a disk is already in a failed state, then you can go ahead and run
through the steps in xref:pve_ceph_osd_destroy[Destroy OSDs]. Ceph will recreate
the lost copies on the remaining OSDs if possible. This rebalancing will start as
soon as an OSD failure is detected or an OSD was actively stopped.

NOTE: With the default size/min_size (3/2) of a pool, recovery only starts when
`size + 1` nodes are available. The reason for this is that the Ceph object
balancer xref:pve_ceph_device_classes[CRUSH] defaults to a full node as
`failure domain'.

To replace a functioning disk from the GUI, go through the steps in
xref:pve_ceph_osd_destroy[Destroy OSDs]. The only addition is to wait until
the cluster shows 'HEALTH_OK' before stopping the OSD to destroy it.

On the command line, use the following commands:

----
ceph osd out osd.<id>
----

You can check with the command below if the OSD can be safely removed.

----
ceph osd safe-to-destroy osd.<id>
----

Once the above check tells you that it is safe to remove the OSD, you can
continue with the following commands:

----
systemctl stop ceph-osd@<id>.service
pveceph osd destroy <id>
----

Replace the old disk with the new one and use the same procedure as described
in xref:pve_ceph_osd_create[Create OSDs].

Trim/Discard
~~~~~~~~~~~~

It is good practice to run 'fstrim' (discard) regularly on VMs and containers.
This releases data blocks that the filesystem isn’t using anymore. It reduces
data usage and resource load. Most modern operating systems issue such discard
commands to their disks regularly. You only need to ensure that the virtual
machines enable the xref:qm_hard_disk_discard[disk discard option].

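For example, inside a Linux guest you can trigger a manual trim of all mounted
filesystems, or simply rely on the distribution's periodic timer (both commands
are standard Linux tooling and run inside the guest, not on the {pve} host):

----
fstrim -av
systemctl enable --now fstrim.timer
----
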
[[pveceph_scrub]]
Scrub & Deep Scrub
~~~~~~~~~~~~~~~~~~

Ceph ensures data integrity by 'scrubbing' placement groups. Ceph checks every
object in a PG for its health. There are two forms of scrubbing: daily, cheap
metadata checks and weekly, deep data checks. The weekly deep scrub reads
the objects and uses checksums to ensure data integrity. If a running scrub
interferes with business (performance) needs, you can adjust the time when
scrubs footnote:[Ceph scrubbing {cephdocs-url}/rados/configuration/osd-config-ref/#scrubbing]
are executed.

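As a minimal sketch, assuming you want to restrict regular scrubbing to the
night hours (22:00 to 06:00), you could set the corresponding OSD options:

----
ceph config set osd osd_scrub_begin_hour 22
ceph config set osd osd_scrub_end_hour 6
----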

Ceph Monitoring and Troubleshooting
-----------------------------------

It is important to continuously monitor the health of a Ceph deployment from the
beginning, either by using the Ceph tools or by accessing
the status through the {pve} link:api-viewer/index.html[API].

The following Ceph commands can be used to see if the cluster is healthy
('HEALTH_OK'), if there are warnings ('HEALTH_WARN'), or even errors
('HEALTH_ERR'). If the cluster is in an unhealthy state, the status commands
below will also give you an overview of the current events and actions to take.

----
# single time output
pve# ceph -s
# continuously output status changes (press CTRL+C to stop)
pve# ceph -w
----

To get a more detailed view, every Ceph service has a log file under
`/var/log/ceph/`. If more detail is required, the log level can be
adjusted footnote:[Ceph log and debugging {cephdocs-url}/rados/troubleshooting/log-and-debug/].

You can find more information about troubleshooting
footnote:[Ceph troubleshooting {cephdocs-url}/rados/troubleshooting/]
a Ceph cluster on the official website.


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]