[[chapter_pveceph]]
ifdef::manvolnum[]
pveceph(1)
==========
:pve-toplevel:

NAME
----

pveceph - Manage Ceph Services on Proxmox VE Nodes

SYNOPSIS
--------

include::pveceph.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
Deploy Hyper-Converged Ceph Cluster
===================================
:pve-toplevel:

Introduction
------------
endif::manvolnum[]

[thumbnail="screenshot/gui-ceph-status-dashboard.png"]

{pve} unifies your compute and storage systems, that is, you can use the same
physical nodes within a cluster for both computing (processing VMs and
containers) and replicated storage. The traditional silos of compute and
storage resources can be wrapped up into a single hyper-converged appliance.
Separate storage networks (SANs) and connections via network attached storage
(NAS) disappear. With the integration of Ceph, an open source software-defined
storage platform, {pve} can run and manage Ceph storage directly on the
hypervisor nodes.

Ceph is a distributed object store and file system designed to provide
excellent performance, reliability and scalability.

.Some advantages of Ceph on {pve} are:
- Easy setup and management via CLI and GUI
- Thin provisioning
- Snapshot support
- Self healing
- Scalable to the exabyte level
- Provides block, file system, and object storage
- Setup pools with different performance and redundancy characteristics
- Data is replicated, making it fault tolerant
- Runs on commodity hardware
- No need for hardware RAID controllers
- Open source

For small to medium-sized deployments, it is possible to install a Ceph server
for using RADOS Block Devices (RBD) or CephFS directly on your {pve} cluster
nodes (see xref:ceph_rados_block_devices[Ceph RADOS Block Devices (RBD)]).
Recent hardware has plenty of CPU power and RAM, so running storage services
and virtual guests on the same node is possible.

To simplify management, {pve} provides native integration to install and
manage {ceph} services on {pve} nodes, either via the built-in web interface or
using the 'pveceph' command line tool.


Terminology
-----------

// TODO: extend and also describe basic architecture here.
.Ceph consists of multiple Daemons, for use as an RBD storage:
- Ceph Monitor (ceph-mon, or MON)
- Ceph Manager (ceph-mgr, or MGR)
- Ceph Metadata Service (ceph-mds, or MDS)
- Ceph Object Storage Daemon (ceph-osd, or OSD)

TIP: We highly recommend getting familiar with Ceph
footnote:[Ceph intro {cephdocs-url}/start/intro/],
its architecture
footnote:[Ceph architecture {cephdocs-url}/architecture/]
and vocabulary
footnote:[Ceph glossary {cephdocs-url}/glossary].


Recommendations for a Healthy Ceph Cluster
------------------------------------------

To build a hyper-converged Proxmox + Ceph cluster, you must use at least three
(preferably identical) servers for the setup.

Also check the recommendations from
{cephdocs-url}/start/hardware-recommendations/[Ceph's website].

NOTE: The recommendations below should be seen as rough guidance for choosing
hardware. It is still essential to adapt them to your specific needs. You
should test your setup and monitor health and performance continuously.

.CPU
Ceph services can be classified into two categories:

* Intensive CPU usage, benefiting from high CPU base frequencies and multiple
cores. Members of that category are:
** Object Storage Daemon (OSD) services
** Meta Data Service (MDS) used for CephFS
* Moderate CPU usage, not needing multiple CPU cores. These are:
** Monitor (MON) services
** Manager (MGR) services

As a simple rule of thumb, you should assign at least one CPU core (or thread)
to each Ceph service, to provide the minimum resources required for stable and
durable Ceph performance.

For example, if you plan to run a Ceph monitor, a Ceph manager and 6 Ceph OSD
services on a node, you should reserve 8 CPU cores purely for Ceph when
targeting basic and stable performance.

Note that an OSD's CPU usage depends mostly on the performance of its disk.
The higher the possible IOPS (I/O operations per second) of a disk, the more
CPU an OSD service can utilize. For modern enterprise SSDs, such as NVMe
drives that can permanently sustain an IOPS load of over 100,000 with
sub-millisecond latency, each OSD can use multiple CPU threads; four to six
utilized CPU threads per NVMe-backed OSD is likely for such very
high-performance disks.

.Memory
Especially in a hyper-converged setup, the memory consumption needs to be
carefully planned out and monitored. In addition to the predicted memory usage
of virtual machines and containers, you must also account for having enough
memory available for Ceph to provide excellent and stable performance.

As a rule of thumb, an OSD will use roughly **1 GiB of memory for each 1 TiB
of data**. While the usage might be less under normal conditions, it will use
the most during critical operations like recovery, re-balancing or
backfilling. That means you should avoid maxing out your available memory
during normal operation, and rather leave some headroom to cope with outages.

The OSD service itself will use additional memory. The Ceph BlueStore backend
of the daemon requires **3-5 GiB of memory** by default (adjustable).
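If you need to lower or raise an OSD's memory consumption, the BlueStore
memory budget can be tuned through the `osd_memory_target` option. A minimal
sketch, assuming you want to grant 6 GiB per OSD (the value is given in
bytes and is an example, not a recommendation):

[source,bash]
----
# sketch: raise the per-OSD memory target to 6 GiB (value in bytes)
ceph config set osd osd_memory_target 6442450944
----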

.Network
We recommend a network bandwidth of at least 10 Gbps, or more, to be used
exclusively for Ceph traffic. A meshed network setup
footnote:[Full Mesh Network for Ceph {webwiki-url}Full_Mesh_Network_for_Ceph_Server]
is also an option for three to five node clusters, if there are no 10+ Gbps
switches available.

[IMPORTANT]
The volume of traffic, especially during recovery, will interfere with other
services on the same network. In particular, the latency-sensitive {pve}
corosync cluster stack can be affected, resulting in possible loss of cluster
quorum. Moving the Ceph traffic to dedicated and physically separated networks
will avoid such interference, not only for corosync, but also for the
networking services provided by any virtual guests.

For estimating your bandwidth needs, you need to take the performance of your
disks into account. While a single HDD might not saturate a 1 Gbps link,
multiple HDD OSDs per node can already saturate 10 Gbps.
If modern NVMe-attached SSDs are used, a single one can already saturate 10 Gbps
of bandwidth, or more. For such high-performance setups we recommend at least
25 Gbps, while even 40 Gbps or 100+ Gbps might be required to utilize the full
performance potential of the underlying disks.

If unsure, we recommend using three physically separate networks for
high-performance setups:

* one very high bandwidth (25+ Gbps) network for Ceph (internal) cluster
traffic.
* one high bandwidth (10+ Gbps) network for Ceph (public) traffic between Ceph
servers and Ceph clients. Depending on your needs, this can also be used to
host the virtual guest traffic and the VM live-migration traffic.
* one medium bandwidth (1 Gbps) network exclusively for the latency-sensitive
corosync cluster communication.
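
Such a separation is reflected in the Ceph configuration by distinct subnets
for the public and cluster networks. For illustration only (the subnets below
are example assumptions, not defaults), the relevant part of
`/etc/pve/ceph.conf` could look like this:

----
[global]
     public_network = 10.10.10.0/24
     cluster_network = 10.10.20.0/24
----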

.Disks
When planning the size of your Ceph cluster, it is important to take the
recovery time into consideration. Especially with small clusters, recovery
can take long. It is recommended that you use SSDs instead of HDDs in small
setups to reduce recovery time, minimizing the likelihood of a subsequent
failure event during recovery.

In general, SSDs will provide more IOPS than spinning disks. With this in mind,
in addition to the higher cost, it may make sense to implement a
xref:pve_ceph_device_classes[class based] separation of pools. Another way to
speed up OSDs is to use a faster disk as a journal or
DB/**W**rite-**A**head-**L**og device, see
xref:pve_ceph_osds[creating Ceph OSDs].
If a faster disk is used for multiple OSDs, a proper balance between OSD
and WAL / DB (or journal) disk must be selected, otherwise the faster disk
becomes the bottleneck for all linked OSDs.

Aside from the disk type, Ceph performs best with evenly sized and evenly
distributed disks per node. For example, 4 x 500 GB disks within each node is
better than a mixed setup with a single 1 TB disk and three 250 GB disks.

You also need to balance OSD count and single OSD capacity. More capacity
allows you to increase storage density, but it also means that a single OSD
failure forces Ceph to recover more data at once.

.Avoid RAID
As Ceph handles data object redundancy and multiple parallel writes to disks
(OSDs) on its own, using a RAID controller normally doesn't improve
performance or availability. On the contrary, Ceph is designed to handle whole
disks on its own, without any abstraction in between. RAID controllers are not
designed for the Ceph workload and may complicate things and sometimes even
reduce performance, as their write and caching algorithms may interfere with
the ones from Ceph.

WARNING: Avoid RAID controllers. Use a host bus adapter (HBA) instead.

[[pve_ceph_install_wizard]]
Initial Ceph Installation & Configuration
-----------------------------------------

Using the Web-based Wizard
~~~~~~~~~~~~~~~~~~~~~~~~~~

[thumbnail="screenshot/gui-node-ceph-install.png"]

With {pve}, you have the benefit of an easy to use installation wizard
for Ceph. Click on one of your cluster nodes and navigate to the Ceph
section in the menu tree. If Ceph is not already installed, you will see a
prompt offering to do so.

The wizard is divided into multiple sections, each of which needs to
finish successfully, in order to use Ceph.

First you need to choose which Ceph version you want to install. Prefer the
one from your other nodes, or the newest if this is the first node on which
you install Ceph.

After starting the installation, the wizard will download and install all the
required packages from {pve}'s Ceph repository.
[thumbnail="screenshot/gui-node-ceph-install-wizard-step0.png"]

After finishing the installation step, you will need to create a configuration.
This step is only needed once per cluster, as this configuration is distributed
automatically to all remaining cluster members through {pve}'s clustered
xref:chapter_pmxcfs[configuration file system (pmxcfs)].

The configuration step includes the following settings:

* *Public Network:* You can set up a dedicated network for Ceph. This
setting is required. Separating your Ceph traffic is highly recommended.
Otherwise, it could cause trouble with other latency dependent services,
for example, cluster communication may decrease Ceph's performance.

[thumbnail="screenshot/gui-node-ceph-install-wizard-step2.png"]

* *Cluster Network:* As an optional step, you can go even further and
separate the xref:pve_ceph_osds[OSD] replication & heartbeat traffic
as well. This will relieve the public network and could lead to
significant performance improvements, especially in large clusters.

You have two more options which are considered advanced and therefore should
only be changed if you know what you are doing.

* *Number of replicas*: Defines how often an object is replicated.
* *Minimum replicas*: Defines the minimum number of required replicas
for I/O to be marked as complete.

Additionally, you need to choose your first monitor node. This step is
required.

That's it. You should now see a success page as the last step, with further
instructions on how to proceed. Your system is now ready to start using Ceph.
To get started, you will need to create some additional
xref:pve_ceph_monitors[monitors], xref:pve_ceph_osds[OSDs] and at least one
xref:pve_ceph_pools[pool].

The rest of this chapter will guide you through getting the most out of
your {pve} based Ceph setup. This includes the aforementioned tips and
more, such as xref:pveceph_fs[CephFS], which is a helpful addition to your
new Ceph cluster.

[[pve_ceph_install]]
CLI Installation of Ceph Packages
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

As an alternative to the recommended {pve} Ceph installation wizard available
in the web interface, you can use the following CLI command on each node:

[source,bash]
----
pveceph install
----

This sets up an `apt` package repository in
`/etc/apt/sources.list.d/ceph.list` and installs the required software.
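
Depending on your {pve} release, 'pveceph install' may also let you pick the
Ceph release and the package repository to use. Treat the following as a
sketch, not a guaranteed interface, and check `pveceph help install` for the
options available on your version:

[source,bash]
----
# sketch: select a specific Ceph release and the no-subscription repository
pveceph install --version quincy --repository no-subscription
----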


Initial Ceph configuration via CLI
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Use the {pve} Ceph installation wizard (recommended) or run the
following command on one node:

[source,bash]
----
pveceph init --network 10.10.10.0/24
----

This creates an initial configuration at `/etc/pve/ceph.conf` with a
dedicated network for Ceph. This file is automatically distributed to
all {pve} nodes, using xref:chapter_pmxcfs[pmxcfs]. The command also
creates a symbolic link at `/etc/ceph/ceph.conf`, which points to that file.
Thus, you can simply run Ceph commands without the need to specify a
configuration file.


[[pve_ceph_monitors]]
Ceph Monitor
------------

[thumbnail="screenshot/gui-ceph-monitor.png"]

The Ceph Monitor (MON)
footnote:[Ceph Monitor {cephdocs-url}/start/intro/]
maintains a master copy of the cluster map. For high availability, you need at
least 3 monitors. One monitor will already be installed if you
used the installation wizard. You won't need more than 3 monitors, as long
as your cluster is small to medium-sized. Only really large clusters will
require more than this.

[[pveceph_create_mon]]
Create Monitors
~~~~~~~~~~~~~~~

On each node where you want to place a monitor (three monitors are
recommended), create one by using the 'Ceph -> Monitor' tab in the GUI or run:

[source,bash]
----
pveceph mon create
----

[[pveceph_destroy_mon]]
Destroy Monitors
~~~~~~~~~~~~~~~~

To remove a Ceph Monitor via the GUI, first select a node in the tree view and
go to the **Ceph -> Monitor** panel. Select the MON and click the **Destroy**
button.

To remove a Ceph Monitor via the CLI, first connect to the node on which the
MON is running. Then execute the following command:

[source,bash]
----
pveceph mon destroy
----

NOTE: At least three monitors are needed for quorum.


[[pve_ceph_manager]]
Ceph Manager
------------

The Manager daemon runs alongside the monitors. It provides an interface to
monitor the cluster. Since the release of Ceph Luminous, at least one ceph-mgr
footnote:[Ceph Manager {cephdocs-url}/mgr/] daemon is
required.

[[pveceph_create_mgr]]
Create Manager
~~~~~~~~~~~~~~

Multiple Managers can be installed, but only one Manager is active at any given
time.

[source,bash]
----
pveceph mgr create
----

NOTE: It is recommended to install the Ceph Manager on the monitor nodes. For
high availability, install more than one manager.


[[pveceph_destroy_mgr]]
Destroy Manager
~~~~~~~~~~~~~~~

To remove a Ceph Manager via the GUI, first select a node in the tree view and
go to the **Ceph -> Monitor** panel. Select the Manager and click the
**Destroy** button.

To remove a Ceph Manager via the CLI, first connect to the node on which the
Manager is running. Then execute the following command:

[source,bash]
----
pveceph mgr destroy
----

NOTE: While a manager is not a hard dependency, it is crucial for a Ceph
cluster, as it handles important features like PG-autoscaling, device health
monitoring, telemetry and more.

[[pve_ceph_osds]]
Ceph OSDs
---------

[thumbnail="screenshot/gui-ceph-osd-status.png"]

Ceph **O**bject **S**torage **D**aemons store objects for Ceph over the
network. It is recommended to use one OSD per physical disk.

[[pve_ceph_osd_create]]
Create OSDs
~~~~~~~~~~~

You can create an OSD either via the {pve} web-interface or via the CLI using
`pveceph`. For example:

[source,bash]
----
pveceph osd create /dev/sd[X]
----

TIP: We recommend a Ceph cluster with at least three nodes and at least 12
OSDs, evenly distributed among the nodes.

If the disk was in use before (for example, for ZFS or as an OSD), you first
need to zap all traces of that usage. To remove the partition table, boot
sector and any other OSD leftover, you can use the following command:

[source,bash]
----
ceph-volume lvm zap /dev/sd[X] --destroy
----

WARNING: The above command will destroy all data on the disk!

.Ceph Bluestore

Starting with the Ceph Kraken release, a new Ceph OSD storage type was
introduced called Bluestore
footnote:[Ceph Bluestore https://ceph.com/community/new-luminous-bluestore/].
This is the default when creating OSDs since Ceph Luminous.

[source,bash]
----
pveceph osd create /dev/sd[X]
----

.Block.db and block.wal

If you want to use a separate DB/WAL device for your OSDs, you can specify it
through the '-db_dev' and '-wal_dev' options. The WAL is placed with the DB, if
not specified separately.

[source,bash]
----
pveceph osd create /dev/sd[X] -db_dev /dev/sd[Y] -wal_dev /dev/sd[Z]
----

You can directly choose the size of those with the '-db_size' and '-wal_size'
parameters respectively. If they are not given, the following values (in order)
will be used:

* bluestore_block_{db,wal}_size from Ceph configuration...
** ... database, section 'osd'
** ... database, section 'global'
** ... file, section 'osd'
** ... file, section 'global'
* 10% (DB)/1% (WAL) of OSD size
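
As a worked example of the 10%/1% fallback: a 4 TiB OSD would default to
roughly a 400 GiB DB and a 40 GiB WAL. To size the DB explicitly instead, a
sketch could look like the following (the size value is an assumption for
illustration; check `pveceph help osd create` for the exact unit on your
version):

[source,bash]
----
# sketch: create an OSD with an explicit 120 GiB DB on a separate device
pveceph osd create /dev/sd[X] -db_dev /dev/sd[Y] -db_size 120
----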

NOTE: The DB stores BlueStore's internal metadata, and the WAL is BlueStore's
internal journal or write-ahead log. It is recommended to use a fast SSD or
NVRAM for better performance.

.Ceph Filestore

Before Ceph Luminous, Filestore was used as the default storage type for Ceph
OSDs. Starting with Ceph Nautilus, {pve} does not support creating such OSDs
with 'pveceph' anymore. If you still want to create filestore OSDs, use
'ceph-volume' directly.

[source,bash]
----
ceph-volume lvm create --filestore --data /dev/sd[X] --journal /dev/sd[Y]
----

[[pve_ceph_osd_destroy]]
Destroy OSDs
~~~~~~~~~~~~

To remove an OSD via the GUI, first select a {PVE} node in the tree view and go
to the **Ceph -> OSD** panel. Then select the OSD to destroy and click the
**OUT** button. Once the OSD status has changed from `in` to `out`, click the
**STOP** button. Finally, after the status has changed from `up` to `down`,
select **Destroy** from the `More` drop-down menu.

To remove an OSD via the CLI, run the following commands:

[source,bash]
----
ceph osd out <ID>
systemctl stop ceph-osd@<ID>.service
----

NOTE: The first command instructs Ceph not to include the OSD in the data
distribution. The second command stops the OSD service. Until this time, no
data is lost.

The following command destroys the OSD. Specify the '-cleanup' option to
additionally destroy the partition table.

[source,bash]
----
pveceph osd destroy <ID>
----

WARNING: The above command will destroy all data on the disk!


[[pve_ceph_pools]]
Ceph Pools
----------

[thumbnail="screenshot/gui-ceph-pools.png"]

A pool is a logical group for storing objects. It holds a collection of
objects, known as **P**lacement **G**roups (`PG`, `pg_num`).


Create and Edit Pools
~~~~~~~~~~~~~~~~~~~~~

You can create and edit pools from the command line or the web-interface of
any {pve} host under **Ceph -> Pools**.

When no options are given, we set a default of **128 PGs**, a **size of 3
replicas** and a **min_size of 2 replicas**, to ensure no data loss occurs if
any OSD fails.

WARNING: **Do not set a min_size of 1**. A replicated pool with min_size of 1
allows I/O on an object when it has only 1 replica, which could lead to data
loss, incomplete PGs or unfound objects.

It is advised that you either enable the PG-Autoscaler or calculate the PG
number based on your setup. You can find the formula and the PG calculator
footnote:[PG calculator https://web.archive.org/web/20210301111112/http://ceph.com/pgcalc/]
online. From Ceph Nautilus onward, you can change the number of PGs
footnoteref:[placement_groups,Placement Groups
{cephdocs-url}/rados/operations/placement-groups/] after the setup.

The PG autoscaler footnoteref:[autoscaler,Automated Scaling
{cephdocs-url}/rados/operations/placement-groups/#automated-scaling] can
automatically scale the PG count for a pool in the background. Setting the
`Target Size` or `Target Ratio` advanced parameters helps the PG-Autoscaler to
make better decisions.

.Example for creating a pool over the CLI
[source,bash]
----
pveceph pool create <pool-name> --add_storages
----

TIP: If you would also like to automatically define a storage for your
pool, keep the `Add as Storage' checkbox checked in the web-interface, or use
the command-line option '--add_storages' at pool creation.

Pool Options
^^^^^^^^^^^^

[thumbnail="screenshot/gui-ceph-pool-create.png"]

The following options are available on pool creation, and partially also when
editing a pool.

Name:: The name of the pool. This must be unique and can't be changed
afterwards.
Size:: The number of replicas per object. Ceph always tries to have this many
copies of an object. Default: `3`.
PG Autoscale Mode:: The automatic PG scaling mode footnoteref:[autoscaler] of
the pool. If set to `warn`, it produces a warning message when a pool
has a non-optimal PG count. Default: `warn`.
Add as Storage:: Configure a VM or container storage using the new pool.
Default: `true` (only visible on creation).

.Advanced Options
Min. Size:: The minimum number of replicas per object. Ceph will reject I/O on
the pool if a PG has less than this many replicas. Default: `2`.
Crush Rule:: The rule to use for mapping object placement in the cluster. These
rules define how data is placed within the cluster. See
xref:pve_ceph_device_classes[Ceph CRUSH & device classes] for information on
device-based rules.
# of PGs:: The number of placement groups footnoteref:[placement_groups] that
the pool should have at the beginning. Default: `128`.
Target Ratio:: The ratio of data that is expected in the pool. The PG
autoscaler uses the ratio relative to other ratio sets. It takes precedence
over the `target size` if both are set.
Target Size:: The estimated amount of data expected in the pool. The PG
autoscaler uses this size to estimate the optimal PG count.
Min. # of PGs:: The minimum number of placement groups. This setting is used to
fine-tune the lower bound of the PG count for that pool. The PG autoscaler
will not merge PGs below this threshold.
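
Most of these options can also be changed on an existing pool. A minimal
sketch of adjusting a pool over the CLI (option availability may vary with
your {pve} version, see `pveceph help pool set`):

[source,bash]
----
# sketch: enable the autoscaler and set a target ratio on an existing pool
pveceph pool set <pool-name> --pg_autoscale_mode on --target_size_ratio 1.0
----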

Further information on Ceph pool handling can be found in the Ceph pool
operation footnote:[Ceph pool operation
{cephdocs-url}/rados/operations/pools/]
manual.


[[pve_ceph_ec_pools]]
Erasure Coded Pools
~~~~~~~~~~~~~~~~~~~

Erasure coding (EC) is a form of `forward error correction' codes that allows
recovery from a certain amount of data loss. Erasure coded pools can offer
more usable space compared to replicated pools, but they do that at the price
of performance.

For comparison: in classic, replicated pools, multiple replicas of the data
are stored (`size`), while in erasure coded pools, data is split into `k` data
chunks with `m` additional coding (checking) chunks. Those coding chunks can be
used to recreate data should data chunks be missing.

The number of coding chunks, `m`, defines how many OSDs can be lost without
losing any data. The total number of chunks stored is `k + m`.
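
For example, with `k = 4` and `m = 2`, each object occupies `(k + m) / k =
1.5` times its size in raw capacity and survives the loss of up to two OSDs,
whereas a replicated pool with `size = 3` occupies 3 times the raw capacity to
survive the same two losses.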

Creating EC Pools
^^^^^^^^^^^^^^^^^

Erasure coded (EC) pools can be created with the `pveceph` CLI tooling.
Planning an EC pool needs to account for the fact that they work differently
than replicated pools.

The default `min_size` of an EC pool depends on the `m` parameter. If `m = 1`,
the `min_size` of the EC pool will be `k`. The `min_size` will be `k + 1` if
`m > 1`. The Ceph documentation recommends a conservative `min_size` of `k + 2`
footnote:[Ceph Erasure Coded Pool Recovery
{cephdocs-url}/rados/operations/erasure-code/#erasure-coded-pool-recovery].

If there are fewer than `min_size` OSDs available, any IO to the pool will be
blocked until there are enough OSDs available again.

NOTE: When planning an erasure coded pool, keep an eye on the `min_size`, as it
defines how many OSDs need to be available. Otherwise, IO will be blocked.

For example, an EC pool with `k = 2` and `m = 1` will have `size = 3`,
`min_size = 2` and will stay operational if one OSD fails. If the pool is
configured with `k = 2`, `m = 2`, it will have a `size = 4` and `min_size = 3`
and stay operational if one OSD is lost.

To create a new EC pool, run the following command:

[source,bash]
----
pveceph pool create <pool-name> --erasure-coding k=2,m=1
----

Optional parameters are `failure-domain` and `device-class`. If you
need to change any EC profile settings used by the pool, you will have to
create a new pool with a new profile.

This will create a new EC pool plus the needed replicated pool to store the RBD
omap and other metadata. In the end, there will be a `<pool-name>-data` and a
`<pool-name>-metadata` pool. The default behavior is to create a matching
storage configuration as well. If that behavior is not wanted, you can disable
it by providing the `--add_storages 0` parameter. When configuring the storage
configuration manually, keep in mind that the `data-pool` parameter needs to be
set. Only then will the EC pool be used to store the data objects. For example,
a matching storage entry could be added like this (a sketch using the pool
names created by the command above):
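
[source,bash]
----
pvesm add rbd <storage-name> --pool <pool-name>-metadata --data-pool <pool-name>-data
----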

NOTE: The optional parameters `--size`, `--min_size` and `--crush_rule` will be
used for the replicated metadata pool, but not for the erasure coded data pool.
If you need to change the `min_size` on the data pool, you can do it later.
The `size` and `crush_rule` parameters cannot be changed on erasure coded
pools.

If there is a need to further customize the EC profile, you can do so by
creating it with the Ceph tools directly footnote:[Ceph Erasure Code Profile
{cephdocs-url}/rados/operations/erasure-code/#erasure-code-profiles], and
specifying the profile to use with the `profile` parameter.
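
A custom profile could be created with the Ceph tooling first; a minimal
sketch, where the profile name and the `k`/`m` values are placeholders, not
recommendations:

[source,bash]
----
# sketch: define a custom EC profile with 4 data and 2 coding chunks
ceph osd erasure-code-profile set <profile-name> k=4 m=2 crush-failure-domain=host
----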

For example:

[source,bash]
----
pveceph pool create <pool-name> --erasure-coding profile=<profile-name>
----

Adding EC Pools as Storage
^^^^^^^^^^^^^^^^^^^^^^^^^^

You can add an already existing EC pool as storage to {pve}. It works the same
way as adding an `RBD` pool but requires the extra `data-pool` option.

[source,bash]
----
pvesm add rbd <storage-name> --pool <replicated-pool> --data-pool <ec-pool>
----

TIP: Do not forget to add the `keyring` and `monhost` options for any external
Ceph clusters not managed by the local {pve} cluster.

Destroy Pools
~~~~~~~~~~~~~

To destroy a pool via the GUI, select a node in the tree view and go to the
**Ceph -> Pools** panel. Select the pool to destroy and click the **Destroy**
button. To confirm the destruction of the pool, you need to enter the pool
name.

Run the following command to destroy a pool. Specify the '-remove_storages'
option to also remove the associated storage.

[source,bash]
----
pveceph pool destroy <name>
----

NOTE: Pool deletion runs in the background and can take some time.
You will notice the data usage in the cluster decreasing throughout this
process.


PG Autoscaler
~~~~~~~~~~~~~

The PG autoscaler allows the cluster to consider the amount of (expected) data
stored in each pool and to choose the appropriate pg_num values automatically.
It is available since Ceph Nautilus.

You may need to activate the PG autoscaler module before adjustments can take
effect.

[source,bash]
----
ceph mgr module enable pg_autoscaler
----

The autoscaler is configured on a per pool basis and has the following modes:

[horizontal]
warn:: A health warning is issued if the suggested `pg_num` value differs too
much from the current value.
on:: The `pg_num` is adjusted automatically with no need for any manual
interaction.
off:: No automatic `pg_num` adjustments are made, and no warning will be issued
if the PG count is not optimal.

The scaling factor can be adjusted to facilitate future data storage with the
`target_size`, `target_size_ratio` and the `pg_num_min` options, as in the
sketch below.
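
A minimal sketch of setting these via the Ceph tooling directly; the pool name
and values are assumptions for illustration:

[source,bash]
----
# sketch: expect this pool to hold ~30% of cluster data, never go below 32 PGs
ceph osd pool set <pool-name> target_size_ratio 0.3
ceph osd pool set <pool-name> pg_num_min 32
----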

WARNING: By default, the autoscaler considers tuning the PG count of a pool if
it is off by a factor of 3. This will lead to a considerable shift in data
placement and might introduce a high load on the cluster.

You can find a more in-depth introduction to the PG autoscaler on Ceph's Blog -
https://ceph.io/rados/new-in-nautilus-pg-merging-and-autotuning/[New in
Nautilus: PG merging and autotuning].


[[pve_ceph_device_classes]]
Ceph CRUSH & device classes
---------------------------

[thumbnail="screenshot/gui-ceph-config.png"]

The CRUSH footnote:[CRUSH
https://ceph.com/wp-content/uploads/2016/08/weil-crush-sc06.pdf] (**C**ontrolled
**R**eplication **U**nder **S**calable **H**ashing) algorithm is at the
foundation of Ceph.

CRUSH calculates where to store and retrieve data from. This has the
advantage that no central indexing service is needed. CRUSH works using a map
of OSDs, buckets (device locations) and rulesets (data replication) for pools.

NOTE: Further information can be found in the Ceph documentation, under the
section CRUSH map footnote:[CRUSH map {cephdocs-url}/rados/operations/crush-map/].

This map can be altered to reflect different replication hierarchies. The
object replicas can be separated (e.g., failure domains), while maintaining the
desired distribution.

A common configuration is to use different classes of disks for different Ceph
pools. For this reason, Ceph introduced device classes with Luminous, to
accommodate the need for easy ruleset generation.

The device classes can be seen in the 'ceph osd tree' output. These classes
represent their own root bucket, which can be seen with the below command.

[source, bash]
----
ceph osd crush tree --show-shadow
----

Example output from the above command:

[source, bash]
----
ID  CLASS WEIGHT  TYPE NAME
-16 nvme 2.18307 root default~nvme
-13 nvme 0.72769     host sumi1~nvme
 12 nvme 0.72769         osd.12
-14 nvme 0.72769     host sumi2~nvme
 13 nvme 0.72769         osd.13
-15 nvme 0.72769     host sumi3~nvme
 14 nvme 0.72769         osd.14
 -1      7.70544 root default
 -3      2.56848     host sumi1
 12 nvme 0.72769         osd.12
 -5      2.56848     host sumi2
 13 nvme 0.72769         osd.13
 -7      2.56848     host sumi3
 14 nvme 0.72769         osd.14
----

To instruct a pool to only distribute objects on a specific device class, you
first need to create a ruleset for the device class:

[source, bash]
----
ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <class>
----

[frame="none",grid="none", align="left", cols="30%,70%"]
|===
|<rule-name>|name of the rule, to connect with a pool (seen in GUI & CLI)
|<root>|which crush root it should belong to (default Ceph root "default")
|<failure-domain>|at which failure-domain the objects should be distributed (usually host)
|<class>|what type of OSD backing store to use (e.g., nvme, ssd, hdd)
|===

Once the rule is in the CRUSH map, you can tell a pool to use the ruleset.

[source, bash]
----
ceph osd pool set <pool-name> crush_rule <rule-name>
----

TIP: If the pool already contains objects, these must be moved accordingly.
Depending on your setup, this may introduce a big performance impact on your
cluster. As an alternative, you can create a new pool and move disks
separately.


Ceph Client
-----------

[thumbnail="screenshot/gui-ceph-log.png"]

Following the setup from the previous sections, you can configure {pve} to use
such pools to store VM and Container images. Simply use the GUI to add a new
`RBD` storage (see section
xref:ceph_rados_block_devices[Ceph RADOS Block Devices (RBD)]).

You also need to copy the keyring to a predefined location for an external
Ceph cluster. If Ceph is installed on the {pve} nodes themselves, then this
will be done automatically.

NOTE: The filename needs to be `<storage_id>.keyring`, where `<storage_id>` is
the expression after 'rbd:' in `/etc/pve/storage.cfg`. In the following
example, `my-ceph-storage` is the `<storage_id>`:

[source,bash]
----
mkdir /etc/pve/priv/ceph
cp /etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/my-ceph-storage.keyring
----

[[pveceph_fs]]
CephFS
------

Ceph also provides a filesystem, which runs on top of the same object storage
as RADOS block devices do. A **M**eta**d**ata **S**erver (`MDS`) is used to map
the RADOS backed objects to files and directories, allowing Ceph to provide a
POSIX-compliant, replicated filesystem. This allows you to easily configure a
clustered, highly available, shared filesystem. Ceph's Metadata Servers
guarantee that files are evenly distributed over the entire Ceph cluster. As a
result, even cases of high load will not overwhelm a single host, which can be
an issue with traditional shared filesystem approaches, for example `NFS`.

[thumbnail="screenshot/gui-node-ceph-cephfs-panel.png"]

{pve} supports both creating a hyper-converged CephFS and using an existing
xref:storage_cephfs[CephFS as storage] to save backups, ISO files, and
container templates.


[[pveceph_fs_mds]]
Metadata Server (MDS)
~~~~~~~~~~~~~~~~~~~~~

CephFS needs at least one Metadata Server to be configured and running, in
order to function. You can create an MDS through the {pve} web GUI's `Node
-> CephFS` panel or from the command line with:

----
pveceph mds create
----

Multiple metadata servers can be created in a cluster, but with the default
settings, only one can be active at a time. If an MDS or its node becomes
unresponsive (or crashes), another `standby` MDS will get promoted to `active`.
You can speed up the handover between the active and standby MDS by using
the 'hotstandby' parameter option on creation, or if you have already created
it, you may set/add:

----
mds standby replay = true
----

in the respective MDS section of `/etc/pve/ceph.conf`. With this enabled, the
specified MDS will remain in a `warm` state, polling the active one, so that it
can take over faster in case of any issues.

NOTE: This active polling will have an additional performance impact on your
system and the active `MDS`.

.Multiple Active MDS

Since Luminous (12.2.x) you can have multiple active metadata servers
running at once, but this is normally only useful if you have a high amount of
clients running in parallel. Otherwise the `MDS` is rarely the bottleneck in a
system. If you want to set this up, please refer to the Ceph documentation.
footnote:[Configuring multiple active MDS daemons
{cephdocs-url}/cephfs/multimds/]

[[pveceph_fs_create]]
Create CephFS
~~~~~~~~~~~~~

With {pve}'s integration of CephFS, you can easily create a CephFS using the
web interface, CLI or an external API interface. Some prerequisites are
required for this to work:

.Prerequisites for a successful CephFS setup:
- xref:pve_ceph_install[Install Ceph packages] - if this was already done some
time ago, you may want to rerun it on an up-to-date system to
ensure that all CephFS related packages get installed.
- xref:pve_ceph_monitors[Setup Monitors]
- xref:pve_ceph_osds[Setup your OSDs]
- xref:pveceph_fs_mds[Setup at least one MDS]

After this is complete, you can simply create a CephFS through
either the Web GUI's `Node -> CephFS` panel or the command line tool `pveceph`,
for example:

----
pveceph fs create --pg_num 128 --add-storage
----

This creates a CephFS named 'cephfs', using a pool for its data named
'cephfs_data' with '128' placement groups and a pool for its metadata named
'cephfs_metadata' with one quarter of the data pool's placement groups (`32`).
Check the xref:pve_ceph_pools[{pve} managed Ceph pool chapter] or visit the
Ceph documentation for more information regarding an appropriate placement
group number (`pg_num`) for your setup footnoteref:[placement_groups].
Additionally, the '--add-storage' parameter will add the CephFS to the {pve}
storage configuration after it has been created successfully.

Destroy CephFS
~~~~~~~~~~~~~~

WARNING: Destroying a CephFS will render all of its data unusable. This cannot
be undone!

To completely and gracefully remove a CephFS, the following steps are
necessary:

* Disconnect every non-{PVE} client (e.g. unmount the CephFS in guests).
* Disable all related CephFS {PVE} storage entries (to prevent them from being
automatically mounted).
* Remove all used resources from guests (e.g. ISOs) that are on the CephFS you
want to destroy.
* Unmount the CephFS storages on all cluster nodes manually with
+
----
umount /mnt/pve/<STORAGE-NAME>
----
+
Where `<STORAGE-NAME>` is the name of the CephFS storage in your {PVE}.

* Now make sure that no metadata server (`MDS`) is running for that CephFS,
either by stopping or destroying them. This can be done through the web
interface or via the command line interface, for the latter you would issue
the following command:
+
----
pveceph stop --service mds.NAME
----
+
to stop them, or
+
----
pveceph mds destroy NAME
----
+
to destroy them.
+
Note that standby servers will automatically be promoted to active when an
active `MDS` is stopped or removed, so it is best to first stop all standby
servers.

* Now you can destroy the CephFS with
+
----
pveceph fs destroy NAME --remove-storages --remove-pools
----
+
This will automatically destroy the underlying Ceph pools as well as remove
the storages from the {pve} config.

After these steps, the CephFS should be completely removed and if you have
other CephFS instances, the stopped metadata servers can be started again
to act as standbys.

Ceph maintenance
----------------

Replace OSDs
~~~~~~~~~~~~

One of the most common maintenance tasks in Ceph is to replace the disk of an
OSD. If a disk is already in a failed state, then you can go ahead and run
through the steps in xref:pve_ceph_osd_destroy[Destroy OSDs]. Ceph will
recreate those copies on the remaining OSDs if possible. This rebalancing will
start as soon as an OSD failure is detected or an OSD was actively stopped.

NOTE: With the default size/min_size (3/2) of a pool, recovery only starts when
`size + 1` nodes are available. The reason for this is that the Ceph object
balancer xref:pve_ceph_device_classes[CRUSH] defaults to a full node as
`failure domain`.

To replace a functioning disk from the GUI, go through the steps in
xref:pve_ceph_osd_destroy[Destroy OSDs]. The only addition is to wait until
the cluster shows 'HEALTH_OK' before stopping the OSD to destroy it.

On the command line, use the following commands:

----
ceph osd out osd.<id>
----

You can check with the command below if the OSD can be safely removed.

----
ceph osd safe-to-destroy osd.<id>
----

Once the above check tells you that it is safe to remove the OSD, you can
continue with the following commands:

----
systemctl stop ceph-osd@<id>.service
pveceph osd destroy <id>
----

Replace the old disk with the new one and use the same procedure as described
in xref:pve_ceph_osd_create[Create OSDs].

Trim/Discard
~~~~~~~~~~~~

It is good practice to run 'fstrim' (discard) regularly on VMs and containers.
This releases data blocks that the filesystem isn't using anymore. It reduces
data usage and resource load. Most modern operating systems issue such discard
commands to their disks regularly. You only need to ensure that the virtual
machines enable the xref:qm_hard_disk_discard[disk discard option].
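
The discard option is set per virtual disk. A sketch, assuming a VM with ID
100 and a SCSI disk on a storage named `ceph-rbd` (both names are assumptions
for illustration); inside the guest, a manual trim can also be triggered:

----
# sketch: enable discard on an existing disk (the full disk spec is re-stated)
qm set 100 --scsi0 ceph-rbd:vm-100-disk-0,discard=on
# inside the guest: trim all mounted filesystems
fstrim -av
----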

[[pveceph_scrub]]
Scrub & Deep Scrub
~~~~~~~~~~~~~~~~~~

Ceph ensures data integrity by 'scrubbing' placement groups. Ceph checks every
object in a PG for its health. There are two forms of scrubbing: daily
cheap metadata checks and weekly deep data checks. The weekly deep scrub reads
the objects and uses checksums to ensure data integrity. If a running scrub
interferes with business (performance) needs, you can adjust the time when
scrubs footnote:[Ceph scrubbing {cephdocs-url}/rados/configuration/osd-config-ref/#scrubbing]
are executed, as sketched below.
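
For instance, scrubbing could be confined to off-peak hours through the OSD
scrub window options; a sketch, assuming a 22:00 to 06:00 window:

----
# sketch: only allow scrubbing between 22:00 and 06:00
ceph config set osd osd_scrub_begin_hour 22
ceph config set osd osd_scrub_end_hour 6
----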


Ceph Monitoring and Troubleshooting
-----------------------------------

It is important to continuously monitor the health of a Ceph deployment from
the beginning, either by using the Ceph tools or by accessing
the status through the {pve} link:api-viewer/index.html[API].

The following Ceph commands can be used to see if the cluster is healthy
('HEALTH_OK'), if there are warnings ('HEALTH_WARN'), or even errors
('HEALTH_ERR'). If the cluster is in an unhealthy state, the status commands
below will also give you an overview of the current events and actions to
take.

----
# single time output
pve# ceph -s
# continuously output status changes (press CTRL+C to stop)
pve# ceph -w
----

To get a more detailed view, every Ceph service has a log file under
`/var/log/ceph/`. If more detail is required, the log level can be
adjusted footnote:[Ceph log and debugging {cephdocs-url}/rados/troubleshooting/log-and-debug/].

You can find more information about troubleshooting
footnote:[Ceph troubleshooting {cephdocs-url}/rados/troubleshooting/]
a Ceph cluster on the official website.


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]