[[chapter_pveceph]]
ifdef::manvolnum[]
pveceph(1)
==========
:pve-toplevel:

NAME
----

pveceph - Manage Ceph Services on Proxmox VE Nodes

SYNOPSIS
--------

include::pveceph.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
Deploy Hyper-Converged Ceph Cluster
===================================
:pve-toplevel:
endif::manvolnum[]

[thumbnail="screenshot/gui-ceph-status-dashboard.png"]

{pve} unifies your compute and storage systems, that is, you can use the same
physical nodes within a cluster for both computing (processing VMs and
containers) and replicated storage. The traditional silos of compute and
storage resources can be wrapped up into a single hyper-converged appliance.
Separate storage networks (SANs) and connections via network attached storage
(NAS) disappear. With the integration of Ceph, an open source software-defined
storage platform, {pve} has the ability to run and manage Ceph storage directly
on the hypervisor nodes.

Ceph is a distributed object store and file system designed to provide
excellent performance, reliability and scalability.

.Some advantages of Ceph on {pve} are:
- Easy setup and management via CLI and GUI
- Thin provisioning
- Snapshot support
- Self healing
- Scalable to the exabyte level
- Setup pools with different performance and redundancy characteristics
- Data is replicated, making it fault tolerant
- Runs on commodity hardware
- No need for hardware RAID controllers
- Open source

For small to medium-sized deployments, it is possible to install a Ceph server for
RADOS Block Devices (RBD) directly on your {pve} cluster nodes (see
xref:ceph_rados_block_devices[Ceph RADOS Block Devices (RBD)]). Recent
hardware has a lot of CPU power and RAM, so running storage services
and VMs on the same node is possible.

To simplify management, we provide 'pveceph' - a tool for installing and
managing {ceph} services on {pve} nodes.

.Ceph consists of multiple Daemons, for use as RBD storage:
- Ceph Monitor (ceph-mon)
- Ceph Manager (ceph-mgr)
- Ceph OSD (ceph-osd; Object Storage Daemon)

TIP: We highly recommend getting familiar with Ceph
footnote:[Ceph intro {cephdocs-url}/start/intro/],
its architecture
footnote:[Ceph architecture {cephdocs-url}/architecture/]
and vocabulary
footnote:[Ceph glossary {cephdocs-url}/glossary].


Precondition
------------

To build a hyper-converged Proxmox + Ceph Cluster, you must use at least
three (preferably) identical servers for the setup.

Check also the recommendations from
{cephdocs-url}/start/hardware-recommendations/[Ceph's website].

.CPU
A high CPU core frequency reduces latency and should be preferred. As a simple
rule of thumb, you should assign a CPU core (or thread) to each Ceph service to
provide enough resources for stable and durable Ceph performance.

.Memory
Especially in a hyper-converged setup, the memory consumption needs to be
carefully monitored. In addition to the predicted memory usage of virtual
machines and containers, you must also account for having enough memory
available for Ceph to provide excellent and stable performance.

As a rule of thumb, for roughly **1 TiB of data, 1 GiB of memory** will be used
by an OSD, especially during recovery, re-balancing or backfilling.

The daemon itself will use additional memory. The Bluestore backend of the
daemon requires by default **3-5 GiB of memory** (adjustable). In contrast, the
legacy Filestore backend uses the OS page cache and the memory consumption is
generally related to the number of PGs of an OSD daemon.
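
For example (a rough, illustrative calculation based on the rules of thumb
above, not an exact sizing): a node with 4 OSDs of 4 TiB each should budget
roughly 4 x (4 GiB + 4 GiB) = 32 GiB of memory for Ceph alone, in addition to
the memory planned for VMs, containers and the host itself.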

.Network
We recommend a network bandwidth of at least 10 Gbps, used exclusively for
Ceph. A meshed network setup
footnote:[Full Mesh Network for Ceph {webwiki-url}Full_Mesh_Network_for_Ceph_Server]
is also an option if there are no 10 GbE switches available.

The volume of traffic, especially during recovery, will interfere with other
services on the same network and may even break the {pve} cluster stack.

Furthermore, you should estimate your bandwidth needs. While one HDD might not
saturate a 1 Gb link, multiple HDD OSDs per node can, and modern NVMe SSDs will
even saturate 10 Gbps of bandwidth quickly. Deploying a network capable of even
more bandwidth will ensure that this isn't your bottleneck and won't be anytime
soon. 25, 40 or even 100 Gbps are possible.

.Disks
When planning the size of your Ceph cluster, it is important to take the
recovery time into consideration. Especially with small clusters, recovery
might take long. It is recommended that you use SSDs instead of HDDs in small
setups to reduce recovery time, minimizing the likelihood of a subsequent
failure event during recovery.

In general, SSDs will provide more IOPS than spinning disks. With this in mind,
in addition to the higher cost, it may make sense to implement a
xref:pve_ceph_device_classes[class based] separation of pools. Another way to
speed up OSDs is to use a faster disk as a journal or
DB/**W**rite-**A**head-**L**og device, see
xref:pve_ceph_osds[creating Ceph OSDs].
If a faster disk is used for multiple OSDs, a proper balance between OSD
and WAL / DB (or journal) disk must be selected, otherwise the faster disk
becomes the bottleneck for all linked OSDs.

Aside from the disk type, Ceph performs best with an evenly sized and evenly
distributed amount of disks per node. For example, 4 x 500 GB disks within each
node is better than a mixed setup with a single 1 TB disk and three 250 GB
disks.

You also need to balance OSD count and single OSD capacity. More capacity
allows you to increase storage density, but it also means that a single OSD
failure forces Ceph to recover more data at once.

.Avoid RAID
As Ceph handles data object redundancy and multiple parallel writes to disks
(OSDs) on its own, using a RAID controller normally doesn’t improve
performance or availability. On the contrary, Ceph is designed to handle whole
disks on its own, without any abstraction in between. RAID controllers are not
designed for the Ceph workload and may complicate things and sometimes even
reduce performance, as their write and caching algorithms may interfere with
the ones from Ceph.

WARNING: Avoid RAID controllers. Use a host bus adapter (HBA) instead.

NOTE: The above recommendations should be seen as a rough guideline for
choosing hardware. Therefore, it is still essential to adapt them to your
specific needs. You should test your setup and monitor health and performance
continuously.

[[pve_ceph_install_wizard]]
Initial Ceph Installation & Configuration
-----------------------------------------

Using the Web-based Wizard
~~~~~~~~~~~~~~~~~~~~~~~~~~

[thumbnail="screenshot/gui-node-ceph-install.png"]

With {pve} you have the benefit of an easy to use installation wizard
for Ceph. Click on one of your cluster nodes and navigate to the Ceph
section in the menu tree. If Ceph is not already installed, you will see a
prompt offering to do so.

The wizard is divided into multiple sections, each of which needs to
finish successfully in order to use Ceph.

First you need to choose which Ceph version you want to install. Prefer the one
from your other nodes, or the newest if this is the first node on which you are
installing Ceph.

After starting the installation, the wizard will download and install all the
required packages from {pve}'s Ceph repository.
[thumbnail="screenshot/gui-node-ceph-install-wizard-step0.png"]

After finishing the installation step, you will need to create a configuration.
This step is only needed once per cluster, as this configuration is distributed
automatically to all remaining cluster members through {pve}'s clustered
xref:chapter_pmxcfs[configuration file system (pmxcfs)].

The configuration step includes the following settings:

* *Public Network:* You can set up a dedicated network for Ceph. This
setting is required. Separating your Ceph traffic is highly recommended.
Otherwise, it could cause trouble with other latency dependent services,
for example, cluster communication may decrease Ceph's performance.

[thumbnail="screenshot/gui-node-ceph-install-wizard-step2.png"]

* *Cluster Network:* As an optional step, you can go even further and
separate the xref:pve_ceph_osds[OSD] replication & heartbeat traffic
as well. This will relieve the public network and could lead to
significant performance improvements, especially in large clusters.

You have two more options which are considered advanced and therefore
should only be changed if you know what you are doing.

* *Number of replicas*: Defines how often an object is replicated.
* *Minimum replicas*: Defines the minimum number of required replicas
for I/O to be marked as complete.

Additionally, you need to choose your first monitor node. This step is required.

That's it. You should now see a success page as the last step, with further
instructions on how to proceed. Your system is now ready to start using Ceph.
To get started, you will need to create some additional xref:pve_ceph_monitors[monitors],
xref:pve_ceph_osds[OSDs] and at least one xref:pve_ceph_pools[pool].

The rest of this chapter will guide you through getting the most out of
your {pve} based Ceph setup. This includes the aforementioned tips and
more, such as xref:pveceph_fs[CephFS], which is a helpful addition to your
new Ceph cluster.

[[pve_ceph_install]]
CLI Installation of Ceph Packages
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

As an alternative to the recommended {pve} Ceph installation wizard available
in the web-interface, you can use the following CLI command on each node:

[source,bash]
----
pveceph install
----

This sets up an `apt` package repository in
`/etc/apt/sources.list.d/ceph.list` and installs the required software.


Initial Ceph configuration via CLI
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Use the {pve} Ceph installation wizard (recommended) or run the
following command on one node:

[source,bash]
----
pveceph init --network 10.10.10.0/24
----

This creates an initial configuration at `/etc/pve/ceph.conf` with a
dedicated network for Ceph. This file is automatically distributed to
all {pve} nodes, using xref:chapter_pmxcfs[pmxcfs]. The command also
creates a symbolic link at `/etc/ceph/ceph.conf`, which points to that file.
Thus, you can simply run Ceph commands without the need to specify a
configuration file.


[[pve_ceph_monitors]]
Ceph Monitor
------------

[thumbnail="screenshot/gui-ceph-monitor.png"]

The Ceph Monitor (MON)
footnote:[Ceph Monitor {cephdocs-url}/start/intro/]
maintains a master copy of the cluster map. For high availability, you need at
least 3 monitors. One monitor will already be installed if you
used the installation wizard. You won't need more than 3 monitors, as long
as your cluster is small to medium-sized. Only really large clusters will
require more than this.

[[pveceph_create_mon]]
Create Monitors
~~~~~~~~~~~~~~~

On each node where you want to place a monitor (three monitors are recommended),
create one by using the 'Ceph -> Monitor' tab in the GUI or run:


[source,bash]
----
pveceph mon create
----

[[pveceph_destroy_mon]]
Destroy Monitors
~~~~~~~~~~~~~~~~

To remove a Ceph Monitor via the GUI, first select a node in the tree view and
go to the **Ceph -> Monitor** panel. Select the MON and click the **Destroy**
button.

To remove a Ceph Monitor via the CLI, first connect to the node on which the MON
is running. Then execute the following command:
[source,bash]
----
pveceph mon destroy
----

NOTE: At least three Monitors are needed for quorum.


[[pve_ceph_manager]]
Ceph Manager
------------

The Manager daemon runs alongside the monitors. It provides an interface to
monitor the cluster. Since the release of Ceph Luminous, at least one ceph-mgr
footnote:[Ceph Manager {cephdocs-url}/mgr/] daemon is
required.

[[pveceph_create_mgr]]
Create Manager
~~~~~~~~~~~~~~

Multiple Managers can be installed, but only one Manager is active at any given
time.

[source,bash]
----
pveceph mgr create
----

NOTE: It is recommended to install the Ceph Manager on the monitor nodes. For
high availability, install more than one manager.


[[pveceph_destroy_mgr]]
Destroy Manager
~~~~~~~~~~~~~~~

To remove a Ceph Manager via the GUI, first select a node in the tree view and
go to the **Ceph -> Monitor** panel. Select the Manager and click the
**Destroy** button.

To remove a Ceph Manager via the CLI, first connect to the node on which the
Manager is running. Then execute the following command:
[source,bash]
----
pveceph mgr destroy
----

NOTE: While a manager is not a hard dependency, it is crucial for a Ceph cluster,
as it handles important features like PG-autoscaling, device health monitoring,
telemetry and more.

[[pve_ceph_osds]]
Ceph OSDs
---------

[thumbnail="screenshot/gui-ceph-osd-status.png"]

Ceph **O**bject **S**torage **D**aemons store objects for Ceph over the
network. It is recommended to use one OSD per physical disk.

[[pve_ceph_osd_create]]
Create OSDs
~~~~~~~~~~~

You can create an OSD either via the {pve} web-interface or via the CLI using
`pveceph`. For example:

[source,bash]
----
pveceph osd create /dev/sd[X]
----

TIP: We recommend a Ceph cluster with at least three nodes and at least 12
OSDs, evenly distributed among the nodes.

If the disk was in use before (for example, for ZFS or as an OSD), you first
need to zap all traces of that usage. To remove the partition table, boot
sector and any other OSD leftover, you can use the following command:

[source,bash]
----
ceph-volume lvm zap /dev/sd[X] --destroy
----

WARNING: The above command will destroy all data on the disk!

.Ceph Bluestore

Starting with the Ceph Kraken release, a new Ceph OSD storage type was
introduced called Bluestore
footnote:[Ceph Bluestore https://ceph.com/community/new-luminous-bluestore/].
This is the default when creating OSDs since Ceph Luminous.

[source,bash]
----
pveceph osd create /dev/sd[X]
----

.Block.db and block.wal

If you want to use a separate DB/WAL device for your OSDs, you can specify it
through the '-db_dev' and '-wal_dev' options. The WAL is placed with the DB, if
not specified separately.

[source,bash]
----
pveceph osd create /dev/sd[X] -db_dev /dev/sd[Y] -wal_dev /dev/sd[Z]
----

You can directly choose the size of those with the '-db_size' and '-wal_size'
parameters respectively. If they are not given, the following values (in order)
will be used:

* bluestore_block_{db,wal}_size from Ceph configuration...
** ... database, section 'osd'
** ... database, section 'global'
** ... file, section 'osd'
** ... file, section 'global'
* 10% (DB)/1% (WAL) of OSD size

NOTE: The DB stores BlueStore’s internal metadata, and the WAL is BlueStore’s
internal journal or write-ahead log. It is recommended to use a fast SSD or
NVRAM for better performance.
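
As a brief sketch (the device paths and the size are placeholders, not
recommendations; check the pveceph(1) man page for the exact parameter syntax
on your version), an OSD with an explicitly sized DB device could be created
like this:

[source,bash]
----
pveceph osd create /dev/sd[X] -db_dev /dev/sd[Y] -db_size <size-in-GiB>
----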

.Ceph Filestore

Before Ceph Luminous, Filestore was used as the default storage type for Ceph OSDs.
Starting with Ceph Nautilus, {pve} does not support creating such OSDs with
'pveceph' anymore. If you still want to create filestore OSDs, use
'ceph-volume' directly.

[source,bash]
----
ceph-volume lvm create --filestore --data /dev/sd[X] --journal /dev/sd[Y]
----

[[pve_ceph_osd_destroy]]
Destroy OSDs
~~~~~~~~~~~~

To remove an OSD via the GUI, first select a {PVE} node in the tree view and go
to the **Ceph -> OSD** panel. Then select the OSD to destroy and click the **OUT**
button. Once the OSD status has changed from `in` to `out`, click the **STOP**
button. Finally, after the status has changed from `up` to `down`, select
**Destroy** from the `More` drop-down menu.

To remove an OSD via the CLI, run the following commands.

[source,bash]
----
ceph osd out <ID>
systemctl stop ceph-osd@<ID>.service
----

NOTE: The first command instructs Ceph not to include the OSD in the data
distribution. The second command stops the OSD service. Until this time, no
data is lost.

The following command destroys the OSD. Specify the '-cleanup' option to
additionally destroy the partition table.

[source,bash]
----
pveceph osd destroy <ID>
----

WARNING: The above command will destroy all data on the disk!


[[pve_ceph_pools]]
Ceph Pools
----------

[thumbnail="screenshot/gui-ceph-pools.png"]

A pool is a logical group for storing objects. It holds a collection of objects,
known as **P**lacement **G**roups (`PG`, `pg_num`).


Create and Edit Pools
~~~~~~~~~~~~~~~~~~~~~

You can create and edit pools from the command line or the web-interface of any
{pve} host under **Ceph -> Pools**.

When no options are given, we set a default of **128 PGs**, a **size of 3
replicas** and a **min_size of 2 replicas**, to ensure no data loss occurs if
any OSD fails.

WARNING: **Do not set a min_size of 1**. A replicated pool with min_size of 1
allows I/O on an object when it has only 1 replica, which could lead to data
loss, incomplete PGs or unfound objects.

It is advised that you either enable the PG-Autoscaler or calculate the PG
number based on your setup. You can find the formula and the PG calculator
footnote:[PG calculator https://web.archive.org/web/20210301111112/http://ceph.com/pgcalc/] online. From Ceph Nautilus
onward, you can change the number of PGs
footnoteref:[placement_groups,Placement Groups
{cephdocs-url}/rados/operations/placement-groups/] after the setup.

The PG autoscaler footnoteref:[autoscaler,Automated Scaling
{cephdocs-url}/rados/operations/placement-groups/#automated-scaling] can
automatically scale the PG count for a pool in the background. Setting the
`Target Size` or `Target Ratio` advanced parameters helps the PG-Autoscaler to
make better decisions.

.Example for creating a pool over the CLI
[source,bash]
----
pveceph pool create <name> --add_storages
----

TIP: If you would also like to automatically define a storage for your
pool, keep the `Add as Storage' checkbox checked in the web-interface, or use the
command line option '--add_storages' at pool creation.

Pool Options
^^^^^^^^^^^^

[thumbnail="screenshot/gui-ceph-pool-create.png"]

The following options are available on pool creation, and partially also when
editing a pool.

Name:: The name of the pool. This must be unique and can't be changed afterwards.
Size:: The number of replicas per object. Ceph always tries to have this many
copies of an object. Default: `3`.
PG Autoscale Mode:: The automatic PG scaling mode footnoteref:[autoscaler] of
the pool. If set to `warn`, it produces a warning message when a pool
has a non-optimal PG count. Default: `warn`.
Add as Storage:: Configure a VM or container storage using the new pool.
Default: `true` (only visible on creation).

.Advanced Options
Min. Size:: The minimum number of replicas per object. Ceph will reject I/O on
the pool if a PG has less than this many replicas. Default: `2`.
Crush Rule:: The rule to use for mapping object placement in the cluster. These
rules define how data is placed within the cluster. See
xref:pve_ceph_device_classes[Ceph CRUSH & device classes] for information on
device-based rules.
# of PGs:: The number of placement groups footnoteref:[placement_groups] that
the pool should have at the beginning. Default: `128`.
Target Ratio:: The ratio of data that is expected in the pool. The PG
autoscaler uses the ratio relative to other ratio sets. It takes precedence
over the `target size` if both are set.
Target Size:: The estimated amount of data expected in the pool. The PG
autoscaler uses this size to estimate the optimal PG count.
Min. # of PGs:: The minimum number of placement groups. This setting is used to
fine-tune the lower bound of the PG count for that pool. The PG autoscaler
will not merge PGs below this threshold.
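
As a rough CLI sketch that mirrors some of the options above (the pool name is
a placeholder and the parameter names are assumed to match the options listed;
check the pveceph(1) man page for the exact spelling on your version):

[source,bash]
----
pveceph pool create <name> --size 3 --min_size 2 --pg_num 128 --add_storages
----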

Further information on Ceph pool handling can be found in the Ceph pool
operation footnote:[Ceph pool operation
{cephdocs-url}/rados/operations/pools/]
manual.


Destroy Pools
~~~~~~~~~~~~~

To destroy a pool via the GUI, select a node in the tree view and go to the
**Ceph -> Pools** panel. Select the pool to destroy and click the **Destroy**
button. To confirm the destruction of the pool, you need to enter the pool name.

Run the following command to destroy a pool. Specify the '-remove_storages'
option to also remove the associated storage.

[source,bash]
----
pveceph pool destroy <name>
----

NOTE: Pool deletion runs in the background and can take some time.
You will notice the data usage in the cluster decreasing throughout this
process.


PG Autoscaler
~~~~~~~~~~~~~

The PG autoscaler allows the cluster to consider the amount of (expected) data
stored in each pool and to choose the appropriate pg_num values automatically.
It is available since Ceph Nautilus.

You may need to activate the PG autoscaler module before adjustments can take
effect.

[source,bash]
----
ceph mgr module enable pg_autoscaler
----

The autoscaler is configured on a per pool basis and has the following modes:

[horizontal]
warn:: A health warning is issued if the suggested `pg_num` value differs too
much from the current value.
on:: The `pg_num` is adjusted automatically with no need for any manual
interaction.
off:: No automatic `pg_num` adjustments are made, and no warning will be issued
if the PG count is not optimal.
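
For example, a minimal sketch for switching an existing pool to automatic
scaling (the pool name is a placeholder):

[source,bash]
----
ceph osd pool set <pool-name> pg_autoscale_mode on
----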

The scaling factor can be adjusted to facilitate future data storage with the
`target_size`, `target_size_ratio` and the `pg_num_min` options.

WARNING: By default, the autoscaler considers tuning the PG count of a pool if
it is off by a factor of 3. This will lead to a considerable shift in data
placement and might introduce a high load on the cluster.

You can find a more in-depth introduction to the PG autoscaler on Ceph's Blog -
https://ceph.io/rados/new-in-nautilus-pg-merging-and-autotuning/[New in
Nautilus: PG merging and autotuning].


[[pve_ceph_device_classes]]
Ceph CRUSH & device classes
---------------------------

[thumbnail="screenshot/gui-ceph-config.png"]

The CRUSH footnote:[CRUSH
https://ceph.com/wp-content/uploads/2016/08/weil-crush-sc06.pdf] (**C**ontrolled
**R**eplication **U**nder **S**calable **H**ashing) algorithm is at the
foundation of Ceph.

CRUSH calculates where to store and retrieve data from. This has the
advantage that no central indexing service is needed. CRUSH works using a map of
OSDs, buckets (device locations) and rulesets (data replication) for pools.

NOTE: Further information can be found in the Ceph documentation, under the
section CRUSH map footnote:[CRUSH map {cephdocs-url}/rados/operations/crush-map/].

This map can be altered to reflect different replication hierarchies. The object
replicas can be separated (e.g., failure domains), while maintaining the desired
distribution.

A common configuration is to use different classes of disks for different Ceph
pools. For this reason, Ceph introduced device classes with Luminous, to
accommodate the need for easy ruleset generation.

The device classes can be seen in the 'ceph osd tree' output. These classes
represent their own root bucket, which can be seen with the below command.

[source, bash]
----
ceph osd crush tree --show-shadow
----

Example output from the above command:

[source, bash]
----
ID CLASS WEIGHT TYPE NAME
-16 nvme 2.18307 root default~nvme
-13 nvme 0.72769 host sumi1~nvme
12 nvme 0.72769 osd.12
-14 nvme 0.72769 host sumi2~nvme
13 nvme 0.72769 osd.13
-15 nvme 0.72769 host sumi3~nvme
14 nvme 0.72769 osd.14
-1 7.70544 root default
-3 2.56848 host sumi1
12 nvme 0.72769 osd.12
-5 2.56848 host sumi2
13 nvme 0.72769 osd.13
-7 2.56848 host sumi3
14 nvme 0.72769 osd.14
----

To instruct a pool to only distribute objects on a specific device class, you
first need to create a ruleset for the device class:

[source, bash]
----
ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <class>
----

[frame="none",grid="none", align="left", cols="30%,70%"]
|===
|<rule-name>|name of the rule, to connect with a pool (seen in GUI & CLI)
|<root>|which crush root it should belong to (default ceph root "default")
|<failure-domain>|at which failure-domain the objects should be distributed (usually host)
|<class>|what type of OSD backing store to use (e.g., nvme, ssd, hdd)
|===

Once the rule is in the CRUSH map, you can tell a pool to use the ruleset.

[source, bash]
----
ceph osd pool set <pool-name> crush_rule <rule-name>
----
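
As a concrete sketch (the rule and pool names are arbitrary examples, not
defaults), restricting a pool to SSD-backed OSDs could look like this:

[source, bash]
----
# create a replicated rule that only uses OSDs with device class "ssd"
ceph osd crush rule create-replicated ssd-only default host ssd
# let an existing pool place its data according to that rule
ceph osd pool set my-ssd-pool crush_rule ssd-only
----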

TIP: If the pool already contains objects, these must be moved accordingly.
Depending on your setup, this may introduce a big performance impact on your
cluster. As an alternative, you can create a new pool and move disks separately.


Ceph Client
-----------

[thumbnail="screenshot/gui-ceph-log.png"]

Following the setup from the previous sections, you can configure {pve} to use
such pools to store VM and Container images. Simply use the GUI to add a new
`RBD` storage (see section
xref:ceph_rados_block_devices[Ceph RADOS Block Devices (RBD)]).

You also need to copy the keyring to a predefined location for an external Ceph
cluster. If Ceph is installed on the Proxmox nodes itself, then this will be
done automatically.

NOTE: The filename needs to be `<storage_id>` + `.keyring`, where `<storage_id>` is
the expression after 'rbd:' in `/etc/pve/storage.cfg`. In the following example,
`my-ceph-storage` is the `<storage_id>`:

[source,bash]
----
mkdir /etc/pve/priv/ceph
cp /etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/my-ceph-storage.keyring
----

[[pveceph_fs]]
CephFS
------

Ceph also provides a filesystem, which runs on top of the same object storage as
RADOS block devices do. A **M**eta**d**ata **S**erver (`MDS`) is used to map the
RADOS backed objects to files and directories, allowing Ceph to provide a
POSIX-compliant, replicated filesystem. This allows you to easily configure a
clustered, highly available, shared filesystem. Ceph's Metadata Servers
guarantee that files are evenly distributed over the entire Ceph cluster. As a
result, even cases of high load will not overwhelm a single host, which can be
an issue with traditional shared filesystem approaches, for example `NFS`.

[thumbnail="screenshot/gui-node-ceph-cephfs-panel.png"]

{pve} supports both creating a hyper-converged CephFS and using an existing
xref:storage_cephfs[CephFS as storage] to save backups, ISO files, and container
templates.


[[pveceph_fs_mds]]
Metadata Server (MDS)
~~~~~~~~~~~~~~~~~~~~~

CephFS needs at least one Metadata Server to be configured and running, in order
to function. You can create an MDS through the {pve} web GUI's `Node
-> CephFS` panel or from the command line with:

----
pveceph mds create
----

Multiple metadata servers can be created in a cluster, but with the default
settings, only one can be active at a time. If an MDS or its node becomes
unresponsive (or crashes), another `standby` MDS will get promoted to `active`.
You can speed up the handover between the active and standby MDS by using
the 'hotstandby' parameter option on creation, or, if you have already created
it, you may set/add:

----
mds standby replay = true
----

in the respective MDS section of `/etc/pve/ceph.conf`. With this enabled, the
specified MDS will remain in a `warm` state, polling the active one, so that it
can take over faster in case of any issues.

NOTE: This active polling will have an additional performance impact on your
system and the active `MDS`.
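
As a small sketch (assuming the 'hotstandby' option mentioned above maps to a
`--hotstandby` CLI flag; check the pveceph(1) man page for the exact syntax on
your version), a hot-standby MDS could be created with:

----
pveceph mds create --hotstandby
----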

.Multiple Active MDS

Since Luminous (12.2.x), you can have multiple active metadata servers
running at once, but this is normally only useful if you have a high amount of
clients running in parallel. Otherwise the `MDS` is rarely the bottleneck in a
system. If you want to set this up, please refer to the Ceph documentation
footnote:[Configuring multiple active MDS daemons
{cephdocs-url}/cephfs/multimds/].

[[pveceph_fs_create]]
Create CephFS
~~~~~~~~~~~~~

With {pve}'s integration of CephFS, you can easily create a CephFS using the
web interface, CLI or an external API interface. Some prerequisites are required
for this to work:

.Prerequisites for a successful CephFS setup:
- xref:pve_ceph_install[Install Ceph packages] - if this was already done some
time ago, you may want to rerun it on an up-to-date system to
ensure that all CephFS related packages get installed.
- xref:pve_ceph_monitors[Setup Monitors]
- xref:pve_ceph_osds[Setup your OSDs]
- xref:pveceph_fs_mds[Setup at least one MDS]

After this is complete, you can simply create a CephFS through
either the Web GUI's `Node -> CephFS` panel or the command line tool `pveceph`,
for example:

----
pveceph fs create --pg_num 128 --add-storage
----

This creates a CephFS named 'cephfs', using a pool for its data named
'cephfs_data' with '128' placement groups and a pool for its metadata named
'cephfs_metadata' with one quarter of the data pool's placement groups (`32`).
Check the xref:pve_ceph_pools[{pve} managed Ceph pool chapter] or visit the
Ceph documentation for more information regarding an appropriate placement group
number (`pg_num`) for your setup footnoteref:[placement_groups].
Additionally, the '--add-storage' parameter will add the CephFS to the {pve}
storage configuration after it has been created successfully.

Destroy CephFS
~~~~~~~~~~~~~~

WARNING: Destroying a CephFS will render all of its data unusable. This cannot be
undone!

To completely and gracefully remove a CephFS, the following steps are
necessary:

* Disconnect every non-{PVE} client (e.g. unmount the CephFS in guests).
* Disable all related CephFS {PVE} storage entries (to prevent it from being
automatically mounted).
* Remove all used resources from guests (e.g. ISOs) that are on the CephFS you
want to destroy.
* Unmount the CephFS storages on all cluster nodes manually with
+
----
umount /mnt/pve/<STORAGE-NAME>
----
+
Where `<STORAGE-NAME>` is the name of the CephFS storage in your {PVE}.

* Now make sure that no metadata server (`MDS`) is running for that CephFS,
either by stopping or destroying them. This can be done through the web
interface or via the command line interface; for the latter, you would issue
the following command:
+
----
pveceph stop --service mds.NAME
----
+
to stop them, or
+
----
pveceph mds destroy NAME
----
+
to destroy them.
+
Note that standby servers will automatically be promoted to active when an
active `MDS` is stopped or removed, so it is best to first stop all standby
servers.

* Now you can destroy the CephFS with
+
----
pveceph fs destroy NAME --remove-storages --remove-pools
----
+
This will automatically destroy the underlying Ceph pools, as well as remove
the storages from the {PVE} configuration.

After these steps, the CephFS should be completely removed and, if you have
other CephFS instances, the stopped metadata servers can be started again
to act as standbys.

Ceph Maintenance
----------------

Replace OSDs
~~~~~~~~~~~~

One of the most common maintenance tasks in Ceph is to replace the disk of an
OSD. If a disk is already in a failed state, then you can go ahead and run
through the steps in xref:pve_ceph_osd_destroy[Destroy OSDs]. Ceph will recreate
the lost copies on the remaining OSDs if possible. This rebalancing will start as
soon as an OSD failure is detected or an OSD was actively stopped.

NOTE: With the default size/min_size (3/2) of a pool, recovery only starts when
`size + 1` nodes are available. The reason for this is that the Ceph object
balancer xref:pve_ceph_device_classes[CRUSH] defaults to a full node as
`failure domain'.

To replace a functioning disk from the GUI, go through the steps in
xref:pve_ceph_osd_destroy[Destroy OSDs]. The only addition is to wait until
the cluster shows 'HEALTH_OK' before stopping the OSD to destroy it.

On the command line, use the following commands:

----
ceph osd out osd.<id>
----

You can check with the command below if the OSD can be safely removed.

----
ceph osd safe-to-destroy osd.<id>
----

Once the above check tells you that it is safe to remove the OSD, you can
continue with the following commands:

----
systemctl stop ceph-osd@<id>.service
pveceph osd destroy <id>
----

Replace the old disk with the new one and use the same procedure as described
in xref:pve_ceph_osd_create[Create OSDs].

Trim/Discard
~~~~~~~~~~~~

It is good practice to run 'fstrim' (discard) regularly on VMs and containers.
This releases data blocks that the filesystem isn’t using anymore. It reduces
data usage and resource load. Most modern operating systems issue such discard
commands to their disks regularly. You only need to ensure that the Virtual
Machines enable the xref:qm_hard_disk_discard[disk discard option].
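
As a quick sketch, inside a Linux guest you can trigger a manual trim of all
mounted filesystems that support it with:

----
fstrim -av
----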

[[pveceph_scrub]]
Scrub & Deep Scrub
~~~~~~~~~~~~~~~~~~

Ceph ensures data integrity by 'scrubbing' placement groups. Ceph checks every
object in a PG for its health. There are two forms of scrubbing: daily
cheap metadata checks and weekly deep data checks. The weekly deep scrub reads
the objects and uses checksums to ensure data integrity. If a running scrub
interferes with business (performance) needs, you can adjust the time when
scrubs footnote:[Ceph scrubbing {cephdocs-url}/rados/configuration/osd-config-ref/#scrubbing]
are executed.
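
As a hedged example (the option names `osd_scrub_begin_hour` and
`osd_scrub_end_hour` are standard Ceph OSD settings, but verify them against
the scrubbing reference above before applying), scrubs could be restricted to
night hours with:

----
ceph config set osd osd_scrub_begin_hour 22
ceph config set osd osd_scrub_end_hour 6
----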


Ceph Monitoring and Troubleshooting
-----------------------------------

It is important to continuously monitor the health of a Ceph deployment from the
beginning, either by using the Ceph tools or by accessing
the status through the {pve} link:api-viewer/index.html[API].
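
For example, a minimal sketch for querying the Ceph status through the {pve}
API from a node's shell (the path is an assumption based on the standard {pve}
API layout; check the API viewer linked above for the exact path on your
version):

----
pvesh get /nodes/<node>/ceph/status
----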

The following Ceph commands can be used to see if the cluster is healthy
('HEALTH_OK'), if there are warnings ('HEALTH_WARN'), or even errors
('HEALTH_ERR'). If the cluster is in an unhealthy state, the status commands
below will also give you an overview of the current events and actions to take.

----
# single time output
pve# ceph -s
# continuously output status changes (press CTRL+C to stop)
pve# ceph -w
----
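
If the cluster reports a warning or error state, `ceph health detail` lists the
individual checks behind it, which is usually the quickest way to see what
needs attention:

----
pve# ceph health detail
----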

To get a more detailed view, every Ceph service has a log file under
`/var/log/ceph/`. If more detail is required, the log level can be
adjusted footnote:[Ceph log and debugging {cephdocs-url}/rados/troubleshooting/log-and-debug/].

You can find more information about troubleshooting
footnote:[Ceph troubleshooting {cephdocs-url}/rados/troubleshooting/]
a Ceph cluster on the official website.


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]