[[chapter_pveceph]]
ifdef::manvolnum[]
pveceph(1)
==========
:pve-toplevel:

NAME
----

pveceph - Manage Ceph Services on Proxmox VE Nodes

SYNOPSIS
--------

include::pveceph.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
Manage Ceph Services on Proxmox VE Nodes
========================================
:pve-toplevel:
endif::manvolnum[]

[thumbnail="screenshot/gui-ceph-status.png"]

{pve} unifies your compute and storage systems, i.e. you can use the same
physical nodes within a cluster for both computing (processing VMs and
containers) and replicated storage. The traditional silos of compute and
storage resources can be wrapped up into a single hyper-converged appliance.
Separate storage networks (SANs) and connections via network attached
storage (NAS) disappear. With the integration of Ceph, an open source
software-defined storage platform, {pve} has the ability to run and manage
Ceph storage directly on the hypervisor nodes.

Ceph is a distributed object store and file system designed to provide
excellent performance, reliability and scalability.

.Some advantages of Ceph on {pve} are:
- Easy setup and management with CLI and GUI support
- Thin provisioning
- Snapshot support
- Self healing
- Scalable to the exabyte level
- Setup pools with different performance and redundancy characteristics
- Data is replicated, making it fault tolerant
- Runs on economical commodity hardware
- No need for hardware RAID controllers
- Open source

For small to mid-sized deployments, it is possible to install a Ceph server for
RADOS Block Devices (RBD) directly on your {pve} cluster nodes, see
xref:ceph_rados_block_devices[Ceph RADOS Block Devices (RBD)]. Recent
hardware has plenty of CPU power and RAM, so running storage services
and VMs on the same node is possible.

To simplify management, we provide 'pveceph' - a tool to install and
manage {ceph} services on {pve} nodes.

.Ceph consists of a couple of Daemons footnote:[Ceph intro https://docs.ceph.com/docs/{ceph_codename}/start/intro/], for use as RBD storage:
- Ceph Monitor (ceph-mon)
- Ceph Manager (ceph-mgr)
- Ceph OSD (ceph-osd; Object Storage Daemon)

TIP: We highly recommend getting familiar with Ceph's architecture
footnote:[Ceph architecture https://docs.ceph.com/docs/{ceph_codename}/architecture/]
and vocabulary
footnote:[Ceph glossary https://docs.ceph.com/docs/{ceph_codename}/glossary].


Precondition
------------

To build a hyper-converged Proxmox + Ceph Cluster, there should be at least
three (preferably identical) servers for the setup.

Also check the recommendations from
https://docs.ceph.com/docs/{ceph_codename}/start/hardware-recommendations/[Ceph's website].

.CPU
A higher CPU core frequency reduces latency and should be preferred. As a
simple rule of thumb, you should assign a CPU core (or thread) to each Ceph
service to provide enough resources for stable and durable Ceph performance.

.Memory
Especially in a hyper-converged setup, the memory consumption needs to be
carefully monitored. In addition to the intended workload from virtual machines
and containers, Ceph needs enough memory available to provide good and stable
performance. As a rule of thumb, an OSD will use roughly 1 GiB of memory for
every 1 TiB of data. OSD caching will use additional memory.
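
For example, a node with 4 x 4 TiB OSDs should plan for roughly 16 GiB of
memory for the OSDs alone, in addition to what the host, VMs and containers
need.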

.Network
We recommend a network bandwidth of at least 10 GbE, used exclusively
for Ceph. A meshed network setup
footnote:[Full Mesh Network for Ceph {webwiki-url}Full_Mesh_Network_for_Ceph_Server]
is also an option if there are no 10 GbE switches available.

The volume of traffic, especially during recovery, will interfere with other
services on the same network and may even break the {pve} cluster stack.

Furthermore, estimate your bandwidth needs. While one HDD might not saturate a
1 Gb link, multiple HDD OSDs per node can, and modern NVMe SSDs will even
saturate 10 Gbps of bandwidth quickly. Deploying a network capable of even more
bandwidth will ensure that it isn't your bottleneck and won't be anytime soon;
25, 40 or even 100 Gbps are possible.
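
To put these numbers into perspective: 10 Gbps corresponds to roughly
1.25 GB/s of raw throughput, which a handful of HDD OSDs can reach during
recovery and a single modern NVMe SSD can exceed on its own.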

.Disks
When planning the size of your Ceph cluster, it is important to take the
recovery time into consideration. Especially with small clusters, recovery
can take a long time. It is recommended that you use SSDs instead of HDDs in
small setups to reduce the recovery time, minimizing the likelihood of a
subsequent failure event during recovery.

In general, SSDs will provide more IOPS than spinning disks. This fact and the
higher cost may make a xref:pve_ceph_device_classes[class based] separation of
pools appealing. Another possibility to speed up OSDs is to use a faster disk
as a journal or DB/**W**rite-**A**head-**L**og device, see
xref:pve_ceph_osds[creating Ceph OSDs]. If a faster disk is used for multiple
OSDs, a proper balance between OSD count and WAL / DB (or journal) disk must be
selected, otherwise the faster disk becomes the bottleneck for all linked OSDs.

Aside from the disk type, Ceph performs best with an evenly sized and
distributed amount of disks per node. For example, 4 x 500 GB disks in each
node is better than a mixed setup with a single 1 TB and three 250 GB disks.

You also need to balance OSD count and single OSD capacity. More capacity
allows you to increase storage density, but it also means that a single OSD
failure forces Ceph to recover more data at once.

.Avoid RAID
As Ceph handles data object redundancy and multiple parallel writes to disks
(OSDs) on its own, using a RAID controller normally doesn't improve
performance or availability. On the contrary, Ceph is designed to handle whole
disks on its own, without any abstraction in between. RAID controllers are not
designed for the Ceph use case and may complicate things and sometimes even
reduce performance, as their write and caching algorithms may interfere with
the ones from Ceph.

WARNING: Avoid RAID controllers; use a host bus adapter (HBA) instead.

NOTE: The above recommendations should be seen as rough guidance for choosing
hardware. Therefore, it is still essential to adapt them to your specific
needs, test your setup and monitor health and performance continuously.


[[pve_ceph_install_wizard]]
Initial Ceph installation & configuration
-----------------------------------------

[thumbnail="screenshot/gui-node-ceph-install.png"]

With {pve} you have the benefit of an easy to use installation wizard
for Ceph. Click on one of your cluster nodes and navigate to the Ceph
section in the menu tree. If Ceph is not already installed, you will be
offered to do so now.

The wizard is divided into different sections, where each needs to be
finished successfully, in order to use Ceph. After starting the installation,
the wizard will download and install all the required packages from {pve}'s
Ceph repository.

After finishing the first step, you will need to create a configuration.
This step is only needed once per cluster, as this configuration is distributed
automatically to all remaining cluster members through {pve}'s clustered
xref:chapter_pmxcfs[configuration file system (pmxcfs)].

The configuration step includes the following settings:

* *Public Network:* You should set up a dedicated network for Ceph; this
setting is required. Separating your Ceph traffic is highly recommended,
because otherwise it could cause trouble with other latency-dependent
services, e.g., cluster communication, and may decrease Ceph's performance.

[thumbnail="screenshot/gui-node-ceph-install-wizard-step2.png"]

* *Cluster Network:* As an optional step, you can go even further and
separate the xref:pve_ceph_osds[OSD] replication & heartbeat traffic
as well. This will relieve the public network and could lead to
significant performance improvements, especially in large clusters.

There are two more options, which are considered advanced and therefore
should only be changed if you are an expert.

* *Number of replicas*: Defines how often an object is replicated.
* *Minimum replicas*: Defines the minimum number of required replicas
 for I/O to be marked as complete.

Additionally, you need to choose your first monitor node; this is required.

That's it. You should see a success page as the last step, with further
instructions on how to proceed. You are now prepared to start using Ceph,
even though you will still need to create additional
xref:pve_ceph_monitors[monitors], create some xref:pve_ceph_osds[OSDs] and at
least one xref:pve_ceph_pools[pool].

The rest of this chapter will guide you through getting the most out of
your {pve} based Ceph setup. This includes the aforementioned topics and
more, such as xref:pveceph_fs[CephFS], which is a very handy addition to your
new Ceph cluster.

[[pve_ceph_install]]
Installation of Ceph Packages
-----------------------------
Use the {pve} Ceph installation wizard (recommended) or run the following
command on each node:

[source,bash]
----
pveceph install
----

This sets up an `apt` package repository in
`/etc/apt/sources.list.d/ceph.list` and installs the required software.
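
If you want to check what was configured, you can inspect that repository
file; the exact entry depends on the chosen Ceph and Debian release:

[source,bash]
----
cat /etc/apt/sources.list.d/ceph.list
----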


Create initial Ceph configuration
---------------------------------

[thumbnail="screenshot/gui-ceph-config.png"]

Use the {pve} Ceph installation wizard (recommended) or run the
following command on one node:

[source,bash]
----
pveceph init --network 10.10.10.0/24
----

This creates an initial configuration at `/etc/pve/ceph.conf` with a
dedicated network for Ceph. This file is automatically distributed to
all {pve} nodes, using xref:chapter_pmxcfs[pmxcfs]. The command also
creates a symbolic link from `/etc/ceph/ceph.conf`, which points to that file.
Thus, you can simply run Ceph commands without the need to specify a
configuration file.
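
You can verify that the link is in place, for example with:

[source,bash]
----
ls -l /etc/ceph/ceph.conf
----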


[[pve_ceph_monitors]]
Ceph Monitor
------------
The Ceph Monitor (MON)
footnote:[Ceph Monitor https://docs.ceph.com/docs/{ceph_codename}/start/intro/]
maintains a master copy of the cluster map. For high availability, you need at
least 3 monitors. One monitor will already be installed if you
used the installation wizard. You won't need more than 3 monitors, as long
as your cluster is small to medium-sized. Only really large clusters will
need more than that.


[[pveceph_create_mon]]
Create Monitors
~~~~~~~~~~~~~~~

[thumbnail="screenshot/gui-ceph-monitor.png"]

On each node where you want to place a monitor (three monitors are
recommended), create one by using the 'Ceph -> Monitor' tab in the GUI or run:

[source,bash]
----
pveceph mon create
----
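
You can check afterwards that the new monitor has joined the quorum, for
example with:

[source,bash]
----
ceph mon stat
----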

[[pveceph_destroy_mon]]
Destroy Monitors
~~~~~~~~~~~~~~~~

To remove a Ceph Monitor via the GUI, first select a node in the tree view and
go to the **Ceph -> Monitor** panel. Select the MON and click the **Destroy**
button.

To remove a Ceph Monitor via the CLI, first connect to the node on which the
MON is running. Then execute the following command:
[source,bash]
----
pveceph mon destroy
----

NOTE: At least three Monitors are needed for quorum.


[[pve_ceph_manager]]
Ceph Manager
------------
The Manager daemon runs alongside the monitors. It provides an interface to
monitor the cluster. Since the release of Ceph Luminous, at least one ceph-mgr
footnote:[Ceph Manager https://docs.ceph.com/docs/{ceph_codename}/mgr/] daemon is
required.

[[pveceph_create_mgr]]
Create Manager
~~~~~~~~~~~~~~

Multiple Managers can be installed, but only one Manager is active at any given
time.

[source,bash]
----
pveceph mgr create
----

NOTE: It is recommended to install the Ceph Manager on the monitor nodes. For
high availability, install more than one manager.
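
The currently active Manager, as well as any standby Managers, are shown in
the `mgr:` line of the `ceph -s` output.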


[[pveceph_destroy_mgr]]
Destroy Manager
~~~~~~~~~~~~~~~

To remove a Ceph Manager via the GUI, first select a node in the tree view and
go to the **Ceph -> Monitor** panel. Select the Manager and click the
**Destroy** button.

To remove a Ceph Manager via the CLI, first connect to the node on which the
Manager is running. Then execute the following command:
[source,bash]
----
pveceph mgr destroy
----

NOTE: A Ceph cluster can function without a Manager, but certain functions like
the cluster status or usage require a running Manager.


[[pve_ceph_osds]]
Ceph OSDs
---------
Ceph **O**bject **S**torage **D**aemons store objects for Ceph over the
network. It is recommended to use one OSD per physical disk.

NOTE: By default an object is 4 MiB in size.

[[pve_ceph_osd_create]]
Create OSDs
~~~~~~~~~~~

[thumbnail="screenshot/gui-ceph-osd-status.png"]

You can create an OSD either via the GUI or via the CLI, as follows:

[source,bash]
----
pveceph osd create /dev/sd[X]
----

TIP: We recommend a Ceph cluster with at least 12 OSDs, distributed
evenly among at least three nodes (4 OSDs per node).

If the disk was used before (e.g. for ZFS, RAID or as an OSD), you first need
to remove the partition table, boot sector and any other OSD leftover. The
following command should be sufficient:

[source,bash]
----
ceph-volume lvm zap /dev/sd[X] --destroy
----

WARNING: The above command will destroy data on the disk!

.Ceph Bluestore

Starting with the Ceph Kraken release, a new Ceph OSD storage type was
introduced, the so-called Bluestore
footnote:[Ceph Bluestore https://ceph.com/community/new-luminous-bluestore/].
This is the default when creating OSDs since Ceph Luminous.

[source,bash]
----
pveceph osd create /dev/sd[X]
----

.Block.db and block.wal

If you want to use a separate DB/WAL device for your OSDs, you can specify it
through the '-db_dev' and '-wal_dev' options. The WAL is placed with the DB, if
it is not specified separately.

[source,bash]
----
pveceph osd create /dev/sd[X] -db_dev /dev/sd[Y] -wal_dev /dev/sd[Z]
----

You can directly choose the size for those with the '-db_size' and '-wal_size'
parameters respectively. If they are not given, the following values (in order)
will be used:

* bluestore_block_{db,wal}_size from Ceph configuration...
** ... database, section 'osd'
** ... database, section 'global'
** ... file, section 'osd'
** ... file, section 'global'
* 10% (DB)/1% (WAL) of OSD size

NOTE: The DB stores BlueStore's internal metadata, and the WAL is BlueStore's
internal journal or write-ahead log. It is recommended to use a fast SSD or
NVRAM for better performance.
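
For example, to give OSDs created later a fixed 60 GiB DB device via the
configuration file fallback, a minimal sketch (the value is given in bytes and
is purely illustrative):

----
[osd]
bluestore_block_db_size = 64424509440
----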


.Ceph Filestore

Before Ceph Luminous, Filestore was used as the default storage type for Ceph
OSDs. Starting with Ceph Nautilus, {pve} does not support creating such OSDs
with 'pveceph' anymore. If you still want to create filestore OSDs, use
'ceph-volume' directly.

[source,bash]
----
ceph-volume lvm create --filestore --data /dev/sd[X] --journal /dev/sd[Y]
----

[[pve_ceph_osd_destroy]]
Destroy OSDs
~~~~~~~~~~~~

To remove an OSD via the GUI, first select a {PVE} node in the tree view and go
to the **Ceph -> OSD** panel. Select the OSD to destroy. Next click the **OUT**
button. Once the OSD status has changed from `in` to `out`, click the **STOP**
button. As soon as the status has changed from `up` to `down`, select
**Destroy** from the `More` drop-down menu.

To remove an OSD via the CLI run the following commands.
[source,bash]
----
ceph osd out <ID>
systemctl stop ceph-osd@<ID>.service
----
NOTE: The first command instructs Ceph not to include the OSD in the data
distribution. The second command stops the OSD service. Until this time, no
data is lost.

The following command destroys the OSD. Specify the '-cleanup' option to
additionally destroy the partition table.
[source,bash]
----
pveceph osd destroy <ID>
----
WARNING: The above command will destroy data on the disk!


[[pve_ceph_pools]]
Ceph Pools
----------
A pool is a logical group for storing objects. It holds **P**lacement
**G**roups (`PG`, `pg_num`), a collection of objects.


Create Pools
~~~~~~~~~~~~

[thumbnail="screenshot/gui-ceph-pools.png"]

When no options are given, we set a default of **128 PGs**, a **size of 3
replicas** and a **min_size of 2 replicas** for serving objects in a degraded
state.

NOTE: The default number of PGs works for 2-5 disks. Ceph throws a
'HEALTH_WARNING' if you have too few or too many PGs in your cluster.

It is advised to calculate the PG number based on your setup. You can find
the formula and the PG calculator footnote:[PG calculator
https://ceph.com/pgcalc/] online. While PGs can be increased later on, they can
never be decreased.
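
The commonly cited rule of thumb behind the calculator is roughly
`(OSD count * 100) / size`, rounded up to a power of two. For example, with 12
OSDs and 3 replicas this gives 400, which rounds up to a target of `512` PGs
for all pools combined.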


You can create pools through the command line or on the GUI of each {pve} host
under **Ceph -> Pools**.

[source,bash]
----
pveceph pool create <name>
----

If you would like to automatically also get a storage definition for your pool,
mark the checkbox "Add storages" in the GUI, or use the command line option
'--add_storages' at pool creation.
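
For example, to create a pool with a non-default PG count and a matching
storage definition in one step (a sketch; see `pveceph pool create --help` for
the options available in your version):

[source,bash]
----
pveceph pool create <name> --pg_num 256 --add_storages
----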

Further information on Ceph pool handling can be found in the Ceph pool
operation footnote:[Ceph pool operation
https://docs.ceph.com/docs/{ceph_codename}/rados/operations/pools/]
manual.


Destroy Pools
~~~~~~~~~~~~~

To destroy a pool via the GUI, select a node in the tree view and go to the
**Ceph -> Pools** panel. Select the pool to destroy and click the **Destroy**
button. To confirm the destruction of the pool, you need to enter the pool
name.

Run the following command to destroy a pool. Specify the '-remove_storages'
option to also remove the associated storage.
[source,bash]
----
pveceph pool destroy <name>
----

NOTE: Deleting the data of a pool is a background task and can take some time.
You will notice that the data usage in the cluster is decreasing.

[[pve_ceph_device_classes]]
Ceph CRUSH & device classes
---------------------------
The foundation of Ceph is its algorithm, **C**ontrolled **R**eplication
**U**nder **S**calable **H**ashing
(CRUSH footnote:[CRUSH https://ceph.com/wp-content/uploads/2016/08/weil-crush-sc06.pdf]).

CRUSH calculates where to store and retrieve data from. This has the
advantage that no central index service is needed. CRUSH works with a map of
OSDs, buckets (device locations) and rulesets (data replication) for pools.

NOTE: Further information can be found in the Ceph documentation, under the
section CRUSH map footnote:[CRUSH map https://docs.ceph.com/docs/{ceph_codename}/rados/operations/crush-map/].

This map can be altered to reflect different replication hierarchies. The
object replicas can be separated (e.g., across failure domains), while
maintaining the desired distribution.

A common use case is to use different classes of disks for different Ceph
pools. For this reason, Ceph introduced device classes with Luminous, to
accommodate the need for easy ruleset generation.

The device classes can be seen in the 'ceph osd tree' output. These classes
represent their own root bucket, which can be seen with the below command.

[source, bash]
----
ceph osd crush tree --show-shadow
----

Example output from the above command:

[source, bash]
----
ID CLASS WEIGHT TYPE NAME
-16 nvme 2.18307 root default~nvme
-13 nvme 0.72769 host sumi1~nvme
 12 nvme 0.72769 osd.12
-14 nvme 0.72769 host sumi2~nvme
 13 nvme 0.72769 osd.13
-15 nvme 0.72769 host sumi3~nvme
 14 nvme 0.72769 osd.14
 -1 7.70544 root default
 -3 2.56848 host sumi1
 12 nvme 0.72769 osd.12
 -5 2.56848 host sumi2
 13 nvme 0.72769 osd.13
 -7 2.56848 host sumi3
 14 nvme 0.72769 osd.14
----

To let a pool distribute its objects only on a specific device class, you
first need to create a ruleset for that class:

[source, bash]
----
ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <class>
----

[frame="none",grid="none", align="left", cols="30%,70%"]
|===
|<rule-name>|name of the rule, to connect with a pool (seen in GUI & CLI)
|<root>|which crush root it should belong to (default Ceph root "default")
|<failure-domain>|at which failure-domain the objects should be distributed (usually host)
|<class>|what type of OSD backing store to use (e.g., nvme, ssd, hdd)
|===

Once the rule is in the CRUSH map, you can tell a pool to use the ruleset.

[source, bash]
----
ceph osd pool set <pool-name> crush_rule <rule-name>
----
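
For example, to store a hypothetical pool `vm-ssd` only on OSDs with the `ssd`
device class, spread across hosts:

[source, bash]
----
ceph osd crush rule create-replicated ssd-only default host ssd
ceph osd pool set vm-ssd crush_rule ssd-only
----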

TIP: If the pool already contains objects, all of these have to be moved
accordingly. Depending on your setup, this may introduce a big performance hit
on your cluster. As an alternative, you can create a new pool and move disks
separately.


Ceph Client
-----------

[thumbnail="screenshot/gui-ceph-log.png"]

You can then configure {pve} to use such pools to store VM or
Container images. Simply use the GUI to add a new `RBD` storage (see
section xref:ceph_rados_block_devices[Ceph RADOS Block Devices (RBD)]).

You also need to copy the keyring to a predefined location for an external Ceph
cluster. If Ceph is installed on the Proxmox nodes itself, then this will be
done automatically.

NOTE: The file name needs to be `<storage_id>` + `.keyring`, where
`<storage_id>` is the expression after 'rbd:' in `/etc/pve/storage.cfg`. In
the following example, it is `my-ceph-storage`:

[source,bash]
----
mkdir /etc/pve/priv/ceph
cp /etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/my-ceph-storage.keyring
----

[[pveceph_fs]]
CephFS
------

Ceph also provides a filesystem, running on top of the same object storage as
RADOS block devices do. A **M**eta**d**ata **S**erver (`MDS`) is used to map
the RADOS backed objects to files and directories, allowing Ceph to provide a
POSIX-compliant, replicated filesystem. This allows you to easily have a
clustered, highly available, shared filesystem, if Ceph is already in use. Its
Metadata Servers guarantee that files are evenly distributed over the whole
Ceph cluster; this way, even high load will not overload a single host, which
can be an issue with traditional shared filesystem approaches, like `NFS`, for
example.

[thumbnail="screenshot/gui-node-ceph-cephfs-panel.png"]

{pve} supports both using an existing xref:storage_cephfs[CephFS as storage]
to save backups, ISO files or container templates, and creating a
hyper-converged CephFS itself.


[[pveceph_fs_mds]]
Metadata Server (MDS)
~~~~~~~~~~~~~~~~~~~~~

CephFS needs at least one Metadata Server to be configured and running in
order to work. You can simply create one through the {pve} web GUI's `Node ->
CephFS` panel or on the command line with:

----
pveceph mds create
----

Multiple metadata servers can be created in a cluster, but with the default
settings, only one can be active at any time. If an MDS, or its node, becomes
unresponsive (or crashes), another `standby` MDS will get promoted to `active`.
You can speed up the hand-over between the active and a standby MDS by using
the 'hotstandby' parameter option on creation, or if you have already created
it you may set/add:

----
mds standby replay = true
----

in the respective MDS section of `ceph.conf`. With this enabled, this specific
MDS will always poll the active one, so that it can take over faster, as it is
in a `warm` state. But naturally, the active polling will cause some additional
performance impact on your system and the active `MDS`.
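
For example, a minimal sketch of such a section, assuming a hypothetical MDS
with the ID `sumi1`:

----
[mds.sumi1]
mds standby replay = true
----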

.Multiple Active MDS

Since Luminous (12.2.x) you can also have multiple active metadata servers
running, but this is normally only useful for a high count of parallel clients,
as otherwise the `MDS` is seldom the bottleneck. If you want to set this up,
please refer to the Ceph documentation. footnote:[Configuring multiple active
MDS daemons https://docs.ceph.com/docs/{ceph_codename}/cephfs/multimds/]

[[pveceph_fs_create]]
Create CephFS
~~~~~~~~~~~~~

With {pve}'s integration of CephFS, you can easily create a CephFS via the
Web GUI, the CLI or an external API interface. Some prerequisites are required
for this to work:

.Prerequisites for a successful CephFS setup:
- xref:pve_ceph_install[Install Ceph packages] - if this was already done some
 time ago, you may want to rerun it on an up-to-date system to ensure that
 all CephFS related packages get installed.
- xref:pve_ceph_monitors[Setup Monitors]
- xref:pve_ceph_osds[Setup your OSDs]
- xref:pveceph_fs_mds[Setup at least one MDS]

After this is checked and done, you can simply create a CephFS through either
the Web GUI's `Node -> CephFS` panel or the command line tool `pveceph`, for
example with:

----
pveceph fs create --pg_num 128 --add-storage
----

This creates a CephFS named `cephfs`, using a pool for its data named
`cephfs_data` with `128` placement groups and a pool for its metadata named
`cephfs_metadata` with one quarter of the data pool's placement groups (`32`).
Check the xref:pve_ceph_pools[{pve} managed Ceph pool chapter] or visit the
Ceph documentation for more information regarding a fitting placement group
number (`pg_num`) for your setup footnote:[Ceph Placement Groups
https://docs.ceph.com/docs/{ceph_codename}/rados/operations/placement-groups/].
Additionally, the `--add-storage` parameter will add the CephFS to the {pve}
storage configuration after it has been created successfully.

Destroy CephFS
~~~~~~~~~~~~~~

WARNING: Destroying a CephFS will render all of its data unusable. This cannot
be undone!

If you really want to destroy an existing CephFS, you first need to stop, or
destroy, all metadata servers (`MDS`). You can destroy them either via the Web
GUI or via the command line interface, with:

----
pveceph mds destroy NAME
----
on each {pve} node hosting an MDS daemon.

Then, you can remove (destroy) the CephFS by issuing:

----
ceph fs rm NAME --yes-i-really-mean-it
----
on a single node hosting Ceph. After this, you may want to remove the created
data and metadata pools; this can be done either via the Web GUI or the CLI,
with:

----
pveceph pool destroy NAME
----



Ceph maintenance
----------------

Replace OSDs
~~~~~~~~~~~~

One of the most common maintenance tasks in Ceph is to replace the disk of an
OSD. If a disk is already in a failed state, then you can go ahead and run
through the steps in xref:pve_ceph_osd_destroy[Destroy OSDs]. Ceph will
recreate those copies on the remaining OSDs if possible. This rebalancing will
start as soon as an OSD failure is detected or an OSD was actively stopped.

NOTE: With the default size/min_size (3/2) of a pool, recovery only starts when
`size + 1` nodes are available. The reason for this is that the Ceph object
balancer xref:pve_ceph_device_classes[CRUSH] defaults to a full node as
'failure domain'.

To replace a still functioning disk, on the GUI go through the steps in
xref:pve_ceph_osd_destroy[Destroy OSDs]. The only addition is to wait until
the cluster shows 'HEALTH_OK' before stopping the OSD to destroy it.

On the command line, use the following command:
----
ceph osd out osd.<id>
----

You can check with the command below if the OSD can be safely removed.
----
ceph osd safe-to-destroy osd.<id>
----

Once the above check tells you that it is safe to remove the OSD, you can
continue with the following commands:
----
systemctl stop ceph-osd@<id>.service
pveceph osd destroy <id>
----

Replace the old disk with the new one and use the same procedure as described
in xref:pve_ceph_osd_create[Create OSDs].

Trim/Discard
~~~~~~~~~~~~
It is good practice to run 'fstrim' (discard) regularly on VMs and containers.
This releases data blocks that the filesystem isn't using anymore. It reduces
data usage and the resource load. Most modern operating systems issue such
discard commands to their disks regularly. You only need to ensure that the
virtual machines enable the xref:qm_hard_disk_discard[disk discard option].
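
Inside a Linux guest, a manual run looks like this (many distributions also
ship a periodic `fstrim.timer` that can be enabled instead):

----
fstrim -av
----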

[[pveceph_scrub]]
Scrub & Deep Scrub
~~~~~~~~~~~~~~~~~~
Ceph ensures data integrity by 'scrubbing' placement groups. Ceph checks every
object in a PG for its health. There are two forms of scrubbing: daily
cheap metadata checks and weekly deep data checks. The weekly deep scrub reads
the objects and uses checksums to ensure data integrity. If a running scrub
interferes with business (performance) needs, you can adjust the time when
scrubs footnote:[Ceph scrubbing https://docs.ceph.com/docs/{ceph_codename}/rados/configuration/osd-config-ref/#scrubbing]
are executed.
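
For example, to restrict scrubbing to a nightly window, a sketch using the
'osd scrub begin hour' and 'osd scrub end hour' options (see the referenced
scrubbing documentation for all related options):

----
[osd]
osd scrub begin hour = 22
osd scrub end hour = 6
----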


Ceph monitoring and troubleshooting
-----------------------------------
A good start is to continuously monitor the Ceph health from the start of the
initial deployment, either through the Ceph tools themselves or by accessing
the status through the {pve} link:api-viewer/index.html[API].

The following Ceph commands can be used to see if the cluster is healthy
('HEALTH_OK'), if there are warnings ('HEALTH_WARN'), or even errors
('HEALTH_ERR'). If the cluster is in an unhealthy state, the status commands
below will also give you an overview of the current events and actions to
take.

----
# single time output
pve# ceph -s
# continuously output status changes (press CTRL+C to stop)
pve# ceph -w
----

To get a more detailed view, every Ceph service has a log file under
`/var/log/ceph/`, and if there is not enough detail, the log level can be
adjusted footnote:[Ceph log and debugging https://docs.ceph.com/docs/{ceph_codename}/rados/troubleshooting/log-and-debug/].
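
For example, to temporarily raise the log level of a single OSD at runtime (a
sketch; see the referenced documentation for the available subsystems and
levels):

----
ceph tell osd.0 injectargs '--debug-osd 0/5'
----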

You can find more information about troubleshooting
footnote:[Ceph troubleshooting https://docs.ceph.com/docs/{ceph_codename}/rados/troubleshooting/]
a Ceph cluster on the official website.


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]