10 pveceph - Manage Ceph Services on Proxmox VE Nodes
15 include::pveceph.1-synopsis.adoc[]
21 Deploy Hyper-Converged Ceph Cluster
22 ===================================
26 [thumbnail="screenshot/gui-ceph-status.png"]
28 {pve} unifies your compute and storage systems, i.e. you can use the same
29 physical nodes within a cluster for both computing (processing VMs and
30 containers) and replicated storage. The traditional silos of compute and
31 storage resources can be wrapped up into a single hyper-converged appliance.
Separate storage networks (SANs) and connections via network attached storage
33 (NAS) disappear. With the integration of Ceph, an open source software-defined
34 storage platform, {pve} has the ability to run and manage Ceph storage directly
35 on the hypervisor nodes.
37 Ceph is a distributed object store and file system designed to provide
38 excellent performance, reliability and scalability.
40 .Some advantages of Ceph on {pve} are:
41 - Easy setup and management with CLI and GUI support
45 - Scalable to the exabyte level
46 - Setup pools with different performance and redundancy characteristics
47 - Data is replicated, making it fault tolerant
48 - Runs on economical commodity hardware
49 - No need for hardware RAID controllers
For small to mid-sized deployments, it is possible to install a Ceph server for
53 RADOS Block Devices (RBD) directly on your {pve} cluster nodes, see
54 xref:ceph_rados_block_devices[Ceph RADOS Block Devices (RBD)]. Recent
55 hardware has plenty of CPU power and RAM, so running storage services
56 and VMs on the same node is possible.
58 To simplify management, we provide 'pveceph' - a tool to install and
59 manage {ceph} services on {pve} nodes.
.Ceph consists of multiple Daemons, for use as an RBD storage:
62 - Ceph Monitor (ceph-mon)
63 - Ceph Manager (ceph-mgr)
64 - Ceph OSD (ceph-osd; Object Storage Daemon)
TIP: We highly recommend that you get familiar with Ceph
footnote:[Ceph intro {cephdocs-url}/start/intro/],
its architecture
footnote:[Ceph architecture {cephdocs-url}/architecture/]
and vocabulary
footnote:[Ceph glossary {cephdocs-url}/glossary].
To build a hyper-converged Proxmox + Ceph cluster, there should be at least
three (preferably identical) servers for the setup.
80 Check also the recommendations from
81 {cephdocs-url}/start/hardware-recommendations/[Ceph's website].
A higher CPU core frequency reduces latency and should be preferred. As a simple
85 rule of thumb, you should assign a CPU core (or thread) to each Ceph service to
86 provide enough resources for stable and durable Ceph performance.
89 Especially in a hyper-converged setup, the memory consumption needs to be
90 carefully monitored. In addition to the intended workload from virtual machines
and containers, Ceph needs enough memory available to provide excellent and
stable performance.
As a rule of thumb, for roughly **1 TiB of data, 1 GiB of memory** will be used
by an OSD. This is especially relevant during recovery, rebalancing or
backfilling. For example, a node whose OSDs together hold 24 TiB of data should
have roughly 24 GiB of memory available for this alone.
97 The daemon itself will use additional memory. The Bluestore backend of the
98 daemon requires by default **3-5 GiB of memory** (adjustable). In contrast, the
99 legacy Filestore backend uses the OS page cache and the memory consumption is
generally related to the number of PGs of an OSD daemon.
We recommend a network bandwidth of at least 10 GbE, which is used
104 exclusively for Ceph. A meshed network setup
105 footnote:[Full Mesh Network for Ceph {webwiki-url}Full_Mesh_Network_for_Ceph_Server]
106 is also an option if there are no 10 GbE switches available.
108 The volume of traffic, especially during recovery, will interfere with other
109 services on the same network and may even break the {pve} cluster stack.
111 Further, estimate your bandwidth needs. While one HDD might not saturate a 1 Gb
112 link, multiple HDD OSDs per node can, and modern NVMe SSDs will even saturate
113 10 Gbps of bandwidth quickly. Deploying a network capable of even more bandwidth
will ensure that it isn't your bottleneck and won't be anytime soon; 25, 40 or
even 100 Gbps are possible.
118 When planning the size of your Ceph cluster, it is important to take the
119 recovery time into consideration. Especially with small clusters, the recovery
120 might take long. It is recommended that you use SSDs instead of HDDs in small
121 setups to reduce recovery time, minimizing the likelihood of a subsequent
122 failure event during recovery.
In general, SSDs will provide more IOPS than spinning disks. This fact and the
125 higher cost may make a xref:pve_ceph_device_classes[class based] separation of
pools appealing. Another possibility to speed up OSDs is to use a faster disk
127 as journal or DB/**W**rite-**A**head-**L**og device, see
128 xref:pve_ceph_osds[creating Ceph OSDs]. If a faster disk is used for multiple
129 OSDs, a proper balance between OSD and WAL / DB (or journal) disk must be
130 selected, otherwise the faster disk becomes the bottleneck for all linked OSDs.
Aside from the disk type, Ceph performs best with an evenly sized and evenly
distributed amount of disks per node. For example, 4 x 500 GB disks in each node
are better than a mixed setup with a single 1 TB and three 250 GB disks.
One also needs to balance OSD count and single OSD capacity. More capacity
allows you to increase storage density, but it also means that a single OSD
failure forces Ceph to recover more data at once.
141 As Ceph handles data object redundancy and multiple parallel writes to disks
142 (OSDs) on its own, using a RAID controller normally doesn’t improve
143 performance or availability. On the contrary, Ceph is designed to handle whole
disks on its own, without any abstraction in between. RAID controllers are not
designed for the Ceph use case and may complicate things and sometimes even
reduce performance, as their write and caching algorithms may interfere with
the ones from Ceph.
WARNING: Avoid RAID controllers; use a host bus adapter (HBA) instead.
NOTE: The above recommendations should be seen as rough guidance for choosing
hardware. Therefore, it is still essential to adapt them to your specific needs.
You should test your setup and monitor health and performance continuously.
155 [[pve_ceph_install_wizard]]
156 Initial Ceph installation & configuration
157 -----------------------------------------
159 [thumbnail="screenshot/gui-node-ceph-install.png"]
161 With {pve} you have the benefit of an easy to use installation wizard
162 for Ceph. Click on one of your cluster nodes and navigate to the Ceph
section in the menu tree. If Ceph is not already installed, you will be
offered to install it now.
166 The wizard is divided into different sections, where each needs to be
167 finished successfully in order to use Ceph. After starting the installation
the wizard will download and install all the required packages from {pve}'s Ceph
repository.
171 After finishing the first step, you will need to create a configuration.
172 This step is only needed once per cluster, as this configuration is distributed
173 automatically to all remaining cluster members through {pve}'s clustered
174 xref:chapter_pmxcfs[configuration file system (pmxcfs)].
176 The configuration step includes the following settings:
* *Public Network:* You should set up a dedicated network for Ceph. This
setting is required. Separating your Ceph traffic is highly recommended,
because otherwise it could cause trouble with other latency-dependent services,
for example, cluster communication may decrease Ceph's performance.
183 [thumbnail="screenshot/gui-node-ceph-install-wizard-step2.png"]
185 * *Cluster Network:* As an optional step you can go even further and
186 separate the xref:pve_ceph_osds[OSD] replication & heartbeat traffic
187 as well. This will relieve the public network and could lead to
significant performance improvements, especially in large clusters.
You have two more options which are considered advanced and therefore should
only be changed if you are an expert.
* *Number of replicas*: Defines how often an object is replicated.
194 * *Minimum replicas*: Defines the minimum number of required replicas
195 for I/O to be marked as complete.
Additionally, you need to choose your first monitor node. This is required.
That's it. You should see a success page as the last step, with further
instructions on how to proceed. You are now ready to start using Ceph,
201 even though you will need to create additional xref:pve_ceph_monitors[monitors],
202 create some xref:pve_ceph_osds[OSDs] and at least one xref:pve_ceph_pools[pool].
The rest of this chapter will guide you through getting the most out of
your {pve} based Ceph setup. This includes the aforementioned tips and
more, such as xref:pveceph_fs[CephFS], which is a very handy addition to your
new Ceph cluster.

[[pve_ceph_install]]
Installation of Ceph Packages
-----------------------------

Use the {pve} Ceph installation wizard (recommended) or run the following
213 command on each node:
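
[source,bash]
----
# configures the Proxmox-provided Ceph repository and installs the Ceph packages
pveceph install
----
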
220 This sets up an `apt` package repository in
221 `/etc/apt/sources.list.d/ceph.list` and installs the required software.
224 Create initial Ceph configuration
225 ---------------------------------
227 [thumbnail="screenshot/gui-ceph-config.png"]
229 Use the {pve} Ceph installation wizard (recommended) or run the
230 following command on one node:
234 pveceph init --network 10.10.10.0/24
237 This creates an initial configuration at `/etc/pve/ceph.conf` with a
dedicated network for Ceph. That file is automatically distributed to
239 all {pve} nodes by using xref:chapter_pmxcfs[pmxcfs]. The command also
240 creates a symbolic link from `/etc/ceph/ceph.conf` pointing to that file.
So you can simply run Ceph commands without the need to specify a
configuration file.

[[pve_ceph_monitors]]
Ceph Monitor
------------

248 The Ceph Monitor (MON)
249 footnote:[Ceph Monitor {cephdocs-url}/start/intro/]
250 maintains a master copy of the cluster map. For high availability you need to
251 have at least 3 monitors. One monitor will already be installed if you
used the installation wizard. You won't need more than 3 monitors, as long
as your cluster is small to midsize. Only really large clusters will
require more than that.

[[pveceph_create_mon]]
Create Monitors
~~~~~~~~~~~~~~~

261 [thumbnail="screenshot/gui-ceph-monitor.png"]
263 On each node where you want to place a monitor (three monitors are recommended),
create it by using the 'Ceph -> Monitor' tab in the GUI or run:
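
[source,bash]
----
# creates a monitor on the node you are currently connected to
pveceph mon create
----
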

[[pveceph_destroy_mon]]
Destroy Monitors
~~~~~~~~~~~~~~~~

276 To remove a Ceph Monitor via the GUI first select a node in the tree view and
go to the **Ceph -> Monitor** panel. Select the MON and click the **Destroy**
button.
To remove a Ceph Monitor via the CLI, first connect to the node on which the
MON is running. Then execute the following command:
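
[source,bash]
----
# <monid> is typically the name of the node the monitor runs on
pveceph mon destroy <monid>
----
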
287 NOTE: At least three Monitors are needed for quorum.

Ceph Manager
------------

The Manager daemon runs alongside the monitors. It provides an interface to
monitor the cluster. Since the Ceph Luminous release, at least one ceph-mgr
footnote:[Ceph Manager {cephdocs-url}/mgr/] daemon is required.

[[pveceph_create_mgr]]
Create Manager
~~~~~~~~~~~~~~

302 Multiple Managers can be installed, but at any time only one Manager is active.
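You can create a new Manager on the node you are currently connected to, for
example with:

[source,bash]
----
pveceph mgr create
----
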
NOTE: It is recommended to install the Ceph Manager on the monitor nodes. For
high availability, install more than one Manager.

[[pveceph_destroy_mgr]]
Destroy Manager
~~~~~~~~~~~~~~~

317 To remove a Ceph Manager via the GUI first select a node in the tree view and
go to the **Ceph -> Monitor** panel. Select the Manager and click the
**Destroy** button.
To remove a Ceph Manager via the CLI, first connect to the node on which the
Manager is running. Then execute the following command:
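
[source,bash]
----
# <id> is typically the name of the node the Manager runs on
pveceph mgr destroy <id>
----
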
328 NOTE: A Ceph cluster can function without a Manager, but certain functions like
329 the cluster status or usage require a running Manager.

[[pve_ceph_osds]]
Ceph OSDs
---------

Ceph **O**bject **S**torage **D**aemons store objects for Ceph over the
network. It is recommended to use one OSD per physical disk.
338 NOTE: By default an object is 4 MiB in size.

[[pve_ceph_osd_create]]
Create OSDs
~~~~~~~~~~~

344 [thumbnail="screenshot/gui-ceph-osd-status.png"]
You can create an OSD via the GUI, or via the CLI as follows:
350 pveceph osd create /dev/sd[X]
TIP: We recommend a Ceph cluster with at least 12 OSDs, distributed evenly
among your nodes, with at least three nodes (4 OSDs on each node).
If the disk was in use before (e.g. for ZFS, RAID or as an OSD), the following
command should be sufficient to remove the partition table, boot sector and any
other OSD leftovers:
361 ceph-volume lvm zap /dev/sd[X] --destroy
364 WARNING: The above command will destroy data on the disk!
368 Starting with the Ceph Kraken release, a new Ceph OSD storage type was
introduced, the so-called Bluestore
370 footnote:[Ceph Bluestore https://ceph.com/community/new-luminous-bluestore/].
371 This is the default when creating OSDs since Ceph Luminous.
375 pveceph osd create /dev/sd[X]
378 .Block.db and block.wal
380 If you want to use a separate DB/WAL device for your OSDs, you can specify it
381 through the '-db_dev' and '-wal_dev' options. The WAL is placed with the DB, if
382 not specified separately.
386 pveceph osd create /dev/sd[X] -db_dev /dev/sd[Y] -wal_dev /dev/sd[Z]
389 You can directly choose the size for those with the '-db_size' and '-wal_size'
parameters respectively. If they are not given, the following values (in order)
will be used (see the example after the list):
393 * bluestore_block_{db,wal}_size from ceph configuration...
394 ** ... database, section 'osd'
395 ** ... database, section 'global'
396 ** ... file, section 'osd'
397 ** ... file, section 'global'
398 * 10% (DB)/1% (WAL) of OSD size
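
As a hypothetical example, the following requests an explicit 64 GiB DB volume
on the second device; the size parameters are expected in GiB (check the
'pveceph' man page for the exact semantics on your release):

[source,bash]
----
pveceph osd create /dev/sd[X] -db_dev /dev/sd[Y] -db_size 64
----
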
400 NOTE: The DB stores BlueStore’s internal metadata and the WAL is BlueStore’s
401 internal journal or write-ahead log. It is recommended to use a fast SSD or
402 NVRAM for better performance.
407 Before Ceph Luminous, Filestore was used as default storage type for Ceph OSDs.
408 Starting with Ceph Nautilus, {pve} does not support creating such OSDs with
409 'pveceph' anymore. If you still want to create filestore OSDs, use
410 'ceph-volume' directly.
414 ceph-volume lvm create --filestore --data /dev/sd[X] --journal /dev/sd[Y]

[[pve_ceph_osd_destroy]]
Destroy OSDs
~~~~~~~~~~~~

To remove an OSD via the GUI, first select a {pve} node in the tree view and go
to the **Ceph -> OSD** panel. Select the OSD to destroy. Next, click the **OUT**
button. Once the OSD status has changed from `in` to `out`, click the **STOP**
button. As soon as the status has changed from `up` to `down`, select **Destroy**
425 from the `More` drop-down menu.
427 To remove an OSD via the CLI run the following commands.
ceph osd out <ID>
systemctl stop ceph-osd@<ID>.service
433 NOTE: The first command instructs Ceph not to include the OSD in the data
distribution. The second command stops the OSD service. Until this time, no
data is lost.
437 The following command destroys the OSD. Specify the '-cleanup' option to
438 additionally destroy the partition table.
441 pveceph osd destroy <ID>
443 WARNING: The above command will destroy data on the disk!

[[pve_ceph_pools]]
Ceph Pools
----------

A pool is a logical group for storing objects. It holds **P**lacement
450 **G**roups (`PG`, `pg_num`), a collection of objects.
456 [thumbnail="screenshot/gui-ceph-pools.png"]
458 When no options are given, we set a default of **128 PGs**, a **size of 3
replicas** and a **min_size of 2 replicas** for serving objects in a degraded
state.
462 NOTE: The default number of PGs works for 2-5 disks. Ceph throws a
463 'HEALTH_WARNING' if you have too few or too many PGs in your cluster.
465 WARNING: **Do not set a min_size of 1**. A replicated pool with min_size of 1
466 allows I/O on an object when it has only 1 replica which could lead to data
467 loss, incomplete PGs or unfound objects.
469 It is advised that you calculate the PG number based on your setup. You can
470 find the formula and the PG calculator footnote:[PG calculator
471 https://ceph.com/pgcalc/] online. From Ceph Nautilus onward, you can change the
472 number of PGs footnoteref:[placement_groups,Placement Groups
473 {cephdocs-url}/rados/operations/placement-groups/] after the setup.
475 In addition to manual adjustment, the PG autoscaler
476 footnoteref:[autoscaler,Automated Scaling
477 {cephdocs-url}/rados/operations/placement-groups/#automated-scaling] can
478 automatically scale the PG count for a pool in the background.
You can create pools through the command line or on the GUI of each PVE host,
under **Ceph -> Pools**.
485 pveceph pool create <name>
If you would also like to automatically get a storage definition for your pool,
489 mark the checkbox "Add storages" in the GUI or use the command line option
490 '--add_storages' at pool creation.
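
For example, combining the creation command above with that option (the pool
name is a placeholder):

[source,bash]
----
pveceph pool create <name> --add_storages
----
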
493 Name:: The name of the pool. This must be unique and can't be changed afterwards.
494 Size:: The number of replicas per object. Ceph always tries to have this many
495 copies of an object. Default: `3`.
496 PG Autoscale Mode:: The automatic PG scaling mode footnoteref:[autoscaler] of
497 the pool. If set to `warn`, it produces a warning message when a pool
498 has a non-optimal PG count. Default: `warn`.
499 Add as Storage:: Configure a VM or container storage using the new pool.
503 Min. Size:: The minimum number of replicas per object. Ceph will reject I/O on
504 the pool if a PG has less than this many replicas. Default: `2`.
505 Crush Rule:: The rule to use for mapping object placement in the cluster. These
506 rules define how data is placed within the cluster. See
xref:pve_ceph_device_classes[Ceph CRUSH & device classes] for information on
device-based rules.
509 # of PGs:: The number of placement groups footnoteref:[placement_groups] that
510 the pool should have at the beginning. Default: `128`.
Target Size:: The estimated amount of data expected in the pool. The PG
512 autoscaler uses this size to estimate the optimal PG count.
513 Target Size Ratio:: The ratio of data that is expected in the pool. The PG
514 autoscaler uses the ratio relative to other ratio sets. It takes precedence
515 over the `target size` if both are set.
516 Min. # of PGs:: The minimum number of placement groups. This setting is used to
517 fine-tune the lower bound of the PG count for that pool. The PG autoscaler
518 will not merge PGs below this threshold.
520 Further information on Ceph pool handling can be found in the Ceph pool
521 operation footnote:[Ceph pool operation
{cephdocs-url}/rados/operations/pools/] manual.
529 To destroy a pool via the GUI select a node in the tree view and go to the
530 **Ceph -> Pools** panel. Select the pool to destroy and click the **Destroy**
531 button. To confirm the destruction of the pool you need to enter the pool name.
Run the following command to destroy a pool. Specify the '-remove_storages' option to
534 also remove the associated storage.
537 pveceph pool destroy <name>
540 NOTE: Deleting the data of a pool is a background task and can take some time.
541 You will notice that the data usage in the cluster is decreasing.
547 The PG autoscaler allows the cluster to consider the amount of (expected) data
548 stored in each pool and to choose the appropriate pg_num values automatically.
You may need to activate the PG autoscaler module before adjustments can take
effect:
554 ceph mgr module enable pg_autoscaler
557 The autoscaler is configured on a per pool basis and has the following modes:
560 warn:: A health warning is issued if the suggested `pg_num` value differs too
561 much from the current value.
on:: The `pg_num` is adjusted automatically with no need for any manual
interaction.
564 off:: No automatic `pg_num` adjustments are made, and no warning will be issued
565 if the PG count is far from optimal.
567 The scaling factor can be adjusted to facilitate future data storage, with the
568 `target_size`, `target_size_ratio` and the `pg_num_min` options.
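
For example, assuming a pool named `mypool`, the mode and an expected target
size can be set with the standard Ceph pool commands:

[source,bash]
----
# let the autoscaler adjust pg_num for this pool without asking
ceph osd pool set mypool pg_autoscale_mode on
# hint that the pool is expected to hold roughly 100 TiB of data
ceph osd pool set mypool target_size_bytes 100T
----
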
570 WARNING: By default, the autoscaler considers tuning the PG count of a pool if
571 it is off by a factor of 3. This will lead to a considerable shift in data
572 placement and might introduce a high load on the cluster.
574 You can find a more in-depth introduction to the PG autoscaler on Ceph's Blog -
575 https://ceph.io/rados/new-in-nautilus-pg-merging-and-autotuning/[New in
576 Nautilus: PG merging and autotuning].
579 [[pve_ceph_device_classes]]
580 Ceph CRUSH & device classes
581 ---------------------------
582 The foundation of Ceph is its algorithm, **C**ontrolled **R**eplication
583 **U**nder **S**calable **H**ashing
584 (CRUSH footnote:[CRUSH https://ceph.com/wp-content/uploads/2016/08/weil-crush-sc06.pdf]).
CRUSH calculates where to store and retrieve data from; this has the
advantage that no central index service is needed. CRUSH works with a map of
588 OSDs, buckets (device locations) and rulesets (data replication) for pools.
590 NOTE: Further information can be found in the Ceph documentation, under the
591 section CRUSH map footnote:[CRUSH map {cephdocs-url}/rados/operations/crush-map/].
593 This map can be altered to reflect different replication hierarchies. The object
replicas can be separated (e.g., failure domains), while maintaining the desired
distribution.
597 A common use case is to use different classes of disks for different Ceph pools.
For this reason, Ceph introduced device classes with Luminous, to
599 accommodate the need for easy ruleset generation.
601 The device classes can be seen in the 'ceph osd tree' output. These classes
602 represent their own root bucket, which can be seen with the below command.
606 ceph osd crush tree --show-shadow
Example output from the above command:
613 ID CLASS WEIGHT TYPE NAME
614 -16 nvme 2.18307 root default~nvme
615 -13 nvme 0.72769 host sumi1~nvme
616 12 nvme 0.72769 osd.12
617 -14 nvme 0.72769 host sumi2~nvme
618 13 nvme 0.72769 osd.13
619 -15 nvme 0.72769 host sumi3~nvme
620 14 nvme 0.72769 osd.14
621 -1 7.70544 root default
622 -3 2.56848 host sumi1
623 12 nvme 0.72769 osd.12
624 -5 2.56848 host sumi2
625 13 nvme 0.72769 osd.13
626 -7 2.56848 host sumi3
627 14 nvme 0.72769 osd.14
630 To let a pool distribute its objects only on a specific device class, you need
631 to create a ruleset with the specific class first.
635 ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <class>
638 [frame="none",grid="none", align="left", cols="30%,70%"]
640 |<rule-name>|name of the rule, to connect with a pool (seen in GUI & CLI)
641 |<root>|which crush root it should belong to (default ceph root "default")
642 |<failure-domain>|at which failure-domain the objects should be distributed (usually host)
643 |<class>|what type of OSD backing store to use (eg. nvme, ssd, hdd)
646 Once the rule is in the CRUSH map, you can tell a pool to use the ruleset.
650 ceph osd pool set <pool-name> crush_rule <rule-name>
653 TIP: If the pool already contains objects, all of these have to be moved
654 accordingly. Depending on your setup this may introduce a big performance hit
on your cluster. As an alternative, you can create a new pool and move disks
separately.
662 [thumbnail="screenshot/gui-ceph-log.png"]
664 You can then configure {pve} to use such pools to store VM or
Container images. Simply use the GUI to add a new `RBD` storage (see
666 section xref:ceph_rados_block_devices[Ceph RADOS Block Devices (RBD)]).
668 You also need to copy the keyring to a predefined location for an external Ceph
cluster. If Ceph is installed on the Proxmox nodes themselves, this will be
done automatically.
NOTE: The file name needs to be `<storage_id>` + `.keyring`, where `<storage_id>`
is the expression after 'rbd:' in `/etc/pve/storage.cfg`. In the following
example, it is `my-ceph-storage`:
678 mkdir /etc/pve/priv/ceph
679 cp /etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/my-ceph-storage.keyring

[[pveceph_fs]]
CephFS
------

Ceph also provides a filesystem, which runs on top of the same object storage as
RADOS block devices do. A **M**eta**d**ata **S**erver (`MDS`) is used to map
the RADOS backed objects to files and directories, allowing Ceph to provide a
POSIX-compliant, replicated filesystem. This allows you to easily have a
clustered, highly available, shared filesystem, if Ceph is already in use. Its
Metadata Servers guarantee that files are evenly distributed over the whole Ceph
cluster. This way, even a high load will not overwhelm a single host, which can
be an issue with traditional shared filesystem approaches, like `NFS`, for
example.
696 [thumbnail="screenshot/gui-node-ceph-cephfs-panel.png"]
{pve} supports both using an existing xref:storage_cephfs[CephFS as storage]
to save backups, ISO files or container templates, and creating a
hyper-converged CephFS itself.
[[pveceph_fs_mds]]
Metadata Server (MDS)
~~~~~~~~~~~~~~~~~~~~~
707 CephFS needs at least one Metadata Server to be configured and running to be
able to work. You can simply create one through the {pve} web GUI's `Node ->
CephFS` panel or on the command line with:
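
[source,bash]
----
# creates an MDS on the node you are currently connected to
pveceph mds create
----
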
715 Multiple metadata servers can be created in a cluster. But with the default
716 settings only one can be active at any time. If an MDS, or its node, becomes
717 unresponsive (or crashes), another `standby` MDS will get promoted to `active`.
One can speed up the hand-over between the active and a standby MDS by using
the 'hotstandby' parameter option on creation, or if you have already created
it, you may set/add:
723 mds standby replay = true
in the respective MDS section of `ceph.conf`. With this enabled, this specific MDS
727 will always poll the active one, so that it can take over faster as it is in a
728 `warm` state. But naturally, the active polling will cause some additional
729 performance impact on your system and active `MDS`.
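
For example, assuming the `--hotstandby` flag of 'pveceph mds create' on your
release, a standby-replay MDS could be created directly with:

[source,bash]
----
pveceph mds create --hotstandby
----
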
733 Since Luminous (12.2.x) you can also have multiple active metadata servers
running, but this is normally only useful for a high number of parallel clients,
as otherwise the `MDS` is seldom the bottleneck. If you want to set this up, please
736 refer to the ceph documentation. footnote:[Configuring multiple active MDS
737 daemons {cephdocs-url}/cephfs/multimds/]

[[pveceph_fs_create]]
Create CephFS
~~~~~~~~~~~~~

With {pve}'s CephFS integration, you can easily create a CephFS over the
Web GUI, the CLI or an external API interface. Some prerequisites are required
for this to work:
747 .Prerequisites for a successful CephFS setup:
- xref:pve_ceph_install[Install Ceph packages], if this was already done some
time ago, you might want to rerun it on an up-to-date system to ensure that
all CephFS related packages also get installed.
751 - xref:pve_ceph_monitors[Setup Monitors]
- xref:pve_ceph_osds[Setup your OSDs]
753 - xref:pveceph_fs_mds[Setup at least one MDS]
After all this has been checked and done, you can simply create a CephFS through
either the Web GUI's `Node -> CephFS` panel or the command line tool
`pveceph`, for example:
760 pveceph fs create --pg_num 128 --add-storage
763 This creates a CephFS named `'cephfs'' using a pool for its data named
764 `'cephfs_data'' with `128` placement groups and a pool for its metadata named
`'cephfs_metadata'' with one quarter of the data pool's placement groups (`32`).
766 Check the xref:pve_ceph_pools[{pve} managed Ceph pool chapter] or visit the
767 Ceph documentation for more information regarding a fitting placement group
768 number (`pg_num`) for your setup footnoteref:[placement_groups].
769 Additionally, the `'--add-storage'' parameter will add the CephFS to the {pve}
770 storage configuration after it has been created successfully.
WARNING: Destroying a CephFS will render all of its data unusable. This cannot be
undone!
778 If you really want to destroy an existing CephFS you first need to stop, or
destroy, all metadata servers (`MDS`). You can destroy them either over the Web
780 GUI or the command line interface, with:
783 pveceph mds destroy NAME
785 on each {pve} node hosting a MDS daemon.
Then, you can remove (destroy) the CephFS by issuing:
790 ceph fs rm NAME --yes-i-really-mean-it
on a single node hosting Ceph. After this, you may want to remove the created
data and metadata pools. This can be done either over the Web GUI or the CLI, with:
797 pveceph pool destroy NAME
807 One of the common maintenance tasks in Ceph is to replace a disk of an OSD. If
808 a disk is already in a failed state, then you can go ahead and run through the
809 steps in xref:pve_ceph_osd_destroy[Destroy OSDs]. Ceph will recreate those
810 copies on the remaining OSDs if possible. This rebalancing will start as soon
811 as an OSD failure is detected or an OSD was actively stopped.
813 NOTE: With the default size/min_size (3/2) of a pool, recovery only starts when
814 `size + 1` nodes are available. The reason for this is that the Ceph object
balancer xref:pve_ceph_device_classes[CRUSH] defaults to a full node as
`failure domain`.
818 To replace a still functioning disk, on the GUI go through the steps in
819 xref:pve_ceph_osd_destroy[Destroy OSDs]. The only addition is to wait until
820 the cluster shows 'HEALTH_OK' before stopping the OSD to destroy it.
822 On the command line use the following commands.
824 ceph osd out osd.<id>
827 You can check with the command below if the OSD can be safely removed.
829 ceph osd safe-to-destroy osd.<id>
Once the above check tells you that it is safe to remove the OSD, you can
continue with the following commands:
835 systemctl stop ceph-osd@<id>.service
836 pveceph osd destroy <id>
839 Replace the old disk with the new one and use the same procedure as described
840 in xref:pve_ceph_osd_create[Create OSDs].
844 It is a good measure to run 'fstrim' (discard) regularly on VMs or containers.
845 This releases data blocks that the filesystem isn’t using anymore. It reduces
846 data usage and resource load. Most modern operating systems issue such discard
847 commands to their disks regularly. You only need to ensure that the Virtual
848 Machines enable the xref:qm_hard_disk_discard[disk discard option].
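
For example, inside a Linux guest whose virtual disks have the discard option
enabled, a manual trim of all supported mounted filesystems can be triggered with:

[source,bash]
----
# run inside the VM or container, not on the host
fstrim -av
----
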
853 Ceph ensures data integrity by 'scrubbing' placement groups. Ceph checks every
object in a PG for its health. There are two forms of scrubbing: daily cheap
metadata checks and weekly deep data checks. The weekly deep scrub reads
856 the objects and uses checksums to ensure data integrity. If a running scrub
857 interferes with business (performance) needs, you can adjust the time when
scrubs footnote:[Ceph scrubbing {cephdocs-url}/rados/configuration/osd-config-ref/#scrubbing]
are executed.
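
As a sketch, assuming you want to restrict scrubbing to the night hours, the
`osd_scrub_begin_hour` and `osd_scrub_end_hour` options could be set
cluster-wide (see the linked scrubbing documentation for all available options):

[source,bash]
----
# allow scrubs to start only between 22:00 and 06:00
ceph config set osd osd_scrub_begin_hour 22
ceph config set osd osd_scrub_end_hour 6
----
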
862 Ceph monitoring and troubleshooting
863 -----------------------------------
A good start is to continuously monitor the Ceph health from the start of the
initial deployment. This can be done either through the Ceph tools themselves, or
by accessing the status through the {pve} link:api-viewer/index.html[API].
The following Ceph commands can be used to see if the cluster is healthy
869 ('HEALTH_OK'), if there are warnings ('HEALTH_WARN'), or even errors
870 ('HEALTH_ERR'). If the cluster is in an unhealthy state the status commands
871 below will also give you an overview of the current events and actions to take.
# get an overview of the current cluster status
ceph -s
# continuously output status changes (press CTRL+C to stop)
ceph -w
To get a more detailed view, every Ceph service has a log file under
881 `/var/log/ceph/` and if there is not enough detail, the log level can be
882 adjusted footnote:[Ceph log and debugging {cephdocs-url}/rados/troubleshooting/log-and-debug/].
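
For example, the log level of a single OSD could be raised at runtime and reset
again afterwards; the daemon name and values here are only placeholders, see the
linked documentation for the available subsystems and levels:

[source,bash]
----
# raise the log level of the 'osd' subsystem on OSD 0 ...
ceph tell osd.0 config set debug_osd 10/10
# ... and reset it to the default again
ceph tell osd.0 config set debug_osd 1/5
----
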
884 You can find more information about troubleshooting
885 footnote:[Ceph troubleshooting {cephdocs-url}/rados/troubleshooting/]
886 a Ceph cluster on the official website.
890 include::pve-copyright.adoc[]