10 pveceph - Manage Ceph Services on Proxmox VE Nodes
15 include::pveceph.1-synopsis.adoc[]
21 Manage Ceph Services on Proxmox VE Nodes
22 ========================================
26 [thumbnail="screenshot/gui-ceph-status.png"]
28 {pve} unifies your compute and storage systems, i.e. you can use the same
29 physical nodes within a cluster for both computing (processing VMs and
30 containers) and replicated storage. The traditional silos of compute and
31 storage resources can be wrapped up into a single hyper-converged appliance.
Separate storage networks (SANs) and connections via network-attached storage
(NAS) disappear. With the integration of Ceph, an open source software-defined
34 storage platform, {pve} has the ability to run and manage Ceph storage directly
35 on the hypervisor nodes.
37 Ceph is a distributed object store and file system designed to provide
38 excellent performance, reliability and scalability.
40 .Some advantages of Ceph on {pve} are:
41 - Easy setup and management with CLI and GUI support
45 - Scalable to the exabyte level
46 - Setup pools with different performance and redundancy characteristics
47 - Data is replicated, making it fault tolerant
48 - Runs on economical commodity hardware
49 - No need for hardware RAID controllers
52 For small to mid sized deployments, it is possible to install a Ceph server for
53 RADOS Block Devices (RBD) directly on your {pve} cluster nodes, see
54 xref:ceph_rados_block_devices[Ceph RADOS Block Devices (RBD)]. Recent
55 hardware has plenty of CPU power and RAM, so running storage services
56 and VMs on the same node is possible.
58 To simplify management, we provide 'pveceph' - a tool to install and
59 manage {ceph} services on {pve} nodes.
.Ceph consists of multiple Daemons footnote:[Ceph intro http://docs.ceph.com/docs/luminous/start/intro/], for use as an RBD storage:
62 - Ceph Monitor (ceph-mon)
63 - Ceph Manager (ceph-mgr)
64 - Ceph OSD (ceph-osd; Object Storage Daemon)
TIP: We highly recommend that you get familiar with Ceph's architecture
footnote:[Ceph architecture http://docs.ceph.com/docs/luminous/architecture/]
and its vocabulary
footnote:[Ceph glossary http://docs.ceph.com/docs/luminous/glossary].
To build a hyper-converged Proxmox + Ceph cluster, there should be at least
three (preferably identical) servers for the setup.
78 Check also the recommendations from
79 http://docs.ceph.com/docs/luminous/start/hardware-recommendations/[Ceph's website].
Higher CPU core frequency reduces latency and should be preferred. As a simple
rule of thumb, you should assign a CPU core (or thread) to each Ceph service to
provide enough resources for stable and durable Ceph performance.
Especially in a hyper-converged setup, the memory consumption needs to be
carefully monitored. In addition to the intended workload from virtual machines
and containers, Ceph needs enough memory available to provide good and stable
performance. As a rule of thumb, for roughly 1 TiB of data, 1 GiB of memory
will be used by an OSD. OSD caching will use additional memory.
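For example, following this rule of thumb, a node with four 4 TiB OSDs should have
roughly 16 GiB of memory budgeted for the OSD daemons alone, on top of the memory
planned for the virtual machines and containers running on that node.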
We recommend a network bandwidth of at least 10 Gbps, used exclusively for
Ceph. A meshed network setup
footnote:[Full Mesh Network for Ceph {webwiki-url}Full_Mesh_Network_for_Ceph_Server]
is also an option if there are no 10 GbE switches available.
99 The volume of traffic, especially during recovery, will interfere with other
100 services on the same network and may even break the {pve} cluster stack.
Further, estimate your bandwidth needs. While one HDD might not saturate a 1 Gbps
link, multiple HDD OSDs per node can, and modern NVMe SSDs will even saturate
10 Gbps of bandwidth quickly. Deploying a network capable of even more bandwidth
will ensure that it isn't your bottleneck and won't be anytime soon; 25, 40 or
even 100 Gbps are possible.
109 When planning the size of your Ceph cluster, it is important to take the
recovery time into consideration. Especially with small clusters, recovery
might take a long time. It is recommended that you use SSDs instead of HDDs in small
112 setups to reduce recovery time, minimizing the likelihood of a subsequent
113 failure event during recovery.
In general, SSDs will provide more IOPS than spinning disks. This fact and the
higher cost may make a xref:pve_ceph_device_classes[class based] separation of
pools appealing. Another possibility to speed up OSDs is to use a faster disk
as journal or DB/**W**rite-**A**head-**L**og device, see
xref:pve_ceph_osds[creating Ceph OSDs]. If a faster disk is used for multiple
OSDs, a proper balance between OSD and WAL / DB (or journal) disk must be
selected, otherwise the faster disk becomes the bottleneck for all linked OSDs.
Aside from the disk type, Ceph performs best with an evenly sized and evenly
distributed amount of disks per node. For example, 4 x 500 GB disks in each node
is better than a mixed setup with a single 1 TB and three 250 GB disks.
One also needs to balance OSD count and single OSD capacity. More capacity
allows you to increase storage density, but it also means that a single OSD
failure forces Ceph to recover more data at once.
As Ceph handles data object redundancy and multiple parallel writes to disks
(OSDs) on its own, using a RAID controller normally doesn't improve
performance or availability. On the contrary, Ceph is designed to handle whole
disks on its own, without any abstraction in between. RAID controllers are not
designed for the Ceph use case and may complicate things and sometimes even
reduce performance, as their write and caching algorithms may interfere with
the ones from Ceph.

WARNING: Avoid RAID controllers; use host bus adapters (HBA) instead.
NOTE: The above recommendations should be seen as rough guidance for choosing
hardware. Therefore, it is still essential to adapt them to your specific needs,
test your setup and monitor health and performance continuously.
146 [[pve_ceph_install_wizard]]
147 Initial Ceph installation & configuration
148 -----------------------------------------
150 [thumbnail="screenshot/gui-node-ceph-install.png"]
With {pve} you have the benefit of an easy-to-use installation wizard
for Ceph. Click on one of your cluster nodes and navigate to the Ceph
section in the menu tree. If Ceph is not already installed, you will be
offered to install it now.
The wizard is divided into different sections, where each needs to be
finished successfully in order to use Ceph. After starting the installation,
the wizard will download and install all required packages from {pve}'s Ceph
repository.
162 After finishing the first step, you will need to create a configuration.
163 This step is only needed once per cluster, as this configuration is distributed
164 automatically to all remaining cluster members through {pve}'s clustered
165 xref:chapter_pmxcfs[configuration file system (pmxcfs)].
167 The configuration step includes the following settings:
* *Public Network:* You should set up a dedicated network for Ceph; this
  setting is required. Separating your Ceph traffic is highly recommended,
  because not doing so could lead to trouble with other latency-dependent
  services, e.g., cluster communication, and may decrease Ceph's performance.
174 [thumbnail="screenshot/gui-node-ceph-install-wizard-step2.png"]
176 * *Cluster Network:* As an optional step you can go even further and
177 separate the xref:pve_ceph_osds[OSD] replication & heartbeat traffic
178 as well. This will relieve the public network and could lead to
179 significant performance improvements especially in big clusters.
You have two more options which are considered advanced and therefore
should only be changed if you are an expert.
* *Number of replicas*: Defines how often an object is replicated.
* *Minimum replicas*: Defines the minimum number of required replicas
  for I/O to be marked as complete (see the configuration sketch below).
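As an illustration only, these two settings correspond to the standard Ceph
pool-default options; in a generated `/etc/pve/ceph.conf` they could look
roughly like the following snippet (assumed for illustration, not copied from
an actual wizard run):

----
[global]
     osd pool default size = 3
     osd pool default min size = 2
----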
Additionally, you need to choose your first monitor node; this is required.
That's it, you should see a success page as the last step, with further
instructions on how to proceed. You are now prepared to start using Ceph,
192 even though you will need to create additional xref:pve_ceph_monitors[monitors],
193 create some xref:pve_ceph_osds[OSDs] and at least one xref:pve_ceph_pools[pool].
The rest of this chapter will guide you through getting the most out of
your {pve}-based Ceph setup. This includes the aforementioned topics and
more, such as xref:pveceph_fs[CephFS], which is a very handy addition to your
new Ceph cluster.
201 Installation of Ceph Packages
202 -----------------------------
Use the {pve} Ceph installation wizard (recommended) or run the following
command on each node:
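[source,bash]
----
# install the Ceph packages on this node
pveceph install
----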
211 This sets up an `apt` package repository in
212 `/etc/apt/sources.list.d/ceph.list` and installs the required software.
215 Creating initial Ceph configuration
216 -----------------------------------
218 [thumbnail="screenshot/gui-ceph-config.png"]
220 Use the {pve} Ceph installation wizard (recommended) or run the
221 following command on one node:
225 pveceph init --network 10.10.10.0/24
This creates an initial configuration at `/etc/pve/ceph.conf` with a
dedicated network for Ceph. That file is automatically distributed to
all {pve} nodes by using xref:chapter_pmxcfs[pmxcfs]. The command also
creates a symbolic link from `/etc/ceph/ceph.conf` pointing to that file.
So you can simply run Ceph commands without the need to specify a
configuration file.
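For example, once at least one monitor is running, the standard Ceph tools can
be called on any node without explicitly passing a configuration file:

[source,bash]
----
# picks up /etc/ceph/ceph.conf (the symlink to /etc/pve/ceph.conf) automatically
ceph -s
----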
236 [[pve_ceph_monitors]]
237 Creating Ceph Monitors
238 ----------------------
240 [thumbnail="screenshot/gui-ceph-monitor.png"]
The Ceph Monitor (MON)
footnote:[Ceph Monitor http://docs.ceph.com/docs/luminous/start/intro/]
maintains a master copy of the cluster map. For high availability, you need to
have at least 3 monitors. One monitor will already be installed if you
used the installation wizard. You won't need more than 3 monitors, as long
as your cluster is small to midsize; only really large clusters will
require more than that.
On each node where you want to place a monitor (three monitors are recommended),
create it by using the 'Ceph -> Monitor' tab in the GUI or run:
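[source,bash]
----
# create (and start) a monitor on the local node
pveceph createmon
----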
259 This will also install the needed Ceph Manager ('ceph-mgr') by default. If you
260 do not want to install a manager, specify the '-exclude-manager' option.
264 Creating Ceph Manager
265 ----------------------
The Manager daemon runs alongside the monitors, providing an interface for
monitoring the cluster. Since the Ceph Luminous release, the
ceph-mgr footnote:[Ceph Manager http://docs.ceph.com/docs/luminous/mgr/] daemon
is required. During monitor installation, the Ceph Manager will be installed as
well.
NOTE: It is recommended to install the Ceph Manager on the monitor nodes. For
high availability, install more than one manager.
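A manager can also be created manually on the command line; as a sketch,
assuming a 'createmgr' subcommand analogous to 'createmon':

[source,bash]
----
# create a manager on the local node
pveceph createmgr
----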
286 [thumbnail="screenshot/gui-ceph-osd-status.png"]
You can create an OSD via the GUI or via the CLI as follows:
292 pveceph createosd /dev/sd[X]
TIP: We recommend a Ceph cluster with at least 12 OSDs, distributed evenly
among your (at least three) nodes, i.e. 4 OSDs on each node.
If the disk was in use before (e.g. for ZFS, RAID or another OSD), the following
command should be sufficient to remove the partition table, boot sector and any
other OSD leftover.
303 ceph-volume lvm zap /dev/sd[X] --destroy
306 WARNING: The above command will destroy data on the disk!
Starting with the Ceph Kraken release, a new Ceph OSD storage type was
introduced, the so-called Bluestore
footnote:[Ceph Bluestore http://ceph.com/community/new-luminous-bluestore/].
This is the default when creating OSDs since Ceph Luminous.
318 pveceph createosd /dev/sd[X]
321 Block.db and block.wal
322 ^^^^^^^^^^^^^^^^^^^^^^
324 If you want to use a separate DB/WAL device for your OSDs, you can specify it
325 through the '-db_dev' and '-wal_dev' options. The WAL is placed with the DB, if not
326 specified separately.
330 pveceph createosd /dev/sd[X] -db_dev /dev/sd[Y] -wal_dev /dev/sd[Z]
You can directly choose the size for those with the '-db_size' and '-wal_size'
parameters respectively. If they are not given, the following values (in order)
will be used:
337 * bluestore_block_{db,wal}_size from ceph configuration...
338 ** ... database, section 'osd'
339 ** ... database, section 'global'
340 ** ... file, section 'osd'
341 ** ... file, section 'global'
342 * 10% (DB)/1% (WAL) of OSD size
344 NOTE: The DB stores BlueStore’s internal metadata and the WAL is BlueStore’s
345 internal journal or write-ahead log. It is recommended to use a fast SSD or
346 NVRAM for better performance.
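As an illustration, the configuration fallback mentioned in the list above could
be satisfied by pinning the sizes in the 'osd' section of the configuration
file; the option names are the standard Ceph settings referenced above, while
the byte values (roughly 60 GiB for the DB and 2 GiB for the WAL) are only an
example:

----
[osd]
     bluestore_block_db_size = 64424509440
     bluestore_block_wal_size = 2147483648
----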
Before Ceph Luminous, Filestore was used as the default storage type for Ceph OSDs.
353 Starting with Ceph Nautilus, {pve} does not support creating such OSDs with
354 'pveceph' anymore. If you still want to create filestore OSDs, use
355 'ceph-volume' directly.
359 ceph-volume lvm create --filestore --data /dev/sd[X] --journal /dev/sd[Y]
366 [thumbnail="screenshot/gui-ceph-pools.png"]
368 A pool is a logical group for storing objects. It holds **P**lacement
369 **G**roups (`PG`, `pg_num`), a collection of objects.
When no options are given, we set a default of **128 PGs**, a **size of 3
replicas** and a **min_size of 2 replicas** for serving objects in a degraded
state.
NOTE: The default number of PGs works for 2-5 disks. Ceph throws a
'HEALTH_WARN' if you have too few or too many PGs in your cluster.
It is advised to calculate the PG number based on your setup; you can find
the formula and the PG calculator footnote:[PG calculator
http://ceph.com/pgcalc/] online. While PGs can be increased later on, they can
never be decreased.
You can create pools through the command line or in the GUI on each {pve} host
under **Ceph -> Pools**.
389 pveceph createpool <name>
If you would also like to automatically get a storage definition for your pool,
activate the checkbox "Add storages" in the GUI or use the command line option
'--add_storages' on pool creation (see the example below).
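For instance, a pool matching the defaults described above, including a storage
definition, could be created like this. This is only a sketch: the pool name is
arbitrary, '--pg_num' and '--add_storages' are mentioned in this chapter, while
'--size' and '--min_size' are assumed option names for the replica settings:

[source,bash]
----
pveceph createpool vm-images --size 3 --min_size 2 --pg_num 128 --add_storages
----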
Further information on Ceph pool handling can be found in the Ceph pool
operation footnote:[Ceph pool operation
http://docs.ceph.com/docs/luminous/rados/operations/pools/]
manual.
401 [[pve_ceph_device_classes]]
402 Ceph CRUSH & device classes
403 ---------------------------
404 The foundation of Ceph is its algorithm, **C**ontrolled **R**eplication
405 **U**nder **S**calable **H**ashing
406 (CRUSH footnote:[CRUSH https://ceph.com/wp-content/uploads/2016/08/weil-crush-sc06.pdf]).
CRUSH calculates where to store data to and where to retrieve it from; this has
the advantage that no central index service is needed. CRUSH works with a map of
OSDs, buckets (device locations) and rulesets (data replication) for pools.
412 NOTE: Further information can be found in the Ceph documentation, under the
413 section CRUSH map footnote:[CRUSH map http://docs.ceph.com/docs/luminous/rados/operations/crush-map/].
This map can be altered to reflect different replication hierarchies. The object
replicas can be separated (e.g. across failure domains), while maintaining the
desired distribution.
A common use case is to use different classes of disks for different Ceph pools.
For this reason, Ceph introduced device classes with Luminous, to
accommodate the need for easy ruleset generation.
423 The device classes can be seen in the 'ceph osd tree' output. These classes
424 represent their own root bucket, which can be seen with the below command.
428 ceph osd crush tree --show-shadow
Example output from the above command:
435 ID CLASS WEIGHT TYPE NAME
436 -16 nvme 2.18307 root default~nvme
437 -13 nvme 0.72769 host sumi1~nvme
438 12 nvme 0.72769 osd.12
439 -14 nvme 0.72769 host sumi2~nvme
440 13 nvme 0.72769 osd.13
441 -15 nvme 0.72769 host sumi3~nvme
442 14 nvme 0.72769 osd.14
443 -1 7.70544 root default
444 -3 2.56848 host sumi1
445 12 nvme 0.72769 osd.12
446 -5 2.56848 host sumi2
447 13 nvme 0.72769 osd.13
448 -7 2.56848 host sumi3
449 14 nvme 0.72769 osd.14
452 To let a pool distribute its objects only on a specific device class, you need
453 to create a ruleset with the specific class first.
457 ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <class>
[frame="none",grid="none", align="left", cols="30%,70%"]
|===
|<rule-name>|name of the rule, to connect with a pool (seen in GUI & CLI)
|<root>|which crush root it should belong to (default ceph root "default")
|<failure-domain>|at which failure-domain the objects should be distributed (usually host)
|<class>|what type of OSD backing store to use (eg. nvme, ssd, hdd)
|===
468 Once the rule is in the CRUSH map, you can tell a pool to use the ruleset.
472 ceph osd pool set <pool-name> crush_rule <rule-name>
TIP: If the pool already contains objects, all of these have to be moved
accordingly. Depending on your setup, this may introduce a big performance hit on
your cluster. As an alternative, you can create a new pool and move disks
separately into it.
484 [thumbnail="screenshot/gui-ceph-log.png"]
You can then configure {pve} to use such pools to store VM or
Container images. Simply use the GUI to add a new `RBD` storage (see
section xref:ceph_rados_block_devices[Ceph RADOS Block Devices (RBD)]).
You also need to copy the keyring to a predefined location for an external Ceph
cluster. If Ceph is installed on the Proxmox nodes themselves, then this will be
done automatically.
NOTE: The file name needs to be `<storage_id>` + `.keyring` - `<storage_id>` is
the expression after 'rbd:' in `/etc/pve/storage.cfg`, which is
`my-ceph-storage` in the following example:
500 mkdir /etc/pve/priv/ceph
501 cp /etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/my-ceph-storage.keyring
Ceph also provides a filesystem, running on top of the same object storage as
RADOS block devices do. A **M**eta**d**ata **S**erver (`MDS`) is used to map
the RADOS backed objects to files and directories, allowing to provide a
POSIX-compliant replicated filesystem. This allows one to have a clustered,
highly available, shared filesystem in an easy way if Ceph is already used. Its
Metadata Servers guarantee that files get balanced out over the whole Ceph
cluster; this way, even high load will not overload a single host, which can be
an issue with traditional shared filesystem approaches, like `NFS`, for
example.
{pve} supports both using an existing xref:storage_cephfs[CephFS as storage]
to save backups, ISO files or container templates, and creating a
hyper-converged CephFS itself.
524 Metadata Server (MDS)
525 ~~~~~~~~~~~~~~~~~~~~~
527 CephFS needs at least one Metadata Server to be configured and running to be
528 able to work. One can simply create one through the {pve} web GUI's `Node ->
529 CephFS` panel or on the command line with:
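[source,bash]
----
# create an MDS on the local node
pveceph mds create
----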
Multiple metadata servers can be created in a cluster, but with the default
settings only one can be active at any time. If an MDS, or its node, becomes
unresponsive (or crashes), another `standby` MDS will get promoted to `active`.
One can speed up the hand-over between the active and a standby MDS by using
the 'hotstandby' parameter option on create, or, if you have already created it,
you may set
543 mds standby replay = true
in the respective MDS section of ceph.conf. With this enabled, this specific MDS
will always poll the active one, so that it can take over faster, as it is in a
`warm` state. But naturally, the active polling will cause some additional
performance impact on your system and the active `MDS`.
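Alternatively, the option mentioned above can be passed at creation time; a
sketch, assuming it is exposed as a '--hotstandby' flag of the MDS create
command:

[source,bash]
----
pveceph mds create --hotstandby
----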
Since Luminous (12.2.x) you can also have multiple active metadata servers
running, but this is normally only useful for a high number of parallel clients,
as otherwise the `MDS` is seldom the bottleneck. If you want to set this up,
please refer to the Ceph documentation. footnote:[Configuring multiple active MDS
daemons http://docs.ceph.com/docs/luminous/cephfs/multimds/]
560 [[pveceph_fs_create]]
With {pve}'s CephFS integration, you can create a CephFS easily over the
Web GUI, the CLI or an external API interface. Some prerequisites are required
for this to work:
568 .Prerequisites for a successful CephFS setup:
- xref:pve_ceph_install[Install Ceph packages], if this was already done some
  time ago, you might want to rerun it on an up-to-date system to ensure that
  all CephFS related packages get installed as well.
- xref:pve_ceph_monitors[Setup Monitors]
- xref:pve_ceph_osds[Setup your OSDs]
574 - xref:pveceph_fs_mds[Setup at least one MDS]
Once this is all checked and done, you can simply create a CephFS through
either the Web GUI's `Node -> CephFS` panel or the command line tool `pveceph`,
for example with:
581 pveceph fs create --pg_num 128 --add-storage
584 This creates a CephFS named `'cephfs'' using a pool for its data named
585 `'cephfs_data'' with `128` placement groups and a pool for its metadata named
586 `'cephfs_metadata'' with one quarter of the data pools placement groups (`32`).
587 Check the xref:pve_ceph_pools[{pve} managed Ceph pool chapter] or visit the
588 Ceph documentation for more information regarding a fitting placement group
589 number (`pg_num`) for your setup footnote:[Ceph Placement Groups
590 http://docs.ceph.com/docs/luminous/rados/operations/placement-groups/].
591 Additionally, the `'--add-storage'' parameter will add the CephFS to the {pve}
592 storage configuration after it was created successfully.
WARNING: Destroying a CephFS will render all its data unusable; this cannot be
undone!
If you really want to destroy an existing CephFS, you first need to stop or
destroy all metadata servers (`MDS`). You can destroy them either over the Web
GUI or the command line interface, with:
605 pveceph mds destroy NAME
on each {pve} node hosting an MDS daemon.
Then, you can remove (destroy) the CephFS by issuing:
612 ceph fs rm NAME --yes-i-really-mean-it
on a single node hosting Ceph. After this, you may want to remove the created
data and metadata pools; this can be done either over the Web GUI or the CLI
with:
619 pveceph pool destroy NAME
623 Ceph monitoring and troubleshooting
624 -----------------------------------
A good start is to continuously monitor the Ceph health from the start of the
initial deployment. You can do so either through the Ceph tools themselves, or
by accessing the status through the {pve} link:api-viewer/index.html[API].
The following Ceph commands can be used to see if the cluster is healthy
('HEALTH_OK'), if there are warnings ('HEALTH_WARN'), or even errors
('HEALTH_ERR'). If the cluster is in an unhealthy state, the status commands
below will also give you an overview of the current events and actions to take.
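For example, using the standard Ceph CLI, which is available on any node with
Ceph installed:

[source,bash]
----
# print the cluster status a single time
ceph -s
# continuously output status changes (press CTRL+C to stop)
ceph -w
----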
To get a more detailed view, every Ceph service has a log file under
`/var/log/ceph/`, and if there is not enough detail, the log level can be
adjusted footnote:[Ceph log and debugging http://docs.ceph.com/docs/luminous/rados/troubleshooting/log-and-debug/].
645 You can find more information about troubleshooting
646 footnote:[Ceph troubleshooting http://docs.ceph.com/docs/luminous/rados/troubleshooting/]
647 a Ceph cluster on its website.
651 include::pve-copyright.adoc[]