-----------
endif::manvolnum[]
ifndef::manvolnum[]
-pveceph - Manage Ceph Services on Proxmox VE Nodes
-==================================================
+Manage Ceph Services on Proxmox VE Nodes
+========================================
+:pve-toplevel:
endif::manvolnum[]
+[thumbnail="gui-ceph-status.png"]
+
{pve} unifies your compute and storage systems, i.e. you can use the
same physical nodes within a cluster for both computing (processing
VMs and containers) and replicated storage. The traditional silos of
-To build a Proxmox Ceph Cluster there should be at least three (preferably)
-identical servers for the setup.
+To build a Proxmox Ceph Cluster, there should be at least three (preferably
+identical) servers for the setup.
-A 10Gb network, exclusively used for Ceph, is recommmended. A meshed
+A 10Gb network, exclusively used for Ceph, is recommended. A meshed
network setup is also an option if there are no 10Gb switches
-available, see {webwiki-url}Full_Mesh_Network_for_Ceph_Server[wiki] .
+available, see {webwiki-url}Full_Mesh_Network_for_Ceph_Server[wiki].
Check also the recommendations from
-http://docs.ceph.com/docs/jewel/start/hardware-recommendations/[Ceph's website].
+http://docs.ceph.com/docs/master/start/hardware-recommendations/[Ceph's website].
Installation of Ceph Packages
[source,bash]
----
-pveceph install -version jewel
+pveceph install
----
This sets up an `apt` package repository in
Creating initial Ceph configuration
-----------------------------------
+[thumbnail="gui-ceph-config.png"]
+
After installation of packages, you need to create an initial Ceph
configuration on just one node, based on your network (`10.10.10.0/24`
in the following example) dedicated for Ceph:
Ceph commands without the need to specify a configuration file.
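
For example, assuming the dedicated Ceph network `10.10.10.0/24` mentioned
above, the initial configuration can be created with the `pveceph init`
command (the network value is a placeholder for your own subnet):

[source,bash]
----
# create the initial Ceph configuration, bound to the dedicated Ceph network
pveceph init --network 10.10.10.0/24
----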
+[[pve_ceph_monitors]]
Creating Ceph Monitors
----------------------
+[thumbnail="gui-ceph-monitor.png"]
+
-On each node where a monitor is requested (three monitors are recommended)
-create it by using the "Ceph" item in the GUI or run.
+On each node where a monitor is requested (three monitors are recommended),
+create it by using the "Ceph" item in the GUI, or run:
----
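
As a sketch, the monitor is created on each chosen node with the `pveceph`
CLI:

[source,bash]
----
# create a Ceph monitor on this node
pveceph createmon
----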
+[[pve_ceph_osds]]
Creating Ceph OSDs
------------------
+[thumbnail="gui-ceph-osd-status.png"]
+
-via GUI or via CLI as follows:
+via the GUI or via the CLI as follows:
[source,bash]
highly recommended if you expect good performance.
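
As a sketch, an OSD with its journal on a separate, faster device can be
created as follows (both device names, `/dev/sdf` for the data disk and
`/dev/sdb` for the SSD journal, are placeholders for your own hardware):

[source,bash]
----
# create an OSD on /dev/sdf, placing its journal on the SSD /dev/sdb
pveceph createosd /dev/sdf -journal_dev /dev/sdb
----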
+[[pve_ceph_pools]]
Ceph Pools
----------
+[thumbnail="gui-ceph-pools.png"]
+
-The standard installation creates per default the pool 'rbd',
-additional pools can be created via GUI.
+The standard installation creates the pool 'rbd' by default;
+additional pools can be created via the GUI.
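
Pools can also be created on the command line; as a sketch (the pool name
and the replication and placement-group values below are examples, not
recommendations):

[source,bash]
----
# create a pool 'mypool' with 3 replicas (minimum 2) and 64 placement groups
pveceph createpool mypool -size 3 -min_size 2 -pg_num 64
----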
Ceph Client
-----------
+[thumbnail="gui-ceph-log.png"]
+
You can then configure {pve} to use such pools to store VM or
-Container images. Simply use the GUI too add a new `RBD` storage (see
+Container images. Simply use the GUI to add a new `RBD` storage (see
section xref:ceph_rados_block_devices[Ceph RADOS Block Devices (RBD)]).
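
Such a storage can also be added from the command line with `pvesm`; a
minimal sketch, assuming the default pool 'rbd' and a storage ID
'ceph-rbd' (further options, e.g. monitor hosts for external clusters,
depend on your setup):

[source,bash]
----
# register the Ceph pool 'rbd' as RBD storage 'ceph-rbd' for VM images
pvesm add rbd ceph-rbd --pool rbd --content images
----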