X-Git-Url: https://git.proxmox.com/?a=blobdiff_plain;f=pveceph.adoc;h=4132545f0ff9faa09c847c340faaf0e1c4363a66;hb=59b586cb297de3ef025910c6b8f9bf6375141b3e;hp=21a496560932e030fc31f84ce50d0f8427f6ba76;hpb=a474ca1f748336aaa3f1ee8991eb79452e726a9f;p=pve-docs.git

diff --git a/pveceph.adoc b/pveceph.adoc
index 21a4965..4132545 100644
--- a/pveceph.adoc
+++ b/pveceph.adoc
@@ -23,7 +23,7 @@ Manage Ceph Services on Proxmox VE Nodes
 :pve-toplevel:
 endif::manvolnum[]

-[thumbnail="gui-ceph-status.png"]
+[thumbnail="screenshot/gui-ceph-status.png"]

 {pve} unifies your compute and storage systems, i.e. you can use the same
 physical nodes within a cluster for both computing (processing VMs and
@@ -37,18 +37,16 @@ on the hypervisor nodes.
 Ceph is a distributed object store and file system designed to provide
 excellent performance, reliability and scalability.

-.Some of the advantages of Ceph are:
-- Easy setup and management with CLI and GUI support on Proxmox VE
+.Some advantages of Ceph on {pve} are:
+- Easy setup and management with CLI and GUI support
 - Thin provisioning
 - Snapshots support
 - Self healing
-- No single point of failure
 - Scalable to the exabyte level
 - Setup pools with different performance and redundancy characteristics
 - Data is replicated, making it fault tolerant
 - Runs on economical commodity hardware
 - No need for hardware RAID controllers
-- Easy management
 - Open source

 For small to mid sized deployments, it is possible to install a Ceph server for
@@ -83,10 +81,13 @@ Check also the recommendations from
 http://docs.ceph.com/docs/luminous/start/hardware-recommendations/[Ceph's website].

 .Avoid RAID
-While RAID controller are build for storage virtualisation, to combine
-independent disks to form one or more logical units. Their caching methods,
-algorithms (RAID modes; incl. JBOD), disk or write/read optimisations are
-targeted towards aforementioned logical units and not to Ceph.
+As Ceph handles data object redundancy and multiple parallel writes to disks
+(OSDs) on its own, using a RAID controller normally doesn't improve
+performance or availability. On the contrary, Ceph is designed to handle whole
+disks on its own, without any abstraction in between. RAID controllers are not
+designed for the Ceph use case and may complicate things, sometimes even
+reducing performance, as their write and caching algorithms may interfere with
+Ceph's own.

 WARNING: Avoid RAID controllers; use a host bus adapter (HBA) instead.

@@ -108,7 +109,7 @@ This sets up an `apt` package repository in
 Creating initial Ceph configuration
 -----------------------------------

-[thumbnail="gui-ceph-config.png"]
+[thumbnail="screenshot/gui-ceph-config.png"]

 After installation of packages, you need to create an initial Ceph
 configuration on just one node, based on your network (`10.10.10.0/24`
@@ -130,7 +131,7 @@ Ceph commands without the need to specify a configuration file.
 Creating Ceph Monitors
 ----------------------

-[thumbnail="gui-ceph-monitor.png"]
+[thumbnail="screenshot/gui-ceph-monitor.png"]

 The Ceph Monitor (MON)
 footnote:[Ceph Monitor http://docs.ceph.com/docs/luminous/start/intro/]
@@ -173,7 +174,7 @@ pveceph createmgr
 Creating Ceph OSDs
 ------------------

-[thumbnail="gui-ceph-osd-status.png"]
+[thumbnail="screenshot/gui-ceph-osd-status.png"]

 via GUI or via CLI as follows:

@@ -277,7 +278,7 @@ highly recommended to achieve good performance.
 Creating Ceph Pools
 -------------------

-[thumbnail="gui-ceph-pools.png"]
+[thumbnail="screenshot/gui-ceph-pools.png"]

 A pool is a logical group for storing objects. It holds
 **P**lacement **G**roups (PG), a collection of objects.
@@ -394,7 +395,7 @@ separately.
 Ceph Client
 -----------

-[thumbnail="gui-ceph-log.png"]
+[thumbnail="screenshot/gui-ceph-log.png"]

 You can then configure {pve} to use such pools to store VM or
 Container images. Simply use the GUI to add a new `RBD` storage (see
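
The sections this patch touches describe bringing up a hyper-converged Ceph cluster with the `pveceph` tool. A minimal sketch of that CLI workflow, assuming the Luminous-era `pveceph` subcommands this version of the document covers and the `10.10.10.0/24` Ceph network used as the example above; the device name `/dev/sdb` is a placeholder:

[source,bash]
----
# Install the Ceph packages on every node that will run Ceph services
pveceph install

# Create the initial configuration on ONE node only, pointing Ceph
# at its dedicated network (example network from the text above)
pveceph init --network 10.10.10.0/24

# Create a monitor; repeat on (at least) three nodes for quorum
pveceph createmon

# Create a manager daemon, usually alongside the monitors
pveceph createmgr

# Create one OSD per raw disk; /dev/sdb is a placeholder device.
# The disk is handed to Ceph whole -- no RAID layer in between.
pveceph createosd /dev/sdb
----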
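
The pool and client sections touched at the end of the diff have a CLI counterpart as well. This is a sketch under assumptions, not confirmed by the patch itself: `vm-disks` and `ceph-vm` are hypothetical names, the numeric values are examples that must be sized for the actual OSD count, and the exact option spellings are assumptions based on the tooling of this era:

[source,bash]
----
# Create a replicated pool; size/min_size/pg_num are example values
pveceph createpool vm-disks -size 3 -min_size 2 -pg_num 128

# Register the pool as RBD storage so VM and container images can use
# it (the GUI's Add -> RBD dialog, mentioned in the Ceph Client
# section, is the equivalent)
pvesm add rbd ceph-vm --pool vm-disks --content images
----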