X-Git-Url: https://git.proxmox.com/?a=blobdiff_plain;f=pve-storage-rbd.adoc;h=aa870edf5a4ca165514fd473832c1cc2a961f26d;hb=40e6c80663d661debf3a3cff5de7779bf1d4691b;hp=d38294bddf2dfda0f7af3896406729edca81ce55;hpb=f532afb7ac007af6bbed8e3cad5439de934db30a;p=pve-docs.git

diff --git a/pve-storage-rbd.adoc b/pve-storage-rbd.adoc
index d38294b..aa870ed 100644
--- a/pve-storage-rbd.adoc
+++ b/pve-storage-rbd.adoc
@@ -1,6 +1,10 @@
+[[ceph_rados_block_devices]]
 Ceph RADOS Block Devices (RBD)
 ------------------------------
-include::attributes.txt[]
+ifdef::wiki[]
+:pve-toplevel:
+:title: Storage: RBD
+endif::wiki[]
 
 Storage pool type: `rbd`
 
@@ -15,14 +19,15 @@ storage, and you get the following advantages:
 * full snapshot and clone capabilities
 * self healing
 * no single point of failure
-* scalable to the exabyte level
-* kernel and unser space implementation available
+* scalable to the exabyte level
+* kernel and user space implementation available
 
 NOTE: For smaller deployments, it is also possible to run Ceph
 services directly on your {pve} nodes. Recent hardware has plenty
 of CPU power and RAM, so running storage services and VMs on the
 same node is possible.
 
+[[storage_rbd_config]]
 Configuration
 ~~~~~~~~~~~~~
 
@@ -31,7 +36,8 @@ This backend supports the common storage properties `nodes`,
 
 monhost::
 
-List of monitor daemon IPs.
+List of monitor daemon IPs. Optional, only needed if Ceph is not running on the
+PVE cluster.
 
 pool::
 
@@ -39,31 +45,34 @@ Ceph pool name.
 
 username::
 
-RBD user Id.
+RBD user ID. Optional, only needed if Ceph is not running on the PVE cluster.
+Note that only the user ID should be used. The "client." type prefix must be
+left out.
 
 krbd::
 
-Access rbd through krbd kernel module. This is required if you want to
-use the storage for containers.
+Enforce access to rados block devices through the krbd kernel module. Optional.
 
-.Configuration Example ('/etc/pve/storage.cfg')
+NOTE: Containers will use `krbd` independent of the option value.
+
+.Configuration Example for an external Ceph cluster (`/etc/pve/storage.cfg`)
 ----
-rbd: ceph3
+rbd: ceph-external
        monhost 10.1.1.20 10.1.1.21 10.1.1.22
-       pool ceph3
+       pool ceph-external
        content images
        username admin
 ----
 
-TIP: You can use the 'rbd' utility to do low-level management tasks.
+TIP: You can use the `rbd` utility to do low-level management tasks.
 
 Authentication
 ~~~~~~~~~~~~~~
 
-If you use cephx authentication, you need to copy the keyfile from
-Ceph to Proxmox VE host.
+If you use `cephx` authentication, you need to copy the keyfile from your
+external Ceph cluster to a Proxmox VE host.
 
-Create the directory '/etc/pve/priv/ceph' with
+Create the directory `/etc/pve/priv/ceph` with
 
  mkdir /etc/pve/priv/ceph
 
@@ -74,6 +83,9 @@ Then copy the keyring
 The keyring must be named to match your `<STORAGE_ID>`. Copying the
 keyring generally requires root privileges.
 
+If Ceph is installed locally on the PVE cluster, this is done automatically by
+'pveceph' or in the GUI.
+
 Storage Features
 ~~~~~~~~~~~~~~~~
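
The updated `monhost` and `username` descriptions imply that a storage entry for
Ceph running on the {pve} cluster itself needs neither option. A minimal sketch
of such an entry; the storage ID `ceph-local` and the pool name `rbd` are
illustrative placeholders, not taken from the diff:

.Configuration sketch for Ceph running on the PVE cluster (`/etc/pve/storage.cfg`)
----
rbd: ceph-local
       pool rbd
       content images
----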
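
The external-cluster example above can equivalently be created from the shell.
A sketch using `pvesm` with the same values; this assumes your `pvesm` version
accepts these options for the `rbd` type (check `man pvesm`):

----
pvesm add rbd ceph-external --monhost "10.1.1.20 10.1.1.21 10.1.1.22" \
    --pool ceph-external --content images --username admin
----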
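
As an illustration of the `rbd` utility mentioned in the TIP, two read-only
low-level commands; the pool and image names are placeholders matching the
example above:

----
rbd ls -p ceph-external                # list all images in the pool
rbd info ceph-external/vm-100-disk-1   # show size, format and features of one image
----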
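
The last hunk only shows `Then copy the keyring` as hunk context; the copy
command itself falls outside the diff. A sketch of the full authentication
setup, assuming the external cluster is reachable as `<cephserver>` and its
admin keyring sits in the default location (both are placeholders):

----
mkdir /etc/pve/priv/ceph
scp <cephserver>:/etc/ceph/ceph.client.admin.keyring \
    /etc/pve/priv/ceph/<STORAGE_ID>.keyring
----

The target file must be named `<STORAGE_ID>.keyring`, matching the storage ID
used in `/etc/pve/storage.cfg`, as the diff's surrounding text requires.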