X-Git-Url: https://git.proxmox.com/?p=pve-docs.git;a=blobdiff_plain;f=pve-storage-rbd.adoc;h=ee073714ebcd51eb0d39af82d2161f96268ec81e;hp=c33b70e0fd1af14cfe17ba2d69605f14db50ee2b;hb=e4fefc2c1191c745c4fb83edc8b0b69411f7bd96;hpb=a69bfc83f6d2b79e94eeb39781d89b720b4482dc

diff --git a/pve-storage-rbd.adoc b/pve-storage-rbd.adoc
index c33b70e..ee07371 100644
--- a/pve-storage-rbd.adoc
+++ b/pve-storage-rbd.adoc
@@ -1,3 +1,4 @@
+[[ceph_rados_block_devices]]
 Ceph RADOS Block Devices (RBD)
 ------------------------------
 ifdef::wiki[]
@@ -18,7 +19,7 @@ storage, and you get the following advantages:
 * full snapshot and clone capabilities
 * self healing
 * no single point of failure
-* scalable to the exabyte level 
+* scalable to the exabyte level
 * kernel and user space implementation available
 
 NOTE: For smaller deployments, it is also possible to run Ceph
@@ -26,6 +27,7 @@ services directly on your {pve} nodes. Recent hardware has plenty
 of CPU power and RAM, so running storage services and VMs on same node
 is possible.
 
+[[storage_rbd_config]]
 Configuration
 ~~~~~~~~~~~~~
 
@@ -34,7 +36,8 @@ This backend supports the common storage properties `nodes`,
 
 monhost::
 
-List of monitor daemon IPs.
+List of monitor daemon IPs. Optional, only needed if Ceph is not running on the
+PVE cluster.
 
 pool::
 
@@ -42,18 +45,18 @@ Ceph pool name.
 
 username::
 
-RBD user Id.
+RBD user Id. Optional, only needed if Ceph is not running on the PVE cluster.
 
 krbd::
 
 Access rbd through krbd kernel module. This is required if you want to use
 the storage for containers.
 
-.Configuration Example (`/etc/pve/storage.cfg`)
+.Configuration Example for an external Ceph cluster (`/etc/pve/storage.cfg`)
 ----
-rbd: ceph3
+rbd: ceph-external
         monhost 10.1.1.20 10.1.1.21 10.1.1.22
-        pool ceph3
+        pool ceph-external
         content images
         username admin
 ----
@@ -63,8 +66,8 @@ TIP: You can use the `rbd` utility to do low-level management tasks.
 Authentication
 ~~~~~~~~~~~~~~
 
-If you use `cephx` authentication, you need to copy the keyfile from
-Ceph to Proxmox VE host.
+If you use `cephx` authentication, you need to copy the keyfile from your
+external Ceph cluster to a Proxmox VE host.
 
 Create the directory `/etc/pve/priv/ceph` with
 
@@ -77,6 +80,9 @@ Then copy the keyring
 The keyring must be named to match your `<STORAGE_ID>`. Copying the
 keyring generally requires root privileges.
 
+If Ceph is installed locally on the PVE cluster, this is done automatically by
+'pveceph' or via the GUI.
+
 Storage Features
 ~~~~~~~~~~~~~~~~
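
The `krbd` property documented above is what enables container use of the
storage. As a minimal sketch of such a setup, building on the configuration
example from the patch: the `krbd 1` spelling and the `rootdir` content type
(for containers) are assumptions based on common {pve} storage configuration,
not part of this commit.

----
rbd: ceph-external
        monhost 10.1.1.20 10.1.1.21 10.1.1.22
        pool ceph-external
        content images,rootdir
        krbd 1
        username admin
----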
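
The TIP above mentions the `rbd` utility for low-level management tasks. A
minimal sketch of pointing it at the external cluster from the example: the
monitor list and keyring path follow the configuration example and the
Authentication section, `-m`, `-n` and `--keyring` are standard Ceph client
options, and the image name `vm-100-disk-1` is purely illustrative.

----
# List the images in the pool, talking directly to the external monitors.
rbd -m 10.1.1.20,10.1.1.21,10.1.1.22 -n client.admin \
    --keyring /etc/pve/priv/ceph/ceph-external.keyring \
    -p ceph-external ls

# Inspect a single image (the image name is illustrative).
rbd -m 10.1.1.20,10.1.1.21,10.1.1.22 -n client.admin \
    --keyring /etc/pve/priv/ceph/ceph-external.keyring \
    -p ceph-external info vm-100-disk-1
----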
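
To make the Authentication steps concrete for the `ceph-external` storage ID
used above, the commands referenced between the hunks amount to roughly the
following sketch; the source path assumes the default Ceph admin keyring
location, and `root@10.1.1.20` is an assumed login on the external cluster,
not something the patch specifies.

----
# Create the private key directory referenced in the Authentication section.
mkdir /etc/pve/priv/ceph

# Copy the admin keyring; the file name must match the storage ID.
scp root@10.1.1.20:/etc/ceph/ceph.client.admin.keyring \
    /etc/pve/priv/ceph/ceph-external.keyring
----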