X-Git-Url: https://git.proxmox.com/?a=blobdiff_plain;f=pve-storage-rbd.adoc;h=cafc5da83710fe1316a4f59870277bc08d70fbd2;hb=d77477d76a078937ed6b5f59845dec1fc1398110;hp=4b2cd0af3f33a7f3e3623db4a0931473cb517885;hpb=aa039b0f5a044a78ff175d8a70178d7b10895567;p=pve-docs.git

diff --git a/pve-storage-rbd.adoc b/pve-storage-rbd.adoc
index 4b2cd0a..cafc5da 100644
--- a/pve-storage-rbd.adoc
+++ b/pve-storage-rbd.adoc
@@ -1,5 +1,10 @@
 Ceph RADOS Block Devices (RBD)
 ------------------------------
+include::attributes.txt[]
+ifdef::wiki[]
+:pve-toplevel:
+:title: Storage: RBD
+endif::wiki[]
 
 Storage pool type: `rbd`
 
@@ -15,7 +20,7 @@ storage, and you get the following advantages:
 * self healing
 * no single point of failure
 * scalable to the exabyte level
-* kernel and unser space implementation available
+* kernel and user space implementation available
 
 NOTE: For smaller deployments, it is also possible to run Ceph
 services directly on your {pve} nodes. Recent hardware has plenty
@@ -45,7 +50,7 @@ krbd::
 Access rbd through krbd kernel module. This is required if you want to
 use the storage for containers.
 
-.Configuration Example ('/etc/pve/storage.cfg')
+.Configuration Example (`/etc/pve/storage.cfg`)
 ----
 rbd: ceph3
 	monhost 10.1.1.20 10.1.1.21 10.1.1.22
@@ -54,15 +59,15 @@ rbd: ceph3
 	username admin
 ----
 
-TIP: You can use the 'rbd' utility to do low-level management tasks.
+TIP: You can use the `rbd` utility to do low-level management tasks.
 
 Authentication
 ~~~~~~~~~~~~~~
 
-If you use cephx authentication, you need to copy the keyfile from
+If you use `cephx` authentication, you need to copy the keyfile from
 Ceph to Proxmox VE host.
 
-Create the directory '/etc/pve/priv/ceph' with
+Create the directory `/etc/pve/priv/ceph` with
 
  mkdir /etc/pve/priv/ceph
 
@@ -86,3 +91,12 @@ snapshot and clone functionality.
 |images rootdir |raw |yes |yes |yes
 |==============================================================================
 
+ifdef::wiki[]
+
+See Also
+~~~~~~~~
+
+* link:/wiki/Storage[Storage]
+
+endif::wiki[]
+
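
A note on the `rbd` TIP in the hunk above -- a minimal sketch of such
low-level tasks, assuming the pool name `ceph3`, the monitor address and
`admin` user from the configuration example, and the keyring path that the
Authentication section introduces; none of these flags or names are part
of this patch:

----
# List the images in the pool, then inspect one of them.
# Pool `ceph3`, monitor 10.1.1.20 and user `admin` are carried over from
# the configuration example; the image name `vm-100-disk-1` is purely
# illustrative.
rbd ls ceph3 -m 10.1.1.20 --id admin \
    --keyring /etc/pve/priv/ceph/ceph3.keyring
rbd info ceph3/vm-100-disk-1 -m 10.1.1.20 --id admin \
    --keyring /etc/pve/priv/ceph/ceph3.keyring
----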
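
A note on the Authentication hunk -- the patch context shows only the
`mkdir` step; the copy step the text refers to lies outside the hunk. A
sketch of what that copy conventionally looks like on a {pve} host (the
source host placeholder, the Ceph default keyring path, and the
storage-ID-based file name are assumptions, not content of this diff):

----
# Create the key directory (as in the hunk above), then copy the Ceph
# admin keyring to a file named after the storage ID -- `ceph3` here,
# matching the configuration example. The source path is the default
# Ceph location and is an assumption, not taken from this patch.
mkdir /etc/pve/priv/ceph
scp <cephserver>:/etc/ceph/ceph.client.admin.keyring \
    /etc/pve/priv/ceph/ceph3.keyring
----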