X-Git-Url: https://git.proxmox.com/?a=blobdiff_plain;f=pve-storage-rbd.adoc;h=5fe558a766b79d340743766f8da02d4d6377bb1d;hb=908345d68c6cca5e5b16550c87ac447531c88662;hp=d38294bddf2dfda0f7af3896406729edca81ce55;hpb=f532afb7ac007af6bbed8e3cad5439de934db30a;p=pve-docs.git

diff --git a/pve-storage-rbd.adoc b/pve-storage-rbd.adoc
index d38294b..5fe558a 100644
--- a/pve-storage-rbd.adoc
+++ b/pve-storage-rbd.adoc
@@ -1,10 +1,15 @@
+:fn-ceph-user-mgmt: footnote:cephusermgmt[Ceph user management {cephdocs-url}/rados/operations/user-management/]
+[[ceph_rados_block_devices]]
 Ceph RADOS Block Devices (RBD)
 ------------------------------
-include::attributes.txt[]
+ifdef::wiki[]
+:pve-toplevel:
+:title: Storage: RBD
+endif::wiki[]
 
 Storage pool type: `rbd`
 
-http://ceph.com[Ceph] is a distributed object store and file system
+https://ceph.com[Ceph] is a distributed object store and file system
 designed to provide excellent performance, reliability and scalability.
 RADOS block devices implement a feature rich block level storage, and
 you get the following advantages:
@@ -15,14 +20,15 @@ storage, and you get the following advantages:
 * full snapshot and clone capabilities
 * self healing
 * no single point of failure
-* scalable to the exabyte level
-* kernel and unser space implementation available
+* scalable to the exabyte level
+* kernel and user space implementation available
 
 NOTE: For smaller deployments, it is also possible to run Ceph
 services directly on your {pve} nodes. Recent hardware has plenty
 of CPU power and RAM, so running storage services and VMs on same node
 is possible.
 
+[[storage_rbd_config]]
 Configuration
 ~~~~~~~~~~~~~
 
@@ -31,7 +37,8 @@ This backend supports the common storage properties `nodes`,
 
 monhost::
 
-List of monitor daemon IPs.
+List of monitor daemon IPs. Optional, only needed if Ceph is not running on the
+{pve} cluster.
 
 pool::
 
@@ -39,40 +46,86 @@ Ceph pool name.
 
 username::
 
-RBD user Id.
+RBD user ID. Optional, only needed if Ceph is not running on the {pve} cluster.
+Note that only the user ID should be used. The "client." type prefix must be
+left out.
 
 krbd::
 
-Access rbd through krbd kernel module. This is required if you want to
-use the storage for containers.
+Enforce access to rados block devices through the krbd kernel module. Optional.
+
+NOTE: Containers will use `krbd` independent of the option value.
 
-.Configuration Example ('/etc/pve/storage.cfg')
+.Configuration Example for an external Ceph cluster (`/etc/pve/storage.cfg`)
 ----
-rbd: ceph3
+rbd: ceph-external
        monhost 10.1.1.20 10.1.1.21 10.1.1.22
-       pool ceph3
+       pool ceph-external
        content images
        username admin
 ----
 
-TIP: You can use the 'rbd' utility to do low-level management tasks.
+TIP: You can use the `rbd` utility to do low-level management tasks.
 
 Authentication
 ~~~~~~~~~~~~~~
 
-If you use cephx authentication, you need to copy the keyfile from
-Ceph to Proxmox VE host.
+NOTE: If Ceph is installed locally on the {pve} cluster, the following is done
+automatically when adding the storage.
+
+If you use `cephx` authentication, which is enabled by default, you need to
+provide the keyring from the external Ceph cluster.
+
+To configure the storage via the CLI, you first need to make the file
+containing the keyring available. One way is to copy the file from the external
+Ceph cluster directly to one of the {pve} nodes.
+The following example will copy it to the `/root` directory of the node on which we run it:
+
+----
+# scp <external cephserver>:/etc/ceph/ceph.client.admin.keyring /root/rbd.keyring
+----
+
+Then use the `pvesm` CLI tool to configure the external RBD storage. Use the
+`--keyring` parameter, which needs to be a path to the keyring file that you
+copied. For example:
+
+----
+# pvesm add rbd <name> --monhost "10.1.1.20 10.1.1.21 10.1.1.22" --content images --keyring /root/rbd.keyring
+----
+
+When configuring an external RBD storage via the GUI, you can copy and paste
+the keyring into the appropriate field.
+
+The keyring will be stored at
+
+----
+# /etc/pve/priv/ceph/<STORAGE_ID>.keyring
+----
 
-Create the directory '/etc/pve/priv/ceph' with
+TIP: Creating a keyring with only the needed capabilities is recommended when
+connecting to an external cluster. For further information on Ceph user
+management, see the Ceph docs.footnoteref:[cephusermgmt,{cephdocs-url}/rados/operations/user-management/[Ceph User Management]]
 
- mkdir /etc/pve/priv/ceph
 
+Ceph client configuration (optional)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Connecting to an external Ceph storage doesn't always allow setting
+client-specific options in the config DB on the external cluster. You can add a
+`ceph.conf` beside the Ceph keyring to change the Ceph client configuration for
+the storage.
+
+The `ceph.conf` needs to have the same name as the storage.
+
+----
+# /etc/pve/priv/ceph/<STORAGE_ID>.conf
+----
 
-Then copy the keyring
+See the RBD configuration reference footnote:[RBD configuration reference
+{cephdocs-url}/rbd/rbd-config-ref/] for possible settings.
 
- scp <cephserver>:/etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/<STORAGE_ID>.keyring
+NOTE: Do not change these settings lightly. {PVE} merges the
+<STORAGE_ID>.conf with the storage configuration.
 
-The keyring must be named to match your `<STORAGE_ID>`. Copying the
-keyring generally requires root privileges.
 
 Storage Features
 ~~~~~~~~~~~~~~~~
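
The new "Ceph client configuration (optional)" section added above explains that a per-storage `ceph.conf` can override Ceph client settings, but does not show what such a file could contain. Below is a minimal sketch, assuming the `ceph-external` storage from the configuration example; `rbd cache` and `rbd cache size` are standard Ceph client options, and the values shown are purely illustrative, not recommendations.

----
# /etc/pve/priv/ceph/ceph-external.conf
# Hypothetical client-side overrides for the 'ceph-external' storage only.
# The option names are regular Ceph client settings; check the RBD
# configuration reference before adopting any particular value.
[client]
    rbd cache = true
    rbd cache size = 33554432
----

As the NOTE above points out, {pve} merges this file with the storage configuration, so it only needs to contain the options you actually want to override for that storage.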