[[ceph_rados_block_devices]]
Ceph RADOS Block Devices (RBD)
------------------------------
ifdef::wiki[]
:pve-toplevel:
:title: Storage: RBD
endif::wiki[]

Storage pool type: `rbd`

http://ceph.com[Ceph] is a distributed object store and file system
designed to provide excellent performance, reliability and
scalability. RADOS block devices implement feature-rich block-level
storage, and you get the following advantages:

* thin provisioning
* resizable volumes
* distributed and redundant (striped over multiple OSDs)
* full snapshot and clone capabilities
* self healing
* no single point of failure
* scalable to the exabyte level
* kernel and user space implementation available
NOTE: For smaller deployments, it is also possible to run Ceph
services directly on your {pve} nodes. Recent hardware has plenty
of CPU power and RAM, so running storage services and VMs on the
same node is possible.

[[storage_rbd_config]]
Configuration
~~~~~~~~~~~~~

This backend supports the common storage properties `nodes`,
`disable`, `content`, and the following `rbd` specific properties:

monhost::

List of monitor daemon IPs. Optional, only needed if Ceph is not running on the
PVE cluster.

pool::

Ceph pool name.

username::

RBD user ID. Optional, only needed if Ceph is not running on the PVE cluster.

krbd::

Access RBD images through the `krbd` kernel module. Optional.

NOTE: Containers always use `krbd`, irrespective of this option. For VMs, you
can choose the access method by setting the `krbd` option.

.Configuration Example for an external Ceph cluster (`/etc/pve/storage.cfg`)
----
rbd: ceph-external
        monhost 10.1.1.20 10.1.1.21 10.1.1.22
        pool ceph-external
        content images
        username admin
----

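If the kernel client should also be used for VM disks, `krbd` can be enabled in
the storage definition. A minimal sketch, assuming a locally hosted pool with
the hypothetical name `ceph-vm` (monitors are then resolved via the local Ceph
installation, so no `monhost` is needed):

----
rbd: ceph-vm
        pool ceph-vm
        content images
        krbd 1
----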
TIP: You can use the `rbd` utility to do low-level management tasks.

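A few typical low-level tasks, sketched here with the pool name `ceph-external`
from the example above and a hypothetical image name (`vm-100-disk-1`); the
exact names depend on your setup:

```shell
# list all images in the pool
rbd ls -p ceph-external

# show size, format and features of one image
rbd info ceph-external/vm-100-disk-1

# list snapshots of an image
rbd snap ls ceph-external/vm-100-disk-1
```

These commands read the cluster and keyring configuration from `/etc/ceph` by
default; pass `-c` and `--keyring` to point at other files.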
Authentication
~~~~~~~~~~~~~~

If you use `cephx` authentication, you need to copy the keyring file from your
external Ceph cluster to a Proxmox VE host.

Create the directory `/etc/pve/priv/ceph` with

 mkdir /etc/pve/priv/ceph

Then copy the keyring

 scp <cephserver>:/etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/<STORAGE_ID>.keyring

The keyring must be named to match your `<STORAGE_ID>`. Copying the
keyring generally requires root privileges.

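As a concrete illustration: for the storage `ceph-external` defined above, the
copied file must be named `ceph-external.keyring` (the source host name here is
hypothetical):

 scp root@cephserver:/etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/ceph-external.keyring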
If Ceph is installed locally on the PVE cluster, this is done automatically by
'pveceph' or in the GUI.

Storage Features
~~~~~~~~~~~~~~~~

The `rbd` backend is a block-level storage and implements full
snapshot and clone functionality.

.Storage features for backend `rbd`
[width="100%",cols="m,m,3*d",options="header"]
|==============================================================================
|Content types |Image formats |Shared |Snapshots |Clones
|images rootdir |raw |yes |yes |yes
|==============================================================================

ifdef::wiki[]

See Also
~~~~~~~~

* link:/wiki/Storage[Storage]

endif::wiki[]
