:fn-ceph-user-mgmt: footnote:cephusermgmt[Ceph user management {cephdocs-url}/rados/operations/user-management/]
[[ceph_rados_block_devices]]
Ceph RADOS Block Devices (RBD)
------------------------------
ifdef::wiki[]
:pve-toplevel:
:title: Storage: RBD
endif::wiki[]

Storage pool type: `rbd`

https://ceph.com[Ceph] is a distributed object store and file system
designed to provide excellent performance, reliability and
scalability. RADOS block devices implement feature-rich, block-level
storage and offer the following advantages:

* thin provisioning
* resizable volumes
* distributed and redundant (striped over multiple OSDs)
* full snapshot and clone capabilities
* self healing
* no single point of failure
* scalable to the exabyte level
* kernel and user space implementation available

NOTE: For smaller deployments, it is also possible to run Ceph
services directly on your {pve} nodes. Recent hardware has plenty of
CPU power and RAM, so running storage services and VMs on the same
node is possible.

[[storage_rbd_config]]
Configuration
~~~~~~~~~~~~~

This backend supports the common storage properties `nodes`,
`disable`, `content`, and the following `rbd` specific properties:

monhost::

List of monitor daemon IPs. Optional, only needed if Ceph is not running on the
{pve} cluster.

pool::

Ceph pool name.

username::

RBD user ID. Optional, only needed if Ceph is not running on the {pve} cluster.
Note that only the user ID should be used. The "client." type prefix must be
left out.

krbd::

Enforce access to RADOS block devices through the krbd kernel module. Optional.

NOTE: Containers will use `krbd` independent of the option value.

.Configuration Example for an external Ceph cluster (`/etc/pve/storage.cfg`)
----
rbd: ceph-external
        monhost 10.1.1.20 10.1.1.21 10.1.1.22
        pool ceph-external
        content images
        username admin
----

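If you want to enforce access through the kernel module, the storage entry can
additionally set the `krbd` option. A minimal sketch, with placeholder storage
and pool names:

----
rbd: ceph-krbd
        pool ceph-krbd
        content images rootdir
        krbd 1
----
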
TIP: You can use the `rbd` utility to do low-level management tasks.

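For example, the following commands list the images in a pool and show the
details of one image; the pool and image names used here (`ceph-external`,
`vm-100-disk-0`) are placeholders:

----
# rbd ls ceph-external
# rbd info ceph-external/vm-100-disk-0
----
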
Authentication
~~~~~~~~~~~~~~

NOTE: If Ceph is installed locally on the {pve} cluster, the following is done
automatically when adding the storage.

If you use `cephx` authentication, which is enabled by default, you need to
provide the keyring from the external Ceph cluster.

To configure the storage via the CLI, you first need to make the file
containing the keyring available. One way is to copy the file from the external
Ceph cluster directly to one of the {pve} nodes. The following example will
copy it to the `/root` directory of the node on which we run it:

----
# scp <external cephserver>:/etc/ceph/ceph.client.admin.keyring /root/rbd.keyring
----

Then use the `pvesm` CLI tool to configure the external RBD storage, passing
the `--keyring` parameter with the path to the keyring file that you copied.
For example:

----
# pvesm add rbd <name> --monhost "10.1.1.20 10.1.1.21 10.1.1.22" --content images --keyring /root/rbd.keyring
----

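Afterwards, you can verify that the new storage entry is listed and reachable:

----
# pvesm status
----
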
When configuring an external RBD storage via the GUI, you can copy and paste
the keyring into the appropriate field.

The keyring will be stored at:

----
# /etc/pve/priv/ceph/<STORAGE_ID>.keyring
----

TIP: Creating a keyring with only the needed capabilities is recommended when
connecting to an external cluster. For further information on Ceph user
management, see the Ceph docs.footnoteref:[cephusermgmt,{cephdocs-url}/rados/operations/user-management/[Ceph User Management]]

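As a sketch, such a restricted user could be created on the external cluster
via the standard `ceph auth` mechanism, limited to the RBD profile on a single
pool. The client name `pve-storage` and the pool name are placeholders:

----
# ceph auth get-or-create client.pve-storage mon 'profile rbd' osd 'profile rbd pool=ceph-external'
----

The resulting user ID (`pve-storage`, without the `client.` prefix) is what
goes into the `username` option of the storage.
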
Ceph client configuration (optional)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Connecting to an external Ceph storage doesn't always allow setting
client-specific options in the config DB on the external cluster. You can add a
`ceph.conf` beside the Ceph keyring to change the Ceph client configuration for
the storage.

The `ceph.conf` needs to have the same name as the storage:

----
# /etc/pve/priv/ceph/<STORAGE_ID>.conf
----

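A minimal sketch of such a file for a storage named `ceph-external`, enabling
the librbd cache; the option names come from the RBD configuration reference
linked below, and the values are only examples:

----
[client]
rbd_cache = true
rbd_cache_size = 67108864
----
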
See the RBD configuration reference footnote:[RBD configuration reference
{cephdocs-url}/rbd/rbd-config-ref/] for possible settings.

NOTE: Do not change these settings lightly. {pve} merges the
`<STORAGE_ID>.conf` with the storage configuration.


Storage Features
~~~~~~~~~~~~~~~~

The `rbd` backend is a block-level storage and implements full
snapshot and clone functionality.

.Storage features for backend `rbd`
[width="100%",cols="m,m,3*d",options="header"]
|==============================================================================
|Content types |Image formats |Shared |Snapshots |Clones
|images rootdir |raw |yes |yes |yes
|==============================================================================

ifdef::wiki[]

See Also
~~~~~~~~

* link:/wiki/Storage[Storage]

endif::wiki[]