:fn-ceph-user-mgmt: footnote:cephusermgmt[Ceph user management {cephdocs-url}/rados/operations/user-management/]
[[ceph_rados_block_devices]]
Ceph RADOS Block Devices (RBD)
------------------------------
ifdef::wiki[]
:pve-toplevel:
:title: Storage: RBD
endif::wiki[]

Storage pool type: `rbd`

https://ceph.com[Ceph] is a distributed object store and file system
designed to provide excellent performance, reliability and
scalability. RADOS block devices implement feature-rich block-level
storage, and you get the following advantages:

* thin provisioning
* resizable volumes
* distributed and redundant (striped over multiple OSDs)
* full snapshot and clone capabilities
* self healing
* no single point of failure
* scalable to the exabyte level
* kernel and user space implementation available

NOTE: For smaller deployments, it is also possible to run Ceph
services directly on your {pve} nodes. Recent hardware has plenty of
CPU power and RAM, so running storage services and VMs on the same
node is possible.

[[storage_rbd_config]]
Configuration
~~~~~~~~~~~~~

This backend supports the common storage properties `nodes`,
`disable`, `content`, and the following `rbd` specific properties:

monhost::

List of monitor daemon IPs. Optional, only needed if Ceph is not running on the
{pve} cluster.

pool::

Ceph pool name.

username::

RBD user ID. Optional, only needed if Ceph is not running on the {pve} cluster.
Note that only the user ID should be used. The "client." type prefix must be
left out.

krbd::

Enforce access to RADOS block devices through the `krbd` kernel module.
Optional.

NOTE: Containers will use `krbd` independently of the option value.

.Configuration Example for an external Ceph cluster (`/etc/pve/storage.cfg`)
----
rbd: ceph-external
        monhost 10.1.1.20 10.1.1.21 10.1.1.22
        pool ceph-external
        content images
        username admin
----

TIP: You can use the `rbd` utility to do low-level management tasks.

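For example, assuming the `ceph-external` storage from the configuration above,
you could list the images in its pool and inspect one of them. The image name
`vm-100-disk-0` is a hypothetical example; substitute an image that actually
exists in your pool:

----
# rbd ls ceph-external
# rbd info ceph-external/vm-100-disk-0
----

Note that such low-level changes happen outside of {pve}'s control, so be
careful not to modify volumes that are in use by a guest.
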
Authentication
~~~~~~~~~~~~~~

NOTE: If Ceph is installed locally on the {pve} cluster, the following is done
automatically when adding the storage.

If you use `cephx` authentication, which is enabled by default, you need to
provide the keyring from the external Ceph cluster.

To configure the storage via the CLI, you first need to make the file
containing the keyring available. One way is to copy the file from the external
Ceph cluster directly to one of the {pve} nodes. The following example will
copy it to the `/root` directory of the node on which we run it:

----
# scp <external cephserver>:/etc/ceph/ceph.client.admin.keyring /root/rbd.keyring
----

Then use the `pvesm` CLI tool to configure the external RBD storage. Use the
`--keyring` parameter, which needs to be a path to the keyring file that you
copied. For example:

----
# pvesm add rbd <name> --monhost "10.1.1.20 10.1.1.21 10.1.1.22" --content images --keyring /root/rbd.keyring
----

When configuring an external RBD storage via the GUI, you can copy and paste
the keyring into the appropriate field.

The keyring will be stored at

----
# /etc/pve/priv/ceph/<STORAGE_ID>.keyring
----

TIP: Creating a keyring with only the needed capabilities is recommended when
connecting to an external cluster. For further information on Ceph user
management, see the Ceph docs.footnoteref:[cephusermgmt,{cephdocs-url}/rados/operations/user-management/[Ceph User Management]]

Ceph client configuration (optional)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Connecting to an external Ceph storage doesn't always allow setting
client-specific options in the config DB on the external cluster. You can add a
`ceph.conf` beside the Ceph keyring to change the Ceph client configuration for
the storage.

The `ceph.conf` needs to have the same name as the storage.

----
# /etc/pve/priv/ceph/<STORAGE_ID>.conf
----

See the RBD configuration reference footnote:[RBD configuration reference
{cephdocs-url}/rbd/rbd-config-ref/] for possible settings.

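For instance, a minimal `<STORAGE_ID>.conf` that enables the RBD client cache
could look like the following. `rbd_cache` and `rbd_cache_size` are standard
Ceph client options; the values shown here are only an illustration, and
whether caching helps depends on your workload:

----
[client]
rbd_cache = true
rbd_cache_size = 33554432
----
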
NOTE: Do not change these settings lightly. {pve} merges the
`<STORAGE_ID>.conf` with the storage configuration.


Storage Features
~~~~~~~~~~~~~~~~

The `rbd` backend is a block-level storage and implements full
snapshot and clone functionality.

.Storage features for backend `rbd`
[width="100%",cols="m,m,3*d",options="header"]
|==============================================================================
|Content types |Image formats |Shared |Snapshots |Clones
|images rootdir |raw |yes |yes |yes
|==============================================================================

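As a quick check that a newly added storage is usable, you can allocate a test
image with `pvesm`, list it, and free it again. The storage name
`ceph-external` is taken from the configuration example above; the VMID `100`
and the 4 GiB size are arbitrary:

----
# pvesm alloc ceph-external 100 vm-100-disk-0 4G
# pvesm list ceph-external
# pvesm free ceph-external:vm-100-disk-0
----
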
ifdef::wiki[]

See Also
~~~~~~~~

* link:/wiki/Storage[Storage]

endif::wiki[]