:fn-ceph-user-mgmt: footnote:cephusermgmt[Ceph user management {cephdocs-url}/rados/operations/user-management/]
[[ceph_rados_block_devices]]
Ceph RADOS Block Devices (RBD)
------------------------------
ifdef::wiki[]
:pve-toplevel:
:title: Storage: RBD
endif::wiki[]

Storage pool type: `rbd`

https://ceph.com[Ceph] is a distributed object store and file system
designed to provide excellent performance, reliability and
scalability. RADOS block devices implement feature-rich block-level
storage, and you get the following advantages:

* thin provisioning
* resizable volumes
* distributed and redundant (striped over multiple OSDs)
* full snapshot and clone capabilities
* self healing
* no single point of failure
* scalable to the exabyte level
* kernel and user space implementation available

NOTE: For smaller deployments, it is also possible to run Ceph
services directly on your {pve} nodes. Recent hardware has plenty
of CPU power and RAM, so running storage services and VMs on the
same node is possible.

[[storage_rbd_config]]
Configuration
~~~~~~~~~~~~~

This backend supports the common storage properties `nodes`,
`disable`, `content`, and the following `rbd`-specific properties:

monhost::

List of monitor daemon IPs. Optional, only needed if Ceph is not running on the
{pve} cluster.

pool::

Ceph pool name.

username::

RBD user ID. Optional, only needed if Ceph is not running on the {pve} cluster.
Note that only the user ID should be used. The "client." type prefix must be
left out.

krbd::

Enforce access to rados block devices through the krbd kernel module. Optional.

NOTE: Containers will use `krbd` independent of the option value.

.Configuration Example for an external Ceph cluster (`/etc/pve/storage.cfg`)
----
rbd: ceph-external
    monhost 10.1.1.20 10.1.1.21 10.1.1.22
    pool ceph-external
    content images
    username admin
----

TIP: You can use the `rbd` utility to do low-level management tasks.
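
For example, two read-only `rbd` commands, assuming the example pool
`ceph-external` from above (the image name `vm-100-disk-0` is hypothetical):

----
# rbd ls ceph-external
# rbd info ceph-external/vm-100-disk-0
----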

Authentication
~~~~~~~~~~~~~~

NOTE: If Ceph is installed locally on the {pve} cluster, the following is done
automatically when adding the storage.

If you use `cephx` authentication, which is enabled by default, you need to
provide the keyring from the external Ceph cluster.

To configure the storage via the CLI, you first need to make the file
containing the keyring available. One way is to copy the file from the external
Ceph cluster directly to one of the {pve} nodes. The following example will
copy it to the `/root` directory of the node on which we run it:

----
# scp <external cephserver>:/etc/ceph/ceph.client.admin.keyring /root/rbd.keyring
----

Then use the `pvesm` CLI tool to configure the external RBD storage. Use the
`--keyring` parameter, which needs to be the path to the keyring file that you
copied. For example:

----
# pvesm add rbd <name> --monhost "10.1.1.20 10.1.1.21 10.1.1.22" --content images --keyring /root/rbd.keyring
----

When configuring an external RBD storage via the GUI, you can copy and paste
the keyring into the appropriate field.

The keyring will be stored at

----
# /etc/pve/priv/ceph/<STORAGE_ID>.keyring
----

TIP: Creating a keyring with only the needed capabilities is recommended when
connecting to an external cluster. For further information on Ceph user
management, see the Ceph docs.footnoteref:[cephusermgmt,{cephdocs-url}/rados/operations/user-management/[Ceph User Management]]
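
As a sketch, such a restricted keyring could be created on the external
cluster using Ceph's built-in `rbd` capability profiles; the user name
`client.pve` and the pool name `ceph-external` are example values:

----
# ceph auth get-or-create client.pve mon 'profile rbd' osd 'profile rbd pool=ceph-external' -o /root/rbd.keyring
----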

Ceph client configuration (optional)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Connecting to an external Ceph storage doesn't always allow setting
client-specific options in the config DB on the external cluster. You can add a
`ceph.conf` beside the Ceph keyring to change the Ceph client configuration for
the storage.

The `ceph.conf` needs to have the same name as the storage.

----
# /etc/pve/priv/ceph/<STORAGE_ID>.conf
----

See the RBD configuration reference footnote:[RBD configuration reference
{cephdocs-url}/rbd/rbd-config-ref/] for possible settings.
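
For illustration only, a hypothetical `<STORAGE_ID>.conf` enabling the
client-side RBD cache could look like the following; verify option names and
values against the reference above before using them:

----
[client]
rbd cache = true
rbd cache size = 67108864
----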

NOTE: Do not change these settings lightly. {pve} merges the
`<STORAGE_ID>.conf` with the storage configuration.


Storage Features
~~~~~~~~~~~~~~~~

The `rbd` backend is a block-level storage and implements full
snapshot and clone functionality.

.Storage features for backend `rbd`
[width="100%",cols="m,m,3*d",options="header"]
|==============================================================================
|Content types |Image formats |Shared |Snapshots |Clones
|images rootdir |raw |yes |yes |yes
|==============================================================================
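
As a usage sketch, volumes on such a storage can be allocated and listed with
`pvesm`; the storage name `ceph-external` matches the example above, and the
VM ID `100` is hypothetical:

----
# pvesm alloc ceph-external 100 vm-100-disk-1 4G
# pvesm list ceph-external
----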

ifdef::wiki[]

See Also
~~~~~~~~

* link:/wiki/Storage[Storage]

endif::wiki[]