=================================
 Using libvirt with Ceph RBD
=================================

.. index:: Ceph Block Device; libvirt

The ``libvirt`` library creates a virtual machine abstraction layer between
hypervisor interfaces and the software applications that use them. With
``libvirt``, developers and system administrators can focus on a common
management framework, common API, and common shell interface (i.e., ``virsh``)
to many different hypervisors, including:

- QEMU/KVM
- XEN
- LXC
- VirtualBox
- etc.

Ceph block devices support QEMU/KVM. You can use Ceph block devices with
software that interfaces with ``libvirt``. The following stack diagram
illustrates how ``libvirt`` and QEMU use Ceph block devices via ``librbd``.

.. ditaa::

   +---------------------------------------------------+
   |                      libvirt                      |
   +------------------------+--------------------------+
                            |
                            | configures
                            v
   +---------------------------------------------------+
   |                       QEMU                        |
   +---------------------------------------------------+
   |                      librbd                       |
   +---------------------------------------------------+
   |                     librados                      |
   +------------------------+-+------------------------+
   |          OSDs          | |        Monitors        |
   +------------------------+ +------------------------+

The most common ``libvirt`` use case involves providing Ceph block devices to
cloud solutions like OpenStack or CloudStack. The cloud solution uses
``libvirt`` to interact with QEMU/KVM, and QEMU/KVM interacts with Ceph block
devices via ``librbd``. See `Block Devices and OpenStack`_ and `Block Devices
and CloudStack`_ for details. See `Installation`_ for installation details.

You can also use Ceph block devices with ``libvirt``, ``virsh`` and the
``libvirt`` API. See `libvirt Virtualization API`_ for details.


To create VMs that use Ceph block devices, use the procedures in the following
sections. In the examples that follow, we use ``libvirt-pool`` for the pool
name, ``client.libvirt`` for the user name, and ``new-libvirt-image`` for the
image name. You may use any values you like, but ensure you replace those
values when executing commands in the subsequent procedures.


Configuring Ceph
================

To configure Ceph for use with ``libvirt``, perform the following steps:

#. `Create a pool`_. The following example uses the
   pool name ``libvirt-pool``. ::

        ceph osd pool create libvirt-pool

   Verify the pool exists. ::

        ceph osd lspools

#. Use the ``rbd`` tool to initialize the pool for use by RBD::

        rbd pool init <pool-name>

#. `Create a Ceph User`_ (or use ``client.admin`` for version 0.9.7 and
   earlier). The following example uses the Ceph user name ``client.libvirt``
   and references ``libvirt-pool``. ::

        ceph auth get-or-create client.libvirt mon 'profile rbd' osd 'profile rbd pool=libvirt-pool'

   Verify the name exists. ::

        ceph auth ls

   **NOTE**: ``libvirt`` will access Ceph using the ID ``libvirt``,
   not the Ceph name ``client.libvirt``. See `User Management - User`_ and
   `User Management - CLI`_ for a detailed explanation of the difference
   between ID and name; see also the example following this procedure.

#. Use QEMU to `create an image`_ in your RBD pool.
   The following example uses the image name ``new-libvirt-image``
   and references ``libvirt-pool``. ::

        qemu-img create -f rbd rbd:libvirt-pool/new-libvirt-image 2G

   Verify the image exists. ::

        rbd -p libvirt-pool ls

   **NOTE:** You can also use `rbd create`_ to create an image, but using
   QEMU as shown above also verifies that QEMU itself is working properly.

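The difference between the cephx *ID* and *name* in the note above can be seen
directly from the command line. The following is a sketch only; it assumes the
``client.libvirt`` keyring is stored where the client tools search for it by
default (for example, ``/etc/ceph/ceph.client.libvirt.keyring``). ::

    # The same user, addressed two ways: --id takes the ID, --name the full name.
    rbd --id libvirt ls libvirt-pool
    rbd --name client.libvirt ls libvirt-pool

    # QEMU likewise takes the ID, via the id= option in the rbd: URI.
    qemu-img info rbd:libvirt-pool/new-libvirt-image:id=libvirt
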
.. tip:: Optionally, if you wish to enable debug logs and the admin socket for
   this client, you can add the following section to ``/etc/ceph/ceph.conf``::

        [client.libvirt]
        log file = /var/log/ceph/qemu-guest-$pid.log
        admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok

   The ``client.libvirt`` section name should match the cephx user you created
   above. If SELinux or AppArmor is enabled, note that this could prevent the
   client process (QEMU via ``libvirt``) from performing some operations, such
   as writing logs or accessing the images or the admin socket at the
   configured locations (``/var/log/ceph`` or ``/var/run/ceph``).

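If you enabled the admin socket as in the tip above, you can query the running
client once a VM is up. The following is a minimal sketch; the actual socket
file name under ``/var/run/ceph`` includes the process ID and a client instance
counter, so replace ``{socket-name}`` with the file you find there. ::

    ls /var/run/ceph/
    sudo ceph --admin-daemon /var/run/ceph/{socket-name}.asok help
    sudo ceph --admin-daemon /var/run/ceph/{socket-name}.asok perf dump

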
Preparing the VM Manager
========================

You may use ``libvirt`` without a VM manager, but you may find it simpler to
create your first domain with ``virt-manager``.

#. Install a virtual machine manager. See `KVM/VirtManager`_ for details. ::

        sudo apt-get install virt-manager

#. Download an OS image (if necessary).

#. Launch the virtual machine manager. ::

        sudo virt-manager


Creating a VM
=============

To create a VM with ``virt-manager``, perform the following steps:

#. Press the **Create New Virtual Machine** button.

#. Name the new virtual machine domain. In this example, we
   use the name ``libvirt-virtual-machine``. You may use any name you wish,
   but ensure you replace ``libvirt-virtual-machine`` with the name you
   choose in subsequent command-line and configuration examples. ::

        libvirt-virtual-machine

#. Import the image. ::

        /path/to/image/recent-linux.img

   **NOTE:** Import a recent image. Some older images may not rescan for
   virtual devices properly.

#. Configure and start the VM.

#. You may use ``virsh list`` to verify the VM domain exists. ::

        sudo virsh list

#. Log in to the VM (root/root).

#. Stop the VM before configuring it for use with Ceph, as shown in the
   example below.

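For example, you can stop the domain cleanly and confirm that it is down;
the ``--all`` flag also lists inactive domains. ::

    sudo virsh shutdown libvirt-virtual-machine
    sudo virsh list --all
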

Configuring the VM
==================

When configuring the VM for use with Ceph, it is important to use ``virsh``
where appropriate. Additionally, ``virsh`` commands often require root
privileges (i.e., ``sudo``); run without them, they will not return appropriate
results, nor will they notify you that root privileges are required. For a
reference of ``virsh`` commands, refer to `Virsh Command Reference`_.


#. Open the configuration file with ``virsh edit``. ::

        sudo virsh edit {vm-domain-name}

   Under ``<devices>`` there should be a ``<disk>`` entry. ::

        <devices>
            <emulator>/usr/bin/kvm</emulator>
            <disk type='file' device='disk'>
                <driver name='qemu' type='raw'/>
                <source file='/path/to/image/recent-linux.img'/>
                <target dev='vda' bus='virtio'/>
                <address type='drive' controller='0' bus='0' unit='0'/>
            </disk>

   Replace ``/path/to/image/recent-linux.img`` with the path to the OS image.
   The minimum kernel for using the faster ``virtio`` bus is 2.6.25. See
   `Virtio`_ for details.

   **IMPORTANT:** Use ``sudo virsh edit`` instead of a text editor. If you edit
   the configuration file under ``/etc/libvirt/qemu`` with a text editor,
   ``libvirt`` may not recognize the change. If there is a discrepancy between
   the contents of the XML file under ``/etc/libvirt/qemu`` and the result of
   ``sudo virsh dumpxml {vm-domain-name}``, then your VM may not work
   properly.

#. Add the Ceph RBD image you created as a ``<disk>`` entry. ::

        <disk type='network' device='disk'>
            <source protocol='rbd' name='libvirt-pool/new-libvirt-image'>
                <host name='{monitor-host}' port='6789'/>
            </source>
            <target dev='vdb' bus='virtio'/>
        </disk>

   Replace ``{monitor-host}`` with the name of your host, and replace the
   pool and/or image name as necessary. You may add multiple ``<host>``
   entries for your Ceph monitors; a complete example appears after this
   procedure. The ``dev`` attribute is the logical device name that will
   appear under the ``/dev`` directory of your VM. The optional ``bus``
   attribute indicates the type of disk device to emulate. The valid settings
   are driver specific (e.g., "ide", "scsi", "virtio", "xen", "usb" or
   "sata").

   See `Disks`_ for details of the ``<disk>`` element, and its child elements
   and attributes.

#. Save the file.

#. If your Ceph Storage Cluster has `Ceph Authentication`_ enabled (it does by
   default), you must generate a secret. ::

        cat > secret.xml <<EOF
        <secret ephemeral='no' private='no'>
            <usage type='ceph'>
                <name>client.libvirt secret</name>
            </usage>
        </secret>
        EOF

#. Define the secret. ::

        sudo virsh secret-define --file secret.xml
        {uuid of secret}

#. Get the ``client.libvirt`` key and save the key string to a file. ::

        ceph auth get-key client.libvirt | sudo tee client.libvirt.key

#. Set the UUID of the secret. ::

        sudo virsh secret-set-value --secret {uuid of secret} --base64 $(cat client.libvirt.key) && rm client.libvirt.key secret.xml

   You must also set the secret in the domain configuration by adding the
   following ``<auth>`` entry to the ``<disk>`` element you entered earlier
   (replacing the ``uuid`` value with the result from the command line
   example above). ::

        sudo virsh edit {vm-domain-name}

   Then, add the ``<auth></auth>`` element to the domain configuration file::

        ...
        </source>
        <auth username='libvirt'>
            <secret type='ceph' uuid='{uuid of secret}'/>
        </auth>
        <target ...

   **NOTE:** The ID used here is ``libvirt``, not the Ceph name
   ``client.libvirt`` as generated at step 3 of `Configuring Ceph`_. Ensure
   you use the ID component of the Ceph name you generated. If for some reason
   you need to regenerate the secret, you will have to execute
   ``sudo virsh secret-undefine {uuid}`` before executing
   ``sudo virsh secret-set-value`` again.

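Putting the pieces together, the finished ``<disk>`` element might look like
the following sketch. The monitor host names and the secret UUID are
placeholders for your own values, and the extra ``<host>`` entries are
optional::

    <disk type='network' device='disk'>
        <source protocol='rbd' name='libvirt-pool/new-libvirt-image'>
            <host name='{monitor-host-1}' port='6789'/>
            <host name='{monitor-host-2}' port='6789'/>
            <host name='{monitor-host-3}' port='6789'/>
        </source>
        <auth username='libvirt'>
            <secret type='ceph' uuid='{uuid of secret}'/>
        </auth>
        <target dev='vdb' bus='virtio'/>
    </disk>

You can confirm that the secret is defined with ``sudo virsh secret-list``,
and inspect the final domain definition with
``sudo virsh dumpxml {vm-domain-name}``.
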
Summary
=======

Once you have configured the VM for use with Ceph, you can start the VM.
To verify that the VM and Ceph are communicating, you may perform the
following procedures.


#. Check to see if Ceph is running::

        ceph health

#. Check to see if the VM is running. ::

        sudo virsh list

#. Check to see if the VM is communicating with Ceph. Replace
   ``{vm-domain-name}`` with the name of your VM domain::

        sudo virsh qemu-monitor-command --hmp {vm-domain-name} 'info block'

#. Check to see if the device from ``<target dev='vdb' bus='virtio'/>`` exists::

        virsh domblklist {vm-domain-name} --details

If everything looks okay, you may begin using the Ceph block device
within your VM.

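For example, inside the guest you might create a filesystem on the new device
and mount it. This is a sketch only; it assumes the RBD disk appeared as
``/dev/vdb`` and that the device may be safely formatted. ::

    sudo mkfs.ext4 /dev/vdb
    sudo mkdir /mnt/rbd
    sudo mount /dev/vdb /mnt/rbd
    df -h /mnt/rbd
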

.. _Installation: ../../install
.. _libvirt Virtualization API: http://www.libvirt.org
.. _Block Devices and OpenStack: ../rbd-openstack
.. _Block Devices and CloudStack: ../rbd-cloudstack
.. _Create a pool: ../../rados/operations/pools#create-a-pool
.. _Create a Ceph User: ../../rados/operations/user-management#add-a-user
.. _create an image: ../qemu-rbd#creating-images-with-qemu
.. _Virsh Command Reference: http://www.libvirt.org/virshcmdref.html
.. _KVM/VirtManager: https://help.ubuntu.com/community/KVM/VirtManager
.. _Ceph Authentication: ../../rados/configuration/auth-config-ref
.. _Disks: http://www.libvirt.org/formatdomain.html#elementsDisks
.. _rbd create: ../rados-rbd-cmds#creating-a-block-device-image
.. _User Management - User: ../../rados/operations/user-management#user
.. _User Management - CLI: ../../rados/operations/user-management#command-line-usage
.. _Virtio: http://www.linux-kvm.org/page/Virtio