=================================
 Using libvirt with Ceph RBD
=================================

.. index:: Ceph Block Device; libvirt
The ``libvirt`` library creates a virtual machine abstraction layer between
hypervisor interfaces and the software applications that use them. With
``libvirt``, developers and system administrators can focus on a common
management framework, common API, and common shell interface (i.e., ``virsh``)
to many different hypervisors, including:

- QEMU/KVM
- XEN
- LXC
- VirtualBox
- etc.
Ceph block devices support QEMU/KVM. You can use Ceph block devices with
software that interfaces with ``libvirt``. The following stack diagram
illustrates how ``libvirt`` and QEMU use Ceph block devices via ``librbd``.
.. ditaa::  +---------------------------------------------------+
            |                     libvirt                       |
            +------------------------+--------------------------+
                                     |
                                     | configures
                                     v
            +---------------------------------------------------+
            |                       QEMU                        |
            +---------------------------------------------------+
            |                      librbd                       |
            +------------------------+-+------------------------+
            |          OSDs          | |        Monitors        |
            +------------------------+ +------------------------+
The most common ``libvirt`` use case involves providing Ceph block devices to
cloud solutions like OpenStack or CloudStack. The cloud solution uses
``libvirt`` to interact with QEMU/KVM, and QEMU/KVM interacts with Ceph block
devices via ``librbd``. See `Block Devices and OpenStack`_ and `Block Devices
and CloudStack`_ for details. See `Installation`_ for installation details.
You can also use Ceph block devices with ``libvirt``, ``virsh`` and the
``libvirt`` API. See `libvirt Virtualization API`_ for details.
To create VMs that use Ceph block devices, use the procedures in the following
sections. In the examples below, we use ``libvirt-pool`` for the pool name,
``client.libvirt`` for the user name, and ``new-libvirt-image`` for the image
name. You may use any values you like, but ensure you replace those values
when executing commands in the subsequent procedures.
Configuring Ceph
================

To configure Ceph for use with ``libvirt``, perform the following steps:
#. `Create a pool`_ (or use the default). The following example uses the
   pool name ``libvirt-pool`` with 128 placement groups. ::

        ceph osd pool create libvirt-pool 128 128

   Verify the pool exists. ::

        ceph osd lspools
#. `Create a Ceph User`_ (or use ``client.admin`` for version 0.9.7 and
   earlier). The following example uses the Ceph user name ``client.libvirt``
   and references ``libvirt-pool``. ::

        ceph auth get-or-create client.libvirt mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=libvirt-pool'

   Verify the name exists. ::

        ceph auth ls

   **NOTE**: ``libvirt`` will access Ceph using the ID ``libvirt``,
   not the Ceph name ``client.libvirt``. See `User Management - User`_ and
   `User Management - CLI`_ for a detailed explanation of the difference
   between the ID and the name.
#. Use QEMU to `create an image`_ in your RBD pool.
   The following example uses the image name ``new-libvirt-image``
   and references ``libvirt-pool``. ::

        qemu-img create -f rbd rbd:libvirt-pool/new-libvirt-image 2G

   Verify the image exists. ::

        rbd -p libvirt-pool ls

   **NOTE:** You can also use `rbd create`_ to create an image, but we
   recommend ensuring that QEMU is working properly.
.. tip:: Optionally, if you wish to enable debug logs and the admin socket for
   this client, you can add the following section to ``/etc/ceph/ceph.conf``::

        [client.libvirt]
        log file = /var/log/ceph/qemu-guest-$pid.log
        admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok

   The ``client.libvirt`` section name should match the cephx user you created
   above. If SELinux or AppArmor is enabled, note that this could prevent the
   client process (qemu via libvirt) from writing the logs or admin socket to
   the destination locations (``/var/log/ceph`` or ``/var/run/ceph``).
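As the NOTE in step 2 indicates, a cephx "name" has the form ``TYPE.ID``, and
``libvirt`` wants only the ID component. The relationship can be sketched in
shell (a minimal illustration using parameter expansion, not a Ceph command):

```shell
#!/bin/sh
# A cephx "name" is TYPE.ID, e.g. "client.libvirt". libvirt configuration
# (the <auth username='...'> attribute and the virsh secret) uses only the
# ID component, "libvirt".
ceph_name="client.libvirt"
ceph_id="${ceph_name#client.}"   # strip the "client." type prefix
echo "$ceph_id"                  # prints: libvirt
```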
Preparing the VM Manager
========================

You may use ``libvirt`` without a VM manager, but you may find it simpler to
create your first domain with ``virt-manager``.
#. Install a virtual machine manager. See `KVM/VirtManager`_ for details. ::

        sudo apt-get install virt-manager

#. Download an OS image (if necessary).

#. Launch the virtual machine manager. ::

        sudo virt-manager
Creating a VM
=============

To create a VM with ``virt-manager``, perform the following steps:
#. Press the **Create New Virtual Machine** button.

#. Name the new virtual machine domain. The examples below use the name
   ``libvirt-virtual-machine``. You may use any name you wish, but ensure
   you replace ``libvirt-virtual-machine`` with the name you choose in
   subsequent commandline and configuration examples. ::

        libvirt-virtual-machine

#. Import the image. ::

        /path/to/image/recent-linux.img

   **NOTE:** Import a recent image. Some older images may not rescan for
   virtual devices properly.

#. Configure and start the VM.

#. You may use ``virsh list`` to verify the VM domain exists. ::

        sudo virsh list

#. Log in to the VM (root/root).

#. Stop the VM before configuring it for use with Ceph.
Configuring the VM
==================

When configuring the VM for use with Ceph, it is important to use ``virsh``
where appropriate. Additionally, ``virsh`` commands often require root
privileges (i.e., ``sudo``) and will not return appropriate results or notify
you that root privileges are required. For a reference of ``virsh`` commands,
refer to `Virsh Command Reference`_.
#. Open the configuration file with ``virsh edit``. ::

        sudo virsh edit {vm-domain-name}

   Under ``<devices>`` there should be a ``<disk>`` entry. ::

        <devices>
                <emulator>/usr/bin/kvm</emulator>
                <disk type='file' device='disk'>
                        <driver name='qemu' type='raw'/>
                        <source file='/path/to/image/recent-linux.img'/>
                        <target dev='vda' bus='virtio'/>
                        <address type='drive' controller='0' bus='0' unit='0'/>
                </disk>

   Replace ``/path/to/image/recent-linux.img`` with the path to the OS image.
   The minimum kernel for using the faster ``virtio`` bus is 2.6.25. See
   `Virtio`_ for details.
   **IMPORTANT:** Use ``sudo virsh edit`` instead of a text editor. If you edit
   the configuration file under ``/etc/libvirt/qemu`` with a text editor,
   ``libvirt`` may not recognize the change. If there is a discrepancy between
   the contents of the XML file under ``/etc/libvirt/qemu`` and the result of
   ``sudo virsh dumpxml {vm-domain-name}``, then your VM may not work
   properly.
#. Add the Ceph RBD image you created as a ``<disk>`` entry. ::

        <disk type='network' device='disk'>
                <source protocol='rbd' name='libvirt-pool/new-libvirt-image'>
                        <host name='{monitor-host}' port='6789'/>
                </source>
                <target dev='vda' bus='virtio'/>
        </disk>

   Replace ``{monitor-host}`` with the name of your host, and replace the
   pool and/or image name as necessary. You may add multiple ``<host>``
   entries for your Ceph monitors. The ``dev`` attribute is the logical
   device name that will appear under the ``/dev`` directory of your
   VM. The optional ``bus`` attribute indicates the type of disk device to
   emulate. The valid settings are driver specific (e.g., "ide", "scsi",
   "virtio", "xen", "usb" or "sata").

   See `Disks`_ for details of the ``<disk>`` element, and its child elements
   and attributes.
#. If your Ceph Storage Cluster has `Ceph Authentication`_ enabled (it does by
   default), you must generate a secret. ::

        cat > secret.xml <<EOF
        <secret ephemeral='no' private='no'>
                <usage type='ceph'>
                        <name>client.libvirt secret</name>
                </usage>
        </secret>
        EOF
#. Define the secret. ::

        sudo virsh secret-define --file secret.xml
        <uuid of secret is output here>

#. Get the ``client.libvirt`` key and save the key string to a file. ::

        ceph auth get-key client.libvirt | sudo tee client.libvirt.key

#. Set the UUID of the secret. ::

        sudo virsh secret-set-value --secret {uuid of secret} --base64 $(cat client.libvirt.key) && rm client.libvirt.key secret.xml
   You must also set the secret manually by adding the following ``<auth>``
   entry to the ``<disk>`` element you entered earlier (replacing the
   ``uuid`` value with the result from the command line example above). ::

        sudo virsh edit {vm-domain-name}

   Then, add the ``<auth></auth>`` element to the domain configuration file::

        ...
        </source>
        <auth username='libvirt'>
                <secret type='ceph' uuid='9ec59067-fdbc-a6c0-03ff-df165c0587b8'/>
        </auth>
        <target ...
   **NOTE:** The exemplary ID is ``libvirt``, not the Ceph name
   ``client.libvirt`` as generated at step 2 of `Configuring Ceph`_. Ensure
   you use the ID component of the Ceph name you generated. If for some reason
   you need to regenerate the secret, you will have to execute
   ``sudo virsh secret-undefine {uuid}`` before executing
   ``sudo virsh secret-set-value`` again.
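To tie the pieces together, the complete RBD ``<disk>`` entry (including the
``<auth>`` element) can be rendered from shell variables before pasting it
into ``virsh edit``. In this sketch, the monitor hostname is a hypothetical
placeholder and the UUID reuses the example value above; on a real system,
substitute your monitor's name and the UUID printed by
``sudo virsh secret-define``:

```shell
#!/bin/sh
# Sketch: render the RBD <disk> XML from shell variables.
# MON_HOST is a hypothetical placeholder; SECRET_UUID must be the UUID
# that `sudo virsh secret-define` printed on your system.
MON_HOST="mon1.example.com"
SECRET_UUID="9ec59067-fdbc-a6c0-03ff-df165c0587b8"
POOL="libvirt-pool"
IMAGE="new-libvirt-image"

cat <<EOF
<disk type='network' device='disk'>
  <source protocol='rbd' name='${POOL}/${IMAGE}'>
    <host name='${MON_HOST}' port='6789'/>
  </source>
  <auth username='libvirt'>
    <secret type='ceph' uuid='${SECRET_UUID}'/>
  </auth>
  <target dev='vda' bus='virtio'/>
</disk>
EOF
```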
Summary
=======

Once you have configured the VM for use with Ceph, you can start the VM.
To verify that the VM and Ceph are communicating, you may perform the
following procedures.
#. Check to see if Ceph is running::

        ceph health

#. Check to see if the VM is running. ::

        sudo virsh list

#. Check to see if the VM is communicating with Ceph. Replace
   ``{vm-domain-name}`` with the name of your VM domain::

        sudo virsh qemu-monitor-command --hmp {vm-domain-name} 'info block'

#. Check to see if the device from ``<target dev='vda' bus='virtio'/>`` appears
   under ``/dev`` or under ``/proc/partitions``. ::

        ls /dev
        cat /proc/partitions
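The last check can be scripted inside the guest. The ``/proc/partitions``
content below is a hypothetical sample so the logic is self-contained; on a
real VM, you would read the file directly with ``cat /proc/partitions``:

```shell
#!/bin/sh
# Sketch: verify that the virtio disk (target dev='vda') is visible to the
# guest by looking for its name in the last column of /proc/partitions.
# The sample text mimics a hypothetical guest's /proc/partitions output.
partitions="major minor  #blocks  name
 252        0    2097152 vda"

if printf '%s\n' "$partitions" | awk '{print $4}' | grep -qx 'vda'; then
    echo "vda present"
else
    echo "vda missing"
fi
```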
If everything looks okay, you may begin using the Ceph block device
within your VM.
.. _Installation: ../../install
.. _libvirt Virtualization API: http://www.libvirt.org
.. _Block Devices and OpenStack: ../rbd-openstack
.. _Block Devices and CloudStack: ../rbd-cloudstack
.. _Create a pool: ../../rados/operations/pools#create-a-pool
.. _Create a Ceph User: ../../rados/operations/user-management#add-a-user
.. _create an image: ../qemu-rbd#creating-images-with-qemu
.. _Virsh Command Reference: http://www.libvirt.org/virshcmdref.html
.. _KVM/VirtManager: https://help.ubuntu.com/community/KVM/VirtManager
.. _Ceph Authentication: ../../rados/configuration/auth-config-ref
.. _Disks: http://www.libvirt.org/formatdomain.html#elementsDisks
.. _rbd create: ../rados-rbd-cmds#creating-a-block-device-image
.. _User Management - User: ../../rados/operations/user-management#user
.. _User Management - CLI: ../../rados/operations/user-management#command-line-usage
.. _Virtio: http://www.linux-kvm.org/page/Virtio