=================================
 Using libvirt with Ceph RBD
=================================

.. index:: Ceph Block Device; libvirt

The ``libvirt`` library creates a virtual machine abstraction layer between
hypervisor interfaces and the software applications that use them. With
``libvirt``, developers and system administrators can focus on a common
management framework, common API, and common shell interface (i.e., ``virsh``)
to many different hypervisors, including:

- QEMU/KVM
- XEN
- LXC
- VirtualBox
- etc.

Ceph block devices support QEMU/KVM. You can use Ceph block devices with
software that interfaces with ``libvirt``. The following stack diagram
illustrates how ``libvirt`` and QEMU use Ceph block devices via ``librbd``.

.. ditaa::  +---------------------------------------------------+
            |                      libvirt                      |
            +------------------------+--------------------------+
                                     |
                                     | configures
                                     v
            +---------------------------------------------------+
            |                        QEMU                       |
            +---------------------------------------------------+
            |                       librbd                      |
            +------------------------+-+------------------------+
            |          OSDs          | |        Monitors        |
            +------------------------+ +------------------------+


The most common ``libvirt`` use case involves providing Ceph block devices to
cloud solutions like OpenStack or CloudStack. The cloud solution uses
``libvirt`` to interact with QEMU/KVM, and QEMU/KVM interacts with Ceph block
devices via ``librbd``. See `Block Devices and OpenStack`_ and `Block Devices
and CloudStack`_ for details. See `Installation`_ for installation details.

You can also use Ceph block devices with ``libvirt``, ``virsh`` and the
``libvirt`` API. See `libvirt Virtualization API`_ for details.

To create VMs that use Ceph block devices, use the procedures in the following
sections. The examples use ``libvirt-pool`` for the pool name,
``client.libvirt`` for the user name, and ``new-libvirt-image`` for the image
name. You may use any values you like, but ensure you replace those values
when executing commands in the subsequent procedures.

Configuring Ceph
================

To configure Ceph for use with ``libvirt``, perform the following steps:

#. `Create a pool`_ (or use the default). The following example uses the
   pool name ``libvirt-pool`` with 128 placement groups. ::

      ceph osd pool create libvirt-pool 128 128

   Verify the pool exists. ::

      ceph osd lspools

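   On more recent Ceph releases (Luminous and later), a newly created pool
   should also be initialized for RBD use before images are stored in it. This
   is an optional, version-dependent step; skip it if your release does not
   have the command. ::

      rbd pool init libvirt-pool
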
#. `Create a Ceph User`_ (or use ``client.admin`` for version 0.9.7 and
   earlier). The following example uses the Ceph user name ``client.libvirt``
   and references ``libvirt-pool``. ::

      ceph auth get-or-create client.libvirt mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=libvirt-pool'

   Verify the name exists. ::

      ceph auth list

   **NOTE**: ``libvirt`` will access Ceph using the ID ``libvirt``,
   not the Ceph name ``client.libvirt``. See `User Management - User`_ and
   `User Management - CLI`_ for a detailed explanation of the difference
   between ID and name.

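   If you prefer to inspect only the new entity rather than listing every one,
   ``ceph auth get`` prints the key and capabilities for a single name. ::

      ceph auth get client.libvirt
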
#. Use QEMU to `create an image`_ in your RBD pool.
   The following example uses the image name ``new-libvirt-image``
   and references ``libvirt-pool``. ::

      qemu-img create -f rbd rbd:libvirt-pool/new-libvirt-image 2G

   Verify the image exists. ::

      rbd -p libvirt-pool ls

   **NOTE:** You can also use `rbd create`_ to create an image, but creating it
   with QEMU also verifies that QEMU itself is working properly.

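   To see the size, format, and feature details of the image you just created,
   you can also run ``rbd info``. ::

      rbd info libvirt-pool/new-libvirt-image
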
.. tip:: Optionally, if you wish to enable debug logs and the admin socket for
   this client, you can add the following section to ``/etc/ceph/ceph.conf``::

      [client.libvirt]
      log file = /var/log/ceph/qemu-guest-$pid.log
      admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok

   The ``client.libvirt`` section name should match the cephx user you created
   above. If SELinux or AppArmor is enabled, note that this could prevent the
   client process (qemu via libvirt) from writing the logs or admin socket to
   the destination locations (``/var/log/ceph`` or ``/var/run/ceph``).

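   Once a guest that uses the RBD disk is running, you can query the admin
   socket from the hypervisor host. The exact file name depends on the QEMU
   process's pid and internal client id, so list the directory first; this is
   a sketch, adjust the path to the socket that actually appears. ::

      ls /var/run/ceph/
      sudo ceph --admin-daemon /var/run/ceph/ceph-client.libvirt.{pid}.{cctid}.asok perf dump
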


Preparing the VM Manager
========================

You may use ``libvirt`` without a VM manager, but you may find it simpler to
create your first domain with ``virt-manager``.

#. Install a virtual machine manager. See `KVM/VirtManager`_ for details. ::

      sudo apt-get install virt-manager

#. Download an OS image (if necessary).

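   For example, with ``wget`` (the URL placeholder below is illustrative; use
   whatever image you prefer, saving it under the path used later in this
   guide)::

      wget -O /path/to/image/recent-linux.img {os-image-url}
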
#. Launch the virtual machine manager. ::

      sudo virt-manager



Creating a VM
=============

To create a VM with ``virt-manager``, perform the following steps:

#. Press the **Create New Virtual Machine** button.

#. Name the new virtual machine domain. The examples below use the name
   ``libvirt-virtual-machine``. You may use any name you wish, but ensure you
   replace ``libvirt-virtual-machine`` with the name you choose in subsequent
   commandline and configuration examples. ::

      libvirt-virtual-machine

#. Import the image. ::

      /path/to/image/recent-linux.img

   **NOTE:** Import a recent image. Some older images may not rescan for
   virtual devices properly.

#. Configure and start the VM.

#. You may use ``virsh list`` to verify that the VM domain exists. ::

      sudo virsh list

#. Log in to the VM (root/root).

#. Stop the VM before configuring it for use with Ceph.

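If you prefer not to use the ``virt-manager`` GUI, an equivalent domain can
usually be created from the command line with ``virt-install``. This is a
minimal sketch under the assumption that ``virt-install`` is installed; option
names vary slightly between versions, so check ``virt-install --help``. ::

    sudo virt-install --name libvirt-virtual-machine \
         --memory 1024 --vcpus 1 \
         --disk path=/path/to/image/recent-linux.img,bus=virtio \
         --import --noautoconsole
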

Configuring the VM
==================

When configuring the VM for use with Ceph, it is important to use ``virsh``
where appropriate. Additionally, ``virsh`` commands often require root
privileges (i.e., ``sudo``) and will not return appropriate results or notify
you that root privileges are required. For a reference of ``virsh`` commands,
refer to `Virsh Command Reference`_.

#. Open the configuration file with ``virsh edit``. ::

      sudo virsh edit {vm-domain-name}

   Under ``<devices>`` there should be a ``<disk>`` entry. ::

      <devices>
          <emulator>/usr/bin/kvm</emulator>
          <disk type='file' device='disk'>
              <driver name='qemu' type='raw'/>
              <source file='/path/to/image/recent-linux.img'/>
              <target dev='vda' bus='virtio'/>
              <address type='drive' controller='0' bus='0' unit='0'/>
          </disk>


   Replace ``/path/to/image/recent-linux.img`` with the path to the OS image.
   The minimum kernel for using the faster ``virtio`` bus is 2.6.25. See
   `Virtio`_ for details.

   **IMPORTANT:** Use ``sudo virsh edit`` instead of a text editor. If you edit
   the configuration file under ``/etc/libvirt/qemu`` with a text editor,
   ``libvirt`` may not recognize the change. If there is a discrepancy between
   the contents of the XML file under ``/etc/libvirt/qemu`` and the result of
   ``sudo virsh dumpxml {vm-domain-name}``, then your VM may not work
   properly.

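   If you want to confirm what ``libvirt`` actually has loaded for the domain
   (rather than what is on disk), dump the live definition and inspect the
   ``<disk>`` entries. ::

      sudo virsh dumpxml {vm-domain-name} | grep -A5 '<disk'
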

#. Add the Ceph RBD image you created as a ``<disk>`` entry. ::

      <disk type='network' device='disk'>
          <source protocol='rbd' name='libvirt-pool/new-libvirt-image'>
              <host name='{monitor-host}' port='6789'/>
          </source>
          <target dev='vdb' bus='virtio'/>
      </disk>

   Replace ``{monitor-host}`` with the name of your host, and replace the
   pool and/or image name as necessary. You may add multiple ``<host>``
   entries for your Ceph monitors. The ``dev`` attribute is the logical
   device name that will appear under the ``/dev`` directory of your
   VM; pick one that is not already used by another disk (the OS disk
   above already uses ``vda``). The optional ``bus`` attribute indicates
   the type of disk device to emulate. The valid settings are driver
   specific (e.g., "ide", "scsi", "virtio", "xen", "usb" or "sata").

   See `Disks`_ for details of the ``<disk>`` element, and its child elements
   and attributes.

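   If you are unsure which monitor hosts and ports to list, the cluster can
   report them (run this on a node that has an admin keyring). ::

      ceph mon dump
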
#. Save the file.

#. If your Ceph Storage Cluster has `Ceph Authentication`_ enabled (it does by
   default), you must generate a secret. ::

      cat > secret.xml <<EOF
      <secret ephemeral='no' private='no'>
              <usage type='ceph'>
                      <name>client.libvirt secret</name>
              </usage>
      </secret>
      EOF

#. Define the secret. ::

      sudo virsh secret-define --file secret.xml
      <uuid of secret is output here>

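   You can list the secrets that ``libvirt`` knows about, together with their
   UUIDs, at any time. ::

      sudo virsh secret-list
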
#. Get the ``client.libvirt`` key and save the key string to a file. ::

      ceph auth get-key client.libvirt | sudo tee client.libvirt.key

#. Set the UUID of the secret. ::

      sudo virsh secret-set-value --secret {uuid of secret} --base64 $(cat client.libvirt.key) && rm client.libvirt.key secret.xml

   You must also set the secret manually by adding the following ``<auth>``
   entry to the ``<disk>`` element you entered earlier (replacing the
   ``uuid`` value with the result from the command line example above). ::

      sudo virsh edit {vm-domain-name}

   Then, add the ``<auth>`` element to the domain configuration file::

      ...
      </source>
      <auth username='libvirt'>
              <secret type='ceph' uuid='9ec59067-fdbc-a6c0-03ff-df165c0587b8'/>
      </auth>
      <target ...


   **NOTE:** The ID used in the example is ``libvirt``, not the Ceph name
   ``client.libvirt`` as generated at step 2 of `Configuring Ceph`_. Ensure
   you use the ID component of the Ceph name you generated. If for some reason
   you need to regenerate the secret, you will have to execute
   ``sudo virsh secret-undefine {uuid}`` before executing
   ``sudo virsh secret-set-value`` again.


Summary
=======

Once you have configured the VM for use with Ceph, you can start the VM.
To verify that the VM and Ceph are communicating, you may perform the
following procedures.


#. Check to see if Ceph is running::

      ceph health

#. Check to see if the VM is running. ::

      sudo virsh list

#. Check to see if the VM is communicating with Ceph. Replace
   ``{vm-domain-name}`` with the name of your VM domain::

      sudo virsh qemu-monitor-command --hmp {vm-domain-name} 'info block'

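   You can also list the domain's block devices and their sources; the
   RBD-backed disk should show the ``libvirt-pool/new-libvirt-image``
   source. ::

      sudo virsh domblklist {vm-domain-name}
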
#. Check to see if the device from ``<target dev='vdb' bus='virtio'/>`` appears
   under ``/dev`` or under ``/proc/partitions`` inside the VM. ::

      ls /dev
      cat /proc/partitions

If everything looks okay, you may begin using the Ceph block device
within your VM.

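As a final smoke test, you could create a filesystem on the new disk and mount
it inside the guest. This sketch assumes the RBD-backed disk appears as
``/dev/vdb`` in the guest (the actual name depends on the ``<target dev=...>``
setting and on how the guest enumerates its disks); note that ``mkfs``
destroys any data already on the device. ::

    sudo mkfs.ext4 /dev/vdb
    sudo mkdir -p /mnt/rbd
    sudo mount /dev/vdb /mnt/rbd
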

.. _Installation: ../../install
.. _libvirt Virtualization API: http://www.libvirt.org
.. _Block Devices and OpenStack: ../rbd-openstack
.. _Block Devices and CloudStack: ../rbd-cloudstack
.. _Create a pool: ../../rados/operations/pools#create-a-pool
.. _Create a Ceph User: ../../rados/operations/user-management#add-a-user
.. _create an image: ../qemu-rbd#creating-images-with-qemu
.. _Virsh Command Reference: http://www.libvirt.org/virshcmdref.html
.. _KVM/VirtManager: https://help.ubuntu.com/community/KVM/VirtManager
.. _Ceph Authentication: ../../rados/configuration/auth-config-ref
.. _Disks: http://www.libvirt.org/formatdomain.html#elementsDisks
.. _rbd create: ../rados-rbd-cmds#creating-a-block-device-image
.. _User Management - User: ../../rados/operations/user-management#user
.. _User Management - CLI: ../../rados/operations/user-management#command-line-usage
.. _Virtio: http://www.linux-kvm.org/page/Virtio