=================================
 Using libvirt with Ceph RBD
=================================

.. index:: Ceph Block Device; libvirt

The ``libvirt`` library creates a virtual machine abstraction layer between
hypervisor interfaces and the software applications that use them. With
``libvirt``, developers and system administrators can focus on a common
management framework, common API, and common shell interface (i.e., ``virsh``)
to many different hypervisors, including:

- QEMU/KVM
- XEN
- LXC
- VirtualBox
- etc.

Ceph block devices support QEMU/KVM. You can use Ceph block devices with
software that interfaces with ``libvirt``. The following stack diagram
illustrates how ``libvirt`` and QEMU use Ceph block devices via ``librbd``.

.. ditaa:: +---------------------------------------------------+
           |                      libvirt                      |
           +------------------------+--------------------------+
                                    |
                                    | configures
                                    v
           +---------------------------------------------------+
           |                       QEMU                        |
           +---------------------------------------------------+
           |                      librbd                       |
           +------------------------+-+------------------------+
           |          OSDs          | |        Monitors        |
           +------------------------+ +------------------------+

The most common ``libvirt`` use case involves providing Ceph block devices to
cloud solutions like OpenStack or CloudStack. The cloud solution uses
``libvirt`` to interact with QEMU/KVM, and QEMU/KVM interacts with Ceph block
devices via ``librbd``. See `Block Devices and OpenStack`_ and `Block Devices
and CloudStack`_ for details. See `Installation`_ for installation details.

You can also use Ceph block devices with ``libvirt``, ``virsh`` and the
``libvirt`` API. See `libvirt Virtualization API`_ for details.

To create VMs that use Ceph block devices, use the procedures in the following
sections. The examples use ``libvirt-pool`` for the pool name,
``client.libvirt`` for the user name, and ``new-libvirt-image`` for the image
name. You may use any value you like, but ensure you replace those values when
executing commands in the subsequent procedures.


Configuring Ceph
================

To configure Ceph for use with ``libvirt``, perform the following steps:

#. `Create a pool`_. The following example uses the
   pool name ``libvirt-pool`` with 128 placement groups. ::

        ceph osd pool create libvirt-pool 128 128

   Verify the pool exists. ::

        ceph osd lspools

#. Use the ``rbd`` tool to initialize the pool for use by RBD::

        rbd pool init <pool-name>

#. `Create a Ceph User`_ (or use ``client.admin`` for version 0.9.7 and
   earlier). The following example uses the Ceph user name ``client.libvirt``
   and references ``libvirt-pool``. ::

        ceph auth get-or-create client.libvirt mon 'profile rbd' osd 'profile rbd pool=libvirt-pool'

   Verify the name exists. ::

        ceph auth ls

   **NOTE**: ``libvirt`` will access Ceph using the ID ``libvirt``,
   not the Ceph name ``client.libvirt``. See `User Management - User`_ and
   `User Management - CLI`_ for a detailed explanation of the difference
   between ID and name.
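
   For example, the Ceph CLI takes the full name, while the ``<auth>``
   element shown later in this guide takes only the ID::

        ceph auth get client.libvirt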

#. Use QEMU to `create an image`_ in your RBD pool.
   The following example uses the image name ``new-libvirt-image``
   and references ``libvirt-pool``. ::

        qemu-img create -f rbd rbd:libvirt-pool/new-libvirt-image 2G

   Verify the image exists. ::

        rbd -p libvirt-pool ls

   **NOTE:** You can also use `rbd create`_ to create an image, but using
   QEMU as shown here also verifies that QEMU itself is working properly.
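
   For reference, the equivalent ``rbd`` command (the size is given in
   megabytes) would be::

        rbd create --size 2048 libvirt-pool/new-libvirt-image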

.. tip:: Optionally, if you wish to enable debug logs and the admin socket for
   this client, you can add the following section to ``/etc/ceph/ceph.conf``::

        [client.libvirt]
        log file = /var/log/ceph/qemu-guest-$pid.log
        admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok

   The ``client.libvirt`` section name should match the cephx user you created
   above. If SELinux or AppArmor is enabled, note that this could prevent the
   client process (qemu via libvirt) from writing the logs or admin socket to
   the destination locations (``/var/log/ceph`` or ``/var/run/ceph``).
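
   Once a guest is running, you can check that the socket was created and
   query it (the socket file name below is illustrative; the actual name
   embeds the process ID and client instance ID)::

        ls /var/run/ceph
        sudo ceph --admin-daemon /var/run/ceph/ceph-client.libvirt.2134.93869582.asok config show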


Preparing the VM Manager
========================

You may use ``libvirt`` without a VM manager, but you may find it simpler to
create your first domain with ``virt-manager``.

#. Install a virtual machine manager. See `KVM/VirtManager`_ for details. ::

        sudo apt-get install virt-manager

#. Download an OS image (if necessary).

#. Launch the virtual machine manager. ::

        sudo virt-manager


Creating a VM
=============

To create a VM with ``virt-manager``, perform the following steps:

#. Press the **Create New Virtual Machine** button.

#. Name the new virtual machine domain. The examples below use the name
   ``libvirt-virtual-machine``. You may use any name you wish, but ensure you
   replace ``libvirt-virtual-machine`` with the name you choose in subsequent
   command line and configuration examples. ::

        libvirt-virtual-machine

#. Import the image. ::

        /path/to/image/recent-linux.img

   **NOTE:** Import a recent image. Some older images may not rescan for
   virtual devices properly.

#. Configure and start the VM.

#. You may use ``virsh list`` to verify the VM domain exists. ::

        sudo virsh list

#. Log in to the VM (root/root).

#. Stop the VM before configuring it for use with Ceph.


Configuring the VM
==================

When configuring the VM for use with Ceph, it is important to use ``virsh``
where appropriate. Additionally, ``virsh`` commands often require root
privileges (i.e., ``sudo``) and will not return appropriate results or notify
you that root privileges are required. For a reference of ``virsh`` commands,
refer to `Virsh Command Reference`_.
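
For example, the following commonly used commands all require ``sudo``::

        sudo virsh list --all
        sudo virsh dumpxml {vm-domain-name}
        sudo virsh start {vm-domain-name}
        sudo virsh shutdown {vm-domain-name}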


#. Open the configuration file with ``virsh edit``. ::

        sudo virsh edit {vm-domain-name}

   Under ``<devices>`` there should be a ``<disk>`` entry. ::

        <devices>
                <emulator>/usr/bin/kvm</emulator>
                <disk type='file' device='disk'>
                        <driver name='qemu' type='raw'/>
                        <source file='/path/to/image/recent-linux.img'/>
                        <target dev='vda' bus='virtio'/>
                        <address type='drive' controller='0' bus='0' unit='0'/>
                </disk>

   Replace ``/path/to/image/recent-linux.img`` with the path to the OS image.
   The minimum kernel for using the faster ``virtio`` bus is 2.6.25. See
   `Virtio`_ for details.

   **IMPORTANT:** Use ``sudo virsh edit`` instead of a text editor. If you edit
   the configuration file under ``/etc/libvirt/qemu`` with a text editor,
   ``libvirt`` may not recognize the change. If there is a discrepancy between
   the contents of the XML file under ``/etc/libvirt/qemu`` and the result of
   ``sudo virsh dumpxml {vm-domain-name}``, then your VM may not work
   properly.
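
   To inspect the configuration that ``libvirt`` will actually use, you can
   dump the live domain definition at any time::

        sudo virsh dumpxml {vm-domain-name}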


#. Add the Ceph RBD image you created as a ``<disk>`` entry. ::

        <disk type='network' device='disk'>
                <source protocol='rbd' name='libvirt-pool/new-libvirt-image'>
                        <host name='{monitor-host}' port='6789'/>
                </source>
                <target dev='vdb' bus='virtio'/>
        </disk>

   Replace ``{monitor-host}`` with the host name of one of your Ceph monitors,
   and replace the pool and/or image name as necessary. You may add multiple
   ``<host>`` entries for your Ceph monitors, as shown in the example below.
   The ``dev`` attribute is the logical device name that will appear under the
   ``/dev`` directory of your VM; the example uses ``vdb`` because ``vda`` is
   already taken by the OS disk above. The optional ``bus`` attribute
   indicates the type of disk device to emulate. The valid settings are driver
   specific (e.g., "ide", "scsi", "virtio", "xen", "usb" or "sata").

   See `Disks`_ for details of the ``<disk>`` element, and its child elements
   and attributes.

#. Save the file.

#. If your Ceph Storage Cluster has `Ceph Authentication`_ enabled (it does by
   default), you must generate a secret. ::

        cat > secret.xml <<EOF
        <secret ephemeral='no' private='no'>
                <usage type='ceph'>
                        <name>client.libvirt secret</name>
                </usage>
        </secret>
        EOF

#. Define the secret. ::

        sudo virsh secret-define --file secret.xml
        <uuid of secret is output here>

#. Get the ``client.libvirt`` key and save the key string to a file. ::

        ceph auth get-key client.libvirt | sudo tee client.libvirt.key

#. Set the UUID of the secret. ::

        sudo virsh secret-set-value --secret {uuid of secret} --base64 $(cat client.libvirt.key) && rm client.libvirt.key secret.xml

   You must also set the secret in the domain configuration. Open it again
   with ``virsh edit``, and add the following ``<auth>`` entry to the
   ``<disk>`` element you entered earlier (replacing the ``uuid`` value with
   the result from the command line example above). ::

        sudo virsh edit {vm-domain-name}

   Then, add the ``<auth></auth>`` element to the domain configuration file::

        ...
        </source>
        <auth username='libvirt'>
                <secret type='ceph' uuid='9ec59067-fdbc-a6c0-03ff-df165c0587b8'/>
        </auth>
        <target ...

   **NOTE:** The ID used here is ``libvirt``, not the Ceph name
   ``client.libvirt`` as generated at step 3 of `Configuring Ceph`_. Ensure
   you use the ID component of the Ceph name you generated. If for some reason
   you need to regenerate the secret, you will have to execute
   ``sudo virsh secret-undefine {uuid}`` before executing
   ``sudo virsh secret-set-value`` again.
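
   You can list the secrets currently known to ``libvirt`` with::

        sudo virsh secret-list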


Summary
=======

Once you have configured the VM for use with Ceph, you can start the VM.
To verify that the VM and Ceph are communicating, you may perform the
following procedures.

#. Check to see if Ceph is running::

        ceph health

#. Check to see if the VM is running. ::

        sudo virsh list

#. Check to see if the VM is communicating with Ceph. Replace
   ``{vm-domain-name}`` with the name of your VM domain::

        sudo virsh qemu-monitor-command --hmp {vm-domain-name} 'info block'

#. Check to see if the device from ``<target dev='vdb' bus='virtio'/>`` appears
   under ``/dev`` or under ``/proc/partitions`` within the VM. ::

        ls /dev
        cat /proc/partitions

If everything looks okay, you may begin using the Ceph block device
within your VM.
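
For example, assuming the new disk appeared as ``/dev/vdb`` inside the VM, a
quick smoke test is to put a filesystem on it and mount it (note that this
destroys any existing data on the device)::

        sudo mkfs.ext4 /dev/vdb
        sudo mkdir /mnt/rbd
        sudo mount /dev/vdb /mnt/rbd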


.. _Installation: ../../install
.. _libvirt Virtualization API: http://www.libvirt.org
.. _Block Devices and OpenStack: ../rbd-openstack
.. _Block Devices and CloudStack: ../rbd-cloudstack
.. _Create a pool: ../../rados/operations/pools#create-a-pool
.. _Create a Ceph User: ../../rados/operations/user-management#add-a-user
.. _create an image: ../qemu-rbd#creating-images-with-qemu
.. _Virsh Command Reference: http://www.libvirt.org/virshcmdref.html
.. _KVM/VirtManager: https://help.ubuntu.com/community/KVM/VirtManager
.. _Ceph Authentication: ../../rados/configuration/auth-config-ref
.. _Disks: http://www.libvirt.org/formatdomain.html#elementsDisks
.. _rbd create: ../rados-rbd-cmds#creating-a-block-device-image
.. _User Management - User: ../../rados/operations/user-management#user
.. _User Management - CLI: ../../rados/operations/user-management#command-line-usage
.. _Virtio: http://www.linux-kvm.org/page/Virtio