=============================
 Block Devices and CloudStack
=============================

You may use Ceph Block Device images with CloudStack 4.0 and higher through
``libvirt``, which configures the QEMU interface to ``librbd``. Ceph stripes
block device images as objects across the cluster, which means that large Ceph
Block Device images have better performance than a standalone server!

To use Ceph Block Devices with CloudStack 4.0 and higher, you must install QEMU,
``libvirt``, and CloudStack first. We recommend using a separate physical host
for your CloudStack installation. CloudStack recommends a minimum of 4GB of RAM
and a dual-core processor, but more CPU and RAM will improve performance. The
following diagram depicts the CloudStack/Ceph technology stack.


.. ditaa::  +---------------------------------------------------+
            |                    CloudStack                     |
            +---------------------------------------------------+
            |                      libvirt                      |
            +------------------------+--------------------------+
                                     |
                                     | configures
                                     v
            +---------------------------------------------------+
            |                       QEMU                        |
            +---------------------------------------------------+
            |                      librbd                       |
            +---------------------------------------------------+
            |                     librados                      |
            +------------------------+-+------------------------+
            |          OSDs          | |        Monitors        |
            +------------------------+ +------------------------+

.. important:: To use Ceph Block Devices with CloudStack, you must have
   access to a running Ceph Storage Cluster.

CloudStack integrates with Ceph's block devices to provide a back end for
CloudStack's Primary Storage. The instructions below detail the setup for
CloudStack Primary Storage.

.. note:: We recommend installing with Ubuntu 14.04 or later so that
   you can use package installation instead of having to compile
   libvirt from source.

Installing and configuring QEMU for use with CloudStack doesn't require any
special handling. Ensure that you have a running Ceph Storage Cluster. Install
QEMU and configure it for use with Ceph (see `Install and Configure QEMU`_);
then, install ``libvirt`` version 0.9.13 or higher (you may need to compile
from source; see `Install and Configure libvirt`_) and ensure it is running
with Ceph. For details on preparing the hypervisor host itself, see
`KVM Hypervisor Host Installation`_.

.. note:: Ubuntu 14.04 and CentOS 7.2 ship ``libvirt`` with RBD storage
   pool support enabled by default.

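As a quick sanity check that your QEMU build includes RBD support (a generic
shell check, not specific to CloudStack), you can list QEMU's supported image
formats and look for ``rbd``::

        qemu-img --help | grep rbd

If ``rbd`` appears among the supported formats, QEMU was linked against
``librbd``.
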
.. index:: pools; CloudStack

Create a Pool
=============

By default, Ceph block devices use the ``rbd`` pool. Create a pool for
CloudStack Primary Storage. Ensure your Ceph cluster is running, then create
the pool. ::

        ceph osd pool create cloudstack

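For example, on releases where ``ceph osd pool create`` requires an explicit
placement group count, and assuming 128 placement groups suits your cluster
(an illustrative value, not a recommendation), you might run::

        ceph osd pool create cloudstack 128 128
        rbd pool init cloudstack

``rbd pool init`` marks the pool for use by RBD and is available on recent
Ceph releases.
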
See `Create a Pool`_ for details on specifying the number of placement groups
for your pools, and `Placement Groups`_ for details on the number of placement
groups you should set for your pools.

Create a Ceph User
==================

To access the Ceph cluster we need a Ceph user with the correct credentials
for the ``cloudstack`` pool we just created. Although we could use
``client.admin`` for this, it's recommended to create a user with access only
to the ``cloudstack`` pool. ::

        ceph auth get-or-create client.cloudstack mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=cloudstack'

Use the information returned by the command in the next step when adding the
Primary Storage.

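CloudStack will ask for the cephx key of this user. To print just the key (a
standard ``ceph`` CLI command), run::

        ceph auth get-key client.cloudstack
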
See `User Management`_ for additional details.

Add Primary Storage
===================

To add primary storage, refer to `Add Primary Storage (4.2.0)`_. The steps to
add a Ceph block device include:

#. Log in to the CloudStack UI.
#. Click **Infrastructure** on the left side navigation bar.
#. Select the Zone you want to use for Primary Storage.
#. Click the **Compute** tab.
#. Select **View All** on the `Primary Storage` node in the diagram.
#. Click **Add Primary Storage**.
#. Follow the CloudStack instructions.

   - For **Protocol**, select ``RBD``.
   - Add cluster information (cephx is supported), as illustrated below.
     Note: Do not include the ``client.`` part of the user.
   - Add ``rbd`` as a tag.

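As a concrete illustration, the RBD-related fields of the dialog might be
filled in as follows (the monitor hostname and key are placeholders, and field
labels may vary between CloudStack versions)::

        Protocol:      RBD
        RADOS Monitor: mon1.example.com
        RADOS Pool:    cloudstack
        RADOS User:    cloudstack
        RADOS Secret:  <key printed by ceph auth get-key client.cloudstack>
        Storage Tags:  rbd
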
Create a Disk Offering
======================

To create a new disk offering, refer to `Create a New Disk Offering (4.2.0)`_.
Create a disk offering so that it matches the ``rbd`` tag. The
``StoragePoolAllocator`` will then choose the ``rbd``-tagged pool when
searching for a suitable storage pool. If the disk offering doesn't match the
``rbd`` tag, the ``StoragePoolAllocator`` may select a different storage pool
instead of the Ceph pool you created (e.g., ``cloudstack``).

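As an alternative to the UI, a disk offering can also be created through the
CloudStack API. A minimal sketch using the ``cloudmonkey`` CLI (the name,
description, and size below are illustrative)::

        create diskoffering name=ceph-20g displaytext="20 GB on Ceph RBD" disksize=20 tags=rbd

The ``tags=rbd`` parameter is what ties the offering to the RBD-tagged
Primary Storage.
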
Limitations
===========

- CloudStack will only bind to one monitor. (You can, however, create a
  round-robin DNS record that resolves to multiple monitors, as illustrated
  below.)

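For example, a round-robin DNS record could look like this in a BIND-style
zone file (hostnames and addresses are hypothetical)::

        ceph-mon.example.com.   IN  A   10.0.0.1
        ceph-mon.example.com.   IN  A   10.0.0.2
        ceph-mon.example.com.   IN  A   10.0.0.3

You would then enter ``ceph-mon.example.com`` as the monitor address when
adding the Primary Storage.
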
.. _Create a Pool: ../../rados/operations/pools#createpool
.. _Placement Groups: ../../rados/operations/placement-groups
.. _Install and Configure QEMU: ../qemu-rbd
.. _Install and Configure libvirt: ../libvirt
.. _KVM Hypervisor Host Installation: http://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.2.0/html/Installation_Guide/hypervisor-kvm-install-flow.html
.. _Add Primary Storage (4.2.0): http://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.2.0/html/Admin_Guide/primary-storage-add.html
.. _Create a New Disk Offering (4.2.0): http://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.2.0/html/Admin_Guide/compute-disk-service-offerings.html#creating-disk-offerings
.. _User Management: ../../rados/operations/user-management