==========================
 Block Device Quick Start
==========================
To use this guide, you must have executed the procedures in the `Storage
Cluster Quick Start`_ guide first. Ensure your :term:`Ceph Storage Cluster` is
in an ``active + clean`` state before working with the :term:`Ceph Block
Device`.

.. note:: The Ceph Block Device is also known as :term:`RBD` or :term:`RADOS`
   Block Device.
::

   /------------------\         /----------------\
   |    Admin Node    |<------->|   ceph-client  |
   |    ceph-deploy   |         |      ceph      |
   \------------------/         \----------------/
You may use a virtual machine for your ``ceph-client`` node, but do not
execute the following procedures on the same physical node as your Ceph
Storage Cluster nodes (unless you use a VM). See `FAQ`_ for details.

Install Ceph
============

#. Verify that you have an appropriate version of the Linux kernel.
   See `OS Recommendations`_ for details. ::

      lsb_release -a
      uname -r
#. On the admin node, use ``ceph-deploy`` to install Ceph on your
   ``ceph-client`` node. ::

      ceph-deploy install ceph-client
#. On the admin node, use ``ceph-deploy`` to copy the Ceph configuration file
   and the ``ceph.client.admin.keyring`` to the ``ceph-client``. ::

      ceph-deploy admin ceph-client

   The ``ceph-deploy`` utility copies the keyring to the ``/etc/ceph``
   directory. Ensure that the keyring file has appropriate read permissions
   (e.g., ``sudo chmod +r /etc/ceph/ceph.client.admin.keyring``).
#. On the admin node, use the ``ceph`` tool to `Create a Pool`_
   (we recommend the name ``rbd``).
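
As a minimal sketch, creating the pool might look like the following; the
placement-group count of ``8`` is only an assumption suitable for a small test
cluster, not a sizing recommendation (see `Create a Pool`_ for guidance)::

   ceph osd pool create rbd 8
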
Configure a Block Device
========================
#. On the ``ceph-client`` node, create a block device image. ::

      rbd create foo --size 4096 [-m {mon-IP}] [-k /path/to/ceph.client.admin.keyring]
#. On the ``ceph-client`` node, map the image to a block device. ::

      sudo rbd map foo --name client.admin [-m {mon-IP}] [-k /path/to/ceph.client.admin.keyring]
#. Use the block device by creating a file system on the ``ceph-client``
   node. ::

      sudo mkfs.ext4 -m0 /dev/rbd/rbd/foo

   This may take a few moments.
#. Mount the file system on the ``ceph-client`` node. ::

      sudo mkdir /mnt/ceph-block-device
      sudo mount /dev/rbd/rbd/foo /mnt/ceph-block-device
      cd /mnt/ceph-block-device
#. Optionally configure the block device to be automatically mapped and mounted
   at boot (and unmounted/unmapped at shutdown). See the `rbdmap manpage`_.
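
Once the image is mapped, you can inspect the mapping and, when you are done,
reverse the steps above. This is a sketch assuming the image name ``foo`` and
the mount point used in the steps above::

   # List images currently mapped via the kernel RBD driver.
   rbd showmapped

   # Tear down in reverse order: unmount, unmap, and (optionally)
   # delete the image.
   sudo umount /mnt/ceph-block-device
   sudo rbd unmap /dev/rbd/rbd/foo
   rbd rm foo
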
See `block devices`_ for additional details.
.. _Create a Pool: ../../rados/operations/pools#createpool
.. _Storage Cluster Quick Start: ../quick-ceph-deploy
.. _block devices: ../../rbd/rbd
.. _FAQ: http://wiki.ceph.com/How_Can_I_Give_Ceph_a_Try
.. _OS Recommendations: ../os-recommendations
.. _rbdmap manpage: ../../man/8/rbdmap