===================
 Ceph Block Device
===================

.. index:: Ceph Block Device; introduction

A block is a sequence of bytes (for example, a 512-byte block of data).
Block-based storage interfaces are the most common way to store data with
rotating media such as hard disks, CDs, floppy disks, and even traditional
9-track tape. The ubiquity of block device interfaces makes a virtual block
device an ideal candidate to interact with a mass data storage system like Ceph.

Ceph block devices are thin-provisioned, resizable, and store data striped
over multiple OSDs in a Ceph cluster. Ceph block devices leverage
:abbr:`RADOS (Reliable Autonomic Distributed Object Store)` capabilities
such as snapshotting, replication, and consistency. Ceph's
:abbr:`RADOS (Reliable Autonomic Distributed Object Store)` Block Devices (RBD)
interact with OSDs using kernel modules or the ``librbd`` library.

.. ditaa::  +------------------------+ +------------------------+
            |     Kernel Module      | |         librbd         |
            +------------------------+-+------------------------+
            |                  RADOS Protocol                   |
            +------------------------+-+------------------------+
            |          OSDs          | |        Monitors        |
            +------------------------+ +------------------------+
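
For a quick feel for the ``librbd`` path in the diagram above, the Python
binding that ships with Ceph (the ``rados`` and ``rbd`` modules) can create
and exercise an image in a few lines. The following is a minimal sketch, not
a complete program: it assumes a reachable cluster configured in
``/etc/ceph/ceph.conf``, an existing pool named ``rbd``, and an image name
``test-image`` chosen purely for illustration.

.. code-block:: python

   import rados
   import rbd

   # Connect using the standard config file; the pool and image names
   # below are illustrative assumptions, not required values.
   cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
   cluster.connect()
   try:
       ioctx = cluster.open_ioctx('rbd')              # assumed pool
       try:
           # Images are thin-provisioned: a 4 GiB image allocates no
           # space until data is actually written.
           rbd.RBD().create(ioctx, 'test-image', 4 * 1024**3)
           image = rbd.Image(ioctx, 'test-image')
           try:
               image.write(b'hello rbd', 0)   # librbd stripes writes over OSDs
               print(image.read(0, 9))
           finally:
               image.close()
       finally:
           ioctx.close()
   finally:
       cluster.shutdown()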

.. note:: Kernel modules can use Linux page caching. For ``librbd``-based
   applications, Ceph supports `RBD Caching`_.
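
The cache is configured per client through ordinary Ceph options (see `RBD
Caching`_ for the full reference). As a hedged sketch of how a ``librbd``
application might opt in before connecting, using real option names from that
reference but example values:

.. code-block:: python

   import rados

   cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
   # Enable writeback caching for this client only; sizes are examples.
   cluster.conf_set('rbd_cache', 'true')
   cluster.conf_set('rbd_cache_size', str(32 * 1024 * 1024))       # 32 MiB
   cluster.conf_set('rbd_cache_max_dirty', str(24 * 1024 * 1024))  # 24 MiB
   cluster.connect()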

Ceph's block devices deliver high performance with infinite scalability to
`kernel modules`_, to :abbr:`KVMs (kernel-based virtual machines)` such as
`QEMU`_, and to cloud-based computing systems like `OpenStack`_ and
`CloudStack`_ that rely on libvirt and QEMU to integrate with Ceph block
devices. You can use the same cluster to operate the `Ceph RADOS Gateway`_,
the `Ceph FS filesystem`_, and Ceph block devices simultaneously.

.. important:: To use Ceph Block Devices, you must have access to a running
   Ceph cluster.
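
A quick way to confirm you have that access is to connect and ask the cluster
for basic status. This sketch again assumes the default configuration path
and an authorized keyring:

.. code-block:: python

   import rados

   cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
   cluster.connect()
   print('cluster FSID:', cluster.get_fsid())   # unique cluster identifier
   print('kB used:', cluster.get_cluster_stats()['kb_used'])
   cluster.shutdown()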

.. toctree::
   :maxdepth: 1

   Commands <rados-rbd-cmds>
   Kernel Modules <rbd-ko>
   Snapshots <rbd-snapshot>
   Mirroring <rbd-mirroring>
   QEMU <qemu-rbd>
   libvirt <libvirt>
   Cache Settings <rbd-config-ref>
   OpenStack <rbd-openstack>
   CloudStack <rbd-cloudstack>
   RBD Replay <rbd-replay>

.. toctree::
   :maxdepth: 2

   Manpages <man/index>

.. toctree::
   :maxdepth: 2

   APIs <api/index>

.. _RBD Caching: ../rbd-config-ref/
.. _kernel modules: ../rbd-ko/
.. _QEMU: ../qemu-rbd/
.. _OpenStack: ../rbd-openstack/
.. _CloudStack: ../rbd-cloudstack/
.. _Ceph RADOS Gateway: ../../radosgw/
.. _Ceph FS filesystem: ../../cephfs/