===================
 Ceph Block Device
===================

.. index:: Ceph Block Device; introduction

A block is a sequence of bytes (for example, a 512-byte block of data).
Block-based storage interfaces are the most common way to store data with
rotating media such as hard disks, CDs, floppy disks, and even traditional
9-track tape. The ubiquity of block device interfaces makes a virtual block
device an ideal candidate to interact with a mass data storage system like Ceph.

Ceph block devices are thin-provisioned, resizable and store data striped over
multiple OSDs in a Ceph cluster. Ceph block devices leverage
:abbr:`RADOS (Reliable Autonomic Distributed Object Store)` capabilities
such as snapshotting, replication and consistency. Ceph's
:abbr:`RADOS (Reliable Autonomic Distributed Object Store)` Block Devices (RBD)
interact with OSDs using kernel modules or the ``librbd`` library.

.. ditaa::  +------------------------+ +------------------------+
            |     Kernel Module      | |         librbd         |
            +------------------------+-+------------------------+
            |                   RADOS Protocol                  |
            +------------------------+-+------------------------+
            |          OSDs          | |        Monitors        |
            +------------------------+ +------------------------+

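The ``librbd`` path shown above is also available from Python (see the librbd
entry in the table of contents below). The snippet that follows is only a
minimal sketch under assumed defaults: the ``/etc/ceph/ceph.conf`` path, the
pool name ``rbd``, and the image name ``myimage`` are placeholders for your
own environment.

.. code-block:: python

   import rados
   import rbd

   # Connect as a client, using the monitors and keyring referenced by
   # ceph.conf (path and credentials are assumptions for this sketch).
   cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
   cluster.connect()
   try:
       ioctx = cluster.open_ioctx('rbd')        # an existing pool
       try:
           # Create a thin-provisioned 4 GiB image; data written to it is
           # striped over RADOS objects stored on the cluster's OSDs.
           rbd.RBD().create(ioctx, 'myimage', 4 * 1024**3)

           image = rbd.Image(ioctx, 'myimage')
           try:
               image.write(b'hello ceph', 0)    # write 10 bytes at offset 0
               print('image size:', image.size())
           finally:
               image.close()
       finally:
           ioctx.close()
   finally:
       cluster.shutdown()
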
.. note:: Kernel modules can use Linux page caching. For ``librbd``-based
   applications, Ceph supports `RBD Caching`_.

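`RBD Caching`_ is controlled by client-side options such as ``rbd cache``. As a
hedged sketch, such options can be placed in the ``[client]`` section of
``ceph.conf`` or set programmatically before connecting; the values below are
illustrative, not recommendations.

.. code-block:: python

   import rados

   cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
   # Equivalent settings can live under [client] in ceph.conf; conf_set()
   # affects only this client instance.
   cluster.conf_set('rbd_cache', 'true')
   cluster.conf_set('rbd_cache_writethrough_until_flush', 'true')
   cluster.connect()
   # ... open an I/O context and use rbd.Image as usual ...
   cluster.shutdown()
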
Ceph's block devices deliver high performance with vast scalability, whether
they are accessed through `kernel modules`_, through
:abbr:`KVMs (Kernel-based Virtual Machines)` such as `QEMU`_, or through
cloud-based computing systems like `OpenStack`_ and `CloudStack`_ that rely on
libvirt and QEMU to integrate with Ceph block devices. You can use the same
cluster to operate the `Ceph RADOS Gateway`_, the `Ceph FS filesystem`_, and
Ceph block devices simultaneously.

.. important:: To use Ceph Block Devices, you must have access to a running
   Ceph cluster.

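One way to confirm that access from a client host, assuming the Python bindings
and a readable ``ceph.conf`` plus keyring are already in place (both assumptions
of this sketch), is to connect and query the cluster:

.. code-block:: python

   import rados

   cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
   cluster.connect(timeout=5)    # raises an error if no monitor can be reached
   print('cluster fsid:', cluster.get_fsid())
   print('usage:', cluster.get_cluster_stats())  # kb, kb_used, kb_avail, num_objects
   cluster.shutdown()
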
.. toctree::
   :maxdepth: 1

   Commands <rados-rbd-cmds>
   Kernel Modules <rbd-ko>
   Snapshots <rbd-snapshot>
   Mirroring <rbd-mirroring>
   QEMU <qemu-rbd>
   libvirt <libvirt>
   Cache Settings <rbd-config-ref>
   OpenStack <rbd-openstack>
   CloudStack <rbd-cloudstack>
   Manpage rbd <../../man/8/rbd>
   Manpage rbd-fuse <../../man/8/rbd-fuse>
   Manpage rbd-nbd <../../man/8/rbd-nbd>
   Manpage ceph-rbdnamer <../../man/8/ceph-rbdnamer>
   RBD Replay <rbd-replay>
   Manpage rbd-replay-prep <../../man/8/rbd-replay-prep>
   Manpage rbd-replay <../../man/8/rbd-replay>
   Manpage rbd-replay-many <../../man/8/rbd-replay-many>
   Manpage rbdmap <../../man/8/rbdmap>
   librbd <librbdpy>


.. _RBD Caching: ../rbd-config-ref/
.. _kernel modules: ../rbd-ko/
.. _QEMU: ../qemu-rbd/
.. _OpenStack: ../rbd-openstack
.. _CloudStack: ../rbd-cloudstack
.. _Ceph RADOS Gateway: ../../radosgw/
.. _Ceph FS filesystem: ../../cephfs/