=============================
Ceph RADOS Block Device (RBD)
=============================

If you use KVM or QEMU as your hypervisor, you can configure the Compute
service to use `Ceph RADOS block devices
(RBD) <https://ceph.com/ceph-storage/block-storage/>`__ for volumes.

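Before configuring the services, it can be useful to confirm that your
QEMU build includes RBD support. The following is only a minimal sketch,
not a required step; it assumes a pool named ``volumes`` already exists,
that ``/etc/ceph/ceph.conf`` points at a reachable cluster, and the image
name is a placeholder. If QEMU was built without RBD support, the first
command fails:

.. code-block:: console

   $ qemu-img create -f raw rbd:volumes/qemu-rbd-test 1G
   $ qemu-img info rbd:volumes/qemu-rbd-test
   $ rbd rm volumes/qemu-rbd-test
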
Ceph is a massively scalable, open source, distributed storage system.
It comprises an object store, a block store, and a POSIX-compliant
distributed file system. The platform can auto-scale to the exabyte
level and beyond. It runs on commodity hardware, is self-healing and
self-managing, and has no single point of failure. Due to its
open-source nature, you can install and use this portable storage
platform in public or private clouds.

.. figure:: ../../figures/ceph-architecture.png

    Ceph architecture

RADOS
~~~~~

Ceph is based on Reliable Autonomic Distributed Object Store (RADOS).
RADOS distributes objects across the storage cluster and replicates
objects for fault tolerance. RADOS contains the following major
components:

*Object Storage Device (OSD) Daemon*
 The storage daemon for the RADOS service, which interacts with the
 OSD (physical or logical storage unit for your data).
 You must run this daemon on each server in your cluster. Each OSD
 typically has an associated hard disk drive. For performance, you can
 pool hard disk drives with RAID arrays or logical volume management
 (LVM). By default, the following pools are created: data, metadata,
 and RBD.

*Meta-Data Server (MDS)*
 Stores metadata. MDSs build a POSIX file
 system on top of objects for Ceph clients. However, if you do not use
 the Ceph file system, you do not need a metadata server.

*Monitor (MON)*
 A lightweight daemon that handles all communications
 with external applications and clients. It also provides a consensus
 for distributed decision making in a Ceph/RADOS cluster. For
 instance, when you mount a Ceph share on a client, you point to the
 address of a MON server. It checks the state and the consistency of
 the data. In an ideal setup, you run at least three ``ceph-mon``
 daemons on separate servers.

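For example, mounting the Ceph file system on a client points at one or
more monitor addresses. This is only a sketch; the monitor address,
mount point, user name, and secret file are placeholders for your own
cluster:

.. code-block:: console

   # mount -t ceph 192.168.0.10:6789:/ /mnt/ceph \
     -o name=admin,secretfile=/etc/ceph/admin.secret
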
Ways to store, use, and expose data
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To store and access your data, you can use the following storage
systems:

*RADOS*
 Use as an object store; this is the default storage mechanism.

*RBD*
 Use as a block device. The Linux kernel RBD (RADOS block
 device) driver allows striping a Linux block device over multiple
 distributed object store data objects. It is compatible with the KVM
 RBD image (see the ``rbd`` command-line example after this list).

*CephFS*
 Use as a file system. CephFS is a POSIX-compliant distributed file
 system.

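As a sketch of the block-device workflow, the ``rbd`` command-line tool
can create an image in a pool and map it through the Linux kernel RBD
driver; the pool and image names below are placeholders:

.. code-block:: console

   $ rbd create --size 1024 volumes/test-image
   # rbd map volumes/test-image
   $ rbd showmapped

Once mapped, the image appears as a local block device (typically
``/dev/rbd0``) that you can partition, format, and mount like any other
disk.
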
Ceph exposes RADOS; you can access it through the following interfaces:

*RADOS Gateway*
 A RESTful interface compatible with OpenStack Object Storage and
 Amazon S3 (see `RADOS_Gateway
 <https://ceph.com/wiki/RADOS_Gateway>`__).

*librados*
 and its related C/C++ bindings (a Python example follows this list).

*RBD and QEMU-RBD*
 Linux kernel and QEMU block devices that stripe
 data across multiple objects.

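In addition to the C/C++ bindings, Python bindings for librados are
available. The following sketch assumes the ``python-rados`` package is
installed, a pool named ``data`` exists, and the keyring referenced by
``/etc/ceph/ceph.conf`` has access to it:

.. code-block:: python

   import rados

   # Connect to the cluster described by the local ceph.conf.
   cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
   cluster.connect()

   # Open an I/O context on the "data" pool and store a small object.
   ioctx = cluster.open_ioctx('data')
   ioctx.write_full('hello-object', b'stored through librados')
   print(ioctx.read('hello-object'))

   ioctx.close()
   cluster.shutdown()
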
Driver options
~~~~~~~~~~~~~~

The following table contains the configuration options supported by the
Ceph RADOS Block Device driver.

.. include:: ../../tables/cinder-storage_ceph.inc

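As a minimal sketch, a back end that uses this driver might be defined
in ``cinder.conf`` as follows. The pool, user, and secret UUID shown
here are placeholders; replace them with values from your own
deployment:

.. code-block:: ini

   [DEFAULT]
   enabled_backends = ceph

   [ceph]
   volume_driver = cinder.volume.drivers.rbd.RBDDriver
   volume_backend_name = ceph
   rbd_pool = volumes
   rbd_ceph_conf = /etc/ceph/ceph.conf
   rbd_user = cinder
   rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337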