0716f28e13

The current document mentions that Ceph developers recommend Btrfs as a
file system for storage. This is not the case: Btrfs is not yet stable
enough for production clusters and is recommended only for non-critical
environments.

Change-Id: Ic8a977c6392e49c8516c27ea20374abc02abcb31
Closes-Bug: #1491767
<section xml:id="ceph-rados" xmlns="http://docbook.org/ns/docbook"
    xmlns:xi="http://www.w3.org/2001/XInclude"
    xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
    <title>Ceph RADOS Block Device (RBD)</title>
    <para>If you use KVM or QEMU as your hypervisor, you can configure
        the Compute service to use <link
        xlink:href="http://ceph.com/ceph-storage/block-storage/">
        Ceph RADOS block devices (RBD)</link> for volumes.</para>
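    <para>The compute node attaches RBD volumes through libvirt. The following
        snippet is a minimal, illustrative sketch of the compute-side settings;
        it assumes a cephx user named <literal>cinder</literal> and an existing
        libvirt secret that holds that user's key. Adjust the names and the UUID
        to match your deployment.</para>
    <programlisting language="ini"># /etc/nova/nova.conf on each compute node (illustrative values)
[libvirt]
# cephx user that libvirt authenticates as when attaching RBD volumes
rbd_user = cinder
# UUID of the libvirt secret that stores the key for client.cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337</programlisting>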
    <para>Ceph is a massively scalable, open source, distributed storage system. It is comprised of
        an object store, block store, and a POSIX-compliant distributed file system. The platform
        can auto-scale to the exabyte level and beyond. It runs on commodity hardware, is
        self-healing and self-managing, and has no single point of failure. Ceph is in the Linux
        kernel and is integrated with the OpenStack cloud operating system. Due to its open-source
        nature, you can install and use this portable storage platform in public or private clouds.
        <figure>
            <title>Ceph architecture</title>
            <mediaobject>
                <imageobject>
                    <imagedata fileref="../../../common/figures/ceph/ceph-architecture.png"
                        contentwidth="6in"/>
                </imageobject>
            </mediaobject>
        </figure>
    </para>
    <simplesect>
        <title>RADOS</title>
        <para>Ceph is based on <emphasis>RADOS: Reliable Autonomic Distributed Object
            Store</emphasis>. RADOS distributes objects across the storage cluster and
            replicates objects for fault tolerance. RADOS contains the following major
            components:</para>
        <itemizedlist>
            <listitem>
                <para><emphasis>Object Storage Device (OSD) Daemon</emphasis>. The storage daemon
                    for the RADOS service, which interacts with the OSD (physical or logical storage
                    unit for your data).</para>
                <para>You must run this daemon on each server in your cluster. For each OSD, you can
                    have an associated hard disk. For performance purposes, pool your hard
                    disks with RAID arrays, logical volume management (LVM), or B-tree file
                    system (<systemitem>Btrfs</systemitem>) pooling. By default, the following pools
                    are created: <literal>data</literal>, <literal>metadata</literal>, and
                    <literal>rbd</literal>.</para>
            </listitem>
            <listitem>
                <para><emphasis>Meta-Data Server (MDS)</emphasis>.
                    Stores metadata. MDSs build a POSIX file system on
                    top of objects for Ceph clients. However, if you
                    do not use the Ceph file system, you do not need a
                    metadata server.</para>
            </listitem>
            <listitem>
                <para><emphasis>Monitor (MON)</emphasis>. A lightweight daemon that handles all
                    communications with external applications and clients. It also provides
                    consensus for distributed decision making in a Ceph/RADOS cluster. For instance,
                    when you mount a Ceph file system on a client, you point to the address of a MON
                    server. It checks the state and the consistency of the data. In an ideal setup,
                    you should run at least three <code>ceph-mon</code> daemons on separate
                    servers (see the example after this list for how to inspect the monitors
                    and pools of a running cluster).</para>
            </listitem>
        </itemizedlist>
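        <para>As an illustration of how these daemons fit together, the following
            commands inspect a running cluster and mount CephFS through a MON
            address. The monitor address, secret file, and mount point are
            assumptions for the example; replace them with values from your own
            cluster.</para>
        <screen># Overall cluster health and monitor quorum
ceph -s
# The default pools: data, metadata, and rbd
ceph osd lspools
# The monitors that provide consensus
ceph mon stat
# Mount CephFS by pointing at a MON address (requires an MDS)
mount -t ceph 192.168.1.10:6789:/ /mnt/ceph -o name=admin,secretfile=/etc/ceph/admin.secret</screen>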
        <para>Ceph developers recommend XFS for production deployments and
            <systemitem>Btrfs</systemitem> for testing, development, and any non-critical
            deployments. Btrfs has the correct feature set and roadmap to serve Ceph in the
            long term, but XFS and ext4 provide the necessary stability for today's
            deployments.</para>
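        <para>For example, an OSD data disk is commonly formatted with XFS before it is
            handed to <systemitem>ceph-osd</systemitem>. The device name and mount point
            below are assumptions for the sketch; substitute your own.</para>
        <screen># Format a dedicated OSD disk with XFS and mount it for ceph-osd
mkfs.xfs -f /dev/sdb
mkdir -p /var/lib/ceph/osd/ceph-0
mount -o noatime,inode64 /dev/sdb /var/lib/ceph/osd/ceph-0</screen>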
        <note>
            <para>If using <systemitem>Btrfs</systemitem>, ensure that you use the correct version
                (see <link xlink:href="http://ceph.com/docs/master/start/os-recommendations/">Ceph
                Dependencies</link>).</para>
            <para>For more information about usable file systems, see <link
                xlink:href="http://ceph.com/ceph-storage/file-system/"
                >ceph.com/ceph-storage/file-system/</link>.</para>
        </note>
    </simplesect>
    <simplesect>
        <title>Ways to store, use, and expose data</title>
        <para>To store and access your data, you can use the following
            storage systems:</para>
        <itemizedlist>
            <listitem>
                <para><emphasis>RADOS</emphasis>. Use as an object store, the
                    default storage mechanism.</para>
            </listitem>
            <listitem>
                <para><emphasis>RBD</emphasis>. Use as a block device.
                    The Linux kernel RBD (RADOS block device) driver
                    allows striping a Linux block device over multiple
                    distributed object store data objects. It is
                    compatible with the KVM RBD image. See the example
                    after this list.</para>
            </listitem>
            <listitem>
                <para><emphasis>CephFS</emphasis>. Use as a
                    POSIX-compliant file system.</para>
            </listitem>
        </itemizedlist>
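        <para>The following sketch creates an RBD image and maps it as a Linux block
            device with the kernel RBD driver. The pool and image names are assumptions
            for the example.</para>
        <screen># Create a 1 GiB image in the rbd pool and map it on a client
rbd create --size 1024 rbd/myimage
rbd map rbd/myimage
# The image now appears as a local block device, for example /dev/rbd0
rbd showmapped</screen>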
        <para>Ceph exposes RADOS; you can access it through the following interfaces:</para>
        <itemizedlist>
            <listitem>
                <para><emphasis>RADOS Gateway</emphasis>. OpenStack Object Storage and Amazon-S3
                    compatible RESTful interface (see <link
                    xlink:href="http://ceph.com/wiki/RADOS_Gateway"
                    >RADOS_Gateway</link>).</para>
            </listitem>
            <listitem>
                <para><emphasis>librados</emphasis> and its related C/C++ bindings.</para>
            </listitem>
            <listitem>
                <para><emphasis>RBD and QEMU-RBD</emphasis>. Linux
                    kernel and QEMU block devices that stripe data
                    across multiple objects. See the example after
                    this list.</para>
            </listitem>
        </itemizedlist>
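        <para>As a brief illustration of the object and QEMU interfaces, the following
            commands store an object directly in RADOS and create a QEMU disk image
            backed by RBD. Pool, object, and image names are assumptions for the
            example.</para>
        <screen># Store and list an object through the rados command-line tool (built on librados)
rados -p data put greeting ./greeting.txt
rados -p data ls
# Create a 10 GiB QEMU disk image directly on RBD
qemu-img create -f raw rbd:rbd/qemu-disk 10G</screen>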
    </simplesect>
    <simplesect>
        <title>Driver options</title>
        <para>The following table contains the configuration options
            supported by the Ceph RADOS Block Device driver.</para>
        <note>
            <title>Deprecation notice</title>
            <para>The <option>volume_tmp_dir</option> option has been
                deprecated and replaced by <option>image_conversion_dir</option>.</para>
        </note>
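        <para>A minimal Block Storage configuration for this driver might look like the
            following sketch. It assumes a <literal>volumes</literal> pool, a cephx user
            named <literal>cinder</literal>, and the libvirt secret UUID shown earlier;
            see the table below for the full list of options.</para>
        <programlisting language="ini"># /etc/cinder/cinder.conf (illustrative values)
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
# Replaces the deprecated volume_tmp_dir option
image_conversion_dir = /var/lib/cinder/conversion</programlisting>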
        <xi:include
            href="../../../common/tables/cinder-storage_ceph.xml"/>
    </simplesect>
</section>