<?xml version="1.0" encoding="utf-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
  xmlns:xi="http://www.w3.org/2001/XInclude"
  xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
  xml:id="module001-ch011-block-storage">
  <title>OpenStack Block Storage</title>
  <para><emphasis role="bold">Block Storage and OpenStack
      Compute</emphasis></para>
  <para>OpenStack provides two classes of block storage: "ephemeral"
    storage and persistent "volumes". Ephemeral storage exists only
    for the life of an instance; it persists across reboots of the
    guest operating system, but when the instance is deleted the
    associated storage is deleted with it. All instances have some
    ephemeral storage. Volumes are persistent virtualized block
    devices independent of any particular instance. A volume may be
    attached to only one instance at a time, but may be detached and
    reattached to a different instance while retaining all data, much
    like a USB drive.</para>
  <para><guilabel>Ephemeral Storage</guilabel></para>
  <para>Ephemeral storage is associated with a single unique instance.
    Its size is defined by the flavor of the instance.</para>
  <para>Data on ephemeral storage ceases to exist when the instance it
    is associated with is terminated. Rebooting the VM or restarting
    the host server, however, will not destroy ephemeral data. In the
    typical use case, an instance's root filesystem is stored on
    ephemeral storage. This is often an unpleasant surprise for people
    unfamiliar with the cloud model of computing.</para>
  <para>In addition to the ephemeral root volume, all flavors except
    the smallest, m1.tiny, provide an additional ephemeral block
    device ranging from 20GB for m1.small through 160GB for m1.xlarge
    by default (these sizes are configurable). It is presented as a
    raw block device with no partition table or filesystem. Cloud
    aware operating system images may discover, format, and mount
    this device. For example, the cloud-init package included in
    Ubuntu's stock cloud images will format this space as an ext3
    filesystem and mount it on /mnt. It is important to note that
    this is a feature of the guest operating system; OpenStack only
    provisions the raw storage.</para>
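  <para>As a brief, hedged illustration of the guest-side behaviour
    described above, the following commands, run inside a booted
    Ubuntu instance, show how the extra ephemeral device can be
    inspected. The device name /dev/vdb and the /mnt mount point are
    assumptions; both depend on the image and hypervisor:</para>
  <screen># Run inside the guest; /dev/vdb is an assumed name for the ephemeral device
$ lsblk                  # list block devices visible to the guest
$ mount | grep /mnt      # cloud-init typically mounts the ephemeral disk here
$ df -h /mnt             # confirm the size matches the flavor's ephemeral allocation</screen>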
  <para><guilabel>Volume Storage</guilabel></para>
  <para>Volume storage is independent of any particular instance and
    is persistent. Volumes are created by users and, within quota and
    availability limits, may be of any arbitrary size.</para>
  <para>When first created, volumes are raw block devices with no
    partition table and no filesystem. They must be attached to an
    instance to be partitioned and/or formatted. Once this is done
    they may be used much like an external disk drive. Volumes may be
    attached to only one instance at a time, but may be detached and
    reattached to either the same or a different instance.</para>
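  <para>As a minimal sketch of those guest-side steps: once a volume
    has been attached (it is assumed here to appear as /dev/vdc in
    the guest; the actual device name varies), it can be formatted
    and mounted like any other disk:</para>
  <screen># Inside the guest; /dev/vdc and the mount point are placeholders
$ sudo mkfs.ext4 /dev/vdc          # create a filesystem on the raw volume
$ sudo mkdir -p /srv/data
$ sudo mount /dev/vdc /srv/data    # use it like an external disk drive</screen>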
  <para>It is possible to configure a volume so that it is bootable
    and provides a persistent virtual instance similar to traditional
    non-cloud-based virtualization systems. In this use case the
    resulting instance may still have ephemeral storage depending on
    the flavor selected, but the root filesystem (and possibly
    others) will be on the persistent volume, and thus state will be
    maintained even if the instance is shut down. Details of this
    configuration are discussed in the <link
      xlink:href="http://docs.openstack.org/user-guide/content/"
      >OpenStack End User Guide</link>.</para>
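  <para>The full boot-from-volume procedure is covered in the guide
    linked above. As a rough, Grizzly-era sketch only (the flavor,
    the need for an image argument, and the block-device-mapping
    syntax should be verified against your release), booting an
    instance whose root disk is a pre-prepared bootable volume looks
    roughly like this:</para>
  <screen># VOLUME_ID identifies a bootable volume; vda=VOLUME_ID:::0 maps it as the root disk
$ nova boot --flavor m1.small \
    --block-device-mapping vda=VOLUME_ID:::0 \
    my-persistent-instance</screen>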
  <para>Volumes do not provide concurrent access from multiple
    instances. For that you need either a traditional network
    filesystem such as NFS or CIFS, or a cluster filesystem such as
    GlusterFS. These may be built within an OpenStack cluster or
    provisioned outside of it, but they are not features provided by
    the OpenStack software.</para>
  <para>The OpenStack Block Storage service works via the interaction
    of a series of daemon processes named cinder-* that reside
    persistently on the host machine or machines. The binaries can all
    be run from a single node, or spread across multiple nodes. They
    can also be run on the same node as other OpenStack
    services.</para>
  <para><guilabel>The current services available in OpenStack Block
      Storage are:</guilabel></para>
  <itemizedlist>
    <listitem>
      <para><emphasis role="bold">cinder-api</emphasis> - The
        cinder-api service is a WSGI app that authenticates and routes
        requests throughout the Block Storage system. It supports the
        OpenStack API's only, although there is a translation that can
        be done via Nova's EC2 interface which calls in to the
        cinderclient.</para>
    </listitem>
    <listitem>
      <para><emphasis role="bold">cinder-scheduler</emphasis> - The
        cinder-scheduler is responsible for scheduling/routing
        requests to the appropriate volume service. As of Grizzly;
        depending upon your configuration this may be simple
        round-robin scheduling to the running volume services, or it
        can be more sophisticated through the use of the Filter
        Scheduler. The Filter Scheduler is the default in Grizzly and
        enables filter on things like Capacity, Availability Zone,
        Volume Types and Capabilities as well as custom
        filters.</para>
    </listitem>
    <listitem>
      <para><emphasis role="bold">cinder-volume</emphasis> - The
        cinder-volume service is responsible for managing Block
        Storage devices, specifically the back-end devices
        themselves.</para>
    </listitem>
    <listitem>
      <para><emphasis role="bold">cinder-backup</emphasis> - The
        cinder-backup service provides a means to back up a Cinder
        Volume to OpenStack Object Store (SWIFT).</para>
    </listitem>
  </itemizedlist>
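  <para>Which of these daemons are running, and where, can be checked
    on the hosts themselves. A minimal sketch, assuming a typical
    installation with administrative access on the Block Storage
    node:</para>
  <screen># On a Block Storage host: list the cinder daemons currently running
$ ps aux | grep cinder-
# Ask Cinder which services it knows about and whether they are up
$ cinder-manage service list</screen>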
  <para><guilabel>Introduction to OpenStack Block
    Storage</guilabel></para>
  <para>OpenStack Block Storage provides persistent, high-performance
    block storage resources that can be consumed by OpenStack Compute
    instances. This includes secondary attached storage similar to
    Amazon's Elastic Block Storage (EBS). In addition, images can be
    written to a Block Storage device and specified for OpenStack
    Compute to use as a bootable, persistent instance.</para>
  <para>There are some differences from Amazon's EBS that one should
    be aware of. OpenStack Block Storage is not a shared storage
    solution like NFS; currently it is designed so that the device is
    attached to, and in use by, a single instance at a time.</para>
  <para><guilabel>Backend Storage Devices</guilabel></para>
  <para>OpenStack Block Storage requires some form of back-end
    storage on which the service is built. The default implementation
    is to use LVM on a local volume group named "cinder-volumes". In
    addition to the base driver implementation, OpenStack Block
    Storage also provides the means to add support for other storage
    devices, such as external RAID arrays or other storage
    appliances.</para>
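  <para>As an illustrative sketch of the default LVM back end, the
    volume group can be created with standard LVM tools before the
    cinder-volume service starts. The device name /dev/sdb below is
    purely an example of a disk dedicated to Cinder:</para>
  <screen># On the storage node, as root; /dev/sdb is an example device
# pvcreate /dev/sdb
# vgcreate cinder-volumes /dev/sdb
# vgs cinder-volumes        # verify the volume group exists</screen>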
  <para><guilabel>Users and Tenants (Projects)</guilabel></para>
  <para>The OpenStack Block Storage system is designed to be used by
    many different cloud computing consumers or customers, basically
    tenants on a shared system, using role-based access assignments.
    Roles control the actions that a user is allowed to perform. In
    the default configuration, most actions do not require a
    particular role, but this is configurable by the system
    administrator editing the appropriate policy.json file that
    maintains the rules. A user's access to particular volumes is
    limited by tenant, but the username and password are assigned per
    user. Key pairs granting access to a volume are enabled per user,
    but quotas to control resource consumption across available
    hardware resources are per tenant.</para>
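  <para>To make the role-based model concrete, the excerpt below is
    an illustrative (not verbatim) fragment of a Block Storage
    policy.json; actual rule names and defaults vary by
    release:</para>
  <programlisting>{
    "context_is_admin": "role:admin",
    "admin_or_owner": "is_admin:True or project_id:%(project_id)s",
    "default": "rule:admin_or_owner",

    "volume:create": "",
    "volume:delete": "rule:admin_or_owner"
}</programlisting>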
  <para><guilabel>For tenants, quota controls are available to limit
      the:</guilabel></para>
  <itemizedlist>
    <listitem>
      <para>Number of volumes which may be created</para>
    </listitem>
    <listitem>
      <para>Number of snapshots which may be created</para>
    </listitem>
    <listitem>
      <para>Total number of gigabytes allowed per tenant (shared
          between snapshots and volumes)</para>
    </listitem>
  </itemizedlist>
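  <para>These per-tenant limits can be inspected and changed with the
    cinder client. A hedged example, where TENANT_ID and the numbers
    are placeholders:</para>
  <screen># Show the current Block Storage quotas for a tenant
$ cinder quota-show TENANT_ID
# Raise the limits on volumes, snapshots, and total gigabytes
$ cinder quota-update --volumes 20 --snapshots 20 --gigabytes 1000 TENANT_ID</screen>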
  <para><guilabel>Volumes, Snapshots, and Backups</guilabel></para>
  <para>This introduction provides a high-level overview of the basic
    resources offered by the OpenStack Block Storage service:
    Volumes, Snapshots (which are derived from Volumes), and
    Backups.</para>
  <para><guilabel>Volumes</guilabel></para>
  <para>Volumes are allocated block storage resources that can be
    attached to instances as secondary storage, or they can be used
    as the root store to boot instances. Volumes are persistent,
    read/write block storage devices, most commonly attached to the
    compute node via iSCSI.</para>
  <para><guilabel>Snapshots</guilabel></para>
  <para>A Snapshot in OpenStack Block Storage is a read-only,
    point-in-time copy of a Volume. The Snapshot can be created from
    a Volume that is currently in use (via the use of '--force True')
    or in an available state. The Snapshot can then be used to create
    a new volume via "create from snapshot".</para>
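  <para>A short command-level sketch of the snapshot workflow
    described above (the IDs and the size are placeholders):</para>
  <screen># Snapshot a volume; --force True is needed if it is attached and in use
$ cinder snapshot-create --force True VOLUME_ID
# Later, create a new volume of the same (or larger) size from that snapshot
$ cinder create --snapshot-id SNAPSHOT_ID 10</screen>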
  <para><guilabel>Backups</guilabel></para>
  <para>A Backup is an archived copy of a Volume currently stored in
    Object Storage (Swift).</para>
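  <para>A minimal sketch of creating and restoring such a backup with
    the cinder client, assuming the cinder-backup service and an
    Object Storage endpoint are configured (IDs are
    placeholders):</para>
  <screen>$ cinder backup-create VOLUME_ID        # archive the volume to Object Storage
$ cinder backup-list                    # list existing backups
$ cinder backup-restore BACKUP_ID       # restore the backup to a new volume</screen>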
  <para><guilabel>Managing Volumes</guilabel></para>
  <para>Cinder is the OpenStack service that allows you to give extra
    block-level storage to your OpenStack Compute instances. You may
    recognize this as similar to an offering from Amazon EC2 known as
    Elastic Block Storage (EBS). The default Cinder implementation is
    an iSCSI solution that employs Logical Volume Manager (LVM) for
    Linux. Note that a volume may only be attached to one instance at
    a time; this is not a ‘shared storage’ solution like a SAN or NFS
    to which multiple servers can attach. It is also important to
    note that Cinder includes a number of drivers that allow you to
    use other vendors' back-end storage devices in addition to, or
    instead of, the base LVM implementation.</para>
  <para>Here is a brief walk-through of a simple create/attach
    sequence; keep in mind this requires proper configuration of both
    OpenStack Compute via nova.conf and OpenStack Block Storage via
    cinder.conf. The equivalent commands are sketched after the
    list.</para>
  <orderedlist>
    <listitem>
      <para>The volume is created via cinder create, which creates a
          logical volume (LV) in the volume group (VG)
          "cinder-volumes"</para>
    </listitem>
    <listitem>
      <para>The volume is attached to an instance via nova
          volume-attach, which creates a unique iSCSI IQN that is
          exposed to the compute node</para>
    </listitem>
    <listitem>
      <para>The compute node that runs the instance in question now
          has an active iSCSI session and new local storage (usually
          a /dev/sdX disk)</para>
    </listitem>
    <listitem>
      <para>libvirt uses that local storage as storage for the
          instance; the instance gets a new disk (usually a /dev/vdX
          disk)</para>
    </listitem>
  </orderedlist>
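  <para>Expressed as commands, and assuming the configuration above
    is in place, the same sequence looks roughly like this (names,
    sizes, and IDs are placeholders; the device name requested may
    differ from the one the guest actually sees):</para>
  <screen># Step 1: create a 10GB volume backed by an LV in "cinder-volumes"
$ cinder create --display-name test-volume 10
$ cinder list                                   # note the volume's ID and status

# Step 2: attach it to a running instance, creating the iSCSI export
$ nova volume-attach INSTANCE_ID VOLUME_ID /dev/vdb

# Steps 3-4 happen automatically: the compute node logs in to the iSCSI
# target and libvirt presents the disk to the guest, e.g. as /dev/vdb</screen>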
  <para><guilabel>Block Storage Capabilities</guilabel></para>
  <itemizedlist>
    <listitem>
      <para>OpenStack provides persistent block level storage
          devices for use with OpenStack compute instances.</para>
    </listitem>
    <listitem>
      <para>The block storage system manages the creation of block
          devices and their attachment to and detachment from
          servers. Block storage volumes are fully integrated into
          OpenStack Compute and the Dashboard, allowing cloud users
          to manage their own storage needs.</para>
    </listitem>
    <listitem>
      <para>In addition to using simple Linux server storage, it has
          unified storage support for numerous storage platforms
          including Ceph, NetApp, Nexenta, SolidFire, and
          Zadara.</para>
    </listitem>
    <listitem>
      <para>Block storage is appropriate for performance sensitive
          scenarios such as database storage, expandable file systems,
          or providing a server with access to raw block level
          storage.</para>
    </listitem>
    <listitem>
      <para>Snapshot management provides powerful functionality for
          backing up data stored on block storage volumes. Snapshots
          can be restored or used to create a new block storage
          volume.</para>
    </listitem>
  </itemizedlist>
</chapter>