<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE chapter [
<!ENTITY % openstack SYSTEM "../common/entities/openstack.ent">
%openstack;
]>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="module003-ch007-cluster-architecture">
<title>Cluster architecture</title>
<para><guilabel>Access Tier</guilabel></para>
<figure>
<title>Object Storage cluster architecture</title>
<mediaobject>
<imageobject>
<imagedata fileref="figures/image47.png"/>
</imageobject>
</mediaobject>
</figure>
<para>Large-scale deployments segment off an "Access Tier".
This tier is the "Grand Central" of the Object Storage
system. It fields incoming API requests from clients and
moves data in and out of the system. This tier is composed
of front-end load balancers, SSL terminators, and
authentication services, and it runs the (distributed)
brain of the Object Storage system: the proxy server
processes.</para>
<para>Having the access servers in their own tier enables
read/write access to be scaled out independently of
storage capacity. For example, if the cluster is on the
public Internet, requires SSL termination, and has high
demand for data access, many access servers can be
provisioned. However, if the cluster is on a private
network and is used primarily for archival purposes,
fewer access servers are needed.</para>
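<para>As a rough illustration of scaling the access tier independently
of storage, the following Python sketch estimates an access-server
count from a target client throughput. The function and every number
in it are illustrative assumptions, not figures from this guide.</para>
<programlisting language="python"># Illustrative sizing sketch: estimate how many access servers a deployment
# might need, independent of how much storage capacity sits behind them.
# All figures below are assumptions chosen only to show the arithmetic.

import math

def access_servers_needed(peak_client_gbps, per_server_gbps=10.0,
                          headroom=0.6):
    """Return a rough access-server count for a target client throughput.

    peak_client_gbps -- expected aggregate client traffic at peak (assumed)
    per_server_gbps  -- front-end interface capacity per server (assumed 10GbE)
    headroom         -- fraction of the interface you plan to use (assumed)
    """
    usable_per_server = per_server_gbps * headroom
    return max(1, math.ceil(peak_client_gbps / usable_per_server))

# A busy public cluster and a quiet archival cluster with the same storage:
print(access_servers_needed(peak_client_gbps=40))  # 7 access servers
print(access_servers_needed(peak_client_gbps=2))   # 1 access server
</programlisting>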
<para>Because Object Storage is an HTTP-addressable storage
service, a load balancer can be incorporated into the
access tier.</para>
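<para>Since every proxy server exposes the same HTTP API, a standard
HTTP load balancer (or even naive round-robin) can spread requests
across them. The sketch below illustrates this with plain round-robin;
the host names, port, and health-check path are hypothetical.</para>
<programlisting language="python"># Minimal sketch: every proxy speaks the same HTTP API, so requests can be
# spread across them round-robin. Host names, port, and path are hypothetical.

import http.client
from itertools import cycle

PROXIES = cycle(["proxy1.example.com", "proxy2.example.com",
                 "proxy3.example.com"])

def probe(path="/healthcheck", port=8080):
    """Send one HTTP GET to the next proxy in round-robin order."""
    host = next(PROXIES)
    conn = http.client.HTTPConnection(host, port, timeout=5)
    conn.request("GET", path)
    status = conn.getresponse().status
    conn.close()
    return host, status

# for _ in range(6):
#     print(probe())
</programlisting>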
<para>Typically, this tier comprises a collection of 1U
servers. These machines use a moderate amount of RAM and
are network I/O intensive. It is wise to provision them with
two high-throughput (10GbE) interfaces, because these systems
field each incoming API request. One interface is
used for 'front-end' incoming requests and the other for
'back-end' access to the Object Storage nodes to put and
fetch data.</para>
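<para>To see why the two interfaces can carry different loads, the
following back-of-the-envelope sketch assumes that a write is streamed
from the proxy to each replica, so back-end write traffic is roughly
the replica count times the client-facing write traffic. The traffic
figures are invented for illustration.</para>
<programlisting language="python"># Back-of-the-envelope sketch of front-end vs. back-end interface load on an
# access server. Assumes writes fan out to every replica through the proxy;
# the read/write rates and replica count below are illustrative only.

def interface_load_gbps(read_gbps, write_gbps, replica_count=3):
    """Return (front_end, back_end) traffic estimates in Gbit/s."""
    front_end = read_gbps + write_gbps
    back_end = read_gbps + write_gbps * replica_count
    return front_end, back_end

# 4 Gbit/s of reads plus 2 Gbit/s of writes from clients:
print(interface_load_gbps(read_gbps=4, write_gbps=2))  # (6, 10)
</programlisting>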
<para><guilabel>Factors to consider</guilabel></para>
<para>For most publicly facing deployments, as well as
private deployments available across a wide-reaching
corporate network, SSL is used to encrypt traffic
to the client. SSL adds significant processing load
when establishing sessions with clients, which means that
more capacity must be provisioned in the access layer.
SSL may not be required for private deployments on
trusted networks.</para>
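<para>How much extra access-layer capacity SSL demands depends heavily
on the terminator, key size, and cipher choice. The sketch below only
shows the shape of the calculation; the handshakes-per-core figure is
purely hypothetical and should be replaced with measured numbers.</para>
<programlisting language="python"># Illustrative sketch of the extra CPU that TLS session establishment can
# demand in the access layer. The handshakes-per-core rate is hypothetical;
# measure your own SSL terminator before provisioning.

import math

def cores_for_tls(new_sessions_per_sec, handshakes_per_core_per_sec=400):
    """Rough CPU-core count needed just for TLS session establishment."""
    return math.ceil(new_sessions_per_sec / handshakes_per_core_per_sec)

print(cores_for_tls(2000))   # 5 cores at the assumed rate
print(cores_for_tls(20000))  # 50 cores: busy public clusters need far more
</programlisting>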
<para><guilabel>Storage Nodes</guilabel></para>
<figure>
<title>Object Storage (Swift)</title>
<mediaobject>
<imageobject>
<imagedata fileref="figures/image48.png"/>
</imageobject>
</mediaobject>
</figure>
<para>The next component is the storage servers themselves.
Generally, most configurations should provide each of the
five zones with an equal amount of storage capacity.
Storage nodes use a reasonable amount of memory and CPU.
Metadata needs to be readily available to return objects
quickly. The object stores run services not only to field
incoming requests from the Access Tier, but also to run
replicators, auditors, and reapers. Object stores can be
provisioned with a single gigabit or a 10-gigabit network
interface depending on expected workload and desired
performance.</para>
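<para>A quick way to sanity-check the "equal capacity per zone" advice
is to total raw capacity per zone from your inventory. The node counts,
drives per node, and drive sizes below are made-up examples.</para>
<programlisting language="python"># Sketch for checking that the five zones end up with roughly equal raw
# capacity. The inventory below is invented; substitute your own hardware.

zones = {
    "z1": {"nodes": 4, "drives_per_node": 12, "drive_tb": 3},
    "z2": {"nodes": 4, "drives_per_node": 12, "drive_tb": 3},
    "z3": {"nodes": 6, "drives_per_node": 12, "drive_tb": 2},
    "z4": {"nodes": 4, "drives_per_node": 12, "drive_tb": 3},
    "z5": {"nodes": 4, "drives_per_node": 12, "drive_tb": 3},
}

capacity_tb = {name: z["nodes"] * z["drives_per_node"] * z["drive_tb"]
               for name, z in zones.items()}
average = sum(capacity_tb.values()) / len(capacity_tb)

for name, tb in sorted(capacity_tb.items()):
    deviation = (tb - average) / average
    print(f"{name}: {tb} TB raw, {deviation:+.1%} from the zone average")
</programlisting>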
<para>Currently, a 2 TB or 3 TB SATA disk delivers
good performance for the price. Desktop-grade drives can
be used where there are responsive remote hands in the
datacenter, and enterprise-grade drives can be used where
this is not the case.</para>
<para><guilabel>Factors to consider</guilabel></para>
<para>Keep the desired I/O performance for single-threaded
requests in mind. This system does not use RAID,
so each request for an object is handled by a single
disk. Disk performance therefore directly limits
single-threaded response rates.</para>
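<para>The sketch below works through why single-threaded rates are
bounded by one spindle: with no RAID, one GET is served by one disk.
The seek time and disk throughput are assumed, typical-looking values,
not measurements.</para>
<programlisting language="python"># Illustrative arithmetic: with no RAID, per-request latency is bounded by a
# single disk. Seek time and sequential throughput are assumed values.

def single_request_seconds(object_mb, seek_ms=8.0, disk_mb_per_s=120.0):
    """Rough time for one disk to return one object."""
    return seek_ms / 1000.0 + object_mb / disk_mb_per_s

for size_mb in (0.1, 1, 100):
    t = single_request_seconds(size_mb)
    print(f"{size_mb} MB object: ~{t * 1000:.0f} ms, "
          f"~{size_mb / t:.1f} MB/s single-threaded")
</programlisting>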
<para>To achieve higher apparent throughput, the object
storage system is designed with concurrent
uploads/downloads in mind. The network I/O capacity
(1GbE, bonded 1GbE pair, or 10GbE) should match your
desired concurrent throughput needs for reads and
writes.</para>
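<para>As a closing sizing sketch, the following estimates how many
concurrent client streams each interface option can sustain. The
per-stream rate and the usable-bandwidth factor are assumptions.</para>
<programlisting language="python"># Sketch matching a storage node's interface choice to concurrent throughput.
# The 25 MB/s per-stream rate and 80% usable-bandwidth factor are assumptions.

NIC_GBPS = {"1GbE": 1.0, "bonded 2 x 1GbE": 2.0, "10GbE": 10.0}

def concurrent_streams(nic_gbps, stream_mb_per_s=25.0, usable_fraction=0.8):
    """How many concurrent client streams the interface can sustain."""
    usable_mb_per_s = nic_gbps * 1000.0 / 8.0 * usable_fraction
    return int(usable_mb_per_s // stream_mb_per_s)

for name, gbps in NIC_GBPS.items():
    print(f"{name}: about {concurrent_streams(gbps)} streams at 25 MB/s each")
</programlisting>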
</chapter>