<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="ch_installing-and-configuring-openstack-object-storage">
<title>Installing and Configuring OpenStack Object Storage</title>
<section xml:id="object-storage-system-requirements">
<title>System Requirements</title><para><emphasis role="bold">Hardware</emphasis>: OpenStack Object Storage is specifically designed
to run on commodity hardware. At Rackspace, our storage servers currently
run fairly generic 4U servers with 24 2 TB SATA drives and 8 cores of
processing power. RAID on the storage drives is neither required nor
recommended. Swift's disk usage pattern is the worst case possible for
RAID, and performance degrades very quickly with RAID 5 or 6.</para>
<para><emphasis role="bold">Operating System</emphasis>: OpenStack currently runs on
Ubuntu, and the large-scale deployment at Rackspace runs on Ubuntu 10.04
LTS.</para>
<para><emphasis role="bold">Networking</emphasis>: 1000 Mbps networking is suggested. For
OpenStack Object Storage, an external network should connect the outside
world to the proxy servers, and the storage network is intended to be
isolated on a private network or multiple private networks.</para>
<para><emphasis role="bold">Database</emphasis>: For OpenStack Object Storage, a
SQLite database is part of the OpenStack Object Storage container and
account management process.</para>
<para><emphasis role="bold">Permissions</emphasis>: You can install OpenStack Object
Storage either as root or as a user with sudo permissions if you configure
the sudoers file to enable all the permissions.</para>
</section>
<section xml:id="installing-openstack-object-storage-on-ubuntu">
<title>Installing OpenStack Object Storage on Ubuntu</title>
<para>Though you can install OpenStack Object Storage for development or testing purposes on a single server, a multiple-server installation enables the high availability and redundancy you want in a production distributed object storage system.</para>
<para>If you would like to perform a single node installation on Ubuntu for
development purposes from source code, use the Swift All In One
instructions. See
<link xlink:href="http://swift.openstack.org/development_saio.html">http://swift.openstack.org/development_saio.html</link>.</para>
<section xml:id="before-you-begin-swift-install">
<title>Before You Begin</title>
<para>Have a copy of the Ubuntu Server 10.04 LTS installation media on hand
if you are installing on a new server. </para>
<para>This document demonstrates installing a cluster using the following
types of nodes:</para>
<itemizedlist>
<listitem>
<para>One Proxy node which runs the swift-proxy-server
processes and may also run the optional swauth services.
It routes incoming requests to the appropriate Storage
nodes.</para>
</listitem>
<listitem>
<para>Five Storage nodes that run the swift-account-server,
swift-container-server, and swift-object-server
processes which control storage of the account
databases, the container databases, as well as the
actual stored objects.</para>
</listitem>
</itemizedlist>
<note>
<para>Fewer Storage nodes can be used initially, but a minimum of 5
is recommended for a production cluster.</para>
</note>
</section>
<section xml:id="example-installation-architecture-swift">
<title>Example Installation Architecture</title>
<itemizedlist>
<listitem>
<para>node - a host machine running one or more OpenStack
Object Storage services</para>
</listitem>
<listitem>
<para>Proxy node - node that runs Proxy services</para>
</listitem>
<listitem>
<para>Auth node - an optional node that runs the Auth
service separately from the Proxy services</para>
</listitem>
<listitem>
<para>Storage node - node that runs Account, Container, and
Object services</para>
</listitem>
<listitem>
<para>ring - a set of mappings of OpenStack Object Storage
data to physical devices</para>
</listitem>
</itemizedlist>
<para>To increase reliability and performance, you can add additional
Proxy servers.</para>
<para>This document describes each Storage node as a separate zone in the
ring. It is recommended to have a minimum of 5 zones. A zone is a
group of nodes that is as isolated as possible from other nodes
(separate servers, network, power, even geography). The ring
guarantees that every replica is stored in a separate zone. This
diagram shows one possible configuration for a minimal
installation.</para>
<para><inlinemediaobject>
<imageobject>
<imagedata fileref="figures/swift_install_arch.png"
/>
</imageobject>
</inlinemediaobject></para>
</section>
<section xml:id="network-setup-notes">
<title>Network Setup Notes</title>
<para>This document refers to two networks: an external network for
connecting to the Proxy server, and a storage network that is not
accessible from outside the cluster, to which all of the nodes are
connected. All of the OpenStack Object Storage services, as well as
the rsync daemon on the Storage nodes, are configured to listen on
their STORAGE_LOCAL_NET IP addresses.</para>
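<para>As an illustration only, a Proxy node running Ubuntu 10.04 might
define the two networks in /etc/network/interfaces as follows; the
interface names and addresses are placeholders, not values required by
OpenStack Object Storage:</para>
<para><literallayout class="monospaced"># External network
auto eth0
iface eth0 inet static
    address &lt;PROXY_PUBLIC_IP&gt;
    netmask 255.255.255.0
    gateway &lt;PUBLIC_GATEWAY_IP&gt;

# Storage network (STORAGE_LOCAL_NET)
auto eth1
iface eth1 inet static
    address &lt;PROXY_LOCAL_NET_IP&gt;
    netmask 255.255.255.0</literallayout></para>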
</section>
<section xml:id="general-installation-steps-swift">
<title>General Installation Steps </title>
<orderedlist>
<listitem>
<para>Install the baseline Ubuntu Server 10.04 LTS on all
nodes.</para>
</listitem>
<listitem>
<para>Install common OpenStack Object Storage software and
pre-requisites:</para>
<para>
<literallayout class="monospaced">
apt-get install python-software-properties
add-apt-repository ppa:swift-core/ppa
apt-get update
apt-get install swift openssh-server
</literallayout>
</para>
</listitem>
</orderedlist>
</section></section>
<section xml:id="configuring-openstack-object-storage">
<title>Configuring OpenStack Object Storage </title>
<orderedlist>
<listitem><para>Create and populate configuration directories on all nodes:</para>
<para><literallayout class="monospaced">mkdir -p /etc/swift
chown -R swift:swift /etc/swift/
</literallayout></para>
</listitem>
<listitem><para>Create /etc/swift/swift.conf:</para>
<para><programlisting>
# random unique string that can never change, keep it secret and do NOT lose it
swift_hash_path_suffix = changeme
</programlisting></para>
</listitem>
</orderedlist>
<note>
<para>The suffix value in /etc/swift/swift.conf should be set to some random
string of text to be used as a salt when hashing to determine
mappings in the ring. This file should be the same on every node in
the cluster!</para>
</note>
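<para>Any method that produces a long, hard-to-guess string works for the
suffix. For example, one way to generate a random value is to read a few
bytes from /dev/urandom:</para>
<para><literallayout class="monospaced">od -t x8 -N 8 -A n &lt;/dev/urandom</literallayout></para>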
<section xml:id="installing-and-configuring-auth-node">
<title>Installing and Configuring an Auth Node</title><para>You have options for running an authorization node to authorize requests against a swift
cluster. Swauth is one implementation: an auth service for Swift, built as
WSGI middleware, that uses Swift itself as its backing store. Swauth was
bundled with the Cactus release as the example authorization system, but it
is now available as a separate download rather than as part of
Swift, at <link xlink:href="https://github.com/gholt/swauth"
>https://github.com/gholt/swauth</link>. You can install it
on the proxy server, or on a separate server, but either way you need to
point to swauth from the proxy-server.conf file with the following
lines:</para>
<programlisting>
[filter:swauth]
use = egg:swauth#swauth
</programlisting>
<para>In the Diablo release, the Keystone project at <link xlink:href="http://github.com/rackspace/keystone">http://github.com/rackspace/keystone</link> should become the auth standard for OpenStack, but swauth may be used as an alternative.</para></section>
<section xml:id="installing-and-configuring-the-proxy-node">
<title>Installing and Configuring the Proxy Node</title>
<para>The proxy server takes each request, looks up the locations of the
account, container, or object, and routes the request accordingly. The
proxy server also handles API requests. You enable account
management by configuring it in the proxy-server.conf file.</para>
<note>
<para>It is assumed that all commands are run as the root user.</para>
</note>
<orderedlist>
<listitem>
<para>Install swift-proxy service:</para>
<para><literallayout class="monospaced">apt-get install swift-proxy memcached</literallayout></para>
</listitem>
<listitem>
<para>Create a self-signed certificate for SSL:</para>
<para>
<literallayout class="monospaced">cd /etc/swift
openssl req -new -x509 -nodes -out cert.crt -keyout cert.key </literallayout>
</para>
</listitem>
<listitem>
<para>Modify memcached to listen on the proxy server's local
network interface instead of only the loopback address.
Preferably this should be a local, non-public network.
Edit the following line in /etc/memcached.conf,
changing:</para>
<para>
<literallayout class="monospaced">-l 127.0.0.1
to
-l &lt;PROXY_LOCAL_NET_IP&gt;</literallayout>
</para>
</listitem>
<listitem>
<para>Restart the memcached server:</para>
<para><literallayout class="monospaced">service memcached restart</literallayout></para>
</listitem>
<listitem>
<para>Create /etc/swift/proxy-server.conf:</para>
<para>
<programlisting>[DEFAULT]
# Enter these next two values if using SSL certificates
cert_file = /etc/swift/cert.crt
key_file = /etc/swift/cert.key
bind_port = 8080
workers = 8
user = swift
[pipeline:main]
# keep swauth in the line below if you plan to use swauth for authentication
pipeline = healthcheck cache swauth proxy-server
[app:proxy-server]
use = egg:swift#proxy
allow_account_management = true
[filter:swauth]
# the line below points to swauth as a separate project from swift
use = egg:swauth#swauth
# Highly recommended to change this.
super_admin_key = swauthkey
# Local cluster
default_swift_cluster = https://127.0.0.1:8080/v1
[filter:healthcheck]
use = egg:swift#healthcheck
[filter:cache]
use = egg:swift#memcache
memcache_servers = &lt;PROXY_LOCAL_NET_IP&gt;:11211
</programlisting>
</para>
<note>
<para>If you run multiple memcache servers, put the multiple
IP:port listings in the [filter:cache] section of
the proxy-server.conf file like:
<literallayout class="monospaced">10.1.2.3:11211,10.1.2.4:11211</literallayout>Only
the proxy server uses memcache.</para>
</note>
</listitem>
<listitem>
<para>Create the account, container and object rings. In the create
commands below, 18 is the partition power (the ring will have 2^18
partitions), 3 is the number of replicas kept of each object, and 1
is the minimum number of hours before a partition can be moved
again:</para>
<para>
<literallayout class="monospaced">
cd /etc/swift
swift-ring-builder account.builder create 18 3 1
swift-ring-builder container.builder create 18 3 1
swift-ring-builder object.builder create 18 3 1
</literallayout>
</para>
</listitem>
<listitem>
<para>For every storage device on each node add entries to each
ring:</para>
<para>
<literallayout class="monospaced">
swift-ring-builder account.builder add z&lt;ZONE&gt;-&lt;STORAGE_LOCAL_NET_IP&gt;:6002/&lt;DEVICE&gt; 100
swift-ring-builder container.builder add z&lt;ZONE&gt;-&lt;STORAGE_LOCAL_NET_IP_1&gt;:6001/&lt;DEVICE&gt; 100
swift-ring-builder object.builder add z&lt;ZONE&gt;-&lt;STORAGE_LOCAL_NET_IP_1&gt;:6000/&lt;DEVICE&gt; 100
</literallayout>
</para>
<para>For example, if you were setting up a storage node with a
partition of /dev/sdb1 in Zone 1 on IP 10.0.0.1, the
DEVICE would be sdb1 and the commands would look
like:
<literallayout class="monospaced">
swift-ring-builder account.builder add z1-10.0.0.1:6002/sdb1 100
swift-ring-builder container.builder add z1-10.0.0.1:6001/sdb1 100
swift-ring-builder object.builder add z1-10.0.0.1:6000/sdb1 100
</literallayout></para>
<note>
<para>Assuming there are 5 zones with 1 node per zone, ZONE
should start at 1 and increment by one for each
additional node.</para>
</note>
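<para>As a further illustration, if all five Storage nodes follow the same
pattern as the example above (addresses 10.0.0.1 through 10.0.0.5, one zone
per node, a single sdb1 device on each; all placeholders), a short shell
loop can add every device:</para>
<para><literallayout class="monospaced">for i in 1 2 3 4 5; do
    swift-ring-builder account.builder add z${i}-10.0.0.${i}:6002/sdb1 100
    swift-ring-builder container.builder add z${i}-10.0.0.${i}:6001/sdb1 100
    swift-ring-builder object.builder add z${i}-10.0.0.${i}:6000/sdb1 100
done</literallayout></para>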
</listitem>
<listitem>
<para>Verify the ring contents for each ring:</para>
<para>
<literallayout class="monospaced">
swift-ring-builder account.builder
swift-ring-builder container.builder
swift-ring-builder object.builder
</literallayout>
</para>
</listitem>
<listitem>
<para>Rebalance the rings:</para>
<para>
<literallayout class="monospaced">
swift-ring-builder account.builder rebalance
swift-ring-builder container.builder rebalance
swift-ring-builder object.builder rebalance
</literallayout>
</para>
<note>
<para>Rebalancing rings can take some time.</para>
</note>
</listitem>
<listitem>
<para>Copy the account.ring.gz, container.ring.gz, and
object.ring.gz files to each of the Proxy and Storage nodes
in /etc/swift.</para>
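<para>For example, assuming the swift user can log in to each node over
SSH and owns /etc/swift there, you could copy the files with scp,
repeating the command for each node's storage network address:</para>
<para><literallayout class="monospaced">cd /etc/swift
scp account.ring.gz container.ring.gz object.ring.gz swift@&lt;STORAGE_LOCAL_NET_IP&gt;:/etc/swift/</literallayout></para>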
</listitem>
<listitem>
<para>Make sure all the config files are owned by the swift
user:</para>
<para><literallayout class="monospaced">chown -R swift:swift /etc/swift</literallayout></para>
</listitem>
<listitem>
<para>Start Proxy services:</para>
<para><literallayout class="monospaced">swift-init proxy start</literallayout></para>
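<para>As a quick sanity check, the healthcheck filter configured in the
pipeline above should respond once the proxy is running. For example,
assuming the self-signed SSL certificate created earlier (hence the -k
flag), the following request should return the body OK:</para>
<para><literallayout class="monospaced">curl -k https://&lt;PROXY_LOCAL_NET_IP&gt;:8080/healthcheck</literallayout></para>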
</listitem>
</orderedlist>
</section>
<section xml:id="installing-and-configuring-storage-nodes">
<title>Installing and Configuring the Storage Nodes</title>
<note>
<para>OpenStack Object Storage should work on any modern filesystem that supports
Extended Attributes (XATTRS). We currently recommend XFS as it
demonstrated the best overall performance for the swift use case after
considerable testing and benchmarking at Rackspace. It is also the
only filesystem that has been thoroughly tested.</para>
</note>
<orderedlist>
<listitem><para>Install Storage node packages:</para>
<para><literallayout class="monospaced">apt-get install swift-account swift-container swift-object xfsprogs</literallayout></para>
</listitem>
<listitem><para>For every device on the node, set up the XFS volume (/dev/sdb is used
as an example):</para>
<para><literallayout class="monospaced"># We create a partition
fdisk /dev/sdb (set up a single partition)
# We format it
mkfs.xfs -i size=1024 /dev/sdb1
# We add the partition to the /etc/fstab file
echo "/dev/sdb1 /srv/node/sdb1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 0" &gt;&gt; /etc/fstab
# We mount the partition
mkdir -p /srv/node/sdb1
mount /srv/node/sdb1
# We set the rights
chown -R swift:swift /srv/node
</literallayout></para>
</listitem>
<listitem><para>Create /etc/rsyncd.conf:</para>
<para><programlisting>
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = &lt;STORAGE_LOCAL_NET_IP&gt;
[account]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/account.lock
[container]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/container.lock
[object]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/object.lock</programlisting></para>
</listitem>
<listitem><para>Edit the following line in /etc/default/rsync:</para>
<para><literallayout class="monospaced">RSYNC_ENABLE=true</literallayout></para>
</listitem>
<listitem><para>Start rsync daemon:</para>
<para><literallayout class="monospaced">service rsync start</literallayout></para>
<note>
<para>The rsync daemon requires no authentication, so it should be run on
a local, private network.</para></note>
</listitem>
<listitem><para>Create /etc/swift/account-server.conf:</para>
<para><programlisting>
[DEFAULT]
bind_ip = &lt;STORAGE_LOCAL_NET_IP&gt;
workers = 2
[pipeline:main]
pipeline = account-server
[app:account-server]
use = egg:swift#account
[account-replicator]
[account-auditor]
[account-reaper]</programlisting></para>
</listitem>
<listitem><para>Create /etc/swift/container-server.conf:</para>
<para><programlisting>
[DEFAULT]
bind_ip = &lt;STORAGE_LOCAL_NET_IP&gt;
workers = 2
[pipeline:main]
pipeline = container-server
[app:container-server]
use = egg:swift#container
[container-replicator]
[container-updater]
[container-auditor]</programlisting></para>
</listitem>
<listitem><para>Create /etc/swift/object-server.conf:</para>
<para>
<programlisting>
[DEFAULT]
bind_ip = &lt;STORAGE_LOCAL_NET_IP&gt;
workers = 2
[pipeline:main]
pipeline = object-server
[app:object-server]
use = egg:swift#object
[object-replicator]
[object-updater]
[object-auditor]
</programlisting>
</para>
</listitem>
<listitem><para>Start the storage services:</para>
<para><literallayout class="monospaced">
swift-init object-server start
swift-init object-replicator start
swift-init object-updater start
swift-init object-auditor start
swift-init container-server start
swift-init container-replicator start
swift-init container-updater start
swift-init container-auditor start
swift-init account-server start
swift-init account-replicator start
swift-init account-auditor start
</literallayout></para>
</listitem>
</orderedlist>
<section xml:id="verify-swift-installation">
<title>Create an OpenStack Object Storage Admin Account and Verify the Installation</title>
<para>You can run these commands from the proxy server if you have installed swauth there. Look for
the default_swift_cluster setting in the proxy-server.conf file
and match the URLs (<emphasis role="bold">including http or
https</emphasis>) when issuing swauth
commands.</para>
<orderedlist>
<listitem>
<para>Prepare the system for authorization commands
by telling it the key and the URL for auth. </para>
<note>
<para>The key used in the following command is the super_admin_key
you defined in the proxy-server.conf file.</para>
</note>
<para>
<literallayout class="monospaced">
swauth-prep -K $key -A http://&lt;AUTH_HOSTNAME&gt;:8080/auth/
</literallayout>
</para>
</listitem>
<listitem>
<para>Create a user with administrative privileges
(account = system, username = root, password =
testpass). Make sure to replace key
in the swauth-add-user command below with whatever
super_admin key you assigned in the
proxy-server.conf file above. None of the values
of account, username, or password are special -
they can be anything.</para>
<para>
<literallayout class="monospaced">swauth-add-user -K $key -A http://&lt;AUTH_HOSTNAME&gt;:8080/auth/ -a system root testpass</literallayout>
</para>
</listitem>
<listitem><para>Get an X-Storage-Url and X-Auth-Token:</para>
<para><literallayout class="monospaced">curl -k -v -H 'X-Storage-User: system:root' -H 'X-Storage-Pass: testpass' http://&lt;AUTH_HOSTNAME&gt;:8080/auth/v1.0</literallayout></para>
</listitem>
<listitem><para>Check that you can HEAD the account:</para>
<para><literallayout class="monospaced">curl -k -v -H 'X-Auth-Token: &lt;token-from-x-auth-token-above&gt;' &lt;url-from-x-storage-url-above&gt;</literallayout></para>
</listitem>
<listitem><para>Check that the Swift Tool, swift, works: </para>
<para><literallayout class="monospaced">swift -A http://&lt;AUTH_HOSTNAME&gt;:8080/auth/v1.0 -U system:root -K testpass stat</literallayout></para>
</listitem>
<listitem><para>Use swift to upload a few files named &#8216;bigfile[1-2].tgz&#8217; to a container named
&#8216;myfiles&#8217;:</para>
<para><literallayout class="monospaced">swift -A http://&lt;AUTH_HOSTNAME&gt;:8080/auth/v1.0 -U system:root -K testpass upload myfiles bigfile1.tgz
swift -A http://&lt;AUTH_HOSTNAME&gt;:8080/auth/v1.0 -U system:root -K testpass upload myfiles bigfile2.tgz</literallayout></para>
</listitem>
<listitem><para>Use swift to download all files from the &#8216;myfiles&#8217; container:</para>
<para><literallayout class="monospaced">swift -A http://&lt;AUTH_HOSTNAME&gt;:8080/auth/v1.0 -U system:root -K testpass download myfiles</literallayout></para>
</listitem>
</orderedlist>
</section>
<section xml:id="adding-a-proxy-server">
<title>Adding an Additional Proxy Server</title>
<para>For reliability&#8217;s sake you may want to have more than one proxy server. You can set up the additional proxy node in the same manner that you set up the first proxy node but with additional configuration steps.</para>
<para>Once you have more than one proxy, you also want to load balance between them, which means your storage endpoint also changes. You can select from different strategies for load balancing. For example, you could use round-robin DNS, or an actual load balancer (like pound) in front of the proxies, and point your storage URL at the load balancer.</para>
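<para>As an illustration only, a minimal pound configuration that balances
HTTP traffic across two proxy servers might look like the following sketch;
the addresses are placeholders, and the SSL, logging, and timeout settings
you would want in production are omitted:</para>
<para><programlisting>ListenHTTP
    Address 0.0.0.0
    Port    8080
End

Service
    BackEnd
        Address &lt;PROXY1_LOCAL_NET_IP&gt;
        Port    8080
    End
    BackEnd
        Address &lt;PROXY2_LOCAL_NET_IP&gt;
        Port    8080
    End
End</programlisting></para>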
<para>See Installing and Configuring the Proxy Node for the initial setup, and then follow these additional steps.</para>
<orderedlist>
<listitem><para>Update the list of memcache servers in /etc/swift/proxy-server.conf for all the added proxy servers. If you run multiple memcache servers, use this pattern for the multiple IP:port listings in each proxy server's conf file: <literallayout class="monospaced">10.1.2.3:11211,10.1.2.4:11211</literallayout></para>
<para><programlisting>
[filter:cache]
use = egg:swift#memcache
memcache_servers = &lt;PROXY_LOCAL_NET_IP&gt;:11211
</programlisting>
</para>
</listitem>
<listitem><para>In /etc/swift/proxy-server.conf, change the default_cluster_url to point to the
load-balanced URL rather than to the first proxy server you
created:</para>
<para><programlisting>
[app:auth-server]
use = egg:swift#auth
default_cluster_url = https://&lt;LOAD_BALANCER_HOSTNAME&gt;/v1
# Highly recommended to change this key to something else!
super_admin_key = devauth
</programlisting></para>
</listitem>
<listitem><para>After you change the default_cluster_url setting, you have to delete the auth database and recreate the OpenStack Object Storage users, or manually update the auth database with the correct URL for each account.</para>
</listitem>
<listitem><para>Next, copy all the ring information to all the nodes, including your new proxy nodes, and ensure the ring info gets to all the storage nodes as well.</para>
</listitem>
<listitem><para>After you sync all the nodes, make sure the admin has the keys in /etc/swift and the ownership for the ring file is correct.</para>
</listitem>
</orderedlist>
</section>
<section xml:id="troubleshooting-notes">
<title>Troubleshooting Notes</title>
<para>If you see problems, look in /var/log/syslog (or /var/log/messages on some distros).</para>
<para>Also, at Rackspace we have seen hints of drive failures by looking at error messages in /var/log/kern.log.</para>
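<para>For example, a simple way to scan the kernel log for disk-related
errors is:</para>
<para><literallayout class="monospaced">grep -i error /var/log/kern.log</literallayout></para>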
</section></section></section>
</chapter>