[install-guide] migrate section swift to RST

Implements: blueprint installguide-liberty

Change-Id: I45743e259ae4318a68c8ae64d2757671954ad0b1
Christian Berendt 2015-06-26 13:11:13 +02:00, committed by Karen Bradshaw
parent 3f938bbc9b
commit 24395ba8d2
11 changed files with 1105 additions and 1 deletion

swift-controller-node-include.txt
@@ -0,0 +1,81 @@
Edit the :file:`/etc/swift/proxy-server.conf` file and complete the
following actions:
* In the ``[DEFAULT]`` section, configure the bind port, user, and
configuration directory:
.. code-block:: ini
:linenos:
[DEFAULT]
...
bind_port = 8080
user = swift
swift_dir = /etc/swift
* In the ``[pipeline:main]`` section, enable the appropriate modules:
.. code-block:: ini
:linenos:
[pipeline:main]
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk ratelimit authtoken keystoneauth container-quotas account-quotas slo dlo proxy-logging proxy-server
.. note::
For more information on other modules that enable additional features,
see the `Deployment Guide <http://docs.openstack.org/developer/swift/deployment_guide.html>`__.
* In the ``[app:proxy-server]`` section, enable automatic account creation:
.. code-block:: ini
[app:proxy-server]
...
account_autocreate = true
* In the ``[filter:keystoneauth]`` section, configure the operator roles:
.. code-block:: ini
[filter:keystoneauth]
use = egg:swift#keystoneauth
...
operator_roles = admin,user
* In the ``[filter:authtoken]`` section, configure Identity service access:
.. code-block:: ini
:linenos:
[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = swift
password = SWIFT_PASS
delay_auth_decision = true
Replace ``SWIFT_PASS`` with the password you chose for the ``swift`` user
in the Identity service.
.. note::
Comment out or remove any other options in the ``[filter:authtoken]``
section.
* In the ``[filter:cache]`` section, configure the ``memcached`` location:
.. code-block:: ini
:linenos:
[filter:cache]
...
memcache_servers = 127.0.0.1:11211

swift-controller-node.rst
@@ -0,0 +1,178 @@
=========================================
Install and configure the controller node
=========================================
This section describes how to install and configure the proxy service that
handles requests for the account, container, and object services operating
on the storage nodes. For simplicity, this guide installs and configures
the proxy service on the controller node. However, you can run the proxy
service on any node with network connectivity to the storage nodes.
Additionally, you can install and configure the proxy service on multiple
nodes to increase performance and redundancy. For more information, see the
`Deployment Guide <http://docs.openstack.org/developer/swift/deployment_guide.html>`__.
To configure prerequisites
~~~~~~~~~~~~~~~~~~~~~~~~~~
The proxy service relies on an authentication and authorization mechanism such
as the Identity service. However, unlike other services, it also offers an
internal mechanism that allows it to operate without any other OpenStack
services. For simplicity, this guide references the Identity service
in :doc:`keystone`. Before you configure the Object Storage service, you must
create service credentials and an API endpoint.
.. note::
The Object Storage service does not use a SQL database on the controller
node. Instead, it uses distributed SQLite databases on each storage node.
#. Source the ``admin`` credentials to gain access to admin-only CLI commands:
.. code-block:: console
$ source admin-openrc.sh
#. To create the Identity service credentials, complete these steps:
* Create the ``swift`` user:
.. code-block:: console
$ openstack user create --password-prompt swift
User Password:
Repeat User Password:
+----------+----------------------------------+
| Field | Value |
+----------+----------------------------------+
| email | None |
| enabled | True |
| id | d535e5cbd2b74ac7bfb97db9cced3ed6 |
| name | swift |
| username | swift |
+----------+----------------------------------+
* Add the ``admin`` role to the ``swift`` user:
.. code-block:: console
$ openstack role add --project service --user swift admin
+-------+----------------------------------+
| Field | Value |
+-------+----------------------------------+
| id | cd2cb9a39e874ea69e5d4b896eb16128 |
| name | admin |
+-------+----------------------------------+
* Create the ``swift`` service entity:
.. code-block:: console
$ openstack service create --name swift \
--description "OpenStack Object Storage" object-store
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Object Storage |
| enabled | True |
| id | 75ef509da2c340499d454ae96a2c5c34 |
| name | swift |
| type | object-store |
+-------------+----------------------------------+
#. Create the Object Storage service API endpoint:
.. code-block:: console
$ openstack endpoint create \
--publicurl 'http://controller:8080/v1/AUTH_%(tenant_id)s' \
--internalurl 'http://controller:8080/v1/AUTH_%(tenant_id)s' \
--adminurl http://controller:8080 \
--region RegionOne \
object-store
+--------------+----------------------------------------------+
| Field | Value |
+--------------+----------------------------------------------+
| adminurl | http://controller:8080/ |
| id | af534fb8b7ff40a6acf725437c586ebe |
| internalurl | http://controller:8080/v1/AUTH_%(tenant_id)s |
| publicurl | http://controller:8080/v1/AUTH_%(tenant_id)s |
| region | RegionOne |
| service_id | 75ef509da2c340499d454ae96a2c5c34 |
| service_name | swift |
| service_type | object-store |
+--------------+----------------------------------------------+
To install and configure the controller node components
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. note::
Default configuration files vary by distribution. You might need
to add these sections and options rather than modifying existing
sections and options. Also, an ellipsis (...) in the configuration
snippets indicates potential default configuration options that you
should retain.
#. Install the packages:
.. note::
Complete OpenStack environments already include some of these
packages.
.. only:: ubuntu or debian
.. code-block:: console
# apt-get install swift swift-proxy python-swiftclient python-keystoneclient \
python-keystonemiddleware memcached
.. only:: rdo
.. code-block:: console
# yum install openstack-swift-proxy python-swiftclient python-keystone-auth-token \
python-keystonemiddleware memcached
.. only:: obs
.. code-block:: console
# zypper install openstack-swift-proxy python-swiftclient python-keystoneclient \
python-keystonemiddleware python-xml memcached
.. only:: ubuntu or debian
2. Create the :file:`/etc/swift` directory.
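For example, a plain ``mkdir`` creates it:
.. code-block:: console
# mkdir /etc/swift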
3. Obtain the proxy service configuration file from the Object Storage
source repository:
.. code-block:: console
# curl -o /etc/swift/proxy-server.conf \
https://git.openstack.org/cgit/openstack/swift/plain/etc/proxy-server.conf-sample?h=stable/kilo
.. only:: rdo
2. Obtain the proxy service configuration file from the Object Storage
source repository:
.. code-block:: console
# curl -o /etc/swift/proxy-server.conf \
https://git.openstack.org/cgit/openstack/swift/plain/etc/proxy-server.conf-sample?h=stable/kilo
.. only:: obs
2. .. include:: swift-controller-node-include.txt
.. only:: rdo
3. .. include:: swift-controller-node-include.txt
.. only:: ubuntu
4. .. include:: swift-controller-node-include.txt

swift-finalize-installation.rst
@@ -0,0 +1,112 @@
===========================================
Configure hashes and default storage policy
===========================================
.. note::
Default configuration files vary by distribution. You might need
to add these sections and options rather than modifying existing
sections and options. Also, an ellipsis (...) in the configuration
snippets indicates potential default configuration options that you
should retain.
.. only:: ubuntu or rdo or debian
#. Obtain the :file:`/etc/swift/swift.conf` file from the Object
Storage source repository:
.. code-block:: console
# curl -o /etc/swift/swift.conf \
https://git.openstack.org/cgit/openstack/swift/plain/etc/swift.conf-sample?h=stable/kilo
#. Edit the :file:`/etc/swift/swift.conf` file and complete the following
actions:
* In the ``[swift-hash]`` section, configure the hash path prefix and
suffix for your environment.
.. code-block:: ini
:linenos:
[swift-hash]
...
swift_hash_path_suffix = HASH_PATH_SUFFIX
swift_hash_path_prefix = HASH_PATH_PREFIX
Replace ``HASH_PATH_SUFFIX`` and ``HASH_PATH_PREFIX`` with unique values.
.. warning::
Keep these values secret and do not change or lose them.
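One way to generate suitable random values is with ``openssl``, for example:
.. code-block:: console
$ openssl rand -hex 10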
* In the ``[storage-policy:0]`` section, configure the default
storage policy:
.. code-block:: ini
:linenos:
[storage-policy:0]
...
name = Policy-0
default = yes
#. Copy the :file:`swift.conf` file to the :file:`/etc/swift` directory on
each storage node and any additional nodes running the proxy service.
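For example, assuming the ``object1`` and ``object2`` host names used in this
guide, you can copy the file with ``scp``:
.. code-block:: console
# scp /etc/swift/swift.conf object1:/etc/swift/
# scp /etc/swift/swift.conf object2:/etc/swift/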
#. On all nodes, ensure proper ownership of the configuration directory:
.. code-block:: console
# chown -R swift:swift /etc/swift
.. only:: ubuntu or debian
4. On the controller node and any other nodes running the proxy service,
restart the Object Storage proxy service including its dependencies:
.. code-block:: console
# service memcached restart
# service swift-proxy restart
5. On the storage nodes, start the Object Storage services:
.. code-block:: console
# swift-init all start
.. note::
The storage node runs many Object Storage services and the
``swift-init`` command makes them easier to manage. You can ignore
errors from services not running on the storage node.
.. only:: rdo or obs
4. On the controller node and any other nodes running the proxy service,
start the Object Storage proxy service including its dependencies and
configure them to start when the system boots:
.. code-block:: console
# systemctl enable openstack-swift-proxy.service memcached.service
# systemctl start openstack-swift-proxy.service memcached.service
5. On the storage nodes, start the Object Storage services and configure
them to start when the system boots:
.. code-block:: console
# systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service \
openstack-swift-account-reaper.service openstack-swift-account-replicator.service
# systemctl start openstack-swift-account.service openstack-swift-account-auditor.service \
openstack-swift-account-reaper.service openstack-swift-account-replicator.service
# systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service \
openstack-swift-container-replicator.service openstack-swift-container-updater.service
# systemctl start openstack-swift-container.service openstack-swift-container-auditor.service \
openstack-swift-container-replicator.service openstack-swift-container-updater.service
# systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service \
openstack-swift-object-replicator.service openstack-swift-object-updater.service
# systemctl start openstack-swift-object.service openstack-swift-object-auditor.service \
openstack-swift-object-replicator.service openstack-swift-object-updater.service

swift-initial-rings.rst
@@ -0,0 +1,259 @@
====================
Create initial rings
====================
Before starting the Object Storage services, you must create the initial
account, container, and object rings. The ring builder creates configuration
files that each node uses to determine and deploy the storage architecture.
For simplicity, this guide uses one region and zone with 2^10 (1024) maximum
partitions, 3 replicas of each object, and a minimum of 1 hour between moving
a partition more than once. These values correspond to the arguments of the
``swift-ring-builder create`` commands below: the partition power (10), the
replica count (3), and ``min_part_hours`` (1). For Object Storage, a partition
indicates a directory on a storage device rather than a conventional partition
table.
For more information, see the
`Deployment Guide <http://docs.openstack.org/developer/swift/deployment_guide.html>`__.
Account ring
~~~~~~~~~~~~
The account server uses the account ring to maintain lists of containers.
To create the ring, perform the following steps on the controller node.
#. Change to the :file:`/etc/swift` directory.
#. Create the base :file:`account.builder` file:
.. code-block:: console
# swift-ring-builder account.builder create 10 3 1
.. note::
This command provides no output.
#. Add each storage node to the ring:
.. code-block:: console
# swift-ring-builder account.builder \
add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6002 \
--device DEVICE_NAME --weight DEVICE_WEIGHT
Replace ``STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS`` with the IP address
of the management network on the storage node. Replace ``DEVICE_NAME`` with a
storage device name on the same storage node. For example, using the first
storage node in :doc:`swift-storage-node` with the :file:`/dev/sdb1` storage
device and weight of 100:
.. code-block:: console
# swift-ring-builder account.builder add \
--region 1 --zone 1 --ip 10.0.0.51 --port 6002 --device sdb1 --weight 100
Repeat this command for each storage device on each storage node. In the
example architecture, use the command in four variations:
.. code-block:: console
# swift-ring-builder account.builder add \
--region 1 --zone 1 --ip 10.0.0.51 --port 6002 --device sdb1 --weight 100
Device d0r1z1-10.0.0.51:6002R10.0.0.51:6002/sdb1_"" with 100.0 weight got id 0
# swift-ring-builder account.builder add \
--region 1 --zone 2 --ip 10.0.0.51 --port 6002 --device sdc1 --weight 100
Device d1r1z2-10.0.0.51:6002R10.0.0.51:6002/sdc1_"" with 100.0 weight got id 1
# swift-ring-builder account.builder add \
--region 1 --zone 3 --ip 10.0.0.52 --port 6002 --device sdb1 --weight 100
Device d2r1z3-10.0.0.52:6002R10.0.0.52:6002/sdb1_"" with 100.0 weight got id 2
# swift-ring-builder account.builder add \
--region 1 --zone 4 --ip 10.0.0.52 --port 6002 --device sdc1 --weight 100
Device d3r1z4-10.0.0.52:6002R10.0.0.52:6002/sdc1_"" with 100.0 weight got id 3
#. Verify the ring contents:
.. code-block:: console
# swift-ring-builder account.builder
account.builder, build version 4
1024 partitions, 3.000000 replicas, 1 regions, 4 zones, 4 devices, 100.00 balance, 0.00 dispersion
The minimum number of hours before a partition can be reassigned is 1
The overload factor is 0.00% (0.000000)
Devices: id region zone ip address port replication ip replication port name weight partitions balance meta
0 1 1 10.0.0.51 6002 10.0.0.51 6002 sdb1 100.00 0 -100.00
1 1 2 10.0.0.51 6002 10.0.0.51 6002 sdc1 100.00 0 -100.00
2 1 3 10.0.0.52 6002 10.0.0.52 6002 sdb1 100.00 0 -100.00
3 1 4 10.0.0.52 6002 10.0.0.52 6002 sdc1 100.00 0 -100.00
#. Rebalance the ring:
.. code-block:: console
# swift-ring-builder account.builder rebalance
Reassigned 1024 (100.00%) partitions. Balance is now 0.00. Dispersion is now 0.00
Container ring
~~~~~~~~~~~~~~
The container server uses the container ring to maintain lists of objects.
However, it does not track object locations.
To create the ring, perform the following steps on the controller node.
#. Change to the :file:`/etc/swift` directory.
#. Create the base :file:`container.builder` file:
.. code-block:: console
# swift-ring-builder container.builder create 10 3 1
.. note::
This command provides no output.
#. Add each storage node to the ring:
.. code-block:: console
# swift-ring-builder container.builder \
add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6001 \
--device DEVICE_NAME --weight DEVICE_WEIGHT
Replace ``STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS`` with the IP address
of the management network on the storage node. Replace ``DEVICE_NAME`` with a
storage device name on the same storage node. For example, using the first
storage node in :doc:`swift-storage-node` with the :file:`/dev/sdb1`
storage device and weight of 100:
.. code-block:: console
# swift-ring-builder container.builder add \
--region 1 --zone 1 --ip 10.0.0.51 --port 6001 --device sdb1 --weight 100
Repeat this command for each storage device on each storage node. In the
example architecture, use the command in four variations:
.. code-block:: console
# swift-ring-builder container.builder add \
--region 1 --zone 1 --ip 10.0.0.51 --port 6001 --device sdb1 --weight 100
Device d0r1z1-10.0.0.51:6001R10.0.0.51:6001/sdb1_"" with 100.0 weight got id 0
# swift-ring-builder container.builder add \
--region 1 --zone 2 --ip 10.0.0.51 --port 6001 --device sdc1 --weight 100
Device d1r1z2-10.0.0.51:6001R10.0.0.51:6001/sdc1_"" with 100.0 weight got id 1
# swift-ring-builder container.builder add \
--region 1 --zone 3 --ip 10.0.0.52 --port 6001 --device sdb1 --weight 100
Device d2r1z3-10.0.0.52:6001R10.0.0.52:6001/sdb1_"" with 100.0 weight got id 2
# swift-ring-builder container.builder add \
--region 1 --zone 4 --ip 10.0.0.52 --port 6001 --device sdc1 --weight 100
Device d3r1z4-10.0.0.52:6001R10.0.0.52:6001/sdc1_"" with 100.0 weight got id 3
#. Verify the ring contents:
.. code-block:: console
# swift-ring-builder container.builder
container.builder, build version 4
1024 partitions, 3.000000 replicas, 1 regions, 4 zones, 4 devices, 100.00 balance, 0.00 dispersion
The minimum number of hours before a partition can be reassigned is 1
The overload factor is 0.00% (0.000000)
Devices: id region zone ip address port replication ip replication port name weight partitions balance meta
0 1 1 10.0.0.51 6001 10.0.0.51 6001 sdb1 100.00 0 -100.00
1 1 2 10.0.0.51 6001 10.0.0.51 6001 sdc1 100.00 0 -100.00
2 1 3 10.0.0.52 6001 10.0.0.52 6001 sdb1 100.00 0 -100.00
3 1 4 10.0.0.52 6001 10.0.0.52 6001 sdc1 100.00 0 -100.00
#. Rebalance the ring:
.. code-block:: console
# swift-ring-builder container.builder rebalance
Reassigned 1024 (100.00%) partitions. Balance is now 0.00. Dispersion is now 0.00
Object ring
~~~~~~~~~~~
The object server uses the object ring to maintain lists of object locations
on local devices.
To create the ring, perform the following steps on the controller node.
#. Change to the :file:`/etc/swift` directory.
#. Create the base :file:`object.builder` file:
.. code-block:: console
# swift-ring-builder object.builder create 10 3 1
.. note::
This command provides no output.
#. Add each storage node to the ring:
.. code-block:: console
# swift-ring-builder object.builder \
add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6000 \
--device DEVICE_NAME --weight DEVICE_WEIGHT
Replace ``STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS`` with the IP address
of the management network on the storage node. Replace ``DEVICE_NAME`` with
a storage device name on the same storage node. For example, using the first
storage node in :doc:`swift-storage-node` with the :file:`/dev/sdb1` storage
device and weight of 100:
.. code-block:: console
# swift-ring-builder object.builder add \
--region 1 --zone 1 --ip 10.0.0.51 --port 6000 --device sdb1 --weight 100
Repeat this command for each storage device on each storage node. In the
example architecture, use the command in four variations:
.. code-block:: console
# swift-ring-builder object.builder add \
--region 1 --zone 1 --ip 10.0.0.51 --port 6000 --device sdb1 --weight 100
Device d0r1z1-10.0.0.51:6000R10.0.0.51:6000/sdb1_"" with 100.0 weight got id 0
# swift-ring-builder object.builder add \
--region 1 --zone 2 --ip 10.0.0.51 --port 6000 --device sdc1 --weight 100
Device d1r1z2-10.0.0.51:6000R10.0.0.51:6000/sdc1_"" with 100.0 weight got id 1
# swift-ring-builder object.builder add \
--region 1 --zone 3 --ip 10.0.0.52 --port 6000 --device sdb1 --weight 100
Device d2r1z3-10.0.0.52:6000R10.0.0.52:6000/sdb1_"" with 100.0 weight got id 2
# swift-ring-builder object.builder add \
--region 1 --zone 4 --ip 10.0.0.52 --port 6000 --device sdc1 --weight 100
Device d3r1z4-10.0.0.52:6000R10.0.0.52:6000/sdc1_"" with 100.0 weight got id 3
#. Verify the ring contents:
.. code-block:: console
# swift-ring-builder object.builder
object.builder, build version 4
1024 partitions, 3.000000 replicas, 1 regions, 4 zones, 4 devices, 100.00 balance, 0.00 dispersion
The minimum number of hours before a partition can be reassigned is 1
The overload factor is 0.00% (0.000000)
Devices: id region zone ip address port replication ip replication port name weight partitions balance meta
0 1 1 10.0.0.51 6000 10.0.0.51 6000 sdb1 100.00 0 -100.00
1 1 2 10.0.0.51 6000 10.0.0.51 6000 sdc1 100.00 0 -100.00
2 1 3 10.0.0.52 6000 10.0.0.52 6000 sdb1 100.00 0 -100.00
3 1 4 10.0.0.52 6000 10.0.0.52 6000 sdc1 100.00 0 -100.00
#. Rebalance the ring:
.. code-block:: console
# swift-ring-builder object.builder rebalance
Reassigned 1024 (100.00%) partitions. Balance is now 0.00. Dispersion is now 0.00
Distribute ring configuration files
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Copy the :file:`account.ring.gz`, :file:`container.ring.gz`, and
:file:`object.ring.gz` files to the :file:`/etc/swift` directory
on each storage node and any additional nodes running the proxy service.
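For example, assuming the ``object1`` and ``object2`` host names used in this
guide, you can copy the files with ``scp`` from the :file:`/etc/swift`
directory on the controller node:
.. code-block:: console
# scp account.ring.gz container.ring.gz object.ring.gz object1:/etc/swift/
# scp account.ring.gz container.ring.gz object.ring.gz object2:/etc/swift/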

swift-next-steps.rst
@@ -0,0 +1,7 @@
==========
Next steps
==========
Your OpenStack environment now includes Object Storage. You can
:doc:`launch an instance <launch-instance>` or add more services
to your environment in the following chapters.

swift-storage-node-include1.txt
@@ -0,0 +1,42 @@
Edit the :file:`/etc/swift/account-server.conf` file and complete the
following actions:
* In the ``[DEFAULT]`` section, configure the bind IP address, bind port,
user, configuration directory, and mount point directory:
.. code-block:: ini
:linenos:
[DEFAULT]
...
bind_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
bind_port = 6002
user = swift
swift_dir = /etc/swift
devices = /srv/node
Replace ``MANAGEMENT_INTERFACE_IP_ADDRESS`` with the IP address of the
management network on the storage node.
* In the ``[pipeline:main]`` section, enable the appropriate modules:
.. code-block:: ini
:linenos:
[pipeline:main]
pipeline = healthcheck recon account-server
.. note::
For more information on other modules that enable additional features,
see the `Deployment Guide <http://docs.openstack.org/developer/swift/deployment_guide.html>`__.
* In the ``[filter:recon]`` section, configure the recon (meters) cache
directory:
.. code-block:: ini
:linenos:
[filter:recon]
...
recon_cache_path = /var/cache/swift

swift-storage-node-include2.txt
@@ -0,0 +1,42 @@
Edit the :file:`/etc/swift/container-server.conf` file and complete the
following actions:
* In the ``[DEFAULT]`` section, configure the bind IP address, bind port,
user, configuration directory, and mount point directory:
.. code-block:: ini
:linenos:
[DEFAULT]
...
bind_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
bind_port = 6001
user = swift
swift_dir = /etc/swift
devices = /srv/node
Replace ``MANAGEMENT_INTERFACE_IP_ADDRESS`` with the IP address of the
management network on the storage node.
* In the ``[pipeline:main]`` section, enable the appropriate modules:
.. code-block:: ini
:linenos:
[pipeline:main]
pipeline = healthcheck recon container-server
.. note::
For more information on other modules that enable additional features,
see the `Deployment Guide <http://docs.openstack.org/developer/swift/deployment_guide.html>`__.
* In the ``[filter:recon]`` section, configure the recon (meters) cache
directory:
.. code-block:: ini
:linenos:
[filter:recon]
...
recon_cache_path = /var/cache/swift

swift-storage-node-include3.txt
@@ -0,0 +1,43 @@
Edit the :file:`/etc/swift/object-server.conf` file and complete the
following actions:
* In the ``[DEFAULT]`` section, configure the bind IP address, bind port,
user, configuration directory, and mount point directory:
.. code-block:: ini
:linenos:
[DEFAULT]
...
bind_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
bind_port = 6000
user = swift
swift_dir = /etc/swift
devices = /srv/node
Replace ``MANAGEMENT_INTERFACE_IP_ADDRESS`` with the IP address of the
management network on the storage node.
* In the ``[pipeline:main]`` section, enable the appropriate modules:
.. code-block:: ini
:linenos:
[pipeline:main]
pipeline = healthcheck recon object-server
.. note::
For more information on other modules that enable additional features,
see the `Deployment Guide <http://docs.openstack.org/developer/swift/deployment_guide.html>`__.
* In the ``[filter:recon]`` section, configure the recon (meters) cache
and lock directories:
.. code-block:: ini
:linenos:
[filter:recon]
...
recon_cache_path = /var/cache/swift
recon_lock_path = /var/lock

swift-storage-node.rst
@@ -0,0 +1,266 @@
=======================================
Install and configure the storage nodes
=======================================
This section describes how to install and configure storage nodes
that operate the account, container, and object services. For
simplicity, this configuration references two storage nodes, each
containing two empty local block storage devices. Each of the
devices, :file:`/dev/sdb` and :file:`/dev/sdc`, must contain a
suitable partition table with one partition occupying the entire
device. Although the Object Storage service supports any file system
with :term:`extended attributes (xattr)`, testing and benchmarking
indicate the best performance and reliability on :term:`XFS`. For
more information on horizontally scaling your environment, see the
`Deployment Guide <http://docs.openstack.org/developer/swift/deployment_guide.html>`_.
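If the devices are still blank, one way to create such a partition table is
with ``parted``. For example, assuming GPT labels suit your environment:
.. code-block:: console
# parted -s /dev/sdb mklabel gpt mkpart primary xfs 0% 100%
# parted -s /dev/sdc mklabel gpt mkpart primary xfs 0% 100%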
To configure prerequisites
~~~~~~~~~~~~~~~~~~~~~~~~~~
You must configure each storage node before you install and configure
the Object Storage service on it. Similar to the controller node, each
storage node contains one network interface on the :term:`management network`.
Optionally, each storage node can contain a second network interface on
a separate network for replication. For more information, see
:doc:`basic_environment`.
#. Configure unique items on the first storage node:
Configure the management interface:
* IP address: ``10.0.0.51``
* Network mask: ``255.255.255.0`` (or ``/24``)
* Default gateway: ``10.0.0.1``
Set the hostname of the node to ``object1``.
#. Configure unique items on the second storage node:
Configure the management interface:
* IP address: ``10.0.0.52``
* Network mask: ``255.255.255.0`` (or ``/24``)
* Default gateway: ``10.0.0.1``
Set the hostname of the node to ``object2``.
#. Configure shared items on both storage nodes:
* Copy the contents of the :file:`/etc/hosts` file from the controller
node and add the following to it:
.. code-block:: ini
:linenos:
# object1
10.0.0.51 object1
# object2
10.0.0.52 object2
Also add this content to the :file:`/etc/hosts` file on all other
nodes in your environment.
* Install and configure :term:`NTP <Network Time Protocol (NTP)>` using
the instructions in :doc:`basics-ntp`.
* Install the supporting utility packages:
.. only:: ubuntu or debian
.. code-block:: console
# apt-get install xfsprogs rsync
.. only:: rdo
.. code-block:: console
# yum install xfsprogs rsync
.. only:: obs
.. code-block:: console
# zypper install xfsprogs rsync
* Format the :file:`/dev/sdb1` and :file:`/dev/sdc1` partitions as XFS:
.. code-block:: console
# mkfs.xfs /dev/sdb1
# mkfs.xfs /dev/sdc1
* Create the mount point directory structure:
.. code-block:: console
# mkdir -p /srv/node/sdb1
# mkdir -p /srv/node/sdc1
* Edit the :file:`/etc/fstab` file and add the following to it:
.. code-block:: ini
:linenos:
/dev/sdb1 /srv/node/sdb1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 2
/dev/sdc1 /srv/node/sdc1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 2
* Mount the devices:
.. code-block:: console
# mount /srv/node/sdb1
# mount /srv/node/sdc1
#. Edit the :file:`/etc/rsyncd.conf` file and add the following to it:
.. code-block:: ini
:linenos:
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = MANAGEMENT_INTERFACE_IP_ADDRESS
[account]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/account.lock
[container]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/container.lock
[object]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/object.lock
Replace ``MANAGEMENT_INTERFACE_IP_ADDRESS`` with the IP address of the
management network on the storage node.
.. note::
The ``rsync`` service requires no authentication, so consider running
it on a private network.
.. only:: ubuntu or debian
5. Edit the :file:`/etc/default/rsync` file and enable the ``rsync``
service:
.. code-block:: ini
:linenos:
RSYNC_ENABLE=true
6. Start the ``rsync`` service:
.. code-block:: console
# service rsync start
.. only:: obs or rdo
5. Start the ``rsyncd`` service and configure it to start when the
system boots:
.. code-block:: console
# systemctl enable rsyncd.service
# systemctl start rsyncd.service
Install and configure storage node components
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. note::
Default configuration files vary by distribution. You might need
to add these sections and options rather than modifying existing
sections and options. Also, an ellipsis (...) in the configuration
snippets indicates potential default configuration options that you
should retain.
.. note::
Perform these steps on each storage node.
#. Install the packages:
.. only:: ubuntu or debian
.. code-block:: console
# apt-get install swift swift-account swift-container swift-object
.. only:: rdo
.. code-block:: console
# yum install openstack-swift-account openstack-swift-container \
openstack-swift-object
.. only:: obs
.. code-block:: console
# zypper install openstack-swift-account \
openstack-swift-container openstack-swift-object python-xml
.. only:: ubuntu or rdo or debian
2. Obtain the account, container, object, container-reconciler, and
object-expirer service configuration files from the Object Storage
source repository:
.. code-block:: console
# curl -o /etc/swift/account-server.conf \
https://git.openstack.org/cgit/openstack/swift/plain/etc/account-server.conf-sample?h=stable/kilo
# curl -o /etc/swift/container-server.conf \
https://git.openstack.org/cgit/openstack/swift/plain/etc/container-server.conf-sample?h=stable/kilo
# curl -o /etc/swift/object-server.conf \
https://git.openstack.org/cgit/openstack/swift/plain/etc/object-server.conf-sample?h=stable/kilo
# curl -o /etc/swift/container-reconciler.conf \
https://git.openstack.org/cgit/openstack/swift/plain/etc/container-reconciler.conf-sample?h=stable/kilo
# curl -o /etc/swift/object-expirer.conf \
https://git.openstack.org/cgit/openstack/swift/plain/etc/object-expirer.conf-sample?h=stable/kilo
3. .. include:: swift-storage-node-include1.txt
4. .. include:: swift-storage-node-include2.txt
5. .. include:: swift-storage-node-include3.txt
6. Ensure proper ownership of the mount point directory structure:
.. code-block:: console
# chown -R swift:swift /srv/node
7. Create the :file:`recon` directory and ensure proper ownership of it:
.. code-block:: console
# mkdir -p /var/cache/swift
# chown -R swift:swift /var/cache/swift
.. only:: obs
2. .. include:: swift-storage-node-include1.txt
3. .. include:: swift-storage-node-include2.txt
4. .. include:: swift-storage-node-include3.txt
5. Ensure proper ownership of the mount point directory structure:
.. code-block:: console
# chown -R swift:swift /srv/node
6. Create the :file:`recon` directory and ensure proper ownership of it:
.. code-block:: console
# mkdir -p /var/cache/swift
# chown -R swift:swift /var/cache/swift

swift-verify.rst
@@ -0,0 +1,59 @@
Verify operation
~~~~~~~~~~~~~~~~
This section describes how to verify operation of the Object Storage
service.
.. note::
The ``swift`` client requires the ``-V 3`` parameter to use the
Identity version 3 API.
.. note::
Perform these steps on the controller node.
#. Source the ``demo`` credentials:
.. code-block:: console
$ source demo-openrc.sh
#. Show the service status:
.. code-block:: console
$ swift -V 3 stat
Account: AUTH_c75cafb58f5049b8a976506737210756
Containers: 0
Objects: 0
Bytes: 0
X-Put-Timestamp: 1429736713.92936
X-Timestamp: 1429736713.92936
X-Trans-Id: txdea07add01ca4dbdb49a2-0055380d09
Content-Type: text/plain; charset=utf-8
#. Upload a test file:
.. code-block:: console
$ swift -V 3 upload demo-container1 FILE
FILE
Replace ``FILE`` with the name of a local file to upload to the
``demo-container1`` container.
#. List containers:
.. code-block:: console
$ swift -V 3 list
demo-container1
#. Download a test file:
.. code-block:: console
$ swift -V 3 download demo-container1 FILE
FILE [auth 0.295s, headers 0.339s, total 0.339s, 0.005 MB/s]
Replace ``FILE`` with the name of the file uploaded to the
``demo-container1`` container.
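#. Optionally, remove the test file again to clean up:
.. code-block:: console
$ swift -V 3 delete demo-container1 FILE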

swift.rst
@@ -1,6 +1,21 @@
.. _swift:

==================
Add Object Storage
==================
The OpenStack Object Storage services (swift) work together to provide
object storage and retrieval through a :term:`REST API <RESTful>`.
Your environment must include at least the Identity service (keystone)
before you deploy Object Storage.
.. toctree::
:maxdepth: 2
common/get_started_object_storage.rst
swift-controller-node.rst
swift-storage-node.rst
swift-initial-rings.rst
swift-finalize-installation.rst
swift-verify.rst
swift-next-steps.rst