[config-ref] Convert the swift section to RST

The organization has been reworked to work around some Sphinx problems
with multiple includes.

Change-Id: I59d24a585cf8af7a41c62acae707fbbc6ef021d6
Implements: blueprint config-ref-rst
Gauvain Pocentek 2015-12-07 13:44:27 +01:00
parent 353df56915
commit c62bcd2c05
9 changed files with 1153 additions and 0 deletions

View File

@ -1,3 +1,16 @@
==============
Object Storage
==============

.. toctree::
   :maxdepth: 2

   object-storage/about.rst
   object-storage/general-service-conf.rst
   object-storage/configure.rst
   object-storage/features.rst
   object-storage/configure-s3.rst
   object-storage/cors.rst
   object-storage/listendpoints.rst
   tables/conf-changes/swift.rst

View File

@ -0,0 +1,24 @@
==============================
Introduction to Object Storage
==============================

Object Storage is a robust, highly scalable and fault-tolerant storage
platform for unstructured data such as objects. Objects are stored bits,
accessed through a RESTful, HTTP-based interface. You cannot access data at
the block or file level. Object Storage is commonly used to archive and back
up data, with use cases in virtual machine image, photo, video, and music
storage.

Object Storage provides a high degree of availability, throughput, and
performance with its scale-out architecture. Each object is replicated
across multiple servers, residing within the same data center or across data
centers, which mitigates the risk of network and hardware failure. In the
event of hardware failure, Object Storage automatically copies objects to a
new location to ensure that there are always three copies available. Object
Storage is an eventually consistent distributed storage platform; it
sacrifices consistency for maximum availability and partition tolerance.
Object Storage enables you to create a reliable platform by using commodity
hardware and inexpensive storage.

For more information, review the key concepts in the developer documentation
at `docs.openstack.org/developer/swift/
<http://docs.openstack.org/developer/swift/>`__.

View File

@ -0,0 +1,92 @@
========================================
Configure Object Storage with the S3 API
========================================

The Swift3 middleware emulates the S3 REST API on top of Object Storage.

The following operations are currently supported:

- GET Service
- DELETE Bucket
- GET Bucket (List Objects)
- PUT Bucket
- DELETE Object
- GET Object
- HEAD Object
- PUT Object
- PUT Object (Copy)

To use this middleware, first download the latest version from its
repository to your proxy servers.

.. code-block:: console

   $ git clone https://git.openstack.org/openstack/swift3


Then, install it using standard Python mechanisms, such as:

.. code-block:: console

   # python setup.py install

Alternatively, if you have configured the Ubuntu Cloud Archive, you may use:

.. code-block:: console

   # apt-get install swift-python-s3


To add this middleware to your configuration, add the swift3 middleware in
front of the swauth middleware, and before any other middleware that looks
at Object Storage requests (like rate limiting).

Ensure that your ``proxy-server.conf`` file contains swift3 in the pipeline
and the ``[filter:swift3]`` section, as shown below:

.. code-block:: ini

   [pipeline:main]
   pipeline = catch_errors healthcheck cache swift3 swauth proxy-server

   [filter:swift3]
   use = egg:swift3#swift3


Next, configure the tool that you use to connect to the S3 API. For S3curl,
for example, you must add your host IP information by adding your host IP to
the ``@endpoints`` array (line 33 in ``s3curl.pl``):

.. code-block:: perl

   my @endpoints = ( '1.2.3.4' );

Now you can send commands to the endpoint, such as:

.. code-block:: console

   $ ./s3curl.pl --id 'a7811544507ebaf6c9a7a8804f47ea1c' \
     --key 'a7d8e981-e296-d2ba-cb3b-db7dd23159bd' \
     --get - -s -v http://1.2.3.4:8080


To set up your client, ensure you are using the EC2 credentials, which can
be downloaded from the API Endpoints tab of the dashboard. The host should
also point to the Object Storage node's hostname. The client must also use
the old-style calling format, and not the hostname-based container format.
Here is an example client setup using the Python boto library on a locally
installed all-in-one Object Storage installation.

.. code-block:: python

   import boto.s3.connection

   # boto's S3Connection pointed at the Swift proxy instead of AWS
   connection = boto.s3.connection.S3Connection(
       aws_access_key_id='a7811544507ebaf6c9a7a8804f47ea1c',
       aws_secret_access_key='a7d8e981-e296-d2ba-cb3b-db7dd23159bd',
       port=8080,
       host='127.0.0.1',
       is_secure=False,
       calling_format=boto.s3.connection.OrdinaryCallingFormat())

View File

@ -0,0 +1,211 @@
========================
Configure Object Storage
========================

OpenStack Object Storage uses multiple configuration files for multiple
services and background daemons, and ``paste.deploy`` to manage server
configurations. Default configuration options appear in the ``[DEFAULT]``
section. You can override the default values by setting values in the other
sections.

Object server configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Find an example object server configuration at
``etc/object-server.conf-sample`` in the source code repository.

The available configuration options are:

.. include:: ../tables/swift-object-server-DEFAULT.rst
.. include:: ../tables/swift-object-server-app-object-server.rst
.. include:: ../tables/swift-object-server-pipeline-main.rst
.. include:: ../tables/swift-object-server-object-replicator.rst
.. include:: ../tables/swift-object-server-object-updater.rst
.. include:: ../tables/swift-object-server-object-auditor.rst
.. include:: ../tables/swift-object-server-filter-healthcheck.rst
.. include:: ../tables/swift-object-server-filter-recon.rst
.. include:: ../tables/swift-object-server-filter-xprofile.rst

Sample object server configuration file
---------------------------------------

.. remote-code-block:: ini

   https://git.openstack.org/cgit/openstack/swift/plain/etc/object-server.conf-sample?h=stable/liberty

Object expirer configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Find an example object expirer configuration at
``etc/object-expirer.conf-sample`` in the source code repository.

The available configuration options are:

.. include:: ../tables/swift-object-expirer-DEFAULT.rst
.. include:: ../tables/swift-object-expirer-app-proxy-server.rst
.. include:: ../tables/swift-object-expirer-filter-cache.rst
.. include:: ../tables/swift-object-expirer-filter-catch_errors.rst
.. include:: ../tables/swift-object-expirer-filter-proxy-logging.rst
.. include:: ../tables/swift-object-expirer-object-expirer.rst
.. include:: ../tables/swift-object-expirer-pipeline-main.rst

Sample object expirer configuration file
----------------------------------------

.. remote-code-block:: ini

   https://git.openstack.org/cgit/openstack/swift/plain/etc/object-expirer.conf-sample?h=stable/liberty

Container server configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Find an example container server configuration at
``etc/container-server.conf-sample`` in the source code repository.

The available configuration options are:

.. include:: ../tables/swift-container-server-DEFAULT.rst
.. include:: ../tables/swift-container-server-app-container-server.rst
.. include:: ../tables/swift-container-server-pipeline-main.rst
.. include:: ../tables/swift-container-server-container-replicator.rst
.. include:: ../tables/swift-container-server-container-updater.rst
.. include:: ../tables/swift-container-server-container-auditor.rst
.. include:: ../tables/swift-container-server-container-sync.rst
.. include:: ../tables/swift-container-server-filter-healthcheck.rst
.. include:: ../tables/swift-container-server-filter-recon.rst
.. include:: ../tables/swift-container-server-filter-xprofile.rst

Sample container server configuration file
------------------------------------------

.. remote-code-block:: ini

   https://git.openstack.org/cgit/openstack/swift/plain/etc/container-server.conf-sample?h=stable/liberty

Container sync realms configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Find an example container sync realms configuration at
``etc/container-sync-realms.conf-sample`` in the source code repository.

The available configuration options are:

.. include:: ../tables/swift-container-sync-realms-DEFAULT.rst
.. include:: ../tables/swift-container-sync-realms-realm1.rst
.. include:: ../tables/swift-container-sync-realms-realm2.rst

Sample container sync realms configuration file
-----------------------------------------------

.. remote-code-block:: ini

   https://git.openstack.org/cgit/openstack/swift/plain/etc/container-sync-realms.conf-sample?h=stable/liberty

Container reconciler configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Find an example container reconciler configuration at
``etc/container-reconciler.conf-sample`` in the source code repository.

The available configuration options are:

.. include:: ../tables/swift-container-reconciler-DEFAULT.rst
.. include:: ../tables/swift-container-reconciler-app-proxy-server.rst
.. include:: ../tables/swift-container-reconciler-container-reconciler.rst
.. include:: ../tables/swift-container-reconciler-filter-cache.rst
.. include:: ../tables/swift-container-reconciler-filter-catch_errors.rst
.. include:: ../tables/swift-container-reconciler-filter-proxy-logging.rst
.. include:: ../tables/swift-container-reconciler-pipeline-main.rst

Sample container reconciler configuration file
----------------------------------------------

.. remote-code-block:: ini

   https://git.openstack.org/cgit/openstack/swift/plain/etc/container-reconciler.conf-sample?h=stable/liberty

Account server configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Find an example account server configuration at
``etc/account-server.conf-sample`` in the source code repository.

The available configuration options are:

.. include:: ../tables/swift-account-server-DEFAULT.rst
.. include:: ../tables/swift-account-server-app-account-server.rst
.. include:: ../tables/swift-account-server-pipeline-main.rst
.. include:: ../tables/swift-account-server-account-replicator.rst
.. include:: ../tables/swift-account-server-account-auditor.rst
.. include:: ../tables/swift-account-server-account-reaper.rst
.. include:: ../tables/swift-account-server-filter-healthcheck.rst
.. include:: ../tables/swift-account-server-filter-recon.rst
.. include:: ../tables/swift-account-server-filter-xprofile.rst

Sample account server configuration file
----------------------------------------

.. remote-code-block:: ini

   https://git.openstack.org/cgit/openstack/swift/plain/etc/account-server.conf-sample?h=stable/liberty

Proxy server configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~

Find an example proxy server configuration at
``etc/proxy-server.conf-sample`` in the source code repository.

The available configuration options are:

.. include:: ../tables/swift-proxy-server-app-proxy-server.rst
.. include:: ../tables/swift-proxy-server-pipeline-main.rst
.. include:: ../tables/swift-proxy-server-filter-account-quotas.rst
.. include:: ../tables/swift-proxy-server-filter-authtoken.rst
.. include:: ../tables/swift-proxy-server-filter-cache.rst
.. include:: ../tables/swift-proxy-server-filter-catch_errors.rst
.. include:: ../tables/swift-proxy-server-filter-container_sync.rst
.. include:: ../tables/swift-proxy-server-filter-dlo.rst
.. include:: ../tables/swift-proxy-server-filter-versioned_writes.rst
.. include:: ../tables/swift-proxy-server-filter-gatekeeper.rst
.. include:: ../tables/swift-proxy-server-filter-healthcheck.rst
.. include:: ../tables/swift-proxy-server-filter-keystoneauth.rst
.. include:: ../tables/swift-proxy-server-filter-list-endpoints.rst
.. include:: ../tables/swift-proxy-server-filter-proxy-logging.rst
.. include:: ../tables/swift-proxy-server-filter-tempauth.rst
.. include:: ../tables/swift-proxy-server-filter-xprofile.rst

Sample proxy server configuration file
--------------------------------------

.. remote-code-block:: ini

   https://git.openstack.org/cgit/openstack/swift/plain/etc/proxy-server.conf-sample?h=stable/liberty

Proxy server memcache configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Find an example memcache configuration for the proxy server at
``etc/memcache.conf-sample`` in the source code repository.

The available configuration options are:

.. include:: ../tables/swift-memcache-memcache.rst

Rsyncd configuration
~~~~~~~~~~~~~~~~~~~~

Find an example rsyncd configuration at ``etc/rsyncd.conf-sample`` in
the source code repository.

The available configuration options are:

.. include:: ../tables/swift-rsyncd-account.rst
.. include:: ../tables/swift-rsyncd-container.rst
.. include:: ../tables/swift-rsyncd-object.rst
.. include:: ../tables/swift-rsyncd-object6010.rst
.. include:: ../tables/swift-rsyncd-object6020.rst
.. include:: ../tables/swift-rsyncd-object6030.rst
.. include:: ../tables/swift-rsyncd-object6040.rst
.. include:: ../tables/swift-rsyncd-object_sda.rst
.. include:: ../tables/swift-rsyncd-object_sdb.rst
.. include:: ../tables/swift-rsyncd-object_sdc.rst

View File

@ -0,0 +1,13 @@
=============================
Cross-origin resource sharing
=============================

Cross-Origin Resource Sharing (CORS) is a mechanism that allows code running
in a browser (JavaScript, for example) to make requests to a domain other
than the one it originated from. OpenStack Object Storage supports CORS
requests to containers and objects within the containers, using metadata
held on the container.

In addition to the metadata on containers, you can use the
``cors_allow_origin`` option in the ``proxy-server.conf`` file to set a list
of hosts that are included with any CORS request by default.

View File

@ -0,0 +1,652 @@
=================================
Configure Object Storage features
=================================

Object Storage zones
~~~~~~~~~~~~~~~~~~~~

In OpenStack Object Storage, data is placed across different tiers of
failure domains. First, data is spread across regions, then zones, then
servers, and finally across drives. Data is placed to get the highest
failure domain isolation. If you deploy multiple regions, the Object Storage
service places the data across the regions. Within a region, each replica of
the data should be stored in unique zones, if possible. If there is only one
zone, data should be placed on different servers. And if there is only one
server, data should be placed on different drives.

Regions are widely separated installations with a high-latency or otherwise
constrained network link between them. Zones are arbitrarily assigned, and
it is up to the administrator of the Object Storage cluster to choose an
isolation level and attempt to maintain the isolation level through
appropriate zone assignment. For example, a zone may be defined as a rack
with a single power source. Or a zone may be a DC room with a common utility
provider. Servers are identified by a unique IP/port. Drives are locally
attached storage volumes identified by mount point.

In small clusters (five nodes or fewer), everything is normally in a single
zone. Larger Object Storage deployments may assign zone designations
differently; for example, an entire cabinet or rack of servers may be
designated as a single zone to maintain replica availability if the cabinet
becomes unavailable (for example, due to failure of the top of rack switches
or a dedicated circuit). In very large deployments, such as service provider
level deployments, each zone might have an entirely autonomous switching and
power infrastructure, so that even the loss of an electrical circuit or
switching aggregator would result in the loss of a single replica at most.

Rackspace zone recommendations
------------------------------

For ease of maintenance on OpenStack Object Storage, Rackspace recommends
that you set up at least five nodes. Each node is assigned its own zone (for
a total of five zones), which gives you host level redundancy. This enables
you to take down a single zone for maintenance and still guarantee object
availability in the event that another zone fails during your maintenance.

You could keep each server in its own cabinet to achieve cabinet level
isolation, but you may wish to wait until your Object Storage service is
better established before developing cabinet-level isolation. OpenStack
Object Storage is flexible; if you later decide to change the isolation
level, you can take down one zone at a time and move its servers to
appropriate new homes.

RAID controller configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

OpenStack Object Storage does not require RAID. In fact, most RAID
configurations cause significant performance degradation. The main reason
for using a RAID controller is the battery-backed cache. It is very
important for data integrity reasons that when the operating system confirms
a write has been committed, the write has actually been committed to a
persistent location. Most disks lie about hardware commits by default,
instead writing to a faster write cache for performance reasons. In most
cases, that write cache exists only in non-persistent memory. In the case of
a loss of power, this data may never actually get committed to disk,
resulting in discrepancies that the underlying file system must handle.

OpenStack Object Storage works best on the XFS file system, and this
document assumes that the hardware being used is configured appropriately to
be mounted with the ``nobarrier`` option. For more information, see the `XFS
FAQ <http://xfs.org/index.php/XFS_FAQ>`__.

To get the most out of your hardware, it is essential that every disk used
in OpenStack Object Storage is configured as a standalone, individual RAID 0
disk; in the case of six disks, you would have six RAID 0s or one JBOD. Some
RAID controllers do not support JBOD or do not support battery-backed cache
with JBOD. To ensure the integrity of your data, you must ensure that the
individual drive caches are disabled and the battery-backed cache in your
RAID card is configured and used. Failure to configure the controller
properly in this case puts data at risk in the case of sudden loss of power.

You can also use hybrid drives or similar options for battery-backed cache
configurations without a RAID controller.

Throttle resources through rate limits
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Rate limiting in OpenStack Object Storage is implemented as a pluggable
middleware that you configure on the proxy server. Rate limiting is
performed on requests that result in database writes to the account and
container SQLite databases. It uses memcached and is dependent on the proxy
servers having highly synchronized time. The rate limits are limited by the
accuracy of the proxy server clocks.

Configure rate limiting
-----------------------

All configuration is optional. If no account or container limits are
provided, no rate limiting occurs. Available configuration options include:

.. include:: ../tables/swift-proxy-server-filter-ratelimit.rst

The container rate limits are linearly interpolated from the values given. A
sample container rate limiting could be:

.. code-block:: ini

   container_ratelimit_100 = 100
   container_ratelimit_200 = 50
   container_ratelimit_500 = 20

This would result in:

.. list-table:: Values for Rate Limiting with Sample Configuration Settings
   :header-rows: 1

   * - Container Size
     - Rate Limit
   * - 0-99
     - No limiting
   * - 100
     - 100
   * - 150
     - 75
   * - 500
     - 20
   * - 1000
     - 20

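Worked in code, the interpolation behaves as follows. This is an illustrative sketch, not the actual ratelimit middleware; the breakpoints mirror the sample settings above:

```python
# Illustrative sketch of linearly interpolated container rate limits.
# Breakpoints mirror the sample settings above; this is not the actual
# Object Storage ratelimit middleware.
LIMITS = {100: 100, 200: 50, 500: 20}

def container_ratelimit(size, limits=LIMITS):
    """Return the max requests/sec for a container holding `size` objects."""
    points = sorted(limits.items())
    if size < points[0][0]:
        return None               # below the first breakpoint: no limiting
    if size >= points[-1][0]:
        return points[-1][1]      # at or past the last breakpoint: flat rate
    for (lo, lo_rate), (hi, hi_rate) in zip(points, points[1:]):
        if lo <= size < hi:
            # linear interpolation between the surrounding breakpoints
            return lo_rate + (size - lo) * (hi_rate - lo_rate) / float(hi - lo)

print(container_ratelimit(150))   # 75.0, matching the table above
print(container_ratelimit(1000))  # 20
```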

Health check
~~~~~~~~~~~~

Provides an easy way to monitor whether the Object Storage proxy server is
alive. If you access the proxy with the path ``/healthcheck``, it responds
with ``OK`` in the response body, which monitoring tools can use.

.. include:: ../tables/swift-account-server-filter-healthcheck.rst

Domain remap
~~~~~~~~~~~~

Middleware that translates container and account parts of a domain to path
parameters that the proxy server understands.

.. include:: ../tables/swift-proxy-server-filter-domain_remap.rst

CNAME lookup
~~~~~~~~~~~~

Middleware that translates an unknown domain in the host header to
something that ends with the configured ``storage_domain`` by looking up
the given domain's CNAME record in DNS.

.. include:: ../tables/swift-proxy-server-filter-cname_lookup.rst

Temporary URL
~~~~~~~~~~~~~

Allows the creation of URLs to provide temporary access to objects. For
example, a website may wish to provide a link to download a large object in
OpenStack Object Storage, but the Object Storage account has no public
access. The website can generate a URL that provides GET access for a
limited time to the resource. When the web browser user clicks on the link,
the browser downloads the object directly from Object Storage, eliminating
the need for the website to act as a proxy for the request. If the user
shares the link with their friends, or accidentally posts it on a forum, the
direct access is limited to the expiration time set when the website created
the link.

A temporary URL is the typical URL associated with an object, with two
additional query parameters:

``temp_url_sig``
    A cryptographic signature.

``temp_url_expires``
    An expiration date, in Unix time.

An example of a temporary URL:

.. code-block:: none

   https://swift-cluster.example.com/v1/AUTH_a422b2-91f3-2f46-74b7-d7c9e8958f5d30/container/object?
   temp_url_sig=da39a3ee5e6b4b0d3255bfef95601890afd80709&
   temp_url_expires=1323479485

To create temporary URLs, first set the ``X-Account-Meta-Temp-URL-Key``
header on your Object Storage account to an arbitrary string. This string
serves as a secret key. For example, to set a key of
``b3968d0207b54ece87cccc06515a89d4`` by using the ``swift`` command-line
tool:

.. code-block:: console

   $ swift post -m "Temp-URL-Key:b3968d0207b54ece87cccc06515a89d4"

Next, generate an HMAC-SHA1 (RFC 2104) signature to specify:

- Which HTTP method to allow (typically ``GET`` or ``PUT``).
- The expiry date as a Unix timestamp.
- The full path to the object.
- The secret key set as the ``X-Account-Meta-Temp-URL-Key``.

Here is code generating the signature for a GET for 24 hours on
``/v1/AUTH_account/container/object``:

.. code-block:: python

   import hmac
   from hashlib import sha1
   from time import time

   method = 'GET'
   duration_in_seconds = 60*60*24
   expires = int(time() + duration_in_seconds)
   path = '/v1/AUTH_a422b2-91f3-2f46-74b7-d7c9e8958f5d30/container/object'
   key = 'mykey'
   hmac_body = '%s\n%s\n%s' % (method, expires, path)
   sig = hmac.new(key, hmac_body, sha1).hexdigest()
   s = 'https://{host}/{path}?temp_url_sig={sig}&temp_url_expires={expires}'
   url = s.format(host='swift-cluster.example.com', path=path,
                  sig=sig, expires=expires)

Any alteration of the resource path or query arguments results in a 401
Unauthorized error. Similarly, a PUT where GET was the allowed method returns a
401 error. HEAD is allowed if GET or PUT is allowed. Using this in combination
with browser form post translation middleware could also allow
direct-from-browser uploads to specific locations in Object Storage.
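The tamper resistance described above can be checked directly: recomputing the HMAC with a changed method or path yields a different signature, which is why such requests fail with 401. This sketch uses byte strings so it also runs on Python 3:

```python
import hmac
from hashlib import sha1

def tempurl_sig(method, expires, path, key):
    # Same HMAC-SHA1 body as in the generation example: method, expiry, path.
    hmac_body = '%s\n%s\n%s' % (method, expires, path)
    return hmac.new(key.encode(), hmac_body.encode(), sha1).hexdigest()

key = 'mykey'
expires = 1323479485
path = '/v1/AUTH_account/container/object'
sig = tempurl_sig('GET', expires, path, key)

# A PUT against a GET-only URL, or any altered path, no longer matches.
assert tempurl_sig('PUT', expires, path, key) != sig
assert tempurl_sig('GET', expires, path + 'x', key) != sig
# The unmodified request recomputes to the same signature.
assert tempurl_sig('GET', expires, path, key) == sig
```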

.. note::

   Changing the ``X-Account-Meta-Temp-URL-Key`` invalidates any previously
   generated temporary URLs within 60 seconds, which is the memcache time
   for the key. Object Storage supports up to two keys, specified by
   ``X-Account-Meta-Temp-URL-Key`` and ``X-Account-Meta-Temp-URL-Key-2``.
   Signatures are checked against both keys, if present. This process
   enables key rotation without invalidating all existing temporary URLs.

Object Storage includes the ``swift-temp-url`` script that generates the
query parameters automatically:

.. code-block:: console

   $ bin/swift-temp-url GET 3600 /v1/AUTH_account/container/object mykey
   /v1/AUTH_account/container/object?
   temp_url_sig=5c4cc8886f36a9d0919d708ade98bf0cc71c9e91&
   temp_url_expires=1374497657


Because this command only returns the path, you must prefix the Object
Storage host name (for example, ``https://swift-cluster.example.com``).

With GET Temporary URLs, a ``Content-Disposition`` header is set on the
response so that browsers interpret this as a file attachment to be saved.
The file name chosen is based on the object name, but you can override this
with a ``filename`` query parameter. The following example specifies a
filename of ``My Test File.pdf``:

.. code-block:: none

   https://swift-cluster.example.com/v1/AUTH_a422b2-91f3-2f46-74b7-d7c9e8958f5d30/container/object?
   temp_url_sig=da39a3ee5e6b4b0d3255bfef95601890afd80709&
   temp_url_expires=1323479485&
   filename=My+Test+File.pdf

If you do not want the object to be downloaded, you can cause
``Content-Disposition: inline`` to be set on the response by adding the
``inline`` parameter to the query string, as follows:

.. code-block:: none

   https://swift-cluster.example.com/v1/AUTH_account/container/object?
   temp_url_sig=da39a3ee5e6b4b0d3255bfef95601890afd80709&
   temp_url_expires=1323479485&inline

To enable Temporary URL functionality, edit ``/etc/swift/proxy-server.conf``
to add ``tempurl`` to the ``pipeline`` variable defined in the
``[pipeline:main]`` section. The ``tempurl`` entry should appear immediately
before the authentication filters in the pipeline, such as ``authtoken``,
``tempauth`` or ``keystoneauth``. For example:

.. code-block:: ini

   [pipeline:main]
   pipeline = healthcheck cache tempurl authtoken keystoneauth proxy-server

.. include:: ../tables/swift-proxy-server-filter-tempurl.rst

Name check filter
~~~~~~~~~~~~~~~~~

Name Check is a filter that disallows any paths that contain defined
forbidden characters or that exceed a defined length.

.. include:: ../tables/swift-proxy-server-filter-name_check.rst

Constraints
~~~~~~~~~~~

To change the OpenStack Object Storage internal limits, update the values in
the ``swift-constraints`` section in the ``swift.conf`` file. Use caution
when you update these values because they affect the performance in the
entire cluster.

.. include:: ../tables/swift-swift-swift-constraints.rst

Cluster health
~~~~~~~~~~~~~~

Use the ``swift-dispersion-report`` tool to measure overall cluster health.
This tool checks if a set of deliberately distributed containers and objects
are currently in their proper places within the cluster. For instance, a
common deployment has three replicas of each object. The health of that
object can be measured by checking if each replica is in its proper place.
If only two of the three replicas are in place, the object's health can be
said to be at 66.66%, where 100% would be perfect. A single object's health,
especially an older object, usually reflects the health of the entire
partition the object is in. If you make enough objects on a distinct
percentage of the partitions in the cluster, you get a good estimate of the
overall cluster health.

In practice, about 1% partition coverage seems to balance well between
accuracy and the amount of time it takes to gather results. To provide this
health value, you must create an account solely for this usage. Next, you
must place the containers and objects throughout the system so that they are
on distinct partitions. Use the ``swift-dispersion-populate`` tool to create
random container and object names until they fall on distinct partitions.

Last, and repeatedly for the life of the cluster, you must run the
``swift-dispersion-report`` tool to check the health of each container and
object.

These tools must have direct access to the entire cluster and ring files.
Installing them on a proxy server suffices.

The ``swift-dispersion-populate`` and ``swift-dispersion-report`` commands
both use the same ``/etc/swift/dispersion.conf`` configuration file. Example
``dispersion.conf`` file:

.. code-block:: ini

   [dispersion]
   auth_url = http://localhost:8080/auth/v1.0
   auth_user = test:tester
   auth_key = testing

You can use configuration options to specify the dispersion coverage, which
defaults to 1%, retries, concurrency, and so on. However, the defaults are
usually fine. After the configuration is in place, run the
``swift-dispersion-populate`` tool to populate the containers and objects
throughout the cluster. Now that those containers and objects are in place,
you can run the ``swift-dispersion-report`` tool to get a dispersion report
or view the overall health of the cluster. Here is an example of a cluster
in perfect health:

.. code-block:: console

   $ swift-dispersion-report
   Queried 2621 containers for dispersion reporting, 19s, 0 retries
   100.00% of container copies found (7863 of 7863)
   Sample represents 1.00% of the container partition space
   Queried 2619 objects for dispersion reporting, 7s, 0 retries
   100.00% of object copies found (7857 of 7857)
   Sample represents 1.00% of the object partition space

Now, deliberately double the weight of a device in the object ring (with
replication turned off) and re-run the dispersion report to show what impact
that has:

.. code-block:: console

   $ swift-ring-builder object.builder set_weight d0 200
   $ swift-ring-builder object.builder rebalance
   ...
   $ swift-dispersion-report
   Queried 2621 containers for dispersion reporting, 8s, 0 retries
   100.00% of container copies found (7863 of 7863)
   Sample represents 1.00% of the container partition space
   Queried 2619 objects for dispersion reporting, 7s, 0 retries
   There were 1763 partitions missing one copy.
   77.56% of object copies found (6094 of 7857)
   Sample represents 1.00% of the object partition space

You can see that the health of the objects in the cluster has gone down
significantly. Of course, this test environment has just four devices; in a
production environment with many devices, the impact of one device change is
much less. Next, run the replicators to get everything put back into place
and then rerun the dispersion report:

.. code-block:: console

   # start object replicators and monitor logs until they're caught up ...
   $ swift-dispersion-report
   Queried 2621 containers for dispersion reporting, 17s, 0 retries
   100.00% of container copies found (7863 of 7863)
   Sample represents 1.00% of the container partition space
   Queried 2619 objects for dispersion reporting, 7s, 0 retries
   100.00% of object copies found (7857 of 7857)
   Sample represents 1.00% of the object partition space

Alternatively, the dispersion report can also be output in JSON format. This
allows it to be more easily consumed by third-party utilities:

.. code-block:: console

   $ swift-dispersion-report -j
   {"object": {"retries:": 0, "missing_two": 0, "copies_found": 7863, "missing_one": 0,
   "copies_expected": 7863, "pct_found": 100.0, "overlapping": 0, "missing_all": 0}, "container":
   {"retries:": 0, "missing_two": 0, "copies_found": 12534, "missing_one": 0, "copies_expected":
   12534, "pct_found": 100.0, "overlapping": 15, "missing_all": 0}}

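A third-party utility might post-process the JSON report like this (a sketch; the field names match the sample output above, and ``pct_found`` is simply copies found over copies expected):

```python
import json

# Abbreviated sample report in the JSON format shown above.
raw = ('{"object": {"copies_found": 7857, "copies_expected": 7857,'
       ' "pct_found": 100.0},'
       ' "container": {"copies_found": 12534, "copies_expected": 12534,'
       ' "pct_found": 100.0}}')
report = json.loads(raw)

for kind in sorted(report):
    stats = report[kind]
    # pct_found in the report is derived from the same copy counts
    pct = 100.0 * stats['copies_found'] / stats['copies_expected']
    print('%s: %.2f%% of copies found' % (kind, pct))
```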
.. include:: ../tables/swift-dispersion-dispersion.rst

Static Large Object (SLO) support
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This feature is very similar to Dynamic Large Object (DLO) support in that
it enables the user to upload many objects concurrently and afterwards
download them as a single object. It is different in that it does not rely
on eventually consistent container listings to do so. Instead, a
user-defined manifest of the object segments is used.

For more information regarding SLO usage and support, please see: `Static
Large Objects
<http://docs.openstack.org/developer/swift/middleware.html#slo-doc>`__.

.. include:: ../tables/swift-proxy-server-filter-slo.rst
Container quotas
~~~~~~~~~~~~~~~~
The ``container_quotas`` middleware implements simple quotas that can be
imposed on Object Storage containers by a user with the ability to set
container metadata, most likely the account administrator. This can be useful
for limiting the scope of containers that are delegated to non-admin users,
exposed to form POST uploads, or just as a self-imposed sanity check.
Any object PUT operations that exceed these quotas return a ``Forbidden (403)``
status code.
Quotas are subject to several limitations: eventual consistency, the timeliness
of the cached container\_info (60 second TTL by default), and it is unable to
reject chunked transfer uploads that exceed the quota (though once the quota is
exceeded, new chunked transfers are refused).
Set quotas by adding meta values to the container. These values are validated
when you set them:
``X-Container-Meta-Quota-Bytes``
Maximum size of the container, in bytes.
``X-Container-Meta-Quota-Count``
Maximum object count of the container.
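For illustration, the check the middleware performs is roughly the following. This is a simplified sketch, not the actual ``container_quotas`` code; the function name and metadata dictionary shape are assumptions for the example:

```python
def check_container_quota(metadata, current_bytes, current_count,
                          incoming_size):
    """Return True if a PUT of incoming_size bytes fits the quotas.

    metadata mirrors the X-Container-Meta-Quota-* values; missing
    entries mean no limit. A failed check maps to a 403 response.
    """
    quota_bytes = metadata.get('quota-bytes')
    if quota_bytes is not None and \
            current_bytes + incoming_size > int(quota_bytes):
        return False
    quota_count = metadata.get('quota-count')
    if quota_count is not None and current_count + 1 > int(quota_count):
        return False
    return True

# A 2000-byte upload into a container already holding 9000 bytes
# exceeds a 10000-byte quota, so the PUT would be rejected (403).
print(check_container_quota({'quota-bytes': '10000'}, 9000, 5, 2000))
```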
Account quotas
~~~~~~~~~~~~~~
The ``account_quotas`` middleware blocks write requests (PUT, POST) if a given
account quota (in bytes) is exceeded, while DELETE requests are still allowed.
The ``x-account-meta-quota-bytes`` metadata entry must be set to store and
enable the quota. Write requests to this metadata entry are only permitted for
resellers. There is no account quota limitation on a reseller account even if
``x-account-meta-quota-bytes`` is set.
Any object PUT operations that exceed the quota return a 413 response (request
entity too large) with a descriptive body.
The following command uses an admin account that owns the Reseller role to set a
quota on the test account:
.. code-block:: console
$ swift -A http://127.0.0.1:8080/auth/v1.0 -U admin:admin -K admin \
--os-storage-url http://127.0.0.1:8080/v1/AUTH_test post -m quota-bytes:10000
Here is the stat listing of an account where quota has been set:
.. code-block:: console
$ swift -A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing stat
Account: AUTH_test
Containers: 0
Objects: 0
Bytes: 0
Meta Quota-Bytes: 10000
X-Timestamp: 1374075958.37454
X-Trans-Id: tx602634cf478546a39b1be-0051e6bc7a
This command removes the account quota:
.. code-block:: console
$ swift -A http://127.0.0.1:8080/auth/v1.0 -U admin:admin -K admin \
--os-storage-url http://127.0.0.1:8080/v1/AUTH_test post -m quota-bytes:
Bulk delete
~~~~~~~~~~~
Use ``bulk-delete`` to delete multiple files from an account with a single
request. The middleware responds to DELETE requests that include the header
``X-Bulk-Delete: true_value``. The body of the DELETE request is a
newline-separated list of files to delete. The files listed must be URL
encoded and in the form:
.. code-block:: none
/container_name/obj_name
If all files are successfully deleted (or did not exist), the operation returns
``HTTPOk``. If any files failed to delete, the operation returns
``HTTPBadGateway``. In both cases, the response body is a JSON dictionary that
shows the number of files that were successfully deleted or not found. The
files that failed are listed.
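Building the request body only requires URL-encoding each path and joining the entries with newlines. A minimal sketch (the container and object names are hypothetical):

```python
from urllib.parse import quote

# Object paths to delete; names may contain characters that need escaping.
paths = ['/photos/summer 2015/beach.jpg', '/photos/notes.txt']

# Each line of the DELETE body is a URL-encoded /container/object path.
body = '\n'.join(quote(p) for p in paths)
print(body)
```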
.. include:: ../tables/swift-proxy-server-filter-bulk.rst
Drive audit
~~~~~~~~~~~
The ``swift-drive-audit`` configuration items reference a script that can be
run by using ``cron`` to watch for bad drives. If errors are detected, it
unmounts the bad drive so that OpenStack Object Storage can work around it. It
takes the following options:
.. include:: ../tables/swift-drive-audit-drive-audit.rst
Form post
~~~~~~~~~
Middleware that enables you to upload objects to a cluster by using an HTML
form POST.
The format of the form is:
.. code-block:: html
<form action="<swift-url>" method="POST"
enctype="multipart/form-data">
<input type="hidden" name="redirect" value="<redirect-url>" />
<input type="hidden" name="max_file_size" value="<bytes>" />
<input type="hidden" name="max_file_count" value="<count>" />
<input type="hidden" name="expires" value="<unix-timestamp>" />
<input type="hidden" name="signature" value="<hmac>" />
<input type="hidden" name="x_delete_at" value="<unix-timestamp>"/>
<input type="hidden" name="x_delete_after" value="<seconds>"/>
<input type="file" name="file1" /><br />
<input type="submit" />
</form>
In the form:
``action="<swift-url>"``
The URL to the Object Storage destination, such as
https://swift-cluster.example.com/v1/AUTH_account/container/object_prefix.
The name of each uploaded file is appended to the specified ``swift-url``.
So, you can upload directly to the root of container with a URL like
https://swift-cluster.example.com/v1/AUTH_account/container/.
Optionally, you can include an object prefix to separate different users'
uploads, such as
https://swift-cluster.example.com/v1/AUTH_account/container/object_prefix.
``method="POST"``
The form ``method`` must be POST.
``enctype="multipart/form-data"``
The ``enctype`` must be set to ``multipart/form-data``.
``name="redirect"``
The URL to which to redirect the browser after the upload completes.
The URL has status and message query parameters added to it that
indicate the HTTP status code for the upload and, optionally,
additional error information. The 2\ *nn* status code indicates
success. If an error occurs, the URL might include error information,
such as ``"max_file_size exceeded"``.
``name="max_file_size"``
Required. The maximum number of bytes that can be uploaded in a single file
upload.
``name="max_file_count"``
Required. The maximum number of files that can be uploaded with the form.
``name="expires"``
The expiration date and time for the form in `UNIX Epoch time stamp format
<https://en.wikipedia.org/wiki/Unix_time>`__. After this date and time, the
form is no longer valid.
For example, ``1440619048`` is equivalent to ``Wed, 26 Aug 2015
19:57:28 GMT``.
``name="signature"``
The HMAC-SHA1 signature of the form. This sample Python code shows
how to compute the signature:
.. code-block:: python
import hmac
from hashlib import sha1
from time import time
path = '/v1/account/container/object_prefix'
redirect = 'https://myserver.com/some-page'
max_file_size = 104857600
max_file_count = 10
expires = int(time() + 600)
key = 'mykey'
hmac_body = '%s\n%s\n%s\n%s\n%s' % (path, redirect,
max_file_size, max_file_count, expires)
signature = hmac.new(key.encode(), hmac_body.encode(),
                     sha1).hexdigest()  # hmac requires bytes on Python 3
The key is the value of the ``X-Account-Meta-Temp-URL-Key`` header on the
account.
Use the full path from the ``/v1/`` value and onward.
During testing, you can use the ``swift-form-signature`` command-line tool
to compute the ``expires`` and ``signature`` values.
``name="x_delete_at"``
The date and time in `UNIX Epoch time stamp format
<https://en.wikipedia.org/wiki/Unix_time>`__ when the object will be
removed.
For example, ``1440619048`` is equivalent to ``Wed, 26 Aug 2015
19:57:28 GMT``.
This attribute enables you to specify the ``X-Delete-At`` header value in
the form POST.
``name="x_delete_after"``
The number of seconds after which the object is removed. Internally, the
Object Storage system stores this value in the ``X-Delete-At`` metadata
item. This attribute enables you to specify the ``X-Delete-After`` header
value in the form POST.
``type="file" name="filexx"``
Optional. One or more files to upload. These inputs must appear after the
other attributes to be processed correctly. Attributes that follow the
``file`` attribute are ignored and are not sent with the subrequest:
parsing them would require the server to read the whole file into memory,
and the server does not have enough memory to service such requests.
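The ``expires`` and ``x_delete_at`` values above are plain UNIX epoch timestamps, so the example conversion can be verified with a few lines of Python:

```python
from datetime import datetime, timezone

expires = 1440619048
dt = datetime.fromtimestamp(expires, tz=timezone.utc)
print(dt.strftime('%a, %d %b %Y %H:%M:%S GMT'))
# Wed, 26 Aug 2015 19:57:28 GMT
```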
.. include:: ../tables/swift-proxy-server-filter-formpost.rst
Static web sites
~~~~~~~~~~~~~~~~
When configured, this middleware serves container data as a static web site
with index file and error file resolution and optional file listings. This mode
is normally only active for anonymous requests.
.. include:: ../tables/swift-proxy-server-filter-staticweb.rst
============================================
Object Storage general service configuration
============================================
Most Object Storage services fall into two categories, Object Storage's WSGI
servers and background daemons.
Object Storage uses paste.deploy to manage server configurations. Read more at
http://pythonpaste.org/deploy/.
Default configuration options are set in the ``[DEFAULT]`` section, and any
options specified there can be overridden in any of the other sections by using
the syntax ``set option_name = value``.
Configuration for servers and daemons can be expressed together in the same
file for each type of server, or separately. If a required section for the
service trying to start is missing, there will be an error. Sections not used
by the service are ignored.
Consider the example of an Object Storage node. By convention configuration for
the ``object-server``, ``object-updater``, ``object-replicator``, and
``object-auditor`` exist in a single file ``/etc/swift/object-server.conf``:
.. code-block:: ini
[DEFAULT]
[pipeline:main]
pipeline = object-server
[app:object-server]
use = egg:swift#object
[object-replicator]
reclaim_age = 259200
[object-updater]
[object-auditor]
Object Storage services expect a configuration path as the first argument:
.. code-block:: console
$ swift-object-auditor
Usage: swift-object-auditor CONFIG [options]
Error: missing config path argument
If you omit the object-auditor section, this file cannot be used as the
configuration path when starting the ``swift-object-auditor`` daemon:
.. code-block:: console
$ swift-object-auditor /etc/swift/object-server.conf
Unable to find object-auditor config section in /etc/swift/object-server.conf
If the configuration path is a directory instead of a file, all of the files in
the directory with the file extension ``.conf`` will be combined to generate
the configuration object which is delivered to the Object Storage service. This
is referred to generally as directory-based configuration.
Directory-based configuration leverages ``ConfigParser``'s native multi-file
support. Files ending in ``.conf`` in the given directory are parsed in
lexicographical order. File names starting with ``.`` are ignored. A mixture of
file and directory configuration paths is not supported. If the configuration
path is a file, only that file will be parsed.
The Object Storage service management tool ``swift-init`` has adopted the
convention of looking for ``/etc/swift/{type}-server.conf.d/`` if the
``/etc/swift/{type}-server.conf`` file does not exist.
When using directory-based configuration, if the same option under the same
section appears more than once in different files, the last value parsed
overrides previous occurrences. You can ensure proper override
precedence by prefixing the files in the configuration directory with numerical
values, as in the following example file layout:
.. code-block:: none
/etc/swift/
default.base
object-server.conf.d/
000_default.conf -> ../default.base
001_default-override.conf
010_server.conf
020_replicator.conf
030_updater.conf
040_auditor.conf
You can inspect the resulting combined configuration object using the
``swift-config`` command-line tool.
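The last-value-wins behavior can be demonstrated with plain ``configparser``, which is what directory-based configuration builds on. This is a minimal sketch that simulates two ``.conf`` files parsed in lexicographical order (the option values are hypothetical):

```python
from configparser import ConfigParser

parser = ConfigParser()
# Contents of a hypothetical 020_replicator.conf, parsed first.
parser.read_string("""
[object-replicator]
reclaim_age = 259200
""")
# Contents of a hypothetical 021_replicator-override.conf, parsed later.
parser.read_string("""
[object-replicator]
reclaim_age = 604800
""")
# The value from the later file overrides the earlier one.
print(parser.get('object-replicator', 'reclaim_age'))
```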
All the services of an Object Storage deployment share a common configuration in
the ``[swift-hash]`` section of the ``/etc/swift/swift.conf`` file. The
``swift_hash_path_suffix`` and ``swift_hash_path_prefix`` values must be
identical on all the nodes.
.. include:: ../tables/swift-swift-swift-hash.rst
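The suffix and prefix act as per-cluster salts for the hash that determines object placement, which is why they must match on every node. The following is a simplified sketch of that idea; the real implementation is ``swift.common.utils.hash_path``, and the exact construction may differ. The prefix and suffix values here are hypothetical placeholders:

```python
from hashlib import md5

# Hypothetical per-cluster secrets from /etc/swift/swift.conf.
hash_path_prefix = b'changeme-prefix'
hash_path_suffix = b'changeme-suffix'

def hash_path(account, container=None, obj=None):
    """Sketch of salted path hashing: the shared prefix and suffix
    salt the path so placement cannot be predicted by clients, and
    every node computes the same location for the same object."""
    parts = [p for p in (account, container, obj) if p is not None]
    path = ('/' + '/'.join(parts)).encode('utf-8')
    return md5(hash_path_prefix + path + hash_path_suffix).hexdigest()

# Identical prefix/suffix on every node yields identical placement.
print(hash_path('AUTH_test', 'c', 'o'))
```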
===========================
Endpoint listing middleware
===========================
The endpoint listing middleware enables third-party services that use data
locality information to integrate with OpenStack Object Storage. This
middleware reduces network overhead and is designed for third-party services
that run inside the firewall. Deploy this middleware on a proxy server because
usage of this middleware is not authenticated.
Format requests for endpoints, as follows:
.. code-block:: none
/endpoints/{account}/{container}/{object}
/endpoints/{account}/{container}
/endpoints/{account}
Use the ``list_endpoints_path`` configuration option in the
``proxy_server.conf`` file to customize the ``/endpoints/`` path.
Responses are JSON-encoded lists of endpoints, as follows:
.. code-block:: none
http://{server}:{port}/{dev}/{part}/{acc}/{cont}/{obj}
http://{server}:{port}/{dev}/{part}/{acc}/{cont}
http://{server}:{port}/{dev}/{part}/{acc}
An example response is:
.. code-block:: none
http://10.1.1.1:6000/sda1/2/a/c2/o1
http://10.1.1.1:6000/sda1/2/a/c2
http://10.1.1.1:6000/sda1/2/a
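A third-party service consuming this response might split out the location fields as follows. This is a minimal sketch using one endpoint from the example above:

```python
from urllib.parse import urlsplit

# One endpoint from the example response above.
endpoint = 'http://10.1.1.1:6000/sda1/2/a/c2/o1'

parts = urlsplit(endpoint)
# Path layout is /{dev}/{part}/{acc}[/{cont}[/{obj}]].
device, partition, account = parts.path.lstrip('/').split('/', 3)[:3]
print(parts.hostname, parts.port, device, partition, account)
# 10.1.1.1 6000 sda1 2 a
```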
New, updated, and deprecated options in Mitaka for OpenStack Object Storage
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
..
Warning: Do not edit this file. It is automatically generated and your
changes will be overwritten. The tool to do so lives in the
openstack-doc-tools repository.
There are no new, updated, or deprecated options
in Mitaka for OpenStack Object Storage.