Fixed inconsistent naming conventions

Fixed naming conventions of Keystone, Swift and proxy servers in the docs.

Change-Id: I294afd8d7bffa8c1fc299f5812effacb9ad08910

parent a0827114ef
commit 6f230c7ea0
@@ -17,8 +17,8 @@ or 6.
 Deployment Options
 ------------------
 
-The swift services run completely autonomously, which provides for a lot of
-flexibility when architecting the hardware deployment for swift. The 4 main
+The Swift services run completely autonomously, which provides for a lot of
+flexibility when architecting the hardware deployment for Swift. The 4 main
 services are:
 
 #. Proxy Services
@@ -265,7 +265,7 @@ lexicographical order. Filenames starting with '.' are ignored. A mixture of
 file and directory configuration paths is not supported - if the configuration
 path is a file only that file will be parsed.
 
-The swift service management tool ``swift-init`` has adopted the convention of
+The Swift service management tool ``swift-init`` has adopted the convention of
 looking for ``/etc/swift/{type}-server.conf.d/`` if the file
 ``/etc/swift/{type}-server.conf`` file does not exist.
 
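The lookup convention this hunk describes can be sketched in a few lines of Python (a simplified illustration of the stated fallback rule only; the function name and the handling of missing paths are this sketch's own, not `swift-init`'s actual code):

```python
import os

def find_config(server_type, base="/etc/swift"):
    """Prefer the flat {type}-server.conf file; fall back to the
    {type}-server.conf.d/ directory only when the file is absent."""
    conf_file = os.path.join(base, "%s-server.conf" % server_type)
    if os.path.exists(conf_file):
        return conf_file
    conf_dir = conf_file + ".d"
    if os.path.isdir(conf_dir):
        return conf_dir
    return None
```
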
@@ -1581,7 +1581,7 @@ We do not recommend running Swift on RAID, but if you are using
 RAID it is also important to make sure that the proper sunit and swidth
 settings get set so that XFS can make most efficient use of the RAID array.
 
-For a standard swift install, all data drives are mounted directly under
+For a standard Swift install, all data drives are mounted directly under
 ``/srv/node`` (as can be seen in the above example of mounting ``/dev/sda1`` as
 ``/srv/node/sda``). If you choose to mount the drives in another directory,
 be sure to set the `devices` config option in all of the server configs to
@@ -196,7 +196,7 @@ headers)
 All user resources in Swift (i.e. account, container, objects) can have
 user metadata associated with them. Middleware may also persist custom
 metadata to accounts and containers safely using System Metadata. Some
-core swift features which predate sysmeta have added exceptions for
+core Swift features which predate sysmeta have added exceptions for
 custom non-user metadata headers (e.g. :ref:`acls`,
 :ref:`large-objects`)
 
@@ -102,7 +102,7 @@ reseller_request to True. This can be used by other middlewares.
 
 TempAuth will now allow OPTIONS requests to go through without a token.
 
-The user starts a session by sending a ReST request to the auth system to
+The user starts a session by sending a REST request to the auth system to
 receive the auth token and a URL to the Swift system.
 
 -------------
@@ -143,7 +143,7 @@ having this in your ``/etc/keystone/default_catalog.templates`` ::
 
   catalog.RegionOne.object_store.adminURL = http://swiftproxy:8080/
  catalog.RegionOne.object_store.internalURL = http://swiftproxy:8080/v1/AUTH_$(tenant_id)s
 
-On your Swift Proxy server you will want to adjust your main pipeline
+On your Swift proxy server you will want to adjust your main pipeline
 and add auth_token and keystoneauth in your
 ``/etc/swift/proxy-server.conf`` like this ::
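The adjusted pipeline the hunk refers to might look roughly like the following sketch; the middleware ordering follows the convention of auth_token before keystoneauth, but the other pipeline members, host names, and all ``admin_*`` values are placeholders, not values from this commit:

```ini
[pipeline:main]
pipeline = catch_errors healthcheck cache authtoken keystoneauth proxy-server

[filter:keystoneauth]
use = egg:swift#keystoneauth
operator_roles = admin, swiftoperator

[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = keystonehost
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = swift
admin_password = password
```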
@@ -326,7 +326,7 @@ Extending Auth
 
 TempAuth is written as wsgi middleware, so implementing your own auth is as
 easy as writing new wsgi middleware, and plugging it in to the proxy server.
-The KeyStone project and the Swauth project are examples of additional auth
+The Keystone project and the Swauth project are examples of additional auth
 services.
 
 Also, see :doc:`development_auth`.
@@ -12,7 +12,7 @@ As expiring objects are added to the system, the object servers will record the
 
 Usually, just one instance of the ``swift-object-expirer`` daemon needs to run for a cluster. This isn't exactly automatic failover high availability, but if this daemon doesn't run for a few hours it should not be any real issue. The expired-but-not-yet-deleted objects will still ``404 Not Found`` if someone tries to ``GET`` or ``HEAD`` them and they'll just be deleted a bit later when the daemon is restarted.
 
-By default, the ``swift-object-expirer`` daemon will run with a concurrency of 1. Increase this value to get more concurrency. A concurrency of 1 may not be enough to delete expiring objects in a timely fashion for a particular swift cluster.
+By default, the ``swift-object-expirer`` daemon will run with a concurrency of 1. Increase this value to get more concurrency. A concurrency of 1 may not be enough to delete expiring objects in a timely fashion for a particular Swift cluster.
 
 It is possible to run multiple daemons to do different parts of the work if a single process with a concurrency of more than 1 is not enough (see the sample config file for details).
 
@@ -90,7 +90,7 @@ History
 Dynamic large object support has gone through various iterations before
 settling on this implementation.
 
-The primary factor driving the limitation of object size in swift is
+The primary factor driving the limitation of object size in Swift is
 maintaining balance among the partitions of the ring. To maintain an even
 dispersion of disk usage throughout the cluster the obvious storage pattern
 was to simply split larger objects into smaller segments, which could then be
@@ -121,7 +121,7 @@ The current "user manifest" design was chosen in order to provide a
 transparent download of large objects to the client and still provide the
 uploading client a clean API to support segmented uploads.
 
-To meet an many use cases as possible swift supports two types of large
+To meet an many use cases as possible Swift supports two types of large
 object manifests. Dynamic and static large object manifests both support
 the same idea of allowing the user to upload many segments to be later
 downloaded as a single file.
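For the static variety, a manifest is a JSON list describing already-uploaded segments, PUT with the ``multipart-manifest=put`` query parameter (a dynamic large object instead sets the ``X-Object-Manifest`` header). The container/segment names, ETags, and sizes below are made-up illustrations:

```python
import json

# Each entry describes one previously uploaded segment: its path,
# its ETag (MD5 of the segment data), and its size in bytes.
manifest = [
    {"path": "/container/seg_001",
     "etag": "d41d8cd98f00b204e9800998ecf8427e",
     "size_bytes": 1048576},
    {"path": "/container/seg_002",
     "etag": "d41d8cd98f00b204e9800998ecf8427e",
     "size_bytes": 1048576},
]

# This body would be PUT to the manifest object's path with
# ?multipart-manifest=put appended to the request URL.
body = json.dumps(manifest)
```
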
@@ -143,7 +143,7 @@ also improves concurrent upload speed. It has the disadvantage that the
 manifest is finalized once PUT. Any changes to it means it has to be replaced.
 
 Between these two methods the user has great flexibility in how (s)he chooses
-to upload and retrieve large objects to swift. Swift does not, however, stop
+to upload and retrieve large objects to Swift. Swift does not, however, stop
 the user from harming themselves. In both cases the segments are deletable by
 the user at any time. If a segment was deleted by mistake, a dynamic large
 object, having no way of knowing it was ever there, would happily ignore the
@@ -49,7 +49,7 @@ Containers and Policies
 Policies are implemented at the container level. There are many advantages to
 this approach, not the least of which is how easy it makes life on
 applications that want to take advantage of them. It also ensures that
-Storage Policies remain a core feature of swift independent of the auth
+Storage Policies remain a core feature of Swift independent of the auth
 implementation. Policies were not implemented at the account/auth layer
 because it would require changes to all auth systems in use by Swift
 deployers. Each container has a new special immutable metadata element called
@@ -18,7 +18,7 @@ account-server.conf to delay the actual deletion of data. At this time, there
 is no utility to undelete an account; one would have to update the account
 database replicas directly, setting the status column to an empty string and
 updating the put_timestamp to be greater than the delete_timestamp. (On the
-TODO list is writing a utility to perform this task, preferably through a ReST
+TODO list is writing a utility to perform this task, preferably through a REST
 call.)
 
 The account reaper runs on each account server and scans the server
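The manual undelete procedure described above can be sketched against a toy sqlite database; the real account DB schema is richer, and the table name, column set, and timestamp format here are simplified stand-ins, not the actual schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE account_stat
                (status TEXT, put_timestamp TEXT, delete_timestamp TEXT)""")
# A reaped account: status marked, delete_timestamp after put_timestamp.
conn.execute("""INSERT INTO account_stat
                VALUES ('DELETED', '0000000001.00000', '0000000002.00000')""")

# The undelete: clear status and bump put_timestamp past delete_timestamp.
conn.execute("""UPDATE account_stat
                SET status = '', put_timestamp = '0000000003.00000'""")
row = conn.execute("""SELECT status, put_timestamp > delete_timestamp
                      FROM account_stat""").fetchone()
```
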
@@ -53,7 +53,7 @@ History
 At first, a simple approach of deleting an account through completely external
 calls was considered as it required no changes to the system. All data would
 simply be deleted in the same way the actual user would, through the public
-ReST API. However, the downside was that it would use proxy resources and log
+REST API. However, the downside was that it would use proxy resources and log
 everything when it didn't really need to. Also, it would likely need a
 dedicated server or two, just for issuing the delete requests.
 
@@ -2,7 +2,7 @@
 Replication
 ===========
 
-Because each replica in swift functions independently, and clients generally
+Because each replica in Swift functions independently, and clients generally
 require only a simple majority of nodes responding to consider an operation
 successful, transient failures like network partitions can quickly cause
 replicas to diverge. These differences are eventually reconciled by
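The "simple majority" mentioned in this hunk is just over half the replicas; the standard majority calculation can be sketched as follows (an illustration of the concept, not Swift's exact quorum code):

```python
def quorum(replica_count):
    # More than half of the replicas must respond for the
    # operation to be considered successful.
    return replica_count // 2 + 1
```

So with the common 3-replica setup, 2 successful responses suffice.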
@@ -4,7 +4,7 @@
 Rate Limiting
 =============
 
-Rate limiting in swift is implemented as a pluggable middleware. Rate
+Rate limiting in Swift is implemented as a pluggable middleware. Rate
 limiting is performed on requests that result in database writes to the
 account and container sqlite dbs. It uses memcached and is dependent on
 the proxy servers having highly synchronized time. The rate limits are
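The approach described here, shared counters keyed by account or container with clock agreement across proxies, can be sketched with a dict standing in for memcached; the class, its fixed-window scheme, and all names are illustrative, not Swift's actual ratelimit middleware:

```python
import time

class SimpleRateLimiter:
    """Toy fixed-window limiter: at most max_rate requests per second
    per key (e.g. an account or container name). A shared dict stands
    in for memcached, which is why all proxies consulting the real
    store would need closely synchronized clocks."""

    def __init__(self, max_rate, store=None):
        self.max_rate = max_rate
        self.store = store if store is not None else {}

    def allow(self, key, now=None):
        now = time.time() if now is None else now
        bucket = (key, int(now))        # one-second window
        count = self.store.get(bucket, 0)
        if count >= self.max_rate:
            return False                # caller would delay or reject
        self.store[bucket] = count + 1
        return True
```
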