Fix missing *-replicator conf sections in deployment guide

The doc for these sections was missing because of an rst error: the
source is present in the rst file but didn't make it into the html output.

Add doc for per_diff and max_diffs in account and container doc sections.

Also, fix a bunch of other sphinx build errors and most of the warnings.

Change-Id: If9ed2619b2f92c6c65a94f41d8819db8726d3893
Alistair Coles 2015-10-19 13:55:02 +01:00
parent 4a13dcc4a8
commit 1a2b54fc0a
15 changed files with 203 additions and 128 deletions


@ -177,7 +177,7 @@ Logging level. The default is INFO.
.IP \fBlog_address\fR
Logging address. The default is /dev/log.
.IP \fBper_diff\fR
Maximum number of database rows that will be sync'd in a single HTTP replication request. The default is 1000.
.IP \fBmax_diffs\fR
This caps how long the replicator will spend trying to sync a given database per pass so the other databases don't get starved. The default is 100.
.IP \fBconcurrency\fR
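The two options documented above interact: per_diff bounds rows per replication request, max_diffs bounds requests per database per pass. The resulting cap can be illustrated with a quick sketch (a hypothetical helper, not code from this commit):

```python
# Illustrative only: upper bound on rows synced per database per pass,
# given the documented defaults for per_diff and max_diffs.
def max_rows_per_pass(per_diff=1000, max_diffs=100):
    """At most max_diffs HTTP replication requests are attempted per
    database per pass, each carrying up to per_diff rows."""
    return per_diff * max_diffs

print(max_rows_per_pass())  # 100000 with the defaults
```

With the defaults, a badly out-of-sync database can therefore fall behind by more than 100,000 rows before rsync-based replication takes over.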


@ -183,7 +183,7 @@ Logging level. The default is INFO.
.IP \fBlog_address\fR
Logging address. The default is /dev/log.
.IP \fBper_diff\fR
Maximum number of database rows that will be sync'd in a single HTTP replication request. The default is 1000.
.IP \fBmax_diffs\fR
This caps how long the replicator will spend trying to sync a given database per pass so the other databases don't get starved. The default is 100.
.IP \fBconcurrency\fR


@ -1,5 +1,3 @@
====================
Form POST middleware
====================


@ -293,10 +293,12 @@ a manifest object but a normal object with content same as what you would
get on a **GET** request to original manifest object.

To duplicate a manifest object:

* Use the **GET** operation to read the value of ``X-Object-Manifest`` and
  use this value in the ``X-Object-Manifest`` request header in a **PUT**
  operation.
* Alternatively, you can include ``?multipart-manifest=get`` query
  string in the **COPY** request.

This creates a new manifest object that shares the same set of segment
objects as the original manifest object.


@ -559,53 +559,60 @@ replication_failure_ratio 1.0 If the value of failures /
[object-replicator]

================== ======================== ================================
Option             Default                  Description
------------------ ------------------------ --------------------------------
log_name           object-replicator        Label used when logging
log_facility       LOG_LOCAL0               Syslog log facility
log_level          INFO                     Logging level
daemonize          yes                      Whether or not to run replication
                                            as a daemon
interval           30                       Time in seconds to wait between
                                            replication passes
concurrency        1                        Number of replication workers to
                                            spawn
timeout            5                        Timeout value sent to rsync
                                            --timeout and --contimeout
                                            options
stats_interval     3600                     Interval in seconds between
                                            logging replication statistics
reclaim_age        604800                   Time elapsed in seconds before an
                                            object can be reclaimed
handoffs_first     false                    If set to True, partitions that
                                            are not supposed to be on the
                                            node will be replicated first.
                                            The default setting should not be
                                            changed, except for extreme
                                            situations.
handoff_delete     auto                     By default handoff partitions
                                            will be removed when it has
                                            successfully replicated to all
                                            the canonical nodes. If set to an
                                            integer n, it will remove the
                                            partition if it is successfully
                                            replicated to n nodes. The
                                            default setting should not be
                                            changed, except for extreme
                                            situations.
node_timeout       DEFAULT or 10            Request timeout to external
                                            services. This uses what's set
                                            here, or what's set in the
                                            DEFAULT section, or 10 (though
                                            other sections use 3 as the final
                                            default).
rsync_module       {replication_ip}::object Format of the rsync module where
                                            the replicator will send data.
                                            The configuration value can
                                            include some variables that will
                                            be extracted from the ring.
                                            Variables must follow the format
                                            {NAME} where NAME is one of: ip,
                                            port, replication_ip,
                                            replication_port, region, zone,
                                            device, meta. See
                                            etc/rsyncd.conf-sample for some
                                            examples.
================== ======================== ================================
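The rsync_module substitution described in the table can be sketched with Python's str.format; the device dict below is hypothetical, and the real expansion lives in Swift's replicator code:

```python
# Sketch of expanding an rsync_module template like "{replication_ip}::object"
# from a ring device dict. Only the documented variable names are allowed;
# missing keys fall back to an empty string in this illustration.
def expand_rsync_module(template, device):
    allowed = ('ip', 'port', 'replication_ip', 'replication_port',
               'region', 'zone', 'device', 'meta')
    return template.format(**{k: device.get(k, '') for k in allowed})

device = {'replication_ip': '10.0.0.2', 'replication_port': 6200,
          'device': 'sdb1'}
print(expand_rsync_module('{replication_ip}::object', device))
# 10.0.0.2::object
```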
[object-updater]
@ -718,35 +725,53 @@ allow_versions false Enable/Disable object versioning feature
[container-replicator]

================== =========================== =============================
Option             Default                     Description
------------------ --------------------------- -----------------------------
log_name           container-replicator        Label used when logging
log_facility       LOG_LOCAL0                  Syslog log facility
log_level          INFO                        Logging level
per_diff           1000                        Maximum number of database
                                               rows that will be sync'd in a
                                               single HTTP replication
                                               request. Databases with less
                                               than or equal to this number
                                               of differing rows will always
                                               be sync'd using an HTTP
                                               replication request rather
                                               than using rsync.
max_diffs          100                         Maximum number of HTTP
                                               replication requests attempted
                                               on each replication pass for
                                               any one container. This caps
                                               how long the replicator will
                                               spend trying to sync a given
                                               database per pass so the other
                                               databases don't get starved.
concurrency        8                           Number of replication workers
                                               to spawn
interval           30                          Time in seconds to wait
                                               between replication passes
node_timeout       10                          Request timeout to external
                                               services
conn_timeout       0.5                         Connection timeout to external
                                               services
reclaim_age        604800                      Time elapsed in seconds before
                                               a container can be reclaimed
rsync_module       {replication_ip}::container Format of the rsync module
                                               where the replicator will send
                                               data. The configuration value
                                               can include some variables
                                               that will be extracted from
                                               the ring. Variables must
                                               follow the format {NAME} where
                                               NAME is one of: ip, port,
                                               replication_ip,
                                               replication_port, region,
                                               zone, device, meta. See
                                               etc/rsyncd.conf-sample for
                                               some examples.
================== =========================== =============================
[container-updater]
@ -859,33 +884,51 @@ set log_level INFO Logging level
[account-replicator]

================== ========================= ===============================
Option             Default                   Description
------------------ ------------------------- -------------------------------
log_name           account-replicator        Label used when logging
log_facility       LOG_LOCAL0                Syslog log facility
log_level          INFO                      Logging level
per_diff           1000                      Maximum number of database rows
                                             that will be sync'd in a single
                                             HTTP replication request.
                                             Databases with less than or
                                             equal to this number of
                                             differing rows will always be
                                             sync'd using an HTTP replication
                                             request rather than using rsync.
max_diffs          100                       Maximum number of HTTP
                                             replication requests attempted
                                             on each replication pass for any
                                             one container. This caps how
                                             long the replicator will spend
                                             trying to sync a given database
                                             per pass so the other databases
                                             don't get starved.
concurrency        8                         Number of replication workers
                                             to spawn
interval           30                        Time in seconds to wait between
                                             replication passes
node_timeout       10                        Request timeout to external
                                             services
conn_timeout       0.5                       Connection timeout to external
                                             services
reclaim_age        604800                    Time elapsed in seconds before
                                             an account can be reclaimed
rsync_module       {replication_ip}::account Format of the rsync module where
                                             the replicator will send data.
                                             The configuration value can
                                             include some variables that will
                                             be extracted from the ring.
                                             Variables must follow the format
                                             {NAME} where NAME is one of: ip,
                                             port, replication_ip,
                                             replication_port, region, zone,
                                             device, meta. See
                                             etc/rsyncd.conf-sample for some
                                             examples.
================== ========================= ===============================
[account-auditor]


@ -51,16 +51,16 @@ To execute the unit tests:
.. note::
   As of tox version 2.0.0, most environment variables are not automatically
   passed to the test environment. Swift's `tox.ini` overrides this default
   behavior so that variable names matching ``SWIFT_*`` and ``*_proxy`` will be
   passed, but you may need to run `tox --recreate` for this to take effect
   after upgrading from tox<2.0.0.

   Conversely, if you do not want those environment variables to be passed to
   the test environment then you will need to unset them before calling tox.

   Also, if you ever encounter DistributionNotFound, try to use `tox --recreate`
   or remove the `.tox` directory to force tox to recreate the dependency list.

The functional tests may be executed against a :doc:`development_saio` or
other running Swift cluster using the command:
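The pass-through rule described in the note — variable names matching ``SWIFT_*`` or ``*_proxy`` are forwarded — can be illustrated with fnmatch. This is only a sketch of the matching behavior; tox itself implements it via its passenv setting, not this code:

```python
# Which environment variable names would be forwarded to the test
# environment under the SWIFT_* / *_proxy patterns described above.
from fnmatch import fnmatchcase

def passed_through(name):
    return fnmatchcase(name, 'SWIFT_*') or fnmatchcase(name, '*_proxy')

for var in ('SWIFT_TEST_CONFIG_FILE', 'http_proxy', 'PATH'):
    print(var, passed_through(var))
```

`SWIFT_TEST_CONFIG_FILE` and `http_proxy` match; `PATH` does not, so it must be set inside the tox environment if the tests need it.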


@ -254,9 +254,11 @@ This configuration works as follows:
``admin`` or ``swiftoperator`` role(s). When validated, the service token
gives the ``service`` role.

* Swift interprets the above configuration as follows:

  * Did the user token provide one of the roles listed in operator_roles?
  * Did the service token have the ``service`` role as described by the
    ``SERVICE_service_roles`` options.

* If both conditions are met, the request is granted. Otherwise, Swift
  rejects the request.
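The two checks above can be sketched as a single predicate (a hypothetical helper for illustration, not Swift's actual keystoneauth implementation):

```python
# Grant only when the user token carries an operator role AND the service
# token carries a required service role, as described above.
def request_granted(user_roles, service_roles,
                    operator_roles=('admin', 'swiftoperator'),
                    required_service_roles=('service',)):
    user_ok = any(r in operator_roles for r in user_roles)
    service_ok = any(r in required_service_roles for r in service_roles)
    return user_ok and service_ok

print(request_granted(['swiftoperator'], ['service']))  # True
print(request_granted(['swiftoperator'], []))           # False
```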


@ -171,6 +171,7 @@ The sequence of events and actions are as follows:
a copy of the <user-token>. In the X-Service-Token header, place your
Service's token. If you use python-swiftclient you can achieve this
by:

* Putting the URL in the ``preauthurl`` parameter
* Putting the <user-token> in the ``preauthtoken`` parameter
* Adding the X-Service-Token to the ``headers`` parameter
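The three bullet points can be collected into one kwargs dict for the client call; the helper, URLs, and tokens below are placeholders for illustration, not python-swiftclient API:

```python
# Hypothetical helper assembling the pieces described above: the storage
# URL, the end-user token, and the service token header.
def service_request_kwargs(user_token, service_token, storage_url):
    return {
        'preauthurl': storage_url,        # the URL goes here
        'preauthtoken': user_token,       # the <user-token> goes here
        'headers': {'X-Service-Token': service_token},
    }

kw = service_request_kwargs('utok', 'stok', 'http://proxy/v1/SERVICE_acct')
print(sorted(kw))  # ['headers', 'preauthtoken', 'preauthurl']
```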
@ -251,7 +252,7 @@ However, if one Service is compromised, that Service can access
data created by another Service. To prevent this, multiple Service Prefixes may
be used. This also requires that the operator configure multiple service
roles. For example, in a system that has Glance and Cinder, the following
Swift configuration could be used::

  [keystoneauth]
  reseller_prefix = AUTH_, IMAGE_, BLOCK_
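A sketch of how a comma-separated reseller_prefix value like the one above might be split into a prefix list (illustrative only; Swift's actual option parsing may differ):

```python
# Split "AUTH_, IMAGE_, BLOCK_" into individual prefixes, ensuring each
# non-empty prefix ends with an underscore.
def parse_reseller_prefixes(value):
    prefixes = []
    for item in value.split(','):
        item = item.strip()
        if item and not item.endswith('_'):
            item += '_'
        prefixes.append(item)
    return prefixes

print(parse_reseller_prefixes('AUTH_, IMAGE_, BLOCK_'))
# ['AUTH_', 'IMAGE_', 'BLOCK_']
```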


@ -90,8 +90,19 @@ use = egg:swift#recon
# log_level = INFO
# log_address = /dev/log
#
# Maximum number of database rows that will be sync'd in a single HTTP
# replication request. Databases with less than or equal to this number of
# differing rows will always be sync'd using an HTTP replication request rather
# than using rsync.
# per_diff = 1000
#
# Maximum number of HTTP replication requests attempted on each replication
# pass for any one container. This caps how long the replicator will spend
# trying to sync a given database per pass so the other databases don't get
# starved.
# max_diffs = 100
#
# Number of replication workers to spawn.
# concurrency = 8
#
# Time in seconds to wait between replication passes
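Uncommented, the defaults above parse as ordinary INI values. A minimal sketch using Python's configparser, with the config text inlined for illustration:

```python
# Read per_diff / max_diffs as they would appear uncommented in a
# container-server.conf [container-replicator] section.
import configparser

SAMPLE = """
[container-replicator]
per_diff = 1000
max_diffs = 100
concurrency = 8
"""

cp = configparser.ConfigParser()
cp.read_string(SAMPLE)
section = cp['container-replicator']
print(section.getint('per_diff'), section.getint('max_diffs'))  # 1000 100
```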


@ -99,8 +99,19 @@ use = egg:swift#recon
# log_level = INFO
# log_address = /dev/log
#
# Maximum number of database rows that will be sync'd in a single HTTP
# replication request. Databases with less than or equal to this number of
# differing rows will always be sync'd using an HTTP replication request rather
# than using rsync.
# per_diff = 1000
#
# Maximum number of HTTP replication requests attempted on each replication
# pass for any one container. This caps how long the replicator will spend
# trying to sync a given database per pass so the other databases don't get
# starved.
# max_diffs = 100
#
# Number of replication workers to spawn.
# concurrency = 8
#
# Time in seconds to wait between replication passes


@ -71,16 +71,16 @@ class TempAuth(object):
The reseller prefix specifies which parts of the account namespace this
middleware is responsible for managing authentication and authorization.
By default, the prefix is 'AUTH' so accounts and tokens are prefixed
by 'AUTH\_'. When a request's token and/or path start with 'AUTH\_', this
middleware knows it is responsible.

We allow the reseller prefix to be a list. In tempauth, the first item
in the list is used as the prefix for tokens and user groups. The
other prefixes provide alternate accounts that users can access. For
example if the reseller prefix list is 'AUTH, OTHER', a user with
admin access to 'AUTH_account' also has admin access to
'OTHER_account'.

Required Group:
@ -98,7 +98,7 @@ class TempAuth(object):
is not processed.

The X-Service-Token is useful when combined with multiple reseller prefix
items. In the following configuration, accounts prefixed 'SERVICE\_'
are only accessible if X-Auth-Token is from the end-user and
X-Service-Token is from the ``glance`` user::


@ -1739,7 +1739,7 @@ def expand_ipv6(address):
def whataremyips(bind_ip=None):
"""
Get "our" IP addresses ("us" being the set of services configured by
one `*.conf` file). If our REST listens on a specific address, return it.
Otherwise, if listen on '0.0.0.0' or '::' return all addresses, including
the loopback.
@ -3078,15 +3078,15 @@ class ThreadPool(object):
def run_in_thread(self, func, *args, **kwargs):
"""
Runs ``func(*args, **kwargs)`` in a thread. Blocks the current greenlet
until results are available.

Exceptions thrown will be reraised in the calling thread.

If the threadpool was initialized with nthreads=0, it invokes
``func(*args, **kwargs)`` directly, followed by eventlet.sleep() to
ensure the eventlet hub has a chance to execute. It is more likely the
hub will be invoked when queuing operations to an external thread.

:returns: result of calling func
:raises: whatever func raises
@ -3126,7 +3126,7 @@ class ThreadPool(object):
def force_run_in_thread(self, func, *args, **kwargs):
"""
Runs ``func(*args, **kwargs)`` in a thread. Blocks the current greenlet
until results are available.

Exceptions thrown will be reraised in the calling thread.
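The contract these docstrings describe — run func in a worker thread, block for the result, re-raise worker exceptions in the caller — can be sketched with the stdlib. This is an analogue built on concurrent.futures, not Swift's eventlet-based ThreadPool:

```python
# Run func(*args, **kwargs) in a worker thread and block until the result
# is available; exceptions raised in the worker are re-raised here.
from concurrent.futures import ThreadPoolExecutor

def run_in_thread(pool, func, *args, **kwargs):
    future = pool.submit(func, *args, **kwargs)
    return future.result()  # blocks; re-raises any worker exception

with ThreadPoolExecutor(max_workers=1) as pool:
    print(run_in_thread(pool, pow, 2, 10))  # 1024
```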


@ -597,6 +597,8 @@ class PortPidState(object):
def port_index_pairs(self):
"""
Returns current (port, server index) pairs.

:returns: A set of (port, server_idx) tuples for currently-tracked
ports, sockets, and PIDs.
"""
@ -711,6 +713,8 @@ class ServersPerPortStrategy(object):
def loop_timeout(self):
"""
Return timeout before checking for reloaded rings.

:returns: The time to wait for a child to exit before checking for
reloaded rings (new ports).
"""


@ -447,7 +447,7 @@ class BaseDiskFileManager(object):
Parse an on disk file name.

:param filename: the data file name including extension
:returns: a dict, with keys for timestamp, and ext:

* timestamp is a :class:`~swift.common.utils.Timestamp`
* ext is a string, the file extension including the leading dot or
@ -895,8 +895,10 @@ class BaseDiskFileManager(object):
be yielded.

timestamps is a dict which may contain items mapping:

ts_data -> timestamp of data or tombstone file,
ts_meta -> timestamp of meta file, if one exists

where timestamps are instances of
:class:`~swift.common.utils.Timestamp`
"""
@ -1961,7 +1963,7 @@ class DiskFileManager(BaseDiskFileManager):
Returns the timestamp extracted .data file name.

:param filename: the data file name including extension
:returns: a dict, with keys for timestamp, and ext:

* timestamp is a :class:`~swift.common.utils.Timestamp`
* ext is a string, the file extension including the leading dot or
@ -2241,12 +2243,12 @@ class ECDiskFileManager(BaseDiskFileManager):
be stripped off to retrieve the timestamp.

:param filename: the data file name including extension
:returns: a dict, with keys for timestamp, frag_index, and ext:

* timestamp is a :class:`~swift.common.utils.Timestamp`
* frag_index is an int or None
* ext is a string, the file extension including the leading dot or
  the empty string if the filename has no extension.

:raises DiskFileError: if any part of the filename is not able to be
validated.
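The parsing contract documented here can be sketched for filenames of the assumed form `<timestamp>[#<frag_index>].<ext>`; this is a simplified stand-in for illustration, not Swift's parse_on_disk_filename:

```python
# Split an on-disk file name into the dict keys the docstring names:
# timestamp, optional frag_index (after '#'), and ext (with leading dot).
import os

def parse_on_disk_filename(filename):
    base, ext = os.path.splitext(filename)
    frag_index = None
    if '#' in base:
        base, frag = base.rsplit('#', 1)
        frag_index = int(frag)
    return {'timestamp': base, 'frag_index': frag_index, 'ext': ext}

print(parse_on_disk_filename('1440619048.12302#2.data'))
```

A tombstone such as `1440619048.12302.ts` yields `frag_index=None` and `ext='.ts'` under this sketch.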


@ -317,6 +317,7 @@ def get_account_info(env, app, swift_source=None):
This call bypasses auth. Success does not imply that the request has
authorization to the account.

:raises ValueError: when path can't be split(path, 2, 4)
"""
(version, account, _junk, _junk) = \