Currently, the rsync module to which the replicators send data is
static. This prevents administrators from adapting the rsync
configuration to their deployment or needs.
For example, the rsyncd configuration sample encourages setting a
connection limit for the account, container, and object modules. This
helps protect devices from excessive parallel connections, which would
hurt performance.
On a server with many devices, it is tempting to raise this limit
proportionally, but nothing guarantees that the connections will be
evenly distributed across devices. In the worst case, a single device
can receive all of them, severely degrading performance.
This commit adds a new option named 'rsync_module' to the *-replicator
sections of the *-server configuration files. The value can be
interpolated with device attributes such as ip, port, device, zone, ...
using the {NAME} format, e.g.:
rsync_module = {replication_ip}::object_{device}
With this configuration, an administrator can solve the connection
distribution problem by creating one rsync module per device in the
rsyncd configuration.
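As an illustrative sketch only (module names, paths, and limits are
hypothetical, not part of this change), the matching rsyncd.conf could
define one module per device, each with its own connection limit:
[object_sdb1]
max connections = 4
path = /srv/node
read only = false
lock file = /var/lock/object_sdb1.lock
[object_sdb2]
max connections = 4
path = /srv/node
read only = false
lock file = /var/lock/object_sdb2.lock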
The default values are backward compatible:
{replication_ip}::account
{replication_ip}::container
{replication_ip}::object
The vm_test_mode option is deprecated by this commit, but backward
compatibility is maintained: it only takes effect when rsync_module is
not set, in which case {replication_port} is appended to the default
value of rsync_module.
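For instance, with vm_test_mode enabled and rsync_module unset, the
object replicator effectively uses:
{replication_ip}::object{replication_port}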
Change-Id: Iad91df50dadbe96c921181797799b4444323ce2e
The actual server-side changes are simple. The tests are a different
matter. Many changes were needed to the object server tests to
handle the now-async calls to the container server. In an effort to
test this properly, some drive-by changes were made to improve tests.
I tested this patch by doing zero-byte object writes to one container
as fast as possible. Then I did it again while also saturating two of
the container replicas' disks. The results are linked below.
https://gist.github.com/notmyname/2bb85acfd8fbc7fc312a
DocImpact
Change-Id: I737bd0af3f124a4ce3e0862a155e97c1f0ac3e52
The deprecated directive `run_pause` should be replaced with the more
standard `interval`. `run_pause` is still supported for backward
compatibility. This patch updates the object replicator to use
`interval` while continuing to honor `run_pause`, and it also updates
the sample config and documentation.
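A minimal, illustrative snippet (the interval value shown is arbitrary):
[object-replicator]
# preferred directive
interval = 30
# still honored for backward compatibility if 'interval' is unset
# run_pause = 30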
Co-Authored-By: Joanna H. Huang <joanna.huitzu.huang@gmail.com>
Co-Authored-By: Kamil Rykowski <kamil.rykowski@intel.com>
Change-Id: Ie2a3414a96a94efb9273ff53a80b9d90c74fff09
Closes-Bug: #1364735
Used groff to reproduce the reported errors. I believe all of the
issues except `binary-without-manpage` are resolved, but I would like
confirmation from someone running Lintian.
Closes-Bug: #1210114
Change-Id: I533205c53efdb7cdf3645cc3e3dc487f9ee5640a
Change the default value of WSGI workers from 1 to auto. With the new
default, the proxy, container, account, and object WSGI servers spawn
one worker per CPU core.
This will not be ideal for some configurations, but it is much more
likely to produce a successful out-of-the-box deployment.
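For example, a *-server.conf can now read along these lines (the
comment wording here is illustrative):
[DEFAULT]
# Set to an integer to pin the number of workers, or leave as 'auto'
# to spawn one worker per CPU core.
workers = auto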
Inspect the number of CPU cores using Python's multiprocessing module
when available. multiprocessing was added in Python 2.6, but I have
accidentally compiled Python without it before. The cpu_count method
seems to be fairly system-agnostic, but it is documented to raise
NotImplementedError and can sometimes return 0.
Add a new utility method, 'config_auto_int_value', to pull an integer
out of the config for options that have a dynamic default.
* drive-by s/container/proxy/ in proxy-server.conf.5
* fix misplaced max_clients in *-server.conf-sample
* update doc/development_saio to force workers = 1
DocImpact
Change-Id: Ifa563d22952c902ab8cbe1d339ba385413c54e95
The new max_clients parameter gives full control over the maximum
number of client requests that will be handled concurrently by a given
worker for any of the proxy, account, container, or object servers.
Lowering the number of clients handled per worker and raising the
number of workers can lessen the impact that a CPU-intensive or
blocking request has on the other requests served by the same worker.
If the maximum number of clients is set to one, a given worker will not
perform another accept(2) call while it is processing a request, giving
other workers a chance to accept the connection instead.
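An illustrative snippet (the values shown are examples, not recommended
defaults):
[DEFAULT]
# more workers, each handling fewer concurrent requests
workers = 8
max_clients = 256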
DocImpact
Signed-off-by: Peter Portante <peter.portante@redhat.com>
Change-Id: Ic01430f7a6c5ff48d7aa349dc86a5f8ac463a420
Replaced GA code for cross-domain tracking.
This patchset addresses the reviewer's comments
and follows new guidance from the Foundation:
http://wiki.openstack.org/Documentation/Copyright
Adds current year to each Sphinx-built page.
Addresses only the docs copyright attribution, not code files.
Change-Id: Ib90fd1c92c8fafce2db821bc2b17cef1377cfc1e
A deployer may want to remove a Swift node from a load balancer for
maintenance or upgrade. This patch provides an optional mechanism to
support that. The healthcheck filter config can specify "disable_path",
which is
a filesystem path. If a file is present at that location, the
healthcheck middleware returns a 503 with a body of "DISABLED BY FILE".
So a deployer can configure "disable_path" and then touch that
filesystem path, wait for the proxy to be removed from the load balancer
pool, perform maintenance/upgrade, and then remove the "disable_path"
file.
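An illustrative configuration and workflow (the path is hypothetical):
[filter:healthcheck]
use = egg:swift#healthcheck
disable_path = /etc/swift/healthcheck_disabled
Touching /etc/swift/healthcheck_disabled makes /healthcheck return
503 "DISABLED BY FILE"; removing the file restores the normal 200
response.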
Also cleaned up the conf file man pages a bit.
Change-Id: I1759c78c74910a54c720f298d4d8e6fa57a4dab4
"Also adding the new swift-recon and swift-ring-builder manpages to this set"
"Adding new manpages for configuration files and also making changes according to previous review suggestions"
"removing the Author line from the manpages according to suggestions"
Change-Id: I256d2b2851b55a379b59011894f214bf55ba7da9