The new max_clients parameter gives operators full control over the
maximum number of client requests that will be handled by a given
worker for any of the proxy, account, container or object servers.
Lowering the number of clients handled per worker, and raising the
number of workers can lessen the impact that a CPU intensive, or
blocking, request can have on other requests served by the same
worker.
If the maximum number of clients is set to one, then a given worker
will not perform another accept(2) call while processing its current
request, giving other workers a chance to pick up the next connection.
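For example, in proxy-server.conf (a sketch; the worker and client
counts are illustrative and should be tuned to the host):
[DEFAULT]
workers = 8
max_clients = 1024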
DocImpact
Signed-off-by: Peter Portante <peter.portante@redhat.com>
Change-Id: Ic01430f7a6c5ff48d7aa349dc86a5f8ac463a420
Allows client-side technologies such as Flash, Java and Silverlight running
on web pages served elsewhere to interact with the Swift API.
Bug #1159960
Change-Id: I7d0533a0aaf189ac452abbd983469acb064fdca4
Extensive refactor here to consolidate what nodes are contacted for
any request. This consolidation means reads will contact the same set
of nodes that writes would, giving a very good chance that
read-your-writes behavior will succeed. This also means that writes
will not necessarily try all nodes in the cluster as they would
previously, which really wasn't desirable anyway. (If you really want
that, you can set request_node_count to a really big number, but
understand that also means reads will contact every node looking for
something that might not exist.)
* Added a request_node_count proxy-server conf value that allows
control of how many nodes are contacted for a normal request.
In proxy.controllers.base.Controller:
* Got rid of error_increment since it was only used in one spot by
another method and just served to confuse.
* Made error_occurred also log the device name.
* Made error_limit require an error message and also documented a bit
better.
* Changed iter_nodes to just take a ring and a partition and yield
all the nodes itself so it could control the number of nodes used
in a given request (see the sketch after this list). Also happens
to consolidate where sort_nodes is called.
* Updated account_info and container_info to use all nodes from
iter_nodes and to call error_occurred appropriately.
* Updated GETorHEAD_base to not track attempts on its own and just
stop when iter_nodes tells it to stop. Also, it doesn't take the
nodes to contact anymore; instead it takes the ring and gets the
nodes from iter_nodes itself.
Elsewhere:
* Ring now has a get_part method.
* Made changes to reflect all of the above.
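A minimal sketch of the consolidated iter_nodes shape, assuming a
ring object with the standard get_part_nodes/get_more_nodes methods;
node_count plays the role of the new request_node_count setting, and
its default here is illustrative:

def iter_nodes(ring, partition, node_count=3):
    # Yield the primary nodes for the partition first.
    primaries = ring.get_part_nodes(partition)
    for node in primaries:
        yield node
    # Then yield handoff nodes until node_count total nodes
    # have been offered to the caller.
    yielded = len(primaries)
    for node in ring.get_more_nodes(partition):
        if yielded >= node_count:
            break
        yield node
        yielded += 1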
Change-Id: I37f76c99286b6456311abf25167cd0485bfcafac
- We allow all headers requested in the preflight request. The CORS
specification does leave the door open for this, as mentioned in
http://www.w3.org/TR/cors/#resource-preflight-requests
Note: since the list of headers can be unbounded, simply returning
the requested headers can be enough (see the example exchange below).
- This is a followup to review:
https://review.openstack.org/#/c/24415/.
- Fixes bug 1155034.
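An illustrative preflight exchange (origin, path and header names are
examples):
OPTIONS /v1/AUTH_test/container HTTP/1.1
Origin: http://example.com
Access-Control-Request-Method: PUT
Access-Control-Request-Headers: x-object-meta-color

HTTP/1.1 200 OK
Access-Control-Allow-Origin: http://example.com
Access-Control-Allow-Methods: PUT
Access-Control-Allow-Headers: x-object-meta-color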
Change-Id: If7b8f2f3a581c5209892d1ccc9f06ddb8fac92dd
For a multiple-node swift cluster setup, these directories should be
created during the setup steps. Otherwise, starting up the storage
servers will produce errors, and the error messages won't indicate
that a missing directory is the cause.
Change-Id: I67d74abb7743b76739b5e747d6dcbd3214b00774
Fixes: bug #1158519
A new configuration parameter is added to /etc/swift/swift.conf
[swift-hash]
swift_hash_path_prefix = 'random unique string'
New installations are advised to set this parameter to a random secret,
which would not be disclosed outside the organization.
The same secret needs to be used by all swift servers of the same cluster.
Existing installations should set this parameter to an empty string
(the default).
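A simplified sketch of how the prefix enters the partition hash
(modeled on swift.common.utils.hash_path; the secret values shown
are placeholders):

from hashlib import md5

HASH_PATH_PREFIX = 'random unique string'
HASH_PATH_SUFFIX = 'changeme'

def hash_path(account, container=None, obj=None):
    # The secret prefix and suffix keep on-disk placement
    # unpredictable to anyone who does not know them.
    paths = [account]
    if container:
        paths.append(container)
    if obj:
        paths.append(obj)
    name = HASH_PATH_PREFIX + '/' + '/'.join(paths) + HASH_PATH_SUFFIX
    return md5(name.encode('utf-8')).hexdigest()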
DocImpact
Fixes: Bug #1157454
Change-Id: I63b10d0b7d6dd3f74e0f10bb41b5f240fa03578a
In the Instructions for a Multiple Server Swift Installation (Ubuntu)
doc, the commands for creating /var/run/swift and changing its owner
were not using the right format; they were missing two colons.
Change-Id: Ie23007a0da498373fbfb137c7edb3d80813c6ba5
Fixes: bug #1158310
The Multiple Server Swift Installation (Ubuntu) instructions do not
indicate that the directory /var/run/swift needs to be created. That
directory actually needs to be created, and its ownership needs to be
changed to the user/group the swift services run under. This patch
fixes the document: it gives the steps to create the directory and
set the ownership right, and it also gives instructions on how a
script can be added so that swift services can be restarted after
system reboots.
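The commands in question are along these lines (assuming the services
run as user and group swift):
$ sudo mkdir -p /var/run/swift
$ sudo chown swift:swift /var/run/swift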
Change-Id: Id61aa67cc0d6f66d749701e6ea824b1ff3b2c478
Fixes: bug #1156631
We can set x-versions-location to an empty value to remove this header
via the API, but to keep things consistent, we should allow
x-remove-versions-location too. The usage is to POST
"x-remove-versions-location: x", just like the ACL removal headers.
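For example (endpoint and token are placeholders):
$ curl -X POST -H "X-Auth-Token: $TOKEN" \
    -H "X-Remove-Versions-Location: x" \
    http://swift.example.com/v1/AUTH_test/container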
Fixed: bug #1107592
Change-Id: I1271eec6401d4a0e8c1a7c2d63aeb8dfef00bf6d
The region is one level above the zone; it is intended to represent a
chunk of machines that is distant from others with respect to
bandwidth and latency.
Old rings will default to having all their devices in region 1. Since
everything is in the same region by default, the ring builder will
simply distribute across zones as it did before, so your partition
assignment won't move because of this change. If you start adding
devices in other regions, of course, the assignment will change to
take that into account.
swift-ring-builder still accepts the same syntax as before, but will
default added devices to region 1 if no region is specified.
Examples:
$ swift-ring-builder foo.builder add r2z1-1.2.3.4:555/sda
$ swift-ring-builder foo.builder add r1z3-1.2.3.4:555/sda
$ swift-ring-builder foo.builder add z3-1.2.3.4:555/sda
Also, some updates to ring-overview doc.
Change-Id: Ifefbb839cdcf033e6c9201fadca95224c7303a29
The recent account_quotas (https://review.openstack.org/23434)
patch added a new setting, request.environ['reseller_request'].
This patch adds tests for tempauth and keystoneauth as well as
an updated overview_auth.rst.
Change-Id: Icdb7ec9948ae7424b0721fc51a143782b2fdc5a6
container-sync now skips faulty objects in the first and second
rounds. All replicas try in the second round. No server will give up
until the faulty object succeeds.
Fixes: bug #1068423
Change-Id: I0defc174b2ce3796a6acf410a2d2eae138e8193d
Add a new middleware implementing account quotas.
This middleware blocks write requests (PUT, POST) if a given quota (in bytes)
is exceeded while DELETE requests are still allowed.
Quotas are stored in the x-account-meta-quota-bytes metadata entry.
Write requests to this metadata setting are only allowed for resellers.
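For example, a reseller can set a 10 GB quota on an account (endpoint
and token are placeholders):
$ curl -X POST -H "X-Auth-Token: $RESELLER_TOKEN" \
    -H "X-Account-Meta-Quota-Bytes: 10737418240" \
    http://swift.example.com/v1/AUTH_test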
Change-Id: I57fd7c6209f34cc79d4bab72d500d43ba2a62083
Add support for functional tests that work with Apache web front end
Change-Id: I72358a12016eeccc842d834461dbebaa188aa117
Implements: blueprint wsgi-application-interface
Fixes bug 1104708
There could be a severe performance drop for swift if one disk of a
storage node is problematic, due to the tragic state of async disk I/O.
This patch provides PUT timing per kB transferred (ms/kB) monitoring
support for each non-zero-byte request of each disk, reported to
StatsD for alerting.
- Adds "object-server.PUT.<device>.timing" metrics for the object-server.
DocImpact
Change-Id: Ie94bddad28e8be52e71683bf6c9db988664abe47
- Things will go badly with swift if we leave authtoken at its
default of using its own memcache connection via the C-based
python-memcache binding.
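The alternative is to point authtoken at swift's own cache object in
the WSGI environment (a sketch; the filter_factory path may differ by
keystone client version):
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
cache = swift.cache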
Change-Id: I293b875acdcb06e5a7a0cfa9a9bb5d7678675da0
Fix bug 1129760
Without a speed limit, the DB auditor will likely consume a high CPU%
on the storage node. That will heavily impact the cluster's
performance.
This patch adds two options for account/container auditor:
- containers_per_second: Maximum containers audited per second
- accounts_per_second: Maximum accounts audited per second
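For example, in the account/container server configs (values
illustrative):
[container-auditor]
containers_per_second = 200

[account-auditor]
accounts_per_second = 200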
DocImpact
Change-Id: I9faa506438185a83ca77db4906969328624d015f
These are mostly cosmetic fixes for irritating imperfections:
- "separated with commas" was duplicated, leave just one
- extra whitespace here and there, man pages are not PEP8, drop
- weird extra commas, drop
- Fedora logs to /var/log/messages
- "drive is has failed", drop "is"
Change-Id: I5ceba2e61b16db4855d76c92cbc83663b9b2a0da
This was an outstanding TODO for StatsD Swift metrics. A new timing
metric is tracked for (only) GET requests for accounts, containers,
and objects:
proxy-server.<req_type>.GET.<status_int>.first-byte.timing
Also updated StatsD documentation in the Admin Guide to clarify that
timing metrics are sent in units of milliseconds.
Change-Id: I5bb781c06cefcb5280f4fb1112a526c029fe0c20
Replaced GA code for cross-domain tracking.
Patchset addresses reviewer's comments
and follows new guidance from Foundation:
http://wiki.openstack.org/Documentation/Copyright
Adds current year to each Sphinx-built page.
Addresses only the docs copyright attribution, not code files.
Change-Id: Ib90fd1c92c8fafce2db821bc2b17cef1377cfc1e
- Also change the formatting of the documentation a bit.
- Fix "WARNING: Title underline too short." in misc.rst.
Change-Id: I2f4e36bcb5e01e984f0af0152bc5b3b9f7e942ce
Add a new middleware implementing some basic container quotas.
Quotas are subject to several limitations: eventual consistency, the timeliness
of the cached container_info (60 second TTL by default), and an inability to
reject chunked transfer uploads that exceed the quota (though once the quota
is exceeded, new chunked transfers will be refused).
However, they get most of the way to container quotas fairly inexpensively.
Quotas are set by adding meta values to the container, and are validated when
set:
X-Container-Meta-Quota-Bytes: Maximum size of the container, in bytes.
X-Container-Meta-Quota-Count: Maximum object count of the container.
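For example, to cap a container at 5 GB (endpoint and token are
placeholders):
$ curl -X POST -H "X-Auth-Token: $TOKEN" \
    -H "X-Container-Meta-Quota-Bytes: 5368709120" \
    http://swift.example.com/v1/AUTH_test/container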
DocImpact
Change-Id: I77cfbf6dc231a2e522bd67328e4c082424a93eee
Some systems behave badly when they completely run out of space. To
alleviate this problem, you can set the fallocate_reserve conf value
to a number of bytes to "reserve" on each disk. When the disk free
space falls at or below this amount, fallocate calls will fail, even
if the underlying OS fallocate call would succeed. For example, a
fallocate_reserve of 5368709120 (5G) would make all fallocate calls
fail, even for zero-byte files, when the disk free space falls under
5G.
The default fallocate_reserve is 0, meaning "no reserve", and so the
software behaves exactly as it always has unless you set this conf
value to something non-zero.
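For example, to get the 5G reserve described above, in an
object-server config:
[DEFAULT]
fallocate_reserve = 5368709120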
Also fixed ring builder's search_devs doc bugs.
Related: To get rsync to do the same, see
https://github.com/rackspace/cloudfiles-rsync
Specifically, see this patch:
https://github.com/rackspace/cloudfiles-rsync/blob/master/debian/patches/limit-fs-fullness.diff
DocImpact
Change-Id: I8db176ae0ca5b41c9bcfeb7cb8abb31c2e614527
If invoked as 'swift-ring-builder-safe', the directory containing the
builder file provided will be locked (via lock_parent_directory()).
This provides a small safeguard against multiple instances of the
swift-ring-builder (or other utilities that observe this lock)
attempting to write to or read the builder/ring files while
operations are in progress.
This is particularly useful in environments where ring management has been
automated (via Chef or custom solutions) but the operator still occasionally
needs to manually interact with the ring.
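For example (arguments are otherwise identical to swift-ring-builder):
$ swift-ring-builder-safe object.builder rebalance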
DocImpact
Change-Id: Ia362744a8151a91bfb586d01da582906726852e6