Add an environment variable to enable the use of the in-memory object
server during in-process functional test runs.
It might be worthwhile to just run under both object servers in-tree,
but this at least enables it without having to figure out how to make
two test runs in two different environments.
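As a rough sketch of how a run might look (the exact variable names
here are assumptions, not necessarily the ones this change adds):
    SWIFT_TEST_IN_PROCESS=1 SWIFT_TEST_IN_MEMORY_OBJ=true ./.functests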
DocImpact
Change-Id: Id76b008e1f273c639ae61550affddc32c5d7c419
Signed-off-by: Thiago da Silva <thiago@redhat.com>
auth_token middleware in python-keystoneclient is deprecated and has
been moved to the keystonemiddleware repo.
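For example, a paste filter section that used to point at
python-keystoneclient can load the same middleware from the new
package (minimal sketch; other authtoken options omitted):
    [filter:authtoken]
    paste.filter_factory = keystonemiddleware.auth_token:filter_factory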
Change-Id: Ia04aa83348e0776cb3239cb5420ee1450a990d5b
Closes-Bug: #1342274
It may not be obvious, but the existing code will let you change
disk_chunk_size just for the auditor, so this just points that
out in the docs. In one short test I ran with a 4-node cluster
with 18GB of 4MB objects on it, changing the auditor chunk size
from the default of 64K to 1MB decreased the auditor CPU time from
10% to 4%.
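For example, the override goes in the auditor stanza of the object
server config (using the values from the test above):
    [object-auditor]
    # the DEFAULT disk_chunk_size is 65536 (64K); reading 1MB at a time
    # dropped auditor CPU time from ~10% to ~4% in the test above
    disk_chunk_size = 1048576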
Also added test code to make sure this overridden value is
actually used, and checked other AuditorWorker conf values as
well.
Change-Id: Ia12e1c6127877dc2124b60cd963cd0b6d5f3d6ef
Aside from the user-agent string and the directory containing the
object services code (which is named "obj"), the strings "obj-server",
"obj-updater", and "obj-replicator" (or "obj-<anything>"*) do not
appear in the swift code base.
Furthermore, the container, account, and proxy services construct their
user-agent string, as reported in the logs, using their full name. In
addition, this full name also shows up as the name of the process via
"ps" or "top", etc., which can make it easier for admins to match log
entries with other tools.
For consistency, we update the object services to use an "object-"
prefix rather than "obj-" in their user-agent strings.
* obj-etag does appear in a unit test, but not as part of the regular
code.
Change-Id: I914fc189514207df2535731eda10cb4b3d30cc6c
The version of setuptools in Ubuntu 12.04 is too old and is causing
the SAIO instructions to fail when installing python-swiftclient.
The workaround is to install python-swiftclient's dependencies
before running:
python setup.py develop
This change adds a note to users of Ubuntu 12.04 to replace step 2
of "Getting the code" with:
cd $HOME/python-swiftclient; sudo pip install -r requirements.txt; \
python setup.py develop; cd -
Change-Id: I63f57bbf1f1158f8740f6137ad55ff49f12a316c
Closes-Bug: #1217288
We are soon going to put servers with a high ratio of disk to CPU
into production as object servers. One of our concerns with this
configuration is that the object auditor would take too long to
complete its audit cycle. Therefore we decided to parallelise
the auditor.
The auditor already uses fork(), so we decided to use the parallel
model from the replicator. Concurrency is set by the concurrency
parameter in the auditor stanza, which sets the number of parallel
checksum auditors. The actual number of parallel auditing processes
is concurrency + 1 if zero_byte_fps is non-zero.
Only one ZBF process is forked, and a new ZBF process is forked as
soon as the current ZBF process finishes. Thus the last process
running will always be a ZBF process.
Both forever and once modes are parallelised.
Each checksum auditor process submits a nested dictionary with keys
{'object_auditor_stats_ALL': {'diskn': {..}}} to dump_recon_cache
so that the object_auditor_stats_ALL dict in recon cache consists
of individual sub-dicts for each of the object disks on the server.
The recon cache is no different to before when the checksum auditor
is run in serial mode. When swift-recon is run, it sums the stats
for the individual disks.
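As a sketch, the relevant stanza might look like this (concurrency is
the option added here; the zero-byte rate option shown is the existing
one and is included only for context):
    [object-auditor]
    # number of parallel checksum auditor processes; one additional ZBF
    # process is forked whenever the zero-byte check is enabled
    concurrency = 2
    zero_byte_files_per_second = 50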
DocImpact
Change-Id: I0ce3db57a43e482d4be351cc522fc9060af6e2d3
Fixed some grammar issues and formatting issues, and clarified some
wording now that it's easier to read fully rendered.
Change-Id: Ie803dd1a84d50de7663a7099c32d81807701e188
Add overview and example information for using Storage Policies.
DocImpact
Implements: blueprint storage-policies
Change-Id: I6f11f7a1bdaa6f3defb3baa56a820050e5f727f1
Log lines can get quite large, as we previously noticed with rsync error
log lines. We added a setting to cap those, but it really looks like we
should have just added this overall limit. We noticed the issue when we
switched to UDP syslogging and lines would occasionally blow past the
16436-byte lo MTU! This causes Python's logging code to get an error and hilarity
ensues.
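A minimal sketch of the resulting knob, assuming it is exposed as
log_max_line_length in the server configs (the option name and the
example value are assumptions):
    [DEFAULT]
    # cap emitted log lines at this many characters
    log_max_line_length = 8192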
Change-Id: I44bdbe68babd58da58c14360379e8fef8a6b75f7
This allows an easier and more explicit way to tell swift-init to run on
specific servers. For example with an SAIO, this allows you to do
something like:
swift-init object-server.1 reload
to reload just the 1st object server. A more real world example is when
you are running separate servers for replication. In this example you
might have an object-server/public.conf and
object-server/replication.conf. With this change you can do something
like:
swift-init object-server.replication reload
to just reload the replication server.
DocImpact
Change-Id: I5c6046b5ee28e17dadfc5fc53d1d872d9bb8fe48
The profile middleware provides a tool to profile Swift
code on the fly and collect statistical data for performance
analysis. A simple native Web UI is also provided to help
query and visualize the data.
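A rough sketch of wiring it into the proxy pipeline (the filter name
and entry point are assumptions; check the middleware docs for the
real ones):
    [pipeline:main]
    pipeline = catch_errors xprofile proxy-server

    [filter:xprofile]
    use = egg:swift#xprofile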
Change-Id: I6a1554b2f8dc22e9c8cd20cff6743513eb9acc05
Implements: blueprint profiling-middleware
Make account, object, and container servers construct log lines using the
same utility function so they will produce identically formatted lines.
This change reorders the fields logged for the account server.
This change also adds the "additional info" field to the two servers that
didn't log that field. This makes the log lines identical across all 3
servers. If people don't like that, I can take that out. I think it makes
the documentation, parsing of the log lines, and the code a tad cleaner.
DocImpact
Change-Id: I268dc0df9dd07afa5382592a28ea37b96c6c2f44
Closes-Bug: 1280955
This is a very simple swift tool to retrieve information
about an account that is located on a storage node.
One can call the tool with a given account db file
as it is stored on the storage node system.
It will then return various pieces of information about that account.
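A sketch of the intended usage, assuming the tool is installed as
swift-account-info (the path components are placeholders):
    swift-account-info /srv/node/sdb1/accounts/<part>/<suffix>/<hash>/<hash>.db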
Change-Id: Ibfeee790adc000fc177b4b3c03d22ff785fda325
This is a very simple swift tool to retrieve information
about a container that is located on a storage node.
One can call the tool with a given container db file
as it is stored on the storage node system.
It will then return various pieces of information about that container.
Change-Id: Ifebaed6c51a9ed5fbc0e7572bb43ef05d7dd254b
This makes it so test-cors.html is a real file in doc/source so it's easy for
those in the know to jump in there with a `python -m SimpleHTTPServer` and
point their web browser to `http://localhost:8000/test-cors.html`.
The example html and javascript still appear in the docs in their entirety
using the Sphinx literal include directive.
Change-Id: Ia0ba36df6c58795e3764fa53b7f585dcc1b3be07
CORS doesn't really work with swift right now. OPTIONS calls for the most part
work, but for so-called "simple cross-site requests" (i.e. those that don't
require a pre-flight OPTIONS request) Swift always returns the Origin it was
given as the Access-Control-Allow-Origin in the response. This makes CORS
"work" for these requests but if you actually wanted the javascript user agent
to restrict anything for you it wouldn't be able to!
You can duplicate the issue with the updated CORS test page:
http://docs.openstack.org/developer/swift/cors.html#test-cors-page
And a public container with an 'X-Container-Meta-Access-Control-Allow-Origin'
that does NOT match the webserver hosting the test-cors-page.
e.g.
with a public container that accepts cross-site requests from "example.com":
`swift post cors-container -m access-control-allow-origin:example.com -r .r:*`
You could point your browser at a copy of the test-cors-page on your
filesystem (the browser will send 'Origin: null')
Without a token the XMLHttpRequest will not request any custom headers (i.e.
Access-Control-Request-Headers: x-auth-token) and the request will be made
without a preflight OPTIONS request (which Swift would have denied anyway
because the origins don't match)
i.e. fill in "http://saio:8080/v1/AUTH_test/cors-container" for "URL" and
leave "Token" blank.
You would expect that the browser would not complete the request because
"Origin: null" does not match the configured "Access-Control-Allow-Origin:
example.com" on the container metadata, and indeed with this patch - it won't!
Also:
The way cors is set up does not play well with certain applications for swift.
If you are running a CDN on top of swift and you have the
Access-Control-Allow-Origin cors header set to * then you probably want the *
to be cached on the CDN, not the Origin that happened to result in an
origin request.
Also:
If you were unfortunate enough to allow cors headers to be saved directly
onto objects then this allows them to supersede the headers coming from the
container.
NOTE: There is a change in behavior with this patch. Because it's CORS, a
spec that was created only to cause annoyance to all, I'll write out
what's being changed and hopefully someone will speak up if it breaks
their stuff.
previous behavior: When a request was made with a Origin header set the
cors_validation decorator would always add that origin as
the Access-Control-Allow-Origin header in the response-
whether the passed origin was a match with the container's
X-Container-Meta-Access-Control-Allow-Origin or not, or even
if the container did not have CORS set up at all.
new behavior: If strict_cors_mode is set to True in the proxy-server.conf
(which is the default) the cors_validation decorator will only
add the Access-Control-Allow-Origin header to the response when
the request's Origin matches the value set in
X-Container-Meta-Access-Control-Allow-Origin. NOTE- if the
container does not have CORS set up it won't just magically start
working. Furthermore, if the Origin doesn't match the
Access-Control-Allow-Origin - a successfully authorized request
(either by token or public ACL) won't be *denied* - it just
won't include the Access-Control-Allow-Origin header (it's up
to the security model in the browser to cancel the request
if the response doesn't include a matching Allow-Origin
header). On the other hand, if you want to restrict requests
with CORS, you can actually do it now.
If you are worried about breaking current functionality you
must set:
strict_cors_mode = False
in the proxy-server.conf. This will continue returning the
passed-in Origin as the Access-Control-Allow-Origin in the
response.
previous: If you had X-Container-Meta-Access-Control-Allow-Origin set to *
and you passed in Origin: http://hey.com you'd get
Access-Control-Allow-Origin: http://hey.com back. This was true for
both OPTIONS and regular reqs.
new: With X-Container-Meta-Access-Control-Allow-Origin set to * you get * back
for both OPTIONS and regular reqs.
previous: cors headers saved directly onto objects (by allowing them to be
saved via the allowed_headers config in the object-server conf)
would be overridden by whatever container cors you have set up.
new: For regular (non-OPTIONS) calls the object headers will be kept. The
container cors will only be applied to objects without the
'Access-Control-Allow-Origin' and 'Access-Control-Expose-Headers' headers.
This behavior doesn't make a whole lot of sense for OPTIONS calls so I
left that as is. I don't think that allowing cors headers to be saved
directly onto objects is a good idea and it should be discouraged.
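For reference, a minimal proxy-server.conf sketch that opts out of the
new strict behavior (the app section is the standard proxy one; only
strict_cors_mode is added by this patch):
    [app:proxy-server]
    use = egg:swift#proxy
    strict_cors_mode = false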
DocImpact
Change-Id: I9b0219407e77c77a9bb1133cbcb179a4c681c4a8
In object audit "once" mode we are allowing the user to specify
a subset of devices to audit using the "--devices" command-line
option. The subset is specified as a comma-separated list. This
patch is taken from a larger patch to enable parallel processing
in the object auditor.
We've had to modify recon so that it will work properly with this
change to "once" mode. We've modified dump_recon_cache()
so that it will store nested dictionaries, in other words it will
store a recon cache entry such as {'key1': {'key2': {...}}}. When
the object auditor is run in "once" mode with "--devices" set the
object_auditor_stats_ALL and ZBF entries look like:
{'object_auditor_stats_ALL': {'disk1disk2..diskn': {...}}}. When
swift-recon is run, it hunts through the nested dicts to find the
appropriate entries. The object auditor recon cache entries are set
to {} at the beginning of each audit cycle, and individual disk
entries are cleared from cache at the end of each disk's audit cycle.
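For example, a once-mode run restricted to two disks might be invoked
like this (device names are placeholders):
    swift-object-auditor /etc/swift/object-server.conf once --devices=sdb1,sdb2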
DocImpact
Change-Id: Icc53dac0a8136f1b2f61d5e08baf7b4fd87c8123
I alphabetized the items under "Middleware" in the source documentation
to make them easier to locate.
Change-Id: I3a0108c89d16ef07b7623dda518b3096c2686002
I alphabetized the items under "Proxy", "Account", "Container",
and "Object" in the source documentation to make them easier to
locate.
Change-Id: Ia9cca0ee558cb1e0361c1a88103352bd006da1e3
Fixes the swob module being referenced twice in misc.rst,
which resulted in duplicate sections in the doc.
Also fixes a build_sphinx warning about a too-short section
underline in middleware.rst.
Change-Id: Ibe44895f933a6503ca04ccd3a084bc0cfd913213
Steps were numbered 1, 2, (note reset), 1, 2, etc. Then a user says:
"I'm on Step 2 in Proxy section, er..."
See a bug, fix a bug.
Change-Id: If6f32b3a33e1070e705812df7ab299e6736c9806
This is for the same reason that SLO got pulled into middleware, which
brings benefits like automatic retry of GETs on broken connections and
the multi-ring storage policy support.
The proxy will automatically insert the dlo middleware at an
appropriate place in the pipeline the same way it does with the
gatekeeper middleware. Clusters will still support DLOs after upgrade
even with an old config file that doesn't mention dlo at all.
Includes support for reading config values from the proxy server's
config section so that upgraded clusters continue to work as before.
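If you do want dlo in the pipeline explicitly anyway, a sketch of the
filter section might look like the other built-in filters (the entry
point shown is an assumption):
    [filter:dlo]
    use = egg:swift#dlo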
Bonus fix: resolve 'after' vs. 'after_fn' in proxy's required filters
list. Having two was confusing, so I kept the more-general one.
DocImpact
blueprint multi-ring-large-objects
Change-Id: Ib3b3830c246816dd549fc74be98b4bc651e7bace