Set include_service_catalog=False in Keystone's auth_token
example configuration. Swift does not use X-Service-Catalog
so there is no need to suffer its overhead. In addition,
service catalogs can be larger than max_header_size so this
change avoids a failure mode.
DocImpact
Relates to bug 1228317
Change-Id: If94531ee070e4a47cbd9b848d28e2313730bd3c0
Swift can now optionally be configured to allow requests to '/info',
providing information about the swift cluster. Additionally,
HMAC-signed requests to
'/info?swiftinfo_sig=<sign>&swiftinfo_expires=<expires>' can be
configured, allowing privileged access to more sensitive information
not meant to be public.
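For illustration, a signed query string can be built with a few lines
of Python. This sketch assumes the signature is an HMAC-SHA1 over
"method\nexpires\npath" keyed with the secret configured on the proxy;
both the message format and the name of that secret are assumptions
here, not taken from this change.

    import hmac
    import time
    from hashlib import sha1

    def signed_info_query(admin_key, ttl=300, method='GET', path='/info'):
        # admin_key is whatever secret the proxy has been configured with
        expires = int(time.time() + ttl)
        msg = '%s\n%s\n%s' % (method, expires, path)
        sig = hmac.new(admin_key.encode(), msg.encode(), sha1).hexdigest()
        return '%s?swiftinfo_sig=%s&swiftinfo_expires=%s' % (path, sig, expires)

    print(signed_info_query('my-admin-key'))  # append to the proxy's base URL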
DocImpact
Change-Id: I2379360fbfe3d9e9e8b25f1dc34517d199574495
Implements: blueprint capabilities
Closes-Bug: #1245694
New replication_one_per_device option (True by default)
restricts incoming REPLICATION requests to one per
device, replication_concurrency permitting.
Also adds replication_lock_timeout (15 seconds by default)
to control how long a request will wait to obtain
a replication device lock before giving up.
This should be very useful in that you can be
assured any concurrent REPLICATION requests are
each writing to distinct devices. If you have 100
devices on a server, you can set
replication_concurrency to 100 and be confident
that, even if 100 replication requests were
executing concurrently, they'd each be writing to
separate devices. Before, all 100 could end up
writing to the same device, bringing it to a
horrible crawl.
NOTE: This is only for ssync replication. The
current default rsync replication still has the
potentially horrible behavior.
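As a rough sketch of the mechanism (illustrative only, not the actual
object-server code; the names below just mirror the options described
above):

    import threading
    from contextlib import contextmanager

    device_locks = {}                               # one lock per device name
    concurrency = threading.BoundedSemaphore(4)     # replication_concurrency

    @contextmanager
    def replication_slot(device, lock_timeout=15):  # replication_lock_timeout
        lock = device_locks.setdefault(device, threading.Lock())
        if not lock.acquire(timeout=lock_timeout):  # replication_one_per_device
            raise RuntimeError('replication lock busy for %s' % device)
        try:
            with concurrency:                       # server-wide request cap
                yield
        finally:
            lock.release()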
Change-Id: I36e99a3d7e100699c76db6d3a4846514537ff685
For this commit, ssync is just a direct replacement for how
we use rsync. Assuming we switch over to ssync completely
someday and drop rsync, we will then be able to improve the
algorithms even further (removing local objects as we
successfully transfer each one rather than waiting for whole
partitions, using an index.db with hash-trees, etc., etc.)
For easier review, this commit can be thought of in distinct
parts:
1) New global_conf_callback functionality for allowing
services to perform setup code before workers, etc. are
launched. (This is then used by ssync in the object
server to create a cross-worker semaphore to restrict
concurrent incoming replication; a sketch of such a
callback follows this list.)
2) A bit of shifting of items up from object server and
replicator to diskfile or DEFAULT conf sections for
better sharing of the same settings: conn_timeout,
node_timeout, client_timeout, network_chunk_size,
disk_chunk_size.
3) Modifications to the object server and replicator to
optionally use ssync in place of rsync. This is done in
a generic enough way that switching to FutureSync should
be easy someday.
4) The biggest part, and (at least for now) completely
optional part, are the new ssync_sender and
ssync_receiver files. Nice and isolated for easier
testing and visibility into test coverage, etc.
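To make part 1 concrete, here is roughly what such a callback looks
like (modeled on the object server's use described above; treat the
option name and default as illustrative):

    import multiprocessing

    def global_conf_callback(preloaded_app_conf, global_conf):
        # Runs once in the parent process, before workers are forked, so
        # the semaphore created here is inherited by, and therefore shared
        # across, all workers.
        concurrency = int(
            preloaded_app_conf.get('replication_concurrency') or 4)
        if concurrency:
            # Stored in a list so it can ride along inside paste's global_conf.
            global_conf['replication_semaphore'] = [
                multiprocessing.BoundedSemaphore(concurrency)]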
All the usual logging, statsd, recon, etc. instrumentation
is still there when using ssync, just as it is when using
rsync.
Beyond the essential error and exceptional condition
logging, I have not added any additional instrumentation at
this time. Unless there is something someone finds super
pressing to have added to the logging, I think such
additions would be better as separate change reviews.
FOR NOW, IT IS NOT RECOMMENDED TO USE SSYNC ON PRODUCTION
CLUSTERS. Some of us will be using it in a limited fashion
to look for any subtle issues, tuning, etc., but generally
ssync is an experimental feature. In its current implementation it is
probably going to be a bit slower than rsync, but if all
goes according to plan it will end up much faster.
There are no comparisons yet between ssync and rsync other
than some raw virtual machine testing I've done to show it
should compete well enough once we can put it in use in the
real world.
If you Tweet, Google+, or whatever, be sure to indicate it's
experimental. It'd be best to keep it out of deployment
guides, howtos, etc. until we all figure out if we like it,
find it to be stable, etc.
Change-Id: If003dcc6f4109e2d2a42f4873a0779110fff16d6
If you're setting one of these up, you're probably going to use it for
development, in which case you want everything but the kitchen sink
turned on so you can just start hacking away.
Change-Id: I98d178ff545cbf8d853c102e9fce76fb9f6773ac
Refactor on-disk knowledge out of the object server by pushing the
async update pickle creation to the new DiskFileManager class (name is
not the best, so suggestions welcome), along with the REPLICATE
method logic. We also move the mount checking and thread pool storage
to the new ondisk.Devices object, which then also becomes the new home
of the audit_location_generator method.
For the object server, a new setup() method is now called at the end
of the controller's construction, and the _diskfile() method has been
renamed to get_diskfile(), to allow implementation-specific behavior.
We then hide the need for the REST API layer to know how and where
quarantining needs to be performed. There are now two places it is
checked internally: on open(), where we verify the content-length,
name, and x-timestamp metadata, and in the reader on close(), where
the etag metadata is checked if the entire file was read.
We add a reader class to allow implementations to isolate the WSGI
handling code for that specific environment (it is used nowhere else
in the REST APIs). This simplifies the caller's code to just use a
"with" statement once the file is open, avoiding multiple points where
close() needs to be called.
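Roughly, the caller pattern this enables looks like the following
sketch (method names follow the comparison table below; the exact
signatures and calling conventions are approximate, not verbatim from
the code):

    def serve_get(diskfile_manager, device, partition, account, container, obj):
        disk_file = diskfile_manager.get_diskfile(device, partition,
                                                  account, container, obj)
        with disk_file.open():                   # content-length, name and
            metadata = disk_file.read_metadata()   # x-timestamp verified here
            app_iter = disk_file.reader()        # WSGI iterable; checks the etag
        return metadata, app_iter                # on close() after a full read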
For a full historical comparison, including the usage patterns, see:
https://gist.github.com/portante/5488238
(as of master, 2b639f5, Merge   |  This Commit
"Fix 500 from account-quota     |
middleware")                    |
--------------------------------+------------------------------------
                                |  DiskFileManager(conf)
                                |    Methods:
                                |      .pickle_async_update()
                                |      .get_diskfile()
                                |      .get_hashes()
                                |    Attributes:
                                |      .devices
                                |      .logger
                                |      .disk_chunk_size
                                |      .keep_cache_size
                                |      .bytes_per_sync
                                |
DiskFile(a,c,o,keep_data_fp=)   |  DiskFile(a,c,o)
  Methods:                      |    Methods:
   *.__iter__()                 |
    .close(verify_file=)        |
    .is_deleted()               |
    .is_expired()               |
    .quarantine()               |
    .get_data_file_size()       |
                                |      .open()
                                |      .read_metadata()
    .create()                   |      .create()
                                |      .write_metadata()
    .delete()                   |      .delete()
  Attributes:                   |    Attributes:
    .quarantined_dir            |
    .keep_cache                 |
    .metadata                   |
                                |
                                | *DiskFileReader()
                                |    Methods:
                                |      .__iter__()
                                |      .close()
                                |    Attributes:
                                |     +.was_quarantined
                                |
DiskWriter()                    |  DiskFileWriter()
  Methods:                      |    Methods:
    .write()                    |      .write()
    .put()                      |      .put()
                                |
* Note that the DiskFile class  |  * Note that the DiskReader() object
  implements all the methods    |    returned by the
  necessary for a WSGI app      |    DiskFileOpened.reader() method
  iterator                      |    implements all the methods
                                |    necessary for a WSGI app iterator
                                |
                                |  + Note that if the auditor is
                                |    refactored to not use the DiskFile
                                |    class, see
                                |    https://review.openstack.org/44787
                                |    then we don't need the
                                |    was_quarantined attribute
A reference "in-memory" object server implementation of a backend
DiskFile class is provided in swift/obj/mem_server.py and
swift/obj/mem_diskfile.py.
One can also reference
https://github.com/portante/gluster-swift/commits/diskfile for the
proposed integration with the gluster-swift code based on these
changes.
Change-Id: I44e153fdb405a5743e9c05349008f94136764916
Signed-off-by: Peter Portante <peter.portante@redhat.com>
This reverts commit 7760f41c3ce436cb23b4b8425db3749a3da33d32
Change-Id: I95e57a2563784a8cd5e995cc826afeac0eadbe62
Signed-off-by: Peter Portante <peter.portante@redhat.com>
The SAIO is purposely cut into two parts, so that you don't have to switch
back and forth between root and your unprivileged user. Add some "note" box
callouts to highlight this changeover.
Change-Id: I8b1a8f0539eac60d4121bdd4dab01df75ecca207
This creates a pool of connections to each memcache server so that
connections will not grow without bound. This also adds a proxy config
option, "max_memcache_connections", which controls how many connections
are available in the pool.
A side effect of the change is that we had to change the memcache calls
that used noreply, and instead wait for the result of the request.
Leaving noreply in place could cause a race condition (specifically in
account auto create), due to one request calling `memcache.del(key)` and
then `memcache.get(key)` with a different pooled connection. If the
delete didn't complete fast enough, the get could return the old value
before it was deleted, leading the caller to believe that the account
had not been autocreated.
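A sketch of the pooling approach (the real MemcacheRing code differs in
detail; this just shows the bounded-pool idea using eventlet, which
Swift already depends on, with the option name taken from the text
above):

    import socket
    from eventlet.pools import Pool

    class MemcacheConnPool(Pool):
        def __init__(self, server, max_connections):
            # max_connections comes from max_memcache_connections
            Pool.__init__(self, max_size=max_connections)
            self.server = server                  # e.g. ('127.0.0.1', 11211)

        def create(self):
            # Only called when the pool is below max_size and has no idle conn.
            sock = socket.create_connection(self.server)
            return sock, sock.makefile('rwb')

Callers check a connection out with pool.get(), which blocks once
max_connections are in use, and hand it back with pool.put() after the
(now always-awaited) reply has been read.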
ClaysMindExploded
DocImpact
Change-Id: I350720b7bba29e1453894d3d4105ac1ea232595b
If you don't, then newer versions of xattr won't install, and since
our xattr requirement is simply ">= 0.4" in requirements.txt, this
affects anyone setting up a new SAIO.
This happened with xattr 0.7, which was released on 2013-07-19.
Change-Id: Iaf335fa25a2908953d1fd218158ebedf5d01cc27
Place all the methods related to on-disk layout and / or configuration
into a new common module that can be shared by the various modules
using the same on-disk layout.
Change-Id: I27ffd4665d5115ffdde649c48a4d18e12017e6a9
Signed-off-by: Peter Portante <peter.portante@redhat.com>
If handoffs_first is True, then the object replicator will give
priority to partitions that are not supposed to be on the node.
If handoff_delete is set to a number (n), then it will delete a handoff
partition once at least n replicas have been successfully replicated.
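In other words (illustrative decision logic only, not the replicator's
actual code):

    def can_delete_handoff(successful_pushes, replica_count, handoff_delete=None):
        if handoff_delete is None:
            # default behaviour: require every replica to have been synced
            return successful_pushes >= replica_count
        return successful_pushes >= handoff_delete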
Also fixed a couple of things in the object replicator unit tests and
added some more
DocImpact
Change-Id: Icb9968953cf467be2a52046fb16f4b84eb5604e4
Used groff to recreate the errors. I believe all the issues
except `binary-without-manpage` are solved. Would like
confirmation from someone using Lintian.
Closes-Bug: #1210114
Change-Id: I533205c53efdb7cdf3645cc3e3dc487f9ee5640a
The main purpose of this patch is to lay the groundwork for allowing
the container and account servers to optionally use pluggable backend
implementations. The backend.py files will eventually be the module
where the backend APIs are defined via docstrings of this reference
implementation. The swift/common/db.py module will remain an internal
module used by the reference implementation.
We have a raft of changes to docstrings staged for later, but this
patch takes care to relocate ContainerBroker and AccountBroker into
their new home intact.
Change-Id: Ibab5c7605860ab768c8aa5a3161a705705689b04
- Makes swift-dispersion-populate a bit faster when using a larger
dispersion_coverage with a larger part_power.
- Adds option to only run population for containers OR objects
- Adds option to let you resume population at a given point (useful if
  you need to resume population after a previous run errored out or the
  like) by specifying which suffix to start at.
The original populate just randomly used uuid4().hex as a suffix on the
container/object names until all the required partitions were covered.
This isn't a big deal if you're only doing 1% coverage on a ring with a
small part power, but it takes ages if you're doing 100% on a larger ring.
Change-Id: I52f890a774412c1d6179f12db9081aedc58b6bc2
These are headers that will be stripped unless the WSGI environment
contains a true value for 'swift_owner'. The exact definition of a
swift_owner is up to the auth system in use, but usually indicates
administrative responsibilities.
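For reference, the check amounts to something like the sketch below;
the header name used here is only a hypothetical example, since this
text does not list the affected headers:

    SENSITIVE = ('x-container-sync-key',)     # hypothetical example header

    def filter_headers(headers, environ):
        # The auth system sets environ['swift_owner'] = True for admins.
        if environ.get('swift_owner'):
            return dict(headers)
        return {k: v for k, v in headers.items()
                if k.lower() not in SENSITIVE}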
DocImpact
Change-Id: I972772fbbd235414e00130ca663428e8750cabca
The swift-dispersion-populate and swift-dispersion-report tools now
accept a --insecure option.
Also, dispersion.conf now has a keystone_api_insecure option.
Default is obviously to use the secure path.
DocImpact
Change-Id: I4000352e547d9ce5b08ade54e0c886281caff891
Making it possible for one to override the default set of regexes
used to search for device block errors in the log file. Also making
the log file naming pattern configurable by setting it in the
drive-audit.conf file.
Updating the "Detecting Failed Drives" section of the admin guide as well.
Change-Id: I7bd3acffed196da3e09db4c9dcbb48a20bdd1cf0
Change the default value of wsgi workers from 1 to auto. The new default
value for workers in the proxy, container, account & object wsgi servers will
spawn as many workers as you have cpu cores.
This will not be ideal for some configurations, but it's much more likely to
produce a successful out-of-the-box deployment.
Inspect the number of cpu_cores using python's multiprocessing when available.
Multiprocessing was added in python 2.6, but I know I've compiled python
without it before by accident. The cpu_count method seems to be pretty
system-agnostic, but it says it can raise NotImplementedError or sometimes
return 0.
Add a new utility method 'config_auto_int_value' to pull an integer out of the
config which has a dynamic default.
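A sketch of how the helper and the cpu_count() fallback fit together
(the helper name comes from this change, but its signature and the
fallback handling below are assumptions):

    def config_auto_int_value(value, default):
        # 'auto' (or an unset value) falls back to the dynamic default
        if value is None or str(value).lower() == 'auto':
            return default
        return int(value)

    try:
        from multiprocessing import cpu_count
        auto_workers = cpu_count() or 1       # cpu_count() can report 0
    except (ImportError, NotImplementedError):
        auto_workers = 1                      # python built without multiprocessing

    conf = {'workers': 'auto'}                # e.g. the [DEFAULT] section value
    workers = config_auto_int_value(conf.get('workers'), auto_workers)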
* drive by s/container/proxy/ in proxy-server.conf.5
* fix misplaced max_clients in *-server.conf-sample
* update doc/development_saio to force workers = 1
DocImpact
Change-Id: Ifa563d22952c902ab8cbe1d339ba385413c54e95
For systems with very large numbers of partitions, 1% dispersion
coverage may simply be too much/take too long. This fix allows <1
values to be used for dispersion_coverage.
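For a sense of scale (example numbers only, not from this change):

    part_power = 22
    partitions = 2 ** part_power           # 4,194,304 partitions in the ring
    print(int(partitions * 1.0 / 100))     # 1% coverage   -> 41943 partitions
    print(int(partitions * 0.1 / 100))     # 0.1% coverage -> 4194 partitions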
DocImpact
Change-Id: I5ed35b69754d55a410e66e658b3854de57c7666b
This reverts commit 68cb91097b75a92237bd90caffcd405c3e83cb53
Just so this does not get forgotten in the tree...
We are using daemon mode, and chunked is not supported in this mode.
In the past couple of years, the XFS team has greatly improved inode use in
xfs. With more recent kernels, there is no performance penalty for
using the default inode size, and a smaller inode size gives us
improvements in other areas where disk access is involved.
DocImpact
Change-Id: Ie9da53a6e8bf43d1d02881befbb52595462c9f2e