614 Commits

Author SHA1 Message Date
John Dickinson
038878b1a4 clarify the current state of the DiskFile API
Change-Id: Ia3d62c53d14c9a5efdb9a39b08817320cf331085
2013-12-09 10:33:53 -08:00
Donagh McCabe
4873fcf626 Opt out of the service catalog
Set include_service_catalog=False in Keystone's auth_token
example configuration. Swift does not use X-Service-Catalog
so there is no need to suffer its overhead. In addition,
service catalogs can be larger than max_header_size so this
change avoids a failure mode.
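
For reference, a minimal sketch of the relevant proxy-server.conf
filter section (assuming the keystoneclient-era auth_token middleware;
other options elided):

    [filter:authtoken]
    paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
    # ... auth_host, admin_user, admin_password, etc. as usual ...
    include_service_catalog = False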

DocImpact
Relates to bug 1228317

Change-Id: If94531ee070e4a47cbd9b848d28e2313730bd3c0
2013-12-04 12:18:54 +00:00
Jenkins
34b4bf34d2 Merge "Added discoverable capabilities." 2013-11-28 00:11:35 +00:00
Richard (Rick) Hawkins
2c4bf81464 Added discoverable capabilities.
Swift can now optionally be configured to allow requests to '/info',
providing information about the swift cluster. Additionally, HMAC-signed
requests to '/info?swiftinfo_sig=<sign>&swiftinfo_expires=<expires>'
can be configured, allowing privileged access to more sensitive
information not meant to be public.
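
As an illustration only (the exact signing scheme is defined by the
middleware, not here), such a signature can be computed along the
lines of Swift's tempurl-style HMACs; all names below are for the
sketch:

    import hmac
    import time
    from hashlib import sha1

    def swiftinfo_sig(key, expires, method='GET', path='/info'):
        # Assumed scheme: HMAC-SHA1 over "METHOD\nEXPIRES\nPATH",
        # mirroring Swift's tempurl signatures. (Python 2 era: str keys.)
        msg = '%s\n%s\n%s' % (method, expires, path)
        return hmac.new(key, msg, sha1).hexdigest()

    expires = int(time.time()) + 60
    sig = swiftinfo_sig('admin_secret', expires)
    print('/info?swiftinfo_sig=%s&swiftinfo_expires=%d' % (sig, expires))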

DocImpact
Change-Id: I2379360fbfe3d9e9e8b25f1dc34517d199574495
Implements: blueprint capabilities
Closes-Bug: #1245694
2013-11-22 15:54:13 -06:00
gholt
c859ebf5ce Per device replication_lock
A new replication_one_per_device option (True by
default) restricts incoming REPLICATION requests to
one per device, replication_concurrency allowing.

Also has replication_lock_timeout (15 by default)
to control how long a request will wait to obtain
a replication device lock before giving up.

This should be very useful in that you can be
assured any concurrent REPLICATION requests are
each writing to distinct devices. If you have 100
devices on a server, you can set
replication_concurrency to 100 and be confident
that, even if 100 replication requests were
executing concurrently, they'd each be writing to
separate devices. Before, all 100 could end up
writing to the same device, bringing it to a
horrible crawl.

NOTE: This is only for ssync replication. The
current default rsync replication still has the
potentially horrible behavior.
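
A rough sketch of the per-device lock idea (illustrative only; the
lock file name and polling loop here are assumptions, not the actual
implementation):

    import errno
    import fcntl
    import os
    import time
    from contextlib import contextmanager

    @contextmanager
    def replication_lock(device_path, timeout=15):
        # One flock()ed file per device: at most one REPLICATION
        # request can hold a given device at a time.
        fd = os.open(os.path.join(device_path, '.replication_lock'),
                     os.O_WRONLY | os.O_CREAT)
        try:
            deadline = time.time() + timeout
            while True:
                try:
                    fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
                    break
                except IOError as err:
                    if err.errno not in (errno.EACCES, errno.EAGAIN):
                        raise
                    if time.time() >= deadline:
                        raise  # replication_lock_timeout exceeded
                    time.sleep(0.1)
            yield
        finally:
            os.close(fd)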

Change-Id: I36e99a3d7e100699c76db6d3a4846514537ff685
2013-11-22 21:40:29 +00:00
Chmouel Boudjnah
8255810e7d Add swiftsync.
Change-Id: Ie588db762aedc55e00f8f51b3d2e329ba9a08a6c
2013-11-15 16:32:52 -05:00
Jenkins
af2f8295fd Merge "Add more stuff to SAIO doc's proxy pipeline." 2013-11-12 09:12:42 +00:00
gholt
a80c720af5 Object replication ssync (an rsync alternative)
For this commit, ssync is just a direct replacement for how
we use rsync. Assuming we switch over to ssync completely
someday and drop rsync, we will then be able to improve the
algorithms even further (removing local objects as we
successfully transfer each one rather than waiting for whole
partitions, using an index.db with hash-trees, etc., etc.)

For easier review, this commit can be thought of in distinct
parts:

1)  New global_conf_callback functionality for allowing
    services to perform setup code before workers, etc. are
    launched. (This is then used by ssync in the object
    server to create a cross-worker semaphore to restrict
    concurrent incoming replication; see the sketch
    after this list.)

2)  A bit of shifting of items up from object server and
    replicator to diskfile or DEFAULT conf sections for
    better sharing of the same settings: conn_timeout,
    node_timeout, client_timeout, network_chunk_size,
    disk_chunk_size.

3)  Modifications to the object server and replicator to
    optionally use ssync in place of rsync. This is done in
    a generic enough way that switching to FutureSync should
    be easy someday.

4)  The biggest, and (at least for now) completely
    optional, part is the new ssync_sender and
    ssync_receiver files. Nice and isolated for easier
    testing and visibility into test coverage, etc.
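
A minimal sketch of the part-1 hook (option name and default here are
assumptions for illustration):

    import multiprocessing

    def global_conf_callback(preloaded_app_conf, global_conf):
        # Runs once, before workers are forked, so the semaphore is
        # inherited by, and shared across, all object-server workers.
        concurrency = int(
            preloaded_app_conf.get('replication_concurrency') or 4)
        if concurrency:
            global_conf['replication_semaphore'] = [
                multiprocessing.BoundedSemaphore(concurrency)]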

All the usual logging, statsd, recon, etc. instrumentation
is still there when using ssync, just as it is when using
rsync.

Beyond the essential error and exceptional condition
logging, I have not added any additional instrumentation at
this time. Unless there is something someone finds super
pressing to have added to the logging, I think such
additions would be better as separate change reviews.

FOR NOW, IT IS NOT RECOMMENDED TO USE SSYNC ON PRODUCTION
CLUSTERS. Some of us will be running it in a limited
fashion to look for any subtle issues, tuning, etc., but generally ssync is
an experimental feature. In its current implementation it is
probably going to be a bit slower than rsync, but if all
goes according to plan it will end up much faster.

There are no comparisons yet between ssync and rsync other
than some raw virtual machine testing I've done to show it
should compete well enough once we can put it in use in the
real world.

If you Tweet, Google+, or whatever, be sure to indicate it's
experimental. It'd be best to keep it out of deployment
guides, howtos, etc. until we all figure out if we like it,
find it to be stable, etc.

Change-Id: If003dcc6f4109e2d2a42f4873a0779110fff16d6
2013-11-07 16:52:01 +00:00
Samuel Merritt
26483a2fd1 Add more stuff to SAIO doc's proxy pipeline.
If you're setting one of these up, you're probably going to use it for
development, in which case you want everything but the kitchen sink
turned on so you can just start hacking away.

Change-Id: I98d178ff545cbf8d853c102e9fce76fb9f6773ac
2013-11-05 16:15:07 -08:00
Jenkins
920680ffdd Merge "Faster swift-dispersion-populate" 2013-10-22 17:55:59 +00:00
Peter Portante
5202b0e586 DiskFile API, with reference implementation
Refactor on-disk knowledge out of the object server by pushing the
async update pickle creation to the new DiskFileManager class (name is
not the best, so suggestions welcome), along with the REPLICATE
method logic. We also move the mount checking and thread pool storage
to the new ondisk.Devices object, which then also becomes the new home
of the audit_location_generator method.

For the object server, a new setup() method is now called at the end
of the controller's construction, and the _diskfile() method has been
renamed to get_diskfile(), to allow implementation-specific behavior.

We then hide the need for the REST API layer to know how and where
quarantining needs to be performed. There are now two places it is
checked internally: on open(), where we verify the content-length,
name, and x-timestamp metadata; and in the reader on close(), where
the etag metadata is checked if the entire file was read.

We add a reader class to allow implementations to isolate the WSGI
handling code for that specific environment (it is used nowhere else
in the REST APIs). This simplifies the caller's code: once the file
is opened, a single "with" statement avoids multiple points where
close needs to be called.
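
A hypothetical call pattern for a GET, per the description above (a
sketch of the intended usage, not the definitive API surface):

    def serve_get(mgr, device, partition, account, container, obj):
        df = mgr.get_diskfile(device, partition, account, container, obj)
        with df.open():              # verifies content-length, name, and
            reader = df.reader()     # x-timestamp; may quarantine
        for chunk in reader:         # DiskFileReader doubles as the WSGI
            yield chunk              # app iterator; its close() checks
                                     # the etag if the whole file was read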

For a full historical comparison, including the usage patterns see:
https://gist.github.com/portante/5488238

(as of master, 2b639f5, Merge   |  This Commit
 "Fix 500 from account-quota    |
 middleware")                   |
--------------------------------+------------------------------------
                                 DiskFileManager(conf)

                                   Methods:
                                     .pickle_async_update()
                                     .get_diskfile()
                                     .get_hashes()

                                   Attributes:
                                     .devices
                                     .logger
                                     .disk_chunk_size
                                     .keep_cache_size
                                     .bytes_per_sync

DiskFile(a,c,o,keep_data_fp=)    DiskFile(a,c,o)

  Methods:                         Methods:
   *.__iter__()
    .close(verify_file=)
    .is_deleted()
    .is_expired()
    .quarantine()
    .get_data_file_size()
                                     .open()
                                     .read_metadata()
    .create()                        .create()
                                     .write_metadata()
    .delete()                        .delete()

  Attributes:                      Attributes:
    .quarantined_dir
    .keep_cache
    .metadata
                                *DiskFileReader()

                                   Methods:
                                     .__iter__()
                                     .close()

                                   Attributes:
                                    +.was_quarantined

DiskWriter()                     DiskFileWriter()

  Methods:                         Methods:
    .write()                         .write()
    .put()                           .put()

* Note that the DiskFile class   * Note that the DiskReader() object
  implements all the methods       returned by the
  necessary for a WSGI app         DiskFileOpened.reader() method
  iterator                         implements all the methods
                                   necessary for a WSGI app iterator

                                 + Note that if the auditor is
                                   refactored to not use the DiskFile
                                   class, see
                                   https://review.openstack.org/44787
                                   then we don't need the
                                   was_quarantined attribute

A reference "in-memory" object server implementation of a backend
DiskFile class in swift/obj/mem_server.py and
swift/obj/mem_diskfile.py.

One can also reference
https://github.com/portante/gluster-swift/commits/diskfile for the
proposed integration with the gluster-swift code based on these
changes.

Change-Id: I44e153fdb405a5743e9c05349008f94136764916
Signed-off-by: Peter Portante <peter.portante@redhat.com>
2013-10-17 15:03:31 -04:00
Peter Portante
9411a24ba7 Revert "Refactor common/utils methods to common/ondisk"
This reverts commit 7760f41c3ce436cb23b4b8425db3749a3da33d32

Change-Id: I95e57a2563784a8cd5e995cc826afeac0eadbe62
Signed-off-by: Peter Portante <peter.portante@redhat.com>
2013-10-07 17:18:09 -04:00
Jenkins
0b594bc3af Merge "Change OpenStack LLC to Foundation" 2013-10-07 16:09:37 +00:00
Jenkins
30d8ff3ccc Merge "Fedora 19: need to use /etc/rc.d/rc.local" 2013-10-07 06:44:19 +00:00
Jenkins
95b5d6f443 Merge "Remove sphinx build warnings" 2013-10-07 06:43:54 +00:00
Peter Portante
3d8f0f1805 Fedora 19: need to use /etc/rc.d/rc.local
Change-Id: I80e9a4c40ff99ec09a8eeef935447c6393ea78ec
Signed-off-by: Peter Portante <peter.portante@redhat.com>
2013-10-03 11:36:31 -04:00
Peter Portante
db4547d01d Remove sphinx build warnings
Change-Id: Ic34bbd9cc65d96ea9b8434be7b54e5bcfae28b63
Signed-off-by: Peter Portante <peter.portante@redhat.com>
2013-10-03 11:21:26 -04:00
Jenkins
34340ddf49 Merge "Add "note" box callouts to SAIO for user changes." 2013-10-02 22:33:53 +00:00
Jenkins
9dc0886bc6 Merge "Pool memcache connections" 2013-10-02 21:46:23 +00:00
Clay Gerrard
02e247c1b3 Add "note" box callouts to SAIO for user changes.
The SAIO is purposely cut into two parts so that you don't have to switch
back and forth between root and your unprivileged user. Add some "note" box
callouts to highlight this changeover.

Change-Id: I8b1a8f0539eac60d4121bdd4dab01df75ecca207
2013-10-02 11:39:35 -07:00
Chuck Thier
ae8470131e Pool memcache connections
This creates a pool to each memcache server so that connections will not
grow without bound.  This also adds a proxy config
"max_memcache_connections" which can control how many connections are
available in the pool.

A side effect of the change is that we had to change the memcache calls
that used noreply, and instead wait for the result of the request.
Leaving with noreply could cause a race condition (specifically in
account auto create), due to one request calling `memcache.delete(key)` and
then `memcache.get(key)` with a different pooled connection.  If the
delete didn't complete fast enough, the get would return the old value
before it was deleted, and thus believe that the account was not
autocreated.
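
A toy version of the pooling idea (hypothetical class; the real pool
must also be safe under eventlet concurrency):

    import socket
    from Queue import Queue  # Python 2, era-appropriate

    class MemcacheConnPool(object):
        # Grow lazily up to max_memcache_connections, then block
        # until a connection is returned to the pool.
        def __init__(self, server, max_size=2):
            self.server = server  # e.g. '127.0.0.1:11211'
            self.max_size = max_size
            self._created = 0
            self._q = Queue()

        def get(self):
            if self._q.empty() and self._created < self.max_size:
                host, port = self.server.split(':')
                self._created += 1
                return socket.create_connection((host, int(port)))
            return self._q.get()  # blocks until a conn is put() back

        def put(self, conn):
            self._q.put(conn)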

ClaysMindExploded
DocImpact
Change-Id: I350720b7bba29e1453894d3d4105ac1ea232595b
2013-10-02 02:08:04 +00:00
Peter Portante
e8a07c4ca7 Fedora 19 updates
Change-Id: I95138852e45aa7632218a7107e0e7ba1f6ef373c
Signed-off-by: Peter Portante <peter.portante@redhat.com>
2013-09-30 10:23:04 -04:00
Samuel Merritt
d9d7b2135a Install libffi-dev in SAIO docs.
If you don't, then newer versions of xattr won't install, and since
our xattr requirement is simply ">= 0.4" in requirements.txt, this
affects anyone setting up a new SAIO.

This happened with xattr 0.7, which was released on 2013-07-19.

Change-Id: Iaf335fa25a2908953d1fd218158ebedf5d01cc27
2013-09-24 16:54:58 -07:00
Samuel Merritt
ce5e810fed Update SAIO doc to have double proxy-logging in pipeline.
Change-Id: I0a034ca1420761cbf4e35dcea1d9cd18a92f90bd
2013-09-24 16:54:58 -07:00
ZhiQiang Fan
f72704fc82 Change OpenStack LLC to Foundation
Change-Id: I7c3df47c31759dbeb3105f8883e2688ada848d58
Closes-bug: #1214176
2013-09-20 01:02:31 +08:00
Peter Portante
7760f41c3c Refactor common/utils methods to common/ondisk
Place all the methods related to on-disk layout and / or configuration
into a new common module that can be shared by the various modules
using the same on-disk layout.

Change-Id: I27ffd4665d5115ffdde649c48a4d18e12017e6a9
Signed-off-by: Peter Portante <peter.portante@redhat.com>
2013-09-17 17:32:04 -04:00
Chuck Thier
a30a7ced9c Add handoffs_first and handoff_delete to obj-repl
If handoffs_first is True, then the object replicator will give priority
to partitions that are not supposed to be on the node.

If handoff_delete is set to a number (n), then it will delete a handoff
partition if at least n replicas were successfully replicated.

Also fixed a couple of things in the object replicator unit tests and
added some more.
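
Sketches of both behaviors as described (hypothetical helpers; job
dicts mark handoffs with a true 'delete' flag for illustration):

    import random

    def order_jobs(jobs, handoffs_first=False):
        random.shuffle(jobs)
        if handoffs_first:
            # False sorts before True, so handoff jobs come first.
            jobs.sort(key=lambda job: not job['delete'])
        return jobs

    def can_delete_handoff(successes, replica_count, handoff_delete=None):
        # Default: every replica must succeed before the local handoff
        # copy is removed; handoff_delete=n relaxes that to n successes.
        required = handoff_delete if handoff_delete else replica_count
        return successes >= required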

DocImpact

Change-Id: Icb9968953cf467be2a52046fb16f4b84eb5604e4
2013-09-13 15:44:07 +00:00
Jenkins
35d94ee90e Merge "Man page lintian errors and warnings" 2013-09-11 18:09:42 +00:00
Tobias Stevenson
83a6ec1683 Man page lintian errors and warnings
Used groff to recreate the errors. I believe all the issues
except `binary-without-manpage` are solved. Would like
confirmation from someone using Lintian.

Closes-Bug: #1210114
Change-Id: I533205c53efdb7cdf3645cc3e3dc487f9ee5640a
2013-09-11 09:21:23 -05:00
Pete Zaitcev
d4b024ad7d Split backends off swift/common/db.py
The main purpose of this patch is to lay the groundwork for allowing
the container and account servers to optionally use pluggable backend
implementations. The backend.py files will eventually be the module
where the backend APIs are defined via docstrings of this reference
implementation. The swift/common/db.py module will remain an internal
module used by the reference implementation.

We have a raft of changes to docstrings staged for later, but this
patch takes care to relocate ContainerBroker and AccountBroker into
their new home intact.

Change-Id: Ibab5c7605860ab768c8aa5a3161a705705689b04
2013-09-10 13:30:28 -06:00
Florian Hines
42f4b150e3 Faster swift-dispersion-populate
- Makes swift-dispersion-populate a bit faster when using a larger
  dispersion_coverage with a larger part_power.
- Adds option to only run population for container OR objects
- Adds option to let you resume population at a given point (useful if you
  need to resume population after a previous run errored out or the
  like) by specifying which suffix to start at.

The original populate just randomly used uuid4().hex as a suffix on the
container/object names until all the required partitions were covered.
This isn't a big deal if you're only doing 1% coverage on a ring with a
small part power, but it takes ages if you're doing 100% on a larger ring.
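
A toy model of why random suffixes get slow at full coverage (not the
real ring math, just the coupon-collector effect):

    from hashlib import md5
    from uuid import uuid4

    part_power = 8
    needed = set(range(2 ** part_power))
    tries = 0
    while needed:
        # Random names hit random partitions; the last few uncovered
        # partitions take disproportionately many tries.
        name = 'dispersion_%s' % uuid4().hex
        part = int(md5(name).hexdigest(), 16) >> (128 - part_power)
        needed.discard(part)
        tries += 1
    print('%d names to cover %d partitions' % (tries, 2 ** part_power))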

Change-Id: I52f890a774412c1d6179f12db9081aedc58b6bc2
2013-09-05 18:12:15 -05:00
Jenkins
621ea520a5 Merge "Added container listing ratelimiting" 2013-08-23 15:46:53 +00:00
gholt
52eca4d8a7 Implements configurable swift_owner_headers
These are headers that will be stripped unless the WSGI environment
contains a true value for 'swift_owner'. The exact definition of a
swift_owner is up to the auth system in use, but usually indicates
administrative responsibilities.
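
The stripping logic amounts to something like this sketch (hypothetical
helper; header names are examples only):

    def filtered_headers(headers, owner_headers, swift_owner):
        # Drop the configured sensitive headers unless the auth system
        # set a true 'swift_owner' in the WSGI environment.
        if swift_owner:
            return dict(headers)
        return dict((k, v) for k, v in headers.items()
                    if k.lower() not in owner_headers)

    headers = {'X-Container-Sync-Key': 'secret', 'X-Timestamp': '123'}
    print(filtered_headers(headers, ('x-container-sync-key',), False))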

DocImpact

Change-Id: I972772fbbd235414e00130ca663428e8750cabca
2013-08-15 16:42:58 -07:00
gholt
c8795e6e85 Added container listing ratelimiting
Change-Id: If4e9cfe4e4c743de1f39704acf849164cf3f0bd0
2013-08-14 12:40:25 +00:00
Chmouel Boudjnah
716ad3e07b Add libcloud to associated_projects.
Change-Id: I8778bbecc7ae5852cf789ae6b71191004f69861f
2013-08-09 17:18:40 +02:00
John Dickinson
e8a593fbf0 added a couple of java libraries to associated projects
Change-Id: I7a554af509e8d9743a8416a051845c266e1fb9f6
2013-08-09 07:50:28 -07:00
Vincent Untz
7f1aa9d1e8 Allow dispersion tools to use keystone server with insecure certificate
The swift-dispersion-populate and swift-dispersion-report tools now
accept a --insecure option.

Also, dispersion.conf now has a keystone_api_insecure option.

Default is obviously to use the secure path.

DocImpact

Change-Id: I4000352e547d9ce5b08ade54e0c886281caff891
2013-08-05 22:44:12 +02:00
Koert van der Veer
657a0e4e26 Add swift-basicauth and better-staticweb to associated projects.
As announced here:
http://openstack.markmail.org/thread/jjjbdpurhb5jwaxe

Change-Id: I683c463745b34c003ecb79ba8261b94b9dc166b6
2013-08-05 09:57:50 +02:00
Jenkins
a2126add0b Merge "Set default wsgi workers to cpu_count" 2013-07-30 19:12:28 +00:00
Jenkins
82aeacf5bf Merge "Allow floating point value for dispersion_coverage" 2013-07-29 21:53:19 +00:00
Jenkins
3349013dff Merge "Configuration options for error regex and log file in the config now" 2013-07-29 18:23:39 +00:00
Marcelo Martins
d2dd3e5488 Configuration options for error regex and log file in the config now
Making it possible to override the default set of regexes used to
search for device block errors in the log file. Also making the log
file naming pattern configurable by setting it in the
drive-audit.conf file.

Updating "Detecting Failed Drives" section on the admin guide as well.

Change-Id: I7bd3acffed196da3e09db4c9dcbb48a20bdd1cf0
2013-07-23 07:24:29 -05:00
Clay Gerrard
de3acec4bf Set default wsgi workers to cpu_count
Change the default value of wsgi workers from 1 to auto. The new default
value for workers in the proxy, container, account & object wsgi servers
will spawn as many workers as you have cpu cores.

This will not be ideal for some configurations, but it's much more likely
to produce a successful out-of-the-box deployment.

Inspect the number of cpu cores using python's multiprocessing when
available. Multiprocessing was added in python 2.6, but I know I've
compiled python without it before by accident. The cpu_count method
seems to be pretty system-agnostic, but it is documented to raise
NotImplementedError or sometimes return 0.

Add a new utility method 'config_auto_int_value' to pull an integer out of the
config which has a dynamic default.
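
A sketch of the utility's assumed behavior:

    def config_auto_int_value(value, default):
        # 'auto' (or an unset value) resolves to the dynamic default;
        # anything else must parse as an integer.
        if value is None or str(value).lower() == 'auto':
            return default
        return int(value)

    try:
        from multiprocessing import cpu_count
        auto_workers = cpu_count() or 1
    except (ImportError, NotImplementedError):
        auto_workers = 1  # fall back when cpu_count is unavailable
    workers = config_auto_int_value('auto', auto_workers)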

 * drive by s/container/proxy/ in proxy-server.conf.5
 * fix misplaced max_clients in *-server.conf-sample
 * update doc/development_saio to force workers = 1

DocImpact

Change-Id: Ifa563d22952c902ab8cbe1d339ba385413c54e95
2013-07-18 22:57:18 -07:00
Chmouel Boudjnah
18a0813d9b Add documentation about flake8+hacking.
- Fixes bug 1201431.

Change-Id: If025a41caf3a629b9efb4d67c53c423796d37a91
2013-07-15 17:14:16 +02:00
Jenkins
72faf7b86d Merge "Revert "docfix apache2 now supports client chunked encodin"" 2013-07-09 19:40:33 +00:00
Thomas Leaman
5449155fb0 Allow floating point value for dispersion_coverage
For systems with very large numbers of partitions, 1% dispersion
coverage may simply be too much or take too long. This fix allows
values below 1 to be used for dispersion_coverage.
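
For a sense of scale (illustrative arithmetic only):

    part_power = 22
    parts_total = 2 ** part_power  # 4,194,304 partitions
    for coverage in (1, 0.1):      # percent; fractions now allowed
        print('%4.1f%% -> %d partitions to populate' % (
            coverage, int(parts_total * coverage / 100.0)))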

DocImpact

Change-Id: I5ed35b69754d55a410e66e658b3854de57c7666b
2013-07-09 09:29:07 +00:00
David Hadas
fcfb8012cd Revert "docfix apache2 now supports client chunked encodin"
This reverts commit 68cb91097b75a92237bd90caffcd405c3e83cb53

Just so this does not get forgotten in the tree...

We are using daemon mode, and chunked encoding is not supported in
that mode.
2013-07-02 22:18:32 +00:00
John Dickinson
9b2ee07ee3 small cleanup to associated projects page
Change-Id: I5d6d6d6c32b6573474288897f6fa174b6f150183
2013-07-01 13:40:37 -07:00
Jenkins
4a90414fc7 Merge "docfix apache2 now supports client chunked encodin" 2013-06-29 01:21:10 +00:00
Chuck Thier
581f7f5517 Update docs to use default XFS inode size
In the past couple of years, the XFS team has greatly improved inode
use in xfs. With more recent kernels, there is no performance penalty
for using the default inode size, and a smaller inode size gives us
improvements in other areas where disk access is involved.

DocImpact

Change-Id: Ie9da53a6e8bf43d1d02881befbb52595462c9f2e
2013-06-28 19:52:17 +00:00