Better optimistic lock in get_hashes (commit aa71d7e77b, Clay Gerrard)
mtime and force_rewrite have a *long* tangled history starting back in
lp bug #1089140 that's been carried through many refactors.

Using force_rewrite on errors reading from the pickle has always been a
read-modify-write race; but maybe less bad than the infinite recursion
bug it fixed?

Using getmtime has always had somewhat dubious resolution for race
detection - the only way to be sure the content of the file is the same
as when we read it without locking is to open the file up and check.

Unfortunately, the on-disk data wasn't rich enough to disambiguate when
the on-disk state may have changed (e.g. when an invalidation for a
suffix currently being hashed is consolidated, or when all hashes are
invalid such as after an error reading the hashes.pkl) - so we also add
a key with a timestamp for race detection and record whether the
dictionary has any valid suffix hashes.
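
As a rough illustration (the key names below are assumptions made for
the sake of the example, not necessarily the exact on-disk format this
change introduces), the consolidated dictionary might look something
like:

    import time

    # Illustrative only: 'valid' and 'updated' are assumed names for the new
    # bookkeeping described above, alongside the usual suffix -> hash entries.
    hashes = {
        'a83': 'd41d8cd98f00b204e9800998ecf8427e',  # hashed suffix
        '0f2': None,                                # invalidated, needs rehash
        'valid': True,           # False after e.g. an unreadable hashes.pkl
        'updated': time.time(),  # written under lock; used for race detection
    }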

Along the way, we accidentally fix a serious performance regression with
hash invalidations...

We currently rehash all invalid suffixes twice on REPLICATE calls.

First we consolidate hashes, marking all invalid suffixes as None, and
then perform the first suffix rehashing.

Then, *every time*, immediately after finishing that first pass, we
throw all the work we just did on the floor and rehash ALL the invalid
suffixes *again* a second time, because the race detector erroneously
notices the hashes.pkl file has been "modified while we were hashing".

But we're not in a race.  We took the mtime before calling consolidate
hashes, and consolidate hashes modified the pickle when it wrote back the
invalid suffixes.

FWIW, since consolidate hashes operates under the directory lock it
can't race - but we don't want suffix rehashing to hold the directory
lock that long, so we use optimistic locking - i.e. we optimistically
perform the rehash without a lock and write back the update if and only
if it hasn't changed since we read it; if it has, we retry the whole
operation.
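
Schematically (the helper names and the lock below are hypothetical
sketch code, not Swift's actual diskfile implementation), the
optimistic loop looks something like:

    import os
    import pickle
    import fcntl
    from contextlib import contextmanager

    @contextmanager
    def _lock(dirpath):
        # Hypothetical directory lock; Swift has its own locking helpers.
        fd = os.open(dirpath, os.O_RDONLY)
        try:
            fcntl.flock(fd, fcntl.LOCK_EX)
            yield
        finally:
            fcntl.flock(fd, fcntl.LOCK_UN)
            os.close(fd)

    def get_hashes(partition_dir, rehash_suffixes):
        """Schematic optimistic-lock loop; rehash_suffixes is a stand-in
        for the (slow) suffix hashing work."""
        pkl_path = os.path.join(partition_dir, 'hashes.pkl')
        while True:
            with _lock(partition_dir):
                with open(pkl_path, 'rb') as f:
                    hashes = pickle.load(f)        # read/consolidate under lock
            original = dict(hashes)
            hashes.update(rehash_suffixes(hashes))  # slow rehash *without* lock
            with _lock(partition_dir):
                with open(pkl_path, 'rb') as f:
                    current = pickle.load(f)
                if current == original:             # unchanged since we read it
                    with open(pkl_path, 'wb') as f:
                        pickle.dump(hashes, f)      # safe to write back
                    return hashes
            # something changed while we were rehashing -> retry from the top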

UpgradeImpact:

If you upgrade and need to rollback - delete all hashes.pkl:

    rm /srv/node*/*/object*/*/hashes.pkl

Anything of significance achieved here was blatantly plagiarized from
the work of others:

Co-Author: Pavel Kvasnička <pavel.kvasnicka@firma.seznam.cz>
Related-Change-Id: I64cadb1a3feb4d819d545137eecfc295389794f0
Co-Author: Alistair Coles <alistair.coles@hpe.com>
Related-Change-Id: I8f6bb89beaaca3beec2e6063299189f52e9eee51
Related-Change-Id: I08c8cf09282f737103e580c1f57923b399abe58c

Change-Id: Ia43ec2cf7ab715ec37f0044625a10aeb6420f6e3

Team and repository tags


Swift

A distributed object storage system designed to scale from a single machine to thousands of servers. Swift is optimized for multi-tenancy and high concurrency. Swift is ideal for backups, web and mobile content, and any other unstructured data that can grow without bound.

Swift provides a simple, REST-based API fully documented at http://docs.openstack.org/.

Swift was originally developed as the basis for Rackspace's Cloud Files and was open-sourced in 2010 as part of the OpenStack project. It has since grown to include contributions from many companies and has spawned a thriving ecosystem of 3rd party tools. Swift's contributors are listed in the AUTHORS file.

Docs

To build the documentation, install sphinx (pip install sphinx), run python setup.py build_sphinx, and then browse to /doc/build/html/index.html. These docs are auto-generated after every commit and available online at http://docs.openstack.org/developer/swift/.

For Developers

Getting Started

Swift is part of OpenStack and follows the code contribution, review, and testing processes common to all OpenStack projects.

If you would like to start contributing, check out these notes to help you get started.

The best place to get started is the "SAIO - Swift All In One". This document will walk you through setting up a development cluster of Swift in a VM. The SAIO environment is ideal for running small-scale tests against swift and trying out new features and bug fixes.

Tests

There are three types of tests included in Swift's source tree.

  1. Unit tests
  2. Functional tests
  3. Probe tests

Unit tests check that small sections of the code behave properly. For example, a unit test may test a single function to ensure that various input gives the expected output. This validates that the code is correct and regressions are not introduced.
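
For illustration, here is a minimal sketch of a unit test in that
style; quorum_size is a made-up helper defined inline for the example,
not part of Swift's API:

    import unittest

    def quorum_size(n):
        # Made-up helper used only to illustrate the shape of a unit test.
        return (n // 2) + 1

    class TestQuorumSize(unittest.TestCase):
        def test_various_replica_counts(self):
            # Check that a range of inputs gives the expected outputs.
            for replicas, expected in [(1, 1), (2, 2), (3, 2), (4, 3), (5, 3)]:
                self.assertEqual(expected, quorum_size(replicas))

    if __name__ == '__main__':
        unittest.main()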

Functional tests check that the client API is working as expected. These can be run against any endpoint claiming to support the Swift API (although some tests require multiple accounts with different privilege levels). These are "black box" tests that ensure that client apps written against Swift will continue to work.

Probe tests are "white box" tests that validate the internal workings of a Swift cluster. They are written to work against the "SAIO - Swift All In One" dev environment. For example, a probe test may create an object, delete one replica, and ensure that the background consistency processes find and correct the error.

You can run unit tests with .unittests, functional tests with .functests, and probe tests with .probetests. There is an additional .alltests script that wraps the other three.

Code Organization

  • bin/: Executable scripts that are the processes run by the deployer
  • doc/: Documentation
  • etc/: Sample config files
  • examples/: Config snippets used in the docs
  • swift/: Core code
    • account/: account server
    • cli/: code that backs some of the CLI tools in bin/
    • common/: code shared by different modules
      • middleware/: "standard", officially-supported middleware
      • ring/: code implementing Swift's ring
    • container/: container server
    • locale/: internationalization (translation) data
    • obj/: object server
    • proxy/: proxy server
  • test/: Unit, functional, and probe tests

Data Flow

Swift is a WSGI application and uses eventlet's WSGI server. After the processes are running, the entry point for new requests is the Application class in swift/proxy/server.py. From there, a controller is chosen, and the request is processed. The proxy may choose to forward the request to a back-end server. For example, the entry point for requests to the object server is the ObjectController class in swift/obj/server.py.
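
As a rough illustration of the pattern (this is not Swift's actual
startup code), a bare-bones eventlet WSGI application and server look
something like this:

    import eventlet
    from eventlet import wsgi

    class Application(object):
        """Toy WSGI application; Swift's real proxy Application lives in
        swift/proxy/server.py and dispatches requests to controllers."""

        def __call__(self, env, start_response):
            # A real controller would be chosen from the request path here.
            start_response('200 OK', [('Content-Type', 'text/plain')])
            return [b'Hello from a WSGI app\n']

    if __name__ == '__main__':
        # eventlet's WSGI server handles each connection in a green thread.
        wsgi.server(eventlet.listen(('127.0.0.1', 8080)), Application())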

For Deployers

Deployer docs are also available at http://docs.openstack.org/developer/swift/. A good starting point is the deployment guide at http://docs.openstack.org/developer/swift/deployment_guide.html.

There is an ops runbook that gives information about how to diagnose and troubleshoot common issues when running a Swift cluster.

You can run functional tests against a swift cluster with .functests. These functional tests require /etc/swift/test.conf to run. A sample config file can be found in this source tree in test/sample.conf.

For Client Apps

For client applications, official Python language bindings are provided at http://github.com/openstack/python-swiftclient.

Complete API documentation is available at http://docs.openstack.org/api/openstack-object-storage/1.0/content/.
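
For example, a small usage sketch of the python-swiftclient bindings
(the auth URL and credentials below are placeholders for a dev cluster,
and exact call signatures may vary between client versions):

    from swiftclient import client

    # Placeholder credentials for a v1 auth endpoint (e.g. a SAIO dev cluster).
    conn = client.Connection(authurl='http://127.0.0.1:8080/auth/v1.0',
                             user='test:tester', key='testing')

    # Create a container, upload an object, then read it back.
    conn.put_container('photos')
    conn.put_object('photos', 'hello.txt', contents=b'hello world',
                    content_type='text/plain')

    headers, body = conn.get_object('photos', 'hello.txt')
    print(body)  # b'hello world'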

There is a large ecosystem of applications and libraries that support and work with OpenStack Swift. Several are listed on the associated projects page.


For more information, come hang out in #openstack-swift on freenode.

Thanks,

The Swift Development Team
