Users have complained for a while that Swift's ETags don't match the
expected RFC formats. We've resisted fixing this for just as long,
worrying that the fix would break innumerable clients that expect the
value to be a hex-encoded MD5 digest and *nothing else*.
But users keep asking for it, and some consumers (including some CDNs)
break if we *don't* have quoted ETags -- so, let's make it an option.
With this middleware, Swift users can set metadata per-account or even
per-container to explicitly request RFC-compliant ETags or not. Swift
operators also get an option to change the default behavior
cluster-wide; it defaults to the old, non-compliant format.
See also:
- https://tools.ietf.org/html/rfc2616#section-3.11
- https://tools.ietf.org/html/rfc7232#section-2.3
Closes-Bug: 1099087
Closes-Bug: 1424614
Co-Authored-By: Tim Burke <tim.burke@gmail.com>
Change-Id: I380c6e34949d857158e11eb428b3eda9975d855d
Adds a tool, swift-container-deleter, that takes an account/container
and optional prefix, marker, and/or end-marker; spins up an internal
client; makes listing requests against the container; and pushes the
found objects into the object-expirer queue with a special
application/async-deleted content-type.
In order to do this enqueuing efficiently, a new internal-to-the-cluster
container method is introduced: UPDATE. It takes a JSON list of object
entries and runs them through merge_items.
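For example, the UPDATE request body is a JSON list of object records;
a sketch of what one enqueued entry might look like (the field names
follow the container backend's merge_items records and the queue entry
name format is illustrative, so treat both as assumptions):

    [{"name": "1525400000-AUTH_test/con/obj",
      "created_at": "1525400000.00000",
      "size": 0,
      "content_type": "application/async-deleted",
      "etag": "d41d8cd98f00b204e9800998ecf8427e",
      "deleted": 0}]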
The object-expirer is updated to look for work items with this
content-type and skip the X-If-Deleted-At check that it would normally
do.
Note that the target-container's listing will continue to show the
objects until data is actually deleted, bypassing some of the concerns
raised in the related change about clearing out a container entirely and
then deleting it.
Change-Id: Ia13ee5da3d1b5c536eccaadc7a6fdcd997374443
Related-Change: I50e403dee75585fc1ff2bb385d6b2d2f13653cf8
With this commit, each storage policy can define the diskfile to use to
access objects. Selection of the diskfile is done in swift.conf.
Example:
[storage-policy:0]
name = gold
policy_type = replication
default = yes
diskfile = egg:swift#replication.fs
The diskfile configuration item accepts the same format as middleware
declarations: [[scheme:]egg_name#]entry_point
The egg_name is optional and defaults to "swift". The scheme is
optional and defaults to the only valid value, "egg". The upstream
entry points are "replication.fs" and "erasure_coding.fs".
Co-Authored-By: Alexandre Lécuyer <alexandre.lecuyer@corp.ovh.com>
Co-Authored-By: Alistair Coles <alistairncoles@gmail.com>
Change-Id: I070c21bc1eaf1c71ac0652cec9e813cadcc14851
Add a new middleware that can be used to fetch an encryption root
secret from a KMIP service. The middleware uses a PyKMIP client
to interact with a KMIP endpoint. The middleware is configured with
a unique identifier for the key to be fetched and options required
for the PyKMIP client.
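At its core, the fetch is something like this minimal PyKMIP sketch
(endpoint and key id are placeholders; the middleware reads them from
its config section):

    # Sketch: fetch a managed key from a KMIP endpoint with PyKMIP.
    from kmip.pie.client import ProxyKmipClient

    client = ProxyKmipClient(hostname='kmip.example.com', port=5696)
    with client:
        # 'key_id' would come from the middleware's configured
        # unique identifier for the key.
        key = client.get('key_id')
        root_secret = key.value  # raw bytes of the managed secret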
Co-Authored-By: Tim Burke <tim.burke@gmail.com>
Change-Id: Ib0943fb934b347060fc66c091673a33bcfac0a6d
Provides a simple, experimental CLI tool to generate a
composite ring from a list of component builder files.
For example:
swift-ring-composer <composite-file> compose \
<builder-file> <builder-file> --output <ring-file>
Commands available:
- compose: compose a list of builder files into a composite ring
- show: show the metadata for a composite ring
Co-Authored-By: Kota Tsuyuzaki <tsuyuzaki.kota@lab.ntt.co.jp>
Co-Authored-By: Matthew Oliver <matt@oliver.net.au>
Change-Id: I25a79e71c13af352e19e4358f60545265b51584f
This patch adds a read_only middleware to swift. It gives the ability
to make an entire cluster or individual accounts read only.
When a cluster or an account is in read-only mode, requests that would
result in writes to the cluster are not allowed.
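For example, a per-account opt-in might look like this sketch (setting
X-Account-Sysmeta-Read-Only requires reseller admin rights; treat the
exact header name as an assumption pending the middleware docs):

    # Sketch: mark one account read-only via sysmeta.
    from swiftclient.client import Connection

    conn = Connection(authurl='http://saio:8080/auth/v1.0',
                      user='admin:admin', key='admin')
    conn.post_account({'X-Account-Sysmeta-Read-Only': 'true'})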
DocImpact
Change-Id: I7e0743aecd60b171bbcefcc8b6e1f3fd4cef2478
The sharder daemon visits container dbs and when necessary executes
the sharding workflow on the db.
The workflow is, in overview:
- perform an audit of the container for sharding purposes.
- move any misplaced objects that do not belong in the container
to their correct shard.
- move shard ranges from FOUND state to CREATED state by creating
shard containers.
- move shard ranges from CREATED to CLEAVED state by cleaving objects
to shard dbs and replicating those dbs. By default this is done in
batches of 2 shard ranges per visit.
Additionally, when the auto_shard option is True (NOT yet recommended
in production), the sharder will identify shard ranges for containers
that have exceeded the threshold for sharding, and will also manage
the sharding and shrinking of shard containers.
The manage_shard_ranges tool provides a means to manually identify
shard ranges and merge them to a container in order to trigger
sharding. This is currently the recommended way to shard a container.
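For example (a hypothetical invocation; the db path and the object
count threshold are placeholders):

    swift-manage-shard-ranges <container-db> find_and_replace 500000 --enable

This scans the container db for shard ranges of roughly 500000 objects
each, merges the found ranges into the db, and enables sharding so the
sharder daemon will act on them.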
Co-Authored-By: Alistair Coles <alistairncoles@gmail.com>
Co-Authored-By: Tim Burke <tim.burke@gmail.com>
Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com>
Change-Id: I7f192209d4d5580f5a0aa6838f9f04e436cf6b1f
This imports the openstack/swift3 package into the swift upstream
repository and namespace. It is mostly a straightforward port, except
for the following items.
1. Rename swift3 namespace to swift.common.middleware.s3api
1.1 Also rename some conflicting class names (e.g. Request/Response)
2. Port unittests to test/unit/s3api dir to be able to run on the gate.
3. Port functests to test/functional/s3api and setup in-process testing
4. Port docs to doc dir, then address the namespace change.
5. Use get_logger() instead of global logger instance
6. Avoid global conf instance
Plus various minor fixes along the way (e.g. packages, dependencies,
deprecated things).
The details and patch references in the work on feature/s3api are listed
at https://trello.com/b/ZloaZ23t/s3api (completed board)
Note that, because this is just a port, no new features have been
developed since the last swift3 release. In future work, Swift
upstream may continue to work on the remaining items to further
improve compatibility with Amazon S3. Please read the new docs for
your deployment and keep track of what may change in future releases.
Change-Id: Ib803ea89cfee9a53c429606149159dd136c036fd
Co-Authored-By: Thiago da Silva <thiago@redhat.com>
Co-Authored-By: Tim Burke <tim.burke@gmail.com>
Add symbolic link ("symlink") object support to Swift. This
object will reference another object. GET and HEAD
requests for a symlink object will operate on the referenced object.
DELETE and PUT requests for a symlink object will operate on the
symlink object, not the referenced object, and will delete or
overwrite it, respectively.
POST requests are *not* forwarded to the referenced object and should
be sent directly. POST requests sent to a symlink object will
result in a 307 Temporary Redirect response.
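Creating a symlink is just a zero-byte PUT carrying an
X-Symlink-Target header naming the referenced object, e.g. this sketch
with python-swiftclient (container/object names are placeholders):

    # Sketch: create a symlink pointing at tgt_container/tgt_obj.
    from swiftclient.client import Connection

    conn = Connection(authurl='http://saio:8080/auth/v1.0',
                      user='test:tester', key='testing')
    conn.put_object('my-container', 'my-symlink', contents=b'',
                    headers={'X-Symlink-Target': 'tgt_container/tgt_obj'})
    # GET/HEAD of my-container/my-symlink now serve the target object.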
Historical information on symlink design can be found here:
https://github.com/openstack/swift-specs/blob/master/specs/in_progress/symlinks.rst
https://etherpad.openstack.org/p/swift_symlinks
Co-Authored-By: Thiago da Silva <thiago@redhat.com>
Co-Authored-By: Janie Richling <jrichli@us.ibm.com>
Co-Authored-By: Kazuhiro MIYAHARA <miyahara.kazuhiro@lab.ntt.co.jp>
Co-Authored-By: Kota Tsuyuzaki <tsuyuzaki.kota@lab.ntt.co.jp>
Change-Id: I838ed71bacb3e33916db8dd42c7880d5bb9f8e18
Signed-off-by: Thiago da Silva <thiago@redhat.com>
This has been deprecated since Swift 2.10.0 (Newton) including a
message that it would go away. Let's actually remove it.
Change-Id: I7d3659761c71119363ff2c0c750e37b4c6374a39
Related-Change: Ifa8bf636f20f82db4845b02d1b58699edaa39356
Move the json -> (text, xml) listing conversion into a common module,
reference that in the account/container servers so we don't break
existing clients (including out-of-date proxies), but have the proxy
controllers always force a json listing.
This simplifies operations on listings (such as the ones already happening in
decrypter, or the ones planned for symlink and sharding) by only needing to
consider a single response type.
There is a downside of larger backend requests for text/plain listings, but
it seems like a net win?
Change-Id: Id3ce37aa0402e2d8dd5784ce329d7cb4fbaf700d
This patch makes the dispersion report tool importable, and thus more
easily usable within other Python tools. This can also be used in a
follow-up patch to add some tests for the report tool.
It also fixes a bug when using the "--dump-json" option - until now it
returned the policy name and made the JSON invalid.
Change-Id: Ie0d52a1a54fc152bb72cbb3f84dcc36a8dad972a
This patch adds support for retrieving the encryption root secret from
an external key management system. In practice, this is currently
limited to Barbican.
Change-Id: I1700e997f4ae6fa1a7e68be6b97539a24046e80b
* Fix warnings in RST files
* Suppress the warning log from pyeclib during the doc build.
pyeclib emits a warning message on an older liberasurecode [1],
and sphinx treats this as an error (when warning-is-error is set).
There is no need to check warnings during the doc build,
so we can safely suppress the warning.
This is a part of the doc migration community-wide effort.
http://specs.openstack.org/openstack/docs-specs/specs/pike/os-manuals-migration.html
[1] https://github.com/openstack/pyeclib/commit/d163972b
Change-Id: I9adaee29185a2990cc3985bbe0dd366e22f4f1a2
This patch adds methods to increase the partition power of an existing
object ring without downtime for the users using a 3-step process. Data
won't be moved to other nodes; objects using the new increased partition
power will be located on the same device and are hardlinked to avoid
data movement.
1. A new setting "next_part_power" will be added to the rings, and once
the proxy servers have reloaded the rings they will send this value to
the object servers on any write operation. Object servers will then
create a hard link in the new location pointing to the original
DiskFile. Already existing data will be relinked into the new
locations by a new relinker tool, again using hard links.
2. The actual partition power itself will be increased. Servers will
now read from and write to the new locations. Hard links in the old
object locations that are no longer required are then removed by the
relinker tool; it reads the next_part_power setting to find object
locations that need to be cleaned up.
3. The "next_part_power" flag will be removed.
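Hard links suffice because a ring partition is just the top bits of
the object's path hash, so raising the partition power by one
deterministically maps old partition p to new partitions 2p and 2p+1.
A sketch of the arithmetic (Swift's real hashing also mixes in the
per-cluster hash prefix/suffix; this shows only the shift):

    # Sketch: partition assignment before and after a power increase.
    import hashlib
    import struct

    def partition(path_hash, part_power):
        # Take the top part_power bits of the first 4 hash bytes.
        return struct.unpack_from('>I', path_hash)[0] >> (32 - part_power)

    h = hashlib.md5(b'/account/container/object').digest()
    old = partition(h, 10)
    new = partition(h, 11)
    assert new in (2 * old, 2 * old + 1)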
This mostly implements the spec in [1]; however it's not using an
"epoch" as described there. The idea of the epoch was to store data
using different partition powers in their own namespace to avoid
conflicts with auditors and replicators as well as being able to abort
such an operation and just remove the new tree. This would require some
heavy change of the on-disk data layout, and other object-server
implementations would be required to adopt this scheme too.
Instead, the object-replicator is now aware that a partition power
increase is in progress and will skip replication of data in that
storage policy; the relinker tool is simply run, and afterwards the
partition power is increased. This shouldn't take much time (it's only
walking the filesystem and hard-linking), so the impact should be low.
The relinker should be run on all storage nodes at the same time in
parallel to decrease the required time (though this is not mandatory).
Failures during relinking should not affect cluster operations -
relinking can even be aborted manually and restarted later.
Auditors do not quarantine objects written to a path with a different
partition power and therefore work as before (though in the worst case
they read each object twice before the no longer needed hard links are
removed).
Co-Authored-By: Alistair Coles <alistair.coles@hpe.com>
Co-Authored-By: Matthew Oliver <matt@oliver.net.au>
Co-Authored-By: Tim Burke <tim.burke@gmail.com>
[1] https://specs.openstack.org/openstack/swift-specs/specs/in_progress/
increasing_partition_power.html
Change-Id: I7d6371a04f5c1c4adbb8733a71f3c177ee5448bb
Warnings-as-errors breaks our docs gate currently, as the gate is using
an old liberasurecode. Stop treating warnings as errors until the gate
can be updated with a more-recent version.
Related-Change: I3985d027f0ac2119ceaeb4daba5964f937de6cea
Change-Id: I4869ec370f08e338be78d2c0f930106081f21a0a
Adds encryption middlewares.
All object servers and proxy servers should be upgraded before
introducing encryption middleware.
Encryption middleware should first be introduced with its
disable_encryption option set to True. Once all proxies have the
encryption middleware installed, this option may be set to False (the
default).
Increases constraints.py:MAX_HEADER_COUNT by 4 to allow for
headers generated by encryption-related middleware.
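For a staged rollout, the pipeline and option might look like this
sketch of a proxy-server.conf fragment (filter and option names follow
the new middleware; treat the exact spelling as an assumption until
the docs land):

    [pipeline:main]
    pipeline = ... keymaster encryption proxy-logging proxy-server

    [filter:keymaster]
    use = egg:swift#keymaster
    encryption_root_secret = <base64-encoded secret>

    [filter:encryption]
    use = egg:swift#encryption
    # True until all proxies have the middleware, then False.
    disable_encryption = True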
Co-Authored-By: Tim Burke <tim.burke@gmail.com>
Co-Authored-By: Christian Cachin <cca@zurich.ibm.com>
Co-Authored-By: Mahati Chamarthy <mahati.chamarthy@gmail.com>
Co-Authored-By: Peter Chng <pchng@ca.ibm.com>
Co-Authored-By: Alistair Coles <alistair.coles@hpe.com>
Co-Authored-By: Jonathan Hinson <jlhinson@us.ibm.com>
Co-Authored-By: Hamdi Roumani <roumani@ca.ibm.com>
UpgradeImpact
Change-Id: Ie6db22697ceb1021baaa6bddcf8e41ae3acb5376
Rewrite the server side copy and 'object post as copy' features as
middleware to simplify the PUT method in the object controller code.
COPY is no longer a verb implemented as a public method in the proxy
application.
The server side copy middleware is inserted to the left of dlo, slo and
versioned_writes middlewares in the proxy server pipeline. As a result,
dlo and slo copy_hooks are no longer required. SLO manifests are now
validated when copied, so when copying a manifest to another account
the referenced segments must be readable in that account for the
manifest copy to succeed (previously this validation was not
performed, meaning the manifest was copied but could be unusable if
the segments were not readable).
There should be no change in functionality or existing behavior; this
is asserted by the fact that (almost) no changes were required to
existing functional tests.
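For end users the copy API is unchanged; e.g. a copy is still a
zero-byte PUT carrying X-Copy-From, as in this sketch with
python-swiftclient (container/object names are placeholders):

    # Sketch: server-side copy via PUT + X-Copy-From.
    from swiftclient.client import Connection

    conn = Connection(authurl='http://saio:8080/auth/v1.0',
                      user='test:tester', key='testing')
    conn.put_object('dest-container', 'dest-obj', contents=b'',
                    headers={'X-Copy-From': '/src-container/src-obj'})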
Some notes (for operators):
* The middleware is auto-inserted before the slo, dlo and
versioned_writes middlewares.
* There is no config option to turn off server side copy.
* object_post_as_copy is no longer a config option of the proxy server
but of this middleware. However, for a smooth upgrade, the option is
still honored if set in the proxy server app's config.
DocImpact: Introducing server side copy as middleware
Co-Authored-By: Alistair Coles <alistair.coles@hpe.com>
Co-Authored-By: Thiago da Silva <thiago@redhat.com>
Change-Id: Ic96a92e938589a2f6add35a40741fd062f1c29eb
Signed-off-by: Prashanth Pai <ppai@redhat.com>
Signed-off-by: Thiago da Silva <thiago@redhat.com>
Rewrite object versioning as middleware to simplify the PUT method
in the object controller.
The functionality remains basically the same, with the only major
difference being the ability to now version slo manifest files. dlo
manifests are still not supported as part of this patch.
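From a client's perspective, versioning is still enabled the same way,
e.g. (sketch with python-swiftclient; container names are
placeholders):

    # Sketch: enable versioning for a container.
    from swiftclient.client import Connection

    conn = Connection(authurl='http://saio:8080/auth/v1.0',
                      user='test:tester', key='testing')
    conn.post_container('my-container',
                        {'X-Versions-Location': 'my-versions'})
    # Overwrites of objects in my-container will now archive the
    # previous version into my-versions.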
Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com>
DocImpact
Change-Id: Ie899290b3312e201979eafefb253d1a60b65b837
Signed-off-by: Thiago da Silva <thiago@redhat.com>
Signed-off-by: Prashanth Pai <ppai@redhat.com>
This is a tool to help developers quantify changes to the ring
builder. It takes a scenario (JSON file) describing the builder's
basic parameters (part_power, replicas, etc.) and a number of
"rounds", where each round is a set of operations to perform on the
builder. For each round, the operations are applied, and then the
builder is rebalanced until it reaches a steady state.
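A scenario might look like the following sketch (the exact schema is
documented with the tool; the field names and device strings here are
illustrative):

    {"part_power": 10,
     "replicas": 3,
     "overload": 0,
     "random_seed": 42,
     "rounds": [
         [["add", "r1z1-10.0.0.1:6200/sda", 4000],
          ["add", "r1z2-10.0.0.2:6200/sdb", 4000]],
         [["set_weight", 0, 8000]],
         [["remove", 1]]]}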
The idea is that a developer observes the ring builder behaving
suboptimally, writes a scenario to reproduce the behavior, modifies
the ring builder to fix it, and references the scenario with the
commit so that others can see that things have improved.
I decided to write this after writing my fourth or fifth hacky one-off
script to reproduce some bad behavior in the ring builder.
Change-Id: I114242748368f142304aab90a6d99c1337bced4c
This patch adds the erasure code reconstructor. It follows the
design of the replicator but:
- There is no notion of update() or update_deleted().
- There is a single job processor
- Jobs are processed partition by partition.
- At the end of processing a rebalanced or handoff partition, the
reconstructor will remove successfully reverted objects if any.
It also includes various ssync changes, such as the addition of a
reconstruct_fa() function, called from ssync_sender, which performs
the actual reconstruction while sending the object to the receiver.
Co-Authored-By: Alistair Coles <alistair.coles@hp.com>
Co-Authored-By: Thiago da Silva <thiago@redhat.com>
Co-Authored-By: John Dickinson <me@not.mn>
Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com>
Co-Authored-By: Tushar Gohad <tushar.gohad@intel.com>
Co-Authored-By: Samuel Merritt <sam@swiftstack.com>
Co-Authored-By: Christian Schwede <christian.schwede@enovance.com>
Co-Authored-By: Yuan Zhou <yuan.zhou@intel.com>
blueprint ec-reconstructor
Change-Id: I7d15620dc66ee646b223bb9fff700796cd6bef51
Commit 7a192987c0 set up swift for translation, but the
compile_catalog directory option points at the wrong location to scan
for po files.
Change-Id: Id4dd24ddfde735ef8ef064882bea045361b5db90
Closes-Bug: #1367086
To start translation of swift, we need to initially import the
translation file - and place it at the proper place so that
the usual CI scripts can handle it.
The proper place is for all python projects
$PROJECT/locale/$PROJECT.pot, so move locale/$PROJECT.pot to the new
location and regenerate the file.
Update setup.cfg with the new paths.
Further imports will be done by the OpenStack Proposal bot.
Change-Id: Ide4da91f2af71db529f4a06d6b1e30ba79883506
Partial-Bug: #608725
Closes-Bug: #1082805
This daemon will take objects that are in the wrong storage policy and
move them to the right one, or take delete requests that went to the
wrong storage policy and apply them to the right one. It operates on a
queue similar to the object-expirer's queue.
Discovering that the object is in the wrong policy will be done in
subsequent commits by the container replicator; this is the daemon
that handles them once they happen.
Like the object expirer, you only need to run one of these per
cluster; see etc/container-reconciler.conf.
DocImpact
Implements: blueprint storage-policies
Change-Id: I5ea62eb77ddcbc7cfebf903429f2ee4c098771c9
The profile middleware provides a tool to profile Swift code on the
fly and collect statistics for performance analysis. A simple native
Web UI is also provided to help query and visualize the data.
Change-Id: I6a1554b2f8dc22e9c8cd20cff6743513eb9acc05
Implements: blueprint profiling-middleware
This is a very simple swift tool to retrieve information about an
account located on a storage node. One can call the tool with a given
account db file as it is stored on the storage node system. It will
then print various pieces of information about that account.
Change-Id: Ibfeee790adc000fc177b4b3c03d22ff785fda325
This is a very simple swift tool to retrieve information about a
container located on a storage node. One can call the tool with a
given container db file as it is stored on the storage node system. It
will then print various pieces of information about that container.
Change-Id: Ifebaed6c51a9ed5fbc0e7572bb43ef05d7dd254b
Just use import to make scripts available in bin/ instead of
creating these during setup.py install.
Change-Id: I7318bbb77f6564ed58736887e711e1c497873471
Add some tests for essential methods in swift-ring-builder.
Tests for removing or changing device settings are executed
with different search values to cover many possible command
line arguments.
Currently tested methods:
- create ring
- add device
- remove device
- set weight
- set info
- set min_part_hours
- set replicas
Tests use swift.common.ring.RingBuilder to verify actions.
Output from print statements is not tested, because this would require
redirecting sys.stdout during tests, which might have side effects on
testing tools.
bin/swift-ring-builder has been moved to swift/cli/ringbuilder.py
and slightly modified to work as before (mainly due to no more
existing global variables since that part of the code has been
moved inside a main() function).
Change-Id: Ia63f59a8faca1fad990784f27532ca07a2125454
This is for the same reasons that SLO got pulled into middleware,
which include things like automatic retry of GETs on broken
connections and the multi-ring storage policy work.
The proxy will automatically insert the dlo middleware at an
appropriate place in the pipeline the same way it does with the
gatekeeper middleware. Clusters will still support DLOs after upgrade
even with an old config file that doesn't mention dlo at all.
Includes support for reading config values from the proxy server's
config section so that upgraded clusters continue to work as before.
Bonus fix: resolve 'after' vs. 'after_fn' in proxy's required filters
list. Having two was confusing, so I kept the more-general one.
DocImpact
blueprint multi-ring-large-objects
Change-Id: Ib3b3830c246816dd549fc74be98b4bc651e7bace
Also fix a minor bug in zone filtering when the zone is set to 0.
Moved bin/swift-recon to swift/cli/recon.py, which makes it possible
to import it without using some scary hacks. bin/swift-recon is now
created by setup.py install.
Closes-Bug: #1261692
Change-Id: Id0729991c8ece73604467480dbf93fec7d8eb196
Summary of the new configuration option:
The cluster operators add the container_sync middleware to their
proxy pipeline and create a container-sync-realms.conf for their
cluster and copy this out to all their proxy and container servers.
This file specifies the available container sync "realms".
A container sync realm is a group of clusters with a shared key that
have agreed to provide container syncing to one another.
The end user can then set the X-Container-Sync-To value on a
container to //realm/cluster/account/container instead of the
previously required URL.
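For example, an end user pointing a container at a peer cluster might
do the following (sketch; the realm, cluster, account and key values
are whatever the operator configured in container-sync-realms.conf):

    # Sketch: set up container sync using a realm.
    from swiftclient.client import Connection

    conn = Connection(authurl='http://saio:8080/auth/v1.0',
                      user='test:tester', key='testing')
    conn.post_container('my-container', {
        'X-Container-Sync-To': '//realm/cluster2/AUTH_peer/their-container',
        'X-Container-Sync-Key': 'a-shared-user-key'})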
The allowed hosts list is not used with this configuration and
instead every container sync request sent is signed using the realm
key and user key.
This offers better security, as source hosts can be faked much more
easily than per-request signatures. Replaying signed requests,
assuming it could easily be done, shouldn't be an issue, as the
X-Timestamp is part of the signature and so would just short-circuit
as already current or as superseded.
This also makes configuration easier for the end user, especially in
difficult networking situations where a different host might need to
be used for the container sync daemon since it connects from within a
cluster. With this new configuration option, the end
user just specifies the realm and cluster names and that is resolved
to the proper endpoint configured by the operator. If the operator
changes their configuration (key or endpoint), the end user does not
need to change theirs.
DocImpact
Change-Id: Ie1704990b66d0434e4991e26ed1da8b08cb05a37