Now, instead of saying
X-Versions-Location: <container>
X-Versions-Mode: history
clients should just say
X-History-Location: <container>
Since we've never had a release featuring a user-settable
X-Versions-Mode header, support for it may simply be dropped;
the header is now ignored.
Change-Id: Icfd0f481d4e40dd5375c737190aea7ee8dbc3bf9
This patch improves EC GET response handling:
- The proxy no longer requires all object servers to have a
durable file for the fragment archive that they return in
response to a GET. The proxy will now be satisfied if just
one object server has a durable file at the same timestamp
as fragments from other object servers.
This means that the proxy can now successfully GET an
object that had missing durable files when it was PUT.
- The proxy will now ensure that it has a quorum of *unique*
fragment indexes from object servers before considering a
GET to be successful.
- The proxy is now able to fetch multiple fragment archives
having different indexes from the same node. This enables
the proxy to successfully GET an object that has some
fragments that have landed on the same node, for example
after a rebalance.
This new behavior is facilitated by an exchange of new
headers on a GET request and response between the proxy and
object servers.
An object server now includes with a GET (or HEAD) response:
- X-Backend-Fragments: the value of this describes all
fragment archive indexes that the server has for the
object by encoding a map of the form: timestamp -> <list
of fragment indexes>
- X-Backend-Durable-Timestamp: the value of this is the
internal form of the timestamp of the newest durable file
that was found, if any.
- X-Backend-Data-Timestamp: the value of this is the
internal form of the timestamp of the data file that was
used to construct the diskfile.
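For example, an object server holding fragment indexes 0 and 2
at a single timestamp might respond with something like this
(timestamp values and the JSON encoding of the map are
illustrative):

    X-Backend-Fragments: {"1470000000.00000": [0, 2]}
    X-Backend-Durable-Timestamp: 1470000000.00000
    X-Backend-Data-Timestamp: 1470000000.00000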
A proxy server now includes with a GET request:
- X-Backend-Fragment-Preferences: the value of this
describes the proxy's current preferences with respect
to the fragments that it would have object servers
return. It encodes a list of timestamps, and for each
timestamp a list of fragment indexes that the proxy does
NOT require (because it already has them).
The presence of an X-Backend-Fragment-Preferences header
(even one with an empty list as its value) will cause the
object server to search for the most appropriate fragment
to return, disregarding the existence or otherwise of any
durable file. The object server assumes that the proxy
knows best.
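As a rough illustration of the proxy side, here is a minimal
sketch of building the preferences header (the function name and
the JSON encoding are assumptions for the sketch, not Swift's
actual implementation):

    import json

    def make_frag_prefs_header(have):
        # 'have' maps timestamp -> fragment indexes the proxy
        # already holds and so does NOT require again.
        prefs = [{'timestamp': ts, 'exclude': sorted(indexes)}
                 for ts, indexes in sorted(have.items(), reverse=True)]
        return {'X-Backend-Fragment-Preferences': json.dumps(prefs)}

    # e.g. the proxy already has fragments 0 and 2 at one timestamp:
    headers = make_frag_prefs_header({'1470000000.00000': {0, 2}})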
Closes-Bug: 1469094
Closes-Bug: 1484598
Change-Id: I2310981fd1c4622ff5d1a739cbcc59637ffe3fc3
Co-Authored-By: Paul Luse <paul.e.luse@intel.com>
Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com>
Currently the container sync daemon fails to copy
an SLO manifest, and the error will stall progress
of the sync process on that container. There are
several reasons why the sync of an SLO manifest
may fail:
1. The GET of the manifest from the source
container returns an X-Static-Large-Object header
that is not allowed to be included with a PUT
to the destination container.
2. The format of the manifest object that is read
from the source is not in the syntax required
for an SLO manifest PUT.
3. Assuming 2 were fixed, the PUT of the manifest
includes an ETag header which will not match the
md5 of the manifest generated by the receiving
proxy's SLO middleware.
4. If the manifest is being synced to a different
account and/or cluster, then the SLO segments may
not have been synced and so the validation of the
PUT manifest will fail.
This patch addresses all of these obstacles by
enabling the destination container-sync middleware to
cause the SLO middleware to be bypassed by setting a
swift.slo_override flag in the request environ. This
flag is only set for requests that have been validated
as originating from a container sync peer.
This is justified by noting that an SLO manifest PUT from
a container sync peer can be assumed to have valid syntax
because it has already been validated when written to
the source container.
Furthermore, we must allow SLO manifests to be synced
without requiring the semantics of their content to be
re-validated, because we have no way to enforce or check
that segments have been synced prior to the manifest, nor
to check that the semantics of the manifest are still valid
at the source.
This does mean that GETs to synced SLO manifests may fail
if segments have not been synced. This is however
consistent with the expectation for synced DLO manifests
and indeed for the source SLO manifest if segments have
been deleted since it was written.
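A minimal sketch of the bypass mechanism (class and helper names
are hypothetical; the real middleware's peer validation is more
involved):

    class ContainerSyncMiddleware(object):
        def __init__(self, app):
            self.app = app

        def __call__(self, env, start_response):
            if self._is_validated_sync_peer(env):
                # Tell the downstream SLO middleware to skip
                # manifest validation for this trusted request.
                env['swift.slo_override'] = True
            return self.app(env, start_response)

        def _is_validated_sync_peer(self, env):
            # Placeholder for the real signature/realm validation.
            return env.get('swift.container_sync_validated', False)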
Co-Authored-By: Oshrit Feder <oshritf@il.ibm.com>
Change-Id: I8d503419b7996721a671ed6b2795224775a7d8c6
Closes-Bug: #1605597
Add a note to the docstring that a config section must be added
to proxy-server.conf, and an entry to the pipeline, in order to
support history mode.
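For reference, the configuration the note points to looks roughly
like this (a sketch; see the sample proxy-server.conf for the
authoritative form):

    [pipeline:main]
    pipeline = ... versioned_writes proxy-server

    [filter:versioned_writes]
    use = egg:swift#versioned_writes
    allow_versioned_writes = true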
Closes-Bug: 1619261
Change-Id: I888485ab4ece6f47db081a4d58c1aab24ce72a8a
Currently when using fast-post, the manifest is updated with the
given 'x-object-manifest' header on a POST. If no such header is
supplied, then the manifest becomes a regular object.
This is not currently true when using post-as-copy.
This patch changes DLO POST behavior when using post-as-copy to
match that of fast-post. It is also now documented that
'x-object-manifest' must be provided on a POST to a manifest file.
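For example (account, container and prefix names hypothetical), a
POST that should preserve the manifest must re-supply the header:

    POST /v1/AUTH_test/c/my-manifest HTTP/1.1
    X-Object-Manifest: segments/my-manifest/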
Change-Id: Ie1143ab1a2c8f8c21e258a36badbff5d947769d4
Closes-bug: 1612991
This comes from discussion at the Bristol Hackathon (Feb 2016).
Currently Swift has a couple of choices (Global Cluster and Container
Sync) for syncing stored data across geographically distributed
locations.
This patch adds a summary of the discussion comparing
Global Cluster and Container Sync, to help operators know which
functionality fits their own use case.
And, to be fair to container-sync, this patch moves the global
cluster docs from admin_guide.rst into overview_global_cluster.rst.
Co-Authored-By: Alistair Coles <alistair.coles@hpe.com>
Change-Id: I624eb519503ae71dbc82245c33dab6e8637d0f8b
Clarify that synced segment container names must be the same
when syncing large objects.
Also add the multipart-manifest query string option to the API ref
for object GETs.
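For example, to retrieve the manifest itself rather than the
concatenated large object (names hypothetical):

    GET /v1/AUTH_test/c/my-manifest?multipart-manifest=get HTTP/1.1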
Change-Id: Ib2d2a1e6c1e5eff215fc75c2b49e7d6758b17b7e
Partial-Bug: #1613681
Closes-Bug: #1613316
This change introduces the concept of a "versioning mode" for
versioned_writes. The following modes are supported:
* stack
When deleting, check whether any previous versions exist in the
versions container. If none is found, the object is deleted. If the
most-recent version in the versions container is not a delete
marker, it is copied into the versioned container (overwriting the
current version if one exists) and then deleted from the versions
container. This preserves the previous behavior.
If the most-recent version in the versions container is a delete
marker and a current version exists in the versioned container, the
current version is deleted. If the most-recent version in the
versions container is a delete marker and no current version exists
in the versioned container, we copy the next-most-recent version
from the versions container into the versioned container (assuming
it exists and is not a delete marker) and delete both the
most-recent version (i.e., the delete marker) and the just-copied
next-most-recent version from the versions container.
With this mode, DELETEs to versioned containers "undo" operations
on containers. Previously this was limited to undoing PUTs, but now
it will also undo DELETEs performed while in "history" mode.
* history
When deleting, check whether a current version exists in the
versioned container. If one is found, it is copied to the versions
container. Then an empty "delete marker" object is also put into the
versions container; this records when the object was deleted.
Finally, the original current version is deleted from the versioned
container. As a result, subsequent GETs or HEADs will return a 404,
and container listings for the versioned container do not include
the object.
With this mode, DELETEs to versioned containers behave like DELETEs
to other containers, but with a history of what has happened.
Clients may specify (via a new X-Versions-Mode header) which mode a
container should use. By default, the existing "stack" mode is used.
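For example (container names hypothetical):

    POST /v1/AUTH_test/my-container HTTP/1.1
    X-Versions-Location: my-versions
    X-Versions-Mode: history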
Upgrade consideration:
======================
Clients should not use the "history" mode until all proxies in the
cluster have been upgraded. Attempting to use the "history" mode during
a rolling upgrade may result in some requests being served by proxies
running old code (which necessarily uses the "stack" mode), leading to
data loss.
Change-Id: I555dc17fefd0aa9ade681aa156da24e018ebe74b
Changed sentence: "Regions can be used to describe geo-graphically
systems characterized by lower-bandwidth"
To: "Regions can be used to describe geographical
systems characterized by lower-bandwidth"
Change-Id: I0f614a4c53dd31459f1b6297dd32a8c0f609d9ce
Closes-Bug: 1612302
The goal is to allow the scheduling priority and the I/O scheduling
class and priority of a daemon/server to be modified via
configuration.
The settings are optional; the defaults keep the current behaviour.
Use case:
Prioritize the object-server over the object-auditor, because all
user requests need to be served in peak hours while audits can wait.
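A sketch of the kind of configuration this enables (values
illustrative; check the sample configs for the exact option names):

    [object-auditor]
    # Raise the nice value (lower CPU priority) and use the idle
    # I/O scheduling class so audits yield to user-facing requests.
    nice_priority = 10
    ionice_class = IOPRIO_CLASS_IDLE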
Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com>
DocImpact
Change-Id: I1018a18f4706daabdb84574ffd9a58d831e68396
Added comments on how to run functional tests in-process and how
to run specific test cases.
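For example (invocations illustrative):

    # Run the functional tests in-process, no SAIO services needed:
    SWIFT_TEST_IN_PROCESS=1 ./.functests

    # Run a specific test case (test path illustrative):
    SWIFT_TEST_IN_PROCESS=1 ./.functests \
        test.functional.tests:TestObject.testCreate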
Change-Id: I485755996b15753323d30de09914d35e262fcedc
Signed-off-by: Thiago da Silva <thiago@redhat.com>
Running functional tests in the in-process mode uses
the default value for the pipeline. This patch adds support
for specifying the SWIFT_TEST_IN_PROCESS_CONF_LOADER variable
to point to a labeled function that changes the proxy
configuration for the functional test.
The patch also adds a new tox environment
func-in-process-encryption
which runs with the environment variable
SWIFT_TEST_IN_PROCESS_CONF_LOADER=encryption
The motivation for this change is to put support in place for an
upstream CI job that will functionally test using encryption
middleware in the pipeline. The gate job is proposed at:
https://review.openstack.org/#/c/348292/
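For example, the new variable can also be used directly with an
in-process run (illustrative):

    SWIFT_TEST_IN_PROCESS=1 \
    SWIFT_TEST_IN_PROCESS_CONF_LOADER=encryption ./.functests

    # or, equivalently, via the new tox environment:
    tox -e func-in-process-encryption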
Change-Id: I15c4b20f1d2be57ae21c69c614f6a9579145bee9
If the curl command is used exactly as in the help text, the
ampersand in the signature is interpreted by the shell as a
control operator and the curl command breaks. I am aware of
developers who have wasted a lot of time because of this.
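For example (host, signature and expiry values hypothetical):

    # Broken: the shell treats '&' as a control operator,
    # backgrounds the command, and curl never sees temp_url_expires:
    curl http://saio:8080/v1/AUTH_test/c/o?temp_url_sig=abc&temp_url_expires=123

    # Fixed: quote the URL so it reaches curl intact:
    curl 'http://saio:8080/v1/AUTH_test/c/o?temp_url_sig=abc&temp_url_expires=123'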
Change-Id: I6468c9a098b56db8242a2cf2c23b7a4857bd8574
A high or increasing partition count due to stored handoffs can
have severe side-effects, and replication might never be able to
catch up. This patch adds a note to the admin_guide on how to
check this.
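A rough way to take such a measurement (device path hypothetical;
each directory under objects/ corresponds to one partition stored
on the disk):

    ls /srv/node/sdb1/objects | wc -l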
Change-Id: Ib4e161d68f1a82236dbf5fac13ef9a13ac4bbf18
The deployment guide does not talk about regions. Also, it does
not specify that regions and zones need to be ints.
This patch adds a brief description of regions and changes the
numbers to ints. It also adds region to the document that
describes the ring data structure.
Change-Id: I04ce42fb3e5c1f08e7f7ff6be23482cee8bdeb71
Partial-Bug: #1583551
In the Swift deployment guide, region is missing from the syntax
for adding a new device to the swift-ring-builder.
This patch adds region to the syntax.
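With region included, the documented syntax looks like this
(values illustrative):

    swift-ring-builder account.builder add r1z1-192.168.1.10:6002/sdb1 100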
Change-Id: I43e247c92d461efd530c0f82ca3daddcb9e2ba5b
Closes-Bug: #1584127
Drive-by fix for the crypto filter_factory test.
Add a note to the encryption doc to highlight that the root
secret should not be changed (follow-up on an earlier review
comment).
Co-Authored-By: Tim Burke <tim.burke@gmail.com>
Change-Id: I9776cddd4d045408325342983e285a00c992bfae
libssl-dev/openssl-devel are already listed in other-requirements.txt;
add them to installation instructions in the SAIO docs.
Change-Id: I3dc07213ff8dac1299d3eb68d3448a77e15c79af
Include a note in container-sync docs pointing to the specific
configuration needed to be compatible with encryption.
Also remove the sample encryption root secret from
proxy-server.conf-sample and in-process test setup. Remove encryption
middleware from the default proxy pipeline.
Change-Id: Ibceac485813f3ac819a53e644995749735592a55
Adds encryption middlewares.
All object servers and proxy servers should be upgraded before
introducing encryption middleware.
Encryption middleware should first be introduced with its
disable_encryption option set to True.
Once all proxies have the encryption middleware installed, this
option may be set to False (the default).
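A sketch of the rollout configuration described above (pipeline
placement and option values illustrative; see the sample config
for the authoritative form):

    [pipeline:main]
    pipeline = ... keymaster encryption proxy-logging proxy-server

    [filter:keymaster]
    use = egg:swift#keymaster
    encryption_root_secret = <your-secret>

    [filter:encryption]
    use = egg:swift#encryption
    # True while rolling out; set to False (the default) once all
    # proxies have the middleware installed.
    disable_encryption = True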
Increases constraints.py:MAX_HEADER_COUNT by 4 to allow for
headers generated by encryption-related middleware.
Co-Authored-By: Tim Burke <tim.burke@gmail.com>
Co-Authored-By: Christian Cachin <cca@zurich.ibm.com>
Co-Authored-By: Mahati Chamarthy <mahati.chamarthy@gmail.com>
Co-Authored-By: Peter Chng <pchng@ca.ibm.com>
Co-Authored-By: Alistair Coles <alistair.coles@hpe.com>
Co-Authored-By: Jonathan Hinson <jlhinson@us.ibm.com>
Co-Authored-By: Hamdi Roumani <roumani@ca.ibm.com>
UpgradeImpact
Change-Id: Ie6db22697ceb1021baaa6bddcf8e41ae3acb5376
Adds a new form of system metadata for objects.
Sysmeta cannot be updated by an object POST because
that would cause all existing sysmeta to be deleted.
Crypto middleware will want to add 'system' metadata
to object metadata on PUTs and POSTs, but it is ok
for this metadata to be replaced en masse on every
POST.
This patch introduces x-object-transient-sysmeta-*
that is persisted by object servers and returned
in GET and HEAD responses, just like user metadata,
without polluting the x-object-meta-* namespace.
All headers in this namespace will be filtered
inbound and outbound by the gatekeeper, so they cannot
be set or read by clients.
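A minimal sketch of how the namespace can be recognized (the
helper shown is hypothetical, not Swift's actual code):

    TRANSIENT_SYSMETA_PREFIX = 'x-object-transient-sysmeta-'

    def is_object_transient_sysmeta(header_name):
        # e.g. 'X-Object-Transient-Sysmeta-Crypto-Meta' matches;
        # such headers persist like user metadata but are stripped
        # from client requests and responses by the gatekeeper.
        return header_name.lower().startswith(TRANSIENT_SYSMETA_PREFIX)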
Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com>
Co-Authored-By: Janie Richling <jrichli@us.ibm.com>
Change-Id: I5075493329935ba6790543fc82ea6e039704811d