Reserve the namespace starting with the NULL byte for internal
use-cases. Backend services will allow path names to include the NULL
byte in URLs and validate names in the reserved namespace. Database
services will filter all names starting with the NULL byte from
responses unless the request includes the header:
X-Backend-Allow-Reserved-Names: true
The proxy server will not allow path names to include the NULL byte in
URLs unless a middleware has set the X-Backend-Allow-Reserved-Names
header. Middlewares can use the reserved namespace to create objects
and containers that cannot be directly manipulated by clients. Any
objects and bytes created in the reserved namespace will be aggregated
to the user's account totals.
When deploying internal proxies, developers and operators may configure
the gatekeeper middleware to translate the X-Allow-Reserved-Names header
to the backend header so they can manipulate the reserved namespace
directly through the normal API.
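As a rough illustration only (plain stdlib; the account/container/object
names are hypothetical, and this is not the middleware API itself), a
request into the reserved namespace might be composed like this:

from urllib.parse import quote

RESERVED = '\x00'  # names starting with this byte live in the reserved namespace

account, container, obj = 'AUTH_test', RESERVED + 'versions', 'obj-0001'
path = '/v1/%s/%s/%s' % (quote(account), quote(container), quote(obj))
headers = {'X-Backend-Allow-Reserved-Names': 'true'}

print(path)     # the NULL byte is percent-encoded as %00 in the URL
print(headers)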
UpgradeImpact: it is not safe to roll back from this change.
Change-Id: If912f71d8b0d03369680374e8233da85d8d38f85
I replaced asserts with more specific assert methods,
e.g. assertTrue(sth == None) becomes assertIsNone(sth),
assertTrue(isinstance(inst, type)) becomes assertIsInstance(inst, type), and
assertTrue(not sth) becomes assertFalse(sth).
The code becomes more readable, and a better description is shown on failure.
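As a rough illustration (sth and inst are placeholder names, not taken
from the patch):

import unittest

class ExampleTest(unittest.TestCase):
    def test_preferred_asserts(self):
        sth, inst = None, 5
        self.assertIsNone(sth)            # instead of assertTrue(sth == None)
        self.assertIsInstance(inst, int)  # instead of assertTrue(isinstance(inst, int))
        self.assertFalse(sth)             # instead of assertTrue(not sth)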
Change-Id: Icdbf3c63fe8dd6db1129023885655a9f7032d4a7
I replaced asserts with more specific assert methods,
e.g. assertTrue(x in y) becomes assertIn(x, y).
The code becomes more readable, and a better description is shown on failure.
Change-Id: Ic20fbff8a7bb2e870c1609d4fa6e6255eabbeced
Often, we want the current timestamp. May as well improve the ergonomics
a bit and provide a class method for it.
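A small usage sketch, assuming the Timestamp class in swift.common.utils
that this change extends:

from swift.common.utils import Timestamp

ts = Timestamp.now()           # instead of Timestamp(time.time())
print(ts.normal, ts.internal)  # fixed-width string forms of the current time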
Change-Id: I3581c635c094a8c4339e9b770331a03eab704074
For now, the last-modified timestamp is supported only in
object listings (i.e. GET container).
For example, GET container with JSON format returns something like:
[{"hash": "d41d8cd98f00b204e9800998ecf8427e", "last_modified":
"2015-06-10T04:58:23.460230", "bytes": 0, "name": "object",
"content_type": "application/octet-stream"}]
However, a container listing (i.e. GET account) shows just a dict
consisting of ("name", "count", "bytes") for each container.
For example, GET account with JSON format returns something like:
[{"count": 0, "bytes": 0, "name": "container"}]
This patch adds support for the last_modified key in the container
listing results, matching the object listing format:
[{"count": 0, "bytes": 0, "name": "container", "last_modified":
"2015-06-10T04:58:23.460230"}]
This patch changes only the listing output. The timestamp shown
as last modified already exists in the container table
of account.db as the "put_timestamp" column.
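For illustration only, a stdlib sketch (not the actual account server
code; the put_timestamp value is made up) of how a stored put_timestamp
maps to the last_modified string shown above:

from datetime import datetime, timezone

put_timestamp = '1433912303.46023'  # hypothetical value from the container table
dt = datetime.fromtimestamp(float(put_timestamp), tz=timezone.utc)
last_modified = dt.isoformat().replace('+00:00', '')
print(last_modified)                # 2015-06-10T04:58:23.460230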
Note that this patch *DOESN'T* change the put_timestamp semantics,
i.e. the last_modified timestamp changes only on PUT
container and POST container.
(PUT object doesn't affect the timestamp.)
Note that the tuple format of the return value from
swift.account.backend.AccountBroker.list_containers is now
(name, object_count, bytes_used, put_timestamp, 0)
* put_timestamp is added *
Original discussion was in a working session at the Vancouver Summit.
Etherpads are here:
https://etherpad.openstack.org/p/liberty-swift-contributors-meetup
https://etherpad.openstack.org/p/liberty-container-listing-update
DocImpact
Change-Id: Iba0503916f1481a20c59ae9136436f40183e4c5b
Extended the use of the DatabaseBroker "stale_reads_ok" flag to the
AccountBroker and ContainerBroker. The brokers now check for an sqlite3 error
from the _commit_puts call that processes the pending files.
If this error is raised, the stale_reads_ok flag is checked
to determine how to proceed rather than simply raising.
The first time that print_info is attempted, the flag will be
false, but swift-[account|container]-info will check for the
raised exception. If it was raised, then a warning is reported
that the data may be stale, and another attempt will be
made using the stale_reads_ok=True flag.
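A rough sketch of that fallback (simplified; the real swift-account-info /
swift-container-info code differs, and broker_factory is a placeholder
for AccountBroker or ContainerBroker):

import sqlite3

def print_info_with_fallback(broker_factory, db_file):
    broker = broker_factory(db_file, stale_reads_ok=False)
    try:
        return broker.get_info()
    except sqlite3.DatabaseError:
        print('Warning: the data may be stale; retrying with stale reads allowed')
        broker = broker_factory(db_file, stale_reads_ok=True)
        return broker.get_info()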
Change-Id: I761526eef62327888c865d87a9caafa3e7eabab6
Closes-Bug: 1531302
Defcore uses Tempest, which uses Test Repository.
This change makes it easier for Defcore to pull functional
tests from Swift and run them. Additionally, using testr
allows tests to be run in parallel.
Concurrency is set to 1 for now; >1 causes failures for
reasons that are still TBD.
With the switch to ostestr, all the server logs are sent to stdout,
which makes the output unreadable. The logs are now suppressed by default,
with a flag to enable them if desired.
Co-Authored-By: John Dickinson <me@not.mn>
Co-Authored-By: Robert Collins <rbtcollins@hpe.com>
Co-Authored-By: Matthew Oliver <matt@oliver.net.au>
Co-Authored-By: Ganesh Maharaj Mahalingam <ganesh.mahalingam@intel.com>
Change-Id: I53ef4a116996a772cf1f3abc2eb0ad60047322d5
Related-Bug: 1177924
* With the end_prefix changes in the original commit, we no longer need
the `or not name.startswith(prefix)` check.
* Improve test coverage of reverse path listings.
Change-Id: Iaa7d4b83647c3c150be95f88cb3cc9e4f0e33979
This change adds the ability to tell the container or account server to
reverse their listings. This is done by sending reverse=TRUE_VALUE,
where TRUE_VALUE is one of the values recognized as true in common/utils:
TRUE_VALUES = set(('true', '1', 'yes', 'on', 't', 'y'))
For example:
curl -i -X GET -H "X-Auth-Token: $TOKEN" $STORAGE_URL/c/?reverse=on
I borrowed the marker-swapping code from Kevin's old change,
thanks Kevin. And Tim Burke added some real nuggets of awesomeness.
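A simplified sketch of the marker swap in plain Python (not the actual
broker code):

def reverse_listing(names, marker='', end_marker=''):
    # with reverse=true, marker and end_marker trade places and the
    # listing is walked in descending order
    marker, end_marker = end_marker, marker
    selected = [n for n in sorted(names)
                if (not marker or n > marker)
                and (not end_marker or n < end_marker)]
    return list(reversed(selected))

# reverse_listing(['a', 'b', 'c', 'd'], marker='d') -> ['c', 'b', 'a']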
DocImpact
Co-Authored-By: Kevin McDonald <kmcdonald@softlayer.com>
Co-Authored-By: Tim Burke <tim.burke@gmail.com>
Implements: blueprint reverse-object-listing
Change-Id: I5eb655360ac95042877da26d18707aebc11c02f6
Add a test for a database create request without an account.
Add a partial test for the migrate call on a database with storage_policy_index.
Change-Id: I7cfbd6bc7e2b341f433d88f600b19e54826a0e22
Signed-off-by: Ganesh Maharaj Mahalingam <ganesh.mahalingam@intel.com>
Previously, account listings that used the delimiter query param could
omit some containers if they ended with the character that follows the
delimiter.
See If196e3075612b121ef8da4a9128167d00a248c27 for the corresponding fix
for container listings.
Change-Id: I57fcb97e51f653f5f4e306a632fcb3a0fb148c4e
Noticed this while reviewing another change. The test already
ensures correct functionality of the reclaim() method in AccountBroker without
the commented-out code, so this stale code is removed.
Change-Id: I6a26a7591adef9fd794ca68a4e9c493d1127f93c
The assert_() method is deprecated and can be safely replaced by assertTrue().
This patch makes sure that running the tests does not create undesired
warnings.
Change-Id: I0602ba39ef93263386644ee68088d5f65fcb4a71
The Python 2 next() method of iterators was renamed to __next__() on
Python 3. Use the builtin next() function instead which works on Python
2 and Python 3.
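Illustrative only:

it = iter([1, 2, 3])
print(next(it))  # portable builtin; replaces the Python 2-only it.next()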
Change-Id: Ic948bc574b58f1d28c5c58e3985906dee17fa51d
Old account schemas don't send the storage_policy_index key for container rows
during replication, and if the receiving end is already running an upgraded
server it is surprised with a KeyError. Normally this would work itself out
if the old schema received any updates from the container layer, a new
container was created, or a row sync from another account database was needed -
but if the account databases have rows out of sync and there's no activity in
the account otherwise, there's nothing to force the old schemas to be
upgraded.
Rather than force the old schema that already has a complete set of container
rows to migrate even in the absence of activity, we can just fill in the default
legacy value for the storage policy index and allow the accounts to get back
in sync and migrate the next time a container update occurs.
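A rough sketch of that approach (hypothetical helper name; the real change
sits in the account backend's replication/merge path):

def fill_legacy_policy_index(container_row):
    # rows replicated from an old-schema peer carry no storage_policy_index;
    # default to the legacy policy (index 0) instead of raising KeyError
    container_row.setdefault('storage_policy_index', 0)
    return container_row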
FWIW, I was never able to get a cluster upgrade stuck in this state without
some sort of account failure that forced the replicas to get their rows out of
sync (in my case I just unlinked a pending file and then made sure to force all
my account databases to commit pending files before upgrading - leading to an
upgraded cluster that absolutely needed account-replication to solve a row
mismatch for inactive accounts with old schemas).
Closes-Bug: #1424108
Change-Id: Iaf4ef834eb24f0e11a52cc22b93a864574fabf83
Start tracking the container count per policy, including reporting
it in account HEAD and supporting installations where the DB
existed before the updated schema.
Migration is triggered by the account auditor; if the database is
not yet migrated, it will continue to report policy_stats without the
per-policy container_count keys.
Closes-Bug: #1367514
Change-Id: I07331cea177e19b3df303609a4ac510765a19162
ContainerBroker.merge_items() had a bug in it where non-ASCII Unicode
names would possibly result in duplicate entries in container
databases. AccountBroker.merge_items() doesn't do the same
bulk-operations tricks that ContainerBroker does, so it doesn't
currently have the bug. This commit just adds a test to ensure the bug
doesn't creep in should someone decide to make AccountBroker look more
like ContainerBroker someday.
Change-Id: Id2ac129828dbdf55b609d839ce4d9d42437ee0a3
The account-reaper works only on the account server holding the first replica,
and reaps accounts with "deleted" status.
On the other hand, the account-replicator doesn't replicate the status; it only
replicates the *_timestamp columns.
When Swift fails to delete the first account replica, the account-reaper never
reaps the account, because the first replica never gets marked "deleted".
This patch adds a timestamp check to the is_status_deleted method, so the
account-reaper will start to reap the account after the account-replicator
replicates the *_timestamp columns.
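A simplified sketch of the added check (not the exact AccountBroker code):

def is_status_deleted(status, put_timestamp, delete_timestamp):
    # deleted either by explicit status or because the replicated
    # delete_timestamp is newer than the put_timestamp
    return status == 'DELETED' or float(delete_timestamp) > float(put_timestamp)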
Change-Id: I75e3f15ad217a71b4fd39552cf6db2957597efca
Closes-Bug: #1304755
The normalized form of the X-Timestamp header looks like a float with a fixed
width to ensure stable string sorting - normalized timestamps look like
"1402464677.04188"
To support overwrites of existing data without modifying the original
timestamp but still maintain consistency a second internal offset
vector is appended to the normalized timestamp form, which compares and
sorts greater than the fixed width float format but less than a newer
timestamp. The internalized format of timestamps looks like
"1402464677.04188_0000000000000000" - the portion after the underscore
is the offset and is a formatted hexadecimal integer.
The internalized form is not exposed to clients in responses from Swift.
Normal client operations will not create a timestamp with an offset.
The Timestamp class in common.utils supports internalized and normalized
formatting of timestamps and also comparison of timestamp values. When the
offset value of a Timestamp is 0 - it's considered insignificant and need not
be represented in the string format; to support backwards compatibility during
a Swift upgrade the internalized and normalized form of a Timestamp with an
insignificant offset are identical. When a timestamp includes an offset it
will always be represented in the internalized form, but is still excluded
from the normalized form. Timestamps with an equivalent timestamp portion
(the float part) will compare and order by their offset. Timestamps with a
greater timestamp portion will always compare and order greater than a
Timestamp with a lesser timestamp regardless of its offset. String
comparison and ordering is guaranteed for the internalized string format, and
is backwards compatible for normalized timestamps which do not include an
offset.
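A brief usage sketch of the behaviour described above, assuming the
Timestamp class in swift.common.utils:

from swift.common.utils import Timestamp

t = Timestamp(1402464677.04188)
print(t.normal)     # 1402464677.04188
print(t.internal)   # identical to the normal form while the offset is 0

t1 = Timestamp(1402464677.04188, offset=1)
print(t1.internal)  # 1402464677.04188_0000000000000001
print(t1.normal)    # 1402464677.04188 (the offset is never shown to clients)
print(t1 > t)       # True: equal float part, greater offset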
The reconciler currently uses an offset bump to ensure that objects can move to
the wrong storage policy and be moved back. This use case is valid because
the content represented by the user-facing timestamp is not modified in any way.
Future consumers of the offset vector of timestamps should be mindful of HTTP
semantics of If-Modified and take care to avoid deviation in the response from
the object server without an accompanying change to the user facing timestamp.
DocImpact
Implements: blueprint storage-policies
Change-Id: Id85c960b126ec919a481dc62469bf172b7fb8549
* base implementation of is_deleted phrased to use _is_deleted
* wrap pre-conn coded _is_deleted inside a transaction for merge_timestamps
Implements: blueprint storage-policies
Change-Id: I6a948908c3e45b70707981d87171cb2cb910fe1e
This change updates the account HEAD handler to report per-policy
object and byte usage for the account. Cumulative values
are still reported, and policy names are used in the report
(unless the request is sent to an account server directly, in
which case policy indexes are used for easier accounting).
Below is an example of the relevant HEAD response for a cluster
with 3 policies and just a few small objects:
X-Account-Container-Count: 3
X-Account-Object-Count: 3
X-Account-Bytes-Used: 21
X-Storage-Policy-Bronze-Object-Count: 1
X-Storage-Policy-Bronze-Bytes-Used: 7
X-Storage-Policy-Silver-Object-Count: 1
X-Storage-Policy-Silver-Bytes-Used: 7
X-Storage-Policy-Gold-Object-Count: 1
X-Storage-Policy-Gold-Bytes-Used: 7
Set a DEFAULT storage_policy_index for existing container rows during
migration.
Copy the existing object_count and bytes_used into the policy_stat table
during migration.
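A rough sketch of what the migration amounts to (simplified, hypothetical
SQL run through sqlite3; the real account schema migration differs):

import sqlite3

def migrate_policy_stat(db_path):
    with sqlite3.connect(db_path) as conn:
        conn.executescript('''
            ALTER TABLE container
                ADD COLUMN storage_policy_index INTEGER DEFAULT 0;
            CREATE TABLE IF NOT EXISTS policy_stat (
                storage_policy_index INTEGER DEFAULT 0,
                object_count INTEGER DEFAULT 0,
                bytes_used INTEGER DEFAULT 0
            );
            -- seed policy 0 with the pre-policy cumulative totals
            INSERT INTO policy_stat (storage_policy_index, object_count, bytes_used)
                SELECT 0, object_count, bytes_used FROM account_stat;
        ''')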
DocImpact
Implements: blueprint storage-policies
Change-Id: I5ec251f9a8014dd89764340de927d09466c72221
This reverts commit 7760f41c3ce436cb23b4b8425db3749a3da33d32
Change-Id: I95e57a2563784a8cd5e995cc826afeac0eadbe62
Signed-off-by: Peter Portante <peter.portante@redhat.com>
Place all the methods related to on-disk layout and/or configuration
into a new common module that can be shared by the various modules
using the same on-disk layout.
Change-Id: I27ffd4665d5115ffdde649c48a4d18e12017e6a9
Signed-off-by: Peter Portante <peter.portante@redhat.com>
The main purpose of this patch is to lay the groundwork for allowing
the container and account servers to optionally use pluggable backend
implementations. The backend.py files will eventually be the module
where the backend APIs are defined via docstrings of this reference
implementation. The swift/common/db.py module will remain an internal
module used by the reference implementation.
We have a raft of changes to docstrings staged for later, but this
patch takes care to relocate ContainerBroker and AccountBroker into
their new home intact.
Change-Id: Ibab5c7605860ab768c8aa5a3161a705705689b04