# Copyright (c) 2010-2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import operator
import os
import mock
import unittest
import itertools
from contextlib import contextmanager
from shutil import rmtree
from StringIO import StringIO
from tempfile import mkdtemp
from test.unit import FakeLogger
from time import gmtime
from xml.dom import minidom
import time
import random

from eventlet import spawn, Timeout, listen
import simplejson
from swift.common.swob import Request, HeaderKeyDict
import swift.container
from swift.container import server as container_server
from swift.common import constraints
from swift.common.utils import (Timestamp, mkdirs, public, replication,
                                lock_parent_directory, json)
from test.unit import fake_http_connect
from swift.common.storage_policy import (POLICIES, StoragePolicy)
from swift.common.request_helpers import get_sys_meta_prefix

from test.unit import patch_policies

@contextmanager
def save_globals():
    orig_http_connect = getattr(swift.container.server, 'http_connect',
                                None)
    try:
        yield True
    finally:
        swift.container.server.http_connect = orig_http_connect

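The `save_globals()` context manager above snapshots a module attribute and restores it on exit, so tests can monkey-patch `http_connect` without leaking state into other tests. A minimal, self-contained sketch of the same pattern; the `demo` module and its attribute are illustrative stand-ins, not part of Swift:

```python
import types
from contextlib import contextmanager

# Stand-in module whose attribute a test wants to monkey-patch.
demo = types.ModuleType('demo')
demo.http_connect = lambda: 'real'


@contextmanager
def save_attr(module, name):
    # Snapshot the attribute so the finally block restores it even if the
    # body raises -- the same shape as save_globals() above.
    orig = getattr(module, name, None)
    try:
        yield
    finally:
        setattr(module, name, orig)


with save_attr(demo, 'http_connect'):
    demo.http_connect = lambda: 'fake'
    assert demo.http_connect() == 'fake'
# Outside the with-block the original function is back.
assert demo.http_connect() == 'real'
```

Because the restore happens in `finally`, the original value comes back even when the test body raises.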
@patch_policies
class TestContainerController(unittest.TestCase):
    """Test swift.container.server.ContainerController"""

    def setUp(self):
        """Set up for testing swift.container.server.ContainerController"""
        self.testdir = os.path.join(
            mkdtemp(), 'tmp_test_container_server_ContainerController')
        mkdirs(self.testdir)
        rmtree(self.testdir)
        mkdirs(os.path.join(self.testdir, 'sda1'))
        mkdirs(os.path.join(self.testdir, 'sda1', 'tmp'))
        self.controller = container_server.ContainerController(
            {'devices': self.testdir, 'mount_check': 'false'})
        # some of the policy tests want at least two policies
        self.assert_(len(POLICIES) > 1)

    def tearDown(self):
        rmtree(os.path.dirname(self.testdir), ignore_errors=1)

    def _update_object_put_headers(self, req):
        """
        Override this method in test subclasses to test post upgrade
        behavior.
        """
        pass

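`_update_object_put_headers()` is a template-method hook: this class issues plain requests, and a subclass can override the hook to decorate every object PUT (for example, a post-upgrade suite can add a policy header) while reusing all the test methods unchanged. A sketch of the pattern using only the standard library; the class names are illustrative, the hook here takes a plain dict rather than the `swob.Request` used above, and the policy value is hypothetical:

```python
import unittest


class BaseContainerCase(unittest.TestCase):
    """Base case whose requests carry no extra headers by default."""

    def _update_object_put_headers(self, headers):
        # Hook: subclasses override this to decorate outgoing requests.
        pass

    def test_put_headers(self):
        headers = {'X-Timestamp': '0'}
        self._update_object_put_headers(headers)
        # The base class only ever sends the timestamp.
        self.assertIn('X-Timestamp', headers)


class PolicyContainerCase(BaseContainerCase):
    """Subclass that injects a (hypothetical) policy header everywhere."""

    def _update_object_put_headers(self, headers):
        headers['X-Backend-Storage-Policy-Index'] = '1'
```

Every `test_*` method inherited by `PolicyContainerCase` now runs with the extra header, which is exactly how the upgrade-behavior subclasses reuse this file's tests.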
    def _check_put_container_storage_policy(self, req, policy_index):
        resp = req.get_response(self.controller)
        self.assertEqual(201, resp.status_int)
        req = Request.blank(req.path, method='HEAD')
        resp = req.get_response(self.controller)
        self.assertEqual(204, resp.status_int)
        self.assertEqual(str(policy_index),
                         resp.headers['X-Backend-Storage-Policy-Index'])

    def test_get_and_validate_policy_index(self):
        # no policy is OK
        req = Request.blank('/sda1/p/a/container_default', method='PUT',
                            headers={'X-Timestamp': '0'})
        self._check_put_container_storage_policy(req, POLICIES.default.idx)

        # bogus policies
        for policy in ('nada', 999):
            req = Request.blank('/sda1/p/a/c_%s' % policy, method='PUT',
                                headers={
                                    'X-Timestamp': '0',
                                    'X-Backend-Storage-Policy-Index': policy,
                                })
            resp = req.get_response(self.controller)
            self.assertEqual(400, resp.status_int)
            self.assert_('invalid' in resp.body.lower())

        # good policies
        for policy in POLICIES:
            req = Request.blank('/sda1/p/a/c_%s' % policy.name, method='PUT',
                                headers={
                                    'X-Timestamp': '0',
                                    'X-Backend-Storage-Policy-Index':
                                        policy.idx,
                                })
            self._check_put_container_storage_policy(req, policy.idx)

    def test_acl_container(self):
        # Ensure no acl by default
        req = Request.blank(
            '/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT'},
            headers={'X-Timestamp': '0'})
        resp = req.get_response(self.controller)
        self.assert_(resp.status.startswith('201'))
        req = Request.blank(
            '/sda1/p/a/c', environ={'REQUEST_METHOD': 'HEAD'})
        response = req.get_response(self.controller)
        self.assert_(response.status.startswith('204'))
        self.assert_('x-container-read' not in response.headers)
        self.assert_('x-container-write' not in response.headers)
        # Ensure POSTing acls works
        req = Request.blank(
            '/sda1/p/a/c', environ={'REQUEST_METHOD': 'POST'},
            headers={'X-Timestamp': '1', 'X-Container-Read': '.r:*',
                     'X-Container-Write': 'account:user'})
        resp = req.get_response(self.controller)
        self.assert_(resp.status.startswith('204'))
        req = Request.blank(
            '/sda1/p/a/c', environ={'REQUEST_METHOD': 'HEAD'})
        response = req.get_response(self.controller)
        self.assert_(response.status.startswith('204'))
        self.assertEquals(response.headers.get('x-container-read'), '.r:*')
        self.assertEquals(response.headers.get('x-container-write'),
                          'account:user')
        # Ensure we can clear acls on POST
        req = Request.blank(
            '/sda1/p/a/c', environ={'REQUEST_METHOD': 'POST'},
            headers={'X-Timestamp': '3', 'X-Container-Read': '',
                     'X-Container-Write': ''})
        resp = req.get_response(self.controller)
        self.assert_(resp.status.startswith('204'))
        req = Request.blank(
            '/sda1/p/a/c', environ={'REQUEST_METHOD': 'HEAD'})
        response = req.get_response(self.controller)
        self.assert_(response.status.startswith('204'))
        self.assert_('x-container-read' not in response.headers)
        self.assert_('x-container-write' not in response.headers)
        # Ensure PUTting acls works
        req = Request.blank(
            '/sda1/p/a/c2', environ={'REQUEST_METHOD': 'PUT'},
            headers={'X-Timestamp': '4', 'X-Container-Read': '.r:*',
                     'X-Container-Write': 'account:user'})
        resp = req.get_response(self.controller)
        self.assert_(resp.status.startswith('201'))
        req = Request.blank('/sda1/p/a/c2', environ={'REQUEST_METHOD': 'HEAD'})
        response = req.get_response(self.controller)
        self.assert_(response.status.startswith('204'))
        self.assertEquals(response.headers.get('x-container-read'), '.r:*')
        self.assertEquals(response.headers.get('x-container-write'),
                          'account:user')

    def test_HEAD(self):
        start = int(time.time())
        ts = (Timestamp(t).internal for t in itertools.count(start))
        req = Request.blank('/sda1/p/a/c', method='PUT', headers={
            'x-timestamp': ts.next()})
        req.get_response(self.controller)
        req = Request.blank('/sda1/p/a/c', method='HEAD')
        response = req.get_response(self.controller)
        self.assertEqual(response.status_int, 204)
        self.assertEqual(response.headers['x-container-bytes-used'], '0')
        self.assertEqual(response.headers['x-container-object-count'], '0')
        obj_put_request = Request.blank(
            '/sda1/p/a/c/o', method='PUT', headers={
                'x-timestamp': ts.next(),
                'x-size': 42,
                'x-content-type': 'text/plain',
                'x-etag': 'x',
            })
        self._update_object_put_headers(obj_put_request)
        obj_put_resp = obj_put_request.get_response(self.controller)
        self.assertEqual(obj_put_resp.status_int // 100, 2)
        # re-issue HEAD request
        response = req.get_response(self.controller)
        self.assertEqual(response.status_int // 100, 2)
        self.assertEqual(response.headers['x-container-bytes-used'], '42')
        self.assertEqual(response.headers['x-container-object-count'], '1')
        # created at time...
        created_at_header = Timestamp(response.headers['x-timestamp'])
        self.assertEqual(response.headers['x-timestamp'],
                         created_at_header.normal)
        self.assert_(created_at_header >= start)
        self.assertEqual(response.headers['x-put-timestamp'],
                         Timestamp(start).normal)

        # backend headers
        self.assertEqual(int(response.headers
                             ['X-Backend-Storage-Policy-Index']),
                         int(POLICIES.default))
        self.assert_(
            Timestamp(response.headers['x-backend-timestamp']) >= start)
        self.assertEqual(response.headers['x-backend-put-timestamp'],
                         Timestamp(start).internal)
        self.assertEqual(response.headers['x-backend-delete-timestamp'],
                         Timestamp(0).internal)
        self.assertEqual(response.headers['x-backend-status-changed-at'],
                         Timestamp(start).internal)

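`test_HEAD` above drives its `x-timestamp` headers from a generator over `itertools.count`, so each successive request gets a strictly increasing timestamp. A simplified, self-contained sketch of that idiom; `internal_format` is only a stand-in for Swift's `Timestamp(t).internal`, which is likewise a zero-padded fixed-width string so that lexicographic order matches numeric order:

```python
import itertools


def internal_format(t):
    # Zero-padded fixed-width rendering: '%016.5f' % 1234 gives
    # '0000001234.00000', so string comparison agrees with numeric order.
    return '%016.5f' % t


start = 1234
ts = (internal_format(t) for t in itertools.count(start))
first, second = next(ts), next(ts)
assert first == '0000001234.00000'
assert second == '0000001235.00000'
assert first < second
```

Each `next()` (spelled `ts.next()` in this Python 2 file) advances the counter, which is why the container PUT and the later object PUT above never share a timestamp.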
    def test_HEAD_not_found(self):
        req = Request.blank('/sda1/p/a/c', method='HEAD')
        resp = req.get_response(self.controller)
        self.assertEqual(resp.status_int, 404)
        self.assertEqual(int(resp.headers['X-Backend-Storage-Policy-Index']),
                         0)
        self.assertEqual(resp.headers['x-backend-timestamp'],
                         Timestamp(0).internal)
        self.assertEqual(resp.headers['x-backend-put-timestamp'],
                         Timestamp(0).internal)
        self.assertEqual(resp.headers['x-backend-status-changed-at'],
                         Timestamp(0).internal)
        self.assertEqual(resp.headers['x-backend-delete-timestamp'],
                         Timestamp(0).internal)
        for header in ('x-container-object-count', 'x-container-bytes-used',
                       'x-timestamp', 'x-put-timestamp'):
            self.assertEqual(resp.headers[header], None)

    def test_deleted_headers(self):
        ts = (Timestamp(t).internal for t in
              itertools.count(int(time.time())))
        request_method_times = {
            'PUT': ts.next(),
            'DELETE': ts.next(),
        }
        # setup a deleted container
        for method in ('PUT', 'DELETE'):
            x_timestamp = request_method_times[method]
            req = Request.blank('/sda1/p/a/c', method=method,
                                headers={'x-timestamp': x_timestamp})
            resp = req.get_response(self.controller)
            self.assertEqual(resp.status_int // 100, 2)

        for method in ('GET', 'HEAD'):
            req = Request.blank('/sda1/p/a/c', method=method)
            resp = req.get_response(self.controller)
            self.assertEqual(resp.status_int, 404)
            # backend headers
            self.assertEqual(int(resp.headers[
                'X-Backend-Storage-Policy-Index']),
                int(POLICIES.default))
            self.assert_(Timestamp(resp.headers['x-backend-timestamp']) >=
                         Timestamp(request_method_times['PUT']))
            self.assertEqual(resp.headers['x-backend-put-timestamp'],
                             request_method_times['PUT'])
            self.assertEqual(resp.headers['x-backend-delete-timestamp'],
                             request_method_times['DELETE'])
            self.assertEqual(resp.headers['x-backend-status-changed-at'],
                             request_method_times['DELETE'])
            for header in ('x-container-object-count',
                           'x-container-bytes-used', 'x-timestamp',
                           'x-put-timestamp'):
                self.assertEqual(resp.headers[header], None)

|
2010-07-12 17:03:45 -05:00
|
|
|
|
2013-03-20 19:26:45 -07:00
|
|
|
    def test_HEAD_invalid_partition(self):
        req = Request.blank('/sda1/./a/c', environ={'REQUEST_METHOD': 'HEAD',
                                                    'HTTP_X_TIMESTAMP': '1'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 400)

    def test_HEAD_insufficient_storage(self):
        self.controller = container_server.ContainerController(
            {'devices': self.testdir})
        req = Request.blank(
            '/sda-null/p/a/c', environ={'REQUEST_METHOD': 'HEAD',
                                        'HTTP_X_TIMESTAMP': '1'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 507)

    def test_HEAD_invalid_content_type(self):
        req = Request.blank(
            '/sda1/p/a/c', environ={'REQUEST_METHOD': 'HEAD'},
            headers={'Accept': 'application/plain'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 406)

    def test_HEAD_invalid_format(self):
        format = '%D1%BD%8A9'  # invalid UTF-8; should be %E1%BD%8A9 (E -> D)
        req = Request.blank(
            '/sda1/p/a/c?format=' + format,
            environ={'REQUEST_METHOD': 'HEAD'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 400)

    def test_PUT(self):
        req = Request.blank(
            '/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT',
                                    'HTTP_X_TIMESTAMP': '1'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 201)
        req = Request.blank(
            '/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT',
                                    'HTTP_X_TIMESTAMP': '2'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 202)

    def test_PUT_simulated_create_race(self):
        state = ['initial']

        from swift.container.backend import ContainerBroker as OrigCoBr

        class InterceptedCoBr(OrigCoBr):

            def __init__(self, *args, **kwargs):
                super(InterceptedCoBr, self).__init__(*args, **kwargs)
                if state[0] == 'initial':
                    # Do nothing initially
                    pass
                elif state[0] == 'race':
                    # Save the original db_file attribute value
                    self._saved_db_file = self.db_file
                    self.db_file += '.doesnotexist'

            def initialize(self, *args, **kwargs):
                if state[0] == 'initial':
                    # Do nothing initially
                    pass
                elif state[0] == 'race':
                    # Restore the original db_file attribute to get the race
                    # behavior
                    self.db_file = self._saved_db_file
                return super(InterceptedCoBr, self).initialize(
                    *args, **kwargs)

        with mock.patch("swift.container.server.ContainerBroker",
                        InterceptedCoBr):
            req = Request.blank(
                '/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT',
                                        'HTTP_X_TIMESTAMP': '1'})
            resp = req.get_response(self.controller)
            self.assertEqual(resp.status_int, 201)
            state[0] = "race"
            req = Request.blank(
                '/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT',
                                        'HTTP_X_TIMESTAMP': '1'})
            resp = req.get_response(self.controller)
            self.assertEqual(resp.status_int, 202)

    def test_PUT_obj_not_found(self):
        req = Request.blank(
            '/sda1/p/a/c/o', environ={'REQUEST_METHOD': 'PUT'},
            headers={'X-Timestamp': '1', 'X-Size': '0',
                     'X-Content-Type': 'text/plain', 'X-ETag': 'e'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 404)

    def test_PUT_good_policy_specified(self):
        policy = random.choice(list(POLICIES))
        # Set metadata header
        req = Request.blank('/sda1/p/a/c', method='PUT',
                            headers={'X-Timestamp': Timestamp(1).internal,
                                     'X-Backend-Storage-Policy-Index':
                                     policy.idx})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 201)

        # now make sure we read it back
        req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'GET'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.headers.get('X-Backend-Storage-Policy-Index'),
                          str(policy.idx))

    def test_PUT_no_policy_specified(self):
        # Set metadata header
        req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT'},
                            headers={'X-Timestamp': Timestamp(1).internal})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 201)

        # now make sure the default was used (pol 1)
        req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'GET'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.headers.get('X-Backend-Storage-Policy-Index'),
                          str(POLICIES.default.idx))

    def test_PUT_bad_policy_specified(self):
        # Set metadata header
        req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT'},
                            headers={'X-Timestamp': Timestamp(1).internal,
                                     'X-Backend-Storage-Policy-Index':
                                     'nada'})
        resp = req.get_response(self.controller)
        # make sure we get bad response
        self.assertEquals(resp.status_int, 400)

    def test_PUT_no_policy_change(self):
        ts = (Timestamp(t).internal for t in itertools.count(time.time()))
        policy = random.choice(list(POLICIES))
        # Set metadata header
        req = Request.blank('/sda1/p/a/c', method='PUT', headers={
            'X-Timestamp': ts.next(),
            'X-Backend-Storage-Policy-Index': policy.idx})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 201)
        req = Request.blank('/sda1/p/a/c')
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 204)
        # make sure we get the right index back
        self.assertEquals(resp.headers.get('X-Backend-Storage-Policy-Index'),
                          str(policy.idx))

        # now try to update w/o changing the policy
        for method in ('POST', 'PUT'):
            req = Request.blank('/sda1/p/a/c', method=method, headers={
                'X-Timestamp': ts.next(),
                'X-Backend-Storage-Policy-Index': policy.idx
            })
            resp = req.get_response(self.controller)
            self.assertEquals(resp.status_int // 100, 2)
        # make sure we get the right index back
        req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'GET'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 204)
        self.assertEquals(resp.headers.get('X-Backend-Storage-Policy-Index'),
                          str(policy.idx))

    def test_PUT_bad_policy_change(self):
        ts = (Timestamp(t).internal for t in itertools.count(time.time()))
        policy = random.choice(list(POLICIES))
        # Set metadata header
        req = Request.blank('/sda1/p/a/c', method='PUT', headers={
            'X-Timestamp': ts.next(),
            'X-Backend-Storage-Policy-Index': policy.idx})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 201)
        req = Request.blank('/sda1/p/a/c')
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 204)
        # make sure we get the right index back
        self.assertEquals(resp.headers.get('X-Backend-Storage-Policy-Index'),
                          str(policy.idx))

        other_policies = [p for p in POLICIES if p != policy]
        for other_policy in other_policies:
            # now try to change it and make sure we get a conflict
            req = Request.blank('/sda1/p/a/c', method='PUT', headers={
                'X-Timestamp': ts.next(),
                'X-Backend-Storage-Policy-Index': other_policy.idx
            })
            resp = req.get_response(self.controller)
            self.assertEquals(resp.status_int, 409)

        # and make sure there is no change!
        req = Request.blank('/sda1/p/a/c')
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 204)
        # make sure we get the right index back
        self.assertEquals(resp.headers.get('X-Backend-Storage-Policy-Index'),
                          str(policy.idx))

    def test_POST_ignores_policy_change(self):
        ts = (Timestamp(t).internal for t in itertools.count(time.time()))
        policy = random.choice(list(POLICIES))
        req = Request.blank('/sda1/p/a/c', method='PUT', headers={
            'X-Timestamp': ts.next(),
            'X-Backend-Storage-Policy-Index': policy.idx})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 201)
        req = Request.blank('/sda1/p/a/c')
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 204)
        # make sure we get the right index back
        self.assertEquals(resp.headers.get('X-Backend-Storage-Policy-Index'),
                          str(policy.idx))

        other_policies = [p for p in POLICIES if p != policy]
        for other_policy in other_policies:
            # now try to change it and make sure we get a conflict
            req = Request.blank('/sda1/p/a/c', method='POST', headers={
                'X-Timestamp': ts.next(),
                'X-Backend-Storage-Policy-Index': other_policy.idx
            })
            resp = req.get_response(self.controller)
            # valid request
            self.assertEquals(resp.status_int // 100, 2)

            # but it does nothing
            req = Request.blank('/sda1/p/a/c')
            resp = req.get_response(self.controller)
            self.assertEquals(resp.status_int, 204)
            # make sure we get the right index back
            self.assertEquals(
                resp.headers.get('X-Backend-Storage-Policy-Index'),
                str(policy.idx))

    def test_PUT_no_policy_for_existing_default(self):
        ts = (Timestamp(t).internal for t in
              itertools.count(int(time.time())))
        # create a container with the default storage policy
        req = Request.blank('/sda1/p/a/c', method='PUT', headers={
            'X-Timestamp': ts.next(),
        })
        resp = req.get_response(self.controller)
        self.assertEqual(resp.status_int, 201)  # sanity check

        # check the policy index
        req = Request.blank('/sda1/p/a/c', method='HEAD')
        resp = req.get_response(self.controller)
        self.assertEqual(resp.status_int, 204)
        self.assertEqual(resp.headers['X-Backend-Storage-Policy-Index'],
                         str(POLICIES.default.idx))

        # put again without specifying the storage policy
        req = Request.blank('/sda1/p/a/c', method='PUT', headers={
            'X-Timestamp': ts.next(),
        })
        resp = req.get_response(self.controller)
        self.assertEqual(resp.status_int, 202)  # sanity check

        # policy index is unchanged
        req = Request.blank('/sda1/p/a/c', method='HEAD')
        resp = req.get_response(self.controller)
        self.assertEqual(resp.status_int, 204)
        self.assertEqual(resp.headers['X-Backend-Storage-Policy-Index'],
                         str(POLICIES.default.idx))

    def test_PUT_proxy_default_no_policy_for_existing_default(self):
        # make it look like the proxy has a different default than we do, like
        # during a config change restart across a multi node cluster.
        proxy_default = random.choice([p for p in POLICIES if not
                                       p.is_default])
        ts = (Timestamp(t).internal for t in
              itertools.count(int(time.time())))
        # create a container with the default storage policy
        req = Request.blank('/sda1/p/a/c', method='PUT', headers={
            'X-Timestamp': ts.next(),
            'X-Backend-Storage-Policy-Default': int(proxy_default),
        })
        resp = req.get_response(self.controller)
        self.assertEqual(resp.status_int, 201)  # sanity check

        # check the policy index
        req = Request.blank('/sda1/p/a/c', method='HEAD')
        resp = req.get_response(self.controller)
        self.assertEqual(resp.status_int, 204)
        self.assertEqual(int(resp.headers['X-Backend-Storage-Policy-Index']),
                         int(proxy_default))

        # put again without proxy specifying the different default
        req = Request.blank('/sda1/p/a/c', method='PUT', headers={
            'X-Timestamp': ts.next(),
            'X-Backend-Storage-Policy-Default': int(POLICIES.default),
        })
        resp = req.get_response(self.controller)
        self.assertEqual(resp.status_int, 202)  # sanity check

        # policy index is unchanged
        req = Request.blank('/sda1/p/a/c', method='HEAD')
        resp = req.get_response(self.controller)
        self.assertEqual(resp.status_int, 204)
        self.assertEqual(int(resp.headers['X-Backend-Storage-Policy-Index']),
                         int(proxy_default))

    def test_PUT_no_policy_for_existing_non_default(self):
        ts = (Timestamp(t).internal for t in itertools.count(time.time()))
        non_default_policy = [p for p in POLICIES if not p.is_default][0]
        # create a container with the non-default storage policy
        req = Request.blank('/sda1/p/a/c', method='PUT', headers={
            'X-Timestamp': ts.next(),
            'X-Backend-Storage-Policy-Index': non_default_policy.idx,
        })
        resp = req.get_response(self.controller)
        self.assertEqual(resp.status_int, 201)  # sanity check

        # check the policy index
        req = Request.blank('/sda1/p/a/c', method='HEAD')
        resp = req.get_response(self.controller)
        self.assertEqual(resp.status_int, 204)
        self.assertEqual(resp.headers['X-Backend-Storage-Policy-Index'],
                         str(non_default_policy.idx))

        # put again without specifying the storage policy
        req = Request.blank('/sda1/p/a/c', method='PUT', headers={
            'X-Timestamp': ts.next(),
        })
        resp = req.get_response(self.controller)
        self.assertEqual(resp.status_int, 202)  # sanity check

        # policy index is unchanged
        req = Request.blank('/sda1/p/a/c', method='HEAD')
        resp = req.get_response(self.controller)
        self.assertEqual(resp.status_int, 204)
        self.assertEqual(resp.headers['X-Backend-Storage-Policy-Index'],
                         str(non_default_policy.idx))

    def test_PUT_GET_metadata(self):
        # Set metadata header
        req = Request.blank(
            '/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT'},
            headers={'X-Timestamp': Timestamp(1).internal,
                     'X-Container-Meta-Test': 'Value'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 201)
        req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'GET'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 204)
        self.assertEquals(resp.headers.get('x-container-meta-test'), 'Value')
        # Set another metadata header, ensuring old one doesn't disappear
        req = Request.blank(
            '/sda1/p/a/c', environ={'REQUEST_METHOD': 'POST'},
            headers={'X-Timestamp': Timestamp(1).internal,
                     'X-Container-Meta-Test2': 'Value2'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 204)
        req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'GET'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 204)
        self.assertEquals(resp.headers.get('x-container-meta-test'), 'Value')
        self.assertEquals(resp.headers.get('x-container-meta-test2'), 'Value2')
        # Update metadata header
        req = Request.blank(
            '/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT'},
            headers={'X-Timestamp': Timestamp(3).internal,
                     'X-Container-Meta-Test': 'New Value'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 202)
        req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'GET'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 204)
        self.assertEquals(resp.headers.get('x-container-meta-test'),
                          'New Value')
        # Send old update to metadata header
        req = Request.blank(
            '/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT'},
            headers={'X-Timestamp': Timestamp(2).internal,
                     'X-Container-Meta-Test': 'Old Value'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 202)
        req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'GET'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 204)
        self.assertEquals(resp.headers.get('x-container-meta-test'),
                          'New Value')
        # Remove metadata header (by setting it to empty)
        req = Request.blank(
            '/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT'},
            headers={'X-Timestamp': Timestamp(4).internal,
                     'X-Container-Meta-Test': ''})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 202)
        req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'GET'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 204)
        self.assert_('x-container-meta-test' not in resp.headers)

    def test_PUT_GET_sys_metadata(self):
        prefix = get_sys_meta_prefix('container')
        key = '%sTest' % prefix
        key2 = '%sTest2' % prefix
        # Set metadata header
        req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT'},
                            headers={'X-Timestamp': Timestamp(1).internal,
                                     key: 'Value'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 201)
        req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'GET'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 204)
        self.assertEquals(resp.headers.get(key.lower()), 'Value')
        # Set another metadata header, ensuring old one doesn't disappear
        req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'POST'},
                            headers={'X-Timestamp': Timestamp(1).internal,
                                     key2: 'Value2'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 204)
        req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'GET'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 204)
        self.assertEquals(resp.headers.get(key.lower()), 'Value')
        self.assertEquals(resp.headers.get(key2.lower()), 'Value2')
        # Update metadata header
        req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT'},
                            headers={'X-Timestamp': Timestamp(3).internal,
                                     key: 'New Value'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 202)
        req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'GET'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 204)
        self.assertEquals(resp.headers.get(key.lower()),
                          'New Value')
        # Send old update to metadata header
        req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT'},
                            headers={'X-Timestamp': Timestamp(2).internal,
                                     key: 'Old Value'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 202)
        req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'GET'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 204)
        self.assertEquals(resp.headers.get(key.lower()),
                          'New Value')
        # Remove metadata header (by setting it to empty)
        req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT'},
                            headers={'X-Timestamp': Timestamp(4).internal,
                                     key: ''})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 202)
        req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'GET'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 204)
        self.assert_(key.lower() not in resp.headers)

    def test_PUT_invalid_partition(self):
        req = Request.blank('/sda1/./a/c', environ={'REQUEST_METHOD': 'PUT',
                                                    'HTTP_X_TIMESTAMP': '1'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 400)

    def test_PUT_timestamp_not_float(self):
        req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT',
                                                    'HTTP_X_TIMESTAMP': '0'})
        req.get_response(self.controller)
        req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT'},
                            headers={'X-Timestamp': 'not-float'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 400)

    def test_PUT_insufficient_storage(self):
        self.controller = container_server.ContainerController(
            {'devices': self.testdir})
        req = Request.blank(
            '/sda-null/p/a/c', environ={'REQUEST_METHOD': 'PUT',
                                        'HTTP_X_TIMESTAMP': '1'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 507)

    def test_POST_HEAD_metadata(self):
        req = Request.blank(
            '/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT'},
            headers={'X-Timestamp': Timestamp(1).internal})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 201)
        # Set metadata header
        req = Request.blank(
            '/sda1/p/a/c', environ={'REQUEST_METHOD': 'POST'},
            headers={'X-Timestamp': Timestamp(1).internal,
                     'X-Container-Meta-Test': 'Value'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 204)
        req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'HEAD'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 204)
        self.assertEquals(resp.headers.get('x-container-meta-test'), 'Value')
        # Update metadata header
        req = Request.blank(
            '/sda1/p/a/c', environ={'REQUEST_METHOD': 'POST'},
            headers={'X-Timestamp': Timestamp(3).internal,
                     'X-Container-Meta-Test': 'New Value'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 204)
        req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'HEAD'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 204)
        self.assertEquals(resp.headers.get('x-container-meta-test'),
                          'New Value')
        # Send old update to metadata header
        req = Request.blank(
            '/sda1/p/a/c', environ={'REQUEST_METHOD': 'POST'},
            headers={'X-Timestamp': Timestamp(2).internal,
                     'X-Container-Meta-Test': 'Old Value'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 204)
        req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'HEAD'})
returns a tuple of (x, y, z, error) and the caller has to see if it
got (value, value, value, None) or (None, None, None, errorvalue). Now
we can just raise the error.
Change-Id: I316873df289160d526487ad116f6fbb9a575e3de
2013-08-14 11:55:15 -07:00
|
|
|
resp = req.get_response(self.controller)
|
2010-08-10 12:18:15 -07:00
|
|
|
self.assertEquals(resp.status_int, 204)
|
|
|
|
self.assertEquals(resp.headers.get('x-container-meta-test'),
|
|
|
|
'New Value')
|
|
|
|
# Remove metadata header (by setting it to empty)
|
2013-09-01 15:10:39 -04:00
|
|
|
req = Request.blank(
|
|
|
|
'/sda1/p/a/c', environ={'REQUEST_METHOD': 'POST'},
|
2014-06-10 22:17:47 -07:00
|
|
|
headers={'X-Timestamp': Timestamp(4).internal,
|
2010-08-10 12:18:15 -07:00
|
|
|
'X-Container-Meta-Test': ''})
|
2013-08-16 16:24:00 -07:00
|
|
|
resp = req.get_response(self.controller)
|
2010-08-10 12:18:15 -07:00
|
|
|
self.assertEquals(resp.status_int, 204)
|
|
|
|
req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'HEAD'})
|
Refactor how we pick listings' content type.
There were a few different places where we had some repeated code to
figure out what format an account or container listing response should
be in (text, JSON, or XML). Now that's been pulled into a single
function.
As part of this, you can now raise HTTPException subclasses in proxy
controllers instead of laboriously plumbing error responses all the
way back up to swift.proxy.server.Application.handle_request(). This
lets us avoid certain ugly patterns, like the one where a method
returns a tuple of (x, y, z, error) and the caller has to see if it
got (value, value, value, None) or (None, None, None, errorvalue). Now
we can just raise the error.
Change-Id: I316873df289160d526487ad116f6fbb9a575e3de
2013-08-14 11:55:15 -07:00
|
|
|
resp = req.get_response(self.controller)
|
2010-08-10 12:18:15 -07:00
|
|
|
self.assertEquals(resp.status_int, 204)
|
|
|
|
self.assert_('x-container-meta-test' not in resp.headers)

    def test_POST_HEAD_sys_metadata(self):
        prefix = get_sys_meta_prefix('container')
        key = '%sTest' % prefix
        req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT'},
                            headers={'X-Timestamp': Timestamp(1).internal})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 201)
        # Set metadata header
        req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'POST'},
                            headers={'X-Timestamp': Timestamp(1).internal,
                                     key: 'Value'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 204)
        req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'HEAD'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 204)
        self.assertEquals(resp.headers.get(key.lower()), 'Value')
        # Update metadata header
        req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'POST'},
                            headers={'X-Timestamp': Timestamp(3).internal,
                                     key: 'New Value'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 204)
        req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'HEAD'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 204)
        self.assertEquals(resp.headers.get(key.lower()),
                          'New Value')
        # Send old update to metadata header
        req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'POST'},
                            headers={'X-Timestamp': Timestamp(2).internal,
                                     key: 'Old Value'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 204)
        req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'HEAD'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 204)
        self.assertEquals(resp.headers.get(key.lower()),
                          'New Value')
        # Remove metadata header (by setting it to empty)
        req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'POST'},
                            headers={'X-Timestamp': Timestamp(4).internal,
                                     key: ''})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 204)
        req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'HEAD'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 204)
        self.assert_(key.lower() not in resp.headers)

    def test_POST_invalid_partition(self):
        req = Request.blank('/sda1/./a/c', environ={'REQUEST_METHOD': 'POST',
                            'HTTP_X_TIMESTAMP': '1'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 400)

    def test_POST_timestamp_not_float(self):
        req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT',
                            'HTTP_X_TIMESTAMP': '0'})
        req.get_response(self.controller)
        req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'POST'},
                            headers={'X-Timestamp': 'not-float'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 400)

    def test_POST_insufficient_storage(self):
        self.controller = container_server.ContainerController(
            {'devices': self.testdir})
        req = Request.blank(
            '/sda-null/p/a/c', environ={'REQUEST_METHOD': 'POST',
                                        'HTTP_X_TIMESTAMP': '1'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 507)

    def test_POST_invalid_container_sync_to(self):
        self.controller = container_server.ContainerController(
            {'devices': self.testdir})
        req = Request.blank(
            '/sda-null/p/a/c', environ={'REQUEST_METHOD': 'POST',
                                        'HTTP_X_TIMESTAMP': '1'},
            headers={'x-container-sync-to': '192.168.0.1'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 400)

    def test_POST_after_DELETE_not_found(self):
        req = Request.blank('/sda1/p/a/c',
                            environ={'REQUEST_METHOD': 'PUT'},
                            headers={'X-Timestamp': '1'})
        resp = req.get_response(self.controller)
        req = Request.blank('/sda1/p/a/c',
                            environ={'REQUEST_METHOD': 'DELETE'},
                            headers={'X-Timestamp': '2'})
        resp = req.get_response(self.controller)
        req = Request.blank('/sda1/p/a/c/',
                            environ={'REQUEST_METHOD': 'POST'},
                            headers={'X-Timestamp': '3'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 404)

    def test_DELETE_obj_not_found(self):
        req = Request.blank(
            '/sda1/p/a/c/o',
            environ={'REQUEST_METHOD': 'DELETE'},
            headers={'X-Timestamp': '1'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 404)

    def test_DELETE_container_not_found(self):
        req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT',
                            'HTTP_X_TIMESTAMP': '0'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 201)
        req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'DELETE',
                            'HTTP_X_TIMESTAMP': '1'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 404)

    def test_PUT_utf8(self):
        snowman = u'\u2603'
        container_name = snowman.encode('utf-8')
        req = Request.blank(
            '/sda1/p/a/%s' % container_name, environ={
                'REQUEST_METHOD': 'PUT',
                'HTTP_X_TIMESTAMP': '1'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 201)

    def test_account_update_mismatched_host_device(self):
        req = Request.blank(
            '/sda1/p/a/c',
            environ={'REQUEST_METHOD': 'PUT',
                     'HTTP_X_TIMESTAMP': '1'},
            headers={'X-Timestamp': '0000000001.00000',
                     'X-Account-Host': '127.0.0.1:0',
                     'X-Account-Partition': '123',
                     'X-Account-Device': 'sda1,sda2'})
        broker = self.controller._get_container_broker('sda1', 'p', 'a', 'c')
        resp = self.controller.account_update(req, 'a', 'c', broker)
        self.assertEquals(resp.status_int, 400)

    def test_account_update_account_override_deleted(self):
        bindsock = listen(('127.0.0.1', 0))
        req = Request.blank(
            '/sda1/p/a/c',
            environ={'REQUEST_METHOD': 'PUT',
                     'HTTP_X_TIMESTAMP': '1'},
            headers={'X-Timestamp': '0000000001.00000',
                     'X-Account-Host': '%s:%s' %
                     bindsock.getsockname(),
                     'X-Account-Partition': '123',
                     'X-Account-Device': 'sda1',
                     'X-Account-Override-Deleted': 'yes'})
        with save_globals():
            new_connect = fake_http_connect(200, count=123)
            swift.container.server.http_connect = new_connect
            resp = req.get_response(self.controller)
            self.assertEquals(resp.status_int, 201)

    def test_PUT_account_update(self):
        bindsock = listen(('127.0.0.1', 0))

        def accept(return_code, expected_timestamp):
            try:
                with Timeout(3):
                    sock, addr = bindsock.accept()
                    inc = sock.makefile('rb')
                    out = sock.makefile('wb')
                    out.write('HTTP/1.1 %d OK\r\nContent-Length: 0\r\n\r\n' %
                              return_code)
                    out.flush()
                    self.assertEquals(inc.readline(),
                                      'PUT /sda1/123/a/c HTTP/1.1\r\n')
                    headers = {}
                    line = inc.readline()
                    while line and line != '\r\n':
                        headers[line.split(':')[0].lower()] = \
                            line.split(':')[1].strip()
                        line = inc.readline()
                    self.assertEquals(headers['x-put-timestamp'],
                                      expected_timestamp)
            except BaseException as err:
                return err
            return None

        req = Request.blank(
            '/sda1/p/a/c',
            environ={'REQUEST_METHOD': 'PUT'},
            headers={'X-Timestamp': Timestamp(1).internal,
                     'X-Account-Host': '%s:%s' % bindsock.getsockname(),
                     'X-Account-Partition': '123',
                     'X-Account-Device': 'sda1'})
        event = spawn(accept, 201, Timestamp(1).internal)
        try:
            with Timeout(3):
                resp = req.get_response(self.controller)
                self.assertEquals(resp.status_int, 201)
        finally:
            err = event.wait()
            if err:
                raise Exception(err)
        req = Request.blank(
            '/sda1/p/a/c',
            environ={'REQUEST_METHOD': 'DELETE'},
            headers={'X-Timestamp': '2'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 204)
        req = Request.blank(
            '/sda1/p/a/c',
            environ={'REQUEST_METHOD': 'PUT'},
            headers={'X-Timestamp': Timestamp(3).internal,
                     'X-Account-Host': '%s:%s' % bindsock.getsockname(),
                     'X-Account-Partition': '123',
                     'X-Account-Device': 'sda1'})
        event = spawn(accept, 404, Timestamp(3).internal)
        try:
            with Timeout(3):
                resp = req.get_response(self.controller)
                self.assertEquals(resp.status_int, 404)
        finally:
            err = event.wait()
            if err:
                raise Exception(err)
        req = Request.blank(
            '/sda1/p/a/c',
            environ={'REQUEST_METHOD': 'PUT'},
            headers={'X-Timestamp': Timestamp(5).internal,
                     'X-Account-Host': '%s:%s' % bindsock.getsockname(),
                     'X-Account-Partition': '123',
                     'X-Account-Device': 'sda1'})
        event = spawn(accept, 503, Timestamp(5).internal)
        got_exc = False
        try:
            with Timeout(3):
                resp = req.get_response(self.controller)
        except BaseException as err:
            got_exc = True
        finally:
            err = event.wait()
            if err:
                raise Exception(err)
        self.assert_(not got_exc)

    def test_PUT_reset_container_sync(self):
        req = Request.blank(
            '/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT'},
            headers={'x-timestamp': '1',
                     'x-container-sync-to': 'http://127.0.0.1:12345/v1/a/c'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 201)
        db = self.controller._get_container_broker('sda1', 'p', 'a', 'c')
        info = db.get_info()
        self.assertEquals(info['x_container_sync_point1'], -1)
        self.assertEquals(info['x_container_sync_point2'], -1)
        db.set_x_container_sync_points(123, 456)
        info = db.get_info()
        self.assertEquals(info['x_container_sync_point1'], 123)
        self.assertEquals(info['x_container_sync_point2'], 456)
        # Set to same value
        req = Request.blank(
            '/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT'},
            headers={'x-timestamp': '1',
                     'x-container-sync-to': 'http://127.0.0.1:12345/v1/a/c'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 202)
        db = self.controller._get_container_broker('sda1', 'p', 'a', 'c')
        info = db.get_info()
        self.assertEquals(info['x_container_sync_point1'], 123)
        self.assertEquals(info['x_container_sync_point2'], 456)
        # Set to new value
        req = Request.blank(
            '/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT'},
            headers={'x-timestamp': '1',
                     'x-container-sync-to': 'http://127.0.0.1:12345/v1/a/c2'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 202)
        db = self.controller._get_container_broker('sda1', 'p', 'a', 'c')
        info = db.get_info()
        self.assertEquals(info['x_container_sync_point1'], -1)
        self.assertEquals(info['x_container_sync_point2'], -1)

    def test_POST_reset_container_sync(self):
        req = Request.blank(
            '/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT'},
            headers={'x-timestamp': '1',
                     'x-container-sync-to': 'http://127.0.0.1:12345/v1/a/c'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 201)
        db = self.controller._get_container_broker('sda1', 'p', 'a', 'c')
        info = db.get_info()
        self.assertEquals(info['x_container_sync_point1'], -1)
        self.assertEquals(info['x_container_sync_point2'], -1)
        db.set_x_container_sync_points(123, 456)
        info = db.get_info()
        self.assertEquals(info['x_container_sync_point1'], 123)
        self.assertEquals(info['x_container_sync_point2'], 456)
        # Set to same value
        req = Request.blank(
            '/sda1/p/a/c', environ={'REQUEST_METHOD': 'POST'},
            headers={'x-timestamp': '1',
                     'x-container-sync-to': 'http://127.0.0.1:12345/v1/a/c'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 204)
        db = self.controller._get_container_broker('sda1', 'p', 'a', 'c')
        info = db.get_info()
        self.assertEquals(info['x_container_sync_point1'], 123)
        self.assertEquals(info['x_container_sync_point2'], 456)
        # Set to new value
        req = Request.blank(
            '/sda1/p/a/c', environ={'REQUEST_METHOD': 'POST'},
            headers={'x-timestamp': '1',
                     'x-container-sync-to': 'http://127.0.0.1:12345/v1/a/c2'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 204)
        db = self.controller._get_container_broker('sda1', 'p', 'a', 'c')
        info = db.get_info()
        self.assertEquals(info['x_container_sync_point1'], -1)
        self.assertEquals(info['x_container_sync_point2'], -1)

    def test_DELETE(self):
        req = Request.blank(
            '/sda1/p/a/c',
            environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': '1'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 201)
        req = Request.blank(
            '/sda1/p/a/c',
            environ={'REQUEST_METHOD': 'DELETE'}, headers={'X-Timestamp': '2'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 204)
        req = Request.blank(
            '/sda1/p/a/c',
            environ={'REQUEST_METHOD': 'GET'}, headers={'X-Timestamp': '3'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 404)

    def test_DELETE_PUT_recreate(self):
        path = '/sda1/p/a/c'
        req = Request.blank(path, method='PUT',
                            headers={'X-Timestamp': '1'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 201)
        req = Request.blank(path, method='DELETE',
                            headers={'X-Timestamp': '2'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 204)
        req = Request.blank(path, method='GET')
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 404)  # sanity
        # backend headers
        expectations = {
            'x-backend-put-timestamp': Timestamp(1).internal,
            'x-backend-delete-timestamp': Timestamp(2).internal,
            'x-backend-status-changed-at': Timestamp(2).internal,
        }
        for header, value in expectations.items():
            self.assertEqual(resp.headers[header], value,
                             'response header %s was %s not %s' % (
                                 header, resp.headers[header], value))
        db = self.controller._get_container_broker('sda1', 'p', 'a', 'c')
        self.assertEqual(True, db.is_deleted())
        info = db.get_info()
        self.assertEquals(info['put_timestamp'], Timestamp('1').internal)
        self.assertEquals(info['delete_timestamp'], Timestamp('2').internal)
        self.assertEquals(info['status_changed_at'], Timestamp('2').internal)
        # recreate
        req = Request.blank(path, method='PUT',
                            headers={'X-Timestamp': '4'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 201)
        db = self.controller._get_container_broker('sda1', 'p', 'a', 'c')
        self.assertEqual(False, db.is_deleted())
        info = db.get_info()
        self.assertEquals(info['put_timestamp'], Timestamp('4').internal)
        self.assertEquals(info['delete_timestamp'], Timestamp('2').internal)
        self.assertEquals(info['status_changed_at'], Timestamp('4').internal)
        for method in ('GET', 'HEAD'):
            req = Request.blank(path, method=method)
            resp = req.get_response(self.controller)
            expectations = {
                'x-put-timestamp': Timestamp(4).normal,
                'x-backend-put-timestamp': Timestamp(4).internal,
                'x-backend-delete-timestamp': Timestamp(2).internal,
                'x-backend-status-changed-at': Timestamp(4).internal,
            }
            for header, expected in expectations.items():
                self.assertEqual(resp.headers[header], expected,
                                 'header %s was %s is not expected %s' % (
                                     header, resp.headers[header], expected))

    def test_DELETE_PUT_recreate_replication_race(self):
        path = '/sda1/p/a/c'
        # create a deleted db
        req = Request.blank(path, method='PUT',
                            headers={'X-Timestamp': '1'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 201)
        db = self.controller._get_container_broker('sda1', 'p', 'a', 'c')
        req = Request.blank(path, method='DELETE',
                            headers={'X-Timestamp': '2'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 204)
        req = Request.blank(path, method='GET')
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 404)  # sanity
        self.assertEqual(True, db.is_deleted())
        # now save a copy of this db (and remove it from the "current node")
        db = self.controller._get_container_broker('sda1', 'p', 'a', 'c')
        db_path = db.db_file
        other_path = os.path.join(self.testdir, 'othernode.db')
        os.rename(db_path, other_path)
        # that should make it missing on this node
        req = Request.blank(path, method='GET')
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 404)  # sanity

        # setup the race in os.path.exists (first time no, then yes)
        mock_called = []
        _real_exists = os.path.exists

        def mock_exists(db_path):
            rv = _real_exists(db_path)
            if not mock_called:
                # be as careful as we might hope backend replication can be...
                with lock_parent_directory(db_path, timeout=1):
                    os.rename(other_path, db_path)
            mock_called.append((rv, db_path))
            return rv

        req = Request.blank(path, method='PUT',
                            headers={'X-Timestamp': '4'})
        with mock.patch.object(container_server.os.path, 'exists',
                               mock_exists):
            resp = req.get_response(self.controller)
        # db was successfully created
        self.assertEqual(resp.status_int // 100, 2)
        db = self.controller._get_container_broker('sda1', 'p', 'a', 'c')
        self.assertEqual(False, db.is_deleted())
        # mock proves the race
        self.assertEqual(mock_called[:2],
                         [(exists, db.db_file) for exists in (False, True)])
        # info was updated
        info = db.get_info()
        self.assertEquals(info['put_timestamp'], Timestamp('4').internal)
        self.assertEquals(info['delete_timestamp'], Timestamp('2').internal)
2010-07-12 17:03:45 -05:00
|
|
|
    def test_DELETE_not_found(self):
        # Even if the container wasn't previously heard of, the container
        # server will accept the delete and replicate it to where it belongs
        # later.
        req = Request.blank(
            '/sda1/p/a/c',
            environ={'REQUEST_METHOD': 'DELETE', 'HTTP_X_TIMESTAMP': '1'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 404)

    def test_change_storage_policy_via_DELETE_then_PUT(self):
        ts = (Timestamp(t).internal for t in
              itertools.count(int(time.time())))
        policy = random.choice(list(POLICIES))
        req = Request.blank(
            '/sda1/p/a/c', method='PUT',
            headers={'X-Timestamp': ts.next(),
                     'X-Backend-Storage-Policy-Index': policy.idx})
        resp = req.get_response(self.controller)
        self.assertEqual(resp.status_int, 201)  # sanity check

        # try re-recreate with other policies
        other_policies = [p for p in POLICIES if p != policy]
        for other_policy in other_policies:
            # first delete the existing container
            req = Request.blank('/sda1/p/a/c', method='DELETE', headers={
                'X-Timestamp': ts.next()})
            resp = req.get_response(self.controller)
            self.assertEqual(resp.status_int, 204)  # sanity check

            # at this point, the DB should still exist but be in a deleted
            # state, so changing the policy index is perfectly acceptable
            req = Request.blank('/sda1/p/a/c', method='PUT', headers={
                'X-Timestamp': ts.next(),
                'X-Backend-Storage-Policy-Index': other_policy.idx})
            resp = req.get_response(self.controller)
            self.assertEqual(resp.status_int, 201)  # sanity check

            req = Request.blank(
                '/sda1/p/a/c', method='HEAD')
            resp = req.get_response(self.controller)
            self.assertEqual(resp.headers['X-Backend-Storage-Policy-Index'],
                             str(other_policy.idx))

    def test_change_to_default_storage_policy_via_DELETE_then_PUT(self):
        ts = (Timestamp(t).internal for t in
              itertools.count(int(time.time())))
        non_default_policy = random.choice([p for p in POLICIES
                                            if not p.is_default])
        req = Request.blank('/sda1/p/a/c', method='PUT', headers={
            'X-Timestamp': ts.next(),
            'X-Backend-Storage-Policy-Index': non_default_policy.idx,
        })
        resp = req.get_response(self.controller)
        self.assertEqual(resp.status_int, 201)  # sanity check

        req = Request.blank(
            '/sda1/p/a/c', method='DELETE',
            headers={'X-Timestamp': ts.next()})
        resp = req.get_response(self.controller)
        self.assertEqual(resp.status_int, 204)  # sanity check

        # at this point, the DB should still exist but be in a deleted state,
        # so changing the policy index is perfectly acceptable
        req = Request.blank(
            '/sda1/p/a/c', method='PUT',
            headers={'X-Timestamp': ts.next()})
        resp = req.get_response(self.controller)
        self.assertEqual(resp.status_int, 201)  # sanity check

        req = Request.blank('/sda1/p/a/c', method='HEAD')
        resp = req.get_response(self.controller)
        self.assertEqual(resp.headers['X-Backend-Storage-Policy-Index'],
                         str(POLICIES.default.idx))

    def test_DELETE_object(self):
        req = Request.blank(
            '/sda1/p/a/c', method='PUT', headers={
                'X-Timestamp': Timestamp(2).internal})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 201)
        req = Request.blank(
            '/sda1/p/a/c/o', method='PUT', headers={
                'X-Timestamp': Timestamp(0).internal, 'X-Size': 1,
                'X-Content-Type': 'text/plain', 'X-Etag': 'x'})
        self._update_object_put_headers(req)
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 201)
        ts = (Timestamp(t).internal for t in
              itertools.count(3))
        req = Request.blank('/sda1/p/a/c', method='DELETE', headers={
            'X-Timestamp': ts.next()})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 409)
        req = Request.blank('/sda1/p/a/c/o', method='DELETE', headers={
            'X-Timestamp': ts.next()})
        self._update_object_put_headers(req)
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 204)
        req = Request.blank('/sda1/p/a/c', method='DELETE', headers={
            'X-Timestamp': ts.next()})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 204)
        req = Request.blank('/sda1/p/a/c', method='GET', headers={
            'X-Timestamp': ts.next()})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 404)

    def test_object_update_with_offset(self):
        ts = (Timestamp(t).internal for t in
              itertools.count(int(time.time())))
        # create container
        req = Request.blank('/sda1/p/a/c', method='PUT', headers={
            'X-Timestamp': ts.next()})
        resp = req.get_response(self.controller)
        self.assertEqual(resp.status_int, 201)
        # check status
        req = Request.blank('/sda1/p/a/c', method='HEAD')
        resp = req.get_response(self.controller)
        self.assertEqual(resp.status_int, 204)
        self.assertEqual(int(resp.headers['X-Backend-Storage-Policy-Index']),
                         int(POLICIES.default))
        # create object
        obj_timestamp = ts.next()
        req = Request.blank(
            '/sda1/p/a/c/o', method='PUT', headers={
                'X-Timestamp': obj_timestamp, 'X-Size': 1,
                'X-Content-Type': 'text/plain', 'X-Etag': 'x'})
        self._update_object_put_headers(req)
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 201)
        # check listing
        req = Request.blank('/sda1/p/a/c', method='GET',
                            query_string='format=json')
        resp = req.get_response(self.controller)
        self.assertEqual(resp.status_int, 200)
        self.assertEqual(int(resp.headers['X-Container-Object-Count']), 1)
        self.assertEqual(int(resp.headers['X-Container-Bytes-Used']), 1)
        listing_data = json.loads(resp.body)
        self.assertEqual(1, len(listing_data))
        for obj in listing_data:
            self.assertEqual(obj['name'], 'o')
            self.assertEqual(obj['bytes'], 1)
            self.assertEqual(obj['hash'], 'x')
            self.assertEqual(obj['content_type'], 'text/plain')
        # send an update with an offset
        offset_timestamp = Timestamp(obj_timestamp, offset=1).internal
        req = Request.blank(
            '/sda1/p/a/c/o', method='PUT', headers={
                'X-Timestamp': offset_timestamp, 'X-Size': 2,
                'X-Content-Type': 'text/html', 'X-Etag': 'y'})
        self._update_object_put_headers(req)
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 201)
        # check updated listing
        req = Request.blank('/sda1/p/a/c', method='GET',
                            query_string='format=json')
        resp = req.get_response(self.controller)
        self.assertEqual(resp.status_int, 200)
        self.assertEqual(int(resp.headers['X-Container-Object-Count']), 1)
        self.assertEqual(int(resp.headers['X-Container-Bytes-Used']), 2)
        listing_data = json.loads(resp.body)
        self.assertEqual(1, len(listing_data))
        for obj in listing_data:
            self.assertEqual(obj['name'], 'o')
            self.assertEqual(obj['bytes'], 2)
            self.assertEqual(obj['hash'], 'y')
            self.assertEqual(obj['content_type'], 'text/html')
        # now overwrite with a newer time
        delete_timestamp = ts.next()
        req = Request.blank(
            '/sda1/p/a/c/o', method='DELETE', headers={
                'X-Timestamp': delete_timestamp})
        self._update_object_put_headers(req)
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 204)
        # check empty listing
        req = Request.blank('/sda1/p/a/c', method='GET',
                            query_string='format=json')
        resp = req.get_response(self.controller)
        self.assertEqual(resp.status_int, 200)
        self.assertEqual(int(resp.headers['X-Container-Object-Count']), 0)
        self.assertEqual(int(resp.headers['X-Container-Bytes-Used']), 0)
        listing_data = json.loads(resp.body)
        self.assertEqual(0, len(listing_data))
        # recreate with an offset
        offset_timestamp = Timestamp(delete_timestamp, offset=1).internal
        req = Request.blank(
            '/sda1/p/a/c/o', method='PUT', headers={
                'X-Timestamp': offset_timestamp, 'X-Size': 3,
                'X-Content-Type': 'text/enriched', 'X-Etag': 'z'})
        self._update_object_put_headers(req)
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 201)
        # check un-deleted listing
        req = Request.blank('/sda1/p/a/c', method='GET',
                            query_string='format=json')
        resp = req.get_response(self.controller)
        self.assertEqual(resp.status_int, 200)
        self.assertEqual(int(resp.headers['X-Container-Object-Count']), 1)
        self.assertEqual(int(resp.headers['X-Container-Bytes-Used']), 3)
        listing_data = json.loads(resp.body)
        self.assertEqual(1, len(listing_data))
        for obj in listing_data:
            self.assertEqual(obj['name'], 'o')
            self.assertEqual(obj['bytes'], 3)
            self.assertEqual(obj['hash'], 'z')
            self.assertEqual(obj['content_type'], 'text/enriched')
        # delete offset with newer offset
        delete_timestamp = Timestamp(offset_timestamp, offset=1).internal
        req = Request.blank(
            '/sda1/p/a/c/o', method='DELETE', headers={
                'X-Timestamp': delete_timestamp})
        self._update_object_put_headers(req)
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 204)
        # check empty listing
        req = Request.blank('/sda1/p/a/c', method='GET',
                            query_string='format=json')
        resp = req.get_response(self.controller)
        self.assertEqual(resp.status_int, 200)
        self.assertEqual(int(resp.headers['X-Container-Object-Count']), 0)
        self.assertEqual(int(resp.headers['X-Container-Bytes-Used']), 0)
        listing_data = json.loads(resp.body)
        self.assertEqual(0, len(listing_data))

    def test_DELETE_account_update(self):
        bindsock = listen(('127.0.0.1', 0))

        def accept(return_code, expected_timestamp):
            try:
                with Timeout(3):
                    sock, addr = bindsock.accept()
                    inc = sock.makefile('rb')
                    out = sock.makefile('wb')
                    out.write('HTTP/1.1 %d OK\r\nContent-Length: 0\r\n\r\n' %
                              return_code)
                    out.flush()
                    self.assertEquals(inc.readline(),
                                      'PUT /sda1/123/a/c HTTP/1.1\r\n')
                    headers = {}
                    line = inc.readline()
                    while line and line != '\r\n':
                        headers[line.split(':')[0].lower()] = \
                            line.split(':')[1].strip()
                        line = inc.readline()
                    self.assertEquals(headers['x-delete-timestamp'],
                                      expected_timestamp)
            except BaseException as err:
                return err
            return None

        req = Request.blank(
            '/sda1/p/a/c',
            environ={'REQUEST_METHOD': 'PUT'}, headers={'X-Timestamp': '1'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 201)
        req = Request.blank(
            '/sda1/p/a/c',
            environ={'REQUEST_METHOD': 'DELETE'},
            headers={'X-Timestamp': Timestamp(2).internal,
                     'X-Account-Host': '%s:%s' % bindsock.getsockname(),
                     'X-Account-Partition': '123',
                     'X-Account-Device': 'sda1'})
        event = spawn(accept, 204, Timestamp(2).internal)
        try:
            with Timeout(3):
                resp = req.get_response(self.controller)
                self.assertEquals(resp.status_int, 204)
        finally:
            err = event.wait()
            if err:
                raise Exception(err)
        req = Request.blank(
            '/sda1/p/a/c', method='PUT', headers={
                'X-Timestamp': Timestamp(2).internal})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 201)
        req = Request.blank(
            '/sda1/p/a/c',
            environ={'REQUEST_METHOD': 'DELETE'},
            headers={'X-Timestamp': Timestamp(3).internal,
                     'X-Account-Host': '%s:%s' % bindsock.getsockname(),
                     'X-Account-Partition': '123',
                     'X-Account-Device': 'sda1'})
        event = spawn(accept, 404, Timestamp(3).internal)
        try:
            with Timeout(3):
                resp = req.get_response(self.controller)
                self.assertEquals(resp.status_int, 404)
        finally:
            err = event.wait()
            if err:
                raise Exception(err)
        req = Request.blank(
            '/sda1/p/a/c', method='PUT', headers={
                'X-Timestamp': Timestamp(4).internal})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 201)
        req = Request.blank(
            '/sda1/p/a/c',
            environ={'REQUEST_METHOD': 'DELETE'},
            headers={'X-Timestamp': Timestamp(5).internal,
                     'X-Account-Host': '%s:%s' % bindsock.getsockname(),
                     'X-Account-Partition': '123',
                     'X-Account-Device': 'sda1'})
        event = spawn(accept, 503, Timestamp(5).internal)
        got_exc = False
        try:
            with Timeout(3):
                resp = req.get_response(self.controller)
        except BaseException as err:
            got_exc = True
        finally:
            err = event.wait()
            if err:
                raise Exception(err)
        self.assert_(not got_exc)

    def test_DELETE_invalid_partition(self):
        req = Request.blank(
            '/sda1/./a/c', environ={'REQUEST_METHOD': 'DELETE',
                                    'HTTP_X_TIMESTAMP': '1'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 400)

    def test_DELETE_timestamp_not_float(self):
        req = Request.blank(
            '/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT',
                                    'HTTP_X_TIMESTAMP': '0'})
        req.get_response(self.controller)
        req = Request.blank(
            '/sda1/p/a/c', environ={'REQUEST_METHOD': 'DELETE'},
            headers={'X-Timestamp': 'not-float'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 400)

    def test_DELETE_insufficient_storage(self):
        self.controller = container_server.ContainerController(
            {'devices': self.testdir})
        req = Request.blank(
            '/sda-null/p/a/c', environ={'REQUEST_METHOD': 'DELETE',
                                        'HTTP_X_TIMESTAMP': '1'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 507)

    def test_GET_over_limit(self):
        req = Request.blank(
            '/sda1/p/a/c?limit=%d' %
            (constraints.CONTAINER_LISTING_LIMIT + 1),
            environ={'REQUEST_METHOD': 'GET'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 412)

    def test_GET_json(self):
        # make a container
        req = Request.blank(
            '/sda1/p/a/jsonc', environ={'REQUEST_METHOD': 'PUT',
                                        'HTTP_X_TIMESTAMP': '0'})
        resp = req.get_response(self.controller)
        # test an empty container
        req = Request.blank(
            '/sda1/p/a/jsonc?format=json',
            environ={'REQUEST_METHOD': 'GET'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 200)
        self.assertEquals(simplejson.loads(resp.body), [])
        # fill the container
        for i in range(3):
            req = Request.blank(
                '/sda1/p/a/jsonc/%s' % i, environ={
                    'REQUEST_METHOD': 'PUT',
                    'HTTP_X_TIMESTAMP': '1',
                    'HTTP_X_CONTENT_TYPE': 'text/plain',
                    'HTTP_X_ETAG': 'x',
                    'HTTP_X_SIZE': 0})
            self._update_object_put_headers(req)
            resp = req.get_response(self.controller)
            self.assertEquals(resp.status_int, 201)
        # test format
        json_body = [{"name": "0",
                      "hash": "x",
                      "bytes": 0,
                      "content_type": "text/plain",
                      "last_modified": "1970-01-01T00:00:01.000000"},
                     {"name": "1",
                      "hash": "x",
                      "bytes": 0,
                      "content_type": "text/plain",
                      "last_modified": "1970-01-01T00:00:01.000000"},
                     {"name": "2",
                      "hash": "x",
                      "bytes": 0,
                      "content_type": "text/plain",
                      "last_modified": "1970-01-01T00:00:01.000000"}]

        req = Request.blank(
            '/sda1/p/a/jsonc?format=json',
            environ={'REQUEST_METHOD': 'GET'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.content_type, 'application/json')
        self.assertEquals(simplejson.loads(resp.body), json_body)
        self.assertEquals(resp.charset, 'utf-8')

        req = Request.blank(
            '/sda1/p/a/jsonc?format=json',
            environ={'REQUEST_METHOD': 'HEAD'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.content_type, 'application/json')

        for accept in ('application/json', 'application/json;q=1.0,*/*;q=0.9',
                       '*/*;q=0.9,application/json;q=1.0', 'application/*'):
            req = Request.blank(
                '/sda1/p/a/jsonc',
                environ={'REQUEST_METHOD': 'GET'})
            req.accept = accept
|
|
|
resp = req.get_response(self.controller)
|
2013-09-01 15:10:39 -04:00
|
|
|
self.assertEquals(
|
|
|
|
simplejson.loads(resp.body), json_body,
|
2010-10-29 10:28:19 +00:00
|
|
|
'Invalid body for Accept: %s' % accept)
|
2013-09-01 15:10:39 -04:00
|
|
|
self.assertEquals(
|
|
|
|
resp.content_type, 'application/json',
|
2010-10-29 10:28:19 +00:00
|
|
|
'Invalid content_type for Accept: %s' % accept)

            req = Request.blank(
                '/sda1/p/a/jsonc',
                environ={'REQUEST_METHOD': 'HEAD'})
            req.accept = accept
            resp = req.get_response(self.controller)
            self.assertEquals(
                resp.content_type, 'application/json',
                'Invalid content_type for Accept: %s' % accept)

    def test_GET_plain(self):
        # make a container
        req = Request.blank(
            '/sda1/p/a/plainc', environ={'REQUEST_METHOD': 'PUT',
                                         'HTTP_X_TIMESTAMP': '0'})
        resp = req.get_response(self.controller)
        # test an empty container
        req = Request.blank(
            '/sda1/p/a/plainc', environ={'REQUEST_METHOD': 'GET'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 204)
        # fill the container
        for i in range(3):
            req = Request.blank(
                '/sda1/p/a/plainc/%s' % i, environ={
                    'REQUEST_METHOD': 'PUT',
                    'HTTP_X_TIMESTAMP': '1',
                    'HTTP_X_CONTENT_TYPE': 'text/plain',
                    'HTTP_X_ETAG': 'x',
                    'HTTP_X_SIZE': 0})
            self._update_object_put_headers(req)
            resp = req.get_response(self.controller)
            self.assertEquals(resp.status_int, 201)
        plain_body = '0\n1\n2\n'

        req = Request.blank('/sda1/p/a/plainc',
                            environ={'REQUEST_METHOD': 'GET'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.content_type, 'text/plain')
        self.assertEquals(resp.body, plain_body)
        self.assertEquals(resp.charset, 'utf-8')

        req = Request.blank('/sda1/p/a/plainc',
                            environ={'REQUEST_METHOD': 'HEAD'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.content_type, 'text/plain')

        for accept in ('', 'text/plain', 'application/xml;q=0.8,*/*;q=0.9',
                       '*/*;q=0.9,application/xml;q=0.8', '*/*',
                       'text/plain,application/xml'):
            req = Request.blank(
                '/sda1/p/a/plainc',
                environ={'REQUEST_METHOD': 'GET'})
            req.accept = accept
            resp = req.get_response(self.controller)
            self.assertEquals(
                resp.body, plain_body,
                'Invalid body for Accept: %s' % accept)
            self.assertEquals(
                resp.content_type, 'text/plain',
                'Invalid content_type for Accept: %s' % accept)

            req = Request.blank(
                '/sda1/p/a/plainc',
                environ={'REQUEST_METHOD': 'HEAD'})
            req.accept = accept
            resp = req.get_response(self.controller)
            self.assertEquals(
                resp.content_type, 'text/plain',
                'Invalid content_type for Accept: %s' % accept)

        # test conflicting formats
        req = Request.blank(
            '/sda1/p/a/plainc?format=plain',
            environ={'REQUEST_METHOD': 'GET'})
        req.accept = 'application/json'
        resp = req.get_response(self.controller)
        self.assertEquals(resp.content_type, 'text/plain')
        self.assertEquals(resp.body, plain_body)

        # test unknown format uses default plain
        req = Request.blank(
            '/sda1/p/a/plainc?format=somethingelse',
            environ={'REQUEST_METHOD': 'GET'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 200)
        self.assertEquals(resp.content_type, 'text/plain')
        self.assertEquals(resp.body, plain_body)

    def test_GET_json_last_modified(self):
        # make a container
        req = Request.blank(
            '/sda1/p/a/jsonc', environ={
                'REQUEST_METHOD': 'PUT',
                'HTTP_X_TIMESTAMP': '0'})
        resp = req.get_response(self.controller)
        for i, d in [(0, 1.5), (1, 1.0), ]:
            req = Request.blank(
                '/sda1/p/a/jsonc/%s' % i, environ={
                    'REQUEST_METHOD': 'PUT',
                    'HTTP_X_TIMESTAMP': d,
                    'HTTP_X_CONTENT_TYPE': 'text/plain',
                    'HTTP_X_ETAG': 'x',
                    'HTTP_X_SIZE': 0})
            self._update_object_put_headers(req)
            resp = req.get_response(self.controller)
            self.assertEquals(resp.status_int, 201)
        # test format
        # last_modified format must be uniform, even when there are not msecs
        json_body = [{"name": "0",
                      "hash": "x",
                      "bytes": 0,
                      "content_type": "text/plain",
                      "last_modified": "1970-01-01T00:00:01.500000"},
                     {"name": "1",
                      "hash": "x",
                      "bytes": 0,
                      "content_type": "text/plain",
                      "last_modified": "1970-01-01T00:00:01.000000"}, ]

        req = Request.blank(
            '/sda1/p/a/jsonc?format=json',
            environ={'REQUEST_METHOD': 'GET'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.content_type, 'application/json')
        self.assertEquals(simplejson.loads(resp.body), json_body)
        self.assertEquals(resp.charset, 'utf-8')

    def test_GET_xml(self):
        # make a container
        req = Request.blank(
            '/sda1/p/a/xmlc', environ={'REQUEST_METHOD': 'PUT',
                                       'HTTP_X_TIMESTAMP': '0'})
        resp = req.get_response(self.controller)
        # fill the container
        for i in range(3):
            req = Request.blank(
                '/sda1/p/a/xmlc/%s' % i,
                environ={
                    'REQUEST_METHOD': 'PUT',
                    'HTTP_X_TIMESTAMP': '1',
                    'HTTP_X_CONTENT_TYPE': 'text/plain',
                    'HTTP_X_ETAG': 'x',
                    'HTTP_X_SIZE': 0})
            self._update_object_put_headers(req)
            resp = req.get_response(self.controller)
            self.assertEquals(resp.status_int, 201)
        xml_body = '<?xml version="1.0" encoding="UTF-8"?>\n' \
            '<container name="xmlc">' \
            '<object><name>0</name><hash>x</hash><bytes>0</bytes>' \
            '<content_type>text/plain</content_type>' \
            '<last_modified>1970-01-01T00:00:01.000000' \
            '</last_modified></object>' \
            '<object><name>1</name><hash>x</hash><bytes>0</bytes>' \
            '<content_type>text/plain</content_type>' \
            '<last_modified>1970-01-01T00:00:01.000000' \
            '</last_modified></object>' \
            '<object><name>2</name><hash>x</hash><bytes>0</bytes>' \
            '<content_type>text/plain</content_type>' \
            '<last_modified>1970-01-01T00:00:01.000000' \
            '</last_modified></object>' \
            '</container>'

        # tests
        req = Request.blank(
            '/sda1/p/a/xmlc?format=xml',
            environ={'REQUEST_METHOD': 'GET'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.content_type, 'application/xml')
        self.assertEquals(resp.body, xml_body)
        self.assertEquals(resp.charset, 'utf-8')

        req = Request.blank(
            '/sda1/p/a/xmlc?format=xml',
            environ={'REQUEST_METHOD': 'HEAD'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.content_type, 'application/xml')

        for xml_accept in (
                'application/xml', 'application/xml;q=1.0,*/*;q=0.9',
                '*/*;q=0.9,application/xml;q=1.0', 'application/xml,text/xml'):
            req = Request.blank(
                '/sda1/p/a/xmlc',
                environ={'REQUEST_METHOD': 'GET'})
            req.accept = xml_accept
            resp = req.get_response(self.controller)
            self.assertEquals(
                resp.body, xml_body,
                'Invalid body for Accept: %s' % xml_accept)
            self.assertEquals(
                resp.content_type, 'application/xml',
                'Invalid content_type for Accept: %s' % xml_accept)

            req = Request.blank(
                '/sda1/p/a/xmlc',
                environ={'REQUEST_METHOD': 'HEAD'})
            req.accept = xml_accept
            resp = req.get_response(self.controller)
            self.assertEquals(
                resp.content_type, 'application/xml',
                'Invalid content_type for Accept: %s' % xml_accept)

        req = Request.blank(
            '/sda1/p/a/xmlc',
            environ={'REQUEST_METHOD': 'GET'})
        req.accept = 'text/xml'
        resp = req.get_response(self.controller)
        self.assertEquals(resp.content_type, 'text/xml')
        self.assertEquals(resp.body, xml_body)

    def test_GET_marker(self):
        # make a container
        req = Request.blank(
            '/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT',
                                    'HTTP_X_TIMESTAMP': '0'})
        resp = req.get_response(self.controller)
        # fill the container
        for i in range(3):
            req = Request.blank(
                '/sda1/p/a/c/%s' % i, environ={
                    'REQUEST_METHOD': 'PUT',
                    'HTTP_X_TIMESTAMP': '1',
                    'HTTP_X_CONTENT_TYPE': 'text/plain',
                    'HTTP_X_ETAG': 'x', 'HTTP_X_SIZE': 0})
            self._update_object_put_headers(req)
            resp = req.get_response(self.controller)
            self.assertEquals(resp.status_int, 201)
        # test limit with marker
        req = Request.blank('/sda1/p/a/c?limit=2&marker=1',
                            environ={'REQUEST_METHOD': 'GET'})
        resp = req.get_response(self.controller)
        result = resp.body.split()
        self.assertEquals(result, ['2', ])
|
|
def test_weird_content_types(self):
|
|
|
|
snowman = u'\u2603'
|
2013-09-01 15:10:39 -04:00
|
|
|
req = Request.blank(
|
|
|
|
'/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT',
|
|
|
|
'HTTP_X_TIMESTAMP': '0'})
|
Refactor how we pick listings' content type.
There were a few different places where we had some repeated code to
figure out what format an account or container listing response should
be in (text, JSON, or XML). Now that's been pulled into a single
function.
As part of this, you can now raise HTTPException subclasses in proxy
controllers instead of laboriously plumbing error responses all the
way back up to swift.proxy.server.Application.handle_request(). This
lets us avoid certain ugly patterns, like the one where a method
returns a tuple of (x, y, z, error) and the caller has to see if it
got (value, value, value, None) or (None, None, None, errorvalue). Now
we can just raise the error.
Change-Id: I316873df289160d526487ad116f6fbb9a575e3de
2013-08-14 11:55:15 -07:00
|
|
|
resp = req.get_response(self.controller)
|
2013-07-23 14:54:51 -07:00
|
|
|
for i, ctype in enumerate((snowman.encode('utf-8'),
|
|
|
|
'text/plain; charset="utf-8"')):
|
2013-09-01 15:10:39 -04:00
|
|
|
req = Request.blank(
|
|
|
|
'/sda1/p/a/c/%s' % i, environ={
|
2013-03-20 19:26:45 -07:00
|
|
|
'REQUEST_METHOD': 'PUT',
|
2010-07-12 17:03:45 -05:00
|
|
|
'HTTP_X_TIMESTAMP': '1', 'HTTP_X_CONTENT_TYPE': ctype,
|
|
|
|
'HTTP_X_ETAG': 'x', 'HTTP_X_SIZE': 0})
|
Add Storage Policy support to Object Updates
The object server will now send its storage policy index to the
container server synchronously and asynchronously (via async_pending).
Each storage policy gets its own async_pending directory under
/srv/node/$disk/objects-$N, so there's no need to change the on-disk
pickle format; the policy index comes from the async_pending's
filename. This avoids any hassle on upgrade. (Recall that policy 0's
objects live in /srv/node/$disk/objects, not objects-0.) The tempdir
is per-policy as well.
Also clean up a couple of little things in the object updater. Now it
won't abort processing when it encounters a file (not directory) named
"async_pending-\d+", and it won't process updates in a directory that
does not correspond to a storage policy.
That is, if you have policies 1, 2, and 3, but there's a directory on
your disk named "async_pending-5", the updater will now skip over that
entirely. It won't even bother doing directory listings at all. This
is a good idea, believe it or not, because there's nothing good that
the container server can do with an update from some unknown storage
policy. It can't update the listing, it can't move the object if it's
misplaced... all it can do is ignore the request, so it's better to
just not send it in the first place. Plus, if this is due to a
misconfiguration on one storage node, then the updates will get
processed once the configuration is fixed.
There's also a drive-by fix to update some backend HTTP mocks for
container update tests that were not fully exercising their request
fakes. Because the object server's container update code is resilient
to all manner of failure from backend requests, the general intent of
the tests was unaffected, but this change cleans up some confusing
logging in the debug logger output.
The object server will send X-Storage-Policy-Index headers with all
requests to container servers, including X-Delete containers and all
object PUT/DELETE requests. This header value is persisted in the
pickle file for the update and sent along with async requests from the
object updater as well.
The container server will extract the X-Storage-Policy-Index header
from incoming requests and apply it to container broker calls as
appropriate, defaulting to the legacy storage policy 0 to support
seamless migration.
DocImpact
Implements: blueprint storage-policies
Change-Id: I07c730bebaee068f75024fa9c2fa9e11e295d9bd
add to object updates
Change-Id: Ic97a422238a0d7bc2a411a71a7aba3f8b42fce4d
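The directory naming scheme that commit message describes can be sketched in a few lines. This is illustrative only: the helper names below are assumptions, not Swift's actual API. Policy 0 keeps the legacy directory name so upgrades need no migration, and the updater derives the policy index from the directory name, skipping anything that doesn't match a configured policy.

```python
def async_dir(policy_index):
    """Async-pending directory name for a storage policy index."""
    if policy_index == 0:
        return 'async_pending'  # legacy layout; unchanged on upgrade
    return 'async_pending-%d' % policy_index


def policy_from_async_dir(dirname, known_policies):
    """Policy index encoded in a directory name, or None to skip it."""
    if dirname == 'async_pending':
        index = 0
    elif (dirname.startswith('async_pending-')
            and dirname[len('async_pending-'):].isdigit()):
        index = int(dirname[len('async_pending-'):])
    else:
        return None  # a stray file or unrelated directory: skip entirely
    # An update for an unknown policy is useless to the container
    # server, so the updater skips the whole directory.
    if index in known_policies:
        return index
    return None
```

With policies {1, 2, 3} configured, `policy_from_async_dir('async_pending-5', ...)` returns None, matching the "skip async_pending-5 entirely" behavior described above.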
            self._update_object_put_headers(req)
            resp = req.get_response(self.controller)
            self.assertEquals(resp.status_int, 201)
        req = Request.blank('/sda1/p/a/c?format=json',
                            environ={'REQUEST_METHOD': 'GET'})
        resp = req.get_response(self.controller)
        result = [x['content_type'] for x in simplejson.loads(resp.body)]
        self.assertEquals(result, [u'\u2603', 'text/plain;charset="utf-8"'])

    def test_GET_accept_not_valid(self):
        req = Request.blank('/sda1/p/a/c', method='PUT', headers={
            'X-Timestamp': Timestamp(0).internal})
        resp = req.get_response(self.controller)
        self.assertEqual(resp.status_int, 201)
        req = Request.blank('/sda1/p/a/c', method='GET')
        req.accept = 'application/xml*'
Refactor how we pick listings' content type.
There were a few different places where we had some repeated code to
figure out what format an account or container listing response should
be in (text, JSON, or XML). Now that's been pulled into a single
function.
As part of this, you can now raise HTTPException subclasses in proxy
controllers instead of laboriously plumbing error responses all the
way back up to swift.proxy.server.Application.handle_request(). This
lets us avoid certain ugly patterns, like the one where a method
returns a tuple of (x, y, z, error) and the caller has to see if it
got (value, value, value, None) or (None, None, None, errorvalue). Now
we can just raise the error.
Change-Id: I316873df289160d526487ad116f6fbb9a575e3de
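The pattern change that commit message describes can be reduced to a small sketch. The class and function names here are hypothetical stand-ins for the real swob/webob exceptions, not Swift's actual code: a controller raises an HTTPException subclass, and a single handler at the top converts it to a response, replacing the `(value, value, value, error)` tuple plumbing.

```python
class HTTPException(Exception):
    """Base for raisable HTTP error responses."""
    def __init__(self, status_int):
        Exception.__init__(self)
        self.status_int = status_int


class HTTPNotAcceptable(HTTPException):
    def __init__(self):
        HTTPException.__init__(self, 406)


def get_listing_content_type(accept):
    """Pick a listing format, raising instead of returning an error."""
    formats = ('text/plain', 'application/json', 'application/xml')
    if accept not in formats:
        raise HTTPNotAcceptable()  # no (None, None, None, err) plumbing
    return accept


def handle_request(accept):
    """Single place where controller exceptions become responses."""
    try:
        return 200, get_listing_content_type(accept)
    except HTTPException as err:
        return err.status_int, None
```

Callers no longer inspect a tuple for an error slot; the exception carries the status all the way up to the one catch point.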
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 406)

    def test_GET_limit(self):
        # make a container
        req = Request.blank(
            '/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT',
                                    'HTTP_X_TIMESTAMP': '0'})
        resp = req.get_response(self.controller)
        # fill the container
        for i in range(3):
            req = Request.blank(
                '/sda1/p/a/c/%s' % i,
                environ={
                    'REQUEST_METHOD': 'PUT',
                    'HTTP_X_TIMESTAMP': '1',
                    'HTTP_X_CONTENT_TYPE': 'text/plain',
                    'HTTP_X_ETAG': 'x',
                    'HTTP_X_SIZE': 0})
            self._update_object_put_headers(req)
            resp = req.get_response(self.controller)
            self.assertEquals(resp.status_int, 201)
        # test limit
        req = Request.blank(
            '/sda1/p/a/c?limit=2', environ={'REQUEST_METHOD': 'GET'})
        resp = req.get_response(self.controller)
        result = resp.body.split()
        self.assertEquals(result, ['0', '1'])

    def test_GET_prefix(self):
        req = Request.blank(
            '/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT',
                                    'HTTP_X_TIMESTAMP': '0'})
        resp = req.get_response(self.controller)
        for i in ('a1', 'b1', 'a2', 'b2', 'a3', 'b3'):
            req = Request.blank(
                '/sda1/p/a/c/%s' % i,
                environ={
                    'REQUEST_METHOD': 'PUT',
                    'HTTP_X_TIMESTAMP': '1',
                    'HTTP_X_CONTENT_TYPE': 'text/plain',
                    'HTTP_X_ETAG': 'x',
                    'HTTP_X_SIZE': 0})
            self._update_object_put_headers(req)
            resp = req.get_response(self.controller)
            self.assertEquals(resp.status_int, 201)
        req = Request.blank(
            '/sda1/p/a/c?prefix=a', environ={'REQUEST_METHOD': 'GET'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.body.split(), ['a1', 'a2', 'a3'])

    def test_GET_delimiter_too_long(self):
        req = Request.blank('/sda1/p/a/c?delimiter=xx',
                            environ={'REQUEST_METHOD': 'GET',
                                     'HTTP_X_TIMESTAMP': '0'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 412)

    def test_GET_delimiter(self):
        req = Request.blank(
            '/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT',
                                    'HTTP_X_TIMESTAMP': '0'})
        resp = req.get_response(self.controller)
        for i in ('US-TX-A', 'US-TX-B', 'US-OK-A', 'US-OK-B', 'US-UT-A'):
            req = Request.blank(
                '/sda1/p/a/c/%s' % i,
                environ={
                    'REQUEST_METHOD': 'PUT', 'HTTP_X_TIMESTAMP': '1',
                    'HTTP_X_CONTENT_TYPE': 'text/plain', 'HTTP_X_ETAG': 'x',
                    'HTTP_X_SIZE': 0})
            self._update_object_put_headers(req)
            resp = req.get_response(self.controller)
            self.assertEquals(resp.status_int, 201)
        req = Request.blank(
            '/sda1/p/a/c?prefix=US-&delimiter=-&format=json',
            environ={'REQUEST_METHOD': 'GET'})
        resp = req.get_response(self.controller)
        self.assertEquals(
            simplejson.loads(resp.body),
            [{"subdir": "US-OK-"},
             {"subdir": "US-TX-"},
             {"subdir": "US-UT-"}])

    def test_GET_delimiter_xml(self):
        req = Request.blank(
            '/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT',
                                    'HTTP_X_TIMESTAMP': '0'})
        resp = req.get_response(self.controller)
        for i in ('US-TX-A', 'US-TX-B', 'US-OK-A', 'US-OK-B', 'US-UT-A'):
            req = Request.blank(
                '/sda1/p/a/c/%s' % i,
                environ={
                    'REQUEST_METHOD': 'PUT', 'HTTP_X_TIMESTAMP': '1',
                    'HTTP_X_CONTENT_TYPE': 'text/plain', 'HTTP_X_ETAG': 'x',
                    'HTTP_X_SIZE': 0})
            self._update_object_put_headers(req)
            resp = req.get_response(self.controller)
            self.assertEquals(resp.status_int, 201)
        req = Request.blank(
            '/sda1/p/a/c?prefix=US-&delimiter=-&format=xml',
            environ={'REQUEST_METHOD': 'GET'})
        resp = req.get_response(self.controller)
        self.assertEquals(
            resp.body, '<?xml version="1.0" encoding="UTF-8"?>'
            '\n<container name="c"><subdir name="US-OK-">'
            '<name>US-OK-</name></subdir>'
            '<subdir name="US-TX-"><name>US-TX-</name></subdir>'
            '<subdir name="US-UT-"><name>US-UT-</name></subdir></container>')

    def test_GET_delimiter_xml_with_quotes(self):
        req = Request.blank(
            '/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT',
                                    'HTTP_X_TIMESTAMP': '0'})
        resp = req.get_response(self.controller)
        req = Request.blank(
            '/sda1/p/a/c/<\'sub\' "dir">/object',
            environ={
                'REQUEST_METHOD': 'PUT', 'HTTP_X_TIMESTAMP': '1',
                'HTTP_X_CONTENT_TYPE': 'text/plain', 'HTTP_X_ETAG': 'x',
                'HTTP_X_SIZE': 0})
        self._update_object_put_headers(req)
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 201)
        req = Request.blank(
            '/sda1/p/a/c?delimiter=/&format=xml',
            environ={'REQUEST_METHOD': 'GET'})
        resp = req.get_response(self.controller)
        dom = minidom.parseString(resp.body)
        self.assert_(len(dom.getElementsByTagName('container')) == 1)
        container = dom.getElementsByTagName('container')[0]
        self.assert_(len(container.getElementsByTagName('subdir')) == 1)
        subdir = container.getElementsByTagName('subdir')[0]
        self.assertEquals(unicode(subdir.attributes['name'].value),
                          u'<\'sub\' "dir">/')
        self.assert_(len(subdir.getElementsByTagName('name')) == 1)
        name = subdir.getElementsByTagName('name')[0]
        self.assertEquals(unicode(name.childNodes[0].data),
                          u'<\'sub\' "dir">/')

    def test_GET_path(self):
        req = Request.blank(
            '/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT',
                                    'HTTP_X_TIMESTAMP': '0'})
        resp = req.get_response(self.controller)
        for i in ('US/TX', 'US/TX/B', 'US/OK', 'US/OK/B', 'US/UT/A'):
            req = Request.blank(
                '/sda1/p/a/c/%s' % i,
                environ={
                    'REQUEST_METHOD': 'PUT', 'HTTP_X_TIMESTAMP': '1',
                    'HTTP_X_CONTENT_TYPE': 'text/plain', 'HTTP_X_ETAG': 'x',
                    'HTTP_X_SIZE': 0})
2014-03-17 17:54:42 -07:00
        self._update_object_put_headers(req)
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 201)

        req = Request.blank(
            '/sda1/p/a/c?path=US&format=json',
            environ={'REQUEST_METHOD': 'GET'})
        resp = req.get_response(self.controller)
        self.assertEquals(
            simplejson.loads(resp.body),
            [{"name": "US/OK", "hash": "x", "bytes": 0,
              "content_type": "text/plain",
              "last_modified": "1970-01-01T00:00:01.000000"},
             {"name": "US/TX", "hash": "x", "bytes": 0,
              "content_type": "text/plain",
              "last_modified": "1970-01-01T00:00:01.000000"}])

    def test_GET_insufficient_storage(self):
        self.controller = container_server.ContainerController(
            {'devices': self.testdir})
        req = Request.blank(
            '/sda-null/p/a/c', environ={'REQUEST_METHOD': 'GET',
                                        'HTTP_X_TIMESTAMP': '1'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 507)

    def test_through_call(self):
        inbuf = StringIO()
        errbuf = StringIO()
        outbuf = StringIO()

        def start_response(*args):
            outbuf.writelines(args)

        self.controller.__call__({'REQUEST_METHOD': 'GET',
                                  'SCRIPT_NAME': '',
                                  'PATH_INFO': '/sda1/p/a/c',
                                  'SERVER_NAME': '127.0.0.1',
                                  'SERVER_PORT': '8080',
                                  'SERVER_PROTOCOL': 'HTTP/1.0',
                                  'CONTENT_LENGTH': '0',
                                  'wsgi.version': (1, 0),
                                  'wsgi.url_scheme': 'http',
                                  'wsgi.input': inbuf,
                                  'wsgi.errors': errbuf,
                                  'wsgi.multithread': False,
                                  'wsgi.multiprocess': False,
                                  'wsgi.run_once': False},
                                 start_response)
        self.assertEquals(errbuf.getvalue(), '')
        self.assertEquals(outbuf.getvalue()[:4], '404 ')

    def test_through_call_invalid_path(self):
        inbuf = StringIO()
        errbuf = StringIO()
        outbuf = StringIO()

        def start_response(*args):
            outbuf.writelines(args)

        self.controller.__call__({'REQUEST_METHOD': 'GET',
                                  'SCRIPT_NAME': '',
                                  'PATH_INFO': '/bob',
                                  'SERVER_NAME': '127.0.0.1',
                                  'SERVER_PORT': '8080',
                                  'SERVER_PROTOCOL': 'HTTP/1.0',
                                  'CONTENT_LENGTH': '0',
                                  'wsgi.version': (1, 0),
                                  'wsgi.url_scheme': 'http',
                                  'wsgi.input': inbuf,
                                  'wsgi.errors': errbuf,
                                  'wsgi.multithread': False,
                                  'wsgi.multiprocess': False,
                                  'wsgi.run_once': False},
                                 start_response)
        self.assertEquals(errbuf.getvalue(), '')
        self.assertEquals(outbuf.getvalue()[:4], '400 ')

    def test_through_call_invalid_path_utf8(self):
        inbuf = StringIO()
        errbuf = StringIO()
        outbuf = StringIO()

        def start_response(*args):
            outbuf.writelines(args)

        self.controller.__call__({'REQUEST_METHOD': 'GET',
                                  'SCRIPT_NAME': '',
                                  'PATH_INFO': '\x00',
                                  'SERVER_NAME': '127.0.0.1',
                                  'SERVER_PORT': '8080',
                                  'SERVER_PROTOCOL': 'HTTP/1.0',
                                  'CONTENT_LENGTH': '0',
                                  'wsgi.version': (1, 0),
                                  'wsgi.url_scheme': 'http',
                                  'wsgi.input': inbuf,
                                  'wsgi.errors': errbuf,
                                  'wsgi.multithread': False,
                                  'wsgi.multiprocess': False,
                                  'wsgi.run_once': False},
                                 start_response)
        self.assertEquals(errbuf.getvalue(), '')
        self.assertEquals(outbuf.getvalue()[:4], '412 ')

    def test_invalid_method_doesnt_exist(self):
        errbuf = StringIO()
        outbuf = StringIO()

        def start_response(*args):
            outbuf.writelines(args)

        self.controller.__call__({'REQUEST_METHOD': 'method_doesnt_exist',
                                  'PATH_INFO': '/sda1/p/a/c'},
                                 start_response)
        self.assertEquals(errbuf.getvalue(), '')
        self.assertEquals(outbuf.getvalue()[:4], '405 ')

    def test_invalid_method_is_not_public(self):
        errbuf = StringIO()
        outbuf = StringIO()

        def start_response(*args):
            outbuf.writelines(args)

        self.controller.__call__({'REQUEST_METHOD': '__init__',
                                  'PATH_INFO': '/sda1/p/a/c'},
                                 start_response)
        self.assertEquals(errbuf.getvalue(), '')
        self.assertEquals(outbuf.getvalue()[:4], '405 ')

    def test_params_format(self):
        req = Request.blank(
            '/sda1/p/a/c', method='PUT',
            headers={'X-Timestamp': Timestamp(1).internal})
        req.get_response(self.controller)
        for format in ('xml', 'json'):
            req = Request.blank('/sda1/p/a/c?format=%s' % format,
                                method='GET')
            resp = req.get_response(self.controller)
            self.assertEquals(resp.status_int, 200)

    def test_params_utf8(self):
        # Bad UTF8 sequence, all parameters should cause 400 error
        for param in ('delimiter', 'limit', 'marker', 'path', 'prefix',
                      'end_marker', 'format'):
            req = Request.blank('/sda1/p/a/c?%s=\xce' % param,
                                environ={'REQUEST_METHOD': 'GET'})
            resp = req.get_response(self.controller)
            self.assertEquals(resp.status_int, 400,
                              "%d on param %s" % (resp.status_int, param))
        # Good UTF8 sequence for delimiter, too long (1 byte delimiters only)
        req = Request.blank('/sda1/p/a/c?delimiter=\xce\xa9',
                            environ={'REQUEST_METHOD': 'GET'})
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 412,
                          "%d on param delimiter" % (resp.status_int))
        req = Request.blank('/sda1/p/a/c', method='PUT',
                            headers={'X-Timestamp': Timestamp(1).internal})
        req.get_response(self.controller)
        # Good UTF8 sequence, ignored for limit, doesn't affect other queries
        for param in ('limit', 'marker', 'path', 'prefix', 'end_marker',
                      'format'):
            req = Request.blank('/sda1/p/a/c?%s=\xce\xa9' % param,
                                environ={'REQUEST_METHOD': 'GET'})
            resp = req.get_response(self.controller)
            self.assertEquals(resp.status_int, 204,
                              "%d on param %s" % (resp.status_int, param))

    def test_put_auto_create(self):
        headers = {'x-timestamp': Timestamp(1).internal,
                   'x-size': '0',
                   'x-content-type': 'text/plain',
                   'x-etag': 'd41d8cd98f00b204e9800998ecf8427e'}

        req = Request.blank('/sda1/p/a/c/o',
                            environ={'REQUEST_METHOD': 'PUT'},
                            headers=dict(headers))
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 404)

        req = Request.blank('/sda1/p/.a/c/o',
                            environ={'REQUEST_METHOD': 'PUT'},
                            headers=dict(headers))
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 201)

        req = Request.blank('/sda1/p/a/.c/o',
                            environ={'REQUEST_METHOD': 'PUT'},
                            headers=dict(headers))
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 404)

        req = Request.blank('/sda1/p/a/c/.o',
                            environ={'REQUEST_METHOD': 'PUT'},
                            headers=dict(headers))
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 404)

    def test_delete_auto_create(self):
        headers = {'x-timestamp': Timestamp(1).internal}

        req = Request.blank('/sda1/p/a/c/o',
                            environ={'REQUEST_METHOD': 'DELETE'},
                            headers=dict(headers))
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 404)

        req = Request.blank('/sda1/p/.a/c/o',
                            environ={'REQUEST_METHOD': 'DELETE'},
                            headers=dict(headers))
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 204)

        req = Request.blank('/sda1/p/a/.c/o',
                            environ={'REQUEST_METHOD': 'DELETE'},
                            headers=dict(headers))
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 404)

        req = Request.blank('/sda1/p/a/.c/.o',
                            environ={'REQUEST_METHOD': 'DELETE'},
                            headers=dict(headers))
        resp = req.get_response(self.controller)
        self.assertEquals(resp.status_int, 404)

    def test_content_type_on_HEAD(self):
        Request.blank('/sda1/p/a/o',
                      headers={'X-Timestamp': Timestamp(1).internal},
                      environ={'REQUEST_METHOD': 'PUT'}).get_response(
                          self.controller)

        env = {'REQUEST_METHOD': 'HEAD'}

        req = Request.blank('/sda1/p/a/o?format=xml', environ=env)
        resp = req.get_response(self.controller)
        self.assertEquals(resp.content_type, 'application/xml')
        self.assertEquals(resp.charset, 'utf-8')

        req = Request.blank('/sda1/p/a/o?format=json', environ=env)
        resp = req.get_response(self.controller)
        self.assertEquals(resp.content_type, 'application/json')
        self.assertEquals(resp.charset, 'utf-8')

        req = Request.blank('/sda1/p/a/o', environ=env)
        resp = req.get_response(self.controller)
        self.assertEquals(resp.content_type, 'text/plain')
        self.assertEquals(resp.charset, 'utf-8')

        req = Request.blank(
            '/sda1/p/a/o', headers={'Accept': 'application/json'}, environ=env)
        resp = req.get_response(self.controller)
        self.assertEquals(resp.content_type, 'application/json')
        self.assertEquals(resp.charset, 'utf-8')

        req = Request.blank(
            '/sda1/p/a/o', headers={'Accept': 'application/xml'}, environ=env)
        resp = req.get_response(self.controller)
        self.assertEquals(resp.content_type, 'application/xml')
        self.assertEquals(resp.charset, 'utf-8')

    def test_updating_multiple_container_servers(self):
        http_connect_args = []

        def fake_http_connect(ipaddr, port, device, partition, method, path,
                              headers=None, query_string=None, ssl=False):

            class SuccessfulFakeConn(object):
                @property
                def status(self):
                    return 200

                def getresponse(self):
                    return self

                def read(self):
                    return ''

            captured_args = {'ipaddr': ipaddr, 'port': port,
                             'device': device, 'partition': partition,
                             'method': method, 'path': path, 'ssl': ssl,
                             'headers': headers, 'query_string': query_string}

            http_connect_args.append(
                dict((k, v) for k, v in captured_args.iteritems()
                     if v is not None))

        req = Request.blank(
            '/sda1/p/a/c',
            environ={'REQUEST_METHOD': 'PUT'},
            headers={'X-Timestamp': '12345',
                     'X-Account-Partition': '30',
                     'X-Account-Host': '1.2.3.4:5, 6.7.8.9:10',
                     'X-Account-Device': 'sdb1, sdf1'})

        orig_http_connect = container_server.http_connect
        try:
            container_server.http_connect = fake_http_connect
            req.get_response(self.controller)
        finally:
            container_server.http_connect = orig_http_connect

        http_connect_args.sort(key=operator.itemgetter('ipaddr'))

        self.assertEquals(len(http_connect_args), 2)
        self.assertEquals(
            http_connect_args[0],
            {'ipaddr': '1.2.3.4',
             'port': '5',
             'path': '/a/c',
             'device': 'sdb1',
             'partition': '30',
             'method': 'PUT',
             'ssl': False,
             'headers': HeaderKeyDict({
                 'x-bytes-used': 0,
                 'x-delete-timestamp': '0',
                 'x-object-count': 0,
                 'x-put-timestamp': Timestamp(12345).internal,
                 'X-Backend-Storage-Policy-Index': '%s' %
                 POLICIES.default.idx,
                 'referer': 'PUT http://localhost/sda1/p/a/c',
                 'user-agent': 'container-server %d' % os.getpid(),
                 'x-trans-id': '-'})})
        self.assertEquals(
            http_connect_args[1],
            {'ipaddr': '6.7.8.9',
             'port': '10',
             'path': '/a/c',
             'device': 'sdf1',
             'partition': '30',
             'method': 'PUT',
             'ssl': False,
             'headers': HeaderKeyDict({
                 'x-bytes-used': 0,
                 'x-delete-timestamp': '0',
                 'x-object-count': 0,
                 'x-put-timestamp': Timestamp(12345).internal,
                 'X-Backend-Storage-Policy-Index': '%s' %
                 POLICIES.default.idx,
                 'referer': 'PUT http://localhost/sda1/p/a/c',
                 'user-agent': 'container-server %d' % os.getpid(),
                 'x-trans-id': '-'})})

    def test_serv_reserv(self):
        # Test replication_server flag was set from configuration file.
        container_controller = container_server.ContainerController
        conf = {'devices': self.testdir, 'mount_check': 'false'}
        self.assertEquals(container_controller(conf).replication_server, None)
        for val in [True, '1', 'True', 'true']:
            conf['replication_server'] = val
            self.assertTrue(container_controller(conf).replication_server)
        for val in [False, 0, '0', 'False', 'false', 'test_string']:
            conf['replication_server'] = val
            self.assertFalse(container_controller(conf).replication_server)

    def test_list_allowed_methods(self):
        # Test list of allowed_methods
        obj_methods = ['DELETE', 'PUT', 'HEAD', 'GET', 'POST']
        repl_methods = ['REPLICATE']
        for method_name in obj_methods:
            method = getattr(self.controller, method_name)
            self.assertFalse(hasattr(method, 'replication'))
        for method_name in repl_methods:
            method = getattr(self.controller, method_name)
            self.assertEquals(method.replication, True)

    def test_correct_allowed_method(self):
        # Test correct work for allowed method using
        # swift.container.server.ContainerController.__call__
        inbuf = StringIO()
        errbuf = StringIO()
        outbuf = StringIO()
        self.controller = container_server.ContainerController(
            {'devices': self.testdir, 'mount_check': 'false',
             'replication_server': 'false'})

        def start_response(*args):
            """Sends args to outbuf"""
            outbuf.writelines(args)

        method = 'PUT'

        env = {'REQUEST_METHOD': method,
               'SCRIPT_NAME': '',
               'PATH_INFO': '/sda1/p/a/c',
               'SERVER_NAME': '127.0.0.1',
               'SERVER_PORT': '8080',
               'SERVER_PROTOCOL': 'HTTP/1.0',
               'CONTENT_LENGTH': '0',
               'wsgi.version': (1, 0),
               'wsgi.url_scheme': 'http',
               'wsgi.input': inbuf,
               'wsgi.errors': errbuf,
               'wsgi.multithread': False,
               'wsgi.multiprocess': False,
               'wsgi.run_once': False}

        method_res = mock.MagicMock()
        mock_method = public(lambda x: mock.MagicMock(return_value=method_res))
        with mock.patch.object(self.controller, method, new=mock_method):
            response = self.controller.__call__(env, start_response)
            self.assertEqual(response, method_res)
|
2012-12-17 06:39:25 -05:00
|
|
|
|
|
|
|

    def test_not_allowed_method(self):
        # Test correct work for NOT allowed method using
        # swift.container.server.ContainerController.__call__
        inbuf = StringIO()
        errbuf = StringIO()
        outbuf = StringIO()
        self.controller = container_server.ContainerController(
            {'devices': self.testdir, 'mount_check': 'false',
             'replication_server': 'false'})

        def start_response(*args):
            """Sends args to outbuf"""
            outbuf.writelines(args)

        method = 'PUT'

        env = {'REQUEST_METHOD': method,
               'SCRIPT_NAME': '',
               'PATH_INFO': '/sda1/p/a/c',
               'SERVER_NAME': '127.0.0.1',
               'SERVER_PORT': '8080',
               'SERVER_PROTOCOL': 'HTTP/1.0',
               'CONTENT_LENGTH': '0',
               'wsgi.version': (1, 0),
               'wsgi.url_scheme': 'http',
               'wsgi.input': inbuf,
               'wsgi.errors': errbuf,
               'wsgi.multithread': False,
               'wsgi.multiprocess': False,
               'wsgi.run_once': False}

        answer = ['<html><h1>Method Not Allowed</h1><p>The method is not '
                  'allowed for this resource.</p></html>']
        mock_method = replication(public(lambda x: mock.MagicMock()))
        with mock.patch.object(self.controller, method, new=mock_method):
            response = self.controller.__call__(env, start_response)
            self.assertEqual(response, answer)
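
    # A rough sketch of the gating the two tests above rely on (assumption:
    # the real `public`/`replication` decorators live in swift.common.utils;
    # this is only an illustration of the marker-attribute idea):
    #
    #   def replication(func):
    #       func.replication = True   # marker checked in __call__
    #       return func
    #
    # With 'replication_server' set to 'false', ContainerController.__call__
    # refuses any method carrying the replication marker, which is why the
    # mocked PUT above receives the Method Not Allowed body instead of the
    # mock's return value.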

    def test_GET_log_requests_true(self):
        self.controller.logger = FakeLogger()
        self.controller.log_requests = True

        req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'GET'})
        resp = req.get_response(self.controller)
        self.assertEqual(resp.status_int, 404)
        self.assertTrue(self.controller.logger.log_dict['info'])

    def test_GET_log_requests_false(self):
        self.controller.logger = FakeLogger()
        self.controller.log_requests = False
        req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'GET'})
        resp = req.get_response(self.controller)
        self.assertEqual(resp.status_int, 404)
        self.assertFalse(self.controller.logger.log_dict['info'])

    def test_log_line_format(self):
        req = Request.blank(
            '/sda1/p/a/c',
            environ={'REQUEST_METHOD': 'HEAD', 'REMOTE_ADDR': '1.2.3.4'})
        self.controller.logger = FakeLogger()
        with mock.patch(
                'time.gmtime', mock.MagicMock(side_effect=[gmtime(10001.0)])):
            with mock.patch(
                    'time.time',
                    mock.MagicMock(side_effect=[10000.0, 10001.0, 10002.0])):
                with mock.patch(
                        'os.getpid', mock.MagicMock(return_value=1234)):
                    req.get_response(self.controller)
        self.assertEqual(
            self.controller.logger.log_dict['info'],
            [(('1.2.3.4 - - [01/Jan/1970:02:46:41 +0000] "HEAD /sda1/p/a/c" '
               '404 - "-" "-" "-" 2.0000 "-" 1234',), {})])

@patch_policies([
    StoragePolicy(0, 'legacy'),
    StoragePolicy(1, 'one'),
    StoragePolicy(2, 'two', True),
    StoragePolicy(3, 'three'),
    StoragePolicy(4, 'four'),
])
class TestNonLegacyDefaultStoragePolicy(TestContainerController):
    """
    Test swift.container.server.ContainerController with a non-legacy default
    Storage Policy.
    """

    def _update_object_put_headers(self, req):
        """
        Add policy index headers for containers created with the default
        policy - which in this TestCase is 2 (the policy marked default in
        the @patch_policies list above).
        """
        req.headers['X-Backend-Storage-Policy-Index'] = \
            str(POLICIES.default.idx)
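
    # Rough usage sketch (illustrative only): the tests inherited from
    # TestContainerController pass their object PUT requests through this
    # hook, so re-running the whole suite under this subclass exercises the
    # non-legacy default policy path. Under the patched policies above:
    #
    #   req = Request.blank('/sda1/p/a/c/o', method='PUT')
    #   self._update_object_put_headers(req)
    #   # req.headers['X-Backend-Storage-Policy-Index'] is now '2'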


if __name__ == '__main__':
    unittest.main()