Merge branch 'master' into merge-from-master
Change-Id: Ife650c292aa6fe7d949fb174525728ccc882094b
Commit: 67b3a74786
CHANGELOG: 40 lines changed
@@ -244,7 +244,7 @@ swift (2.13.0, OpenStack Ocata)
 * PUT subrequests generated from a client-side COPY will now properly log
   the SSC (server-side copy) Swift source field. See
-  https://docs.openstack.org/developer/swift/logs.html#swift-source for
+  https://docs.openstack.org/swift/latest/logs.html#swift-source for
   more information.

 * Fixed a bug where an SLO download with a range request may have resulted
@@ -391,13 +391,13 @@ swift (2.10.0, OpenStack Newton)
 * Object versioning now supports a "history" mode in addition to
   the older "stack" mode. The difference is in how DELETE requests
   are handled. For full details, please read
-  http://docs.openstack.org/developer/swift/overview_object_versioning.html.
+  https://docs.openstack.org/swift/latest/overview_object_versioning.html.

 * New config variables to change the schedule priority and I/O
   scheduling class. Servers and daemons now understand
   `nice_priority`, `ionice_class`, and `ionice_priority` to
   schedule their relative importance. Please read
-  http://docs.openstack.org/developer/swift/deployment_guide.html
+  https://docs.openstack.org/swift/latest/admin_guide.html
   for full config details.

 * On newer kernels (3.15+ when using xfs), Swift will use the O_TMPFILE
@@ -410,7 +410,7 @@ swift (2.10.0, OpenStack Newton)
   improved in clusters that are not completely healthy.

 * Significant improvements to the api-ref doc available at
-  http://developer.openstack.org/api-ref/object-storage/.
+  https://developer.openstack.org/api-ref/object-storage/.

 * A PUT or POST to a container will now update the container's
   Last-Modified time, and that value will be included in a
@@ -464,7 +464,7 @@ swift (2.9.0)

   For more information on the details of the at-rest encryption
   feature, please see the docs at
-  http://docs.openstack.org/developer/swift/overview_encryption.html.
+  https://docs.openstack.org/swift/latest/overview_encryption.html.

 * `swift-recon` can now be called with more than one server type.

@@ -606,7 +606,7 @@ swift (2.7.0, OpenStack Mitaka)
   default it will stagger the firing.

 * Added an operational procedures guide to the docs. It can be
-  found at http://docs.openstack.org/developer/swift/ops_runbook/index.html and
+  found at https://docs.openstack.org/swift/latest/ops_runbook/index.html and
   includes information on detecting and handling day-to-day
   operational issues in a Swift cluster.

@@ -776,7 +776,7 @@ swift (2.6.0)
 * Container sync has been improved to more quickly find and iterate over
   the containers to be synced. This reduced server load and lowers the
   time required to see data propagate between two clusters. Please see
-  http://docs.openstack.org/developer/swift/overview_container_sync.html for more details
+  https://docs.openstack.org/swift/latest/overview_container_sync.html for more details
   about the new on-disk structure for tracking synchronized containers.

 * A container POST will now update that container's put-timestamp value.
@@ -862,7 +862,7 @@ swift (2.4.0)
   server config setting ("allow_versions"), if it is currently enabled.
   The existing container server config setting enables existing
   containers to continue being versioned. Please see
-  http://docs.openstack.org/developer/swift/middleware.html#how-to-enable-object-versioning-in-a-swift-cluster
+  https://docs.openstack.org/swift/latest/middleware.html#how-to-enable-object-versioning-in-a-swift-cluster
   for further upgrade notes.

 * Allow 1+ object-servers-per-disk deployment
@@ -987,7 +987,7 @@ swift (2.3.0, OpenStack Kilo)
   ssync for durability. Deployers are urged to do extensive testing and
   not deploy production data using an erasure code storage policy.

-  Full docs are at http://docs.openstack.org/developer/swift/overview_erasure_code.html
+  Full docs are at https://docs.openstack.org/swift/latest/overview_erasure_code.html

 * Add support for container TempURL Keys.

@@ -996,7 +996,7 @@ swift (2.3.0, OpenStack Kilo)

 * Swift now supports composite tokens. This allows another service to
   act on behalf of a user, but only with that user's consent.
-  See http://docs.openstack.org/developer/swift/overview_auth.html for more details.
+  See https://docs.openstack.org/swift/latest/overview_auth.html for more details.

 * Multi-region replication was improved. When replicating data to a
   different region, only one replica will be pushed per replication
@@ -1004,7 +1004,7 @@ swift (2.3.0, OpenStack Kilo)
   locally instead of pushing more data over the inter-region network.

 * Internal requests from the ratelimit middleware now properly log a
-  swift_source. See http://docs.openstack.org/developer/swift/logs.html for details.
+  swift_source. See https://docs.openstack.org/swift/latest/logs.html for details.

 * Improved storage policy support for quarantine stats in swift-recon.

@@ -1052,7 +1052,7 @@ swift (2.2.2)
   The overload and dispersion metrics have been exposed in the
   swift-ring-build CLI tools.

-  See http://docs.openstack.org/developer/swift/overview_ring.html
+  See https://docs.openstack.org/swift/latest/overview_ring.html
   for more info on how data placement works now.

 * Improve replication of large out-of-sync, out-of-date containers.
@@ -1140,7 +1140,7 @@ swift (2.2.0, OpenStack Juno)
   now requires that ACLs be set on IDs, which are unique across
   domains, and further restricts setting new ACLs to only use IDs.

-  Please see http://docs.openstack.org/developer/swift/overview_auth.html for
+  Please see https://docs.openstack.org/swift/latest/overview_auth.html for
   more information on configuring Swift and Keystone together.

 * Swift now supports server-side account-to-account copy. Server-
@@ -1257,7 +1257,7 @@ swift (2.0.0)
   them. A policy is set on a Swift container at container creation
   time and cannot be changed.

-  Full docs are at http://docs.openstack.org/developer/swift/overview_policies.html
+  Full docs are at https://docs.openstack.org/swift/latest/overview_policies.html

 * Add profiling middleware in Swift

@@ -1351,7 +1351,7 @@ swift (1.13.0)
   the header is a JSON dictionary string to be interpreted by the
   auth system. A reference implementation is given in TempAuth.
   Please see the full docs at
-  http://docs.openstack.org/developer/swift/overview_auth.html
+  https://docs.openstack.org/swift/latest/overview_auth.html

 * Added a WSGI environment flag to stop swob from always using
   absolute location. This is useful if middleware needs to use
@@ -1433,8 +1433,8 @@ swift (1.12.0)
 * New container sync configuration option, separating the end user
   from knowing the required end point and adding more secure
   signed requests. See
-  http://docs.openstack.org/developer/swift/overview_container_sync.html for full
-  information.
+  https://docs.openstack.org/swift/latest/overview_container_sync.html
+  for full information.

 * bulk middleware now can be configured to retry deleting containers.

@@ -1699,7 +1699,7 @@ swift (1.9.0)
   bugrelated to content-disposition names.

 * Added crossdomain.xml middleware. See
-  http://docs.openstack.org/developer/swift/crossdomain.html for details
+  https://docs.openstack.org/swift/latest/crossdomain.html for details

 * Added rsync bandwidth limit setting for object replicator

@@ -1720,7 +1720,7 @@ swift (1.9.0)
 * Improved container-sync resiliency

 * Added example Apache config files. See
-  http://docs.openstack.org/developer/swift/apache_deployment_guide.html
+  https://docs.openstack.org/swift/latest/apache_deployment_guide.html
   for more info

 * If an account is marked as deleted but hasn't been reaped and is still
@@ -1768,7 +1768,7 @@ swift (1.8.0, OpenStack Grizzly)

   This is a change that may require an update to your proxy server
   config file or custom middleware that you may be using. See the full
-  docs at http://docs.openstack.org/developer/swift/misc.html#module-swift.common.middleware.proxy_logging.
+  docs at https://docs.openstack.org/swift/latest/misc.html.

 * Changed the default sample rate for a few high-traffic requests.

@@ -75,7 +75,7 @@ working on.
 Getting Started
 ---------------

-http://docs.openstack.org/developer/swift/first_contribution_swift.html
+https://docs.openstack.org/swift/latest/first_contribution_swift.html

 Once those steps have been completed, changes to OpenStack
 should be submitted for review via the Gerrit tool, following
@@ -116,7 +116,7 @@ Recommended workflow
 ====================

 - Set up a `Swift All-In-One
-  VM <http://docs.openstack.org/developer/swift/development_saio.html>`__\ (SAIO).
+  VM <https://docs.openstack.org/swift/latest/development_saio.html>`__\ (SAIO).

 - Make your changes. Docs and tests for your patch must land before or
   with your patch.
README.rst: 20 lines changed
@@ -31,7 +31,7 @@ To build documentation install sphinx (``pip install sphinx``), run
 ``python setup.py build_sphinx``, and then browse to
 /doc/build/html/index.html. These docs are auto-generated after every
 commit and available online at
-http://docs.openstack.org/developer/swift/.
+https://docs.openstack.org/swift/latest/.

 For Developers
 --------------
@@ -39,13 +39,14 @@ For Developers
 Getting Started
 ~~~~~~~~~~~~~~~

-Swift is part of OpenStack and follows the code contribution, review, and testing processes common to all OpenStack projects.
+Swift is part of OpenStack and follows the code contribution, review, and
+testing processes common to all OpenStack projects.

 If you would like to start contributing, check out these
 `notes <CONTRIBUTING.rst>`__ to help you get started.

 The best place to get started is the
-`"SAIO - Swift All In One" <http://docs.openstack.org/developer/swift/development_saio.html>`__.
+`"SAIO - Swift All In One" <https://docs.openstack.org/swift/latest/development_saio.html>`__.
 This document will walk you through setting up a development cluster of
 Swift in a VM. The SAIO environment is ideal for running small-scale
 tests against swift and trying out new features and bug fixes.
@@ -72,7 +73,7 @@ continue to work.

 Probe tests are "white box" tests that validate the internal workings of a
 Swift cluster. They are written to work against the
-`"SAIO - Swift All In One" <http://docs.openstack.org/developer/swift/development_saio.html>`__
+`"SAIO - Swift All In One" <https://docs.openstack.org/swift/latest/development_saio.html>`__
 dev environment. For example, a probe test may create an object, delete one
 replica, and ensure that the background consistency processes find and correct
 the error.
@@ -119,10 +120,9 @@ For Deployers
 -------------

 Deployer docs are also available at
-http://docs.openstack.org/developer/swift/. A good starting point is at
-http://docs.openstack.org/developer/swift/deployment_guide.html
-
-There is an `ops runbook <http://docs.openstack.org/developer/swift/ops_runbook/>`__
+https://docs.openstack.org/swift/latest/. A good starting point is at
+https://docs.openstack.org/swift/latest/deployment_guide.html
+There is an `ops runbook <https://docs.openstack.org/swift/latest/ops_runbook/index.html>`__
 that gives information about how to diagnose and troubleshoot common issues
 when running a Swift cluster.

@@ -138,11 +138,11 @@ For client applications, official Python language bindings are provided
 at http://github.com/openstack/python-swiftclient.

 Complete API documentation at
-http://developer.openstack.org/api-ref/object-store/
+https://developer.openstack.org/api-ref/object-store/

 There is a large ecosystem of applications and libraries that support and
 work with OpenStack Swift. Several are listed on the
-`associated projects <http://docs.openstack.org/developer/swift/associated_projects.html>`__
+`associated projects <https://docs.openstack.org/swift/latest/associated_projects.html>`__
 page.

 --------------

@@ -1,5 +1,5 @@
 #!/usr/bin/env python
 # Copyright (c) 2010-2012 OpenStack Foundation
+# Copyright (c) 2017 Christian Schwede <cschwede@redhat.com>
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -14,398 +14,11 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

-from __future__ import print_function
-import json
-from collections import defaultdict
-from six.moves.configparser import ConfigParser
-from optparse import OptionParser
-from sys import exit, stdout, stderr
-from time import time
-
-from eventlet import GreenPool, hubs, patcher, Timeout
-from eventlet.pools import Pool
+import sys
-
-from swift.common import direct_client
-try:
-    from swiftclient import get_auth
-except ImportError:
-    from swift.common.internal_client import get_auth
-from swift.common.internal_client import SimpleClient
-from swift.common.ring import Ring
-from swift.common.exceptions import ClientException
-from swift.common.utils import compute_eta, get_time_units, config_true_value
-from swift.common.storage_policy import POLICIES
+from swift.cli.dispersion_report import main
-
-
-unmounted = []
-notfound = []
-json_output = False
-debug = False
-insecure = False
-
-
-def get_error_log(prefix):
-
-    def error_log(msg_or_exc):
-        global debug, unmounted, notfound
-        if hasattr(msg_or_exc, 'http_status'):
-            identifier = '%s:%s/%s' % (msg_or_exc.http_host,
-                                       msg_or_exc.http_port,
-                                       msg_or_exc.http_device)
-            if msg_or_exc.http_status == 507:
-                if identifier not in unmounted:
-                    unmounted.append(identifier)
-                    print('ERROR: %s is unmounted -- This will '
-                          'cause replicas designated for that device to be '
-                          'considered missing until resolved or the ring is '
-                          'updated.' % (identifier), file=stderr)
-                    stderr.flush()
-            if debug and identifier not in notfound:
-                notfound.append(identifier)
-                print('ERROR: %s returned a 404' % (identifier), file=stderr)
-                stderr.flush()
-        if not hasattr(msg_or_exc, 'http_status') or \
-                msg_or_exc.http_status not in (404, 507):
-            print('ERROR: %s: %s' % (prefix, msg_or_exc), file=stderr)
-            stderr.flush()
-    return error_log
-
-
-def container_dispersion_report(coropool, connpool, account, container_ring,
-                                retries, output_missing_partitions, policy):
-    with connpool.item() as conn:
-        containers = [c['name'] for c in conn.get_account(
-            prefix='dispersion_%d' % policy.idx, full_listing=True)[1]]
-    containers_listed = len(containers)
-    if not containers_listed:
-        print('No containers to query. Has '
-              'swift-dispersion-populate been run?', file=stderr)
-        stderr.flush()
-        return
-    retries_done = [0]
-    containers_queried = [0]
-    container_copies_missing = defaultdict(int)
-    container_copies_found = [0]
-    container_copies_expected = [0]
-    begun = time()
-    next_report = [time() + 2]
-
-    def direct(container, part, nodes):
-        found_count = 0
-        for node in nodes:
-            error_log = get_error_log('%(ip)s:%(port)s/%(device)s' % node)
-            try:
-                attempts, _junk = direct_client.retry(
-                    direct_client.direct_head_container, node, part, account,
-                    container, error_log=error_log, retries=retries)
-                retries_done[0] += attempts - 1
-                found_count += 1
-            except ClientException as err:
-                if err.http_status not in (404, 507):
-                    error_log('Giving up on /%s/%s/%s: %s' % (part, account,
-                              container, err))
-            except (Exception, Timeout) as err:
-                error_log('Giving up on /%s/%s/%s: %s' % (part, account,
-                          container, err))
-        if output_missing_partitions and \
-                found_count < len(nodes):
-            missing = len(nodes) - found_count
-            print('\r\x1B[K', end='')
-            stdout.flush()
-            print('# Container partition %s missing %s cop%s' % (
-                part, missing, 'y' if missing == 1 else 'ies'), file=stderr)
-        container_copies_found[0] += found_count
-        containers_queried[0] += 1
-        container_copies_missing[len(nodes) - found_count] += 1
-        if time() >= next_report[0]:
-            next_report[0] = time() + 5
-            eta, eta_unit = compute_eta(begun, containers_queried[0],
-                                        containers_listed)
-            if not json_output:
-                print('\r\x1B[KQuerying containers: %d of %d, %d%s left, %d '
-                      'retries' % (containers_queried[0], containers_listed,
-                                   round(eta), eta_unit, retries_done[0]),
-                      end='')
-                stdout.flush()
-    container_parts = {}
-    for container in containers:
-        part, nodes = container_ring.get_nodes(account, container)
-        if part not in container_parts:
-            container_copies_expected[0] += len(nodes)
-            container_parts[part] = part
-            coropool.spawn(direct, container, part, nodes)
-    coropool.waitall()
-    distinct_partitions = len(container_parts)
-    copies_found = container_copies_found[0]
-    copies_expected = container_copies_expected[0]
-    value = 100.0 * copies_found / copies_expected
-    elapsed, elapsed_unit = get_time_units(time() - begun)
-    container_copies_missing.pop(0, None)
-    if not json_output:
-        print('\r\x1B[KQueried %d containers for dispersion reporting, '
-              '%d%s, %d retries' % (containers_listed, round(elapsed),
-                                    elapsed_unit, retries_done[0]))
-        if containers_listed - distinct_partitions:
-            print('There were %d overlapping partitions' % (
-                containers_listed - distinct_partitions))
-        for missing_copies, num_parts in container_copies_missing.items():
-            print(missing_string(num_parts, missing_copies,
-                                 container_ring.replica_count))
-        print('%.02f%% of container copies found (%d of %d)' % (
-            value, copies_found, copies_expected))
-        print('Sample represents %.02f%% of the container partition space' % (
-            100.0 * distinct_partitions / container_ring.partition_count))
-        stdout.flush()
-        return None
-    else:
-        results = {'retries': retries_done[0],
-                   'overlapping': containers_listed - distinct_partitions,
-                   'pct_found': value,
-                   'copies_found': copies_found,
-                   'copies_expected': copies_expected}
-        for missing_copies, num_parts in container_copies_missing.items():
-            results['missing_%d' % (missing_copies)] = num_parts
-        return results
-
-
-def object_dispersion_report(coropool, connpool, account, object_ring,
-                             retries, output_missing_partitions, policy):
-    container = 'dispersion_objects_%d' % policy.idx
-    with connpool.item() as conn:
-        try:
-            objects = [o['name'] for o in conn.get_container(
-                container, prefix='dispersion_', full_listing=True)[1]]
-        except ClientException as err:
-            if err.http_status != 404:
-                raise
-
-            print('No objects to query. Has '
-                  'swift-dispersion-populate been run?', file=stderr)
-            stderr.flush()
-            return
-    objects_listed = len(objects)
-    if not objects_listed:
-        print('No objects to query. Has swift-dispersion-populate '
-              'been run?', file=stderr)
-        stderr.flush()
-        return
-    retries_done = [0]
-    objects_queried = [0]
-    object_copies_found = [0]
-    object_copies_expected = [0]
-    object_copies_missing = defaultdict(int)
-    begun = time()
-    next_report = [time() + 2]
-
-    headers = None
-    if policy is not None:
-        headers = {}
-        headers['X-Backend-Storage-Policy-Index'] = int(policy)
-
-    def direct(obj, part, nodes):
-        found_count = 0
-        for node in nodes:
-            error_log = get_error_log('%(ip)s:%(port)s/%(device)s' % node)
-            try:
-                attempts, _junk = direct_client.retry(
-                    direct_client.direct_head_object, node, part, account,
-                    container, obj, error_log=error_log, retries=retries,
-                    headers=headers)
-                retries_done[0] += attempts - 1
-                found_count += 1
-            except ClientException as err:
-                if err.http_status not in (404, 507):
-                    error_log('Giving up on /%s/%s/%s/%s: %s' % (part, account,
-                              container, obj, err))
-            except (Exception, Timeout) as err:
-                error_log('Giving up on /%s/%s/%s/%s: %s' % (part, account,
-                          container, obj, err))
-        if output_missing_partitions and \
-                found_count < len(nodes):
-            missing = len(nodes) - found_count
-            print('\r\x1B[K', end='')
-            stdout.flush()
-            print('# Object partition %s missing %s cop%s' % (
-                part, missing, 'y' if missing == 1 else 'ies'), file=stderr)
-        object_copies_found[0] += found_count
-        object_copies_missing[len(nodes) - found_count] += 1
-        objects_queried[0] += 1
-        if time() >= next_report[0]:
-            next_report[0] = time() + 5
-            eta, eta_unit = compute_eta(begun, objects_queried[0],
-                                        objects_listed)
-            if not json_output:
-                print('\r\x1B[KQuerying objects: %d of %d, %d%s left, %d '
-                      'retries' % (objects_queried[0], objects_listed,
-                                   round(eta), eta_unit, retries_done[0]),
-                      end='')
-                stdout.flush()
-    object_parts = {}
-    for obj in objects:
-        part, nodes = object_ring.get_nodes(account, container, obj)
-        if part not in object_parts:
-            object_copies_expected[0] += len(nodes)
-            object_parts[part] = part
-            coropool.spawn(direct, obj, part, nodes)
-    coropool.waitall()
-    distinct_partitions = len(object_parts)
-    copies_found = object_copies_found[0]
-    copies_expected = object_copies_expected[0]
-    value = 100.0 * copies_found / copies_expected
-    elapsed, elapsed_unit = get_time_units(time() - begun)
-    if not json_output:
-        print('\r\x1B[KQueried %d objects for dispersion reporting, '
-              '%d%s, %d retries' % (objects_listed, round(elapsed),
-                                    elapsed_unit, retries_done[0]))
-        if objects_listed - distinct_partitions:
-            print('There were %d overlapping partitions' % (
-                objects_listed - distinct_partitions))
-
-        for missing_copies, num_parts in object_copies_missing.items():
-            print(missing_string(num_parts, missing_copies,
-                                 object_ring.replica_count))
-
-        print('%.02f%% of object copies found (%d of %d)' %
-              (value, copies_found, copies_expected))
-        print('Sample represents %.02f%% of the object partition space' % (
-            100.0 * distinct_partitions / object_ring.partition_count))
-        stdout.flush()
-        return None
-    else:
-        results = {'retries': retries_done[0],
-                   'overlapping': objects_listed - distinct_partitions,
-                   'pct_found': value,
-                   'copies_found': copies_found,
-                   'copies_expected': copies_expected}
-
-        for missing_copies, num_parts in object_copies_missing.items():
-            results['missing_%d' % (missing_copies,)] = num_parts
-        return results
-
-
-def missing_string(partition_count, missing_copies, copy_count):
-    exclamations = ''
-    missing_string = str(missing_copies)
-    if missing_copies == copy_count:
-        exclamations = '!!! '
-        missing_string = 'all'
-    elif copy_count - missing_copies == 1:
-        exclamations = '! '
-
-    verb_string = 'was'
-    partition_string = 'partition'
-    if partition_count > 1:
-        verb_string = 'were'
-        partition_string = 'partitions'
-
-    copy_string = 'copies'
-    if missing_copies == 1:
-        copy_string = 'copy'
-
-    return '%sThere %s %d %s missing %s %s.' % (
-        exclamations, verb_string, partition_count, partition_string,
-        missing_string, copy_string
-    )
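For reference, the pluralization helper deleted above can be exercised standalone. This is a condensed, self-contained copy for illustration only (not part of the commit); the logic mirrors the removed `missing_string` function:

```python
def missing_string(partition_count, missing_copies, copy_count):
    # Build a human-readable summary line; the '!' markers escalate
    # as fewer copies of a partition remain (one copy left -> '! ',
    # all copies missing -> '!!! ').
    exclamations = ''
    missing = str(missing_copies)
    if missing_copies == copy_count:
        exclamations = '!!! '
        missing = 'all'
    elif copy_count - missing_copies == 1:
        exclamations = '! '
    verb = 'were' if partition_count > 1 else 'was'
    parts = 'partitions' if partition_count > 1 else 'partition'
    copies = 'copy' if missing_copies == 1 else 'copies'
    return '%sThere %s %d %s missing %s %s.' % (
        exclamations, verb, partition_count, parts, missing, copies)

print(missing_string(2, 1, 3))  # one of three copies missing, two partitions
print(missing_string(1, 3, 3))  # every copy of one partition missing
```

The first call produces a plain report line; the second escalates to the `!!!` prefix because no copy of the partition was found.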
-
-
-if __name__ == '__main__':
-    patcher.monkey_patch()
-    hubs.get_hub().debug_exceptions = False
-
-    conffile = '/etc/swift/dispersion.conf'
-
-    parser = OptionParser(usage='''
-Usage: %%prog [options] [conf_file]
-
-[conf_file] defaults to %s'''.strip() % conffile)
-    parser.add_option('-j', '--dump-json', action='store_true', default=False,
-                      help='dump dispersion report in json format')
-    parser.add_option('-d', '--debug', action='store_true', default=False,
-                      help='print 404s to standard error')
-    parser.add_option('-p', '--partitions', action='store_true', default=False,
-                      help='print missing partitions to standard error')
-    parser.add_option('--container-only', action='store_true', default=False,
-                      help='Only run container report')
-    parser.add_option('--object-only', action='store_true', default=False,
-                      help='Only run object report')
-    parser.add_option('--insecure', action='store_true', default=False,
-                      help='Allow accessing insecure keystone server. '
-                           'The keystone\'s certificate will not be verified.')
-    parser.add_option('-P', '--policy-name', dest='policy_name',
-                      help="Specify storage policy name")
-
-    options, args = parser.parse_args()
-
-    if args:
-        conffile = args.pop(0)
-
-    c = ConfigParser()
-    if not c.read(conffile):
-        exit('Unable to read config file: %s' % conffile)
-    conf = dict(c.items('dispersion'))
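The config-loading pattern in the removed `__main__` block (read an ini section with `ConfigParser`, then apply defaults via `dict.get`) can be sketched in isolation. The section and option names below mirror `/etc/swift/dispersion.conf`; the in-memory sample config is purely illustrative:

```python
from configparser import ConfigParser

# A stand-in for /etc/swift/dispersion.conf; only 'retries' is set,
# so 'concurrency' will fall back to the coded default.
sample = """
[dispersion]
retries = 3
"""

c = ConfigParser()
c.read_string(sample)
conf = dict(c.items('dispersion'))

# Same idiom as the removed script: every option is a string, so the
# caller casts and supplies a default in one expression.
retries = int(conf.get('retries', 5))
concurrency = int(conf.get('concurrency', 25))
```

Converting the section to a plain dict keeps the rest of the script independent of `ConfigParser`; missing options simply hit the `dict.get` default.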
-
-    if options.policy_name is None:
-        policy = POLICIES.default
-    else:
-        policy = POLICIES.get_by_name(options.policy_name)
-        if policy is None:
-            exit('Unable to find policy: %s' % options.policy_name)
-    print('Using storage policy: %s ' % policy.name)
-
-    swift_dir = conf.get('swift_dir', '/etc/swift')
-    retries = int(conf.get('retries', 5))
-    concurrency = int(conf.get('concurrency', 25))
-    endpoint_type = str(conf.get('endpoint_type', 'publicURL'))
-    region_name = str(conf.get('region_name', ''))
-    if options.dump_json or config_true_value(conf.get('dump_json', 'no')):
-        json_output = True
-    container_report = config_true_value(conf.get('container_report', 'yes')) \
-        and not options.object_only
-    object_report = config_true_value(conf.get('object_report', 'yes')) \
-        and not options.container_only
-    if not (object_report or container_report):
-        exit("Neither container or object report is set to run")
-    user_domain_name = str(conf.get('user_domain_name', ''))
-    project_domain_name = str(conf.get('project_domain_name', ''))
-    project_name = str(conf.get('project_name', ''))
-    insecure = options.insecure \
-        or config_true_value(conf.get('keystone_api_insecure', 'no'))
-    if options.debug:
-        debug = True
-
-    coropool = GreenPool(size=concurrency)
-
-    os_options = {'endpoint_type': endpoint_type}
-    if user_domain_name:
-        os_options['user_domain_name'] = user_domain_name
-    if project_domain_name:
-        os_options['project_domain_name'] = project_domain_name
-    if project_name:
-        os_options['project_name'] = project_name
-    if region_name:
-        os_options['region_name'] = region_name
-
-    url, token = get_auth(conf['auth_url'], conf['auth_user'],
-                          conf['auth_key'],
-                          auth_version=conf.get('auth_version', '1.0'),
-                          os_options=os_options,
-                          insecure=insecure)
-    account = url.rsplit('/', 1)[1]
-    connpool = Pool(max_size=concurrency)
-    connpool.create = lambda: SimpleClient(
-        url=url, token=token, retries=retries)
-
-    container_ring = Ring(swift_dir, ring_name='container')
-    object_ring = Ring(swift_dir, ring_name=policy.ring_name)
-
-    output = {}
-    if container_report:
-        output['container'] = container_dispersion_report(
-            coropool, connpool, account, container_ring, retries,
-            options.partitions, policy)
-    if object_report:
-        output['object'] = object_dispersion_report(
-            coropool, connpool, account, object_ring, retries,
-            options.partitions, policy)
-    if json_output:
-        print(json.dumps(output))
+
+if __name__ == "__main__":
+    sys.exit(main())
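The net effect of this hunk is that the executable shrinks to a thin wrapper delegating to `swift.cli.dispersion_report.main`. A minimal sketch of that pattern, with a stand-in `main` since the `swift` package is not importable here:

```python
import sys

def main(argv=None):
    # Hypothetical stand-in for swift.cli.dispersion_report.main:
    # the real function parses options, runs the container/object
    # reports, and returns an integer process exit status.
    return 0

if __name__ == "__main__":
    # The wrapper forwards main()'s return value as the process exit
    # code, so shell callers can check success via $?.
    sys.exit(main())
```

Keeping the `bin/` script this small moves all testable logic into the importable `swift.cli` module, which is the usual motivation for this refactor.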
@@ -996,8 +996,6 @@ Error count to consider a node error limited. The default is 10.
 Whether account PUTs and DELETEs are even callable. If set to 'true' any authorized
 user may create and delete accounts; if 'false' no one, even authorized, can. The default
 is false.
-.IP \fBobject_post_as_copy\fR
-Deprecated. The default is False.
 .IP \fBaccount_autocreate\fR
 If set to 'true' authorized accounts that do not yet exist within the Swift cluster
 will be automatically created. The default is set to false.
@@ -9,7 +9,7 @@ eventlet_debug = true
 [pipeline:main]
 # Yes, proxy-logging appears twice. This is so that
 # middleware-originated requests get logged too.
-pipeline = catch_errors gatekeeper healthcheck proxy-logging cache bulk tempurl ratelimit crossdomain container_sync tempauth staticweb copy container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server
+pipeline = catch_errors gatekeeper healthcheck proxy-logging cache listing_formats bulk tempurl ratelimit crossdomain container_sync tempauth staticweb copy container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server

 [filter:catch_errors]
 use = egg:swift#catch_errors
@@ -71,6 +71,9 @@ allow_versioned_writes = true
 [filter:copy]
 use = egg:swift#copy

+[filter:listing_formats]
+use = egg:swift#listing_formats
+
 [app:proxy-server]
 use = egg:swift#proxy
 allow_account_management = true
@@ -1493,6 +1493,6 @@ See :ref:`custom-logger-hooks-label` for sample use cases.
 Securing OpenStack Swift
 ------------------------

-Please refer to the security guide at http://docs.openstack.org/security-guide
+Please refer to the security guide at https://docs.openstack.org/security-guide
 and in particular the `Object Storage
-<http://docs.openstack.org/security-guide/object-storage.html>`__ section.
+<https://docs.openstack.org/security-guide/object-storage.html>`__ section.
@@ -169,14 +169,14 @@ The API Reference describes the operations that you can perform with the
 Object Storage API:

 - `Storage
-  accounts <http://developer.openstack.org/api-ref/object-storage/index.html#accounts>`__:
+  accounts <https://developer.openstack.org/api-ref/object-storage/index.html#accounts>`__:
   Use to perform account-level tasks.

   Lists containers for a specified account. Creates, updates, and
   deletes account metadata. Shows account metadata.

 - `Storage
-  containers <http://developer.openstack.org/api-ref/object-storage/index.html#containers>`__:
+  containers <https://developer.openstack.org/api-ref/object-storage/index.html#containers>`__:
   Use to perform container-level tasks.

   Lists objects in a specified container. Creates, shows details for,
@@ -184,7 +184,7 @@ Object Storage API:
   container metadata.

 - `Storage
-  objects <http://developer.openstack.org/api-ref/object-storage/index.html#objects>`__:
+  objects <https://developer.openstack.org/api-ref/object-storage/index.html#objects>`__:
   Use to perform object-level tasks.

   Creates, replaces, shows details for, and deletes objects. Copies
@ -222,4 +222,4 @@ Note that if the above example is copied exactly, and used in a command
|
||||
shell, then the ampersand is interpreted as an operator and the URL
|
||||
will be truncated. Enclose the URL in quotation marks to avoid this.
|
||||
|
||||
.. _tempurl: http://docs.openstack.org/developer/python-swiftclient/cli.html#tempurl
|
||||
.. _tempurl: https://docs.openstack.org/python-swiftclient/latest/cli/index.html#swift-tempurl
|
||||
|
@ -1884,7 +1884,6 @@ error_suppression_limit 10 Error count to consider
                                          node error limited
allow_account_management  false           Whether account PUTs and DELETEs
                                          are even callable
object_post_as_copy       false           Deprecated.
account_autocreate        false           If set to 'true' authorized
                                          accounts that do not yet exist
                                          within the Swift cluster will
@ -127,9 +127,6 @@ set using environment variables:
  environment variable ``SWIFT_TEST_IN_PROCESS_CONF_LOADER`` to
  ``ec``.

- the deprecated proxy-server ``object_post_as_copy`` option may be set using
  the environment variable ``SWIFT_TEST_IN_PROCESS_OBJECT_POST_AS_COPY``.

- logging to stdout may be enabled by setting ``SWIFT_TEST_DEBUG_LOGS``.

For example, this command would run the in-process mode functional tests with
@ -147,7 +144,6 @@ The ``tox.ini`` file also specifies test environments for running other
in-process functional test configurations, e.g.::

    tox -e func-ec
    tox -e func-post-as-copy

To debug the functional tests, use the 'in-process test' mode and pass the
``--pdb`` flag to ``tox``::
@ -28,7 +28,7 @@ following actions:
.. note::

   For more information on other modules that enable additional features,
   see the `Deployment Guide <http://docs.openstack.org/developer/swift/deployment_guide.html>`__.
   see the `Deployment Guide <https://docs.openstack.org/swift/latest/deployment_guide.html>`__.

* In the ``[app:proxy-server]`` section, enable automatic account creation:

@ -10,7 +10,7 @@ the proxy service on the controller node. However, you can run the proxy
service on any node with network connectivity to the storage nodes.
Additionally, you can install and configure the proxy service on multiple
nodes to increase performance and redundancy. For more information, see the
`Deployment Guide <http://docs.openstack.org/developer/swift/deployment_guide.html>`__.
`Deployment Guide <https://docs.openstack.org/swift/latest/deployment_guide.html>`__.

This section applies to Debian.

@ -10,7 +10,7 @@ the proxy service on the controller node. However, you can run the proxy
service on any node with network connectivity to the storage nodes.
Additionally, you can install and configure the proxy service on multiple
nodes to increase performance and redundancy. For more information, see the
`Deployment Guide <http://docs.openstack.org/developer/swift/deployment_guide.html>`__.
`Deployment Guide <https://docs.openstack.org/swift/latest/deployment_guide.html>`__.

This section applies to openSUSE Leap 42.2 and SUSE Linux Enterprise Server
12 SP2.

@ -10,7 +10,7 @@ the proxy service on the controller node. However, you can run the proxy
service on any node with network connectivity to the storage nodes.
Additionally, you can install and configure the proxy service on multiple
nodes to increase performance and redundancy. For more information, see the
`Deployment Guide <http://docs.openstack.org/developer/swift/deployment_guide.html>`__.
`Deployment Guide <https://docs.openstack.org/swift/latest/deployment_guide.html>`__.

This section applies to Red Hat Enterprise Linux 7 and CentOS 7.

@ -10,7 +10,7 @@ the proxy service on the controller node. However, you can run the proxy
service on any node with network connectivity to the storage nodes.
Additionally, you can install and configure the proxy service on multiple
nodes to increase performance and redundancy. For more information, see the
`Deployment Guide <http://docs.openstack.org/developer/swift/deployment_guide.html>`__.
`Deployment Guide <https://docs.openstack.org/swift/latest/deployment_guide.html>`__.

This section applies to Ubuntu 14.04 (LTS).
@ -40,7 +40,7 @@ swift client
swift-init
  Script that initializes the building of the ring file, takes daemon
  names as parameter and offers commands. Documented in
  http://docs.openstack.org/developer/swift/admin_guide.html#managing-services.
  https://docs.openstack.org/swift/latest/admin_guide.html#managing-services.

swift-recon
  A cli tool used to retrieve various metrics and telemetry information
@ -48,4 +48,4 @@ swift-recon

swift-ring-builder
  Storage ring build and rebalance utility. Documented in
  http://docs.openstack.org/developer/swift/admin_guide.html#managing-the-rings.
  https://docs.openstack.org/swift/latest/admin_guide.html#managing-the-rings.
@ -9,7 +9,7 @@ maximum partitions, 3 replicas of each object, and 1 hour minimum time between
moving a partition more than once. For Object Storage, a partition indicates a
directory on a storage device rather than a conventional partition table.
For more information, see the
`Deployment Guide <http://docs.openstack.org/developer/swift/deployment_guide.html>`__.
`Deployment Guide <https://docs.openstack.org/swift/latest/deployment_guide.html>`__.

.. note::

   Perform these steps on the controller node.

@ -28,7 +28,7 @@ following actions:
.. note::

   For more information on other modules that enable additional features,
   see the `Deployment Guide <http://docs.openstack.org/developer/swift/deployment_guide.html>`__.
   see the `Deployment Guide <https://docs.openstack.org/swift/latest/deployment_guide.html>`__.

* In the ``[filter:recon]`` section, configure the recon (meters) cache
  directory:

@ -28,7 +28,7 @@ following actions:
.. note::

   For more information on other modules that enable additional features,
   see the `Deployment Guide <http://docs.openstack.org/developer/swift/deployment_guide.html>`__.
   see the `Deployment Guide <https://docs.openstack.org/swift/latest/deployment_guide.html>`__.

* In the ``[filter:recon]`` section, configure the recon (meters) cache
  directory:

@ -28,7 +28,7 @@ following actions:
.. note::

   For more information on other modules that enable additional features,
   see the `Deployment Guide <http://docs.openstack.org/developer/swift/deployment_guide.html>`__.
   see the `Deployment Guide <https://docs.openstack.org/swift/latest/deployment_guide.html>`__.

* In the ``[filter:recon]`` section, configure the recon (meters) cache
  and lock directories:
|
@ -14,7 +14,7 @@ Although Object Storage supports any file system with
|
||||
extended attributes (xattr), testing and benchmarking
|
||||
indicate the best performance and reliability on XFS. For
|
||||
more information on horizontally scaling your environment, see the
|
||||
`Deployment Guide <http://docs.openstack.org/developer/swift/deployment_guide.html>`_.
|
||||
`Deployment Guide <https://docs.openstack.org/swift/latest/deployment_guide.html>`_.
|
||||
|
||||
This section applies to openSUSE Leap 42.2 and SUSE Linux Enterprise Server
|
||||
12 SP2.
|
||||
|
@ -14,7 +14,7 @@ Although Object Storage supports any file system with
|
||||
extended attributes (xattr), testing and benchmarking
|
||||
indicate the best performance and reliability on XFS. For
|
||||
more information on horizontally scaling your environment, see the
|
||||
`Deployment Guide <http://docs.openstack.org/developer/swift/deployment_guide.html>`_.
|
||||
`Deployment Guide <https://docs.openstack.org/swift/latest/deployment_guide.html>`_.
|
||||
|
||||
This section applies to Red Hat Enterprise Linux 7 and CentOS 7.
|
||||
|
||||
|
@ -14,7 +14,7 @@ Although Object Storage supports any file system with
|
||||
extended attributes (xattr), testing and benchmarking
|
||||
indicate the best performance and reliability on XFS. For
|
||||
more information on horizontally scaling your environment, see the
|
||||
`Deployment Guide <http://docs.openstack.org/developer/swift/deployment_guide.html>`_.
|
||||
`Deployment Guide <https://docs.openstack.org/swift/latest/deployment_guide.html>`_.
|
||||
|
||||
This section applies to Ubuntu 14.04 (LTS) and Debian.
|
||||
|
||||
|
@ -104,8 +104,8 @@ can be found in the KeystoneMiddleware_ distribution.
The :ref:`keystoneauth` middleware performs authorization and mapping the
Keystone roles to Swift's ACLs.

.. _KeystoneMiddleware: http://docs.openstack.org/developer/keystonemiddleware/
.. _Keystone: http://docs.openstack.org/developer/keystone/
.. _KeystoneMiddleware: https://docs.openstack.org/keystonemiddleware/latest/
.. _Keystone: https://docs.openstack.org/keystone/latest/

.. _configuring_keystone_auth:

@ -167,7 +167,7 @@ your situation, but in short:
  service. The example values shown here assume a user named 'swift' with admin
  role on a project named 'service', both being in the Keystone domain with id
  'default'. Refer to the `KeystoneMiddleware documentation
  <http://docs.openstack.org/developer/keystonemiddleware/middlewarearchitecture.html#configuration>`_
  <https://docs.openstack.org/keystonemiddleware/latest/middlewarearchitecture.html#configuration>`_
  for other examples.

* ``cache`` is set to ``swift.cache``. This means that the middleware
@ -238,7 +238,7 @@ Keys currently stored in Barbican can be listed using the
The keymaster uses the explicitly configured username and password (and
project name etc.) from the `keymaster.conf` file for retrieving the encryption
root secret from an external key management system. The `Castellan library
<http://docs.openstack.org/developer/castellan/>`_ is used to communicate with
<https://docs.openstack.org/castellan/latest/>`_ is used to communicate with
Barbican.

For the proxy server, reading the encryption root secret directly from the
@ -94,7 +94,7 @@ bind_port = 8080
[pipeline:main]
# This sample pipeline uses tempauth and is used for SAIO dev work and
# testing. See below for a pipeline using keystone.
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk tempurl ratelimit tempauth copy container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache listing_formats container_sync bulk tempurl ratelimit tempauth copy container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server

# The following pipeline shows keystone integration. Comment out the one
# above and uncomment this one. Additional steps for integrating keystone are
@ -357,7 +357,7 @@ user_test5_tester5 = testing5 service
# Following parameters are known to work with keystonemiddleware v2.3.0
# (above v2.0.0), but checking the latest information in the wiki page[1]
# is recommended.
# 1. http://docs.openstack.org/developer/keystonemiddleware/middlewarearchitecture.html#configuration
# 1. https://docs.openstack.org/keystonemiddleware/latest/middlewarearchitecture.html#configuration
#
# [filter:authtoken]
# paste.filter_factory = keystonemiddleware.auth_token:filter_factory
@ -860,8 +860,6 @@ use = egg:swift#copy
# requests are transformed into COPY requests where source and destination are
# the same. All client-visible behavior (save response time) should be
# identical.
# This option is deprecated and will be ignored in a future release.
# object_post_as_copy = false

# Note: To enable encryption, add the following 2 dependent pieces of crypto
# middleware to the proxy-server pipeline. They should be to the right of all
@ -915,3 +913,9 @@ use = egg:swift#encryption
# disable_encryption to True. However, all encryption middleware should remain
# in the pipeline in order for existing encrypted data to be read.
# disable_encryption = False

# listing_formats should be just right of the first proxy-logging middleware,
# and left of most other middlewares. If it is not already present, it will
# be automatically inserted for you.
[filter:listing_formats]
use = egg:swift#listing_formats
@ -107,6 +107,7 @@ paste.filter_factory =
keymaster = swift.common.middleware.crypto.keymaster:filter_factory
encryption = swift.common.middleware.crypto:filter_factory
kms_keymaster = swift.common.middleware.crypto.kms_keymaster:filter_factory
listing_formats = swift.common.middleware.listing_formats:filter_factory

[build_sphinx]
all_files = 1
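The new `listing_formats` middleware centralizes content negotiation for account and container listings. A simplified, Accept-header-only sketch of that negotiation is shown below; this is an illustration, not Swift's actual `get_listing_content_type` (the real helper also honors the `?format=` query parameter and returns a 406 for unsupported types):

```python
def get_listing_content_type(accept_header):
    # JSON is the canonical internal listing format; XML and plain text
    # responses are rendered from the JSON-style data structure.
    accept = (accept_header or '').lower()
    if '/json' in accept:
        return 'application/json'
    if '/xml' in accept:
        return 'application/xml'
    # Default for */*, text/plain, or no Accept header at all.
    return 'text/plain'
```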
@ -24,15 +24,16 @@ import swift.common.db
from swift.account.backend import AccountBroker, DATADIR
from swift.account.utils import account_listing_response, get_response_headers
from swift.common.db import DatabaseConnectionError, DatabaseAlreadyExists
from swift.common.request_helpers import get_param, get_listing_content_type, \
from swift.common.request_helpers import get_param, \
    split_and_validate_path
from swift.common.utils import get_logger, hash_path, public, \
    Timestamp, storage_directory, config_true_value, \
    json, timing_stats, replication, get_log_line
from swift.common.constraints import check_mount, valid_timestamp, check_utf8
from swift.common.constraints import valid_timestamp, check_utf8, check_drive
from swift.common import constraints
from swift.common.db_replicator import ReplicatorRpc
from swift.common.base_storage_server import BaseStorageServer
from swift.common.middleware import listing_formats
from swift.common.swob import HTTPAccepted, HTTPBadRequest, \
    HTTPCreated, HTTPForbidden, HTTPInternalServerError, \
    HTTPMethodNotAllowed, HTTPNoContent, HTTPNotFound, \
@ -87,7 +88,7 @@ class AccountController(BaseStorageServer):
    def DELETE(self, req):
        """Handle HTTP DELETE request."""
        drive, part, account = split_and_validate_path(req, 3)
        if self.mount_check and not check_mount(self.root, drive):
        if not check_drive(self.root, drive, self.mount_check):
            return HTTPInsufficientStorage(drive=drive, request=req)
        req_timestamp = valid_timestamp(req)
        broker = self._get_account_broker(drive, part, account)
@ -101,7 +102,7 @@ class AccountController(BaseStorageServer):
    def PUT(self, req):
        """Handle HTTP PUT request."""
        drive, part, account, container = split_and_validate_path(req, 3, 4)
        if self.mount_check and not check_mount(self.root, drive):
        if not check_drive(self.root, drive, self.mount_check):
            return HTTPInsufficientStorage(drive=drive, request=req)
        if container:   # put account container
            if 'x-timestamp' not in req.headers:
@ -167,8 +168,8 @@ class AccountController(BaseStorageServer):
    def HEAD(self, req):
        """Handle HTTP HEAD request."""
        drive, part, account = split_and_validate_path(req, 3)
        out_content_type = get_listing_content_type(req)
        if self.mount_check and not check_mount(self.root, drive):
        out_content_type = listing_formats.get_listing_content_type(req)
        if not check_drive(self.root, drive, self.mount_check):
            return HTTPInsufficientStorage(drive=drive, request=req)
        broker = self._get_account_broker(drive, part, account,
                                          pending_timeout=0.1,
@ -201,9 +202,9 @@ class AccountController(BaseStorageServer):
                                  constraints.ACCOUNT_LISTING_LIMIT)
        marker = get_param(req, 'marker', '')
        end_marker = get_param(req, 'end_marker')
        out_content_type = get_listing_content_type(req)
        out_content_type = listing_formats.get_listing_content_type(req)

        if self.mount_check and not check_mount(self.root, drive):
        if not check_drive(self.root, drive, self.mount_check):
            return HTTPInsufficientStorage(drive=drive, request=req)
        broker = self._get_account_broker(drive, part, account,
                                          pending_timeout=0.1,
@ -224,7 +225,7 @@ class AccountController(BaseStorageServer):
        """
        post_args = split_and_validate_path(req, 3)
        drive, partition, hash = post_args
        if self.mount_check and not check_mount(self.root, drive):
        if not check_drive(self.root, drive, self.mount_check):
            return HTTPInsufficientStorage(drive=drive, request=req)
        try:
            args = json.load(req.environ['wsgi.input'])
@ -240,7 +241,7 @@ class AccountController(BaseStorageServer):
        """Handle HTTP POST request."""
        drive, part, account = split_and_validate_path(req, 3)
        req_timestamp = valid_timestamp(req)
        if self.mount_check and not check_mount(self.root, drive):
        if not check_drive(self.root, drive, self.mount_check):
            return HTTPInsufficientStorage(drive=drive, request=req)
        broker = self._get_account_broker(drive, part, account)
        if broker.is_deleted():
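The hunks above replace the repeated `if self.mount_check and not check_mount(...)` pattern with a single `check_drive(root, drive, mount_check)` call. A minimal sketch of what such a helper does is below; this is illustrative only, and Swift's real `check_drive` also validates the drive name before touching the filesystem:

```python
import os


def check_drive(root, drive, mount_check):
    # Consolidated drive check: when mount_check is on, the device
    # directory must be a mount point; otherwise an existing directory
    # is enough. Returns True when the drive is usable.
    path = os.path.join(root, drive)
    if mount_check:
        return os.path.ismount(path)
    return os.path.isdir(path)
```

Folding the conditional into the helper means every handler reduces to `if not check_drive(...): return HTTPInsufficientStorage(...)`, which is what the diff shows.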
@ -14,8 +14,8 @@
# limitations under the License.

import json
from xml.sax import saxutils

from swift.common.middleware import listing_formats
from swift.common.swob import HTTPOk, HTTPNoContent
from swift.common.utils import Timestamp
from swift.common.storage_policy import POLICIES
@ -78,43 +78,27 @@ def account_listing_response(account, req, response_content_type, broker=None,

    account_list = broker.list_containers_iter(limit, marker, end_marker,
                                               prefix, delimiter, reverse)
    if response_content_type == 'application/json':
        data = []
        for (name, object_count, bytes_used, put_timestamp, is_subdir) \
                in account_list:
            if is_subdir:
                data.append({'subdir': name})
            else:
                data.append(
                    {'name': name, 'count': object_count,
                     'bytes': bytes_used,
                     'last_modified': Timestamp(put_timestamp).isoformat})
    data = []
    for (name, object_count, bytes_used, put_timestamp, is_subdir) \
            in account_list:
        if is_subdir:
            data.append({'subdir': name.decode('utf8')})
        else:
            data.append(
                {'name': name.decode('utf8'), 'count': object_count,
                 'bytes': bytes_used,
                 'last_modified': Timestamp(put_timestamp).isoformat})
    if response_content_type.endswith('/xml'):
        account_list = listing_formats.account_to_xml(data, account)
        ret = HTTPOk(body=account_list, request=req, headers=resp_headers)
    elif response_content_type.endswith('/json'):
        account_list = json.dumps(data)
    elif response_content_type.endswith('/xml'):
        output_list = ['<?xml version="1.0" encoding="UTF-8"?>',
                       '<account name=%s>' % saxutils.quoteattr(account)]
        for (name, object_count, bytes_used, put_timestamp, is_subdir) \
                in account_list:
            if is_subdir:
                output_list.append(
                    '<subdir name=%s />' % saxutils.quoteattr(name))
            else:
                item = '<container><name>%s</name><count>%s</count>' \
                       '<bytes>%s</bytes><last_modified>%s</last_modified>' \
                       '</container>' % \
                       (saxutils.escape(name), object_count,
                        bytes_used, Timestamp(put_timestamp).isoformat)
                output_list.append(item)
        output_list.append('</account>')
        account_list = '\n'.join(output_list)
        ret = HTTPOk(body=account_list, request=req, headers=resp_headers)
    elif data:
        account_list = listing_formats.listing_to_text(data)
        ret = HTTPOk(body=account_list, request=req, headers=resp_headers)
    else:
        if not account_list:
            resp = HTTPNoContent(request=req, headers=resp_headers)
            resp.content_type = response_content_type
            resp.charset = 'utf-8'
            return resp
        account_list = '\n'.join(r[0] for r in account_list) + '\n'
        ret = HTTPOk(body=account_list, request=req, headers=resp_headers)
        ret = HTTPNoContent(request=req, headers=resp_headers)
    ret.content_type = response_content_type
    ret.charset = 'utf-8'
    return ret
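The refactor above builds one JSON-style `data` structure from the broker rows and then renders it per content type, instead of formatting each type from the raw rows. A self-contained sketch of that shape (hypothetical `account_listing` helper; `last_modified` and the XML branch are omitted for brevity) is:

```python
import json


def account_listing(rows, content_type):
    # rows mimics broker.list_containers_iter() output:
    # (name, object_count, bytes_used, put_timestamp, is_subdir).
    data = []
    for name, count, bytes_used, _ts, is_subdir in rows:
        if is_subdir:
            data.append({'subdir': name})
        else:
            data.append({'name': name, 'count': count, 'bytes': bytes_used})
    if content_type.endswith('/json'):
        return json.dumps(data)
    # Plain-text listings carry only the container/subdir names.
    return '\n'.join(d.get('name', d.get('subdir')) for d in data) + '\n'
```

Building `data` once is what lets `listing_formats.account_to_xml` and `listing_to_text` share a single code path in the real change.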
432
swift/cli/dispersion_report.py
Normal file
@ -0,0 +1,432 @@
#!/usr/bin/env python
# Copyright (c) 2010-2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from __future__ import print_function
import json
from collections import defaultdict
from six.moves.configparser import ConfigParser
from optparse import OptionParser
from sys import exit, stdout, stderr
from time import time

from eventlet import GreenPool, hubs, patcher, Timeout
from eventlet.pools import Pool

from swift.common import direct_client
try:
    from swiftclient import get_auth
except ImportError:
    from swift.common.internal_client import get_auth
from swift.common.internal_client import SimpleClient
from swift.common.ring import Ring
from swift.common.exceptions import ClientException
from swift.common.utils import compute_eta, get_time_units, config_true_value
from swift.common.storage_policy import POLICIES


unmounted = []
notfound = []
json_output = False
debug = False
insecure = False


def get_error_log(prefix):

    def error_log(msg_or_exc):
        global debug, unmounted, notfound
        if hasattr(msg_or_exc, 'http_status'):
            identifier = '%s:%s/%s' % (msg_or_exc.http_host,
                                       msg_or_exc.http_port,
                                       msg_or_exc.http_device)
            if msg_or_exc.http_status == 507:
                if identifier not in unmounted:
                    unmounted.append(identifier)
                    print('ERROR: %s is unmounted -- This will '
                          'cause replicas designated for that device to be '
                          'considered missing until resolved or the ring is '
                          'updated.' % (identifier), file=stderr)
                    stderr.flush()
            if debug and identifier not in notfound:
                notfound.append(identifier)
                print('ERROR: %s returned a 404' % (identifier), file=stderr)
                stderr.flush()
        if not hasattr(msg_or_exc, 'http_status') or \
                msg_or_exc.http_status not in (404, 507):
            print('ERROR: %s: %s' % (prefix, msg_or_exc), file=stderr)
            stderr.flush()
    return error_log


def container_dispersion_report(coropool, connpool, account, container_ring,
                                retries, output_missing_partitions, policy):
    with connpool.item() as conn:
        containers = [c['name'] for c in conn.get_account(
            prefix='dispersion_%d' % policy.idx, full_listing=True)[1]]
    containers_listed = len(containers)
    if not containers_listed:
        print('No containers to query. Has '
              'swift-dispersion-populate been run?', file=stderr)
        stderr.flush()
        return
    retries_done = [0]
    containers_queried = [0]
    container_copies_missing = defaultdict(int)
    container_copies_found = [0]
    container_copies_expected = [0]
    begun = time()
    next_report = [time() + 2]

    def direct(container, part, nodes):
        found_count = 0
        for node in nodes:
            error_log = get_error_log('%(ip)s:%(port)s/%(device)s' % node)
            try:
                attempts, _junk = direct_client.retry(
                    direct_client.direct_head_container, node, part, account,
                    container, error_log=error_log, retries=retries)
                retries_done[0] += attempts - 1
                found_count += 1
            except ClientException as err:
                if err.http_status not in (404, 507):
                    error_log('Giving up on /%s/%s/%s: %s' % (part, account,
                              container, err))
            except (Exception, Timeout) as err:
                error_log('Giving up on /%s/%s/%s: %s' % (part, account,
                          container, err))
        if output_missing_partitions and \
                found_count < len(nodes):
            missing = len(nodes) - found_count
            print('\r\x1B[K', end='')
            stdout.flush()
            print('# Container partition %s missing %s cop%s' % (
                part, missing, 'y' if missing == 1 else 'ies'), file=stderr)
        container_copies_found[0] += found_count
        containers_queried[0] += 1
        container_copies_missing[len(nodes) - found_count] += 1
        if time() >= next_report[0]:
            next_report[0] = time() + 5
            eta, eta_unit = compute_eta(begun, containers_queried[0],
                                        containers_listed)
            if not json_output:
                print('\r\x1B[KQuerying containers: %d of %d, %d%s left, %d '
                      'retries' % (containers_queried[0], containers_listed,
                                   round(eta), eta_unit, retries_done[0]),
                      end='')
                stdout.flush()

    container_parts = {}
    for container in containers:
        part, nodes = container_ring.get_nodes(account, container)
        if part not in container_parts:
            container_copies_expected[0] += len(nodes)
            container_parts[part] = part
            coropool.spawn(direct, container, part, nodes)
    coropool.waitall()
    distinct_partitions = len(container_parts)
    copies_found = container_copies_found[0]
    copies_expected = container_copies_expected[0]
    value = 100.0 * copies_found / copies_expected
    elapsed, elapsed_unit = get_time_units(time() - begun)
    container_copies_missing.pop(0, None)
    if not json_output:
        print('\r\x1B[KQueried %d containers for dispersion reporting, '
              '%d%s, %d retries' % (containers_listed, round(elapsed),
                                    elapsed_unit, retries_done[0]))
        if containers_listed - distinct_partitions:
            print('There were %d overlapping partitions' % (
                containers_listed - distinct_partitions))
        for missing_copies, num_parts in container_copies_missing.items():
            print(missing_string(num_parts, missing_copies,
                                 container_ring.replica_count))
        print('%.02f%% of container copies found (%d of %d)' % (
            value, copies_found, copies_expected))
        print('Sample represents %.02f%% of the container partition space' % (
            100.0 * distinct_partitions / container_ring.partition_count))
        stdout.flush()
        return None
    else:
        results = {'retries': retries_done[0],
                   'overlapping': containers_listed - distinct_partitions,
                   'pct_found': value,
                   'copies_found': copies_found,
                   'copies_expected': copies_expected}
        for missing_copies, num_parts in container_copies_missing.items():
            results['missing_%d' % (missing_copies)] = num_parts
        return results


def object_dispersion_report(coropool, connpool, account, object_ring,
                             retries, output_missing_partitions, policy):
    container = 'dispersion_objects_%d' % policy.idx
    with connpool.item() as conn:
        try:
            objects = [o['name'] for o in conn.get_container(
                container, prefix='dispersion_', full_listing=True)[1]]
        except ClientException as err:
            if err.http_status != 404:
                raise

            print('No objects to query. Has '
                  'swift-dispersion-populate been run?', file=stderr)
            stderr.flush()
            return
    objects_listed = len(objects)
    if not objects_listed:
        print('No objects to query. Has swift-dispersion-populate '
              'been run?', file=stderr)
        stderr.flush()
        return
    retries_done = [0]
    objects_queried = [0]
    object_copies_found = [0]
    object_copies_expected = [0]
    object_copies_missing = defaultdict(int)
    begun = time()
    next_report = [time() + 2]

    headers = None
    if policy is not None:
        headers = {}
        headers['X-Backend-Storage-Policy-Index'] = int(policy)

    def direct(obj, part, nodes):
        found_count = 0
        for node in nodes:
            error_log = get_error_log('%(ip)s:%(port)s/%(device)s' % node)
            try:
                attempts, _junk = direct_client.retry(
                    direct_client.direct_head_object, node, part, account,
                    container, obj, error_log=error_log, retries=retries,
                    headers=headers)
                retries_done[0] += attempts - 1
                found_count += 1
            except ClientException as err:
                if err.http_status not in (404, 507):
                    error_log('Giving up on /%s/%s/%s/%s: %s' % (part, account,
                              container, obj, err))
            except (Exception, Timeout) as err:
                error_log('Giving up on /%s/%s/%s/%s: %s' % (part, account,
                          container, obj, err))
        if output_missing_partitions and \
                found_count < len(nodes):
            missing = len(nodes) - found_count
            print('\r\x1B[K', end='')
            stdout.flush()
            print('# Object partition %s missing %s cop%s' % (
                part, missing, 'y' if missing == 1 else 'ies'), file=stderr)
        object_copies_found[0] += found_count
        object_copies_missing[len(nodes) - found_count] += 1
        objects_queried[0] += 1
        if time() >= next_report[0]:
            next_report[0] = time() + 5
            eta, eta_unit = compute_eta(begun, objects_queried[0],
                                        objects_listed)
            if not json_output:
                print('\r\x1B[KQuerying objects: %d of %d, %d%s left, %d '
                      'retries' % (objects_queried[0], objects_listed,
                                   round(eta), eta_unit, retries_done[0]),
                      end='')
                stdout.flush()

    object_parts = {}
    for obj in objects:
        part, nodes = object_ring.get_nodes(account, container, obj)
        if part not in object_parts:
            object_copies_expected[0] += len(nodes)
            object_parts[part] = part
            coropool.spawn(direct, obj, part, nodes)
    coropool.waitall()
    distinct_partitions = len(object_parts)
    copies_found = object_copies_found[0]
    copies_expected = object_copies_expected[0]
    value = 100.0 * copies_found / copies_expected
    elapsed, elapsed_unit = get_time_units(time() - begun)
    if not json_output:
        print('\r\x1B[KQueried %d objects for dispersion reporting, '
              '%d%s, %d retries' % (objects_listed, round(elapsed),
                                    elapsed_unit, retries_done[0]))
        if objects_listed - distinct_partitions:
            print('There were %d overlapping partitions' % (
                objects_listed - distinct_partitions))

        for missing_copies, num_parts in object_copies_missing.items():
            print(missing_string(num_parts, missing_copies,
                                 object_ring.replica_count))

        print('%.02f%% of object copies found (%d of %d)' %
              (value, copies_found, copies_expected))
        print('Sample represents %.02f%% of the object partition space' % (
            100.0 * distinct_partitions / object_ring.partition_count))
        stdout.flush()
        return None
    else:
        results = {'retries': retries_done[0],
                   'overlapping': objects_listed - distinct_partitions,
                   'pct_found': value,
                   'copies_found': copies_found,
                   'copies_expected': copies_expected}

        for missing_copies, num_parts in object_copies_missing.items():
            results['missing_%d' % (missing_copies,)] = num_parts
        return results


def missing_string(partition_count, missing_copies, copy_count):
    exclamations = ''
    missing_string = str(missing_copies)
    if missing_copies == copy_count:
||||
exclamations = '!!! '
|
||||
missing_string = 'all'
|
||||
elif copy_count - missing_copies == 1:
|
||||
exclamations = '! '
|
||||
|
||||
verb_string = 'was'
|
||||
partition_string = 'partition'
|
||||
if partition_count > 1:
|
||||
verb_string = 'were'
|
||||
partition_string = 'partitions'
|
||||
|
||||
copy_string = 'copies'
|
||||
if missing_copies == 1:
|
||||
copy_string = 'copy'
|
||||
|
||||
return '%sThere %s %d %s missing %s %s.' % (
|
||||
exclamations, verb_string, partition_count, partition_string,
|
||||
missing_string, copy_string
|
||||
)
|
||||
|
||||
|
||||
def main():
|
||||
patcher.monkey_patch()
|
||||
hubs.get_hub().debug_exceptions = False
|
||||
|
||||
conffile = '/etc/swift/dispersion.conf'
|
||||
|
||||
parser = OptionParser(usage='''
|
||||
Usage: %%prog [options] [conf_file]
|
||||
|
||||
[conf_file] defaults to %s'''.strip() % conffile)
|
||||
parser.add_option('-j', '--dump-json', action='store_true', default=False,
|
||||
help='dump dispersion report in json format')
|
||||
parser.add_option('-d', '--debug', action='store_true', default=False,
|
||||
help='print 404s to standard error')
|
||||
parser.add_option('-p', '--partitions', action='store_true', default=False,
|
||||
help='print missing partitions to standard error')
|
||||
parser.add_option('--container-only', action='store_true', default=False,
|
||||
help='Only run container report')
|
||||
parser.add_option('--object-only', action='store_true', default=False,
|
||||
help='Only run object report')
|
||||
parser.add_option('--insecure', action='store_true', default=False,
|
||||
help='Allow accessing insecure keystone server. '
|
||||
'The keystone\'s certificate will not be verified.')
|
||||
parser.add_option('-P', '--policy-name', dest='policy_name',
|
||||
help="Specify storage policy name")
|
||||
|
||||
options, args = parser.parse_args()
|
||||
if args:
|
||||
conffile = args.pop(0)
|
||||
|
||||
if options.debug:
|
||||
global debug
|
||||
debug = True
|
||||
|
||||
c = ConfigParser()
|
||||
if not c.read(conffile):
|
||||
exit('Unable to read config file: %s' % conffile)
|
||||
conf = dict(c.items('dispersion'))
|
||||
|
||||
if options.dump_json:
|
||||
conf['dump_json'] = 'yes'
|
||||
if options.object_only:
|
||||
conf['container_report'] = 'no'
|
||||
if options.container_only:
|
||||
conf['object_report'] = 'no'
|
||||
if options.insecure:
|
||||
conf['keystone_api_insecure'] = 'yes'
|
||||
if options.partitions:
|
||||
conf['partitions'] = 'yes'
|
||||
|
||||
output = generate_report(conf, options.policy_name)
|
||||
|
||||
if json_output:
|
||||
print(json.dumps(output))
|
||||
|
||||
|
||||
def generate_report(conf, policy_name=None):
|
||||
global json_output
|
||||
json_output = config_true_value(conf.get('dump_json', 'no'))
|
||||
if policy_name is None:
|
||||
policy = POLICIES.default
|
||||
else:
|
||||
policy = POLICIES.get_by_name(policy_name)
|
||||
if policy is None:
|
||||
exit('Unable to find policy: %s' % policy_name)
|
||||
if not json_output:
|
||||
print('Using storage policy: %s ' % policy.name)
|
||||
|
||||
swift_dir = conf.get('swift_dir', '/etc/swift')
|
||||
retries = int(conf.get('retries', 5))
|
||||
concurrency = int(conf.get('concurrency', 25))
|
||||
endpoint_type = str(conf.get('endpoint_type', 'publicURL'))
|
||||
region_name = str(conf.get('region_name', ''))
|
||||
container_report = config_true_value(conf.get('container_report', 'yes'))
|
||||
object_report = config_true_value(conf.get('object_report', 'yes'))
|
||||
if not (object_report or container_report):
|
||||
exit("Neither container or object report is set to run")
|
||||
user_domain_name = str(conf.get('user_domain_name', ''))
|
||||
project_domain_name = str(conf.get('project_domain_name', ''))
|
||||
project_name = str(conf.get('project_name', ''))
|
||||
insecure = config_true_value(conf.get('keystone_api_insecure', 'no'))
|
||||
|
||||
coropool = GreenPool(size=concurrency)
|
||||
|
||||
os_options = {'endpoint_type': endpoint_type}
|
||||
if user_domain_name:
|
||||
os_options['user_domain_name'] = user_domain_name
|
||||
if project_domain_name:
|
||||
os_options['project_domain_name'] = project_domain_name
|
||||
if project_name:
|
||||
os_options['project_name'] = project_name
|
||||
if region_name:
|
||||
os_options['region_name'] = region_name
|
||||
|
||||
url, token = get_auth(conf['auth_url'], conf['auth_user'],
|
||||
conf['auth_key'],
|
||||
auth_version=conf.get('auth_version', '1.0'),
|
||||
os_options=os_options,
|
||||
insecure=insecure)
|
||||
account = url.rsplit('/', 1)[1]
|
||||
connpool = Pool(max_size=concurrency)
|
||||
connpool.create = lambda: SimpleClient(
|
||||
url=url, token=token, retries=retries)
|
||||
|
||||
container_ring = Ring(swift_dir, ring_name='container')
|
||||
object_ring = Ring(swift_dir, ring_name=policy.ring_name)
|
||||
|
||||
output = {}
|
||||
if container_report:
|
||||
output['container'] = container_dispersion_report(
|
||||
coropool, connpool, account, container_ring, retries,
|
||||
conf.get('partitions'), policy)
|
||||
if object_report:
|
||||
output['object'] = object_dispersion_report(
|
||||
coropool, connpool, account, object_ring, retries,
|
||||
conf.get('partitions'), policy)
|
||||
|
||||
return output
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
main()
|
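The dispersion report above fans its per-object checks out over a GreenPool and accumulates results in one-element lists (`retries_done = [0]`, `objects_queried = [0]`, and so on), since Python 2 has no `nonlocal` and the nested `direct()` worker must update tallies in the enclosing scope. A minimal standalone sketch of that closure-counter pattern, using a stdlib thread pool in place of eventlet; `tally_copies` and its inputs are illustrative, not part of the diff:

```python
# Sketch of the one-element-list counter pattern from the report code:
# a mutable container lets a nested worker update a shared tally.
from concurrent.futures import ThreadPoolExecutor  # stand-in for eventlet's GreenPool


def tally_copies(object_names, copies_per_object=3):
    copies_found = [0]  # shared mutable cell, as in retries_done = [0]

    def head_one(name):
        # A real worker would HEAD each replica; here we just assume
        # copies_per_object reachable copies per object.
        copies_found[0] += copies_per_object

    with ThreadPoolExecutor(max_workers=4) as pool:
        for name in object_names:
            pool.submit(head_one, name)
    # leaving the with-block waits for all workers, like coropool.waitall()
    return copies_found[0]
```

The same effect could be had with `nonlocal` on Python 3; the list form is what works on both interpreters the code targeted.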
@@ -15,6 +15,7 @@

import functools
import os
from os.path import isdir  # tighter scoped import for mocking
import time

import six
@@ -104,11 +105,6 @@ reload_constraints()
MAX_BUFFERED_SLO_SEGMENTS = 10000


#: Query string format= values to their corresponding content-type values
FORMAT2CONTENT_TYPE = {'plain': 'text/plain', 'json': 'application/json',
                       'xml': 'application/xml'}


# By default the maximum number of allowed headers depends on the number of max
# allowed metadata settings plus a default value of 36 for swift internally
# generated headers and regular http headers. If for some reason this is not
@@ -234,9 +230,9 @@ def check_dir(root, drive):

    :param root: base path where the dir is
    :param drive: drive name to be checked
    :returns: True if it is a valid directoy, False otherwise
    :returns: full path to the device, or None if drive fails to validate
    """
    return os.path.isdir(os.path.join(root, drive))
    return check_drive(root, drive, False)


def check_mount(root, drive):
@@ -248,12 +244,31 @@ def check_mount(root, drive):

    :param root: base path where the devices are mounted
    :param drive: drive name to be checked
    :returns: True if it is a valid mounted device, False otherwise
    :returns: full path to the device, or None if drive fails to validate
    """
    return check_drive(root, drive, True)


def check_drive(root, drive, mount_check):
    """
    Validate the path given by root and drive is a valid existing directory.

    :param root: base path where the devices are mounted
    :param drive: drive name to be checked
    :param mount_check: additionally require path is mounted

    :returns: full path to the device, or None if drive fails to validate
    """
    if not (urllib.parse.quote_plus(drive) == drive):
        return False
        return None
    path = os.path.join(root, drive)
    return utils.ismount(path)
    if mount_check:
        if utils.ismount(path):
            return path
    else:
        if isdir(path):
            return path
    return None


def check_float(string):
@@ -225,9 +225,10 @@ class Replicator(Daemon):
                           'replication_last': now},
                          self.rcache, self.logger)
        self.logger.info(' '.join(['%s:%s' % item for item in
                                   self.stats.items() if item[0] in
                                   sorted(self.stats.items()) if item[0] in
                                   ('no_change', 'hashmatch', 'rsync', 'diff',
                                    'ts_repl', 'empty', 'diff_capped')]))
                                    'ts_repl', 'empty', 'diff_capped',
                                    'remote_merge')]))

    def _add_failure_stats(self, failure_devs_info):
        for node, dev in failure_devs_info:
@@ -88,23 +88,24 @@ def _get_direct_account_container(path, stype, node, part,
    Do not use directly; use get_direct_account or
    get_direct_container instead.
    """
    qs = 'format=json'
    params = ['format=json']
    if marker:
        qs += '&marker=%s' % quote(marker)
        params.append('marker=%s' % quote(marker))
    if limit:
        qs += '&limit=%d' % limit
        params.append('limit=%d' % limit)
    if prefix:
        qs += '&prefix=%s' % quote(prefix)
        params.append('prefix=%s' % quote(prefix))
    if delimiter:
        qs += '&delimiter=%s' % quote(delimiter)
        params.append('delimiter=%s' % quote(delimiter))
    if end_marker:
        qs += '&end_marker=%s' % quote(end_marker)
        params.append('end_marker=%s' % quote(end_marker))
    if reverse:
        qs += '&reverse=%s' % quote(reverse)
        params.append('reverse=%s' % quote(reverse))
    if items:
        if stype.lower() != 'container':
            raise ValueError("'items' is not valid for %s server" % stype)
        qs += '&items=%s' % quote(items)
        params.append('items=%s' % quote(items))
    qs = '&'.join(params)
    with Timeout(conn_timeout):
        conn = http_connect(node['ip'], node['port'], node['device'], part,
                            'GET', path, query_string=qs,
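The `_get_direct_account_container` hunk above replaces repeated string concatenation with a list of parameters joined once, which keeps each condition a single `append` and avoids rebuilding the string each time. A small standalone sketch of the pattern the diff switches to (`build_query` and its arguments are illustrative, not from the Swift code):

```python
from urllib.parse import quote


def build_query(marker=None, limit=None, prefix=None):
    # Collect each parameter, then join once -- the approach
    # the diff adopts in place of repeated qs += '&...'.
    params = ['format=json']
    if marker:
        params.append('marker=%s' % quote(marker))
    if limit:
        params.append('limit=%d' % limit)
    if prefix:
        params.append('prefix=%s' % quote(prefix))
    return '&'.join(params)
```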
@@ -772,12 +772,14 @@ class SimpleClient(object):
        if name:
            url = '%s/%s' % (url.rstrip('/'), quote(name))
        else:
            url += '?format=json'
            params = ['format=json']
            if prefix:
                url += '&prefix=%s' % prefix
                params.append('prefix=%s' % prefix)

            if marker:
                url += '&marker=%s' % quote(marker)
                params.append('marker=%s' % quote(marker))

            url += '?' + '&'.join(params)

        req = urllib2.Request(url, headers=headers, data=contents)
        if proxy:
@@ -112,20 +112,6 @@ If a request is sent without the query parameter, an attempt will be made to
copy the whole object but will fail if the object size is
greater than 5GB.

-------------------
Object Post as Copy
-------------------
Historically, this has been a feature (and a configurable option with default
set to True) in proxy server configuration. This has been moved to server side
copy middleware and the default changed to False.

When ``object_post_as_copy`` is set to ``true``, an incoming POST request is
morphed into a COPY request where source and destination objects are same.

This feature was necessary because of a previous behavior where POSTS would
update the metadata on the object but not on the container. As a result,
features like container sync would not work correctly. This is no longer the
case and this option is now deprecated. It will be removed in a future release.
"""

import os
@@ -137,8 +123,7 @@ from swift.common.utils import get_logger, \
    config_true_value, FileLikeIter, read_conf_dir, close_if_possible
from swift.common.swob import Request, HTTPPreconditionFailed, \
    HTTPRequestEntityTooLarge, HTTPBadRequest, HTTPException
from swift.common.http import HTTP_MULTIPLE_CHOICES, HTTP_CREATED, \
    is_success, HTTP_OK
from swift.common.http import HTTP_MULTIPLE_CHOICES, is_success, HTTP_OK
from swift.common.constraints import check_account_format, MAX_FILE_SIZE
from swift.common.request_helpers import copy_header_subset, remove_items, \
    is_sys_meta, is_sys_or_user_meta, is_object_transient_sysmeta
@@ -238,13 +223,7 @@ class ServerSideCopyWebContext(WSGIContext):
        return app_resp

    def _adjust_put_response(self, req, additional_resp_headers):
        if 'swift.post_as_copy' in req.environ:
            # Older editions returned 202 Accepted on object POSTs, so we'll
            # convert any 201 Created responses to that for compatibility with
            # picky clients.
            if self._get_status_int() == HTTP_CREATED:
                self._response_status = '202 Accepted'
        elif is_success(self._get_status_int()):
        if is_success(self._get_status_int()):
            for header, value in additional_resp_headers.items():
                self._response_headers.append((header, value))

@@ -269,17 +248,12 @@ class ServerSideCopyMiddleware(object):
    def __init__(self, app, conf):
        self.app = app
        self.logger = get_logger(conf, log_route="copy")
        # Read the old object_post_as_copy option from Proxy app just in case
        # someone has set it to false (non default). This wouldn't cause
        # problems during upgrade.
        self._load_object_post_as_copy_conf(conf)
        self.object_post_as_copy = \
            config_true_value(conf.get('object_post_as_copy', 'false'))
        if self.object_post_as_copy:
            msg = ('object_post_as_copy=true is deprecated; remove all '
                   'references to it from %s to disable this warning. This '
                   'option will be ignored in a future release' % conf.get(
                       '__file__', 'proxy-server.conf'))
            msg = ('object_post_as_copy=true is deprecated; This '
                   'option is now ignored')
            self.logger.warning(msg)

    def _load_object_post_as_copy_conf(self, conf):
@@ -330,9 +304,6 @@ class ServerSideCopyMiddleware(object):
        elif req.method == 'COPY':
            req.environ['swift.orig_req_method'] = req.method
            return self.handle_COPY(req, start_response)
        elif req.method == 'POST' and self.object_post_as_copy:
            req.environ['swift.orig_req_method'] = req.method
            return self.handle_object_post_as_copy(req, start_response)
        elif req.method == 'OPTIONS':
            # Does not interfere with OPTIONS response from
            # (account,container) servers and /info response.
@@ -343,21 +314,6 @@ class ServerSideCopyMiddleware(object):

        return self.app(env, start_response)

    def handle_object_post_as_copy(self, req, start_response):
        req.method = 'PUT'
        req.path_info = '/v1/%s/%s/%s' % (
            self.account_name, self.container_name, self.object_name)
        req.headers['Content-Length'] = 0
        req.headers.pop('Range', None)
        req.headers['X-Copy-From'] = quote('/%s/%s' % (self.container_name,
                                                       self.object_name))
        req.environ['swift.post_as_copy'] = True
        params = req.params
        # for post-as-copy always copy the manifest itself if source is *LO
        params['multipart-manifest'] = 'get'
        req.params = params
        return self.handle_PUT(req, start_response)

    def handle_COPY(self, req, start_response):
        if not req.headers.get('Destination'):
            return HTTPPreconditionFailed(request=req,
@@ -394,11 +350,6 @@ class ServerSideCopyMiddleware(object):
        source_req.headers.pop('X-Backend-Storage-Policy-Index', None)
        source_req.path_info = quote(source_path)
        source_req.headers['X-Newest'] = 'true'
        if 'swift.post_as_copy' in req.environ:
            # We're COPYing one object over itself because of a POST; rely on
            # the PUT for write authorization, don't require read authorization
            source_req.environ['swift.authorize'] = lambda req: None
            source_req.environ['swift.authorize_override'] = True

        # in case we are copying an SLO manifest, set format=raw parameter
        params = source_req.params
@@ -470,11 +421,7 @@ class ServerSideCopyMiddleware(object):
        def is_object_sysmeta(k):
            return is_sys_meta('object', k)

        if 'swift.post_as_copy' in sink_req.environ:
            # Post-as-copy: ignore new sysmeta, copy existing sysmeta
            remove_items(sink_req.headers, is_object_sysmeta)
            copy_header_subset(source_resp, sink_req, is_object_sysmeta)
        elif config_true_value(req.headers.get('x-fresh-metadata', 'false')):
        if config_true_value(req.headers.get('x-fresh-metadata', 'false')):
            # x-fresh-metadata only applies to copy, not post-as-copy: ignore
            # existing user metadata, update existing sysmeta with new
            copy_header_subset(source_resp, sink_req, is_object_sysmeta)
@@ -497,9 +444,8 @@ class ServerSideCopyMiddleware(object):
            params['multipart-manifest'] = 'put'
            if 'X-Object-Manifest' in source_resp.headers:
                del params['multipart-manifest']
                if 'swift.post_as_copy' not in sink_req.environ:
                    sink_req.headers['X-Object-Manifest'] = \
                        source_resp.headers['X-Object-Manifest']
                sink_req.headers['X-Object-Manifest'] = \
                    source_resp.headers['X-Object-Manifest']
        sink_req.params = params

        # Set swift.source, data source, content length and etag
@@ -15,7 +15,6 @@

import base64
import json
import xml.etree.cElementTree as ElementTree

from swift import gettext_ as _
from swift.common.http import is_success
@@ -23,7 +22,7 @@ from swift.common.middleware.crypto.crypto_utils import CryptoWSGIContext, \
    load_crypto_meta, extract_crypto_meta, Crypto
from swift.common.exceptions import EncryptionException
from swift.common.request_helpers import get_object_transient_sysmeta, \
    get_listing_content_type, get_sys_meta_prefix, get_user_meta_prefix
    get_sys_meta_prefix, get_user_meta_prefix
from swift.common.swob import Request, HTTPException, HTTPInternalServerError
from swift.common.utils import get_logger, config_true_value, \
    parse_content_range, closing_if_possible, parse_content_type, \
@@ -352,15 +351,12 @@ class DecrypterContContext(BaseDecrypterContext):

        if is_success(self._get_status_int()):
            # only decrypt body of 2xx responses
            out_content_type = get_listing_content_type(req)
            if out_content_type == 'application/json':
                handler = self.process_json_resp
                keys = self.get_decryption_keys(req)
            elif out_content_type.endswith('/xml'):
                handler = self.process_xml_resp
                keys = self.get_decryption_keys(req)
            else:
                handler = keys = None
            handler = keys = None
            for header, value in self._response_headers:
                if header.lower() == 'content-type' and \
                        value.split(';', 1)[0] == 'application/json':
                    handler = self.process_json_resp
                    keys = self.get_decryption_keys(req)

            if handler and keys:
                try:
@@ -398,24 +394,6 @@ class DecrypterContContext(BaseDecrypterContext):
            obj_dict['hash'] = self.decrypt_value_with_meta(ciphertext, key)
        return obj_dict

    def process_xml_resp(self, key, resp_iter):
        """
        Parses xml body listing and decrypt encrypted entries. Updates
        Content-Length header with new body length and return a body iter.
        """
        with closing_if_possible(resp_iter):
            resp_body = ''.join(resp_iter)
        tree = ElementTree.fromstring(resp_body)
        for elem in tree.iter('hash'):
            ciphertext = elem.text.encode('utf8')
            plain = self.decrypt_value_with_meta(ciphertext, key)
            elem.text = plain.decode('utf8')
        new_body = ElementTree.tostring(tree, encoding='UTF-8').replace(
            "<?xml version='1.0' encoding='UTF-8'?>",
            '<?xml version="1.0" encoding="UTF-8"?>', 1)
        self.update_content_length(len(new_body))
        return [new_body]


class Decrypter(object):
    """Middleware for decrypting data and user metadata."""
@@ -151,7 +151,7 @@ class GetContext(WSGIContext):
            method='GET',
            headers={'x-auth-token': req.headers.get('x-auth-token')},
            agent=('%(orig)s ' + 'DLO MultipartGET'), swift_source='DLO')
        con_req.query_string = 'format=json&prefix=%s' % quote(prefix)
        con_req.query_string = 'prefix=%s' % quote(prefix)
        if marker:
            con_req.query_string += '&marker=%s' % quote(marker)

@@ -36,6 +36,7 @@ from swift.common.utils import get_logger, config_true_value
from swift.common.request_helpers import (
    remove_items, get_sys_meta_prefix, OBJECT_TRANSIENT_SYSMETA_PREFIX
)
from six.moves.urllib.parse import urlsplit
import re

#: A list of python regular expressions that will be used to
@@ -89,9 +90,29 @@ class GatekeeperMiddleware(object):
                                       [('X-Timestamp', ts)])

        def gatekeeper_response(status, response_headers, exc_info=None):
            def fixed_response_headers():
                def relative_path(value):
                    parsed = urlsplit(v)
                    new_path = parsed.path
                    if parsed.query:
                        new_path += ('?%s' % parsed.query)
                    if parsed.fragment:
                        new_path += ('#%s' % parsed.fragment)
                    return new_path

                if not env.get('swift.leave_relative_location'):
                    return response_headers
                else:
                    return [
                        (k, v) if k.lower() != 'location' else
                        (k, relative_path(v)) for (k, v) in response_headers
                    ]

            response_headers = fixed_response_headers()
            removed = filter(
                lambda h: self.outbound_condition(h[0]),
                response_headers)

            if removed:
                self.logger.debug('removed response headers: %s' % removed)
            new_headers = filter(
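The gatekeeper hunk above rewrites an absolute `Location` response header into a relative path, keeping only the path, query, and fragment so clients stay on whatever scheme and host they used to reach the proxy. A self-contained sketch of that `urlsplit`-based rewrite; `relative_location` is an illustrative name, not the middleware's:

```python
from urllib.parse import urlsplit


def relative_location(value):
    # Drop scheme and netloc; keep path, query and fragment.
    parsed = urlsplit(value)
    new_path = parsed.path
    if parsed.query:
        new_path += '?%s' % parsed.query
    if parsed.fragment:
        new_path += '#%s' % parsed.fragment
    return new_path
```

An already-relative value has no scheme or netloc to strip, so it passes through unchanged.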
211
swift/common/middleware/listing_formats.py
Normal file
@@ -0,0 +1,211 @@
# Copyright (c) 2017 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import json
import six
from xml.etree.cElementTree import Element, SubElement, tostring

from swift.common.constraints import valid_api_version
from swift.common.http import HTTP_NO_CONTENT
from swift.common.request_helpers import get_param
from swift.common.swob import HTTPException, HTTPNotAcceptable, Request, \
    RESPONSE_REASONS


#: Mapping of query string ``format=`` values to their corresponding
#: content-type values.
FORMAT2CONTENT_TYPE = {'plain': 'text/plain', 'json': 'application/json',
                       'xml': 'application/xml'}
#: Maximum size of a valid JSON container listing body. If we receive
#: a container listing response larger than this, assume it's a staticweb
#: response and pass it on to the client.
# Default max object length is 1024, default container listing limit is 1e4;
# add a fudge factor for things like hash, last_modified, etc.
MAX_CONTAINER_LISTING_CONTENT_LENGTH = 1024 * 10000 * 2


def get_listing_content_type(req):
    """
    Determine the content type to use for an account or container listing
    response.

    :param req: request object
    :returns: content type as a string (e.g. text/plain, application/json)
    :raises HTTPNotAcceptable: if the requested content type is not acceptable
    :raises HTTPBadRequest: if the 'format' query param is provided and
             not valid UTF-8
    """
    query_format = get_param(req, 'format')
    if query_format:
        req.accept = FORMAT2CONTENT_TYPE.get(
            query_format.lower(), FORMAT2CONTENT_TYPE['plain'])
    out_content_type = req.accept.best_match(
        ['text/plain', 'application/json', 'application/xml', 'text/xml'])
    if not out_content_type:
        raise HTTPNotAcceptable(request=req)
    return out_content_type


def account_to_xml(listing, account_name):
    doc = Element('account', name=account_name.decode('utf-8'))
    doc.text = '\n'
    for record in listing:
        if 'subdir' in record:
            name = record.pop('subdir')
            sub = SubElement(doc, 'subdir', name=name)
        else:
            sub = SubElement(doc, 'container')
            for field in ('name', 'count', 'bytes', 'last_modified'):
                SubElement(sub, field).text = six.text_type(
                    record.pop(field))
        sub.tail = '\n'
    return tostring(doc, encoding='UTF-8').replace(
        "<?xml version='1.0' encoding='UTF-8'?>",
        '<?xml version="1.0" encoding="UTF-8"?>', 1)


def container_to_xml(listing, base_name):
    doc = Element('container', name=base_name.decode('utf-8'))
    for record in listing:
        if 'subdir' in record:
            name = record.pop('subdir')
            sub = SubElement(doc, 'subdir', name=name)
            SubElement(sub, 'name').text = name
        else:
            sub = SubElement(doc, 'object')
            for field in ('name', 'hash', 'bytes', 'content_type',
                          'last_modified'):
                SubElement(sub, field).text = six.text_type(
                    record.pop(field))
    return tostring(doc, encoding='UTF-8').replace(
        "<?xml version='1.0' encoding='UTF-8'?>",
        '<?xml version="1.0" encoding="UTF-8"?>', 1)


def listing_to_text(listing):
    def get_lines():
        for item in listing:
            if 'name' in item:
                yield item['name'].encode('utf-8') + b'\n'
            else:
                yield item['subdir'].encode('utf-8') + b'\n'
    return b''.join(get_lines())


class ListingFilter(object):
    def __init__(self, app):
        self.app = app

    def __call__(self, env, start_response):
        req = Request(env)
        try:
            # account and container only
            version, acct, cont = req.split_path(2, 3)
        except ValueError:
            return self.app(env, start_response)

        if not valid_api_version(version) or req.method not in ('GET', 'HEAD'):
            return self.app(env, start_response)

        # OK, definitely have an account/container request.
        # Get the desired content-type, then force it to a JSON request.
        try:
            out_content_type = get_listing_content_type(req)
        except HTTPException as err:
            return err(env, start_response)

        params = req.params
        params['format'] = 'json'
        req.params = params

        status, headers, resp_iter = req.call_application(self.app)

        header_to_index = {}
        resp_content_type = resp_length = None
        for i, (header, value) in enumerate(headers):
            header = header.lower()
            if header == 'content-type':
                header_to_index[header] = i
                resp_content_type = value.partition(';')[0]
            elif header == 'content-length':
                header_to_index[header] = i
                resp_length = int(value)

        if not status.startswith('200 '):
            start_response(status, headers)
            return resp_iter

        if resp_content_type != 'application/json':
            start_response(status, headers)
            return resp_iter

        if resp_length is None or \
                resp_length > MAX_CONTAINER_LISTING_CONTENT_LENGTH:
            start_response(status, headers)
            return resp_iter

        def set_header(header, value):
            if value is None:
                del headers[header_to_index[header]]
            else:
                headers[header_to_index[header]] = (
                    headers[header_to_index[header]][0], str(value))

        if req.method == 'HEAD':
            set_header('content-type', out_content_type + '; charset=utf-8')
            set_header('content-length', None)  # don't know, can't determine
            start_response(status, headers)
            return resp_iter

        body = b''.join(resp_iter)
        try:
            listing = json.loads(body)
            # Do a couple sanity checks
            if not isinstance(listing, list):
                raise ValueError
            if not all(isinstance(item, dict) for item in listing):
                raise ValueError
        except ValueError:
            # Static web listing that's returning invalid JSON?
            # Just pass it straight through; that's about all we *can* do.
            start_response(status, headers)
            return [body]

        try:
            if out_content_type.endswith('/xml'):
                if cont:
                    body = container_to_xml(listing, cont)
                else:
                    body = account_to_xml(listing, acct)
            elif out_content_type == 'text/plain':
                body = listing_to_text(listing)
            # else, json -- we continue down here to be sure we set charset
        except KeyError:
            # listing was in a bad format -- funky static web listing??
            start_response(status, headers)
            return [body]

        if not body:
            status = '%s %s' % (HTTP_NO_CONTENT,
                                RESPONSE_REASONS[HTTP_NO_CONTENT][0])

        set_header('content-type', out_content_type + '; charset=utf-8')
        set_header('content-length', len(body))
        start_response(status, headers)
        return [body]


def filter_factory(global_conf, **local_conf):
    return ListingFilter
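The new `listing_formats` middleware always requests JSON from the backend and renders the client's chosen format at the proxy, which is why the other hunks in this commit drop their hard-coded `format=json` query parameters. A simplified, text-only sketch of the conversion it performs (plain `str` instead of the middleware's bytes; the sample listing is invented):

```python
import json


def listing_to_text(listing):
    # One object name (or subdir placeholder) per line, as the
    # middleware's plain-text renderer does.
    def get_lines():
        for item in listing:
            if 'name' in item:
                yield item['name'] + '\n'
            else:
                yield item['subdir'] + '\n'
    return ''.join(get_lines())


# A hypothetical JSON container listing as a backend might return it.
backend_body = json.dumps([{'name': 'obj1', 'bytes': 4},
                           {'subdir': 'photos/'}])
text_body = listing_to_text(json.loads(backend_body))
```

Keeping one canonical backend format means the XML and plain-text renderers live in exactly one place instead of in every middleware that issues listings.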
@@ -209,7 +209,7 @@ class ReconMiddleware(object):
                continue

            try:
                mounted = check_mount(self.devices, entry)
                mounted = bool(check_mount(self.devices, entry))
            except OSError as err:
                mounted = str(err)
            mpoint = {'device': entry, 'mounted': mounted}
@@ -225,7 +225,7 @@ class ReconMiddleware(object):
                continue

            try:
                mounted = check_mount(self.devices, entry)
                mounted = bool(check_mount(self.devices, entry))
            except OSError as err:
                devices.append({'device': entry, 'mounted': str(err),
                                'size': '', 'used': '', 'avail': ''})
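Both recon hunks wrap `check_mount` in `bool()`. The mount status ends up in a JSON recon report, so a non-bool truthy return value would otherwise leak through into the serialized output. A standalone sketch of the pattern (the wrapper name is hypothetical, not Swift's API):

```python
import json

def device_mount_status(check_mount, devices, entries):
    # Hypothetical wrapper showing the pattern above: bool() keeps any
    # non-bool truthy return from check_mount() out of the JSON report.
    results = []
    for entry in entries:
        try:
            mounted = bool(check_mount(devices, entry))
        except OSError as err:
            mounted = str(err)  # report the error text instead of a bool
        results.append({'device': entry, 'mounted': mounted})
    return json.dumps(results)
```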
@@ -260,7 +260,7 @@ class _StaticWebContext(WSGIContext):
            env, 'GET', '/%s/%s/%s' % (
                self.version, self.account, self.container),
            self.agent, swift_source='SW')
        tmp_env['QUERY_STRING'] = 'delimiter=/&format=json'
        tmp_env['QUERY_STRING'] = 'delimiter=/'
        if prefix:
            tmp_env['QUERY_STRING'] += '&prefix=%s' % quote(prefix)
        else:
@@ -465,8 +465,8 @@ class _StaticWebContext(WSGIContext):
            env, 'GET', '/%s/%s/%s' % (
                self.version, self.account, self.container),
            self.agent, swift_source='SW')
        tmp_env['QUERY_STRING'] = 'limit=1&format=json&delimiter' \
            '=/&limit=1&prefix=%s' % quote(self.obj + '/')
        tmp_env['QUERY_STRING'] = 'limit=1&delimiter=/&prefix=%s' % (
            quote(self.obj + '/'), )
        resp = self._app_call(tmp_env)
        body = ''.join(resp)
        if not is_success(self._get_status_int()) or not body or \
@@ -329,8 +329,7 @@ class VersionedWritesContext(WSGIContext):
                env, method='GET', swift_source='VW',
                path='/v1/%s/%s' % (account_name, lcontainer))
            lreq.environ['QUERY_STRING'] = \
                'format=json&prefix=%s&marker=%s' % (
                    quote(lprefix), quote(marker))
                'prefix=%s&marker=%s' % (quote(lprefix), quote(marker))
            if end_marker:
                lreq.environ['QUERY_STRING'] += '&end_marker=%s' % (
                    quote(end_marker))
@@ -826,8 +825,7 @@ class VersionedWritesMiddleware(object):
                    allow_versioned_writes)
            except HTTPException as error_response:
                return error_response(env, start_response)
        elif (obj and req.method in ('PUT', 'DELETE') and
                not req.environ.get('swift.post_as_copy')):
        elif (obj and req.method in ('PUT', 'DELETE')):
            try:
                return self.object_request(
                    req, api_version, account, container, obj,
@@ -34,10 +34,9 @@ from swift.common.header_key_dict import HeaderKeyDict

from swift import gettext_ as _
from swift.common.storage_policy import POLICIES
from swift.common.constraints import FORMAT2CONTENT_TYPE
from swift.common.exceptions import ListingIterError, SegmentError
from swift.common.http import is_success
from swift.common.swob import HTTPBadRequest, HTTPNotAcceptable, \
from swift.common.swob import HTTPBadRequest, \
    HTTPServiceUnavailable, Range, is_chunked, Response, HTTPNoContent, \
    multi_range_iterator
from swift.common.utils import split_path, validate_device_partition, \
@@ -75,28 +74,6 @@ def get_param(req, name, default=None):
    return value


def get_listing_content_type(req):
    """
    Determine the content type to use for an account or container listing
    response.

    :param req: request object
    :returns: content type as a string (e.g. text/plain, application/json)
    :raises HTTPNotAcceptable: if the requested content type is not acceptable
    :raises HTTPBadRequest: if the 'format' query param is provided and
            not valid UTF-8
    """
    query_format = get_param(req, 'format')
    if query_format:
        req.accept = FORMAT2CONTENT_TYPE.get(
            query_format.lower(), FORMAT2CONTENT_TYPE['plain'])
    out_content_type = req.accept.best_match(
        ['text/plain', 'application/json', 'application/xml', 'text/xml'])
    if not out_content_type:
        raise HTTPNotAcceptable(request=req)
    return out_content_type
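The function being removed here shows the negotiation rule: a `format` query parameter overrides the Accept header by rewriting it from `FORMAT2CONTENT_TYPE`, with unknown formats degrading to plain text. A simplified stand-in for that rule, skipping swob's full Accept matching (the crude `accept` handling below is an assumption, not Swift's behavior):

```python
FORMAT2CONTENT_TYPE = {'plain': 'text/plain', 'json': 'application/json',
                       'xml': 'application/xml'}

def listing_content_type(query_format=None, accept='text/plain'):
    # 'format=' wins over the Accept header; unknown values fall back to plain
    if query_format:
        return FORMAT2CONTENT_TYPE.get(query_format.lower(),
                                       FORMAT2CONTENT_TYPE['plain'])
    # crude Accept handling: honor it only if it names a listing type
    return accept if accept in FORMAT2CONTENT_TYPE.values() else None
```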


def get_name_and_placement(request, minsegs=1, maxsegs=None,
                           rest_with_last=False):
    """
@@ -30,17 +30,18 @@ from swift.container.backend import ContainerBroker, DATADIR, \
from swift.container.replicator import ContainerReplicatorRpc
from swift.common.db import DatabaseAlreadyExists
from swift.common.container_sync_realms import ContainerSyncRealms
from swift.common.request_helpers import get_param, get_listing_content_type, \
from swift.common.request_helpers import get_param, \
    split_and_validate_path, is_sys_or_user_meta, create_container_listing
from swift.common.utils import get_logger, hash_path, public, \
    Timestamp, storage_directory, validate_sync_to, \
    config_true_value, timing_stats, replication, \
    get_log_line, find_shard_range, account_to_shard_account, whataremyips
from swift.common.constraints import check_mount, valid_timestamp, check_utf8
from swift.common.constraints import valid_timestamp, check_utf8, check_drive
from swift.common import constraints
from swift.common.bufferedhttp import http_connect
from swift.common.exceptions import ConnectionTimeout
from swift.common.http import HTTP_NOT_FOUND, is_success
from swift.common.middleware import listing_formats
from swift.common.storage_policy import POLICIES
from swift.common.base_storage_server import BaseStorageServer
from swift.common.header_key_dict import HeaderKeyDict
@@ -296,7 +297,7 @@ class ContainerController(BaseStorageServer):
        drive, part, account, container, obj = split_and_validate_path(
            req, 4, 5, True)
        req_timestamp = valid_timestamp(req)
        if self.mount_check and not check_mount(self.root, drive):
        if not check_drive(self.root, drive, self.mount_check):
            return HTTPInsufficientStorage(drive=drive, request=req)
        # policy index is only relevant for delete_obj (and transitively for
        # auto create accounts)
@@ -410,7 +411,7 @@ class ContainerController(BaseStorageServer):
                                     self.realms_conf)
            if err:
                return HTTPBadRequest(err)
        if self.mount_check and not check_mount(self.root, drive):
        if not check_drive(self.root, drive, self.mount_check):
            return HTTPInsufficientStorage(drive=drive, request=req)
        requested_policy_index = self.get_and_validate_policy_index(req)
        broker = self._get_container_broker(drive, part, account, container)
@@ -497,8 +498,8 @@ class ContainerController(BaseStorageServer):
        """Handle HTTP HEAD request."""
        drive, part, account, container, obj = split_and_validate_path(
            req, 4, 5, True)
        out_content_type = get_listing_content_type(req)
        if self.mount_check and not check_mount(self.root, drive):
        out_content_type = listing_formats.get_listing_content_type(req)
        if not check_drive(self.root, drive, self.mount_check):
            return HTTPInsufficientStorage(drive=drive, request=req)
        broker = self._get_container_broker(drive, part, account, container,
                                            pending_timeout=0.1,
@@ -600,8 +601,8 @@ class ContainerController(BaseStorageServer):
                request=req,
                body='Maximum limit is %d'
                     % constraints.CONTAINER_LISTING_LIMIT)
        out_content_type = get_listing_content_type(req)
        if self.mount_check and not check_mount(self.root, drive):
        out_content_type = listing_formats.get_listing_content_type(req)
        if not check_drive(self.root, drive, self.mount_check):
            return HTTPInsufficientStorage(drive=drive, request=req)
        broker = self._get_container_broker(drive, part, account, container,
                                            pending_timeout=0.1,
@@ -660,7 +661,7 @@ class ContainerController(BaseStorageServer):
        """
        post_args = split_and_validate_path(req, 3)
        drive, partition, hash = post_args
        if self.mount_check and not check_mount(self.root, drive):
        if not check_drive(self.root, drive, self.mount_check):
            return HTTPInsufficientStorage(drive=drive, request=req)
        try:
            args = json.load(req.environ['wsgi.input'])
@@ -682,7 +683,7 @@ class ContainerController(BaseStorageServer):
                                     self.realms_conf)
            if err:
                return HTTPBadRequest(err)
        if self.mount_check and not check_mount(self.root, drive):
        if not check_drive(self.root, drive, self.mount_check):
            return HTTPInsufficientStorage(drive=drive, request=req)
        broker = self._get_container_broker(drive, part, account, container)
        if broker.is_deleted():
@@ -57,7 +57,7 @@ from pyeclib.ec_iface import ECDriverError, ECInvalidFragmentMetadata, \
    ECBadFragmentChecksum, ECInvalidParameter

from swift import gettext_ as _
from swift.common.constraints import check_mount, check_dir
from swift.common.constraints import check_drive
from swift.common.request_helpers import is_sys_meta
from swift.common.utils import mkdirs, Timestamp, \
    storage_directory, hash_path, renamer, fallocate, fsync, fdatasync, \
@@ -1191,12 +1191,11 @@ class BaseDiskFileManager(object):
        # we'll do some kind of check unless explicitly forbidden
        if mount_check is not False:
            if mount_check or self.mount_check:
                check = check_mount
                mount_check = True
            else:
                check = check_dir
            if not check(self.devices, device):
                return None
            return os.path.join(self.devices, device)
                mount_check = False
            return check_drive(self.devices, device, mount_check)
        return join(self.devices, device)

    @contextmanager
    def replication_lock(self, device):
@@ -18,7 +18,6 @@ from six.moves.urllib.parse import unquote
from swift import gettext_ as _

from swift.account.utils import account_listing_response
from swift.common.request_helpers import get_listing_content_type
from swift.common.middleware.acl import parse_acl, format_acl
from swift.common.utils import public
from swift.common.constraints import check_metadata
@@ -26,6 +25,7 @@ from swift.common import constraints
from swift.common.http import HTTP_NOT_FOUND, HTTP_GONE
from swift.proxy.controllers.base import Controller, clear_info_cache, \
    set_info_cache
from swift.common.middleware import listing_formats
from swift.common.swob import HTTPBadRequest, HTTPMethodNotAllowed
from swift.common.request_helpers import get_sys_meta_prefix
@@ -67,6 +67,9 @@ class AccountController(Controller):
        concurrency = self.app.account_ring.replica_count \
            if self.app.concurrent_gets else 1
        node_iter = self.app.iter_nodes(self.app.account_ring, partition)
        params = req.params
        params['format'] = 'json'
        req.params = params
        resp = self.GETorHEAD_base(
            req, _('Account'), node_iter, partition,
            req.swift_entity_path.rstrip('/'), concurrency)
@@ -86,8 +89,10 @@ class AccountController(Controller):
            # creates the account if necessary. If we feed it a perfect
            # lie, it'll just try to create the container without
            # creating the account, and that'll fail.
            resp = account_listing_response(self.account_name, req,
                                            get_listing_content_type(req))
            req.params = {}  # clear our format override
            resp = account_listing_response(
                self.account_name, req,
                listing_formats.get_listing_content_type(req))
            resp.headers['X-Backend-Fake-Account-Listing'] = 'yes'

        # Cache this. We just made a request to a storage node and got
@@ -793,14 +793,16 @@ class ResumingGetter(object):
        this request. This will change the Range header
        so that the next req will start where it left off.

        :raises ValueError: if invalid range header
        :raises HTTPRequestedRangeNotSatisfiable: if begin + num_bytes
                                                  > end of range + 1
        :raises RangeAlreadyComplete: if begin + num_bytes == end of range + 1
        """
        if 'Range' in self.backend_headers:
            req_range = Range(self.backend_headers['Range'])
        try:
            req_range = Range(self.backend_headers.get('Range'))
        except ValueError:
            req_range = None

        if req_range:
            begin, end = req_range.ranges[0]
            if begin is None:
                # this is a -50 range req (last 50 bytes of file)
@@ -824,6 +826,9 @@ class ResumingGetter(object):
        else:
            self.backend_headers['Range'] = 'bytes=%d-' % num_bytes

        # Reset so if we need to do this more than once, we don't double-up
        self.bytes_used_from_backend = 0

    def pop_range(self):
        """
        Remove the first byterange from our Range header.
@@ -23,8 +23,7 @@ from swift.common.constraints import check_metadata, CONTAINER_LISTING_LIMIT
from swift.common import constraints
from swift.common.http import HTTP_ACCEPTED, is_success, \
    HTTP_PRECONDITION_FAILED
from swift.common.request_helpers import get_listing_content_type, \
    create_container_listing, get_sys_meta_prefix
from swift.common.request_helpers import get_sys_meta_prefix
from swift.proxy.controllers.base import Controller, delay_denial, \
    cors_validation, set_info_cache, clear_info_cache
from swift.common.storage_policy import POLICIES
@@ -107,6 +106,9 @@ class ContainerController(Controller):
        concurrency = self.app.container_ring.replica_count \
            if self.app.concurrent_gets else 1
        node_iter = self.app.iter_nodes(self.app.container_ring, part)
        params = req.params
        params['format'] = 'json'
        req.params = params
        resp = self.GETorHEAD_base(
            req, _('Container'), node_iter, part,
            req.swift_entity_path, concurrency)
@@ -305,9 +307,8 @@ class ContainerController(Controller):
                    objects[-1]['name']:
                break

        out_content_type = get_listing_content_type(req)
        return create_container_listing(req, out_content_type, headers,
                                        objects, self.container_name)
        resp.body = json.dumps(objects)
        return resp

    @public
    @delay_denial
@@ -368,11 +369,11 @@ class ContainerController(Controller):
        headers = self._backend_requests(req, len(containers),
                                         account_partition, accounts,
                                         policy_index)
        clear_info_cache(self.app, req.environ,
                         self.account_name, self.container_name)
        resp = self.make_requests(
            req, self.app.container_ring,
            container_partition, 'PUT', req.swift_entity_path, headers)
        clear_info_cache(self.app, req.environ,
                         self.account_name, self.container_name)
        return resp

    @public
@@ -66,16 +66,19 @@ required_filters = [
     'after_fn': lambda pipe: (['catch_errors']
                               if pipe.startswith('catch_errors')
                               else [])},
    {'name': 'listing_formats', 'after_fn': lambda _junk: [
        'catch_errors', 'gatekeeper', 'proxy_logging', 'memcache']},
    # Put copy before dlo, slo and versioned_writes
    {'name': 'copy', 'after_fn': lambda _junk: [
        'staticweb', 'tempauth', 'keystoneauth',
        'catch_errors', 'gatekeeper', 'proxy_logging']},
    {'name': 'dlo', 'after_fn': lambda _junk: [
        'copy', 'staticweb', 'tempauth', 'keystoneauth',
        'catch_errors', 'gatekeeper', 'proxy_logging']},
    {'name': 'versioned_writes', 'after_fn': lambda _junk: [
        'slo', 'dlo', 'copy', 'staticweb', 'tempauth',
        'keystoneauth', 'catch_errors', 'gatekeeper', 'proxy_logging']},
    # Put copy before dlo, slo and versioned_writes
    {'name': 'copy', 'after_fn': lambda _junk: [
        'staticweb', 'tempauth', 'keystoneauth',
        'catch_errors', 'gatekeeper', 'proxy_logging']}]
]


def _label_for_policy(policy):
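Each `required_filters` entry names the filters its middleware must come after, and the proxy inserts the filter just past the right-most of those already present in the pipeline. A toy model of that placement rule (assumed, simplified semantics, not the proxy's actual pipeline code):

```python
def place_filter(pipeline, name, afters):
    # insert `name` after the right-most filter it must follow; names in
    # `afters` that are not present in the pipeline are simply ignored
    idx = 0
    for i, existing in enumerate(pipeline):
        if existing in afters:
            idx = i + 1
    return pipeline[:idx] + [name] + pipeline[idx:]
```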
@@ -505,15 +505,6 @@ def in_process_setup(the_object_server=object_server):
        'password6': 'testing6'
    })

    # If an env var explicitly specifies the proxy-server object_post_as_copy
    # option then use its value, otherwise leave default config unchanged.
    object_post_as_copy = os.environ.get(
        'SWIFT_TEST_IN_PROCESS_OBJECT_POST_AS_COPY')
    if object_post_as_copy is not None:
        object_post_as_copy = config_true_value(object_post_as_copy)
        config['object_post_as_copy'] = str(object_post_as_copy)
        _debug('Setting object_post_as_copy to %r' % object_post_as_copy)

    acc1lis = listen_zero()
    acc2lis = listen_zero()
    con1lis = listen_zero()
@@ -2531,6 +2531,7 @@ class TestFile(Base):
        self.assertEqual(1024, f_dict['bytes'])
        self.assertEqual('text/foobar', f_dict['content_type'])
        self.assertEqual(etag, f_dict['hash'])
        put_last_modified = f_dict['last_modified']

        # now POST updated content-type to each file
        file_item = self.env.container.file(file_name)
@@ -2555,6 +2556,7 @@ class TestFile(Base):
            self.fail('Failed to find file %r in listing' % file_name)
        self.assertEqual(1024, f_dict['bytes'])
        self.assertEqual('image/foobarbaz', f_dict['content_type'])
        self.assertLess(put_last_modified, f_dict['last_modified'])
        self.assertEqual(etag, f_dict['hash'])


@@ -448,7 +448,7 @@ class ProbeTest(unittest.TestCase):
        else:
            os.system('sudo mount %s' % device)

    def make_internal_client(self, object_post_as_copy=True):
    def make_internal_client(self):
        tempdir = mkdtemp()
        try:
            conf_path = os.path.join(tempdir, 'internal_client.conf')
@@ -464,14 +464,13 @@ class ProbeTest(unittest.TestCase):

            [filter:copy]
            use = egg:swift#copy
            object_post_as_copy = %s

            [filter:cache]
            use = egg:swift#memcache

            [filter:catch_errors]
            use = egg:swift#catch_errors
            """ % object_post_as_copy
            """
            with open(conf_path, 'w') as f:
                f.write(dedent(conf_body))
            return internal_client.InternalClient(conf_path, 'test', 1)
@@ -93,7 +93,7 @@ class TestContainerSync(ReplProbeTest):

        return source['name'], dest['name']

    def _test_sync(self, object_post_as_copy):
    def test_sync(self):
        source_container, dest_container = self._setup_synced_containers()

        # upload to source
@@ -111,12 +111,10 @@ class TestContainerSync(ReplProbeTest):
        self.assertIn('x-object-meta-test', resp_headers)
        self.assertEqual('put_value', resp_headers['x-object-meta-test'])

        # update metadata with a POST, using an internal client so we can
        # vary the object_post_as_copy setting - first use post-as-copy
        # update metadata with a POST
        post_headers = {'Content-Type': 'image/jpeg',
                        'X-Object-Meta-Test': 'post_value'}
        int_client = self.make_internal_client(
            object_post_as_copy=object_post_as_copy)
        int_client = self.make_internal_client()
        int_client.set_object_metadata(self.account, source_container,
                                       object_name, post_headers)
        # sanity checks...
@@ -154,12 +152,6 @@ class TestContainerSync(ReplProbeTest):
            self.url, self.token, dest_container, object_name)
        self.assertEqual(404, cm.exception.http_status)  # sanity check

    def test_sync_with_post_as_copy(self):
        self._test_sync(True)

    def test_sync_with_fast_post(self):
        self._test_sync(False)

    def test_sync_slo_manifest(self):
        # Verify that SLO manifests are sync'd even if their segments can not
        # be found in the destination account at time of sync'ing.
@@ -237,8 +237,7 @@ class TestUpdateOverridesEC(ECProbeTest):
        self.assertFalse(direct_client.direct_get_container(
            cnodes[0], cpart, self.account, 'c1')[1])

        # use internal client for POST so we can force fast-post mode
        int_client = self.make_internal_client(object_post_as_copy=False)
        int_client = self.make_internal_client()
        int_client.set_object_metadata(
            self.account, 'c1', 'o1', {'X-Object-Meta-Fruit': 'Tomato'})
        self.assertEqual(
@@ -296,8 +295,7 @@ class TestUpdateOverridesEC(ECProbeTest):
            content_type='test/ctype')
        meta = client.head_object(self.url, self.token, 'c1', 'o1')

        # use internal client for POST so we can force fast-post mode
        int_client = self.make_internal_client(object_post_as_copy=False)
        int_client = self.make_internal_client()
        int_client.set_object_metadata(
            self.account, 'c1', 'o1', {'X-Object-Meta-Fruit': 'Tomato'})
        self.assertEqual(
@@ -47,7 +47,7 @@ class Test(ReplProbeTest):
                                            policy=self.policy)
        self.container_brain = BrainSplitter(self.url, self.token,
                                             self.container_name)
        self.int_client = self.make_internal_client(object_post_as_copy=False)
        self.int_client = self.make_internal_client()

    def _get_object_info(self, account, container, obj, number):
        obj_conf = self.configs['object-server']
@@ -709,49 +709,28 @@ def quiet_eventlet_exceptions():
        eventlet_debug.hub_exceptions(orig_state)


class MockTrue(object):
@contextmanager
def mock_check_drive(isdir=False, ismount=False):
    """
    Instances of MockTrue evaluate like True
    Any attr accessed on an instance of MockTrue will return a MockTrue
    instance. Any method called on an instance of MockTrue will return
    a MockTrue instance.
    All device/drive/mount checking should be done through the constraints
    module if we keep the mocking consistly w/i that module we can keep our
    test robust to further rework on that interface.

    >>> thing = MockTrue()
    >>> thing
    True
    >>> thing == True # True == True
    True
    >>> thing == False # True == False
    False
    >>> thing != True # True != True
    False
    >>> thing != False # True != False
    True
    >>> thing.attribute
    True
    >>> thing.method()
    True
    >>> thing.attribute.method()
    True
    >>> thing.method().attribute
    True
    Replace the constraint modules underlying os calls with mocks.

    :param isdir: return value of constraints isdir calls, default False
    :param ismount: return value of constraints ismount calls, default False
    :returns: a dict of constraint module mocks
    """

    def __getattribute__(self, *args, **kwargs):
        return self

    def __call__(self, *args, **kwargs):
        return self

    def __repr__(*args, **kwargs):
        return repr(True)

    def __eq__(self, other):
        return other is True

    def __ne__(self, other):
        return other is not True
    mock_base = 'swift.common.constraints.'
    with mocklib.patch(mock_base + 'isdir') as mock_isdir, \
            mocklib.patch(mock_base + 'utils.ismount') as mock_ismount:
        mock_isdir.return_value = isdir
        mock_ismount.return_value = ismount
        yield {
            'isdir': mock_isdir,
            'ismount': mock_ismount,
        }


@contextmanager
@@ -36,7 +36,7 @@ from swift.account.server import AccountController
from swift.common.utils import (normalize_timestamp, replication, public,
                                mkdirs, storage_directory, Timestamp)
from swift.common.request_helpers import get_sys_meta_prefix
from test.unit import patch_policies, debug_logger
from test.unit import patch_policies, debug_logger, mock_check_drive
from swift.common.storage_policy import StoragePolicy, POLICIES


@@ -47,6 +47,7 @@ class TestAccountController(unittest.TestCase):
        """Set up for testing swift.account.server.AccountController"""
        self.testdir_base = mkdtemp()
        self.testdir = os.path.join(self.testdir_base, 'account_server')
        mkdirs(os.path.join(self.testdir, 'sda1'))
        self.controller = AccountController(
            {'devices': self.testdir, 'mount_check': 'false'})

@@ -71,6 +72,50 @@ class TestAccountController(unittest.TestCase):
        self.assertEqual(resp.headers['Server'],
                         (server_handler.server_type + '/' + swift_version))

    def test_insufficient_storage_mount_check_true(self):
        conf = {'devices': self.testdir, 'mount_check': 'true'}
        account_controller = AccountController(conf)
        self.assertTrue(account_controller.mount_check)
        for method in account_controller.allowed_methods:
            if method == 'OPTIONS':
                continue
            req = Request.blank('/sda1/p/a-or-suff', method=method,
                                headers={'x-timestamp': '1'})
            with mock_check_drive() as mocks:
                try:
                    resp = req.get_response(account_controller)
                    self.assertEqual(resp.status_int, 507)
                    mocks['ismount'].return_value = True
                    resp = req.get_response(account_controller)
                    self.assertNotEqual(resp.status_int, 507)
                    # feel free to rip out this last assertion...
                    expected = 2 if method == 'PUT' else 4
                    self.assertEqual(resp.status_int // 100, expected)
                except AssertionError as e:
                    self.fail('%s for %s' % (e, method))

    def test_insufficient_storage_mount_check_false(self):
        conf = {'devices': self.testdir, 'mount_check': 'false'}
        account_controller = AccountController(conf)
        self.assertFalse(account_controller.mount_check)
        for method in account_controller.allowed_methods:
            if method == 'OPTIONS':
                continue
            req = Request.blank('/sda1/p/a-or-suff', method=method,
                                headers={'x-timestamp': '1'})
            with mock_check_drive() as mocks:
                try:
                    resp = req.get_response(account_controller)
                    self.assertEqual(resp.status_int, 507)
                    mocks['isdir'].return_value = True
                    resp = req.get_response(account_controller)
                    self.assertNotEqual(resp.status_int, 507)
                    # feel free to rip out this last assertion...
                    expected = 2 if method == 'PUT' else 4
                    self.assertEqual(resp.status_int // 100, expected)
                except AssertionError as e:
                    self.fail('%s for %s' % (e, method))

    def test_DELETE_not_found(self):
        req = Request.blank('/sda1/p/a', environ={'REQUEST_METHOD': 'DELETE',
                                                  'HTTP_X_TIMESTAMP': '0'})
@@ -147,29 +192,6 @@ class TestAccountController(unittest.TestCase):
        resp = req.get_response(self.controller)
        self.assertEqual(resp.status_int, 400)

    def test_DELETE_insufficient_storage(self):
        self.controller = AccountController({'devices': self.testdir})
        req = Request.blank(
            '/sda-null/p/a', environ={'REQUEST_METHOD': 'DELETE',
                                      'HTTP_X_TIMESTAMP': '1'})
        resp = req.get_response(self.controller)
        self.assertEqual(resp.status_int, 507)

    def test_REPLICATE_insufficient_storage(self):
        conf = {'devices': self.testdir, 'mount_check': 'true'}
        self.account_controller = AccountController(conf)

        def fake_check_mount(*args, **kwargs):
            return False

        with mock.patch("swift.common.constraints.check_mount",
                        fake_check_mount):
            req = Request.blank('/sda1/p/suff',
                                environ={'REQUEST_METHOD': 'REPLICATE'},
                                headers={})
            resp = req.get_response(self.account_controller)
        self.assertEqual(resp.status_int, 507)

    def test_REPLICATE_rsync_then_merge_works(self):
        def fake_rsync_then_merge(self, drive, db_file, args):
            return HTTPNoContent()
@@ -331,13 +353,6 @@ class TestAccountController(unittest.TestCase):
        resp = req.get_response(self.controller)
        self.assertEqual(resp.status_int, 406)

    def test_HEAD_insufficient_storage(self):
        self.controller = AccountController({'devices': self.testdir})
        req = Request.blank('/sda-null/p/a', environ={'REQUEST_METHOD': 'HEAD',
                                                      'HTTP_X_TIMESTAMP': '1'})
        resp = req.get_response(self.controller)
        self.assertEqual(resp.status_int, 507)

    def test_HEAD_invalid_format(self):
        format = '%D1%BD%8A9'  # invalid UTF-8; should be %E1%BD%8A9 (E -> D)
        req = Request.blank('/sda1/p/a?format=' + format,
@@ -569,13 +584,6 @@ class TestAccountController(unittest.TestCase):
        resp = req.get_response(self.controller)
        self.assertEqual(resp.status_int, 400)

    def test_PUT_insufficient_storage(self):
        self.controller = AccountController({'devices': self.testdir})
        req = Request.blank('/sda-null/p/a', environ={'REQUEST_METHOD': 'PUT',
                                                      'HTTP_X_TIMESTAMP': '1'})
        resp = req.get_response(self.controller)
        self.assertEqual(resp.status_int, 507)

    def test_POST_HEAD_metadata(self):
        req = Request.blank(
            '/sda1/p/a', environ={'REQUEST_METHOD': 'PUT'},
@@ -693,13 +701,6 @@ class TestAccountController(unittest.TestCase):
        resp = req.get_response(self.controller)
        self.assertEqual(resp.status_int, 400)

    def test_POST_insufficient_storage(self):
        self.controller = AccountController({'devices': self.testdir})
        req = Request.blank('/sda-null/p/a', environ={'REQUEST_METHOD': 'POST',
                                                      'HTTP_X_TIMESTAMP': '1'})
        resp = req.get_response(self.controller)
        self.assertEqual(resp.status_int, 507)

    def test_POST_after_DELETE_not_found(self):
        req = Request.blank('/sda1/p/a', environ={'REQUEST_METHOD': 'PUT',
                                                  'HTTP_X_TIMESTAMP': '0'})
@@ -1502,13 +1503,6 @@ class TestAccountController(unittest.TestCase):
        listing.append(node2.firstChild.nodeValue)
        self.assertEqual(listing, ['sub.1.0', 'sub.1.1', 'sub.1.2'])

    def test_GET_insufficient_storage(self):
        self.controller = AccountController({'devices': self.testdir})
        req = Request.blank('/sda-null/p/a', environ={'REQUEST_METHOD': 'GET',
                                                      'HTTP_X_TIMESTAMP': '1'})
        resp = req.get_response(self.controller)
        self.assertEqual(resp.status_int, 507)

    def test_through_call(self):
        inbuf = BytesIO()
        errbuf = StringIO()
test/unit/cli/test_dispersion_report.py (new file, 21 lines)
@@ -0,0 +1,21 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may not
# use this file except in compliance with the License. You may obtain a copy
# of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import unittest

from swift.cli import dispersion_report


class TestDispersionReport(unittest.TestCase):

    def test_placeholder(self):
        self.assertTrue(callable(dispersion_report.main))
@ -16,7 +16,6 @@ import base64
|
||||
import json
|
||||
import os
|
||||
import unittest
|
||||
from xml.dom import minidom
|
||||
|
||||
import mock
|
||||
|
||||
@@ -961,138 +960,6 @@ class TestDecrypterContainerRequests(unittest.TestCase):
        self.assertIn("Cipher must be AES_CTR_256",
                      self.decrypter.logger.get_lines_for_level('error')[0])

    def _assert_element(self, name, expected, element):
        self.assertEqual(element.tagName, name)
        self._assert_element_contains_dict(expected, element)

    def _assert_element_contains_dict(self, expected, element):
        for k, v in expected.items():
            entry = element.getElementsByTagName(k)
            self.assertIsNotNone(entry, 'Key %s not found' % k)
            actual = entry[0].childNodes[0].nodeValue
            self.assertEqual(v, actual,
                             "Expected %s but got %s for key %s"
                             % (v, actual, k))

    def test_GET_container_xml(self):
        content_type_1 = u'\uF10F\uD20D\uB30B\u9409'
        content_type_2 = 'text/plain; param=foo'
        pt_etag1 = 'c6e8196d7f0fff6444b90861fe8d609d'
        pt_etag2 = 'ac0374ed4d43635f803c82469d0b5a10'
        key = fetch_crypto_keys()['container']

        fake_body = '''<?xml version="1.0" encoding="UTF-8"?>
<container name="testc">\
<subdir name="test-subdir"><name>test-subdir</name></subdir>\
<object><hash>\
''' + encrypt_and_append_meta(pt_etag1.encode('utf8'), key) + '''\
</hash><content_type>\
''' + content_type_1 + '''\
</content_type><name>testfile</name><bytes>16</bytes>\
<last_modified>2015-04-19T02:37:39.601660</last_modified></object>\
<object><hash>\
''' + encrypt_and_append_meta(pt_etag2.encode('utf8'), key) + '''\
</hash><content_type>\
''' + content_type_2 + '''\
</content_type><name>testfile2</name><bytes>24</bytes>\
<last_modified>2015-04-19T02:37:39.684740</last_modified></object>\
</container>'''

        resp = self._make_cont_get_req(fake_body, 'xml')
        self.assertEqual('200 OK', resp.status)
        body = resp.body
        self.assertEqual(len(body), int(resp.headers['Content-Length']))

        tree = minidom.parseString(body)
        containers = tree.getElementsByTagName('container')
        self.assertEqual(1, len(containers))
        self.assertEqual('testc',
                         containers[0].attributes.getNamedItem("name").value)

        results = containers[0].childNodes
        self.assertEqual(3, len(results))

        self._assert_element('subdir', {"name": "test-subdir"}, results[0])

        obj_dict_1 = {"bytes": "16",
                      "last_modified": "2015-04-19T02:37:39.601660",
                      "hash": pt_etag1,
                      "name": "testfile",
                      "content_type": content_type_1}
        self._assert_element('object', obj_dict_1, results[1])
        obj_dict_2 = {"bytes": "24",
                      "last_modified": "2015-04-19T02:37:39.684740",
                      "hash": pt_etag2,
                      "name": "testfile2",
                      "content_type": content_type_2}
        self._assert_element('object', obj_dict_2, results[2])

    def test_GET_container_xml_with_crypto_override(self):
        content_type_1 = 'image/jpeg'
        content_type_2 = 'text/plain; param=foo'

        fake_body = '''<?xml version="1.0" encoding="UTF-8"?>
<container name="testc">\
<object><hash>c6e8196d7f0fff6444b90861fe8d609d</hash>\
<content_type>''' + content_type_1 + '''\
</content_type><name>testfile</name><bytes>16</bytes>\
<last_modified>2015-04-19T02:37:39.601660</last_modified></object>\
<object><hash>ac0374ed4d43635f803c82469d0b5a10</hash>\
<content_type>''' + content_type_2 + '''\
</content_type><name>testfile2</name><bytes>24</bytes>\
<last_modified>2015-04-19T02:37:39.684740</last_modified></object>\
</container>'''

        resp = self._make_cont_get_req(fake_body, 'xml', override=True)

        self.assertEqual('200 OK', resp.status)
        body = resp.body
        self.assertEqual(len(body), int(resp.headers['Content-Length']))

        tree = minidom.parseString(body)
        containers = tree.getElementsByTagName('container')
        self.assertEqual(1, len(containers))
        self.assertEqual('testc',
                         containers[0].attributes.getNamedItem("name").value)

        objs = tree.getElementsByTagName('object')
        self.assertEqual(2, len(objs))

        obj_dict_1 = {"bytes": "16",
                      "last_modified": "2015-04-19T02:37:39.601660",
                      "hash": "c6e8196d7f0fff6444b90861fe8d609d",
                      "name": "testfile",
                      "content_type": content_type_1}
        self._assert_element_contains_dict(obj_dict_1, objs[0])
        obj_dict_2 = {"bytes": "24",
                      "last_modified": "2015-04-19T02:37:39.684740",
                      "hash": "ac0374ed4d43635f803c82469d0b5a10",
                      "name": "testfile2",
                      "content_type": content_type_2}
        self._assert_element_contains_dict(obj_dict_2, objs[1])

    def test_cont_get_xml_req_with_cipher_mismatch(self):
        bad_crypto_meta = fake_get_crypto_meta()
        bad_crypto_meta['cipher'] = 'unknown_cipher'

        fake_body = '''<?xml version="1.0" encoding="UTF-8"?>
<container name="testc"><object>\
<hash>''' + encrypt_and_append_meta('c6e8196d7f0fff6444b90861fe8d609d',
                                    fetch_crypto_keys()['container'],
                                    crypto_meta=bad_crypto_meta) + '''\
</hash>\
<content_type>image/jpeg</content_type>\
<name>testfile</name><bytes>16</bytes>\
<last_modified>2015-04-19T02:37:39.601660</last_modified></object>\
</container>'''

        resp = self._make_cont_get_req(fake_body, 'xml')

        self.assertEqual('500 Internal Error', resp.status)
        self.assertEqual('Error decrypting container listing', resp.body)
        self.assertIn("Cipher must be AES_CTR_256",
                      self.decrypter.logger.get_lines_for_level('error')[0])


class TestModuleMethods(unittest.TestCase):
    def test_purge_crypto_sysmeta_headers(self):
@@ -618,14 +618,5 @@ class TestCryptoPipelineChanges(unittest.TestCase):
        self._check_listing(self.crypto_app)


class TestCryptoPipelineChangesFastPost(TestCryptoPipelineChanges):
    @classmethod
    def setUpClass(cls):
        # set proxy config to use fast post
        extra_conf = {'object_post_as_copy': 'False'}
        cls._test_context = setup_servers(extra_conf=extra_conf)
        cls.proxy_app = cls._test_context["test_servers"][0]


if __name__ == '__main__':
    unittest.main()

@@ -14,7 +14,6 @@
# See the License for the specific language governing permissions and
# limitations under the License.

import time
import mock
import shutil
import tempfile
@@ -93,9 +92,7 @@ class TestCopyConstraints(unittest.TestCase):
class TestServerSideCopyMiddleware(unittest.TestCase):
    def setUp(self):
        self.app = FakeSwift()
        self.ssc = copy.filter_factory({
            'object_post_as_copy': 'yes',
        })(self.app)
        self.ssc = copy.filter_factory({})(self.app)
        self.ssc.logger = self.app.logger

    def tearDown(self):
@@ -166,92 +163,6 @@ class TestServerSideCopyMiddleware(unittest.TestCase):
        self.assertRequestEqual(req, self.authorized[0])
        self.assertNotIn('swift.orig_req_method', req.environ)

    def test_POST_as_COPY_simple(self):
        self.app.register('GET', '/v1/a/c/o', swob.HTTPOk, {}, 'passed')
        self.app.register('PUT', '/v1/a/c/o', swob.HTTPAccepted, {})
        req = Request.blank('/v1/a/c/o', method='POST')
        status, headers, body = self.call_ssc(req)
        self.assertEqual(status, '202 Accepted')
        self.assertEqual(len(self.authorized), 1)
        self.assertRequestEqual(req, self.authorized[0])
        # For basic test cases, assert orig_req_method behavior
        self.assertEqual(req.environ['swift.orig_req_method'], 'POST')

    def test_POST_as_COPY_201_return_202(self):
        self.app.register('GET', '/v1/a/c/o', swob.HTTPOk, {}, 'passed')
        self.app.register('PUT', '/v1/a/c/o', swob.HTTPCreated, {})
        req = Request.blank('/v1/a/c/o', method='POST')
        status, headers, body = self.call_ssc(req)
        self.assertEqual(status, '202 Accepted')
        self.assertEqual(len(self.authorized), 1)
        self.assertRequestEqual(req, self.authorized[0])

    def test_POST_delete_at(self):
        self.app.register('GET', '/v1/a/c/o', swob.HTTPOk, {}, 'passed')
        self.app.register('PUT', '/v1/a/c/o', swob.HTTPAccepted, {})
        t = str(int(time.time() + 100))
        req = Request.blank('/v1/a/c/o', method='POST',
                            headers={'Content-Type': 'foo/bar',
                                     'X-Delete-At': t})
        status, headers, body = self.call_ssc(req)
        self.assertEqual(status, '202 Accepted')

        calls = self.app.calls_with_headers
        method, path, req_headers = calls[1]
        self.assertEqual('PUT', method)
        self.assertTrue('X-Delete-At' in req_headers)
        self.assertEqual(req_headers['X-Delete-At'], str(t))
        self.assertEqual(len(self.authorized), 1)
        self.assertRequestEqual(req, self.authorized[0])

    def test_POST_as_COPY_static_large_object(self):
        self.app.register('GET', '/v1/a/c/o', swob.HTTPOk,
                          {'X-Static-Large-Object': True}, 'passed')
        self.app.register('PUT', '/v1/a/c/o', swob.HTTPAccepted, {})
        req = Request.blank('/v1/a/c/o', method='POST',
                            headers={})
        status, headers, body = self.call_ssc(req)
        self.assertEqual(status, '202 Accepted')

        calls = self.app.calls_with_headers
        method, path, req_headers = calls[1]
        self.assertEqual('PUT', method)
        self.assertNotIn('X-Static-Large-Object', req_headers)
        self.assertEqual(len(self.authorized), 1)
        self.assertRequestEqual(req, self.authorized[0])

    def test_POST_as_COPY_dynamic_large_object_manifest(self):
        self.app.register('GET', '/v1/a/c/o', swob.HTTPOk,
                          {'X-Object-Manifest': 'orig_manifest'}, 'passed')
        self.app.register('PUT', '/v1/a/c/o', swob.HTTPCreated, {})
        req = Request.blank('/v1/a/c/o', method='POST',
                            headers={'X-Object-Manifest': 'new_manifest'})
        status, headers, body = self.call_ssc(req)
        self.assertEqual(status, '202 Accepted')

        calls = self.app.calls_with_headers
        method, path, req_headers = calls[1]
        self.assertEqual('PUT', method)
        self.assertEqual('new_manifest', req_headers['x-object-manifest'])
        self.assertEqual(len(self.authorized), 1)
        self.assertRequestEqual(req, self.authorized[0])

    def test_POST_as_COPY_dynamic_large_object_no_manifest(self):
        self.app.register('GET', '/v1/a/c/o', swob.HTTPOk,
                          {'X-Object-Manifest': 'orig_manifest'}, 'passed')
        self.app.register('PUT', '/v1/a/c/o', swob.HTTPCreated, {})
        req = Request.blank('/v1/a/c/o', method='POST',
                            headers={})
        status, headers, body = self.call_ssc(req)
        self.assertEqual(status, '202 Accepted')

        calls = self.app.calls_with_headers
        method, path, req_headers = calls[1]
        self.assertEqual('PUT', method)
        self.assertNotIn('X-Object-Manifest', req_headers)
        self.assertEqual(len(self.authorized), 1)
        self.assertRequestEqual(req, self.authorized[0])

    def test_basic_put_with_x_copy_from(self):
        self.app.register('GET', '/v1/a/c/o', swob.HTTPOk, {}, 'passed')
        self.app.register('PUT', '/v1/a/c/o2', swob.HTTPCreated, {})
@@ -1345,100 +1256,6 @@ class TestServerSideCopyMiddleware(unittest.TestCase):
                         req_headers.get('X-Object-Transient-Sysmeta-Test'))
        self.assertEqual('Not Bar', req_headers.get('X-Foo'))

    def _test_POST_source_headers(self, extra_post_headers):
        # helper method to perform a POST with metadata headers that should
        # always be sent to the destination
        post_headers = {'X-Object-Meta-Test2': 'added',
                        'X-Object-Sysmeta-Test2': 'added',
                        'X-Object-Transient-Sysmeta-Test2': 'added'}
        post_headers.update(extra_post_headers)
        get_resp_headers = {
            'X-Timestamp': '1234567890.12345',
            'X-Backend-Timestamp': '1234567890.12345',
            'Content-Type': 'text/original',
            'Content-Encoding': 'gzip',
            'Content-Disposition': 'attachment; filename=myfile',
            'X-Object-Meta-Test': 'original',
            'X-Object-Sysmeta-Test': 'original',
            'X-Object-Transient-Sysmeta-Test': 'original',
            'X-Foo': 'Bar'}
        self.app.register(
            'GET', '/v1/a/c/o', swob.HTTPOk, headers=get_resp_headers)
        self.app.register('PUT', '/v1/a/c/o', swob.HTTPCreated, {})
        req = Request.blank('/v1/a/c/o', method='POST', headers=post_headers)
        status, headers, body = self.call_ssc(req)
        self.assertEqual(status, '202 Accepted')
        calls = self.app.calls_with_headers
        self.assertEqual(2, len(calls))
        method, path, req_headers = calls[1]
        self.assertEqual('PUT', method)
        # these headers should always be applied to the destination
        self.assertEqual('added', req_headers.get('X-Object-Meta-Test2'))
        self.assertEqual('added',
                         req_headers.get('X-Object-Transient-Sysmeta-Test2'))
        # POSTed sysmeta should never be applied to the destination
        self.assertNotIn('X-Object-Sysmeta-Test2', req_headers)
        # existing sysmeta should always be preserved
        self.assertEqual('original',
                         req_headers.get('X-Object-Sysmeta-Test'))
        return req_headers

    def test_POST_no_updates(self):
        post_headers = {}
        req_headers = self._test_POST_source_headers(post_headers)
        self.assertEqual('text/original', req_headers.get('Content-Type'))
        self.assertNotIn('X-Object-Meta-Test', req_headers)
        self.assertNotIn('X-Object-Transient-Sysmeta-Test', req_headers)
        self.assertNotIn('X-Timestamp', req_headers)
        self.assertNotIn('X-Backend-Timestamp', req_headers)
        self.assertNotIn('Content-Encoding', req_headers)
        self.assertNotIn('Content-Disposition', req_headers)
        self.assertNotIn('X-Foo', req_headers)

    def test_POST_with_updates(self):
        post_headers = {
            'Content-Type': 'text/not_original',
            'Content-Encoding': 'not_gzip',
            'Content-Disposition': 'attachment; filename=notmyfile',
            'X-Object-Meta-Test': 'not_original',
            'X-Object-Sysmeta-Test': 'not_original',
            'X-Object-Transient-Sysmeta-Test': 'not_original',
            'X-Foo': 'Not Bar',
        }
        req_headers = self._test_POST_source_headers(post_headers)
        self.assertEqual('text/not_original', req_headers.get('Content-Type'))
        self.assertEqual('not_gzip', req_headers.get('Content-Encoding'))
        self.assertEqual('attachment; filename=notmyfile',
                         req_headers.get('Content-Disposition'))
        self.assertEqual('not_original', req_headers.get('X-Object-Meta-Test'))
        self.assertEqual('not_original',
                         req_headers.get('X-Object-Transient-Sysmeta-Test'))
        self.assertEqual('Not Bar', req_headers.get('X-Foo'))

    def test_POST_x_fresh_metadata_with_updates(self):
        # post-as-copy trumps x-fresh-metadata i.e. existing user metadata
        # should not be copied, sysmeta is copied *and not updated with new*
        post_headers = {
            'X-Fresh-Metadata': 'true',
            'Content-Type': 'text/not_original',
            'Content-Encoding': 'not_gzip',
            'Content-Disposition': 'attachment; filename=notmyfile',
            'X-Object-Meta-Test': 'not_original',
            'X-Object-Sysmeta-Test': 'not_original',
            'X-Object-Transient-Sysmeta-Test': 'not_original',
            'X-Foo': 'Not Bar',
        }
        req_headers = self._test_POST_source_headers(post_headers)
        self.assertEqual('text/not_original', req_headers.get('Content-Type'))
        self.assertEqual('not_gzip', req_headers.get('Content-Encoding'))
        self.assertEqual('attachment; filename=notmyfile',
                         req_headers.get('Content-Disposition'))
        self.assertEqual('not_original', req_headers.get('X-Object-Meta-Test'))
        self.assertEqual('not_original',
                         req_headers.get('X-Object-Transient-Sysmeta-Test'))
        self.assertEqual('Not Bar', req_headers.get('X-Foo'))
        self.assertIn('X-Fresh-Metadata', req_headers)

    def test_COPY_with_single_range(self):
        # verify that source etag is not copied when copying a range
        self.app.register('GET', '/v1/a/c/o', swob.HTTPOk,
@@ -1472,67 +1289,6 @@ class TestServerSideCopyConfiguration(unittest.TestCase):
    def tearDown(self):
        shutil.rmtree(self.tmpdir)

    def test_post_as_copy_defaults_to_false(self):
        ssc = copy.filter_factory({})("no app here")
        self.assertEqual(ssc.object_post_as_copy, False)

    def test_reading_proxy_conf_when_no_middleware_conf_present(self):
        proxy_conf = dedent("""
        [DEFAULT]
        bind_ip = 10.4.5.6

        [pipeline:main]
        pipeline = catch_errors copy ye-olde-proxy-server

        [filter:copy]
        use = egg:swift#copy

        [app:ye-olde-proxy-server]
        use = egg:swift#proxy
        object_post_as_copy = no
        """)

        conffile = tempfile.NamedTemporaryFile()
        conffile.write(proxy_conf)
        conffile.flush()

        ssc = copy.filter_factory({
            '__file__': conffile.name
        })("no app here")

        self.assertEqual(ssc.object_post_as_copy, False)

    def test_middleware_conf_precedence(self):
        proxy_conf = dedent("""
        [DEFAULT]
        bind_ip = 10.4.5.6

        [pipeline:main]
        pipeline = catch_errors copy ye-olde-proxy-server

        [filter:copy]
        use = egg:swift#copy
        object_post_as_copy = no

        [app:ye-olde-proxy-server]
        use = egg:swift#proxy
        object_post_as_copy = yes
        """)

        conffile = tempfile.NamedTemporaryFile()
        conffile.write(proxy_conf)
        conffile.flush()

        with mock.patch('swift.common.middleware.copy.get_logger',
                        return_value=debug_logger('copy')):
            ssc = copy.filter_factory({
                'object_post_as_copy': 'no',
                '__file__': conffile.name
            })("no app here")

        self.assertEqual(ssc.object_post_as_copy, False)
        self.assertFalse(ssc.logger.get_lines_for_level('warning'))

    def _test_post_as_copy_emits_warning(self, conf):
        with mock.patch('swift.common.middleware.copy.get_logger',
                        return_value=debug_logger('copy')):
@@ -1585,9 +1341,7 @@ class TestServerSideCopyMiddlewareWithEC(unittest.TestCase):
        self.app = PatchedObjControllerApp(
            None, FakeMemcache(), account_ring=FakeRing(),
            container_ring=FakeRing(), logger=self.logger)
        self.ssc = copy.filter_factory({
            'object_post_as_copy': 'yes',
        })(self.app)
        self.ssc = copy.filter_factory({})(self.app)
        self.ssc.logger = self.app.logger
        self.policy = POLICIES.default
        self.app.container_info = dict(self.container_info)

@@ -129,11 +129,11 @@ class DloTestCase(unittest.TestCase):
                 "last_modified": lm,
                 "content_type": "application/png"}]
        self.app.register(
            'GET', '/v1/AUTH_test/c?format=json',
            'GET', '/v1/AUTH_test/c',
            swob.HTTPOk, {'Content-Type': 'application/json; charset=utf-8'},
            json.dumps(full_container_listing))
        self.app.register(
            'GET', '/v1/AUTH_test/c?format=json&prefix=seg',
            'GET', '/v1/AUTH_test/c?prefix=seg',
            swob.HTTPOk, {'Content-Type': 'application/json; charset=utf-8'},
            json.dumps(segs))

@@ -148,11 +148,11 @@ class DloTestCase(unittest.TestCase):
             'X-Object-Manifest': 'c/seg_'},
            'manyseg')
        self.app.register(
            'GET', '/v1/AUTH_test/c?format=json&prefix=seg_',
            'GET', '/v1/AUTH_test/c?prefix=seg_',
            swob.HTTPOk, {'Content-Type': 'application/json; charset=utf-8'},
            json.dumps(segs[:3]))
        self.app.register(
            'GET', '/v1/AUTH_test/c?format=json&prefix=seg_&marker=seg_03',
            'GET', '/v1/AUTH_test/c?prefix=seg_&marker=seg_03',
            swob.HTTPOk, {'Content-Type': 'application/json; charset=utf-8'},
            json.dumps(segs[3:]))

@@ -163,7 +163,7 @@ class DloTestCase(unittest.TestCase):
             'X-Object-Manifest': 'c/noseg_'},
            'noseg')
        self.app.register(
            'GET', '/v1/AUTH_test/c?format=json&prefix=noseg_',
            'GET', '/v1/AUTH_test/c?prefix=noseg_',
            swob.HTTPOk, {'Content-Type': 'application/json; charset=utf-8'},
            json.dumps([]))

@@ -278,7 +278,7 @@ class TestDloHeadManifest(DloTestCase):
        self.assertEqual(
            self.app.calls,
            [('HEAD', '/v1/AUTH_test/mancon/manifest-no-segments'),
             ('GET', '/v1/AUTH_test/c?format=json&prefix=noseg_')])
             ('GET', '/v1/AUTH_test/c?prefix=noseg_')])


class TestDloGetManifest(DloTestCase):
@@ -444,7 +444,7 @@ class TestDloGetManifest(DloTestCase):
        self.assertEqual(
            self.app.calls,
            [('GET', '/v1/AUTH_test/mancon/manifest-many-segments'),
             ('GET', '/v1/AUTH_test/c?format=json&prefix=seg_'),
             ('GET', '/v1/AUTH_test/c?prefix=seg_'),
             ('GET', '/v1/AUTH_test/c/seg_01?multipart-manifest=get'),
             ('GET', '/v1/AUTH_test/c/seg_02?multipart-manifest=get'),
             ('GET', '/v1/AUTH_test/c/seg_03?multipart-manifest=get')])
@@ -601,7 +601,7 @@ class TestDloGetManifest(DloTestCase):

    def test_error_listing_container_first_listing_request(self):
        self.app.register(
            'GET', '/v1/AUTH_test/c?format=json&prefix=seg_',
            'GET', '/v1/AUTH_test/c?prefix=seg_',
            swob.HTTPNotFound, {}, None)

        req = swob.Request.blank('/v1/AUTH_test/mancon/manifest-many-segments',
@@ -613,7 +613,7 @@ class TestDloGetManifest(DloTestCase):

    def test_error_listing_container_second_listing_request(self):
        self.app.register(
            'GET', '/v1/AUTH_test/c?format=json&prefix=seg_&marker=seg_03',
            'GET', '/v1/AUTH_test/c?prefix=seg_&marker=seg_03',
            swob.HTTPNotFound, {}, None)

        req = swob.Request.blank('/v1/AUTH_test/mancon/manifest-many-segments',
@@ -648,7 +648,7 @@ class TestDloGetManifest(DloTestCase):
            swob.HTTPOk, {'Content-Length': '0', 'Etag': 'blah',
                          'X-Object-Manifest': 'c/quotetags'}, None)
        self.app.register(
            'GET', '/v1/AUTH_test/c?format=json&prefix=quotetags',
            'GET', '/v1/AUTH_test/c?prefix=quotetags',
            swob.HTTPOk, {'Content-Type': 'application/json; charset=utf-8'},
            json.dumps([{"hash": "\"abc\"", "bytes": 5, "name": "quotetags1",
                         "last_modified": "2013-11-22T02:42:14.261620",
@@ -673,7 +673,7 @@ class TestDloGetManifest(DloTestCase):
        segs = [{"hash": md5hex("AAAAA"), "bytes": 5, "name": u"é1"},
                {"hash": md5hex("AAAAA"), "bytes": 5, "name": u"é2"}]
        self.app.register(
            'GET', '/v1/AUTH_test/c?format=json&prefix=%C3%A9',
            'GET', '/v1/AUTH_test/c?prefix=%C3%A9',
            swob.HTTPOk, {'Content-Type': 'application/json'},
            json.dumps(segs))

@@ -745,7 +745,7 @@ class TestDloGetManifest(DloTestCase):
        self.assertEqual(
            self.app.calls,
            [('GET', '/v1/AUTH_test/mancon/manifest'),
             ('GET', '/v1/AUTH_test/c?format=json&prefix=seg'),
             ('GET', '/v1/AUTH_test/c?prefix=seg'),
             ('GET', '/v1/AUTH_test/c/seg_01?multipart-manifest=get'),
             ('GET', '/v1/AUTH_test/c/seg_02?multipart-manifest=get'),
             ('GET', '/v1/AUTH_test/c/seg_03?multipart-manifest=get')])

@@ -215,5 +215,36 @@ class TestGatekeeper(unittest.TestCase):
        for app_hdrs in ({}, self.forbidden_headers_out):
            self._test_duplicate_headers_not_removed(method, app_hdrs)

    def _test_location_header(self, location_path):
        headers = {'Location': location_path}
        req = Request.blank(
            '/v/a/c', environ={'REQUEST_METHOD': 'GET',
                               'swift.leave_relative_location': True})

        class SelfishApp(FakeApp):
            def __call__(self, env, start_response):
                self.req = Request(env)
                resp = Response(request=self.req, body='FAKE APP',
                                headers=self.headers)
                # like webob, middlewares in the pipeline may rewrite
                # location header from relative to absolute
                resp.location = resp.absolute_location()
                return resp(env, start_response)

        selfish_app = SelfishApp(headers=headers)

        app = self.get_app(selfish_app, {})
        resp = req.get_response(app)
        self.assertEqual('200 OK', resp.status)
        self.assertIn('Location', resp.headers)
        self.assertEqual(resp.headers['Location'], location_path)

    def test_location_header_fixed(self):
        self._test_location_header('/v/a/c/o2')
        self._test_location_header('/v/a/c/o2?query=path&query2=doit')
        self._test_location_header('/v/a/c/o2?query=path#test')
        self._test_location_header('/v/a/c/o2;whatisparam?query=path#test')


if __name__ == '__main__':
    unittest.main()

345	test/unit/common/middleware/test_listing_formats.py	Normal file
@@ -0,0 +1,345 @@
# Copyright (c) 2017 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import json
import unittest

from swift.common.swob import Request, HTTPOk
from swift.common.middleware import listing_formats
from test.unit.common.middleware.helpers import FakeSwift


class TestListingFormats(unittest.TestCase):
    def setUp(self):
        self.fake_swift = FakeSwift()
        self.app = listing_formats.ListingFilter(self.fake_swift)
        self.fake_account_listing = json.dumps([
            {'name': 'bar', 'bytes': 0, 'count': 0,
             'last_modified': '1970-01-01T00:00:00.000000'},
            {'subdir': 'foo_'},
        ])
        self.fake_container_listing = json.dumps([
            {'name': 'bar', 'hash': 'etag', 'bytes': 0,
             'content_type': 'text/plain',
             'last_modified': '1970-01-01T00:00:00.000000'},
            {'subdir': 'foo/'},
        ])

    def test_valid_account(self):
        self.fake_swift.register('GET', '/v1/a', HTTPOk, {
            'Content-Length': str(len(self.fake_account_listing)),
            'Content-Type': 'application/json'}, self.fake_account_listing)

        req = Request.blank('/v1/a')
        resp = req.get_response(self.app)
        self.assertEqual(resp.body, 'bar\nfoo_\n')
        self.assertEqual(resp.headers['Content-Type'],
                         'text/plain; charset=utf-8')
        self.assertEqual(self.fake_swift.calls[-1], (
            'GET', '/v1/a?format=json'))

        req = Request.blank('/v1/a?format=txt')
        resp = req.get_response(self.app)
        self.assertEqual(resp.body, 'bar\nfoo_\n')
        self.assertEqual(resp.headers['Content-Type'],
                         'text/plain; charset=utf-8')
        self.assertEqual(self.fake_swift.calls[-1], (
            'GET', '/v1/a?format=json'))

        req = Request.blank('/v1/a?format=json')
        resp = req.get_response(self.app)
        self.assertEqual(resp.body, self.fake_account_listing)
        self.assertEqual(resp.headers['Content-Type'],
                         'application/json; charset=utf-8')
        self.assertEqual(self.fake_swift.calls[-1], (
            'GET', '/v1/a?format=json'))

        req = Request.blank('/v1/a?format=xml')
        resp = req.get_response(self.app)
        self.assertEqual(resp.body.split('\n'), [
            '<?xml version="1.0" encoding="UTF-8"?>',
            '<account name="a">',
            '<container><name>bar</name><count>0</count><bytes>0</bytes>'
            '<last_modified>1970-01-01T00:00:00.000000</last_modified>'
            '</container>',
            '<subdir name="foo_" />',
            '</account>',
        ])
        self.assertEqual(resp.headers['Content-Type'],
                         'application/xml; charset=utf-8')
        self.assertEqual(self.fake_swift.calls[-1], (
            'GET', '/v1/a?format=json'))

    def test_valid_container(self):
        self.fake_swift.register('GET', '/v1/a/c', HTTPOk, {
            'Content-Length': str(len(self.fake_container_listing)),
            'Content-Type': 'application/json'}, self.fake_container_listing)

        req = Request.blank('/v1/a/c')
        resp = req.get_response(self.app)
        self.assertEqual(resp.body, 'bar\nfoo/\n')
        self.assertEqual(resp.headers['Content-Type'],
                         'text/plain; charset=utf-8')
        self.assertEqual(self.fake_swift.calls[-1], (
            'GET', '/v1/a/c?format=json'))

        req = Request.blank('/v1/a/c?format=txt')
        resp = req.get_response(self.app)
        self.assertEqual(resp.body, 'bar\nfoo/\n')
        self.assertEqual(resp.headers['Content-Type'],
                         'text/plain; charset=utf-8')
        self.assertEqual(self.fake_swift.calls[-1], (
            'GET', '/v1/a/c?format=json'))

        req = Request.blank('/v1/a/c?format=json')
        resp = req.get_response(self.app)
        self.assertEqual(resp.body, self.fake_container_listing)
        self.assertEqual(resp.headers['Content-Type'],
                         'application/json; charset=utf-8')
        self.assertEqual(self.fake_swift.calls[-1], (
            'GET', '/v1/a/c?format=json'))

        req = Request.blank('/v1/a/c?format=xml')
        resp = req.get_response(self.app)
        self.assertEqual(
            resp.body,
            '<?xml version="1.0" encoding="UTF-8"?>\n'
            '<container name="c">'
            '<object><name>bar</name><hash>etag</hash><bytes>0</bytes>'
            '<content_type>text/plain</content_type>'
            '<last_modified>1970-01-01T00:00:00.000000</last_modified>'
            '</object>'
            '<subdir name="foo/"><name>foo/</name></subdir>'
            '</container>'
        )
        self.assertEqual(resp.headers['Content-Type'],
                         'application/xml; charset=utf-8')
        self.assertEqual(self.fake_swift.calls[-1], (
            'GET', '/v1/a/c?format=json'))

    def test_blank_account(self):
        self.fake_swift.register('GET', '/v1/a', HTTPOk, {
            'Content-Length': '2', 'Content-Type': 'application/json'}, '[]')

        req = Request.blank('/v1/a')
        resp = req.get_response(self.app)
        self.assertEqual(resp.status, '204 No Content')
        self.assertEqual(resp.body, '')
        self.assertEqual(resp.headers['Content-Type'],
                         'text/plain; charset=utf-8')
        self.assertEqual(self.fake_swift.calls[-1], (
            'GET', '/v1/a?format=json'))

        req = Request.blank('/v1/a?format=txt')
        resp = req.get_response(self.app)
        self.assertEqual(resp.status, '204 No Content')
        self.assertEqual(resp.body, '')
        self.assertEqual(resp.headers['Content-Type'],
                         'text/plain; charset=utf-8')
        self.assertEqual(self.fake_swift.calls[-1], (
            'GET', '/v1/a?format=json'))

        req = Request.blank('/v1/a?format=json')
        resp = req.get_response(self.app)
        self.assertEqual(resp.status, '200 OK')
        self.assertEqual(resp.body, '[]')
        self.assertEqual(resp.headers['Content-Type'],
                         'application/json; charset=utf-8')
        self.assertEqual(self.fake_swift.calls[-1], (
            'GET', '/v1/a?format=json'))

        req = Request.blank('/v1/a?format=xml')
        resp = req.get_response(self.app)
        self.assertEqual(resp.status, '200 OK')
        self.assertEqual(resp.body.split('\n'), [
            '<?xml version="1.0" encoding="UTF-8"?>',
            '<account name="a">',
            '</account>',
        ])
        self.assertEqual(resp.headers['Content-Type'],
                         'application/xml; charset=utf-8')
        self.assertEqual(self.fake_swift.calls[-1], (
            'GET', '/v1/a?format=json'))

    def test_blank_container(self):
        self.fake_swift.register('GET', '/v1/a/c', HTTPOk, {
            'Content-Length': '2', 'Content-Type': 'application/json'}, '[]')

        req = Request.blank('/v1/a/c')
        resp = req.get_response(self.app)
        self.assertEqual(resp.status, '204 No Content')
        self.assertEqual(resp.body, '')
        self.assertEqual(resp.headers['Content-Type'],
                         'text/plain; charset=utf-8')
        self.assertEqual(self.fake_swift.calls[-1], (
            'GET', '/v1/a/c?format=json'))

        req = Request.blank('/v1/a/c?format=txt')
        resp = req.get_response(self.app)
        self.assertEqual(resp.status, '204 No Content')
        self.assertEqual(resp.body, '')
        self.assertEqual(resp.headers['Content-Type'],
                         'text/plain; charset=utf-8')
        self.assertEqual(self.fake_swift.calls[-1], (
            'GET', '/v1/a/c?format=json'))

        req = Request.blank('/v1/a/c?format=json')
        resp = req.get_response(self.app)
        self.assertEqual(resp.status, '200 OK')
        self.assertEqual(resp.body, '[]')
        self.assertEqual(resp.headers['Content-Type'],
                         'application/json; charset=utf-8')
        self.assertEqual(self.fake_swift.calls[-1], (
            'GET', '/v1/a/c?format=json'))

        req = Request.blank('/v1/a/c?format=xml')
        resp = req.get_response(self.app)
        self.assertEqual(resp.status, '200 OK')
        self.assertEqual(resp.body.split('\n'), [
            '<?xml version="1.0" encoding="UTF-8"?>',
            '<container name="c" />',
        ])
        self.assertEqual(resp.headers['Content-Type'],
                         'application/xml; charset=utf-8')
        self.assertEqual(self.fake_swift.calls[-1], (
            'GET', '/v1/a/c?format=json'))

    def test_pass_through(self):
        def do_test(path):
            self.fake_swift.register(
                'GET', path, HTTPOk, {
                    'Content-Length': str(len(self.fake_container_listing)),
                    'Content-Type': 'application/json'},
                self.fake_container_listing)
            req = Request.blank(path + '?format=xml')
            resp = req.get_response(self.app)
            self.assertEqual(resp.body, self.fake_container_listing)
|
||||
self.assertEqual(resp.headers['Content-Type'], 'application/json')
|
||||
self.assertEqual(self.fake_swift.calls[-1], (
|
||||
'GET', path + '?format=xml')) # query param is unchanged
|
||||
|
||||
do_test('/')
|
||||
do_test('/v1')
|
||||
do_test('/auth/v1.0')
|
||||
do_test('/v1/a/c/o')
|
||||
|
||||
def test_static_web_not_json(self):
|
||||
body = 'doesnt matter'
|
||||
self.fake_swift.register(
|
||||
'GET', '/v1/staticweb/not-json', HTTPOk,
|
||||
{'Content-Length': str(len(body)),
|
||||
'Content-Type': 'text/plain'},
|
||||
body)
|
||||
|
||||
resp = Request.blank('/v1/staticweb/not-json').get_response(self.app)
|
||||
self.assertEqual(resp.body, body)
|
||||
self.assertEqual(resp.headers['Content-Type'], 'text/plain')
|
||||
# We *did* try, though
|
||||
self.assertEqual(self.fake_swift.calls[-1], (
|
||||
'GET', '/v1/staticweb/not-json?format=json'))
|
||||
# TODO: add a similar test that has *no* content-type
|
||||
# FakeSwift seems to make this hard to do
|
||||
|
||||
def test_static_web_not_really_json(self):
|
||||
body = 'raises ValueError'
|
||||
self.fake_swift.register(
|
||||
'GET', '/v1/staticweb/not-json', HTTPOk,
|
||||
{'Content-Length': str(len(body)),
|
||||
'Content-Type': 'application/json'},
|
||||
body)
|
||||
|
||||
resp = Request.blank('/v1/staticweb/not-json').get_response(self.app)
|
||||
self.assertEqual(resp.body, body)
|
||||
self.assertEqual(resp.headers['Content-Type'], 'application/json')
|
||||
self.assertEqual(self.fake_swift.calls[-1], (
|
||||
'GET', '/v1/staticweb/not-json?format=json'))
|
||||
|
||||
def test_static_web_pretend_to_be_giant_json(self):
|
||||
body = json.dumps(self.fake_container_listing * 1000000)
|
||||
self.assertGreater( # sanity
|
||||
len(body), listing_formats.MAX_CONTAINER_LISTING_CONTENT_LENGTH)
|
||||
|
||||
self.fake_swift.register(
|
||||
'GET', '/v1/staticweb/not-json', HTTPOk,
|
||||
{'Content-Type': 'application/json'},
|
||||
body)
|
||||
|
||||
resp = Request.blank('/v1/staticweb/not-json').get_response(self.app)
|
||||
self.assertEqual(resp.body, body)
|
||||
self.assertEqual(resp.headers['Content-Type'], 'application/json')
|
||||
self.assertEqual(self.fake_swift.calls[-1], (
|
||||
'GET', '/v1/staticweb/not-json?format=json'))
|
||||
# TODO: add a similar test for chunked transfers
|
||||
# (staticweb referencing a DLO that doesn't fit in a single listing?)
|
||||
|
||||
def test_static_web_bad_json(self):
|
||||
def do_test(body_obj):
|
||||
body = json.dumps(body_obj)
|
||||
self.fake_swift.register(
|
||||
'GET', '/v1/staticweb/bad-json', HTTPOk,
|
||||
{'Content-Length': str(len(body)),
|
||||
'Content-Type': 'application/json'},
|
||||
body)
|
||||
|
||||
def do_sub_test(path):
|
||||
resp = Request.blank(path).get_response(self.app)
|
||||
self.assertEqual(resp.body, body)
|
||||
# NB: no charset is added; we pass through whatever we got
|
||||
self.assertEqual(resp.headers['Content-Type'],
|
||||
'application/json')
|
||||
self.assertEqual(self.fake_swift.calls[-1], (
|
||||
'GET', '/v1/staticweb/bad-json?format=json'))
|
||||
|
||||
do_sub_test('/v1/staticweb/bad-json')
|
||||
do_sub_test('/v1/staticweb/bad-json?format=txt')
|
||||
do_sub_test('/v1/staticweb/bad-json?format=xml')
|
||||
do_sub_test('/v1/staticweb/bad-json?format=json')
|
||||
|
||||
do_test({})
|
||||
do_test({'non-empty': 'hash'})
|
||||
do_test(None)
|
||||
do_test(0)
|
||||
do_test('some string')
|
||||
do_test([None])
|
||||
do_test([0])
|
||||
do_test(['some string'])
|
||||
|
||||
def test_static_web_bad_but_not_terrible_json(self):
|
||||
body = json.dumps([{'no name': 'nor subdir'}])
|
||||
self.fake_swift.register(
|
||||
'GET', '/v1/staticweb/bad-json', HTTPOk,
|
||||
{'Content-Length': str(len(body)),
|
||||
'Content-Type': 'application/json'},
|
||||
body)
|
||||
|
||||
def do_test(path, expect_charset=False):
|
||||
resp = Request.blank(path).get_response(self.app)
|
||||
self.assertEqual(resp.body, body)
|
||||
if expect_charset:
|
||||
self.assertEqual(resp.headers['Content-Type'],
|
||||
'application/json; charset=utf-8')
|
||||
else:
|
||||
self.assertEqual(resp.headers['Content-Type'],
|
||||
'application/json')
|
||||
self.assertEqual(self.fake_swift.calls[-1], (
|
||||
'GET', '/v1/staticweb/bad-json?format=json'))
|
||||
|
||||
do_test('/v1/staticweb/bad-json')
|
||||
do_test('/v1/staticweb/bad-json?format=txt')
|
||||
do_test('/v1/staticweb/bad-json?format=xml')
|
||||
# The response we get is *just close enough* to being valid that we
|
||||
# assume it is and slap on the missing charset. If you set up staticweb
|
||||
# to serve back such responses, your clients are already hosed.
|
||||
do_test('/v1/staticweb/bad-json?format=json', expect_charset=True)
|
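The tests above pin down the contract of a listing-format middleware: the backend request always carries `format=json`, and the middleware re-renders the JSON listing into the plain-text, JSON, or XML shape the client asked for. A minimal sketch of that re-rendering step (the `render_listing` helper and its signature are illustrative assumptions, not Swift's actual `listing_formats` code):

```python
import json
from xml.sax.saxutils import escape, quoteattr


def render_listing(json_body, fmt, container):
    """Re-render a backend JSON container listing for the client."""
    records = json.loads(json_body)
    if fmt == 'json':
        # already in the right shape; hand it back untouched
        return 'application/json', json_body
    if fmt == 'xml':
        lines = ['<?xml version="1.0" encoding="UTF-8"?>',
                 '<container name=%s>' % quoteattr(container)]
        for rec in records:
            lines.append('<object><name>%s</name></object>'
                         % escape(rec['name']))
        lines.append('</container>')
        return 'application/xml', '\n'.join(lines)
    # default (text): one object or subdir name per line
    text = ''.join('%s\n' % (rec.get('name') or rec.get('subdir'))
                   for rec in records)
    return 'text/plain', text
```

This mirrors what the tests assert: an empty listing renders as an empty text body (hence the 204s above), while XML output gets the declaration line and a `<container>` wrapper.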
@@ -279,7 +279,7 @@ class FakeApp(object):
         if ((env['PATH_INFO'] in (
                 '/v1/a/c3', '/v1/a/c4', '/v1/a/c8', '/v1/a/c9'))
                 and (env['QUERY_STRING'] ==
-                     'delimiter=/&format=json&prefix=subdir/')):
+                     'delimiter=/&prefix=subdir/')):
             headers.update({'X-Container-Object-Count': '12',
                             'X-Container-Bytes-Used': '73763',
                             'X-Container-Read': '.r:*',
@@ -296,14 +296,14 @@ class FakeApp(object):
                  {"subdir":"subdir3/subsubdir/"}]
             '''.strip()
         elif env['PATH_INFO'] == '/v1/a/c3' and env['QUERY_STRING'] == \
-                'delimiter=/&format=json&prefix=subdiry/':
+                'delimiter=/&prefix=subdiry/':
             headers.update({'X-Container-Object-Count': '12',
                             'X-Container-Bytes-Used': '73763',
                             'X-Container-Read': '.r:*',
                             'Content-Type': 'application/json; charset=utf-8'})
             body = '[]'
         elif env['PATH_INFO'] == '/v1/a/c3' and env['QUERY_STRING'] == \
-                'limit=1&format=json&delimiter=/&limit=1&prefix=subdirz/':
+                'limit=1&delimiter=/&prefix=subdirz/':
             headers.update({'X-Container-Object-Count': '12',
                             'X-Container-Bytes-Used': '73763',
                             'X-Container-Read': '.r:*',
@@ -315,7 +315,7 @@ class FakeApp(object):
                  "last_modified":"2011-03-24T04:27:52.709100"}]
             '''.strip()
         elif env['PATH_INFO'] == '/v1/a/c6' and env['QUERY_STRING'] == \
-                'limit=1&format=json&delimiter=/&limit=1&prefix=subdir/':
+                'limit=1&delimiter=/&prefix=subdir/':
             headers.update({'X-Container-Object-Count': '12',
                             'X-Container-Bytes-Used': '73763',
                             'X-Container-Read': '.r:*',
@@ -329,9 +329,9 @@ class FakeApp(object):
             '''.strip()
         elif env['PATH_INFO'] == '/v1/a/c10' and (
                 env['QUERY_STRING'] ==
-                'delimiter=/&format=json&prefix=%E2%98%83/' or
+                'delimiter=/&prefix=%E2%98%83/' or
                 env['QUERY_STRING'] ==
-                'delimiter=/&format=json&prefix=%E2%98%83/%E2%98%83/'):
+                'delimiter=/&prefix=%E2%98%83/%E2%98%83/'):
             headers.update({'X-Container-Object-Count': '12',
                             'X-Container-Bytes-Used': '73763',
                             'X-Container-Read': '.r:*',
@@ -346,7 +346,7 @@ class FakeApp(object):
             '''.strip()
         elif 'prefix=' in env['QUERY_STRING']:
             return Response(status='204 No Content')(env, start_response)
-        elif 'format=json' in env['QUERY_STRING']:
+        else:
             headers.update({'X-Container-Object-Count': '12',
                             'X-Container-Bytes-Used': '73763',
                             'Content-Type': 'application/json; charset=utf-8'})
@@ -397,15 +397,6 @@ class FakeApp(object):
                  "content_type":"text/plain",
                  "last_modified":"2011-03-24T04:27:52.935560"}]
             '''.strip()
-        else:
-            headers.update({'X-Container-Object-Count': '12',
-                            'X-Container-Bytes-Used': '73763',
-                            'Content-Type': 'text/plain; charset=utf-8'})
-            body = '\n'.join(['401error.html', '404error.html', 'index.html',
-                              'listing.css', 'one.txt', 'subdir/1.txt',
-                              'subdir/2.txt', u'subdir/\u2603.txt', 'subdir2',
-                              'subdir3/subsubdir/index.html', 'two.txt',
-                              u'\u2603/\u2603/one.txt'])
         return Response(status='200 Ok', headers=headers,
                         body=body)(env, start_response)
 
@@ -481,8 +472,8 @@ class TestStaticWeb(unittest.TestCase):
     def test_container2(self):
         resp = Request.blank('/v1/a/c2').get_response(self.test_staticweb)
         self.assertEqual(resp.status_int, 200)
-        self.assertEqual(resp.content_type, 'text/plain')
-        self.assertEqual(len(resp.body.split('\n')),
+        self.assertEqual(resp.content_type, 'application/json')
+        self.assertEqual(len(json.loads(resp.body)),
                          int(resp.headers['x-container-object-count']))
 
     def test_container2_web_mode_explicitly_off(self):
@@ -490,8 +481,8 @@ class TestStaticWeb(unittest.TestCase):
             '/v1/a/c2',
             headers={'x-web-mode': 'false'}).get_response(self.test_staticweb)
         self.assertEqual(resp.status_int, 200)
-        self.assertEqual(resp.content_type, 'text/plain')
-        self.assertEqual(len(resp.body.split('\n')),
+        self.assertEqual(resp.content_type, 'application/json')
+        self.assertEqual(len(json.loads(resp.body)),
                          int(resp.headers['x-container-object-count']))
 
     def test_container2_web_mode_explicitly_on(self):
@@ -507,7 +498,7 @@ class TestStaticWeb(unittest.TestCase):
 
     def test_container2json(self):
         resp = Request.blank(
-            '/v1/a/c2?format=json').get_response(self.test_staticweb)
+            '/v1/a/c2').get_response(self.test_staticweb)
         self.assertEqual(resp.status_int, 200)
         self.assertEqual(resp.content_type, 'application/json')
         self.assertEqual(len(json.loads(resp.body)),
@@ -515,7 +506,7 @@ class TestStaticWeb(unittest.TestCase):
 
     def test_container2json_web_mode_explicitly_off(self):
         resp = Request.blank(
-            '/v1/a/c2?format=json',
+            '/v1/a/c2',
             headers={'x-web-mode': 'false'}).get_response(self.test_staticweb)
         self.assertEqual(resp.status_int, 200)
         self.assertEqual(resp.content_type, 'application/json')
@@ -524,7 +515,7 @@ class TestStaticWeb(unittest.TestCase):
 
     def test_container2json_web_mode_explicitly_on(self):
         resp = Request.blank(
-            '/v1/a/c2?format=json',
+            '/v1/a/c2',
             headers={'x-web-mode': 'true'}).get_response(self.test_staticweb)
         self.assertEqual(resp.status_int, 404)
@@ -117,23 +117,14 @@ class TestSubRequestLogging(unittest.TestCase):
         self._test_subrequest_logged('PUT')
         self._test_subrequest_logged('DELETE')
 
-    def _test_subrequest_logged_POST(self, subrequest_type,
-                                     post_as_copy=False):
-        # Test that subrequests made downstream from Copy POST will be logged
-        # with the request type of the subrequest as opposed to the GET/PUT.
-
-        app = FakeApp({'subrequest_type': subrequest_type,
-                       'object_post_as_copy': post_as_copy})
+    def _test_subrequest_logged_POST(self, subrequest_type):
+        app = FakeApp({'subrequest_type': subrequest_type})
 
         hdrs = {'content-type': 'text/plain'}
         req = Request.blank(self.path, method='POST', headers=hdrs)
 
         app.register('POST', self.path, HTTPOk, headers=hdrs)
-        expect_lines = 2
-        if post_as_copy:
-            app.register('PUT', self.path, HTTPOk, headers=hdrs)
-            app.register('GET', '/v1/a/c/o', HTTPOk, headers=hdrs)
-            expect_lines = 4
 
         req.get_response(app)
         info_log_lines = app.fake_logger.get_lines_for_level('info')
@@ -142,33 +133,17 @@ class TestSubRequestLogging(unittest.TestCase):
 
         subreq_put_post = '%s %s' % (subrequest_type, SUB_PUT_POST_PATH)
         origpost = 'POST %s' % self.path
-        copyget = 'GET %s' % self.path
 
-        if post_as_copy:
-            # post_as_copy expect GET subreq, copy GET, PUT subreq, orig POST
-            subreq_get = '%s %s' % (subrequest_type, SUB_GET_PATH)
-            self.assertTrue(subreq_get in info_log_lines[0])
-            self.assertTrue(copyget in info_log_lines[1])
-            self.assertTrue(subreq_put_post in info_log_lines[2])
-            self.assertTrue(origpost in info_log_lines[3])
-        else:
-            # fast post expect POST subreq, original POST
-            self.assertTrue(subreq_put_post in info_log_lines[0])
-            self.assertTrue(origpost in info_log_lines[1])
+        # fast post expect POST subreq, original POST
+        self.assertTrue(subreq_put_post in info_log_lines[0])
+        self.assertTrue(origpost in info_log_lines[1])
 
-    def test_subrequest_logged_post_as_copy_with_POST_fast_post(self):
-        self._test_subrequest_logged_POST('HEAD', post_as_copy=False)
-        self._test_subrequest_logged_POST('GET', post_as_copy=False)
-        self._test_subrequest_logged_POST('POST', post_as_copy=False)
-        self._test_subrequest_logged_POST('PUT', post_as_copy=False)
-        self._test_subrequest_logged_POST('DELETE', post_as_copy=False)
-
-    def test_subrequest_logged_post_as_copy_with_POST(self):
-        self._test_subrequest_logged_POST('HEAD', post_as_copy=True)
-        self._test_subrequest_logged_POST('GET', post_as_copy=True)
-        self._test_subrequest_logged_POST('POST', post_as_copy=True)
-        self._test_subrequest_logged_POST('PUT', post_as_copy=True)
-        self._test_subrequest_logged_POST('DELETE', post_as_copy=True)
+    def test_subrequest_logged_with_POST(self):
+        self._test_subrequest_logged_POST('HEAD')
+        self._test_subrequest_logged_POST('GET')
+        self._test_subrequest_logged_POST('POST')
+        self._test_subrequest_logged_POST('PUT')
+        self._test_subrequest_logged_POST('DELETE')
 
 
 if __name__ == '__main__':
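The sub-request logging tests assert that each sub-request shows up in the proxy's access log with its own method and path, not those of the client request that spawned it. The core bookkeeping can be sketched as a tiny WSGI wrapper (illustrative only; Swift's real `proxy_logging` middleware records far more detail per request):

```python
class RequestLogger(object):
    """WSGI wrapper that records 'METHOD PATH' for every request it sees.

    Because each request (including internally generated sub-requests)
    passes through the wrapper itself, sub-requests are logged with
    their own method rather than the client's.
    """

    def __init__(self, app):
        self.app = app
        self.log_lines = []

    def __call__(self, environ, start_response):
        self.log_lines.append('%s %s' % (environ['REQUEST_METHOD'],
                                         environ['PATH_INFO']))
        return self.app(environ, start_response)
```

In the tests above, `FakeApp` plays the role of `self.app` and `fake_logger` collects the resulting lines, so a POST that triggers a HEAD sub-request yields one log line per request.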
@@ -330,23 +330,6 @@ class VersionedWritesTestCase(VersionedWritesBaseTestCase):
         self.assertEqual(len(self.authorized), 1)
         self.assertRequestEqual(req, self.authorized[0])
 
-    def test_put_object_post_as_copy(self):
-        # PUTs due to a post-as-copy should NOT cause a versioning op
-        self.app.register(
-            'PUT', '/v1/a/c/o', swob.HTTPCreated, {}, 'passed')
-
-        cache = FakeCache({'sysmeta': {'versions-location': 'ver_cont'}})
-        req = Request.blank(
-            '/v1/a/c/o',
-            environ={'REQUEST_METHOD': 'PUT', 'swift.cache': cache,
-                     'CONTENT_LENGTH': '100',
-                     'swift.post_as_copy': True})
-        status, headers, body = self.call_vw(req)
-        self.assertEqual(status, '201 Created')
-        self.assertEqual(len(self.authorized), 1)
-        self.assertRequestEqual(req, self.authorized[0])
-        self.assertEqual(1, self.app.call_count)
-
     def test_put_first_object_success(self):
         self.app.register(
             'PUT', '/v1/a/c/o', swob.HTTPOk, {}, 'passed')
@@ -584,7 +567,7 @@ class VersionedWritesTestCase(VersionedWritesBaseTestCase):
             'DELETE', '/v1/a/c/o', swob.HTTPOk, {}, 'passed')
         self.app.register(
             'GET',
-            '/v1/a/ver_cont?format=json&prefix=001o/&marker=&reverse=on',
+            '/v1/a/ver_cont?prefix=001o/&marker=&reverse=on',
             swob.HTTPNotFound, {}, None)
 
         cache = FakeCache({'sysmeta': {'versions-location': 'ver_cont'}})
@@ -600,7 +583,7 @@ class VersionedWritesTestCase(VersionedWritesBaseTestCase):
         self.assertEqual(['VW', None], self.app.swift_sources)
         self.assertEqual({'fake_trans_id'}, set(self.app.txn_ids))
 
-        prefix_listing_prefix = '/v1/a/ver_cont?format=json&prefix=001o/&'
+        prefix_listing_prefix = '/v1/a/ver_cont?prefix=001o/&'
         self.assertEqual(self.app.calls, [
             ('GET', prefix_listing_prefix + 'marker=&reverse=on'),
             ('DELETE', '/v1/a/c/o'),
@@ -611,7 +594,7 @@ class VersionedWritesTestCase(VersionedWritesBaseTestCase):
             'DELETE', '/v1/a/c/o', swob.HTTPOk, {}, 'passed')
         self.app.register(
             'GET',
-            '/v1/a/ver_cont?format=json&prefix=001o/&marker=&reverse=on',
+            '/v1/a/ver_cont?prefix=001o/&marker=&reverse=on',
             swob.HTTPOk, {}, '[]')
 
         cache = FakeCache({'sysmeta': {'versions-location': 'ver_cont'}})
@@ -624,7 +607,7 @@ class VersionedWritesTestCase(VersionedWritesBaseTestCase):
         self.assertEqual(len(self.authorized), 1)
         self.assertRequestEqual(req, self.authorized[0])
 
-        prefix_listing_prefix = '/v1/a/ver_cont?format=json&prefix=001o/&'
+        prefix_listing_prefix = '/v1/a/ver_cont?prefix=001o/&'
         self.assertEqual(self.app.calls, [
             ('GET', prefix_listing_prefix + 'marker=&reverse=on'),
             ('DELETE', '/v1/a/c/o'),
@@ -633,7 +616,7 @@ class VersionedWritesTestCase(VersionedWritesBaseTestCase):
     def test_delete_latest_version_no_marker_success(self):
         self.app.register(
             'GET',
-            '/v1/a/ver_cont?format=json&prefix=001o/&marker=&reverse=on',
+            '/v1/a/ver_cont?prefix=001o/&marker=&reverse=on',
             swob.HTTPOk, {},
             '[{"hash": "y", '
             '"last_modified": "2014-11-21T14:23:02.206740", '
@@ -672,7 +655,7 @@ class VersionedWritesTestCase(VersionedWritesBaseTestCase):
         req_headers = self.app.headers[-1]
         self.assertNotIn('x-if-delete-at', [h.lower() for h in req_headers])
 
-        prefix_listing_prefix = '/v1/a/ver_cont?format=json&prefix=001o/&'
+        prefix_listing_prefix = '/v1/a/ver_cont?prefix=001o/&'
         self.assertEqual(self.app.calls, [
             ('GET', prefix_listing_prefix + 'marker=&reverse=on'),
             ('GET', '/v1/a/ver_cont/001o/2'),
@@ -683,7 +666,7 @@ class VersionedWritesTestCase(VersionedWritesBaseTestCase):
     def test_delete_latest_version_restores_marker_success(self):
         self.app.register(
             'GET',
-            '/v1/a/ver_cont?format=json&prefix=001o/&marker=&reverse=on',
+            '/v1/a/ver_cont?prefix=001o/&marker=&reverse=on',
             swob.HTTPOk, {},
             '[{"hash": "x", '
             '"last_modified": "2014-11-21T14:23:02.206740", '
@@ -731,7 +714,7 @@ class VersionedWritesTestCase(VersionedWritesBaseTestCase):
         # in the base versioned container.
         self.app.register(
             'GET',
-            '/v1/a/ver_cont?format=json&prefix=001o/&marker=&reverse=on',
+            '/v1/a/ver_cont?prefix=001o/&marker=&reverse=on',
             swob.HTTPOk, {},
             '[{"hash": "y", '
             '"last_modified": "2014-11-21T14:23:02.206740", '
@@ -766,7 +749,7 @@ class VersionedWritesTestCase(VersionedWritesBaseTestCase):
         self.assertEqual(len(self.authorized), 1)
         self.assertRequestEqual(req, self.authorized[0])
 
-        prefix_listing_prefix = '/v1/a/ver_cont?format=json&prefix=001o/&'
+        prefix_listing_prefix = '/v1/a/ver_cont?prefix=001o/&'
        self.assertEqual(self.app.calls, [
             ('GET', prefix_listing_prefix + 'marker=&reverse=on'),
             ('HEAD', '/v1/a/c/o'),
@@ -787,7 +770,7 @@ class VersionedWritesTestCase(VersionedWritesBaseTestCase):
 
     def test_delete_latest_version_doubled_up_markers_success(self):
         self.app.register(
-            'GET', '/v1/a/ver_cont?format=json&prefix=001o/'
+            'GET', '/v1/a/ver_cont?prefix=001o/'
                    '&marker=&reverse=on',
             swob.HTTPOk, {},
             '[{"hash": "x", '
@@ -905,7 +888,7 @@ class VersionedWritesTestCase(VersionedWritesBaseTestCase):
             'DELETE', '/v1/a/c/o', swob.HTTPOk, {}, 'passed')
         self.app.register(
             'GET',
-            '/v1/a/ver_cont?format=json&prefix=001o/&marker=&reverse=on',
+            '/v1/a/ver_cont?prefix=001o/&marker=&reverse=on',
             swob.HTTPOk, {},
             '[{"hash": "y", '
             '"last_modified": "2014-11-21T14:23:02.206740", '
@@ -931,7 +914,7 @@ class VersionedWritesTestCase(VersionedWritesBaseTestCase):
         self.assertEqual(len(self.authorized), 1)
         self.assertRequestEqual(req, self.authorized[0])
 
-        prefix_listing_prefix = '/v1/a/ver_cont?format=json&prefix=001o/&'
+        prefix_listing_prefix = '/v1/a/ver_cont?prefix=001o/&'
         self.assertEqual(self.app.calls, [
             ('GET', prefix_listing_prefix + 'marker=&reverse=on'),
             ('GET', '/v1/a/ver_cont/001o/1'),
@@ -942,7 +925,7 @@ class VersionedWritesTestCase(VersionedWritesBaseTestCase):
     def test_DELETE_on_expired_versioned_object(self):
         self.app.register(
             'GET',
-            '/v1/a/ver_cont?format=json&prefix=001o/&marker=&reverse=on',
+            '/v1/a/ver_cont?prefix=001o/&marker=&reverse=on',
             swob.HTTPOk, {},
             '[{"hash": "y", '
             '"last_modified": "2014-11-21T14:23:02.206740", '
@@ -979,7 +962,7 @@ class VersionedWritesTestCase(VersionedWritesBaseTestCase):
         self.assertRequestEqual(req, self.authorized[0])
         self.assertEqual(5, self.app.call_count)
 
-        prefix_listing_prefix = '/v1/a/ver_cont?format=json&prefix=001o/&'
+        prefix_listing_prefix = '/v1/a/ver_cont?prefix=001o/&'
         self.assertEqual(self.app.calls, [
             ('GET', prefix_listing_prefix + 'marker=&reverse=on'),
             ('GET', '/v1/a/ver_cont/001o/2'),
@@ -992,7 +975,7 @@ class VersionedWritesTestCase(VersionedWritesBaseTestCase):
         authorize_call = []
         self.app.register(
             'GET',
-            '/v1/a/ver_cont?format=json&prefix=001o/&marker=&reverse=on',
+            '/v1/a/ver_cont?prefix=001o/&marker=&reverse=on',
             swob.HTTPOk, {},
             '[{"hash": "y", '
             '"last_modified": "2014-11-21T14:23:02.206740", '
@@ -1021,7 +1004,7 @@ class VersionedWritesTestCase(VersionedWritesBaseTestCase):
         self.assertEqual(len(authorize_call), 1)
         self.assertRequestEqual(req, authorize_call[0])
 
-        prefix_listing_prefix = '/v1/a/ver_cont?format=json&prefix=001o/&'
+        prefix_listing_prefix = '/v1/a/ver_cont?prefix=001o/&'
         self.assertEqual(self.app.calls, [
             ('GET', prefix_listing_prefix + 'marker=&reverse=on'),
         ])
@@ -1058,7 +1041,7 @@ class VersionedWritesOldContainersTestCase(VersionedWritesBaseTestCase):
         self.app.register(
             'DELETE', '/v1/a/c/o', swob.HTTPOk, {}, 'passed')
         self.app.register(
-            'GET', '/v1/a/ver_cont?format=json&prefix=001o/&'
+            'GET', '/v1/a/ver_cont?prefix=001o/&'
                    'marker=&reverse=on',
             swob.HTTPOk, {},
             '[{"hash": "x", '
@@ -1072,7 +1055,7 @@ class VersionedWritesOldContainersTestCase(VersionedWritesBaseTestCase):
             '"name": "001o/2", '
             '"content_type": "text/plain"}]')
         self.app.register(
-            'GET', '/v1/a/ver_cont?format=json&prefix=001o/'
+            'GET', '/v1/a/ver_cont?prefix=001o/'
                    '&marker=001o/2',
             swob.HTTPNotFound, {}, None)
         self.app.register(
@@ -1103,7 +1086,7 @@ class VersionedWritesOldContainersTestCase(VersionedWritesBaseTestCase):
         req_headers = self.app.headers[-1]
         self.assertNotIn('x-if-delete-at', [h.lower() for h in req_headers])
 
-        prefix_listing_prefix = '/v1/a/ver_cont?format=json&prefix=001o/&'
+        prefix_listing_prefix = '/v1/a/ver_cont?prefix=001o/&'
         self.assertEqual(self.app.calls, [
             ('GET', prefix_listing_prefix + 'marker=&reverse=on'),
             ('GET', prefix_listing_prefix + 'marker=001o/2'),
@@ -1114,7 +1097,7 @@ class VersionedWritesOldContainersTestCase(VersionedWritesBaseTestCase):
 
     def test_DELETE_on_expired_versioned_object(self):
         self.app.register(
-            'GET', '/v1/a/ver_cont?format=json&prefix=001o/&'
+            'GET', '/v1/a/ver_cont?prefix=001o/&'
                    'marker=&reverse=on',
             swob.HTTPOk, {},
             '[{"hash": "x", '
@@ -1128,7 +1111,7 @@ class VersionedWritesOldContainersTestCase(VersionedWritesBaseTestCase):
             '"name": "001o/2", '
             '"content_type": "text/plain"}]')
         self.app.register(
-            'GET', '/v1/a/ver_cont?format=json&prefix=001o/'
+            'GET', '/v1/a/ver_cont?prefix=001o/'
                    '&marker=001o/2',
             swob.HTTPNotFound, {}, None)
 
@@ -1156,7 +1139,7 @@ class VersionedWritesOldContainersTestCase(VersionedWritesBaseTestCase):
         self.assertRequestEqual(req, self.authorized[0])
         self.assertEqual(6, self.app.call_count)
 
-        prefix_listing_prefix = '/v1/a/ver_cont?format=json&prefix=001o/&'
+        prefix_listing_prefix = '/v1/a/ver_cont?prefix=001o/&'
         self.assertEqual(self.app.calls, [
             ('GET', prefix_listing_prefix + 'marker=&reverse=on'),
             ('GET', prefix_listing_prefix + 'marker=001o/2'),
@@ -1171,7 +1154,7 @@ class VersionedWritesOldContainersTestCase(VersionedWritesBaseTestCase):
         self.app.register(
             'DELETE', '/v1/a/c/o', swob.HTTPOk, {}, 'passed')
         self.app.register(
-            'GET', '/v1/a/ver_cont?format=json&prefix=001o/&'
+            'GET', '/v1/a/ver_cont?prefix=001o/&'
                    'marker=&reverse=on',
             swob.HTTPOk, {},
             '[{"hash": "x", '
@@ -1185,7 +1168,7 @@ class VersionedWritesOldContainersTestCase(VersionedWritesBaseTestCase):
             '"name": "001o/2", '
             '"content_type": "text/plain"}]')
         self.app.register(
-            'GET', '/v1/a/ver_cont?format=json&prefix=001o/'
+            'GET', '/v1/a/ver_cont?prefix=001o/'
                    '&marker=001o/2',
             swob.HTTPNotFound, {}, None)
         self.app.register(
@@ -1206,7 +1189,7 @@ class VersionedWritesOldContainersTestCase(VersionedWritesBaseTestCase):
         self.assertEqual(status, '403 Forbidden')
         self.assertEqual(len(authorize_call), 1)
         self.assertRequestEqual(req, authorize_call[0])
-        prefix_listing_prefix = '/v1/a/ver_cont?format=json&prefix=001o/&'
+        prefix_listing_prefix = '/v1/a/ver_cont?prefix=001o/&'
         self.assertEqual(self.app.calls, [
             ('GET', prefix_listing_prefix + 'marker=&reverse=on'),
             ('GET', prefix_listing_prefix + 'marker=001o/2'),
@@ -1223,7 +1206,7 @@ class VersionedWritesOldContainersTestCase(VersionedWritesBaseTestCase):
 
         # first container server can reverse
         self.app.register(
-            'GET', '/v1/a/ver_cont?format=json&prefix=001o/&'
+            'GET', '/v1/a/ver_cont?prefix=001o/&'
                    'marker=&reverse=on',
             swob.HTTPOk, {}, json.dumps(list(reversed(old_versions[2:]))))
         # but all objects are already gone
@@ -1239,21 +1222,21 @@ class VersionedWritesOldContainersTestCase(VersionedWritesBaseTestCase):
 
         # second container server can't reverse
         self.app.register(
-            'GET', '/v1/a/ver_cont?format=json&prefix=001o/&'
+            'GET', '/v1/a/ver_cont?prefix=001o/&'
                    'marker=001o/2&reverse=on',
             swob.HTTPOk, {}, json.dumps(old_versions[3:]))
 
         # subsequent requests shouldn't reverse
         self.app.register(
-            'GET', '/v1/a/ver_cont?format=json&prefix=001o/&'
+            'GET', '/v1/a/ver_cont?prefix=001o/&'
                    'marker=&end_marker=001o/2',
             swob.HTTPOk, {}, json.dumps(old_versions[:1]))
         self.app.register(
-            'GET', '/v1/a/ver_cont?format=json&prefix=001o/&'
+            'GET', '/v1/a/ver_cont?prefix=001o/&'
                    'marker=001o/0&end_marker=001o/2',
             swob.HTTPOk, {}, json.dumps(old_versions[1:2]))
         self.app.register(
-            'GET', '/v1/a/ver_cont?format=json&prefix=001o/&'
+            'GET', '/v1/a/ver_cont?prefix=001o/&'
                    'marker=001o/1&end_marker=001o/2',
             swob.HTTPOk, {}, '[]')
         self.app.register(
@@ -1272,7 +1255,7 @@ class VersionedWritesOldContainersTestCase(VersionedWritesBaseTestCase):
                      'CONTENT_LENGTH': '0'})
         status, headers, body = self.call_vw(req)
         self.assertEqual(status, '204 No Content')
-        prefix_listing_prefix = '/v1/a/ver_cont?format=json&prefix=001o/&'
+        prefix_listing_prefix = '/v1/a/ver_cont?prefix=001o/&'
         self.assertEqual(self.app.calls, [
             ('GET', prefix_listing_prefix + 'marker=&reverse=on'),
             ('GET', '/v1/a/ver_cont/001o/4'),
@@ -1298,7 +1281,7 @@ class VersionedWritesOldContainersTestCase(VersionedWritesBaseTestCase):
 
         # first container server can reverse
         self.app.register(
-            'GET', '/v1/a/ver_cont?format=json&prefix=001o/&'
+            'GET', '/v1/a/ver_cont?prefix=001o/&'
                    'marker=&reverse=on',
             swob.HTTPOk, {}, json.dumps(list(reversed(old_versions[-2:]))))
         # but both objects are already gone
@@ -1311,21 +1294,21 @@ class VersionedWritesOldContainersTestCase(VersionedWritesBaseTestCase):
 
         # second container server can't reverse
         self.app.register(
-            'GET', '/v1/a/ver_cont?format=json&prefix=001o/&'
+            'GET', '/v1/a/ver_cont?prefix=001o/&'
                    'marker=001o/3&reverse=on',
             swob.HTTPOk, {}, json.dumps(old_versions[4:]))
 
         # subsequent requests shouldn't reverse
         self.app.register(
-            'GET', '/v1/a/ver_cont?format=json&prefix=001o/&'
+            'GET', '/v1/a/ver_cont?prefix=001o/&'
                    'marker=&end_marker=001o/3',
             swob.HTTPOk, {}, json.dumps(old_versions[:2]))
         self.app.register(
-            'GET', '/v1/a/ver_cont?format=json&prefix=001o/&'
+            'GET', '/v1/a/ver_cont?prefix=001o/&'
                    'marker=001o/1&end_marker=001o/3',
             swob.HTTPOk, {}, json.dumps(old_versions[2:3]))
         self.app.register(
-            'GET', '/v1/a/ver_cont?format=json&prefix=001o/&'
+            'GET', '/v1/a/ver_cont?prefix=001o/&'
                    'marker=001o/2&end_marker=001o/3',
             swob.HTTPOk, {}, '[]')
         self.app.register(
@@ -1344,7 +1327,7 @@ class VersionedWritesOldContainersTestCase(VersionedWritesBaseTestCase):
                      'CONTENT_LENGTH': '0'})
         status, headers, body = self.call_vw(req)
         self.assertEqual(status, '204 No Content')
-        prefix_listing_prefix = '/v1/a/ver_cont?format=json&prefix=001o/&'
+        prefix_listing_prefix = '/v1/a/ver_cont?prefix=001o/&'
         self.assertEqual(self.app.calls, [
             ('GET', prefix_listing_prefix + 'marker=&reverse=on'),
             ('GET', '/v1/a/ver_cont/001o/4'),
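The listing GETs registered above follow marker-based pagination: the middleware keeps re-requesting the version container, feeding the last name it received back in as `marker`, until an empty page comes back. A rough sketch of that loop (the `get_page` callable is a stand-in for illustration, not the middleware's real interface):

```python
def iter_listing(get_page):
    """Walk a container listing page by page using marker pagination.

    `get_page(marker)` must return the next page of listing records
    (dicts with at least a 'name' key); an empty page ends the walk.
    """
    marker = ''
    while True:
        page = get_page(marker)
        if not page:
            return
        for rec in page:
            yield rec
        # the last name on this page becomes the marker for the next one
        marker = page[-1]['name']
```

This is why the tests expect a trailing request such as `marker=001o/2` that returns `[]`: the empty page is what terminates the walk.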
@ -20,7 +20,7 @@ import time

from six.moves import range
from test import safe_repr
from test.unit import MockTrue
from test.unit import mock_check_drive

from swift.common.swob import Request, HTTPException
from swift.common.http import HTTP_REQUEST_ENTITY_TOO_LARGE, \
@ -372,21 +372,49 @@ class TestConstraints(unittest.TestCase):
self.assertTrue('X-Delete-At' in req.headers)
self.assertEqual(req.headers['X-Delete-At'], expected)

def test_check_dir(self):
self.assertFalse(constraints.check_dir('', ''))
with mock.patch("os.path.isdir", MockTrue()):
self.assertTrue(constraints.check_dir('/srv', 'foo/bar'))
def test_check_drive_invalid_path(self):
root = '/srv/'
with mock_check_drive() as mocks:
self.assertIsNone(constraints.check_dir(root, 'foo?bar'))
self.assertIsNone(constraints.check_mount(root, 'foo bar'))
self.assertIsNone(constraints.check_drive(root, 'foo/bar', True))
self.assertIsNone(constraints.check_drive(root, 'foo%bar', False))
self.assertEqual([], mocks['isdir'].call_args_list)
self.assertEqual([], mocks['ismount'].call_args_list)

def test_check_mount(self):
self.assertFalse(constraints.check_mount('', ''))
with mock.patch("swift.common.utils.ismount", MockTrue()):
self.assertTrue(constraints.check_mount('/srv', '1'))
self.assertTrue(constraints.check_mount('/srv', 'foo-bar'))
self.assertTrue(constraints.check_mount(
'/srv', '003ed03c-242a-4b2f-bee9-395f801d1699'))
self.assertFalse(constraints.check_mount('/srv', 'foo bar'))
self.assertFalse(constraints.check_mount('/srv', 'foo/bar'))
self.assertFalse(constraints.check_mount('/srv', 'foo?bar'))
def test_check_drive_ismount(self):
root = '/srv'
path = 'sdb1'
with mock_check_drive(ismount=True) as mocks:
self.assertIsNone(constraints.check_dir(root, path))
self.assertIsNone(constraints.check_drive(root, path, False))
self.assertEqual([mock.call('/srv/sdb1'), mock.call('/srv/sdb1')],
mocks['isdir'].call_args_list)
self.assertEqual([], mocks['ismount'].call_args_list)
with mock_check_drive(ismount=True) as mocks:
self.assertEqual('/srv/sdb1', constraints.check_mount(root, path))
self.assertEqual('/srv/sdb1', constraints.check_drive(
root, path, True))
self.assertEqual([], mocks['isdir'].call_args_list)
self.assertEqual([mock.call('/srv/sdb1'), mock.call('/srv/sdb1')],
mocks['ismount'].call_args_list)

def test_check_drive_isdir(self):
root = '/srv'
path = 'sdb2'
with mock_check_drive(isdir=True) as mocks:
self.assertEqual('/srv/sdb2', constraints.check_dir(root, path))
self.assertEqual('/srv/sdb2', constraints.check_drive(
root, path, False))
self.assertEqual([mock.call('/srv/sdb2'), mock.call('/srv/sdb2')],
mocks['isdir'].call_args_list)
self.assertEqual([], mocks['ismount'].call_args_list)
with mock_check_drive(isdir=True) as mocks:
self.assertIsNone(constraints.check_mount(root, path))
self.assertIsNone(constraints.check_drive(root, path, True))
self.assertEqual([], mocks['isdir'].call_args_list)
self.assertEqual([mock.call('/srv/sdb2'), mock.call('/srv/sdb2')],
mocks['ismount'].call_args_list)

def test_check_float(self):
self.assertFalse(constraints.check_float(''))
@ -590,12 +590,61 @@ class TestDBReplicator(unittest.TestCase):
self.assertFalse(
replicator._usync_db(0, FakeBroker(), fake_http, '12345', '67890'))

def test_stats(self):
# I'm not sure how to test that this logs the right thing,
# but we can at least make sure it gets covered.
replicator = TestReplicator({})
@mock.patch('swift.common.db_replicator.dump_recon_cache')
@mock.patch('swift.common.db_replicator.time.time', return_value=1234.5678)
def test_stats(self, mock_time, mock_recon_cache):
logger = unit.debug_logger('test-replicator')
replicator = TestReplicator({}, logger=logger)
replicator._zero_stats()
self.assertEqual(replicator.stats['start'], mock_time.return_value)
replicator._report_stats()
self.assertEqual(logger.get_lines_for_level('info'), [
'Attempted to replicate 0 dbs in 0.00000 seconds (0.00000/s)',
'Removed 0 dbs',
'0 successes, 0 failures',
'diff:0 diff_capped:0 empty:0 hashmatch:0 no_change:0 '
'remote_merge:0 rsync:0 ts_repl:0',
])
self.assertEqual(1, len(mock_recon_cache.mock_calls))
self.assertEqual(mock_recon_cache.mock_calls[0][1][0], {
'replication_time': 0.0,
'replication_last': mock_time.return_value,
'replication_stats': replicator.stats,
})

mock_recon_cache.reset_mock()
logger.clear()
replicator.stats.update({
'attempted': 30,
'success': 25,
'remove': 9,
'failure': 1,

'diff': 5,
'diff_capped': 4,
'empty': 7,
'hashmatch': 8,
'no_change': 6,
'remote_merge': 2,
'rsync': 3,
'ts_repl': 10,
})
mock_time.return_value += 246.813576
replicator._report_stats()
self.maxDiff = None
self.assertEqual(logger.get_lines_for_level('info'), [
'Attempted to replicate 30 dbs in 246.81358 seconds (0.12155/s)',
'Removed 9 dbs',
'25 successes, 1 failures',
'diff:5 diff_capped:4 empty:7 hashmatch:8 no_change:6 '
'remote_merge:2 rsync:3 ts_repl:10',
])
self.assertEqual(1, len(mock_recon_cache.mock_calls))
self.assertEqual(mock_recon_cache.mock_calls[0][1][0], {
'replication_time': 246.813576,
'replication_last': mock_time.return_value,
'replication_stats': replicator.stats,
})

def test_replicate_object(self):
db_replicator.ring = FakeRingWithNodes()
@ -136,22 +136,26 @@ class TestWSGI(unittest.TestCase):
_fake_rings(t)
app, conf, logger, log_name = wsgi.init_request_processor(
conf_file, 'proxy-server')
# verify pipeline is catch_errors -> dlo -> proxy-server
# verify pipeline is: catch_errors -> gatekeeper -> listing_formats ->
# copy -> dlo -> proxy-server
expected = swift.common.middleware.catch_errors.CatchErrorMiddleware
self.assertTrue(isinstance(app, expected))
self.assertIsInstance(app, expected)

app = app.app
expected = swift.common.middleware.gatekeeper.GatekeeperMiddleware
self.assertTrue(isinstance(app, expected))
self.assertIsInstance(app, expected)

app = app.app
expected = \
swift.common.middleware.copy.ServerSideCopyMiddleware
expected = swift.common.middleware.listing_formats.ListingFilter
self.assertIsInstance(app, expected)

app = app.app
expected = swift.common.middleware.copy.ServerSideCopyMiddleware
self.assertIsInstance(app, expected)

app = app.app
expected = swift.common.middleware.dlo.DynamicLargeObject
self.assertTrue(isinstance(app, expected))
self.assertIsInstance(app, expected)

app = app.app
expected = \
@ -160,7 +164,7 @@ class TestWSGI(unittest.TestCase):

app = app.app
expected = swift.proxy.server.Application
self.assertTrue(isinstance(app, expected))
self.assertIsInstance(app, expected)
# config settings applied to app instance
self.assertEqual(0.2, app.conn_timeout)
# appconfig returns values from 'proxy-server' section
@ -1478,6 +1482,7 @@ class TestPipelineModification(unittest.TestCase):
self.assertEqual(self.pipeline_modules(app),
['swift.common.middleware.catch_errors',
'swift.common.middleware.gatekeeper',
'swift.common.middleware.listing_formats',
'swift.common.middleware.copy',
'swift.common.middleware.dlo',
'swift.common.middleware.versioned_writes',
@ -1510,6 +1515,7 @@ class TestPipelineModification(unittest.TestCase):
self.assertEqual(self.pipeline_modules(app),
['swift.common.middleware.catch_errors',
'swift.common.middleware.gatekeeper',
'swift.common.middleware.listing_formats',
'swift.common.middleware.copy',
'swift.common.middleware.dlo',
'swift.common.middleware.versioned_writes',
@ -1549,6 +1555,7 @@ class TestPipelineModification(unittest.TestCase):
self.assertEqual(self.pipeline_modules(app),
['swift.common.middleware.catch_errors',
'swift.common.middleware.gatekeeper',
'swift.common.middleware.listing_formats',
'swift.common.middleware.copy',
'swift.common.middleware.slo',
'swift.common.middleware.dlo',
@ -1649,6 +1656,7 @@ class TestPipelineModification(unittest.TestCase):
self.assertEqual(self.pipeline_modules(app), [
'swift.common.middleware.catch_errors',
'swift.common.middleware.gatekeeper',
'swift.common.middleware.listing_formats',
'swift.common.middleware.copy',
'swift.common.middleware.dlo',
'swift.common.middleware.versioned_writes',
@ -1664,6 +1672,7 @@ class TestPipelineModification(unittest.TestCase):
'swift.common.middleware.gatekeeper',
'swift.common.middleware.healthcheck',
'swift.common.middleware.catch_errors',
'swift.common.middleware.listing_formats',
'swift.common.middleware.copy',
'swift.common.middleware.dlo',
'swift.common.middleware.versioned_writes',
@ -1678,6 +1687,7 @@ class TestPipelineModification(unittest.TestCase):
'swift.common.middleware.healthcheck',
'swift.common.middleware.catch_errors',
'swift.common.middleware.gatekeeper',
'swift.common.middleware.listing_formats',
'swift.common.middleware.copy',
'swift.common.middleware.dlo',
'swift.common.middleware.versioned_writes',
@ -1713,7 +1723,7 @@ class TestPipelineModification(unittest.TestCase):
tempdir, policy.ring_name + '.ring.gz')

app = wsgi.loadapp(conf_path)
proxy_app = app.app.app.app.app.app.app
proxy_app = app.app.app.app.app.app.app.app
self.assertEqual(proxy_app.account_ring.serialized_path,
account_ring_path)
self.assertEqual(proxy_app.container_ring.serialized_path,
@ -1,3 +1,4 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2010-2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
@ -42,7 +43,7 @@ from swift.common import constraints
from swift.common.utils import (Timestamp, mkdirs, public, replication,
storage_directory, lock_parent_directory,
ShardRange)
from test.unit import fake_http_connect, debug_logger
from test.unit import fake_http_connect, debug_logger, mock_check_drive
from swift.common.storage_policy import (POLICIES, StoragePolicy)
from swift.common.request_helpers import get_sys_meta_prefix

@ -64,9 +65,8 @@ def save_globals():
class TestContainerController(unittest.TestCase):
"""Test swift.container.server.ContainerController"""
def setUp(self):
"""Set up for testing swift.object_server.ObjectController"""
self.testdir = os.path.join(mkdtemp(),
'tmp_test_object_server_ObjectController')
self.testdir = os.path.join(
mkdtemp(), 'tmp_test_container_server_ContainerController')
mkdirs(self.testdir)
rmtree(self.testdir)
mkdirs(os.path.join(self.testdir, 'sda1'))
@ -325,15 +325,6 @@ class TestContainerController(unittest.TestCase):
resp = req.get_response(self.controller)
self.assertEqual(resp.status_int, 400)

def test_HEAD_insufficient_storage(self):
self.controller = container_server.ContainerController(
{'devices': self.testdir})
req = Request.blank(
'/sda-null/p/a/c', environ={'REQUEST_METHOD': 'HEAD',
'HTTP_X_TIMESTAMP': '1'})
resp = req.get_response(self.controller)
self.assertEqual(resp.status_int, 507)

def test_HEAD_invalid_content_type(self):
req = Request.blank(
'/sda1/p/a/c', environ={'REQUEST_METHOD': 'HEAD'},
@ -363,6 +354,60 @@ class TestContainerController(unittest.TestCase):
self.assertEqual(resp.headers['Server'],
(self.controller.server_type + '/' + swift_version))

def test_insufficient_storage_mount_check_true(self):
conf = {'devices': self.testdir, 'mount_check': 'true'}
container_controller = container_server.ContainerController(conf)
self.assertTrue(container_controller.mount_check)
for method in container_controller.allowed_methods:
if method == 'OPTIONS':
continue
path = '/sda1/p/'
if method == 'REPLICATE':
path += 'suff'
else:
path += 'a/c'
req = Request.blank(path, method=method,
headers={'x-timestamp': '1'})
with mock_check_drive() as mocks:
try:
resp = req.get_response(container_controller)
self.assertEqual(resp.status_int, 507)
mocks['ismount'].return_value = True
resp = req.get_response(container_controller)
self.assertNotEqual(resp.status_int, 507)
# feel free to rip out this last assertion...
expected = 2 if method == 'PUT' else 4
self.assertEqual(resp.status_int // 100, expected)
except AssertionError as e:
self.fail('%s for %s' % (e, method))

def test_insufficient_storage_mount_check_false(self):
conf = {'devices': self.testdir, 'mount_check': 'false'}
container_controller = container_server.ContainerController(conf)
self.assertFalse(container_controller.mount_check)
for method in container_controller.allowed_methods:
if method == 'OPTIONS':
continue
path = '/sda1/p/'
if method == 'REPLICATE':
path += 'suff'
else:
path += 'a/c'
req = Request.blank(path, method=method,
headers={'x-timestamp': '1'})
with mock_check_drive() as mocks:
try:
resp = req.get_response(container_controller)
self.assertEqual(resp.status_int, 507)
mocks['isdir'].return_value = True
resp = req.get_response(container_controller)
self.assertNotEqual(resp.status_int, 507)
# feel free to rip out this last assertion...
expected = 2 if method == 'PUT' else 4
self.assertEqual(resp.status_int // 100, expected)
except AssertionError as e:
self.fail('%s for %s' % (e, method))

def test_PUT(self):
req = Request.blank(
'/sda1/p/a/c',
@ -833,15 +878,6 @@ class TestContainerController(unittest.TestCase):
resp = req.get_response(self.controller)
self.assertEqual(resp.status_int, 400)

def test_PUT_insufficient_storage(self):
self.controller = container_server.ContainerController(
{'devices': self.testdir})
req = Request.blank(
'/sda-null/p/a/c', environ={'REQUEST_METHOD': 'PUT',
'HTTP_X_TIMESTAMP': '1'})
resp = req.get_response(self.controller)
self.assertEqual(resp.status_int, 507)

def test_POST_HEAD_metadata(self):
req = Request.blank(
'/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT'},
@ -968,15 +1004,6 @@ class TestContainerController(unittest.TestCase):
resp = req.get_response(self.controller)
self.assertEqual(resp.status_int, 400)

def test_POST_insufficient_storage(self):
self.controller = container_server.ContainerController(
{'devices': self.testdir})
req = Request.blank(
'/sda-null/p/a/c', environ={'REQUEST_METHOD': 'POST',
'HTTP_X_TIMESTAMP': '1'})
resp = req.get_response(self.controller)
self.assertEqual(resp.status_int, 507)

def test_POST_invalid_container_sync_to(self):
self.controller = container_server.ContainerController(
{'devices': self.testdir})
@ -1290,22 +1317,6 @@ class TestContainerController(unittest.TestCase):
sync_containers = [c for c in sync_store.synced_containers_generator()]
self.assertFalse(sync_containers)

def test_REPLICATE_insufficient_storage(self):
conf = {'devices': self.testdir, 'mount_check': 'true'}
self.container_controller = container_server.ContainerController(
conf)

def fake_check_mount(*args, **kwargs):
return False

with mock.patch("swift.common.constraints.check_mount",
fake_check_mount):
req = Request.blank('/sda1/p/suff',
environ={'REQUEST_METHOD': 'REPLICATE'},
headers={})
resp = req.get_response(self.container_controller)
self.assertEqual(resp.status_int, 507)

def test_REPLICATE_rsync_then_merge_works(self):
def fake_rsync_then_merge(self, drive, db_file, args):
return HTTPNoContent()
@ -2033,15 +2044,6 @@ class TestContainerController(unittest.TestCase):
resp = req.get_response(self.controller)
self.assertEqual(resp.status_int, 400)

def test_DELETE_insufficient_storage(self):
self.controller = container_server.ContainerController(
{'devices': self.testdir})
req = Request.blank(
'/sda-null/p/a/c', environ={'REQUEST_METHOD': 'DELETE',
'HTTP_X_TIMESTAMP': '1'})
resp = req.get_response(self.controller)
self.assertEqual(resp.status_int, 507)

def test_GET_over_limit(self):
req = Request.blank(
'/sda1/p/a/c?limit=%d' %
@ -2386,6 +2388,54 @@ class TestContainerController(unittest.TestCase):
resp.content_type, 'application/json',
'Invalid content_type for Accept: %s' % accept)

def test_GET_non_ascii(self):
# make a container
req = Request.blank(
'/sda1/p/a/jsonc', environ={'REQUEST_METHOD': 'PUT',
'HTTP_X_TIMESTAMP': '0'})
resp = req.get_response(self.controller)

noodles = [u"Spätzle", u"ラーメン"]
for n in noodles:
req = Request.blank(
'/sda1/p/a/jsonc/%s' % n.encode("utf-8"),
environ={'REQUEST_METHOD': 'PUT',
'HTTP_X_TIMESTAMP': '1',
'HTTP_X_CONTENT_TYPE': 'text/plain',
'HTTP_X_ETAG': 'x',
'HTTP_X_SIZE': 0})
self._update_object_put_headers(req)
resp = req.get_response(self.controller)
self.assertEqual(resp.status_int, 201)  # sanity check

json_body = [{"name": noodles[0],
"hash": "x",
"bytes": 0,
"content_type": "text/plain",
"last_modified": "1970-01-01T00:00:01.000000"},
{"name": noodles[1],
"hash": "x",
"bytes": 0,
"content_type": "text/plain",
"last_modified": "1970-01-01T00:00:01.000000"}]

# JSON
req = Request.blank(
'/sda1/p/a/jsonc?format=json',
environ={'REQUEST_METHOD': 'GET'})
resp = req.get_response(self.controller)
self.assertEqual(resp.status_int, 200)  # sanity check
self.assertEqual(json.loads(resp.body), json_body)

# Plain text
text_body = u''.join(n + u"\n" for n in noodles).encode('utf-8')
req = Request.blank(
'/sda1/p/a/jsonc?format=text',
environ={'REQUEST_METHOD': 'GET'})
resp = req.get_response(self.controller)
self.assertEqual(resp.status_int, 200)  # sanity check
self.assertEqual(resp.body, text_body)

def test_GET_plain(self):
# make a container
req = Request.blank(
@ -2770,6 +2820,39 @@ class TestContainerController(unittest.TestCase):
{"subdir": "US-TX-"},
{"subdir": "US-UT-"}])

def test_GET_delimiter_non_ascii(self):
req = Request.blank(
'/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT',
'HTTP_X_TIMESTAMP': '0'})
resp = req.get_response(self.controller)
for obj_name in [u"a/❥/1", u"a/❥/2", u"a/ꙮ/1", u"a/ꙮ/2"]:
req = Request.blank(
'/sda1/p/a/c/%s' % obj_name.encode('utf-8'),
environ={
'REQUEST_METHOD': 'PUT', 'HTTP_X_TIMESTAMP': '1',
'HTTP_X_CONTENT_TYPE': 'text/plain', 'HTTP_X_ETAG': 'x',
'HTTP_X_SIZE': 0})
self._update_object_put_headers(req)
resp = req.get_response(self.controller)
self.assertEqual(resp.status_int, 201)

# JSON
req = Request.blank(
'/sda1/p/a/c?prefix=a/&delimiter=/&format=json',
environ={'REQUEST_METHOD': 'GET'})
resp = req.get_response(self.controller)
self.assertEqual(
json.loads(resp.body),
[{"subdir": u"a/❥/"},
{"subdir": u"a/ꙮ/"}])

# Plain text
req = Request.blank(
'/sda1/p/a/c?prefix=a/&delimiter=/&format=text',
environ={'REQUEST_METHOD': 'GET'})
resp = req.get_response(self.controller)
self.assertEqual(resp.body, u"a/❥/\na/ꙮ/\n".encode("utf-8"))

def test_GET_leading_delimiter(self):
req = Request.blank(
'/sda1/p/a/c', environ={'REQUEST_METHOD': 'PUT',
@ -2878,15 +2961,6 @@ class TestContainerController(unittest.TestCase):
"content_type": "text/plain",
"last_modified": "1970-01-01T00:00:01.000000"}])

def test_GET_insufficient_storage(self):
self.controller = container_server.ContainerController(
{'devices': self.testdir})
req = Request.blank(
'/sda-null/p/a/c', environ={'REQUEST_METHOD': 'GET',
'HTTP_X_TIMESTAMP': '1'})
resp = req.get_response(self.controller)
self.assertEqual(resp.status_int, 507)

def test_through_call(self):
inbuf = BytesIO()
errbuf = StringIO()
@ -37,7 +37,7 @@ from swift.account import server as account_server
from swift.common import storage_policy
from swift.common.ring import RingData
from swift.common.storage_policy import StoragePolicy, ECStoragePolicy
from swift.common.middleware import proxy_logging
from swift.common.middleware import listing_formats, proxy_logging
from swift.common import utils
from swift.common.utils import mkdirs, normalize_timestamp, NullLogger
from swift.container import server as container_server
@ -210,8 +210,8 @@ def setup_servers(the_object_server=object_server, extra_conf=None):
(prosrv, acc1srv, acc2srv, con1srv, con2srv, obj1srv, obj2srv, obj3srv,
obj4srv, obj5srv, obj6srv)
nl = NullLogger()
logging_prosv = proxy_logging.ProxyLoggingMiddleware(prosrv, conf,
logger=prosrv.logger)
logging_prosv = proxy_logging.ProxyLoggingMiddleware(
listing_formats.ListingFilter(prosrv), conf, logger=prosrv.logger)
prospa = spawn(wsgi.server, prolis, logging_prosv, nl)
acc1spa = spawn(wsgi.server, acc1lis, acc1srv, nl)
acc2spa = spawn(wsgi.server, acc2lis, acc2srv, nl)
@ -906,7 +906,7 @@ class TestAuditor(unittest.TestCase):
self.disk_file.delete(ts_tomb)
# this get_hashes call will truncate the invalid hashes entry
self.disk_file.manager.get_hashes(
self.devices + '/sda', '0', [], self.disk_file.policy)
'sda', '0', [], self.disk_file.policy)
suffix = basename(dirname(self.disk_file._datadir))
part_dir = dirname(dirname(self.disk_file._datadir))
# sanity checks...
@ -1011,7 +1011,7 @@ class TestAuditor(unittest.TestCase):

# this get_hashes call will truncate the invalid hashes entry
self.disk_file.manager.get_hashes(
self.devices + '/sda', '0', [], self.disk_file.policy)
'sda', '0', [], self.disk_file.policy)
with open(hash_invalid, 'rb') as fp:
self.assertEqual('', fp.read().strip('\n'))  # sanity check
@ -44,7 +44,8 @@ from swift.obj.diskfile import MD5_OF_EMPTY_STRING, update_auditor_status
from test.unit import (FakeLogger, mock as unit_mock, temptree,
patch_policies, debug_logger, EMPTY_ETAG,
make_timestamp_iter, DEFAULT_TEST_EC_TYPE,
requires_o_tmpfile_support, encode_frag_archive_bodies)
requires_o_tmpfile_support, encode_frag_archive_bodies,
mock_check_drive)
from nose import SkipTest
from swift.obj import diskfile
from swift.common import utils
@ -2944,32 +2945,26 @@ class DiskFileMixin(BaseDiskFileTestMixin):

mount_check = None
self.df_mgr.mount_check = True
with mock.patch('swift.obj.diskfile.check_mount',
mock.MagicMock(return_value=False)):
with mock_check_drive(ismount=False):
self.assertEqual(self.df_mgr.get_dev_path(device, mount_check),
None)
with mock.patch('swift.obj.diskfile.check_mount',
mock.MagicMock(return_value=True)):
with mock_check_drive(ismount=True):
self.assertEqual(self.df_mgr.get_dev_path(device, mount_check),
dev_path)

self.df_mgr.mount_check = False
with mock.patch('swift.obj.diskfile.check_dir',
mock.MagicMock(return_value=False)):
with mock_check_drive(isdir=False):
self.assertEqual(self.df_mgr.get_dev_path(device, mount_check),
None)
with mock.patch('swift.obj.diskfile.check_dir',
mock.MagicMock(return_value=True)):
with mock_check_drive(isdir=True):
self.assertEqual(self.df_mgr.get_dev_path(device, mount_check),
dev_path)

mount_check = True
with mock.patch('swift.obj.diskfile.check_mount',
mock.MagicMock(return_value=False)):
with mock_check_drive(ismount=False):
self.assertEqual(self.df_mgr.get_dev_path(device, mount_check),
None)
with mock.patch('swift.obj.diskfile.check_mount',
mock.MagicMock(return_value=True)):
with mock_check_drive(ismount=True):
self.assertEqual(self.df_mgr.get_dev_path(device, mount_check),
dev_path)

@ -5855,7 +5850,7 @@ class TestSuffixHashes(unittest.TestCase):
suffix_dir = os.path.dirname(df._datadir)
for i in itertools.count():
df2 = df._manager.get_diskfile(
df._device_path,
os.path.basename(df._device_path),
df._datadir.split('/')[-3],
df._account,
df._container,
@ -7706,8 +7701,7 @@ class TestSuffixHashes(unittest.TestCase):
for policy in self.iter_policies():
df_mgr = self.df_router[policy]
df_mgr.mount_check = True
with mock.patch('swift.obj.diskfile.check_mount',
mock.MagicMock(side_effect=[False])):
with mock_check_drive(ismount=False):
self.assertRaises(
DiskFileDeviceUnavailable,
df_mgr.get_hashes, self.existing_device, '0', ['123'],
@ -2762,46 +2762,51 @@ class TestObjectReconstructor(BaseTestObjectReconstructor):

paths = []

def fake_check_mount(devices, device):
paths.append(os.path.join(devices, device))
return False
def fake_check_drive(devices, device, mount_check):
path = os.path.join(devices, device)
if (not mount_check) and os.path.isdir(path):
# while mount_check is false, the test still creates the dirs
paths.append(path)
return path
return None

with mock.patch('swift.obj.reconstructor.whataremyips',
return_value=[self.ip]), \
mock.patch.object(self.policy.object_ring, '_devs',
new=stub_ring_devs), \
mock.patch('swift.obj.diskfile.check_mount',
fake_check_mount):
mock.patch('swift.obj.diskfile.check_drive',
fake_check_drive):
part_infos = list(self.reconstructor.collect_parts())
self.assertEqual(2, len(part_infos))  # sanity, same jobs
self.assertEqual(set(int(p['partition']) for p in part_infos),
set([0, 1]))

# ... because ismount was not called
self.assertEqual(paths, [])
# ... because fake_check_drive returned paths for both dirs
self.assertEqual(set(paths), set([
os.path.join(self.devices, dev) for dev in local_devs]))

# ... now with mount check
self._configure_reconstructor(mount_check=True)
self.assertTrue(self.reconstructor.mount_check)
paths = []
for policy in POLICIES:
self.assertTrue(self.reconstructor._df_router[policy].mount_check)
with mock.patch('swift.obj.reconstructor.whataremyips',
return_value=[self.ip]), \
mock.patch.object(self.policy.object_ring, '_devs',
new=stub_ring_devs), \
mock.patch('swift.obj.diskfile.check_mount',
fake_check_mount):
mock.patch('swift.obj.diskfile.check_drive',
fake_check_drive):
part_infos = list(self.reconstructor.collect_parts())
self.assertEqual([], part_infos)  # sanity, no jobs

# ... because fake_ismount returned False for both paths
self.assertEqual(set(paths), set([
os.path.join(self.devices, dev) for dev in local_devs]))
# ... because fake_check_drive returned False for both paths
self.assertFalse(paths)

def fake_check_mount(devices, device):
path = os.path.join(devices, device)
if path.endswith('sda'):
return True
def fake_check_drive(devices, device, mount_check):
self.assertTrue(mount_check)
if device == 'sda':
return os.path.join(devices, device)
else:
return False

@ -2809,8 +2814,8 @@ class TestObjectReconstructor(BaseTestObjectReconstructor):
return_value=[self.ip]), \
mock.patch.object(self.policy.object_ring, '_devs',
new=stub_ring_devs), \
mock.patch('swift.obj.diskfile.check_mount',
fake_check_mount):
mock.patch('swift.obj.diskfile.check_drive',
fake_check_drive):
part_infos = list(self.reconstructor.collect_parts())
self.assertEqual(1, len(part_infos))  # only sda picked up (part 0)
self.assertEqual(part_infos[0]['partition'], 0)
@ -45,7 +45,7 @@ from swift import __version__ as swift_version
|
||||
from swift.common.http import is_success
|
||||
from test import listen_zero
|
||||
from test.unit import FakeLogger, debug_logger, mocked_http_conn, \
|
||||
make_timestamp_iter, DEFAULT_TEST_EC_TYPE
|
||||
make_timestamp_iter, DEFAULT_TEST_EC_TYPE, mock_check_drive
|
||||
from test.unit import connect_tcp, readuntil2crlfs, patch_policies, \
|
||||
encode_frag_archive_bodies
|
||||
from swift.obj import server as object_server
|
||||
@@ -2762,6 +2762,68 @@ class TestObjectController(unittest.TestCase):
         self.assertEqual(resp.headers['Server'],
                          (server_handler.server_type + '/' + swift_version))
 
+    def test_insufficient_storage_mount_check_true(self):
+        conf = {'devices': self.testdir, 'mount_check': 'true'}
+        object_controller = object_server.ObjectController(conf)
+        for policy in POLICIES:
+            mgr = object_controller._diskfile_router[policy]
+            self.assertTrue(mgr.mount_check)
+        for method in object_controller.allowed_methods:
+            if method in ('OPTIONS', 'SSYNC'):
+                continue
+            path = '/sda1/p/'
+            if method == 'REPLICATE':
+                path += 'suff'
+            else:
+                path += 'a/c/o'
+            req = Request.blank(path, method=method,
+                                headers={'x-timestamp': '1',
+                                         'content-type': 'app/test',
+                                         'content-length': 0})
+            with mock_check_drive() as mocks:
+                try:
+                    resp = req.get_response(object_controller)
+                    self.assertEqual(resp.status_int, 507)
+                    mocks['ismount'].return_value = True
+                    resp = req.get_response(object_controller)
+                    self.assertNotEqual(resp.status_int, 507)
+                    # feel free to rip out this last assertion...
+                    expected = 2 if method in ('PUT', 'REPLICATE') else 4
+                    self.assertEqual(resp.status_int // 100, expected)
+                except AssertionError as e:
+                    self.fail('%s for %s' % (e, method))
+
+    def test_insufficient_storage_mount_check_false(self):
+        conf = {'devices': self.testdir, 'mount_check': 'false'}
+        object_controller = object_server.ObjectController(conf)
+        for policy in POLICIES:
+            mgr = object_controller._diskfile_router[policy]
+            self.assertFalse(mgr.mount_check)
+        for method in object_controller.allowed_methods:
+            if method in ('OPTIONS', 'SSYNC'):
+                continue
+            path = '/sda1/p/'
+            if method == 'REPLICATE':
+                path += 'suff'
+            else:
+                path += 'a/c/o'
+            req = Request.blank(path, method=method,
+                                headers={'x-timestamp': '1',
+                                         'content-type': 'app/test',
+                                         'content-length': 0})
+            with mock_check_drive() as mocks:
+                try:
+                    resp = req.get_response(object_controller)
+                    self.assertEqual(resp.status_int, 507)
+                    mocks['isdir'].return_value = True
+                    resp = req.get_response(object_controller)
+                    self.assertNotEqual(resp.status_int, 507)
+                    # feel free to rip out this last assertion...
+                    expected = 2 if method in ('PUT', 'REPLICATE') else 4
+                    self.assertEqual(resp.status_int // 100, expected)
+                except AssertionError as e:
+                    self.fail('%s for %s' % (e, method))
+
     def test_GET(self):
         # Test swift.obj.server.ObjectController.GET
         req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'GET'})
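The tests above depend on a `mock_check_drive` helper imported from `test.unit`, whose implementation is not part of this diff. A minimal sketch of the pattern (a context manager that patches the filesystem checks and yields the mocks so a test can flip a return value mid-test; the patch targets here are illustrative, not Swift's actual ones):

```python
import contextlib
import os.path
from unittest import mock


@contextlib.contextmanager
def mock_check_drive(isdir=False, ismount=False):
    """Patch the dir/mount checks that drive-availability code calls.

    Yields a dict of the patched mocks so the caller can change a
    return value mid-test, as the tests above do with
    mocks['ismount'].return_value = True.
    """
    # Hypothetical patch targets -- the real Swift helper patches the
    # lookups used by its own drive-checking code, not os.path directly.
    with mock.patch('os.path.isdir', return_value=isdir) as mock_isdir, \
            mock.patch('os.path.ismount',
                       return_value=ismount) as mock_ismount:
        yield {'isdir': mock_isdir, 'ismount': mock_ismount}
```

Inside the context every `os.path.ismount()` call returns the configured value, so a 507-then-success sequence can be driven from a single test body.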
@@ -6223,22 +6285,6 @@ class TestObjectController(unittest.TestCase):
             tpool.execute = was_tpool_exe
             diskfile.DiskFileManager._get_hashes = was_get_hashes
 
-    def test_REPLICATE_insufficient_storage(self):
-        conf = {'devices': self.testdir, 'mount_check': 'true'}
-        self.object_controller = object_server.ObjectController(
-            conf, logger=debug_logger())
-        self.object_controller.bytes_per_sync = 1
-
-        def fake_check_mount(*args, **kwargs):
-            return False
-
-        with mock.patch("swift.obj.diskfile.check_mount", fake_check_mount):
-            req = Request.blank('/sda1/p/suff',
-                                environ={'REQUEST_METHOD': 'REPLICATE'},
-                                headers={})
-            resp = req.get_response(self.object_controller)
-            self.assertEqual(resp.status_int, 507)
-
     def test_REPLICATE_reclaims_tombstones(self):
         conf = {'devices': self.testdir, 'mount_check': False,
                 'reclaim_age': 100}
@@ -34,7 +34,8 @@ from swift.obj import ssync_receiver, ssync_sender
 from swift.obj.reconstructor import ObjectReconstructor
 
 from test import listen_zero, unit
-from test.unit import debug_logger, patch_policies, make_timestamp_iter
+from test.unit import (debug_logger, patch_policies, make_timestamp_iter,
+                       mock_check_drive)
 from test.unit.obj.common import write_diskfile
@@ -370,8 +371,7 @@ class TestReceiver(unittest.TestCase):
                 mock.patch.object(
                     self.controller._diskfile_router[POLICIES.legacy],
                     'mount_check', False), \
-                mock.patch('swift.obj.diskfile.check_mount',
-                           return_value=False) as mocked_check_mount:
+                mock_check_drive(isdir=True) as mocks:
             req = swob.Request.blank(
                 '/device/partition', environ={'REQUEST_METHOD': 'SSYNC'})
             resp = req.get_response(self.controller)
@@ -379,14 +379,13 @@ class TestReceiver(unittest.TestCase):
             self.body_lines(resp.body),
             [':ERROR: 0 "Looking for :MISSING_CHECK: START got \'\'"'])
         self.assertEqual(resp.status_int, 200)
-        self.assertFalse(mocked_check_mount.called)
+        self.assertEqual([], mocks['ismount'].call_args_list)
 
         with mock.patch.object(self.controller, 'replication_semaphore'), \
                 mock.patch.object(
                     self.controller._diskfile_router[POLICIES.legacy],
                     'mount_check', True), \
-                mock.patch('swift.obj.diskfile.check_mount',
-                           return_value=False) as mocked_check_mount:
+                mock_check_drive(ismount=False) as mocks:
             req = swob.Request.blank(
                 '/device/partition', environ={'REQUEST_METHOD': 'SSYNC'})
             resp = req.get_response(self.controller)
@@ -396,12 +395,12 @@ class TestReceiver(unittest.TestCase):
             "was not enough space to save the resource. Drive: "
             "device</p></html>"])
         self.assertEqual(resp.status_int, 507)
-        mocked_check_mount.assert_called_once_with(
+        self.assertEqual([mock.call(os.path.join(
             self.controller._diskfile_router[POLICIES.legacy].devices,
-            'device')
+            'device'))], mocks['ismount'].call_args_list)
 
-        mocked_check_mount.reset_mock()
-        mocked_check_mount.return_value = True
+        mocks['ismount'].reset_mock()
+        mocks['ismount'].return_value = True
         req = swob.Request.blank(
             '/device/partition', environ={'REQUEST_METHOD': 'SSYNC'})
         resp = req.get_response(self.controller)
@@ -409,9 +408,9 @@ class TestReceiver(unittest.TestCase):
             self.body_lines(resp.body),
             [':ERROR: 0 "Looking for :MISSING_CHECK: START got \'\'"'])
         self.assertEqual(resp.status_int, 200)
-        mocked_check_mount.assert_called_once_with(
+        self.assertEqual([mock.call(os.path.join(
             self.controller._diskfile_router[POLICIES.legacy].devices,
-            'device')
+            'device'))], mocks['ismount'].call_args_list)
 
     def test_SSYNC_Exception(self):
@@ -902,18 +902,32 @@ class TestFuncs(unittest.TestCase):
 
         node = {'ip': '1.2.3.4', 'port': 6200, 'device': 'sda'}
 
-        source1 = TestSource(['abcd', '1234', 'abc', None])
-        source2 = TestSource(['efgh5678'])
+        data = ['abcd', '1234', 'efgh', '5678', 'lots', 'more', 'data']
+
+        # NB: content length on source1 should be correct
+        # but that reversed piece never makes it to the client
+        source1 = TestSource(data[:2] + [data[2][::-1], None] + data[3:])
+        source2 = TestSource(data[2:4] + ['nope', None])
+        source3 = TestSource(data[4:])
         req = Request.blank('/v1/a/c/o')
         handler = GetOrHeadHandler(
             self.app, req, 'Object', None, None, None, {},
             client_chunk_size=8)
 
+        range_headers = []
+        sources = [(source2, node), (source3, node)]
+
+        def mock_get_source_and_node():
+            range_headers.append(handler.backend_headers['Range'])
+            return sources.pop(0)
+
         app_iter = handler._make_app_iter(req, node, source1)
         with mock.patch.object(handler, '_get_source_and_node',
-                               lambda: (source2, node)):
+                               side_effect=mock_get_source_and_node):
             client_chunks = list(app_iter)
-        self.assertEqual(client_chunks, ['abcd1234', 'efgh5678'])
+        self.assertEqual(range_headers, ['bytes=8-27', 'bytes=16-27'])
+        self.assertEqual(client_chunks, [
+            'abcd1234', 'efgh5678', 'lotsmore', 'data'])
 
     def test_client_chunk_size_resuming_chunked(self):
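The reworked resume test asserts specific `Range` headers: the object is 28 bytes (`'abcd1234efgh5678lotsmoredata'`), so a retry after 8 delivered bytes asks for `bytes=8-27`, and a second retry after 16 bytes asks for `bytes=16-27`. That arithmetic can be sketched as follows (a simplified whole-object case, not Swift's actual `GetOrHeadHandler` logic, which also handles suffix and multi-range requests):

```python
def resume_range_header(bytes_received, total_length):
    """Range header for resuming an interrupted whole-object GET.

    HTTP byte ranges are inclusive on both ends, hence total_length - 1.
    """
    if bytes_received >= total_length:
        raise ValueError('nothing left to fetch')
    return 'bytes=%d-%d' % (bytes_received, total_length - 1)
```

For the test's 28-byte object this yields `'bytes=8-27'` and then `'bytes=16-27'`, matching the `range_headers` the test asserts.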
@@ -21,7 +21,7 @@ from eventlet import Timeout
 
 from swift.common.swob import Request
 from swift.proxy import server as proxy_server
-from swift.proxy.controllers.base import headers_to_container_info
+from swift.proxy.controllers.base import headers_to_container_info, Controller
 from test.unit import fake_http_connect, FakeRing, FakeMemcache
 from swift.common.storage_policy import StoragePolicy
 from swift.common.request_helpers import get_sys_meta_prefix
@@ -111,6 +111,24 @@ class TestContainerController(TestRingBase):
         from_memcache = self.app.memcache.get('container/a/c')
         self.assertTrue(from_memcache)
 
+    @mock.patch('swift.proxy.controllers.container.clear_info_cache')
+    @mock.patch.object(Controller, 'make_requests')
+    def test_container_cache_cleared_after_PUT(
+            self, mock_make_requests, mock_clear_info_cache):
+        parent_mock = mock.Mock()
+        parent_mock.attach_mock(mock_make_requests, 'make_requests')
+        parent_mock.attach_mock(mock_clear_info_cache, 'clear_info_cache')
+        controller = proxy_server.ContainerController(self.app, 'a', 'c')
+        callback = self._make_callback_func({})
+        req = Request.blank('/v1/a/c')
+        with mock.patch('swift.proxy.controllers.base.http_connect',
+                        fake_http_connect(200, 200, give_connect=callback)):
+            controller.PUT(req)
+
+        # Ensure cache is cleared after the PUT request
+        self.assertEqual(parent_mock.mock_calls[0][0], 'make_requests')
+        self.assertEqual(parent_mock.mock_calls[1][0], 'clear_info_cache')
+
     def test_swift_owner(self):
         owner_headers = {
             'x-container-read': 'value', 'x-container-write': 'value',
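The new cache-invalidation test uses `Mock.attach_mock` to record two independently patched calls on one parent mock so their relative order can be asserted. The same technique in isolation (the function and call names here are illustrative):

```python
from unittest import mock


def update_then_invalidate(write, invalidate):
    # Stand-in for the behavior under test: the backend write must
    # happen before the cache entry is cleared.
    write('payload')
    invalidate('container/a/c')


mock_write = mock.Mock()
mock_invalidate = mock.Mock()

# Attaching both mocks to one parent merges their call histories into
# a single ordered parent.mock_calls list.
parent = mock.Mock()
parent.attach_mock(mock_write, 'write')
parent.attach_mock(mock_invalidate, 'invalidate')

update_then_invalidate(mock_write, mock_invalidate)

# Each entry in mock_calls is a (name, args, kwargs) tuple, so [0][0]
# is the name of the first recorded call.
assert parent.mock_calls[0][0] == 'write'
assert parent.mock_calls[1][0] == 'invalidate'
```

Without a shared parent, two separate mocks each know their own calls but nothing about ordering between them, which is exactly what the PUT-then-invalidate test needs to verify.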
@@ -59,7 +59,7 @@ from swift.proxy import server as proxy_server
 from swift.proxy.controllers.obj import ReplicatedObjectController
 from swift.obj import server as object_server
 from swift.common.middleware import proxy_logging, versioned_writes, \
-    copy
+    copy, listing_formats
 from swift.common.middleware.acl import parse_acl, format_acl
 from swift.common.exceptions import ChunkReadTimeout, DiskFileNotExist, \
     APIVersionError, ChunkWriteTimeout
@@ -3192,8 +3192,6 @@ class TestReplicatedObjectController(
 
     def test_POST(self):
         with save_globals():
-            self.app.object_post_as_copy = False
-
             def test_status_map(statuses, expected):
                 set_http_connect(*statuses)
                 self.app.memcache.store = {}
@@ -3218,7 +3216,6 @@ class TestReplicatedObjectController(
     def test_POST_backend_headers(self):
         # reset the router post patch_policies
         self.app.obj_controller_router = proxy_server.ObjectControllerRouter()
-        self.app.object_post_as_copy = False
         self.app.sort_nodes = lambda nodes, *args, **kwargs: nodes
         backend_requests = []
 
@@ -3436,7 +3433,6 @@ class TestReplicatedObjectController(
     def test_POST_meta_val_len(self):
         with save_globals():
             limit = constraints.MAX_META_VALUE_LENGTH
-            self.app.object_post_as_copy = False
             ReplicatedObjectController(
                 self.app, 'account', 'container', 'object')
             set_http_connect(200, 200, 202, 202, 202)
@@ -3462,7 +3458,6 @@ class TestReplicatedObjectController(
             return
         with save_globals():
             limit = constraints.MAX_META_VALUE_LENGTH
-            self.app.object_post_as_copy = False
             controller = ReplicatedObjectController(
                 self.app, 'account', 'container', 'object')
             set_http_connect(200, 200, 202, 202, 202)
@@ -3478,7 +3473,6 @@ class TestReplicatedObjectController(
     def test_POST_meta_key_len(self):
         with save_globals():
             limit = constraints.MAX_META_NAME_LENGTH
-            self.app.object_post_as_copy = False
             set_http_connect(200, 200, 202, 202, 202)
             #                acct cont obj obj obj
             req = Request.blank(
@@ -4224,7 +4218,6 @@ class TestReplicatedObjectController(
 
     def test_PUT_POST_requires_container_exist(self):
         with save_globals():
-            self.app.object_post_as_copy = False
             self.app.memcache = FakeMemcacheReturnsNone()
             controller = ReplicatedObjectController(
                 self.app, 'account', 'container', 'object')
@@ -4837,7 +4830,6 @@ class TestReplicatedObjectController(
             called[0] = True
             return HTTPUnauthorized(request=req)
         with save_globals():
-            self.app.object_post_as_copy = False
             set_http_connect(200, 200, 201, 201, 201)
             controller = ReplicatedObjectController(
                 self.app, 'account', 'container', 'object')
@@ -4868,7 +4860,6 @@ class TestReplicatedObjectController(
 
     def test_POST_converts_delete_after_to_delete_at(self):
         with save_globals():
-            self.app.object_post_as_copy = False
             controller = ReplicatedObjectController(
                 self.app, 'account', 'container', 'object')
             set_http_connect(200, 200, 202, 202, 202)
@@ -5365,7 +5356,6 @@ class TestReplicatedObjectController(
 
     def test_POST_x_container_headers_with_more_container_replicas(self):
         self.app.container_ring.set_replicas(4)
-        self.app.object_post_as_copy = False
 
         req = Request.blank('/v1/a/c/o',
                             environ={'REQUEST_METHOD': 'POST'},
@@ -6258,21 +6248,7 @@ class BaseTestECObjectController(BaseTestObjectController):
             self.ec_policy.object_ring.replica_count)
 
         if method == 'POST':
-            # Take care fast post here!
-            orig_post_as_copy = getattr(
-                _test_servers[0], 'object_post_as_copy', None)
-            try:
-                _test_servers[0].object_post_as_copy = False
-                with mock.patch.object(
-                        _test_servers[0],
-                        'object_post_as_copy', False):
-                    headers = get_ring_reloaded_response(method)
-            finally:
-                if orig_post_as_copy is None:
-                    del _test_servers[0].object_post_as_copy
-                else:
-                    _test_servers[0].object_post_as_copy = \
-                        orig_post_as_copy
+            headers = get_ring_reloaded_response(method)
 
         exp = 'HTTP/1.1 20'
         self.assertEqual(headers[:len(exp)], exp)
@@ -9200,10 +9176,11 @@ class TestAccountControllerFakeGetResponse(unittest.TestCase):
     """
    def setUp(self):
        conf = {'account_autocreate': 'yes'}
-        self.app = proxy_server.Application(conf, FakeMemcache(),
-                                            account_ring=FakeRing(),
-                                            container_ring=FakeRing())
-        self.app.memcache = FakeMemcacheReturnsNone()
+        self.app = listing_formats.ListingFilter(
+            proxy_server.Application(conf, FakeMemcache(),
+                                     account_ring=FakeRing(),
+                                     container_ring=FakeRing()))
+        self.app.app.memcache = FakeMemcacheReturnsNone()
 
     def test_GET_autocreate_accept_json(self):
         with save_globals():
@@ -9593,12 +9570,15 @@ class TestSocketObjectVersions(unittest.TestCase):
         ])
         conf = {'devices': _testdir, 'swift_dir': _testdir,
                 'mount_check': 'false', 'allowed_headers': allowed_headers}
-        prosrv = versioned_writes.VersionedWritesMiddleware(
-            copy.ServerSideCopyMiddleware(
-                proxy_logging.ProxyLoggingMiddleware(
-                    _test_servers[0], conf,
-                    logger=_test_servers[0].logger), conf),
-            {})
+        prosrv = listing_formats.ListingFilter(
+            copy.ServerSideCopyMiddleware(
+                versioned_writes.VersionedWritesMiddleware(
+                    proxy_logging.ProxyLoggingMiddleware(
+                        _test_servers[0], conf,
+                        logger=_test_servers[0].logger), {}),
+                {}
+            )
+        )
         self.coro = spawn(wsgi.server, prolis, prosrv, NullLogger())
         # replace global prosrv with one that's filtered with version
         # middleware
@@ -307,11 +307,6 @@ class TestObjectSysmeta(unittest.TestCase):
         # test fast-post by issuing requests to the proxy app
         self._test_sysmeta_not_updated_by_POST(self.app)
 
-    def test_sysmeta_not_updated_by_POST_as_copy(self):
-        # test post-as-copy by issuing requests to the copy middleware app
-        self.copy_app.object_post_as_copy = True
-        self._test_sysmeta_not_updated_by_POST(self.copy_app)
-
     def test_sysmeta_updated_by_COPY(self):
         # check sysmeta is updated by a COPY in same way as user meta by
         # issuing requests to the copy middleware app
@@ -482,8 +477,3 @@ class TestObjectSysmeta(unittest.TestCase):
 
     def test_transient_sysmeta_replaced_by_PUT_or_POST(self):
         self._test_transient_sysmeta_replaced_by_PUT_or_POST(self.app)
-
-    def test_transient_sysmeta_replaced_by_PUT_or_POST_as_copy(self):
-        # test post-as-copy by issuing requests to the copy middleware app
-        self.copy_app.object_post_as_copy = True
-        self._test_transient_sysmeta_replaced_by_PUT_or_POST(self.copy_app)