Merge remote-tracking branch 'remotes/origin/master' into merge-master

Include a trivial whitespace change in .zuul.yaml to make sure Zuul
realizes it changed on master.

Change-Id: I24069547265db9ebd39e565f966f4d0b82c0aea2
Romain LE DISEZ 2019-03-27 22:09:23 +01:00 committed by Tim Burke
commit 6afc1130fd
53 changed files with 1526 additions and 861 deletions


@ -79,7 +79,7 @@ Minwoo Bae <minwoob@us.ibm.com> Minwoo B
Jaivish Kothari <jaivish.kothari@nectechnologies.in> <janonymous.codevulture@gmail.com>
Michael Matur <michael.matur@gmail.com>
Kazuhiro Miyahara <miyahara.kazuhiro@lab.ntt.co.jp>
Alexandra Settle <alexandra.settle@rackspace.com>
Alexandra Settle <asettle@suse.com> <alexandra.settle@rackspace.com>
Kenichiro Matsuda <matsuda_kenichi@jp.fujitsu.com>
Atsushi Sakai <sakaia@jp.fujitsu.com>
Takashi Natsume <natsume.takashi@lab.ntt.co.jp>


@ -35,6 +35,7 @@
- job:
name: swift-tox-py35
parent: swift-tox-base
nodeset: ubuntu-xenial
description: |
Run unit-tests for swift under cPython version 3.5.
@ -67,6 +68,25 @@
NOSE_COVER_HTML_DIR: '{toxinidir}/cover'
post-run: tools/playbooks/common/cover-post.yaml
- job:
name: swift-tox-py37
parent: swift-tox-base
nodeset: ubuntu-bionic
description: |
Run unit-tests for swift under cPython version 3.7.
Uses tox with the ``py37`` environment.
It sets TMPDIR to an XFS mount point created via
tools/test-setup.sh.
vars:
tox_envlist: py37
bindep_profile: test py37
python_version: 3.7
tox_environment:
NOSE_COVER_HTML: 1
NOSE_COVER_HTML_DIR: '{toxinidir}/cover'
post-run: tools/playbooks/common/cover-post.yaml
- job:
name: swift-tox-func
parent: swift-tox-base
@ -316,6 +336,11 @@
- ^(api-ref|doc|releasenotes)/.*$
- ^test/(functional|probe)/.*$
voting: false
- swift-tox-py37:
irrelevant-files:
- ^(api-ref|doc|releasenotes)/.*$
- ^test/(functional|probe)/.*$
voting: false
- swift-tox-func:
irrelevant-files:
- ^(api-ref|doc|releasenotes)/.*$
@ -344,7 +369,10 @@
- swift-tox-func-s3api-ceph-s3tests-tempauth:
irrelevant-files:
- ^(api-ref|releasenotes)/.*$
- ^test/probe/.*$
# Keep doc/saio -- we use those sample configs in the saio playbooks
# Also keep doc/s3api -- it holds known failures for these tests
- ^doc/(requirements.txt|(manpages|source)/.*)$
- ^test/(unit|probe)/.*$
- ^(.gitreview|.mailmap|AUTHORS|CHANGELOG)$
- swift-probetests-centos-7:
irrelevant-files:
@ -428,3 +456,4 @@
post:
jobs:
- publish-openstack-python-branch-tarball


@ -45,7 +45,7 @@ Alex Holden (alex@alexjonasholden.com)
Alex Pecoraro (alex.pecoraro@emc.com)
Alex Szarka (szarka@inf.u-szeged.hu)
Alex Yang (alex890714@gmail.com)
Alexandra Settle (alexandra.settle@rackspace.com)
Alexandra Settle (asettle@suse.com)
Alexandre Lécuyer (alexandre.lecuyer@corp.ovh.com)
Alfredo Moralejo (amoralej@redhat.com)
Alistair Coles (alistairncoles@gmail.com)
@ -166,6 +166,7 @@ Fujita Tomonori (fujita.tomonori@lab.ntt.co.jp)
Félix Cantournet (felix.cantournet@cloudwatt.com)
Gage Hugo (gh159m@att.com)
Ganesh Maharaj Mahalingam (ganesh.mahalingam@intel.com)
gaobin (gaobin@inspur.com)
gaofei (gao.fei@inspur.com)
Gaurav B. Gangalwar (gaurav@gluster.com)
gecong1973 (ge.cong@zte.com.cn)
@ -173,6 +174,7 @@ gengchc2 (geng.changcai2@zte.com.cn)
Gerard Gine (ggine@swiftstack.com)
Gerry Drudy (gerry.drudy@hpe.com)
Gil Vernik (gilv@il.ibm.com)
Gleb Samsonov (sams-gleb@yandex.ru)
Gonéri Le Bouder (goneri.lebouder@enovance.com)
Graham Hayes (graham.hayes@hpe.com)
Gregory Haynes (greg@greghaynes.net)
@ -276,6 +278,7 @@ Mehdi Abaakouk (sileht@redhat.com)
melissaml (ma.lei@99cloud.net)
Michael Matur (michael.matur@gmail.com)
Michael Shuler (mshuler@gmail.com)
Michele Valsecchi (mvalsecc@redhat.com)
Mike Fedosin (mfedosin@mirantis.com)
Mingyu Li (li.mingyu@99cloud.net)
Minwoo Bae (minwoob@us.ibm.com)
@ -286,6 +289,7 @@ Monty Taylor (mordred@inaugust.com)
Morgan Fainberg (morgan.fainberg@gmail.com)
Morita Kazutaka (morita.kazutaka@gmail.com)
Motonobu Ichimura (motonobu@gmail.com)
Nadeem Syed (snadeem.hameed@gmail.com)
Nakagawa Masaaki (nakagawamsa@nttdata.co.jp)
Nakul Dahiwade (nakul.dahiwade@intel.com)
Nam Nguyen Hoai (namnh@vn.fujitsu.com)


@ -1,3 +1,61 @@
swift (2.21.0, OpenStack Stein release)
* Change the behavior of the EC reconstructor to perform a
fragment rebuild to a handoff node when a primary peer responds
with 507 to the REPLICATE request. This changes EC to match the
existing behavior of replication when drives fail. After a
rebalance of EC rings (potentially removing unmounted/failed
devices), it's most IO efficient to run in handoffs_only mode to
avoid unnecessary rebuilds.
* O_TMPFILE support is now detected by attempting to use it
instead of looking at the kernel version. This allows older
kernels with backported patches to take advantage of the
O_TMPFILE functionality.
* Add slo_manifest_hook callback to allow other middlewares to
impose additional constraints on or make edits to SLO manifests
before being written. For example, a middleware could enforce
minimum segment size or insert data segments.
* Fixed an issue with multi-region EC policies that caused the EC
reconstructor to constantly attempt cross-region rebuild
traffic.
* Fixed an issue where S3 API v4 signatures would not be validated
against the body of the request, allowing a replay attack if
request headers were captured by a malicious third party.
* Display crypto data/metadata details in swift-object-info.
* formpost can now accept a content-encoding parameter.
* Fixed an issue where multipart uploads with the S3 API would
sometimes report an error despite all segments being uploaded
successfully.
* Multipart object segments are now actually deleted when the
multipart object is deleted via the S3 API.
* Swift now returns a 503 (instead of a 500) when an account
auto-create fails.
* Fixed a bug where encryption would store the incorrect key
metadata if the object name starts with a slash.
* Fixed an issue where an object server failure during a client
download could leave an open socket between the proxy and
client.
* Fixed an issue where deleted EC objects didn't have their
on-disk directories cleaned up. This would cause extra resource
usage on the object servers.
* Fixed an issue where bulk requests using XML and Expect:
100-continue would return a malformed HTTP response.
* Various other minor bug fixes and improvements.
swift (2.20.0)
* S3 API compatibility updates
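Among the changes listed above, the O_TMPFILE item describes probing the flag at runtime rather than parsing the kernel version. A minimal standalone sketch of that probe (a hypothetical helper, not Swift's actual function):

```python
import errno
import os

def supports_o_tmpfile(dirpath):
    # Hypothetical helper sketching probe-based detection: attempt to
    # open an anonymous temp file and interpret the failure, instead of
    # inspecting the kernel version. Fall back to the Linux flag value
    # if this Python build lacks os.O_TMPFILE.
    o_tmpfile = getattr(os, 'O_TMPFILE', 0o20000000 | os.O_DIRECTORY)
    try:
        fd = os.open(dirpath, o_tmpfile | os.O_WRONLY)
    except OSError as err:
        # Old kernels report EISDIR or EINVAL; filesystems without
        # O_TMPFILE support report EOPNOTSUPP.
        if err.errno in (errno.EISDIR, errno.EINVAL, errno.EOPNOTSUPP):
            return False
        raise
    else:
        os.close(fd)
        return True
```

Probing like this is what lets older kernels with backported O_TMPFILE patches benefit, since a version check would wrongly rule them out.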


@ -27,7 +27,7 @@ log_level = INFO
interval = 300
# auto_create_account_prefix = .
# report_interval = 300
# concurrency is the level of concurrency o use to do the work, this value
# concurrency is the level of concurrency to use to do the work, this value
# must be set to at least 1
# concurrency = 1
# processes is how many parts to divide the work into, one part per process


@ -76,7 +76,8 @@ Installing dependencies
python2-netifaces python2-pip python2-dnspython \
python2-mock
Note: This installs necessary system dependencies and *most* of the python
.. note::
This installs necessary system dependencies and *most* of the python
dependencies. Later in the process setuptools/distribute or pip will install
and/or upgrade packages.
@ -90,16 +91,16 @@ Using a partition for storage
If you are going to use a separate partition for Swift data, be sure to add
another device when creating the VM, and follow these instructions:
#. Set up a single partition::
#. Set up a single partition::
sudo fdisk /dev/sdb
sudo mkfs.xfs /dev/sdb1
#. Edit ``/etc/fstab`` and add::
#. Edit ``/etc/fstab`` and add::
/dev/sdb1 /mnt/sdb1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 0
#. Create the mount point and the individualized links::
#. Create the mount point and the individualized links::
sudo mkdir /mnt/sdb1
sudo mount /mnt/sdb1
@ -116,14 +117,16 @@ another device when creating the VM, and follow these instructions:
# **Make sure to include the trailing slash after /srv/$x/**
for x in {1..4}; do sudo chown -R ${USER}:${USER} /srv/$x/; done
Note: For OpenSuse users, a user's primary group is `users`, so you have 2 options:
.. note::
For OpenSuse users, a user's primary group is ``users``, so you have 2 options:
* Change `${USER}:${USER}` to `${USER}:users` in all references of this guide; or
* Change ``${USER}:${USER}`` to ``${USER}:users`` in all references of this guide; or
* Create a group for your username and add yourself to it::
sudo groupadd ${USER} && sudo gpasswd -a ${USER} ${USER}
Note: We create the mount points and mount the storage disk under
.. note::
We create the mount points and mount the storage disk under
/mnt/sdb1. This disk will contain one directory per simulated swift node,
each owned by the current swift user.
@ -133,7 +136,7 @@ another device when creating the VM, and follow these instructions:
exist. This prevents disk sync operations from writing to the root
partition in the event a drive is unmounted.
#. Next, skip to :ref:`common-dev-section`.
#. Next, skip to :ref:`common-dev-section`.
.. _loopback-section:
@ -144,7 +147,7 @@ Using a loopback device for storage
If you want to use a loopback device instead of another partition, follow
these instructions:
#. Create the file for the loopback device::
#. Create the file for the loopback device::
sudo mkdir /srv
sudo truncate -s 1GB /srv/swift-disk
@ -153,11 +156,11 @@ these instructions:
Modify size specified in the ``truncate`` command to make a larger or
smaller partition as needed.
#. Edit `/etc/fstab` and add::
#. Edit `/etc/fstab` and add::
/srv/swift-disk /mnt/sdb1 xfs loop,noatime,nodiratime,nobarrier,logbufs=8 0 0
#. Create the mount point and the individualized links::
#. Create the mount point and the individualized links::
sudo mkdir /mnt/sdb1
sudo mount /mnt/sdb1
@ -173,14 +176,16 @@ these instructions:
# **Make sure to include the trailing slash after /srv/$x/**
for x in {1..4}; do sudo chown -R ${USER}:${USER} /srv/$x/; done
Note: For OpenSuse users, a user's primary group is `users`, so you have 2 options:
.. note::
For OpenSuse users, a user's primary group is ``users``, so you have 2 options:
* Change `${USER}:${USER}` to `${USER}:users` in all references of this guide; or
* Change ``${USER}:${USER}`` to ``${USER}:users`` in all references of this guide; or
* Create a group for your username and add yourself to it::
sudo groupadd ${USER} && sudo gpasswd -a ${USER} ${USER}
Note: We create the mount points and mount the loopback file under
.. note::
We create the mount points and mount the loopback file under
/mnt/sdb1. This file will contain one directory per simulated swift node,
each owned by the current swift user.
@ -229,11 +234,11 @@ To persist this, edit and add the following to ``/etc/fstab``::
Getting the code
----------------
#. Check out the python-swiftclient repo::
#. Check out the python-swiftclient repo::
cd $HOME; git clone https://github.com/openstack/python-swiftclient.git
#. Build a development installation of python-swiftclient::
#. Build a development installation of python-swiftclient::
cd $HOME/python-swiftclient; sudo python setup.py develop; cd -
@ -242,23 +247,25 @@ Getting the code
cd $HOME/python-swiftclient; sudo pip install -r requirements.txt; sudo python setup.py develop; cd -
#. Check out the swift repo::
#. Check out the swift repo::
git clone https://github.com/openstack/swift.git
#. Build a development installation of swift::
#. Build a development installation of swift::
cd $HOME/swift; sudo pip install --no-binary cryptography -r requirements.txt; sudo python setup.py develop; cd -
Note: Due to a difference in libssl.so naming in OpenSuse to other Linux distros the wheel/binary wont work so the
cryptography must be built, thus the ``--no-binary cryptography``.
.. note::
Due to a difference in how ``libssl.so`` is named in OpenSuse vs. other Linux distros the
wheel/binary won't work; thus we use ``--no-binary cryptography`` to build ``cryptography``
locally.
Fedora 19 or later users might have to perform the following if development
installation of swift fails::
sudo pip install -U xattr
#. Install swift's test dependencies::
#. Install swift's test dependencies::
cd $HOME/swift; sudo pip install -r test-requirements.txt
@ -266,7 +273,7 @@ Getting the code
Setting up rsync
----------------
#. Create ``/etc/rsyncd.conf``::
#. Create ``/etc/rsyncd.conf``::
sudo cp $HOME/swift/doc/saio/rsyncd.conf /etc/
sudo sed -i "s/<your-user-name>/${USER}/" /etc/rsyncd.conf
@ -275,8 +282,9 @@ Setting up rsync
that is copied and fixed up above:
.. literalinclude:: /../saio/rsyncd.conf
:language: ini
#. On Ubuntu, edit the following line in ``/etc/default/rsync``::
#. On Ubuntu, edit the following line in ``/etc/default/rsync``::
RSYNC_ENABLE=true
@ -288,7 +296,7 @@ Setting up rsync
On OpenSuse, nothing needs to happen here.
#. On platforms with SELinux in ``Enforcing`` mode, either set to ``Permissive``::
#. On platforms with SELinux in ``Enforcing`` mode, either set to ``Permissive``::
sudo setenforce Permissive
@ -296,7 +304,7 @@ Setting up rsync
sudo setsebool -P rsync_full_access 1
#. Start the rsync daemon
#. Start the rsync daemon
* On Ubuntu 14.04, run::
@ -322,7 +330,7 @@ Setting up rsync
sudo service xinetd restart
#. Verify rsync is accepting connections for all servers::
#. Verify rsync is accepting connections for all servers::
rsync rsync://pub@localhost/
@ -362,7 +370,7 @@ running, tokens cannot be validated, and accessing Swift becomes impossible.
Optional: Setting up rsyslog for individual logging
---------------------------------------------------
#. Install the swift rsyslogd configuration::
#. Install the swift rsyslogd configuration::
sudo cp $HOME/swift/doc/saio/rsyslog.d/10-swift.conf /etc/rsyslog.d/
@ -379,13 +387,14 @@ Optional: Setting up rsyslog for individual logging
below:
.. literalinclude:: /../saio/rsyslog.d/10-swift.conf
:language: ini
#. Edit ``/etc/rsyslog.conf`` and make the following change (usually in the
#. Edit ``/etc/rsyslog.conf`` and make the following change (usually in the
"GLOBAL DIRECTIVES" section)::
$PrivDropToGroup adm
#. If using hourly logs (see above) perform::
#. If using hourly logs (see above) perform::
sudo mkdir -p /var/log/swift/hourly
@ -393,7 +402,7 @@ Optional: Setting up rsyslog for individual logging
sudo mkdir -p /var/log/swift
#. Setup the logging directory and start syslog:
#. Setup the logging directory and start syslog:
* On Ubuntu::
@ -415,89 +424,106 @@ After performing the following steps, be sure to verify that Swift has access
to resulting configuration files (sample configuration files are provided with
all defaults in line-by-line comments).
#. Optionally remove an existing swift directory::
#. Optionally remove an existing swift directory::
sudo rm -rf /etc/swift
#. Populate the ``/etc/swift`` directory itself::
#. Populate the ``/etc/swift`` directory itself::
cd $HOME/swift/doc; sudo cp -r saio/swift /etc/swift; cd -
sudo chown -R ${USER}:${USER} /etc/swift
#. Update ``<your-user-name>`` references in the Swift config files::
#. Update ``<your-user-name>`` references in the Swift config files::
find /etc/swift/ -name \*.conf | xargs sudo sed -i "s/<your-user-name>/${USER}/"
The contents of the configuration files provided by executing the above
commands are as follows:
#. ``/etc/swift/swift.conf``
#. ``/etc/swift/swift.conf``
.. literalinclude:: /../saio/swift/swift.conf
:language: ini
#. ``/etc/swift/proxy-server.conf``
#. ``/etc/swift/proxy-server.conf``
.. literalinclude:: /../saio/swift/proxy-server.conf
:language: ini
#. ``/etc/swift/object-expirer.conf``
#. ``/etc/swift/object-expirer.conf``
.. literalinclude:: /../saio/swift/object-expirer.conf
:language: ini
#. ``/etc/swift/container-reconciler.conf``
#. ``/etc/swift/container-reconciler.conf``
.. literalinclude:: /../saio/swift/container-reconciler.conf
:language: ini
#. ``/etc/swift/container-sync-realms.conf``
#. ``/etc/swift/container-sync-realms.conf``
.. literalinclude:: /../saio/swift/container-sync-realms.conf
:language: ini
#. ``/etc/swift/account-server/1.conf``
#. ``/etc/swift/account-server/1.conf``
.. literalinclude:: /../saio/swift/account-server/1.conf
:language: ini
#. ``/etc/swift/container-server/1.conf``
#. ``/etc/swift/container-server/1.conf``
.. literalinclude:: /../saio/swift/container-server/1.conf
:language: ini
#. ``/etc/swift/object-server/1.conf``
#. ``/etc/swift/object-server/1.conf``
.. literalinclude:: /../saio/swift/object-server/1.conf
:language: ini
#. ``/etc/swift/account-server/2.conf``
#. ``/etc/swift/account-server/2.conf``
.. literalinclude:: /../saio/swift/account-server/2.conf
:language: ini
#. ``/etc/swift/container-server/2.conf``
#. ``/etc/swift/container-server/2.conf``
.. literalinclude:: /../saio/swift/container-server/2.conf
:language: ini
#. ``/etc/swift/object-server/2.conf``
#. ``/etc/swift/object-server/2.conf``
.. literalinclude:: /../saio/swift/object-server/2.conf
:language: ini
#. ``/etc/swift/account-server/3.conf``
#. ``/etc/swift/account-server/3.conf``
.. literalinclude:: /../saio/swift/account-server/3.conf
:language: ini
#. ``/etc/swift/container-server/3.conf``
#. ``/etc/swift/container-server/3.conf``
.. literalinclude:: /../saio/swift/container-server/3.conf
:language: ini
#. ``/etc/swift/object-server/3.conf``
#. ``/etc/swift/object-server/3.conf``
.. literalinclude:: /../saio/swift/object-server/3.conf
:language: ini
#. ``/etc/swift/account-server/4.conf``
#. ``/etc/swift/account-server/4.conf``
.. literalinclude:: /../saio/swift/account-server/4.conf
:language: ini
#. ``/etc/swift/container-server/4.conf``
#. ``/etc/swift/container-server/4.conf``
.. literalinclude:: /../saio/swift/container-server/4.conf
:language: ini
#. ``/etc/swift/object-server/4.conf``
#. ``/etc/swift/object-server/4.conf``
.. literalinclude:: /../saio/swift/object-server/4.conf
:language: ini
.. _setup_scripts:
@ -505,17 +531,18 @@ commands are as follows:
Setting up scripts for running Swift
------------------------------------
#. Copy the SAIO scripts for resetting the environment::
#. Copy the SAIO scripts for resetting the environment::
mkdir -p $HOME/bin
cd $HOME/swift/doc; cp saio/bin/* $HOME/bin; cd -
chmod +x $HOME/bin/*
#. Edit the ``$HOME/bin/resetswift`` script
#. Edit the ``$HOME/bin/resetswift`` script
The template ``resetswift`` script looks like the following:
.. literalinclude:: /../saio/bin/resetswift
:language: bash
If you are using a loopback device, add an environment variable to
substitute ``/dev/sdb1`` with ``/srv/swift-disk``::
@ -528,39 +555,44 @@ Setting up scripts for running Swift
sed -i "/find \/var\/log\/swift/d" $HOME/bin/resetswift
#. Install the sample configuration file for running tests::
#. Install the sample configuration file for running tests::
cp $HOME/swift/test/sample.conf /etc/swift/test.conf
The template ``test.conf`` looks like the following:
.. literalinclude:: /../../test/sample.conf
:language: ini
#. Add an environment variable for running tests below::
#. Add an environment variable for running tests below::
echo "export SWIFT_TEST_CONFIG_FILE=/etc/swift/test.conf" >> $HOME/.bashrc
#. Be sure that your ``PATH`` includes the ``bin`` directory::
#. Be sure that your ``PATH`` includes the ``bin`` directory::
echo "export PATH=${PATH}:$HOME/bin" >> $HOME/.bashrc
#. Source the above environment variables into your current environment::
#. Source the above environment variables into your current environment::
. $HOME/.bashrc
#. Construct the initial rings using the provided script::
#. Construct the initial rings using the provided script::
remakerings
The ``remakerings`` script looks like the following:
.. literalinclude:: /../saio/bin/remakerings
:language: bash
You can expect this command to produce the following output. Note
that 3 object rings are created in order to test storage policies and EC in
the SAIO environment. The EC ring is the only one with all 8 devices.
There are also two replication rings, one for 3x replication and another
for 2x replication, but those rings only use 4 devices::
for 2x replication, but those rings only use 4 devices:
.. code-block:: console
Device d0r1z1-127.0.0.1:6010R127.0.0.1:6010/sdb1_"" with 1.0 weight got id 0
Device d1r1z2-127.0.0.2:6020R127.0.0.2:6020/sdb2_"" with 1.0 weight got id 1
@ -593,15 +625,15 @@ Setting up scripts for running Swift
Reassigned 3072 (300.00%) partitions. Balance is now 0.00. Dispersion is now 0.00
#. Read more about Storage Policies and your SAIO :doc:`policies_saio`
#. Read more about Storage Policies and your SAIO :doc:`policies_saio`
#. Verify the unit tests run::
#. Verify the unit tests run::
$HOME/swift/.unittests
Note that the unit tests do not require any swift daemons running.
#. Start the "main" Swift daemon processes (proxy, account, container, and
#. Start the "main" Swift daemon processes (proxy, account, container, and
object)::
startmain
@ -612,27 +644,28 @@ Setting up scripts for running Swift
The ``startmain`` script looks like the following:
.. literalinclude:: /../saio/bin/startmain
:language: bash
#. Get an ``X-Storage-Url`` and ``X-Auth-Token``::
#. Get an ``X-Storage-Url`` and ``X-Auth-Token``::
curl -v -H 'X-Storage-User: test:tester' -H 'X-Storage-Pass: testing' http://127.0.0.1:8080/auth/v1.0
#. Check that you can ``GET`` account::
#. Check that you can ``GET`` account::
curl -v -H 'X-Auth-Token: <token-from-x-auth-token-above>' <url-from-x-storage-url-above>
#. Check that ``swift`` command provided by the python-swiftclient package works::
#. Check that ``swift`` command provided by the python-swiftclient package works::
swift -A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing stat
#. Verify the functional tests run::
#. Verify the functional tests run::
$HOME/swift/.functests
(Note: functional tests will first delete everything in the configured
accounts.)
#. Verify the probe tests run::
#. Verify the probe tests run::
$HOME/swift/.probetests


@ -589,7 +589,8 @@ reseller_prefix = AUTH_
# useful if there are multiple auth systems in the proxy pipeline.
delay_auth_decision = False
# Keystone server details
# Keystone server details. Note that this differs from how swift3 was
# configured: in particular, the Keystone API version must be included.
auth_uri = http://keystonehost:35357/v3
# Connect/read timeout to use when communicating with Keystone


@ -0,0 +1,69 @@
---
features:
- |
Change the behavior of the EC reconstructor to perform a
fragment rebuild to a handoff node when a primary peer responds
with 507 to the REPLICATE request. This changes EC to match the
existing behavior of replication when drives fail. After a
rebalance of EC rings (potentially removing unmounted/failed
devices), it's most IO efficient to run in handoffs_only mode to
avoid unnecessary rebuilds.
- |
O_TMPFILE support is now detected by attempting to use it
instead of looking at the kernel version. This allows older
kernels with backported patches to take advantage of the
O_TMPFILE functionality.
- |
Add slo_manifest_hook callback to allow other middlewares to
impose additional constraints on or make edits to SLO manifests
before being written. For example, a middleware could enforce
minimum segment size or insert data segments.
- |
Fixed an issue with multi-region EC policies that caused the EC
reconstructor to constantly attempt cross-region rebuild
traffic.
- |
Fixed an issue where S3 API v4 signatures would not be validated
against the body of the request, allowing a replay attack if
request headers were captured by a malicious third party.
- Display crypto data/metadata details in swift-object-info.
- formpost can now accept a content-encoding parameter.
- |
Fixed an issue where multipart uploads with the S3 API would
sometimes report an error despite all segments being uploaded
successfully.
- |
Multipart object segments are now actually deleted when the
multipart object is deleted via the S3 API.
- |
Swift now returns a 503 (instead of a 500) when an account
auto-create fails.
- |
Fixed a bug where encryption would store the incorrect key
metadata if the object name starts with a slash.
- |
Fixed an issue where an object server failure during a client
download could leave an open socket between the proxy and
client.
- |
Fixed an issue where deleted EC objects didn't have their
on-disk directories cleaned up. This would cause extra resource
usage on the object servers.
- |
Fixed an issue where bulk requests using XML and Expect:
100-continue would return a malformed HTTP response.
- Various other minor bug fixes and improvements.


@ -7,6 +7,8 @@
current
stein
rocky
queens


@ -0,0 +1,6 @@
===================================
Stein Series Release Notes
===================================
.. release-notes::
:branch: stable/stein


@ -16,7 +16,6 @@
import os
import random
import socket
from swift import gettext_ as _
from logging import DEBUG
from math import sqrt
from time import time
@ -142,10 +141,10 @@ class AccountReaper(Daemon):
continue
self.reap_device(device)
except (Exception, Timeout):
self.logger.exception(_("Exception in top-level account reaper "
"loop"))
self.logger.exception("Exception in top-level account reaper "
"loop")
elapsed = time() - begin
self.logger.info(_('Devices pass completed: %.02fs'), elapsed)
self.logger.info('Devices pass completed: %.02fs', elapsed)
def reap_device(self, device):
"""
@ -255,19 +254,15 @@ class AccountReaper(Daemon):
self.delay_reaping:
return False
account = info['account']
self.logger.info(_('Beginning pass on account %s'), account)
self.logger.info('Beginning pass on account %s', account)
self.reset_stats()
container_limit = 1000
if container_shard is not None:
container_limit *= len(nodes)
try:
marker = ''
while True:
containers = \
list(broker.list_containers_iter(container_limit, marker,
None, None, None))
if not containers:
break
containers = list(broker.list_containers_iter(
container_limit, '', None, None, None))
while containers:
try:
for (container, _junk, _junk, _junk, _junk) in containers:
if six.PY3:
@ -284,43 +279,44 @@ class AccountReaper(Daemon):
self.container_pool.waitall()
except (Exception, Timeout):
self.logger.exception(
_('Exception with containers for account %s'), account)
marker = containers[-1][0]
if marker == '':
break
log = 'Completed pass on account %s' % account
'Exception with containers for account %s', account)
containers = list(broker.list_containers_iter(
container_limit, containers[-1][0], None, None, None))
log_buf = ['Completed pass on account %s' % account]
except (Exception, Timeout):
self.logger.exception(
_('Exception with account %s'), account)
log = _('Incomplete pass on account %s') % account
self.logger.exception('Exception with account %s', account)
log_buf = ['Incomplete pass on account %s' % account]
if self.stats_containers_deleted:
log += _(', %s containers deleted') % self.stats_containers_deleted
log_buf.append(', %s containers deleted' %
self.stats_containers_deleted)
if self.stats_objects_deleted:
log += _(', %s objects deleted') % self.stats_objects_deleted
log_buf.append(', %s objects deleted' % self.stats_objects_deleted)
if self.stats_containers_remaining:
log += _(', %s containers remaining') % \
self.stats_containers_remaining
log_buf.append(', %s containers remaining' %
self.stats_containers_remaining)
if self.stats_objects_remaining:
log += _(', %s objects remaining') % self.stats_objects_remaining
log_buf.append(', %s objects remaining' %
self.stats_objects_remaining)
if self.stats_containers_possibly_remaining:
log += _(', %s containers possibly remaining') % \
self.stats_containers_possibly_remaining
log_buf.append(', %s containers possibly remaining' %
self.stats_containers_possibly_remaining)
if self.stats_objects_possibly_remaining:
log += _(', %s objects possibly remaining') % \
self.stats_objects_possibly_remaining
log_buf.append(', %s objects possibly remaining' %
self.stats_objects_possibly_remaining)
if self.stats_return_codes:
log += _(', return codes: ')
log_buf.append(', return codes: ')
for code in sorted(self.stats_return_codes):
log += '%s %sxxs, ' % (self.stats_return_codes[code], code)
log = log[:-2]
log += _(', elapsed: %.02fs') % (time() - begin)
self.logger.info(log)
log_buf.append('%s %sxxs, ' % (self.stats_return_codes[code],
code))
log_buf[-1] = log_buf[-1][:-2]
log_buf.append(', elapsed: %.02fs' % (time() - begin))
self.logger.info(''.join(log_buf))
self.logger.timing_since('timing', self.start_time)
delete_timestamp = Timestamp(info['delete_timestamp'])
if self.stats_containers_remaining and \
begin - float(delete_timestamp) >= self.reap_not_done_after:
self.logger.warning(
_('Account %(account)s has not been reaped since %(time)s') %
'Account %(account)s has not been reaped since %(time)s' %
{'account': account, 'time': delete_timestamp.isoformat})
return True
@ -379,14 +375,14 @@ class AccountReaper(Daemon):
except ClientException as err:
if self.logger.getEffectiveLevel() <= DEBUG:
self.logger.exception(
_('Exception with %(ip)s:%(port)s/%(device)s'), node)
'Exception with %(ip)s:%(port)s/%(device)s', node)
self.stats_return_codes[err.http_status // 100] = \
self.stats_return_codes.get(err.http_status // 100, 0) + 1
self.logger.increment(
'return_codes.%d' % (err.http_status // 100,))
except (Timeout, socket.error) as err:
self.logger.error(
_('Timeout Exception with %(ip)s:%(port)s/%(device)s'),
'Timeout Exception with %(ip)s:%(port)s/%(device)s',
node)
if not objects:
break
@ -397,21 +393,15 @@ class AccountReaper(Daemon):
self.logger.error('ERROR: invalid storage policy index: %r'
% policy_index)
for obj in objects:
obj_name = obj['name']
if isinstance(obj_name, six.text_type):
obj_name = obj_name.encode('utf8')
pool.spawn(self.reap_object, account, container, part,
nodes, obj_name, policy_index)
nodes, obj['name'], policy_index)
pool.waitall()
except (Exception, Timeout):
self.logger.exception(_('Exception with objects for container '
'%(container)s for account %(account)s'
),
self.logger.exception('Exception with objects for container '
'%(container)s for account %(account)s',
{'container': container,
'account': account})
marker = objects[-1]['name']
if marker == '':
break
successes = 0
failures = 0
timestamp = Timestamp.now()
@ -434,7 +424,7 @@ class AccountReaper(Daemon):
except ClientException as err:
if self.logger.getEffectiveLevel() <= DEBUG:
self.logger.exception(
_('Exception with %(ip)s:%(port)s/%(device)s'), node)
'Exception with %(ip)s:%(port)s/%(device)s', node)
failures += 1
self.logger.increment('containers_failures')
self.stats_return_codes[err.http_status // 100] = \
@ -443,7 +433,7 @@ class AccountReaper(Daemon):
'return_codes.%d' % (err.http_status // 100,))
except (Timeout, socket.error) as err:
self.logger.error(
_('Timeout Exception with %(ip)s:%(port)s/%(device)s'),
'Timeout Exception with %(ip)s:%(port)s/%(device)s',
node)
failures += 1
self.logger.increment('containers_failures')
@ -510,7 +500,7 @@ class AccountReaper(Daemon):
except ClientException as err:
if self.logger.getEffectiveLevel() <= DEBUG:
self.logger.exception(
_('Exception with %(ip)s:%(port)s/%(device)s'), node)
'Exception with %(ip)s:%(port)s/%(device)s', node)
failures += 1
self.logger.increment('objects_failures')
self.stats_return_codes[err.http_status // 100] = \
@ -521,7 +511,7 @@ class AccountReaper(Daemon):
failures += 1
self.logger.increment('objects_failures')
self.logger.error(
_('Timeout Exception with %(ip)s:%(port)s/%(device)s'),
'Timeout Exception with %(ip)s:%(port)s/%(device)s',
node)
if successes > failures:
self.stats_objects_deleted += 1
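The account reaper diff above drops the gettext wrappers and switches from building the pass-summary line by repeated string concatenation to collecting fragments in a list that is joined once. A minimal standalone sketch of that pattern (a hypothetical helper with a simplified stats dict, not the reaper's actual code):

```python
def summarize_pass(account, stats, elapsed):
    # Collect message fragments in a list and join once at the end,
    # rather than rebuilding the string on every '+=' concatenation.
    log_buf = ['Completed pass on account %s' % account]
    if stats.get('containers_deleted'):
        log_buf.append(', %s containers deleted' %
                       stats['containers_deleted'])
    if stats.get('objects_deleted'):
        log_buf.append(', %s objects deleted' % stats['objects_deleted'])
    log_buf.append(', elapsed: %.02fs' % elapsed)
    return ''.join(log_buf)
```

Joining once avoids the quadratic cost of repeated concatenation and keeps each optional fragment a single, self-contained append.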


@ -515,7 +515,12 @@ def main(args=None):
logger = get_logger({}, name='ContainerBroker', log_to_console=True)
broker = ContainerBroker(args.container_db, logger=logger,
skip_commits=True)
try:
broker.get_info()
except Exception as exc:
print('Error opening container DB %s: %s' % (args.container_db, exc),
file=sys.stderr)
return 2
print('Loaded db broker for %s.' % broker.path, file=sys.stderr)
return args.func(broker, args)


@ -178,6 +178,14 @@ class BufferedHTTPConnection(HTTPConnection):
return ret
def putrequest(self, method, url, skip_host=0, skip_accept_encoding=0):
'''Send a request to the server.
:param method: specifies an HTTP request method, e.g. 'GET'.
:param url: specifies the object being requested, e.g. '/index.html'.
:param skip_host: if True, do not automatically add a 'Host:' header
:param skip_accept_encoding: if True, do not automatically add an
'Accept-Encoding:' header
'''
self._method = method
self._path = url
return HTTPConnection.putrequest(self, method, url, skip_host,


@@ -380,6 +380,10 @@ class Bulk(object):
query request.
"""
last_yield = time()
if out_content_type and out_content_type.endswith('/xml'):
to_yield = '<?xml version="1.0" encoding="UTF-8"?>\n'
else:
to_yield = ' '
separator = ''
failed_files = []
resp_dict = {'Response Status': HTTPOk().status,
@@ -390,8 +394,6 @@ class Bulk(object):
try:
if not out_content_type:
raise HTTPNotAcceptable(request=req)
if out_content_type.endswith('/xml'):
yield '<?xml version="1.0" encoding="UTF-8"?>\n'
try:
vrs, account, _junk = req.split_path(2, 3, True)
@@ -452,9 +454,9 @@ class Bulk(object):
for resp, obj_name, retry in pile.asyncstarmap(
do_delete, names_to_delete):
if last_yield + self.yield_frequency < time():
separator = '\r\n\r\n'
last_yield = time()
yield ' '
yield to_yield
to_yield, separator = ' ', '\r\n\r\n'
self._process_delete(resp, pile, obj_name,
resp_dict, failed_files,
failed_file_response, retry)
@@ -462,9 +464,9 @@ class Bulk(object):
# Abort, but drain off the in-progress deletes
for resp, obj_name, retry in pile:
if last_yield + self.yield_frequency < time():
separator = '\r\n\r\n'
last_yield = time()
yield ' '
yield to_yield
to_yield, separator = ' ', '\r\n\r\n'
# Don't pass in the pile, as we shouldn't retry
self._process_delete(
resp, None, obj_name, resp_dict,
@@ -508,14 +510,16 @@ class Bulk(object):
'Response Body': '', 'Number Files Created': 0}
failed_files = []
last_yield = time()
if out_content_type and out_content_type.endswith('/xml'):
to_yield = '<?xml version="1.0" encoding="UTF-8"?>\n'
else:
to_yield = ' '
separator = ''
containers_accessed = set()
req.environ['eventlet.minimum_write_chunk_size'] = 0
try:
if not out_content_type:
raise HTTPNotAcceptable(request=req)
if out_content_type.endswith('/xml'):
yield '<?xml version="1.0" encoding="UTF-8"?>\n'
if req.content_length is None and \
req.headers.get('transfer-encoding',
@@ -533,9 +537,9 @@ class Bulk(object):
containers_created = 0
while True:
if last_yield + self.yield_frequency < time():
separator = '\r\n\r\n'
last_yield = time()
yield ' '
yield to_yield
to_yield, separator = ' ', '\r\n\r\n'
tar_info = next(tar)
if tar_info is None or \
len(failed_files) >= self.max_failed_extractions:
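The deferred-preamble pattern in the hunks above (holding the XML declaration in ``to_yield`` instead of emitting it before any work happens) can be sketched in isolation. The helper below is illustrative only, not Swift's actual code; a fake clock keeps it deterministic:

```python
# Toy sketch of the deferred-first-byte heartbeat pattern: the response
# preamble sits in to_yield and only goes out when a slow item forces a
# keepalive byte, so nothing is committed to the client prematurely.
def heartbeat(items, preamble, clock, yield_frequency=10):
    to_yield = preamble
    last_yield = next(clock)
    for _item in items:
        now = next(clock)
        if last_yield + yield_frequency < now:
            last_yield = now
            out, to_yield = to_yield, ' '
            yield out
        # ... process _item here ...
    yield 'final-response'

ticks = iter([0, 5, 20])  # only the second item is slow enough to heartbeat
chunks = list(heartbeat(['a', 'b'], '<?xml version="1.0"?>\n', ticks))
assert chunks == ['<?xml version="1.0"?>\n', 'final-response']
```

A fast run never emits the preamble as a heartbeat at all; how the preamble is folded into the final body in that case is a detail this sketch omits.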


@@ -114,12 +114,11 @@ greater than 5GB.
"""
from six.moves.urllib.parse import quote, unquote
from swift.common.utils import get_logger, config_true_value, FileLikeIter, \
close_if_possible
from swift.common.swob import Request, HTTPPreconditionFailed, \
HTTPRequestEntityTooLarge, HTTPBadRequest, HTTPException
HTTPRequestEntityTooLarge, HTTPBadRequest, HTTPException, \
wsgi_quote, wsgi_unquote
from swift.common.http import HTTP_MULTIPLE_CHOICES, is_success, HTTP_OK
from swift.common.constraints import check_account_format, MAX_FILE_SIZE
from swift.common.request_helpers import copy_header_subset, remove_items, \
@@ -183,7 +182,7 @@ class ServerSideCopyWebContext(WSGIContext):
def get_source_resp(self, req):
sub_req = make_subrequest(
req.environ, path=quote(req.path_info), headers=req.headers,
req.environ, path=wsgi_quote(req.path_info), headers=req.headers,
swift_source='SSC')
return sub_req.get_response(self.app)
@@ -257,9 +256,9 @@ class ServerSideCopyMiddleware(object):
)(req.environ, start_response)
dest_account = account
if 'Destination-Account' in req.headers:
dest_account = unquote(req.headers.get('Destination-Account'))
dest_account = wsgi_unquote(req.headers.get('Destination-Account'))
dest_account = check_account_format(req, dest_account)
req.headers['X-Copy-From-Account'] = quote(account)
req.headers['X-Copy-From-Account'] = wsgi_quote(account)
account = dest_account
del req.headers['Destination-Account']
dest_container, dest_object = _check_destination_header(req)
@@ -275,7 +274,7 @@ class ServerSideCopyMiddleware(object):
req.path_info = '/%s/%s/%s/%s' % (
ver, dest_account, dest_container, dest_object)
req.headers['Content-Length'] = 0
req.headers['X-Copy-From'] = quote(source)
req.headers['X-Copy-From'] = wsgi_quote(source)
del req.headers['Destination']
return self.handle_PUT(req, start_response)
@@ -312,8 +311,8 @@ class ServerSideCopyMiddleware(object):
def _create_response_headers(self, source_path, source_resp, sink_req):
resp_headers = dict()
acct, path = source_path.split('/', 3)[2:4]
resp_headers['X-Copied-From-Account'] = quote(acct)
resp_headers['X-Copied-From'] = quote(path)
resp_headers['X-Copied-From-Account'] = wsgi_quote(acct)
resp_headers['X-Copied-From'] = wsgi_quote(path)
if 'last-modified' in source_resp.headers:
resp_headers['X-Copied-From-Last-Modified'] = \
source_resp.headers['last-modified']
@@ -334,7 +333,7 @@ class ServerSideCopyMiddleware(object):
src_account_name = req.headers.get('X-Copy-From-Account')
if src_account_name:
src_account_name = check_account_format(
req, unquote(src_account_name))
req, wsgi_unquote(src_account_name))
else:
src_account_name = acct
src_container_name, src_obj_name = _check_copy_from_header(req)


@@ -18,7 +18,8 @@ import hmac
from swift.common.exceptions import UnknownSecretIdError
from swift.common.middleware.crypto.crypto_utils import CRYPTO_KEY_CALLBACK
from swift.common.swob import Request, HTTPException, wsgi_to_bytes
from swift.common.utils import readconf, strict_b64decode, get_logger
from swift.common.utils import readconf, strict_b64decode, get_logger, \
split_path
from swift.common.wsgi import WSGIContext
@@ -77,29 +78,54 @@ class KeyMasterContext(WSGIContext):
version = key_id['v']
if version not in ('1', '2'):
raise ValueError('Unknown key_id version: %s' % version)
if version == '1' and not key_id['path'].startswith(
'/' + self.account + '/'):
# Well shoot. This was the bug that made us notice we needed
# a v2! Hope the current account/container was the original!
key_acct, key_cont, key_obj = (
self.account, self.container, key_id['path'])
else:
key_acct, key_cont, key_obj = split_path(
key_id['path'], 1, 3, True)
check_path = (
self.account, self.container or key_cont, self.obj or key_obj)
if (key_acct, key_cont, key_obj) != check_path:
self.keymaster.logger.info(
"Path stored in meta (%r) does not match path from "
"request (%r)! Using path from meta.",
key_id['path'],
'/' + '/'.join(x for x in [
self.account, self.container, self.obj] if x))
else:
secret_id = self.keymaster.active_secret_id
# v1 had a bug where we would claim the path was just the object
# name if the object started with a slash. Bump versions to
# establish that we can trust the path.
version = '2'
key_acct, key_cont, key_obj = (
self.account, self.container, self.obj)
if (secret_id, version) in self._keys:
return self._keys[(secret_id, version)]
keys = {}
account_path = '/' + self.account
account_path = '/' + key_acct
try:
# self.account/container/obj reflect the level of the *request*,
# which may be different from the level of the key_id-path. Only
# fetch the keys that the request needs.
if self.container:
path = account_path + '/' + self.container
path = account_path + '/' + key_cont
keys['container'] = self.keymaster.create_key(
path, secret_id=secret_id)
if self.obj:
if self.obj.startswith('/') and version == '1':
path = self.obj
if key_obj.startswith('/') and version == '1':
path = key_obj
else:
path = path + '/' + self.obj
path = path + '/' + key_obj
keys['object'] = self.keymaster.create_key(
path, secret_id=secret_id)
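The v1 ambiguity that motivated the version bump can be shown with a tiny sketch; ``v1_key_path`` is a hypothetical name for illustration, not a function in Swift:

```python
# Under v1, an object name that itself began with '/' was recorded as
# the entire key path, silently dropping the account/container prefix,
# so a stored v1 path cannot always be trusted on read.
def v1_key_path(account, container, obj):
    if obj.startswith('/'):
        return obj  # the buggy case: prefix lost
    return '/%s/%s/%s' % (account, container, obj)

assert v1_key_path('AUTH_test', 'cont', 'obj') == '/AUTH_test/cont/obj'
assert v1_key_path('AUTH_test', 'cont', '/obj') == '/obj'  # ambiguous
```

Since a v2 writer never takes the buggy branch, seeing version '2' is what lets the reader trust the stored path.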


@@ -144,7 +144,8 @@ class GetContext(WSGIContext):
def _get_container_listing(self, req, version, account, container,
prefix, marker=''):
con_req = make_subrequest(
req.environ, path='/'.join(['', version, account, container]),
req.environ,
path=quote('/'.join(['', version, account, container])),
method='GET',
headers={'x-auth-token': req.headers.get('x-auth-token')},
agent=('%(orig)s ' + 'DLO MultipartGET'), swift_source='DLO')


@@ -126,7 +126,9 @@ import hmac
from hashlib import sha1
from time import time
import six
from six.moves.urllib.parse import quote
from swift.common.exceptions import MimeInvalid
from swift.common.middleware.tempurl import get_tempurl_keys_from_metadata
from swift.common.utils import streq_const_time, register_swift_info, \
@@ -229,7 +231,7 @@ class FormPost(object):
start_response(status, headers)
return [body]
except MimeInvalid:
body = 'FormPost: invalid starting boundary'
body = b'FormPost: invalid starting boundary'
start_response(
'400 Bad Request',
(('Content-Type', 'text/plain'),
@@ -237,6 +239,8 @@ class FormPost(object):
return [body]
except (FormInvalid, EOFError) as err:
body = 'FormPost: %s' % err
if six.PY3:
body = body.encode('utf-8')
start_response(
'400 Bad Request',
(('Content-Type', 'text/plain'),
@@ -258,6 +262,8 @@ class FormPost(object):
:returns: status_line, headers_list, body
"""
keys = self._get_keys(env)
if six.PY3:
boundary = boundary.encode('utf-8')
status = message = ''
attributes = {}
subheaders = []
@@ -282,14 +288,13 @@ class FormPost(object):
hdrs['Content-Type'] or 'application/octet-stream'
if 'content-encoding' not in attributes and \
'content-encoding' in hdrs:
attributes['content-encoding'] = \
hdrs['Content-Encoding']
attributes['content-encoding'] = hdrs['Content-Encoding']
status, subheaders = \
self._perform_subrequest(env, attributes, fp, keys)
if not status.startswith('2'):
break
else:
data = ''
data = b''
mxln = MAX_VALUE_LENGTH
while mxln:
chunk = fp.read(mxln)
@@ -299,6 +304,8 @@ class FormPost(object):
data += chunk
while fp.read(READ_CHUNK_SIZE):
pass
if six.PY3:
data = data.decode('utf-8')
if 'name' in attrs:
attributes[attrs['name'].lower()] = data.rstrip('\r\n--')
if not status:
@@ -315,6 +322,8 @@ class FormPost(object):
body = status + '\r\nFormPost: ' + message.title()
headers.extend([('Content-Type', 'text/plain'),
('Content-Length', len(body))])
if six.PY3:
body = body.encode('utf-8')
return status, headers, body
status = status.split(' ', 1)[0]
if '?' in redirect:
@@ -324,6 +333,8 @@ class FormPost(object):
redirect += 'status=%s&message=%s' % (quote(status), quote(message))
body = '<html><body><p><a href="%s">' \
'Click to continue...</a></p></body></html>' % redirect
if six.PY3:
body = body.encode('utf-8')
headers.extend(
[('Location', redirect), ('Content-Length', str(len(body)))])
return '303 See Other', headers, body
@@ -385,6 +396,8 @@ class FormPost(object):
attributes.get('max_file_size') or '0',
attributes.get('max_file_count') or '0',
attributes.get('expires') or '0')
if six.PY3:
hmac_body = hmac_body.encode('utf-8')
has_valid_sig = False
for key in keys:


@@ -13,7 +13,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from urllib import quote
from six.moves.urllib.parse import quote
from swift.common.utils import public
from swift.common.middleware.s3api.controllers.base import Controller


@@ -24,7 +24,8 @@ import six
from six.moves.urllib.parse import quote, unquote, parse_qsl
import string
from swift.common.utils import split_path, json, get_swift_info
from swift.common.utils import split_path, json, get_swift_info, \
close_if_possible
from swift.common import swob
from swift.common.http import HTTP_OK, HTTP_CREATED, HTTP_ACCEPTED, \
HTTP_NO_CONTENT, HTTP_UNAUTHORIZED, HTTP_FORBIDDEN, HTTP_NOT_FOUND, \
@@ -110,6 +111,34 @@ def _header_acl_property(resource):
doc='Get and set the %s acl property' % resource)
class HashingInput(object):
"""
wsgi.input wrapper to verify the hash of the input as it's read.
"""
def __init__(self, reader, content_length, hasher, expected_hex_hash):
self._input = reader
self._to_read = content_length
self._hasher = hasher()
self._expected = expected_hex_hash
def read(self, size=None):
chunk = self._input.read(size)
self._hasher.update(chunk)
self._to_read -= len(chunk)
if self._to_read < 0 or (size and size > len(chunk) and self._to_read) or (
self._to_read == 0 and
self._hasher.hexdigest() != self._expected):
self.close()
# Since we don't return the last chunk, the PUT never completes
raise swob.HTTPUnprocessableEntity(
'The X-Amz-Content-SHA256 you specified did not match '
'what we received.')
return chunk
def close(self):
close_if_possible(self._input)
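The verification ``HashingInput`` performs can be sketched standalone with the stdlib; this mirrors the idea, not the exact class:

```python
import hashlib
from io import BytesIO

# Stream the body through a running hash and compare against the digest
# the client declared up front; a mismatch must surface before the last
# chunk reaches the application, so the PUT never completes.
body = b'some object data'
declared = hashlib.sha256(body).hexdigest()

reader = BytesIO(body)
hasher = hashlib.sha256()
to_read = len(body)
while to_read > 0:
    chunk = reader.read(8192)
    if not chunk:
        break  # short body: fewer bytes than Content-Length claimed
    hasher.update(chunk)
    to_read -= len(chunk)

assert to_read == 0 and hasher.hexdigest() == declared
```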
class SigV4Mixin(object):
"""
A request class mixin to provide S3 signature v4 functionality
@@ -401,6 +430,20 @@ class SigV4Mixin(object):
raise InvalidRequest(msg)
else:
hashed_payload = self.headers['X-Amz-Content-SHA256']
if self.content_length == 0:
if hashed_payload != sha256().hexdigest():
raise BadDigest(
'The X-Amz-Content-SHA256 you specified did not match '
'what we received.')
elif self.content_length:
self.environ['wsgi.input'] = HashingInput(
self.environ['wsgi.input'],
self.content_length,
sha256,
hashed_payload)
# else, not provided -- Swift will kick out a 411 Length Required
# which will get translated back to a S3-style response in
# S3Request._swift_error_codes
cr.append(hashed_payload)
return '\n'.join(cr).encode('utf-8')
@@ -1267,8 +1310,11 @@ class S3Request(swob.Request):
sw_req = self.to_swift_req(method, container, obj, headers=headers,
body=body, query=query)
try:
sw_resp = sw_req.get_response(app)
except swob.HTTPException as err:
sw_resp = err
else:
# reuse account and tokens
_, self.account, _ = split_path(sw_resp.environ['PATH_INFO'],
2, 3, True)


@@ -33,6 +33,25 @@ This middleware:
* Optionally retrieves and caches the secret from keystone
to validate the signature locally
.. note::
If upgrading from swift3, the ``auth_version`` config option has been
removed, and the ``auth_uri`` option now includes the Keystone API
version. If you previously had a configuration like
.. code-block:: ini
[filter:s3token]
use = egg:swift3#s3token
auth_uri = https://keystonehost:35357
auth_version = 3
you should now use
.. code-block:: ini
[filter:s3token]
use = egg:swift#s3token
auth_uri = https://keystonehost:35357/v3
"""
import base64


@@ -457,8 +457,8 @@ def parse_and_validate_input(req_body, req_path):
continue
obj_path = '/'.join(['', vrs, account,
seg_dict['path'].lstrip('/')])
if req_path == quote(obj_path):
quote(seg_dict['path'].lstrip('/'))])
if req_path == obj_path:
errors.append(
b"Index %d: manifest must not include itself as a segment"
% seg_index)
@@ -526,7 +526,7 @@ class SloGetContext(WSGIContext):
Raise exception on failures.
"""
sub_req = make_subrequest(
req.environ, path='/'.join(['', version, acc, con, obj]),
req.environ, path=quote('/'.join(['', version, acc, con, obj])),
method='GET',
headers={'x-auth-token': req.headers.get('x-auth-token')},
agent='%(orig)s SLO MultipartGET', swift_source='SLO')
@@ -1109,8 +1109,8 @@ class StaticLargeObject(object):
path2indices[seg_dict['path']].append(index)
def do_head(obj_name):
obj_path = '/'.join(['', vrs, account,
get_valid_utf8_str(obj_name).lstrip('/')])
obj_path = quote('/'.join([
'', vrs, account, get_valid_utf8_str(obj_name).lstrip('/')]))
sub_req = make_subrequest(
req.environ, path=obj_path + '?', # kill the query string


@@ -184,7 +184,7 @@ from eventlet import Timeout
import six
from swift.common.swob import Response, Request, wsgi_to_str
from swift.common.swob import HTTPBadRequest, HTTPForbidden, HTTPNotFound, \
HTTPUnauthorized
HTTPUnauthorized, HTTPMethodNotAllowed
from swift.common.request_helpers import get_sys_meta_prefix
from swift.common.middleware.acl import (
@@ -688,6 +688,9 @@ class TempAuth(object):
"""
req.start_time = time()
handler = None
if req.method != 'GET':
req.response = HTTPMethodNotAllowed(request=req)
return req.response
try:
version, account, user, _junk = split_path(req.path_info,
1, 4, True)


@@ -26,7 +26,6 @@ import sys
import time
import six
from six.moves.urllib.parse import unquote
from swift.common.header_key_dict import HeaderKeyDict
from swift import gettext_ as _
@@ -35,11 +34,11 @@ from swift.common.exceptions import ListingIterError, SegmentError
from swift.common.http import is_success
from swift.common.swob import HTTPBadRequest, \
HTTPServiceUnavailable, Range, is_chunked, multi_range_iterator, \
HTTPPreconditionFailed, wsgi_to_bytes
HTTPPreconditionFailed, wsgi_to_bytes, wsgi_unquote, wsgi_to_str
from swift.common.utils import split_path, validate_device_partition, \
close_if_possible, maybe_multipart_byteranges_to_document_iters, \
multipart_byteranges_to_document_iters, parse_content_type, \
parse_content_range, csv_append, list_from_csv, Spliterator
parse_content_range, csv_append, list_from_csv, Spliterator, quote
from swift.common.wsgi import make_subrequest
@@ -115,14 +114,13 @@ def split_and_validate_path(request, minsegs=1, maxsegs=None,
Utility function to split and validate the request path.
:returns: result of :meth:`~swift.common.utils.split_path` if
everything's okay
everything's okay, as native strings
:raises HTTPBadRequest: if something's not okay
"""
try:
segs = split_path(unquote(request.path),
minsegs, maxsegs, rest_with_last)
segs = request.split_path(minsegs, maxsegs, rest_with_last)
validate_device_partition(segs[0], segs[1])
return segs
return [wsgi_to_str(seg) for seg in segs]
except ValueError as err:
raise HTTPBadRequest(body=str(err), request=request,
content_type='text/plain')
@@ -308,7 +306,7 @@ def check_path_header(req, name, length, error_msg):
:raise: HTTPPreconditionFailed if header value
is not well formatted.
"""
hdr = unquote(req.headers.get(name))
hdr = wsgi_unquote(req.headers.get(name))
if not hdr.startswith('/'):
hdr = '/' + hdr
try:
@@ -391,7 +389,7 @@ class SegmentedIterable(object):
# segment is a plain old object, not some flavor of large
# object; therefore, its etag is its MD5sum and hence we can
# check it.
path = seg_path + '?multipart-manifest=get'
path = quote(seg_path) + '?multipart-manifest=get'
seg_req = make_subrequest(
self.req.environ, path=path, method='GET',
headers={'x-auth-token': self.req.headers.get(


@@ -309,6 +309,50 @@ def str_to_wsgi(native_str):
return bytes_to_wsgi(native_str.encode('utf8', errors='surrogateescape'))
def wsgi_quote(wsgi_str):
if six.PY2:
if not isinstance(wsgi_str, bytes):
raise TypeError('Expected a WSGI string; got %r' % wsgi_str)
return urllib.parse.quote(wsgi_str)
if not isinstance(wsgi_str, str) or any(ord(x) > 255 for x in wsgi_str):
raise TypeError('Expected a WSGI string; got %r' % wsgi_str)
return urllib.parse.quote(wsgi_str, encoding='latin-1')
def wsgi_unquote(wsgi_str):
if six.PY2:
if not isinstance(wsgi_str, bytes):
raise TypeError('Expected a WSGI string; got %r' % wsgi_str)
return urllib.parse.unquote(wsgi_str)
if not isinstance(wsgi_str, str) or any(ord(x) > 255 for x in wsgi_str):
raise TypeError('Expected a WSGI string; got %r' % wsgi_str)
return urllib.parse.unquote(wsgi_str, encoding='latin-1')
def wsgi_quote_plus(wsgi_str):
if six.PY2:
if not isinstance(wsgi_str, bytes):
raise TypeError('Expected a WSGI string; got %r' % wsgi_str)
return urllib.parse.quote_plus(wsgi_str)
if not isinstance(wsgi_str, str) or any(ord(x) > 255 for x in wsgi_str):
raise TypeError('Expected a WSGI string; got %r' % wsgi_str)
return urllib.parse.quote_plus(wsgi_str, encoding='latin-1')
def wsgi_unquote_plus(wsgi_str):
if six.PY2:
if not isinstance(wsgi_str, bytes):
raise TypeError('Expected a WSGI string; got %r' % wsgi_str)
return urllib.parse.unquote_plus(wsgi_str)
if not isinstance(wsgi_str, str) or any(ord(x) > 255 for x in wsgi_str):
raise TypeError('Expected a WSGI string; got %r' % wsgi_str)
return urllib.parse.unquote_plus(wsgi_str, encoding='latin-1')
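On Python 3 these helpers lean on ``encoding='latin-1'`` so that percent-encoding operates on the raw bytes a WSGI string carries; the stdlib shows the round-trip directly (py3 behavior only):

```python
from urllib.parse import quote, unquote

# A WSGI string represents raw bytes as code points 0-255, so quoting
# with latin-1 percent-encodes those original bytes unchanged.
wsgi_str = '\xc3\xa9'  # the two UTF-8 bytes of 'e-acute', via latin-1
quoted = quote(wsgi_str, encoding='latin-1')
assert quoted == '%C3%A9'
assert unquote(quoted, encoding='latin-1') == wsgi_str

# The default UTF-8 encoding would re-encode each code point instead:
assert quote(wsgi_str) == '%C3%83%C2%A9'
```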
def _resp_status_property():
"""
Set and retrieve the value of Response.status


@@ -4273,8 +4273,7 @@ def iter_multipart_mime_documents(wsgi_input, boundary, read_chunk_size=4096):
for doing that if necessary.
:param wsgi_input: The file-like object to read from.
:param boundary: The mime boundary to separate new file-like
objects on.
:param boundary: The mime boundary to separate new file-like objects on.
:returns: A generator of file-like objects for each part.
:raises MimeInvalid: if the document is malformed
"""


@@ -32,13 +32,10 @@ from eventlet.green import socket, ssl, os as green_os
import six
from six import BytesIO
from six import StringIO
from six.moves.urllib.parse import unquote
if six.PY2:
import mimetools
from swift.common import utils, constraints
from swift.common.storage_policy import BindPortsCache
from swift.common.swob import Request
from swift.common.swob import Request, wsgi_unquote
from swift.common.utils import capture_stdio, disable_fallocate, \
drop_privileges, get_logger, NullLogger, config_true_value, \
validate_configuration, get_hub, config_auto_int_value, \
@@ -148,31 +145,6 @@ def wrap_conf_type(f):
appconfig = wrap_conf_type(loadwsgi.appconfig)
def monkey_patch_mimetools():
"""
mimetools.Message defaults content-type to "text/plain"
This changes it to default to None, so we can detect missing headers.
"""
if six.PY3:
# The mimetools has been removed from Python 3
return
orig_parsetype = mimetools.Message.parsetype
def parsetype(self):
if not self.typeheader:
self.type = None
self.maintype = None
self.subtype = None
self.plisttext = ''
else:
orig_parsetype(self)
parsetype.patched = True
if not getattr(mimetools.Message.parsetype, 'patched', None):
mimetools.Message.parsetype = parsetype
def get_socket(conf):
"""Bind socket to bind ip:port in conf
@@ -448,6 +420,18 @@ class SwiftHttpProtocol(wsgi.HttpProtocol):
# versions the output from error is same as info anyway
self.server.log.info('ERROR WSGI: ' + f, *a)
class MessageClass(wsgi.HttpProtocol.MessageClass):
'''Subclass to see when the client didn't provide a Content-Type'''
# for py2:
def parsetype(self):
if self.typeheader is None:
self.typeheader = ''
wsgi.HttpProtocol.MessageClass.parsetype(self)
# for py3:
def get_default_type(self):
return ''
class SwiftHttpProxiedProtocol(SwiftHttpProtocol):
"""
@@ -1156,7 +1140,6 @@ def _initrp(conf_path, app_section, *args, **kwargs):
if config_true_value(conf.get('disable_fallocate', 'no')):
disable_fallocate()
monkey_patch_mimetools()
return (conf, logger, log_name)
@@ -1319,7 +1302,7 @@ def make_subrequest(env, method=None, path=None, body=None, headers=None,
path = path or ''
if path and '?' in path:
path, query_string = path.split('?', 1)
newenv = make_env(env, method, path=unquote(path), agent=agent,
newenv = make_env(env, method, path=wsgi_unquote(path), agent=agent,
query_string=query_string, swift_source=swift_source)
if not headers:
headers = {}


@@ -338,7 +338,7 @@ class ContainerBroker(DatabaseBroker):
self._db_files = None
@classmethod
def create_broker(self, device_path, part, account, container, logger=None,
def create_broker(cls, device_path, part, account, container, logger=None,
epoch=None, put_timestamp=None,
storage_policy_index=None):
"""

View File

@@ -131,14 +131,6 @@ def get_tmp_dir(policy_or_index):
return get_policy_string(TMP_BASE, policy_or_index)
def _unlink_if_present(filename):
try:
os.unlink(filename)
except OSError as err:
if err.errno != errno.ENOENT:
raise
def _get_filename(fd):
"""
Helper function to get to file name from a file descriptor or filename.
@@ -1623,12 +1615,10 @@ class BaseDiskFileManager(object):
partition_path = get_part_path(dev_path, policy, partition)
if suffixes is None:
suffixes = self.yield_suffixes(device, partition, policy)
considering_all_suffixes = True
else:
suffixes = (
(os.path.join(partition_path, suffix), suffix)
for suffix in suffixes)
considering_all_suffixes = False
key_preference = (
('ts_meta', 'meta_info', 'timestamp'),
@@ -1637,19 +1627,18 @@ class BaseDiskFileManager(object):
('ts_ctype', 'ctype_info', 'ctype_timestamp'),
)
# We delete as many empty directories as we can.
# cleanup_ondisk_files() takes care of the hash dirs, and we take
# care of the suffix dirs and possibly even the partition dir.
have_nonempty_suffix = False
# cleanup_ondisk_files() will remove empty hash dirs, and we'll
# invalidate any empty suffix dirs so they'll get cleaned up on
# the next rehash
for suffix_path, suffix in suffixes:
have_nonempty_hashdir = False
found_files = False
for object_hash in self._listdir(suffix_path):
object_path = os.path.join(suffix_path, object_hash)
try:
results = self.cleanup_ondisk_files(
object_path, **kwargs)
if results['files']:
have_nonempty_hashdir = True
found_files = True
timestamps = {}
for ts_key, info_key, info_ts_key in key_preference:
if info_key not in results:
@@ -1669,38 +1658,8 @@ class BaseDiskFileManager(object):
'Invalid diskfile filename in %r (%s)' % (
object_path, err))
if have_nonempty_hashdir:
have_nonempty_suffix = True
else:
try:
os.rmdir(suffix_path)
except OSError as err:
if err.errno not in (errno.ENOENT, errno.ENOTEMPTY):
self.logger.debug(
'Error cleaning up empty suffix directory %s: %s',
suffix_path, err)
# cleanup_ondisk_files tries to remove empty hash dirs,
# but if it fails, so will we. An empty directory
# structure won't cause errors (just slowdown), so we
# ignore the exception.
if considering_all_suffixes and not have_nonempty_suffix:
# There's nothing of interest in the partition, so delete it
try:
# Remove hashes.pkl *then* hashes.invalid; otherwise, if we
# remove hashes.invalid but leave hashes.pkl, that makes it
# look as though the invalidations in hashes.invalid never
# occurred.
_unlink_if_present(os.path.join(partition_path, HASH_FILE))
_unlink_if_present(os.path.join(partition_path,
HASH_INVALIDATIONS_FILE))
# This lock is only held by people dealing with the hashes
# or the hash invalidations, and we've just removed those.
_unlink_if_present(os.path.join(partition_path, ".lock"))
_unlink_if_present(os.path.join(partition_path,
".lock-replication"))
os.rmdir(partition_path)
except OSError as err:
self.logger.debug("Error cleaning up empty partition: %s", err)
if not found_files:
self.invalidate_hash(suffix_path)
class BaseDiskFileWriter(object):


@@ -412,6 +412,9 @@ class DiskFile(object):
raise DiskFileNotOpen()
return self._metadata
get_datafile_metadata = get_metadata
get_metafile_metadata = get_metadata
def read_metadata(self, current_time=None):
"""
Return the metadata for an object.


@@ -25,6 +25,7 @@
# collected. We've seen objects hang around forever otherwise.
from six.moves.urllib.parse import unquote
from six.moves import zip
import collections
import itertools
@@ -697,7 +698,8 @@ class BaseObjectController(Controller):
"""
raise NotImplementedError()
def _delete_object(self, req, obj_ring, partition, headers):
def _delete_object(self, req, obj_ring, partition, headers,
node_count=None, node_iterator=None):
"""Delete object considering write-affinity.
When deleting object in write affinity deployment, also take configured
@@ -711,37 +713,12 @@ class BaseObjectController(Controller):
:param headers: system headers to storage nodes
:return: Response object
"""
policy_index = req.headers.get('X-Backend-Storage-Policy-Index')
policy = POLICIES.get_by_index(policy_index)
node_count = None
node_iterator = None
policy_options = self.app.get_policy_options(policy)
is_local = policy_options.write_affinity_is_local_fn
if is_local is not None:
primaries = obj_ring.get_part_nodes(partition)
node_count = len(primaries)
local_handoffs = policy_options.write_affinity_handoff_delete_count
if local_handoffs is None:
local_primaries = [node for node in primaries
if is_local(node)]
local_handoffs = len(primaries) - len(local_primaries)
node_count += local_handoffs
node_iterator = self.iter_nodes_local_first(
obj_ring, partition, policy=policy, local_handoffs_first=True
)
status_overrides = {404: 204}
resp = self.make_requests(req, obj_ring,
partition, 'DELETE', req.swift_entity_path,
headers, overrides=status_overrides,
node_count=node_count,
node_iterator=node_iterator)
return resp
def _post_object(self, req, obj_ring, partition, headers):
@@ -861,6 +838,7 @@ class BaseObjectController(Controller):
# Include local handoff nodes if write-affinity is enabled.
node_count = len(nodes)
node_iterator = None
policy = POLICIES.get_by_index(policy_index)
policy_options = self.app.get_policy_options(policy)
is_local = policy_options.write_affinity_is_local_fn
@@ -870,11 +848,16 @@ class BaseObjectController(Controller):
local_primaries = [node for node in nodes if is_local(node)]
local_handoffs = len(nodes) - len(local_primaries)
node_count += local_handoffs
node_iterator = self.iter_nodes_local_first(
obj_ring, partition, policy=policy, local_handoffs_first=True
)
headers = self._backend_requests(
req, node_count, container_partition, container_nodes,
container_path=container_path)
return self._delete_object(req, obj_ring, partition, headers)
return self._delete_object(req, obj_ring, partition, headers,
node_count=node_count,
node_iterator=node_iterator)
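The node-count arithmetic that now lives in the caller (and is passed down to ``_delete_object``) can be sketched with toy nodes; ``delete_node_count`` is an illustrative name, not Swift's:

```python
# With N primaries of which L are local, write affinity sends the DELETE
# to N + (N - L) nodes (or N plus a configured handoff count), so enough
# local handoffs are tried to compensate for the remote primaries.
def delete_node_count(primaries, is_local, handoff_delete_count=None):
    local_primaries = [n for n in primaries if is_local(n)]
    if handoff_delete_count is None:
        handoff_delete_count = len(primaries) - len(local_primaries)
    return len(primaries) + handoff_delete_count

nodes = [{'region': 1}, {'region': 1}, {'region': 2}]
assert delete_node_count(nodes, lambda n: n['region'] == 1) == 4
assert delete_node_count(nodes, lambda n: n['region'] == 1,
                         handoff_delete_count=1) == 4
```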
@ObjectControllerRouter.register(REPL_POLICY)
@@ -1112,18 +1095,14 @@ class ECAppIter(object):
resp.content_type = self.learned_content_type
resp.content_length = self.obj_length
def _next_range(self):
def _next_ranges(self):
# Each FA part should have approximately the same headers. We really
# only care about Content-Range and Content-Type, and that'll be the
# same for all the different FAs.
frag_iters = []
headers = None
for parts_iter in self.internal_parts_iters:
part_info = next(parts_iter)
frag_iters.append(part_info['part_iter'])
headers = part_info['headers']
headers = HeaderKeyDict(headers)
return headers, frag_iters
for part_infos in zip(*self.internal_parts_iters):
frag_iters = [pi['part_iter'] for pi in part_infos]
headers = HeaderKeyDict(part_infos[0]['headers'])
yield headers, frag_iters
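The ``zip(*self.internal_parts_iters)`` regrouping in ``_next_ranges`` can be seen with plain data (toy part_info dicts, not real fragment archives):

```python
# Each per-fragment iterator yields one part_info per requested range;
# zip() regroups them so every yielded batch holds the same range from
# all fragment archives, with headers taken from the first archive.
frag0 = [{'headers': {'Content-Type': 'a/b'}, 'part_iter': 'r0-frag0'},
         {'headers': {'Content-Type': 'a/b'}, 'part_iter': 'r1-frag0'}]
frag1 = [{'headers': {'Content-Type': 'a/b'}, 'part_iter': 'r0-frag1'},
         {'headers': {'Content-Type': 'a/b'}, 'part_iter': 'r1-frag1'}]

ranges = [(infos[0]['headers'], [i['part_iter'] for i in infos])
          for infos in zip(iter(frag0), iter(frag1))]
assert [iters for _headers, iters in ranges] == [
    ['r0-frag0', 'r0-frag1'], ['r1-frag0', 'r1-frag1']]
```

``zip()`` also stops cleanly when the shortest iterator is exhausted, replacing the StopIteration-driven exit of the old ``_next_range``.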
def _actual_range(self, req_start, req_end, entity_length):
# Takes 3 args: (requested-first-byte, requested-last-byte,
@@ -1272,11 +1251,7 @@ class ECAppIter(object):
seen_first_headers = False
ranges_for_resp = {}
while True:
# this'll raise StopIteration and exit the loop
next_range = self._next_range()
headers, frag_iters = next_range
for headers, frag_iters in self._next_ranges():
content_type = headers['Content-Type']
content_range = headers.get('Content-Range')

View File

@@ -53,8 +53,7 @@ from test.unit import SkipTest
from swift.common import constraints, utils, ring, storage_policy
from swift.common.ring import Ring
from swift.common.wsgi import (
monkey_patch_mimetools, loadapp, SwiftHttpProtocol)
from swift.common.wsgi import loadapp, SwiftHttpProtocol
from swift.common.utils import config_true_value, split_path
from swift.account import server as account_server
from swift.container import server as container_server
@@ -493,8 +492,6 @@ def in_process_setup(the_object_server=object_server):
swift_conf_src = _in_process_find_conf_file(conf_src_dir, 'swift.conf')
_info('Using swift config from %s' % swift_conf_src)
monkey_patch_mimetools()
global _testdir
_testdir = os.path.join(mkdtemp(), 'tmp_functional')
utils.mkdirs(_testdir)
@@ -1292,3 +1289,15 @@ def requires_policies(f):
return f(self, *args, **kwargs)
return wrapper
def requires_bulk(f):
@functools.wraps(f)
def wrapper(*args, **kwargs):
if skip or not cluster_info:
raise SkipTest('Requires bulk middleware')
# Determine whether this cluster has bulk middleware; if not, skip test
if not cluster_info.get('bulk_upload', {}):
raise SkipTest('Requires bulk middleware')
return f(*args, **kwargs)
return wrapper


@@ -49,7 +49,8 @@ class TestDloEnv(BaseEnv):
file_item = cls.container.file("%s/seg_lower%s" % (prefix, letter))
file_item.write(letter * 10)
file_item = cls.container.file("%s/seg_upper%s" % (prefix, letter))
file_item = cls.container.file(
"%s/seg_upper_%%ff%s" % (prefix, letter))
file_item.write(letter.upper() * 10)
for letter in ('f', 'g', 'h', 'i', 'j'):
@@ -64,7 +65,7 @@ class TestDloEnv(BaseEnv):
man2 = cls.container.file("man2")
man2.write('man2-contents',
hdrs={"X-Object-Manifest": "%s/%s/seg_upper" %
hdrs={"X-Object-Manifest": "%s/%s/seg_upper_%%25ff" %
(cls.container.name, prefix)})
manall = cls.container.file("manall")
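The fix above double-encodes the segment prefix: the prefix itself contains a literal `%ff`, so the `X-Object-Manifest` header value must carry `%25ff`. A quick sketch of why (the prefix value here is hypothetical):

```python
from urllib.parse import quote

prefix = 'delimiter/seg_upper_%ff'
# The header value must be percent-encoded, so the literal '%' in the
# prefix becomes '%25' -- hence the '%%25ff' in the format string above.
header_value = quote(prefix)
```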

View File

@ -20,11 +20,12 @@ import json
import unittest2
from uuid import uuid4
import time
from xml.dom import minidom
from six.moves import range
from test.functional import check_response, retry, requires_acls, \
requires_policies, SkipTest
requires_policies, SkipTest, requires_bulk
import test.functional as tf
@ -1646,6 +1647,37 @@ class TestObject(unittest2.TestCase):
self.assertEqual(resp.status, 200)
self.assertEqual(body, resp.read())
@requires_bulk
def test_bulk_delete(self):
def bulk_delete(url, token, parsed, conn):
# try to bulk delete the object that was created during test setup
conn.request('DELETE', '%s/%s/%s?bulk-delete' % (
parsed.path, self.container, self.obj),
'%s/%s' % (self.container, self.obj),
{'X-Auth-Token': token,
'Accept': 'application/xml',
'Expect': '100-continue',
'Content-Type': 'text/plain'})
return check_response(conn)
resp = retry(bulk_delete)
self.assertEqual(resp.status, 200)
body = resp.read()
tree = minidom.parseString(body)
self.assertEqual(tree.documentElement.tagName, 'delete')
errors = tree.getElementsByTagName('errors')
self.assertEqual(len(errors), 1)
errors = [c.data if c.nodeType == c.TEXT_NODE else c.childNodes[0].data
for c in errors[0].childNodes
if c.nodeType != c.TEXT_NODE or c.data.strip()]
self.assertEqual(errors, [])
final_status = tree.getElementsByTagName('response_status')
self.assertEqual(len(final_status), 1)
self.assertEqual(len(final_status[0].childNodes), 1)
self.assertEqual(final_status[0].childNodes[0].data, '200 OK')
if __name__ == '__main__':
unittest2.main()

View File

@ -96,6 +96,8 @@ class TestSloEnv(BaseEnv):
seg_info['seg_e']]),
parms={'multipart-manifest': 'put'})
cls.container.file('seg_with_%ff_funky_name').write('z' * 10)
# Put the same manifest in container2
file_item = cls.container2.file("manifest-abcde")
file_item.write(
@ -564,6 +566,17 @@ class TestSlo(Base):
parms={'multipart-manifest': 'put'})
self.assert_status(201)
def test_slo_funky_segment(self):
file_item = self.env.container.file("manifest-with-funky-segment")
file_item.write(
json.dumps([{
'path': '/%s/%s' % (self.env.container.name,
'seg_with_%ff_funky_name')}]),
parms={'multipart-manifest': 'put'})
self.assert_status(201)
self.assertEqual('z' * 10, file_item.read())
def test_slo_missing_etag(self):
file_item = self.env.container.file("manifest-a-missing-etag")
file_item.write(

View File

@ -128,11 +128,13 @@ class Base(unittest2.TestCase):
class Base2(object):
def setUp(self):
@classmethod
def setUpClass(cls):
Utils.create_name = Utils.create_utf8_name
super(Base2, self).setUp()
super(Base2, cls).setUpClass()
def tearDown(self):
@classmethod
def tearDownClass(cls):
Utils.create_name = Utils.create_ascii_name
@ -353,8 +355,10 @@ class TestAccount(Base):
self.assertEqual(len(containers), len(self.env.containers))
self.assert_status(200)
marker = (containers[-1] if format_type is None
else containers[-1]['name'])
containers = self.env.account.containers(
parms={'format': format_type, 'marker': containers[-1]})
parms={'format': format_type, 'marker': marker})
self.assertEqual(len(containers), 0)
if format_type is None:
self.assert_status(204)
@ -771,8 +775,9 @@ class TestContainer(Base):
self.assertEqual(len(files), len(self.env.files))
self.assert_status(200)
marker = files[-1] if format_type is None else files[-1]['name']
files = self.env.container.files(
parms={'format': format_type, 'marker': files[-1]})
parms={'format': format_type, 'marker': marker})
self.assertEqual(len(files), 0)
if format_type is None:

View File

@ -142,8 +142,8 @@ class TestReplicatorFunctions(ReplProbeTest):
for files in test_node_files_list:
self.assertIn(files, new_files_list[0])
for dir in test_node_dir_list:
self.assertIn(dir, new_dir_list[0])
for directory in test_node_dir_list:
self.assertIn(directory, new_dir_list[0])
break
except Exception:
if time.time() - begin > 60:

View File

@ -22,7 +22,6 @@ import unittest
from logging import DEBUG
from mock import patch, call, DEFAULT
import six
import eventlet
from swift.account import reaper
@ -85,9 +84,13 @@ class FakeAccountBroker(object):
'delete_timestamp': time.time() - 10}
return info
def list_containers_iter(self, *args):
def list_containers_iter(self, limit, marker, *args):
for cont in self.containers:
if cont > marker:
yield cont, None, None, None, None
limit -= 1
if limit <= 0:
break
def is_status_deleted(self):
return True
@ -196,11 +199,11 @@ class TestReaper(unittest.TestCase):
raise self.myexp
if self.timeout:
raise eventlet.Timeout()
objects = [{'name': 'o1'},
{'name': 'o2'},
{'name': six.text_type('o3')},
{'name': ''}]
return None, objects
objects = [{'name': u'o1'},
{'name': u'o2'},
{'name': u'o3'},
{'name': u'o4'}]
return None, [o for o in objects if o['name'] > kwargs['marker']]
def fake_container_ring(self):
return FakeRing()
@ -589,7 +592,7 @@ class TestReaper(unittest.TestCase):
self.r.stats_return_codes.get(2, 0) + 1
def test_reap_account(self):
containers = ('c1', 'c2', 'c3', '')
containers = ('c1', 'c2', 'c3', 'c4')
broker = FakeAccountBroker(containers)
self.called_amount = 0
self.r = r = self.init_reaper({}, fakelogger=True)
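The updated `list_containers_iter` gives the fake broker the same marker/limit contract the real reaper relies on: each call yields names strictly greater than the marker, up to the limit, and the caller feeds the last returned name back in to page through everything. A minimal sketch of that contract (helper names are illustrative):

```python
def list_page(items, marker, limit):
    """Yield up to `limit` sorted names strictly greater than `marker`,
    mirroring the marker/limit contract the fake broker implements."""
    count = 0
    for name in sorted(items):
        if name > marker:
            yield name
            count += 1
            if count >= limit:
                break

def iterate_all(items, limit):
    """Walk all pages the way the reaper does: start with an empty marker
    and feed the last name of each page back in as the next marker."""
    marker = ''
    results = []
    while True:
        page = list(list_page(items, marker, limit))
        if not page:
            break
        results.extend(page)
        marker = page[-1]
    return results
```

This is why the test data changed from a trailing `''` container to `'c4'`: an empty name can never be greater than any marker, so it would silently fall out of marker-based iteration.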

View File

@ -482,6 +482,78 @@ class TestKeymaster(unittest.TestCase):
self.assertEqual(expected_keys, keys)
self.assertEqual([('/a/c', None), ('/a/c//o', None)], calls)
def test_v1_keys_with_weird_paths(self):
secrets = {None: os.urandom(32),
'22': os.urandom(33)}
conf = {}
for secret_id, secret in secrets.items():
opt = ('encryption_root_secret%s' %
(('_%s' % secret_id) if secret_id else ''))
conf[opt] = base64.b64encode(secret)
conf['active_root_secret_id'] = '22'
self.app = keymaster.KeyMaster(self.swift, conf)
orig_create_key = self.app.create_key
calls = []
def mock_create_key(path, secret_id=None):
calls.append((path, secret_id))
return orig_create_key(path, secret_id)
# request path doesn't match stored path -- this could happen if you
# misconfigured your proxy to have the copy middleware to the right of
# encryption in the pipeline
context = keymaster.KeyMasterContext(self.app, 'a', 'not-c', 'not-o')
for version in ('1', '2'):
with mock.patch.object(self.app, 'create_key', mock_create_key):
keys = context.fetch_crypto_keys(key_id={
'v': version, 'path': '/a/c/o'})
expected_keys = {
'container': hmac.new(secrets[None], b'/a/c',
digestmod=hashlib.sha256).digest(),
'object': hmac.new(secrets[None], b'/a/c/o',
digestmod=hashlib.sha256).digest(),
'id': {'path': '/a/c/o', 'v': version},
'all_ids': [
{'path': '/a/c/o', 'v': version},
{'path': '/a/c/o', 'secret_id': '22', 'v': version}]}
self.assertEqual(expected_keys, keys)
self.assertEqual([('/a/c', None), ('/a/c/o', None)], calls)
del calls[:]
context = keymaster.KeyMasterContext(
self.app, 'not-a', 'not-c', '/not-o')
with mock.patch.object(self.app, 'create_key', mock_create_key):
keys = context.fetch_crypto_keys(key_id={
'v': '1', 'path': '/o'})
expected_keys = {
'container': hmac.new(secrets[None], b'/not-a/not-c',
digestmod=hashlib.sha256).digest(),
'object': hmac.new(secrets[None], b'/o',
digestmod=hashlib.sha256).digest(),
'id': {'path': '/o', 'v': '1'},
'all_ids': [
{'path': '/o', 'v': '1'},
{'path': '/o', 'secret_id': '22', 'v': '1'}]}
self.assertEqual(expected_keys, keys)
self.assertEqual([('/not-a/not-c', None), ('/o', None)], calls)
del calls[:]
context = keymaster.KeyMasterContext(
self.app, 'not-a', 'not-c', '/not-o')
with mock.patch.object(self.app, 'create_key', mock_create_key):
keys = context.fetch_crypto_keys(key_id={
'v': '2', 'path': '/a/c//o'})
expected_keys = {
'container': hmac.new(secrets[None], b'/a/c',
digestmod=hashlib.sha256).digest(),
'object': hmac.new(secrets[None], b'/a/c//o',
digestmod=hashlib.sha256).digest(),
'id': {'path': '/a/c//o', 'v': '2'},
'all_ids': [
{'path': '/a/c//o', 'v': '2'},
{'path': '/a/c//o', 'secret_id': '22', 'v': '2'}]}
self.assertEqual(expected_keys, keys)
self.assertEqual([('/a/c', None), ('/a/c//o', None)], calls)
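The expected keys in these tests are plain HMAC-SHA256 digests of the (possibly weird) path under the root secret. A minimal sketch of that derivation, assuming only what the assertions above spell out, i.e. that a key is `HMAC-SHA256(secret, path)`:

```python
import hashlib
import hmac
import os

def derive_key(root_secret, path):
    """Derive a per-path key as HMAC-SHA256(root_secret, path), matching
    the expected_keys computed in the tests above."""
    return hmac.new(root_secret, path, digestmod=hashlib.sha256).digest()

secret = os.urandom(32)
container_key = derive_key(secret, b'/a/c')
object_key = derive_key(secret, b'/a/c/o')
```

Note that the container and object keys differ only in the path fed to the HMAC, which is why a mismatched or doubled path in the key id (`/a/c//o`, `/o`) yields a different, but still deterministic, key.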
@mock.patch('swift.common.middleware.crypto.keymaster.readconf')
def test_keymaster_config_path(self, mock_readconf):
for secret in (os.urandom(32), os.urandom(33), os.urandom(50)):

View File

@ -717,7 +717,7 @@ class TestS3ApiMultiUpload(S3ApiTestCase):
'Response Status': '201 Created',
'Errors': [],
})])
mock_time.return_value.time.side_effect = (
mock_time.time.side_effect = (
1, # start_time
12, # first whitespace
13, # second...
@ -769,7 +769,7 @@ class TestS3ApiMultiUpload(S3ApiTestCase):
'Response Status': '400 Bad Request',
'Errors': [['some/object', '403 Forbidden']],
})])
mock_time.return_value.time.side_effect = (
mock_time.time.side_effect = (
1, # start_time
12, # first whitespace
13, # second...
@ -818,7 +818,7 @@ class TestS3ApiMultiUpload(S3ApiTestCase):
'Response Status': '400 Bad Request',
'Errors': [['some/object', '404 Not Found']],
})])
mock_time.return_value.time.side_effect = (
mock_time.time.side_effect = (
1, # start_time
12, # first whitespace
13, # second...

View File

@ -469,6 +469,60 @@ class TestS3ApiObj(S3ApiTestCase):
# Check that s3api converts a Content-MD5 header into an etag.
self.assertEqual(headers['etag'], etag)
@s3acl
def test_object_PUT_v4(self):
body_sha = hashlib.sha256(self.object_body).hexdigest()
req = Request.blank(
'/bucket/object',
environ={'REQUEST_METHOD': 'PUT'},
headers={
'Authorization':
'AWS4-HMAC-SHA256 '
'Credential=test:tester/%s/us-east-1/s3/aws4_request, '
'SignedHeaders=host;x-amz-date, '
'Signature=hmac' % (
self.get_v4_amz_date_header().split('T', 1)[0]),
'x-amz-date': self.get_v4_amz_date_header(),
'x-amz-storage-class': 'STANDARD',
'x-amz-content-sha256': body_sha,
'Date': self.get_date_header()},
body=self.object_body)
req.date = datetime.now()
req.content_type = 'text/plain'
status, headers, body = self.call_s3api(req)
self.assertEqual(status.split()[0], '200')
# Check that s3api returns an etag header.
self.assertEqual(headers['etag'],
'"%s"' % self.response_headers['etag'])
_, _, headers = self.swift.calls_with_headers[-1]
# No way to determine ETag to send
self.assertNotIn('etag', headers)
@s3acl
def test_object_PUT_v4_bad_hash(self):
req = Request.blank(
'/bucket/object',
environ={'REQUEST_METHOD': 'PUT'},
headers={
'Authorization':
'AWS4-HMAC-SHA256 '
'Credential=test:tester/%s/us-east-1/s3/aws4_request, '
'SignedHeaders=host;x-amz-date, '
'Signature=hmac' % (
self.get_v4_amz_date_header().split('T', 1)[0]),
'x-amz-date': self.get_v4_amz_date_header(),
'x-amz-storage-class': 'STANDARD',
'x-amz-content-sha256': 'not the hash',
'Date': self.get_date_header()},
body=self.object_body)
req.date = datetime.now()
req.content_type = 'text/plain'
status, headers, body = self.call_s3api(req)
self.assertEqual(status.split()[0], '400')
print(body)
self.assertEqual(self._get_error_code(body), 'BadDigest')
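Both new tests hinge on the SigV4 `x-amz-content-sha256` header, which carries the hex SHA-256 of the request body; the server recomputes the digest over the bytes it actually received and rejects a mismatch with `BadDigest`. A sketch of that check (function names are illustrative):

```python
import hashlib

def content_sha256(body):
    """Hex SHA-256 of the request body, as sent in x-amz-content-sha256."""
    return hashlib.sha256(body).hexdigest()

def check_body(body, claimed_sha):
    """Server-side verification comparable to what the bad-hash test
    exercises: recompute the digest and compare to the client's claim."""
    return hashlib.sha256(body).hexdigest() == claimed_sha

body = b'some object data'
header = content_sha256(body)
```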
def test_object_PUT_headers(self):
content_md5 = self.etag.decode('hex').encode('base64').strip()

View File

@ -13,9 +13,12 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import hashlib
from mock import patch, MagicMock
import unittest
from six import BytesIO
from swift.common import swob
from swift.common.swob import Request, HTTPNoContent
from swift.common.middleware.s3api.utils import mktime
@ -24,7 +27,7 @@ from swift.common.middleware.s3api.subresource import ACL, User, Owner, \
Grant, encode_acl
from test.unit.common.middleware.s3api.test_s3api import S3ApiTestCase
from swift.common.middleware.s3api.s3request import S3Request, \
S3AclRequest, SigV4Request, SIGV4_X_AMZ_DATE_FORMAT
S3AclRequest, SigV4Request, SIGV4_X_AMZ_DATE_FORMAT, HashingInput
from swift.common.middleware.s3api.s3response import InvalidArgument, \
NoSuchBucket, InternalError, \
AccessDenied, SignatureDoesNotMatch, RequestTimeTooSkewed
@ -802,5 +805,76 @@ class TestRequest(S3ApiTestCase):
u'\u30c9\u30e9\u30b4\u30f3'))
class TestHashingInput(S3ApiTestCase):
def test_good(self):
raw = b'123456789'
wrapped = HashingInput(BytesIO(raw), 9, hashlib.md5,
hashlib.md5(raw).hexdigest())
self.assertEqual(b'1234', wrapped.read(4))
self.assertEqual(b'56', wrapped.read(2))
# trying to read past the end gets us whatever's left
self.assertEqual(b'789', wrapped.read(4))
# can continue trying to read -- but it'll be empty
self.assertEqual(b'', wrapped.read(2))
self.assertFalse(wrapped._input.closed)
wrapped.close()
self.assertTrue(wrapped._input.closed)
def test_empty(self):
wrapped = HashingInput(BytesIO(b''), 0, hashlib.sha256,
hashlib.sha256(b'').hexdigest())
self.assertEqual(b'', wrapped.read(4))
self.assertEqual(b'', wrapped.read(2))
self.assertFalse(wrapped._input.closed)
wrapped.close()
self.assertTrue(wrapped._input.closed)
def test_too_long(self):
raw = b'123456789'
wrapped = HashingInput(BytesIO(raw), 8, hashlib.md5,
hashlib.md5(raw).hexdigest())
self.assertEqual(b'1234', wrapped.read(4))
self.assertEqual(b'56', wrapped.read(2))
# even though the hash matches, there was more data than we expected
with self.assertRaises(swob.Response) as raised:
wrapped.read(3)
self.assertEqual(raised.exception.status, '422 Unprocessable Entity')
# the error causes us to close the input
self.assertTrue(wrapped._input.closed)
def test_too_short(self):
raw = b'123456789'
wrapped = HashingInput(BytesIO(raw), 10, hashlib.md5,
hashlib.md5(raw).hexdigest())
self.assertEqual(b'1234', wrapped.read(4))
self.assertEqual(b'56', wrapped.read(2))
# even though the hash matches, there was less data than we expected
with self.assertRaises(swob.Response) as raised:
wrapped.read(4)
self.assertEqual(raised.exception.status, '422 Unprocessable Entity')
self.assertTrue(wrapped._input.closed)
def test_bad_hash(self):
raw = b'123456789'
wrapped = HashingInput(BytesIO(raw), 9, hashlib.sha256,
hashlib.md5(raw).hexdigest())
self.assertEqual(b'1234', wrapped.read(4))
self.assertEqual(b'5678', wrapped.read(4))
with self.assertRaises(swob.Response) as raised:
wrapped.read(4)
self.assertEqual(raised.exception.status, '422 Unprocessable Entity')
self.assertTrue(wrapped._input.closed)
def test_empty_bad_hash(self):
wrapped = HashingInput(BytesIO(b''), 0, hashlib.sha256, 'nope')
with self.assertRaises(swob.Response) as raised:
wrapped.read(3)
self.assertEqual(raised.exception.status, '422 Unprocessable Entity')
# the error causes us to close the input
self.assertTrue(wrapped._input.closed)
if __name__ == '__main__':
unittest.main()
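The `TestHashingInput` cases above pin down the wrapper's contract: reads pass through while hashing, reading past the declared end returns whatever is left, and a length or digest mismatch closes the input and errors out. A rough standalone sketch of such a wrapper (this is an approximation, not Swift's `HashingInput`; it raises `ValueError` where the real class raises a 422 `swob.Response`):

```python
import hashlib
from io import BytesIO

class HashingInputSketch:
    """File-like wrapper that verifies the byte count and digest of
    everything read, approximating the behavior the tests above check."""

    def __init__(self, reader, content_length, hasher, expected_hex):
        self._input = reader
        self._to_read = content_length
        self._hasher = hasher()
        self._expected = expected_hex

    def read(self, size=None):
        chunk = self._input.read(size)
        self._hasher.update(chunk)
        self._to_read -= len(chunk)
        short_read = size is None or len(chunk) < size
        # Error out on extra data, or -- once the stream is exhausted --
        # on missing data or a digest mismatch.
        if self._to_read < 0 or (short_read and (
                self._to_read > 0 or
                self._hasher.hexdigest() != self._expected)):
            self.close()
            raise ValueError('422 Unprocessable Entity')
        return chunk

    def close(self):
        self._input.close()
```

Checking only on short reads is what lets `read(4)` return the trailing `b'789'` in the happy path, while the same call fails when the declared length or digest disagrees with the bytes consumed.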

View File

@ -335,6 +335,29 @@ class TestServerSideCopyMiddleware(unittest.TestCase):
self.assertEqual('PUT', self.authorized[1].method)
self.assertEqual('/v1/a/c/o', self.authorized[1].path)
def test_copy_with_unicode(self):
self.app.register('GET', '/v1/a/c/\xF0\x9F\x8C\xB4', swob.HTTPOk,
{}, 'passed')
self.app.register('PUT', '/v1/a/c/\xE2\x98\x83', swob.HTTPCreated, {})
# Just for fun, let's have a mix of properly encoded and not
req = Request.blank('/v1/a/c/%F0\x9F%8C%B4',
environ={'REQUEST_METHOD': 'COPY'},
headers={'Content-Length': '0',
'Destination': 'c/%E2\x98%83'})
status, headers, body = self.call_ssc(req)
self.assertEqual(status, '201 Created')
calls = self.app.calls_with_headers
method, path, req_headers = calls[0]
self.assertEqual('GET', method)
self.assertEqual('/v1/a/c/\xF0\x9F\x8C\xB4', path)
self.assertIn(('X-Copied-From', 'c/%F0%9F%8C%B4'), headers)
self.assertEqual(len(self.authorized), 2)
self.assertEqual('GET', self.authorized[0].method)
self.assertEqual('/v1/a/c/%F0%9F%8C%B4', self.authorized[0].path)
self.assertEqual('PUT', self.authorized[1].method)
self.assertEqual('/v1/a/c/%E2%98%83', self.authorized[1].path)
def test_copy_with_spaces_in_x_copy_from_and_account(self):
self.app.register('GET', '/v1/a/c/o o2', swob.HTTPOk, {}, b'passed')
self.app.register('PUT', '/v1/a1/c1/o', swob.HTTPCreated, {})

File diff suppressed because it is too large

View File

@ -1024,6 +1024,36 @@ class TestAuth(unittest.TestCase):
resp = req.get_response(ath)
self.assertEqual(204, resp.status_int)
def test_request_method_not_allowed(self):
test_auth = auth.filter_factory({'user_ac_user': 'testing'})(FakeApp())
req = self._make_request(
'/auth/v1.0',
headers={'X-Auth-User': 'ac:user', 'X-Auth-Key': 'testing'},
environ={'REQUEST_METHOD': 'PUT'})
resp = req.get_response(test_auth)
self.assertEqual(resp.status_int, 405)
req = self._make_request(
'/auth/v1.0',
headers={'X-Auth-User': 'ac:user', 'X-Auth-Key': 'testing'},
environ={'REQUEST_METHOD': 'HEAD'})
resp = req.get_response(test_auth)
self.assertEqual(resp.status_int, 405)
req = self._make_request(
'/auth/v1.0',
headers={'X-Auth-User': 'ac:user', 'X-Auth-Key': 'testing'},
environ={'REQUEST_METHOD': 'POST'})
resp = req.get_response(test_auth)
self.assertEqual(resp.status_int, 405)
req = self._make_request(
'/auth/v1.0',
headers={'X-Auth-User': 'ac:user', 'X-Auth-Key': 'testing'},
environ={'REQUEST_METHOD': 'DELETE'})
resp = req.get_response(test_auth)
self.assertEqual(resp.status_int, 405)
class TestAuthWithMultiplePrefixes(TestAuth):
"""

View File

@ -27,12 +27,8 @@ import types
import eventlet.wsgi
import six
from six import BytesIO
from six import StringIO
from six.moves.urllib.parse import quote
if six.PY2:
import mimetools
import mock
@ -69,53 +65,6 @@ def _fake_rings(tmpdir):
class TestWSGI(unittest.TestCase):
"""Tests for swift.common.wsgi"""
def setUp(self):
if six.PY2:
self._orig_parsetype = mimetools.Message.parsetype
def tearDown(self):
if six.PY2:
mimetools.Message.parsetype = self._orig_parsetype
@unittest.skipIf(six.PY3, "test specific to Python 2")
def test_monkey_patch_mimetools(self):
sio = StringIO('blah')
self.assertEqual(mimetools.Message(sio).type, 'text/plain')
sio = StringIO('blah')
self.assertEqual(mimetools.Message(sio).plisttext, '')
sio = StringIO('blah')
self.assertEqual(mimetools.Message(sio).maintype, 'text')
sio = StringIO('blah')
self.assertEqual(mimetools.Message(sio).subtype, 'plain')
sio = StringIO('Content-Type: text/html; charset=ISO-8859-4')
self.assertEqual(mimetools.Message(sio).type, 'text/html')
sio = StringIO('Content-Type: text/html; charset=ISO-8859-4')
self.assertEqual(mimetools.Message(sio).plisttext,
'; charset=ISO-8859-4')
sio = StringIO('Content-Type: text/html; charset=ISO-8859-4')
self.assertEqual(mimetools.Message(sio).maintype, 'text')
sio = StringIO('Content-Type: text/html; charset=ISO-8859-4')
self.assertEqual(mimetools.Message(sio).subtype, 'html')
wsgi.monkey_patch_mimetools()
sio = StringIO('blah')
self.assertIsNone(mimetools.Message(sio).type)
sio = StringIO('blah')
self.assertEqual(mimetools.Message(sio).plisttext, '')
sio = StringIO('blah')
self.assertIsNone(mimetools.Message(sio).maintype)
sio = StringIO('blah')
self.assertIsNone(mimetools.Message(sio).subtype)
sio = StringIO('Content-Type: text/html; charset=ISO-8859-4')
self.assertEqual(mimetools.Message(sio).type, 'text/html')
sio = StringIO('Content-Type: text/html; charset=ISO-8859-4')
self.assertEqual(mimetools.Message(sio).plisttext,
'; charset=ISO-8859-4')
sio = StringIO('Content-Type: text/html; charset=ISO-8859-4')
self.assertEqual(mimetools.Message(sio).maintype, 'text')
sio = StringIO('Content-Type: text/html; charset=ISO-8859-4')
self.assertEqual(mimetools.Message(sio).subtype, 'html')
def test_init_request_processor(self):
config = """
[DEFAULT]

View File

@ -40,6 +40,7 @@ from swift.common.storage_policy import StoragePolicy, ECStoragePolicy
from swift.common.middleware import listing_formats, proxy_logging
from swift.common import utils
from swift.common.utils import mkdirs, normalize_timestamp, NullLogger
from swift.common.wsgi import SwiftHttpProtocol
from swift.container import server as container_server
from swift.obj import server as object_server
from swift.proxy import server as proxy_server
@ -212,17 +213,28 @@ def setup_servers(the_object_server=object_server, extra_conf=None):
nl = NullLogger()
logging_prosv = proxy_logging.ProxyLoggingMiddleware(
listing_formats.ListingFilter(prosrv), conf, logger=prosrv.logger)
prospa = spawn(wsgi.server, prolis, logging_prosv, nl)
acc1spa = spawn(wsgi.server, acc1lis, acc1srv, nl)
acc2spa = spawn(wsgi.server, acc2lis, acc2srv, nl)
con1spa = spawn(wsgi.server, con1lis, con1srv, nl)
con2spa = spawn(wsgi.server, con2lis, con2srv, nl)
obj1spa = spawn(wsgi.server, obj1lis, obj1srv, nl)
obj2spa = spawn(wsgi.server, obj2lis, obj2srv, nl)
obj3spa = spawn(wsgi.server, obj3lis, obj3srv, nl)
obj4spa = spawn(wsgi.server, obj4lis, obj4srv, nl)
obj5spa = spawn(wsgi.server, obj5lis, obj5srv, nl)
obj6spa = spawn(wsgi.server, obj6lis, obj6srv, nl)
prospa = spawn(wsgi.server, prolis, logging_prosv, nl,
protocol=SwiftHttpProtocol)
acc1spa = spawn(wsgi.server, acc1lis, acc1srv, nl,
protocol=SwiftHttpProtocol)
acc2spa = spawn(wsgi.server, acc2lis, acc2srv, nl,
protocol=SwiftHttpProtocol)
con1spa = spawn(wsgi.server, con1lis, con1srv, nl,
protocol=SwiftHttpProtocol)
con2spa = spawn(wsgi.server, con2lis, con2srv, nl,
protocol=SwiftHttpProtocol)
obj1spa = spawn(wsgi.server, obj1lis, obj1srv, nl,
protocol=SwiftHttpProtocol)
obj2spa = spawn(wsgi.server, obj2lis, obj2srv, nl,
protocol=SwiftHttpProtocol)
obj3spa = spawn(wsgi.server, obj3lis, obj3srv, nl,
protocol=SwiftHttpProtocol)
obj4spa = spawn(wsgi.server, obj4lis, obj4srv, nl,
protocol=SwiftHttpProtocol)
obj5spa = spawn(wsgi.server, obj5lis, obj5srv, nl,
protocol=SwiftHttpProtocol)
obj6spa = spawn(wsgi.server, obj6lis, obj6srv, nl,
protocol=SwiftHttpProtocol)
context["test_coros"] = \
(prospa, acc1spa, acc2spa, con1spa, con2spa, obj1spa, obj2spa, obj3spa,
obj4spa, obj5spa, obj6spa)

View File

@ -1415,7 +1415,8 @@ class DiskFileManagerMixin(BaseDiskFileTestMixin):
df2_suffix, df2_hash,
"1525354556.65758.ts")))
# Expire the tombstones
# Cache the hashes and expire the tombstones
self.df_mgr.get_hashes(self.existing_device, '0', [], POLICIES[0])
the_time[0] += 2 * self.df_mgr.reclaim_age
hashes = list(self.df_mgr.yield_hashes(
@ -1440,7 +1441,30 @@ class DiskFileManagerMixin(BaseDiskFileTestMixin):
self.testdir, self.existing_device, 'objects', '0',
df2_suffix, df2_hash)))
# The empty suffix dirs are gone
# The empty suffix dirs, and partition are still there
self.assertTrue(os.path.isdir(os.path.join(
self.testdir, self.existing_device, 'objects', '0',
df1_suffix)))
self.assertTrue(os.path.isdir(os.path.join(
self.testdir, self.existing_device, 'objects', '0',
df2_suffix)))
# but the suffixes are invalidated
part_dir = os.path.join(
self.testdir, self.existing_device, 'objects', '0')
invalidations_file = os.path.join(
part_dir, diskfile.HASH_INVALIDATIONS_FILE)
with open(invalidations_file) as f:
self.assertEqual('%s\n%s' % (df1_suffix, df2_suffix),
f.read().strip('\n')) # sanity
# the next time get_hashes runs
with mock.patch('time.time', mock_time):
hashes = self.df_mgr.get_hashes(
self.existing_device, '0', [], POLICIES[0])
self.assertEqual(hashes, {})
# ... the suffixes will get cleaned up
self.assertFalse(os.path.exists(os.path.join(
self.testdir, self.existing_device, 'objects', '0',
df1_suffix)))
@ -1448,8 +1472,9 @@ class DiskFileManagerMixin(BaseDiskFileTestMixin):
self.testdir, self.existing_device, 'objects', '0',
df2_suffix)))
# The empty partition dir is gone
self.assertFalse(os.path.exists(os.path.join(
# but really it's not diskfile's job to decide if a partition belongs
# on a node or not
self.assertTrue(os.path.isdir(os.path.join(
self.testdir, self.existing_device, 'objects', '0')))
def test_focused_yield_hashes_does_not_clean_up(self):

View File

@ -1327,12 +1327,31 @@ class TestGlobalSetupObjectReconstructor(unittest.TestCase):
for stat_key, expected in expected_stats.items():
stat_method, stat_prefix = stat_key
self.assertStatCount(stat_method, stat_prefix, expected)
stub_data = self.reconstructor._get_hashes(
'sda1', 2, self.policy, do_listdir=True)
stub_data.update({'7ca': {None: '8f19c38e1cf8e2390d4ca29051407ae3'}})
pickle_path = os.path.join(part_path, 'hashes.pkl')
with open(pickle_path, 'w') as f:
pickle.dump(stub_data, f)
# part 2 should be totally empty
hash_gen = self.reconstructor._df_router[self.policy].yield_hashes(
'sda1', '2', self.policy)
'sda1', '2', self.policy, suffixes=stub_data.keys())
for path, hash_, ts in hash_gen:
self.fail('found %s with %s in %s' % (hash_, ts, path))
# even the partition directory is gone
new_hashes = self.reconstructor._get_hashes(
'sda1', 2, self.policy, do_listdir=True)
self.assertFalse(new_hashes)
# N.B. the partition directory is removed next pass
ssync_calls = []
with mocked_http_conn() as request_log:
with mock.patch('swift.obj.reconstructor.ssync_sender',
self._make_fake_ssync(ssync_calls)):
self.reconstructor.reconstruct(override_partitions=[2])
self.assertEqual([], ssync_calls)
self.assertFalse(os.access(part_path, os.F_OK))
def test_process_job_all_success(self):

View File

@ -531,7 +531,7 @@ class CommonObjectControllerMixin(BaseObjectControllerMixin):
self.assertNotIn('X-Delete-At-Host', headers)
self.assertNotIn('X-Delete-At-Device', headers)
def test_DELETE_write_affinity_before_replication(self):
def test_DELETE_write_affinity_after_replication(self):
policy_conf = self.app.get_policy_options(self.policy)
policy_conf.write_affinity_handoff_delete_count = self.replicas() // 2
policy_conf.write_affinity_is_local_fn = (
@ -545,7 +545,7 @@ class CommonObjectControllerMixin(BaseObjectControllerMixin):
self.assertEqual(resp.status_int, 204)
def test_DELETE_write_affinity_after_replication(self):
def test_DELETE_write_affinity_before_replication(self):
policy_conf = self.app.get_policy_options(self.policy)
policy_conf.write_affinity_handoff_delete_count = self.replicas() // 2
policy_conf.write_affinity_is_local_fn = (
@ -1416,6 +1416,36 @@ class TestReplicatedObjController(CommonObjectControllerMixin,
self.assertIn(req.swift_entity_path.decode('utf-8'), log_lines[0])
self.assertIn('ERROR with Object server', log_lines[0])
def test_DELETE_with_write_affinity(self):
policy_conf = self.app.get_policy_options(self.policy)
policy_conf.write_affinity_handoff_delete_count = self.replicas() // 2
policy_conf.write_affinity_is_local_fn = (
lambda node: node['region'] == 1)
req = swift.common.swob.Request.blank('/v1/a/c/o', method='DELETE')
codes = [204, 204, 404, 204]
with mocked_http_conn(*codes):
resp = req.get_response(self.app)
self.assertEqual(resp.status_int, 204)
codes = [204, 404, 404, 204]
with mocked_http_conn(*codes):
resp = req.get_response(self.app)
self.assertEqual(resp.status_int, 204)
policy_conf.write_affinity_handoff_delete_count = 2
codes = [204, 204, 404, 204, 404]
with mocked_http_conn(*codes):
resp = req.get_response(self.app)
self.assertEqual(resp.status_int, 204)
codes = [204, 404, 404, 204, 204]
with mocked_http_conn(*codes):
resp = req.get_response(self.app)
self.assertEqual(resp.status_int, 204)
def test_PUT_error_during_transfer_data(self):
class FakeReader(object):
def read(self, size):
@ -1816,15 +1846,39 @@ class TestReplicatedObjController(CommonObjectControllerMixin,
@patch_policies(
[StoragePolicy(0, '1-replica', True),
StoragePolicy(1, '5-replica', False),
StoragePolicy(1, '4-replica', False),
StoragePolicy(2, '8-replica', False),
StoragePolicy(3, '15-replica', False)],
fake_ring_args=[
{'replicas': 1}, {'replicas': 5}, {'replicas': 8}, {'replicas': 15}])
{'replicas': 1}, {'replicas': 4}, {'replicas': 8}, {'replicas': 15}])
class TestReplicatedObjControllerVariousReplicas(CommonObjectControllerMixin,
unittest.TestCase):
controller_cls = obj.ReplicatedObjectController
def test_DELETE_with_write_affinity(self):
policy_index = 1
self.policy = POLICIES[policy_index]
policy_conf = self.app.get_policy_options(self.policy)
self.app.container_info['storage_policy'] = policy_index
policy_conf.write_affinity_handoff_delete_count = \
self.replicas(self.policy) // 2
policy_conf.write_affinity_is_local_fn = (
lambda node: node['region'] == 1)
req = swift.common.swob.Request.blank('/v1/a/c/o', method='DELETE')
codes = [204, 204, 404, 404, 204, 204]
with mocked_http_conn(*codes):
resp = req.get_response(self.app)
self.assertEqual(resp.status_int, 204)
policy_conf.write_affinity_handoff_delete_count = 1
codes = [204, 204, 404, 404, 204]
with mocked_http_conn(*codes):
resp = req.get_response(self.app)
self.assertEqual(resp.status_int, 204)
@patch_policies()
class TestReplicatedObjControllerMimePutter(BaseObjectControllerMixin,

View File

@ -71,7 +71,7 @@ from swift.common import utils, constraints
from swift.common.utils import hash_path, storage_directory, \
parse_content_type, parse_mime_headers, \
iter_multipart_mime_documents, public, mkdirs, NullLogger
from swift.common.wsgi import monkey_patch_mimetools, loadapp, ConfigString
from swift.common.wsgi import loadapp, ConfigString
from swift.proxy.controllers import base as proxy_base
from swift.proxy.controllers.base import get_cache_key, cors_validation, \
get_account_info, get_container_info
@ -97,7 +97,6 @@ def do_setup(object_server):
# setup test context and break out some globals for convenience
global _test_context, _testdir, _test_servers, _test_sockets, \
_test_POLICIES
monkey_patch_mimetools()
_test_context = setup_servers(object_server)
_testdir = _test_context["testdir"]
_test_servers = _test_context["test_servers"]
@ -3269,37 +3268,35 @@ class TestReplicatedObjectController(
self.assertNotEqual(last_modified_put, last_modified_head)
_do_conditional_GET_checks(last_modified_head)
@unpatch_policies
def test_PUT_auto_content_type(self):
with save_globals():
controller = ReplicatedObjectController(
self.app, 'account', 'container', 'object')
prolis = _test_sockets[0]
def test_content_type(filename, expected):
# The three responses here are for account_info() (HEAD to
# account server), container_info() (HEAD to container server)
# and three calls to _connect_put_node() (PUT to three object
# servers)
set_http_connect(201, 201, 201, 201, 201,
give_content_type=lambda content_type:
self.assertEqual(content_type,
next(expected)))
# We need to include a transfer-encoding to get past
# constraints.check_object_creation()
req = Request.blank('/v1/a/c/%s' % filename, {},
headers={'transfer-encoding': 'chunked'})
self.app.update_request(req)
self.app.memcache.store = {}
res = controller.PUT(req)
# If we don't check the response here we could miss problems
# in PUT()
self.assertEqual(res.status_int, 201)
def do_test(ext, content_type):
sock = connect_tcp(('localhost', prolis.getsockname()[1]))
fd = sock.makefile('rwb')
fd.write(b'PUT /v1/a/c/o.%s HTTP/1.1\r\n'
b'Host: localhost\r\n'
b'X-Storage-Token: t\r\nContent-Length: 0\r\n\r\n' %
ext.encode())
fd.flush()
headers = readuntil2crlfs(fd)
exp = b'HTTP/1.1 201'
self.assertEqual(headers[:len(exp)], exp)
test_content_type('test.jpg', iter(['', '', 'image/jpeg',
'image/jpeg', 'image/jpeg']))
test_content_type('test.html', iter(['', '', 'text/html',
'text/html', 'text/html']))
test_content_type('test.css', iter(['', '', 'text/css',
'text/css', 'text/css']))
fd.write(b'GET /v1/a/c/o.%s HTTP/1.1\r\n'
b'Host: localhost\r\nConnection: close\r\n'
b'X-Storage-Token: t\r\n\r\n' % ext.encode())
fd.flush()
headers = readuntil2crlfs(fd)
exp = b'HTTP/1.1 200'
self.assertIn(b'Content-Type: %s' % content_type.encode(),
headers.split(b'\r\n'))
sock.close()
do_test('jpg', 'image/jpeg')
do_test('html', 'text/html')
do_test('css', 'text/css')
def test_custom_mime_types_files(self):
swift_dir = mkdtemp()

View File

@ -24,7 +24,7 @@ from swift.common.middleware.copy import ServerSideCopyMiddleware
from swift.common.storage_policy import StoragePolicy
from swift.common.swob import Request
from swift.common.utils import mkdirs, split_path
from swift.common.wsgi import monkey_patch_mimetools, WSGIContext
from swift.common.wsgi import WSGIContext
from swift.obj import server as object_server
from swift.proxy import server as proxy
import swift.proxy.controllers
@ -138,7 +138,6 @@ class TestObjectSysmeta(unittest.TestCase):
account_ring=FakeRing(replicas=1),
container_ring=FakeRing(replicas=1))
self.copy_app = ServerSideCopyMiddleware(self.app, {})
monkey_patch_mimetools()
self.tmpdir = mkdtemp()
self.testdir = os.path.join(self.tmpdir,
'tmp_test_object_server_ObjectController')

View File

@ -45,6 +45,7 @@ commands =
test/unit/common/middleware/test_copy.py \
test/unit/common/middleware/test_crossdomain.py \
test/unit/common/middleware/test_domain_remap.py \
test/unit/common/middleware/test_formpost.py \
test/unit/common/middleware/test_gatekeeper.py \
test/unit/common/middleware/test_healthcheck.py \
test/unit/common/middleware/test_keystoneauth.py \
@ -96,6 +97,9 @@ commands =
[testenv:py36]
commands = {[testenv:py35]commands}
[testenv:py37]
commands = {[testenv:py35]commands}
[testenv:pep8]
basepython = python2.7
commands =