Since Python 2.7, unittest in the standard library has included multiple
facilities for skipping tests, via decorators as well as an exception.
Switch to those directly, rather than importing nose.
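For reference, a minimal sketch of the stdlib facilities in question (the skip
decorators and the SkipTest exception):

    import unittest

    class SkipExamples(unittest.TestCase):
        @unittest.skip("always skipped")
        def test_skip_unconditionally(self):
            pass

        @unittest.skipIf(True, "skipped when the condition is true")
        def test_skip_conditionally(self):
            pass

        def test_skip_from_inside(self):
            # roughly what nose's SkipTest used to provide
            raise unittest.SkipTest("skipped at runtime")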
Change-Id: I4009033473ea24f0d0faed3670db844f40051f30
Currently, our integrity checking for objects is pretty weak when it
comes to object metadata. If the extended attributes on a .data or
.meta file get corrupted in such a way that we can still unpickle it,
we don't have anything that detects that.
This could be especially bad with encrypted etags; if the encrypted
etag (X-Object-Sysmeta-Crypto-Etag or whatever it is) gets some bits
flipped, then we'll cheerfully decrypt the cipherjunk into plainjunk,
then send it to the client. Net effect is that the client sees a GET
response with an ETag that doesn't match the MD5 of the object *and*
Swift has no way of detecting and quarantining this object.
Note that, with an unencrypted object, if the ETag metadatum gets
mangled, then the object will be quarantined by the object server or
auditor, whichever notices first.
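A hedged sketch of the kind of metadata integrity check this enables; the xattr
key names and layout below are illustrative assumptions, not Swift's exact
implementation:

    import os
    import pickle
    from hashlib import md5

    # Illustrative xattr names, not Swift's actual keys.
    METADATA_KEY = 'user.swift.metadata'
    CHECKSUM_KEY = 'user.swift.metadata_checksum'

    def write_metadata(path, metadata):
        # Store a checksum of the serialized metadata alongside it, so that
        # flipped bits can be detected even if the pickle still loads.
        serialized = pickle.dumps(metadata, protocol=2)
        os.setxattr(path, METADATA_KEY, serialized)
        os.setxattr(path, CHECKSUM_KEY,
                    md5(serialized).hexdigest().encode('ascii'))

    def read_metadata(path):
        serialized = os.getxattr(path, METADATA_KEY)
        expected = os.getxattr(path, CHECKSUM_KEY).decode('ascii')
        if md5(serialized).hexdigest() != expected:
            raise ValueError('metadata checksum mismatch: quarantine candidate')
        return pickle.loads(serialized)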
As part of this commit, I also ripped out some mocking of
getxattr/setxattr in tests. It appears to be there to allow unit tests
to run on systems where /tmp doesn't support xattrs. However, since
the mock is keyed off of inode number and inode numbers get re-used,
there's lots of leakage between different test runs. On a real FS,
unlinking a file and then creating a new one of the same name will
also reset the xattrs; this isn't the case with the mock.
The mock was pretty old; Ubuntu 12.04 and up all support xattrs in
/tmp, and recent Red Hat / CentOS releases do too. The xattr mock was
added in 2011; maybe it was to support Ubuntu Lucid Lynx?
Bonus: now you can pause a test with the debugger, inspect its files
in /tmp, and actually see the xattrs along with the data.
Since this patch now uses a real filesystem for testing filesystem
operations, tests are skipped if the underlying filesystem does not
support setting xattrs (e.g. tmpfs, or more than 4k of xattrs on ext4).
References to "/tmp" have been replaced with calls to
tempfile.gettempdir(). This will allow setting the TMPDIR envvar in
test setup and getting an XFS filesystem instead of ext4 or tmpfs.
THIS PATCH SIGNIFICANTLY CHANGES TESTING ENVIRONMENTS
With this patch, every test environment will require TMPDIR to point to
a filesystem that supports at least 4k of extended attributes. Neither
ext4 nor tmpfs supports this. XFS is recommended.
So why all the SkipTests? Why not simply raise an error? We still need
the tests to run on the base image for OpenStack's CI system. Since
we were previously mocking out xattr, there wasn't a problem, but we
also weren't actually testing anything. This patch adds functionality
to validate xattr data, so we need to drop the mock.
`test.unit.skip_if_no_xattrs()` is also imported into `test.functional`
so that functional tests can import it from the functional test
namespace.
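A minimal sketch of what such a helper might look like (the actual
implementation may differ):

    import os
    import tempfile
    import unittest

    def skip_if_no_xattrs():
        # Probe whether the test TMPDIR supports extended attributes;
        # if it does not, skip rather than fail.
        with tempfile.NamedTemporaryFile(dir=tempfile.gettempdir()) as probe:
            try:
                os.setxattr(probe.name, 'user.test.xattr-check', b'1')
            except OSError:
                raise unittest.SkipTest(
                    'xattrs not supported in %s' % tempfile.gettempdir())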
The related OpenStack CI infrastructure changes are made in
https://review.openstack.org/#/c/394600/.
Co-Authored-By: John Dickinson <me@not.mn>
Change-Id: I98a37c0d451f4960b7a12f648e4405c6c6716808
Keymaster middleware does some nice input validation on
base64-encoded strings; pull that out somewhere common so
other things (like SLOs with inlined data) can use it, too.
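For illustration, a hedged sketch of the kind of strict validation being
shared; the helper name, signature and behaviour are assumptions:

    import base64
    import binascii

    def strict_b64decode(value, allow_line_breaks=False):
        # Reject anything that is not well-formed base64, rather than
        # silently ignoring bad characters the way a lax decoder can.
        if not isinstance(value, str):
            raise ValueError('Expected base64 string')
        if allow_line_breaks:
            value = value.replace('\r', '').replace('\n', '')
        try:
            return base64.b64decode(value, validate=True)
        except (TypeError, binascii.Error):
            raise ValueError('Invalid base64')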
Change-Id: I3436bf3724884fe252c6cb603243c1195f67b701
Currently, if limit=0 is passed to lock_path then the method will time
out and never acquire a lock, which is reasonable but not useful. This
is a potential pitfall given that in other contexts, for example
diskfile's replication_lock, a concurrency value of 0 means 'no limit'.
It would be easy to erroneously assume that the same semantics hold for
lock_path.
To avoid that pitfall, this patch makes it an error to pass limit<1 to
lock_path.
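A sketch of the guard being added (parameter names assumed from context):

    def lock_path(directory, timeout=10, timeout_class=None, limit=1):
        # Reject limits that could never be satisfied; 0 no longer
        # silently means "wait for the timeout and then fail".
        if limit < 1:
            raise ValueError('limit must be greater than or equal to 1')
        # ... acquire one of the `limit` lock files as before ...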
Related-Change: I3c3193344c7a57a8a4fc7932d1b10e702efd3572
Change-Id: I9ea7ee5b93e3d6924bff9790141b107b53f77883
use assertRaises where applicable; use the with_tempdir
decorator
Related-Change: I3c3193344c7a57a8a4fc7932d1b10e702efd3572
Change-Id: Ie83584fc22a5ec6e0a568e39c90caf577da29497
This commit replaces the boolean replication_one_per_device with an
integer replication_concurrency_per_device. The new configuration
parameter is passed to utils.lock_path(), which now accepts as an
argument a limit on the number of locks that can be acquired for a
specific path.
Instead of trying to lock path/.lock, utils.lock_path() now tries to lock
files path/.lock-X, where X is in the range [0, N), N being the limit on
the number of locks allowed for the path. The default value of limit is
set to 1.
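A hedged sketch of the numbered-lock-file idea (not the exact implementation,
which also handles timeouts and context management):

    import fcntl
    import os

    def try_lock_one_of(path, limit=1):
        # Try a non-blocking exclusive flock on path/.lock-0 through
        # path/.lock-(limit-1); return the fd of the first lock that
        # succeeds, else None.
        for index in range(limit):
            fd = os.open(os.path.join(path, '.lock-%d' % index),
                         os.O_WRONLY | os.O_CREAT)
            try:
                fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
                return fd
            except (IOError, OSError):
                os.close(fd)
        return None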
Change-Id: I3c3193344c7a57a8a4fc7932d1b10e702efd3572
add more assertions about args that are passed to
os module functions
Related-Change: Ida15e72ae4ecdc2d6ce0d37bd99c2d86bd4e5ddc
Change-Id: Iee483274aff37fc9930cd54008533de2917157f4
Drop the group comparison from drop_privileges test as it isn't
valid since os.setgroups() is mocked.
Change-Id: Ida15e72ae4ecdc2d6ce0d37bd99c2d86bd4e5ddc
Closes-Bug: #1724342
The object server runs certain IO-intensive methods outside the main
pthread for performance. If one of those methods tries to log, this can
cause a crash that eventually leads to an object server with hundreds
or thousands of greenthreads, all deadlocked.
The short version of the story is that logging.SysLogHandler has a
mutex which Eventlet monkey-patches. However, the monkey-patched mutex
sometimes breaks if used across different pthreads, and it breaks in
such a way that it is still considered held. After that happens, any
attempt to emit a log message blocks the calling greenthread forever.
The fix is to use a mutex that works across different greenlets and
across different pthreads. This patch introduces such a lock based on
an anonymous pipe.
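A simplified sketch of the idea (the real lock also tracks ownership, supports
recursion, and trampolines on the read end so greenthreads yield instead of
blocking the hub):

    import os

    class PipeMutex(object):
        # A mutex backed by an anonymous pipe: the pipe holds exactly one
        # byte when unlocked. Acquire by reading the byte (blocks until it
        # is available); release by writing it back. This works across
        # pthreads as well as greenlets because it relies on file
        # descriptors rather than a pthread mutex.
        def __init__(self):
            self.rfd, self.wfd = os.pipe()
            os.write(self.wfd, b'-')  # start out unlocked

        def acquire(self):
            os.read(self.rfd, 1)

        def release(self):
            os.write(self.wfd, b'-')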
Change-Id: I57decefaf5bbed57b97a62d0df8518b112917480
Closes-Bug: 1710328
Calling dump_recon_cache with a key mapped to an empty dict value
causes the key to be removed from the cache entry. Doing the same
again causes the key to be added back and mapped to an empty dict, and
the key continues to toggle as calls are repeated. This behavior is
seen on the Related-Bug report.
This patch fixes dump_recon_cache to make deletion of a key
idempotent. This fix is needed for the Related-Change which makes use
of empty dicts with dump_recon_cache to clear unwanted keys from the
cache.
The only caller that currently sets empty dict values is
obj/auditor.py, where the intended behavior appears to be as per this
patch.
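A sketch of the intended merge semantics (simplified; the real function also
handles nested dicts and writes the cache file atomically):

    def merge_recon_entry(cache_entry, cache_dict):
        # An empty dict value means "delete this key"; deleting a key that
        # is already absent is a no-op, so repeated calls are idempotent.
        for key, value in cache_dict.items():
            if value == {}:
                cache_entry.pop(key, None)
            else:
                cache_entry[key] = value
        return cache_entry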
Related-Change: I28925a37f3985c9082b5a06e76af4dc3ec813abe
Related-Bug: #1704858
Change-Id: If9638b4e7dba0ec2c7bd95809cec6c5e18e9301e
If swift-recon/swift-get-nodes/swift-object-info is used with the
swiftdir option, they will read rings from the given directory; however,
they still use /etc/swift/swift.conf to find the policies on the
current node.
This makes it impossible to maintain a local swift.conf copy (if you
don't have write access to /etc/swift) or check multiple clusters from
the same node.
Until now, swift-recon was also not usable with storage policy aliases;
this patch fixes that as well.
Closes-Bug: 1577582
Closes-Bug: 1604707
Closes-Bug: 1617951
Co-Authored-By: Alistair Coles <alistairncoles@gmail.com>
Co-Authored-By: Thiago da Silva <thiago@redhat.com>
Change-Id: I13188d42ec19e32e4420739eacd1e5b454af2ae3
This patch adds methods to increase the partition power of an existing
object ring without downtime for users, using a 3-step process. Data
won't be moved to other nodes; objects using the new, increased partition
power will be located on the same device and are hardlinked to avoid
data movement.
1. A new setting "next_part_power" will be added to the rings, and once
the proxy server has reloaded the rings it will send this value to the
object servers on any write operation. Object servers will then create a
hard link in the new location to the original DiskFile object. Already
existing data will be relinked into the new locations by a new relinker
tool, again using hard links (see the sketch after step 3).
2. The actual partition power itself will be increased. Servers will now
use the new partition power to read from and write to. Hard links in the
old object locations that are no longer required are then removed by the
relinker tool; the relinker tool reads the next_part_power setting to
find object locations that need to be cleaned up.
3. The "next_part_power" flag will be removed.
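A hedged sketch of the relinking in step 1; this is illustrative only, the
real tool walks the object directories and handles errors and cleanup:

    import os
    import struct

    def partition_for(hash_hex, part_power):
        # A partition is the top `part_power` bits of the object's md5 hash.
        return struct.unpack_from(
            '>I', bytes.fromhex(hash_hex))[0] >> (32 - part_power)

    def relink(objects_dir, old_file, hash_hex, next_part_power):
        # Hard-link an existing object file into the partition directory it
        # will live in once the partition power is increased; the original
        # file stays in place until the relinker cleans it up in step 2.
        suffix, hash_dir, filename = old_file.split(os.sep)[-3:]
        new_part = partition_for(hash_hex, next_part_power)
        new_dir = os.path.join(objects_dir, str(new_part), suffix, hash_dir)
        os.makedirs(new_dir, exist_ok=True)
        try:
            os.link(old_file, os.path.join(new_dir, filename))
        except FileExistsError:
            pass  # already relinked on a previous run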
This mostly implements the spec in [1]; however, it does not use an
"epoch" as described there. The idea of the epoch was to store data
using different partition powers in their own namespaces to avoid
conflicts with auditors and replicators, as well as to be able to abort
such an operation and just remove the new tree. This would require heavy
changes to the on-disk data layout, and other object-server
implementations would be required to adopt this scheme too.
Instead, the object-replicator is now aware that a partition power
increase is in progress and will skip replication of data in that
storage policy; the relinker tool should simply be run, and afterwards
the partition power is increased. This shouldn't take much time (it's
only walking the filesystem and hardlinking), so the impact should be
low. The relinker can be run on all storage nodes at the same time in
parallel to decrease the required time (though this is not mandatory).
Failures during relinking should not affect cluster operations -
relinking can even be aborted manually and restarted later.
Auditors do not quarantine objects written to a path with a different
partition power and therefore keep working as before (though, in the
worst case, they read each object twice before the no-longer-needed hard
links are removed).
Co-Authored-By: Alistair Coles <alistair.coles@hpe.com>
Co-Authored-By: Matthew Oliver <matt@oliver.net.au>
Co-Authored-By: Tim Burke <tim.burke@gmail.com>
[1] https://specs.openstack.org/openstack/swift-specs/specs/in_progress/
increasing_partition_power.html
Change-Id: I7d6371a04f5c1c4adbb8733a71f3c177ee5448bb
Following OpenStack Style Guidelines:
[1] http://docs.openstack.org/developer/hacking/#unit-tests-and-assertraises
[H203] Unit test assertions tend to give better messages for more specific
assertions. As a result, assertIsNone(...) is preferred over
assertEqual(None, ...) and assertIs(..., None)
Change-Id: If4db8872c4f5705c1fff017c4891626e9ce4d1e4
The ismount_raw method does not work inside containers if disks are
mounted on the host system and only the mount points are exposed inside
the containers. In this case the inode and device checks fail, making
this option unusable.
Mounting devices into the containers would solve this. However, this
would require all processes that need access to a device to run inside
the same container, which counteracts the container concept.
This patch adds the possibility to place stub files named ".ismount" in
the root directory of any device; Swift then assumes a given device to be
mounted if that file exists. This should be transparent to existing
clusters.
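A minimal sketch of the new check (helper names assumed):

    import os

    def is_mount(path):
        # Treat a device root as mounted if an ".ismount" stub file is
        # present; otherwise fall back to the usual device/inode check,
        # which does not work when only the mount point is exposed
        # inside a container.
        if os.path.isfile(os.path.join(path, '.ismount')):
            return True
        return os.path.ismount(path)  # stand-in for the raw inode/device check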
Change-Id: I9d9fc0a4447a8c5dd39ca60b274c119af6b4c28f
Often, we want the current timestamp. May as well improve the ergonomics
a bit and provide a class method for it.
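Roughly, the ergonomic addition looks like this (abbreviated sketch, not the
full Timestamp class):

    import time

    class Timestamp(object):
        # Abbreviated stand-in for swift.common.utils.Timestamp
        def __init__(self, timestamp):
            self.timestamp = float(timestamp)

        @classmethod
        def now(cls, *args, **kwargs):
            # The convenience: Timestamp.now() instead of Timestamp(time.time())
            return cls(time.time(), *args, **kwargs)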
Change-Id: I3581c635c094a8c4339e9b770331a03eab704074
Include a couple trivial cases, and verify that surrogate pairs get
collapsed.
Also, move it to a more-appropriate class.
Related-Change: I4c570c08c770636d57b1157e19d5b7034fd9ed4e (patchset 3)
Change-Id: Iab0fdafe08d06a9d677dc421e60779e94d27ba9b
Follow up for related change:
- fix typos
- use common helper methods
- refactor some tests to reduce duplicate code
Related-Change: Idd155401982a2c48110c30b480966a863f6bd305
Change-Id: I2f91a2f31e4c1b11f3d685fa8166c1a25eb87429
This patch enables efficient PUT/GET for a globally distributed cluster[1].
Problem:
Erasure coding can decrease the amount of actually stored data compared
to the replicated model. For example, the parameters ec_k=6, ec_m=3
result in 1.5x the original data, which is smaller than the 3x of
replication. However, unlike replication, erasure coding requires at
least ec_k of the total ec_k + ec_m fragments to be available to service
a read (e.g. 6 of 9 in the case above). As such, if we store an EC
object in a Swift cluster spanning 2 geographically distributed data
centers which have the same volume of disks, the fragments are likely to
be stored evenly (about 4 and 5), so we still need to access a faraway
data center to decode the original object. In addition, if one of the
data centers were lost in a disaster, the stored objects would be lost
forever, and we would have to cry a lot. To ensure highly durable
storage, you might think of making *more* parity fragments (e.g. ec_k=6,
ec_m=10); unfortunately, this causes *significant* performance
degradation due to the cost of the mathematical calculations for erasure
coding encode/decode.
How this resolves the problem:
EC Fragment Duplication extends the initial solution to add *more*
fragments from which to rebuild an object, similar to the solution
described above. The difference is that it makes *copies* of the encoded
fragments. Experimental results[1][2] show that employing small ec_k and
ec_m gives enough performance to store/retrieve objects.
On PUT:
- Encode the incoming object with small ec_k and ec_m <- faster!
- Make duplicate copies of the encoded fragments. The number of copies
  is determined by 'ec_duplication_factor' in swift.conf
- Store all fragments in the Swift Global EC Cluster
The duplicated fragments increase pressure on the existing requirements
for decoding objects in service of a read request. All fragments are
stored with their X-Object-Sysmeta-Ec-Frag-Index. In this change, the
X-Object-Sysmeta-Ec-Frag-Index represents the actual fragment index
encoded by PyECLib, so there *will* be duplicates. Any time we must
decode the original object data, we must only consider ec_k fragments
that are unique according to their X-Object-Sysmeta-Ec-Frag-Index; no
duplicate X-Object-Sysmeta-Ec-Frag-Index may be used when decoding an
object, so duplicates should be expected and avoided where possible.
On GET:
This patch includes the following changes:
- Change the GET path to sort primary nodes into grouped subsets, so
  that each subset includes unique fragments
- Change the reconstructor to be more aware of possibly duplicate
  fragments
For example, with this change, a policy could be configured such that
swift.conf:
ec_num_data_fragments = 2
ec_num_parity_fragments = 1
ec_duplication_factor = 2
(object ring must have 6 replicas)
At the Object-Server:
  node index (from object ring):  0 1 2 3 4 5 <- keep node index for
                                                 reconstruct decisions
  X-Object-Sysmeta-Ec-Frag-Index: 0 1 2 0 1 2 <- each object keeps the
                                                 actual fragment index for
                                                 the backend (PyECLib)
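Under the example configuration above, the mapping from ring node index to
backend fragment index could be computed as below (hedged sketch; the names
are illustrative):

    ec_num_data_fragments = 2
    ec_num_parity_fragments = 1
    ec_duplication_factor = 2
    ec_n_unique_fragments = ec_num_data_fragments + ec_num_parity_fragments

    # 6 ring replicas in total
    ring_replicas = ec_n_unique_fragments * ec_duplication_factor

    def backend_frag_index(node_index):
        # Duplicate copies reuse the same PyECLib fragment indexes 0..2.
        return node_index % ec_n_unique_fragments

    print([backend_frag_index(i) for i in range(ring_replicas)])
    # -> [0, 1, 2, 0, 1, 2], matching the table above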
Additional improvements to Global EC Cluster Support will require
features such as Composite Rings, and more efficient fragment
rebalance/reconstruction.
1: http://goo.gl/IYiNPk (Swift Design Spec Repository)
2: http://goo.gl/frgj6w (Slide Share for OpenStack Summit Tokyo)
Doc-Impact
Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com>
Change-Id: Idd155401982a2c48110c30b480966a863f6bd305
This patch fixes Swift's IO priority control on the AArch64 architecture
by getting the correct __NR_ioprio_set value.
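A sketch of the architecture-dependent syscall number lookup; the values are
the kernel's __NR_ioprio_set for each architecture to the best of my
knowledge, and the call itself is simplified:

    import ctypes
    import os

    # __NR_ioprio_set differs per architecture; pick based on the running
    # machine (other architectures would need their own entries).
    NR_IOPRIO_SET = {
        'x86_64': 251,
        'aarch64': 30,
    }

    def ioprio_set(ioprio_class, ioprio_priority):
        arch = os.uname()[4]
        libc = ctypes.CDLL(None, use_errno=True)
        # which=1 (IOPRIO_WHO_PROCESS), who=0 (the current process);
        # the class occupies the top bits above IOPRIO_CLASS_SHIFT (13).
        return libc.syscall(NR_IOPRIO_SET[arch], 1, 0,
                            (ioprio_class << 13) | ioprio_priority)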
Change-Id: Ic93ce80fde223074e7d1a5338c8cf88863c6ddeb
Closes-Bug: #1658405
Instead of printing an error message and calling sys.exit() when a
section does not exist or reading the file fails, raise an exception
from readconf. Depending on whether a ValueError or an IOError is
raised, the caller can decide whether to exit or continue. If an
exception reaches the WSGI utilities, it bubbles all the way up.
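For example, a command-line caller can still exit while library code lets the
exception propagate (sketch; the config path and section are placeholders):

    import sys

    from swift.common.utils import readconf

    try:
        conf = readconf('/etc/swift/object-server.conf', 'app:object-server')
    except (ValueError, IOError) as err:
        # A CLI entry point may still choose to exit; code running under
        # the WSGI utilities can simply let the exception bubble up.
        print(err, file=sys.stderr)
        sys.exit(1)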
Change-Id: Ieb444f8c34e37f49bea21c3caf1c6c2d7bee5fb4
Closes-Bug: 1578321
There were several implementations of hashing the content
of a file in cli/recon.py and common/middleware/recon.py.
This patch relocates one implementation (_hash_for_ringfile,
introduced in the Related Change) to common/utils.py and refactors the
recon CLI and middleware to use that function.
Also improves use of mocking in the unit tests to eliminate passing
custom file opener functions to the ReconMiddleware get_ring_md5
and get_swift_conf_md5 methods.
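The relocated helper amounts to hashing a file's contents in chunks, roughly
as follows (sketch; the final name in common/utils.py may differ):

    from hashlib import md5

    def md5_hash_for_file(fname):
        # Read in fixed-size blocks so large ring files need not fit in memory.
        md5sum = md5()
        with open(fname, 'rb') as f:
            for block in iter(lambda: f.read(4096), b''):
                md5sum.update(block)
        return md5sum.hexdigest()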
Related-Change: I9623752c3cd2361f57864f3e938e1baf5e9292d7
Change-Id: Iaad88e49aadeb28f614aafa1e9596fe07ce9793a
Fixes this problem:
* swift-drive-audit needs to be run by root, because only root has
  "umount" permission
* swift-object servers typically run as user swift
* if swift-drive-audit is run by root, /var/cache/swift/drive.recon is
  owned by root, with mode 0o600
* recon middleware (inside swift-object-server) can't read this cache
  file: swift-object: Error reading recon cache file
This patch adds a "user" option to the drive-audit config file. The
recon cache is chowned to this user.
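A sketch of the effect of the new option (option name taken from the text
above; the default user is an assumption):

    import os
    import pwd

    def chown_recon_cache(cache_path, user='swift'):
        # swift-drive-audit runs as root, so hand the cache file over to
        # the user the object server runs as, making it readable by the
        # recon middleware.
        pw = pwd.getpwnam(user)
        os.chown(cache_path, pw.pw_uid, pw.pw_gid)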
Change-Id: Ibf20543ee690b7c5a37fabd1540fd5c0c7b638c9
Also clean up a comment and some exception text
Change-Id: I1e7755cc0468f9a3ba96a0dd24868f09a10c3df0
Related-Change: I24716e3271cf3370642e3755447e717fd7d9957c
This patch improves EC GET response handling:
- The proxy no longer requires all object servers to have a
durable file for the fragment archive that they return in
response to a GET. The proxy will now be satisfied if just
one object server has a durable file at the same timestamp
as fragments from other object servers.
This means that the proxy can now successfully GET an
object that had missing durable files when it was PUT.
- The proxy will now ensure that it has a quorum of *unique*
fragment indexes from object servers before considering a
GET to be successful.
- The proxy is now able to fetch multiple fragment archives
having different indexes from the same node. This enables
the proxy to successfully GET an object that has some
fragments that have landed on the same node, for example
after a rebalance.
This new behavior is facilitated by an exchange of new
headers on a GET request and response between the proxy and
object servers.
An object server now includes with a GET (or HEAD) response:
- X-Backend-Fragments: the value of this describes all
fragment archive indexes that the server has for the
object by encoding a map of the form: timestamp -> <list
of fragment indexes>
- X-Backend-Durable-Timestamp: the value of this is the
internal form of the timestamp of the newest durable file
that was found, if any.
- X-Backend-Data-Timestamp: the value of this is the
internal form of the timestamp of the data file that was
used to construct the diskfile.
A proxy server now includes with a GET request:
- X-Backend-Fragment-Preferences: the value of this
describes the proxy's current preference with respect to
those fragments that it would have object servers
return. It encodes a list of timestamps and, for each
timestamp, a list of fragment indexes that the proxy does
NOT require (because it already has them).
The presence of an X-Backend-Fragment-Preferences header
(even one with an empty list as its value) will cause the
object server to search for the most appropriate fragment
to return, disregarding the existence or not of any
durable file. The object server assumes that the proxy
knows best.
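By way of illustration, the exchange might look like this; the header values
are made up and the exact serialization shown is an assumption, not a
contract:

    # Proxy GET request to an object server:
    request_headers = {
        'X-Backend-Fragment-Preferences':
            '[{"timestamp": "1512345678.12345", "exclude": [1, 3]}]',
    }

    # Object server GET response:
    response_headers = {
        'X-Backend-Fragments': '{"1512345678.12345": [0, 2]}',
        'X-Backend-Durable-Timestamp': '1512345678.12345',
        'X-Backend-Data-Timestamp': '1512345678.12345',
    }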
Closes-Bug: 1469094
Closes-Bug: 1484598
Change-Id: I2310981fd1c4622ff5d1a739cbcc59637ffe3fc3
Co-Authored-By: Paul Luse <paul.e.luse@intel.com>
Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com>