Include a couple of trivial cases, and verify that surrogate pairs get
collapsed.
Also, move it to a more-appropriate class.
Related-Change: I4c570c08c770636d57b1157e19d5b7034fd9ed4e (patchset 3)
Change-Id: Iab0fdafe08d06a9d677dc421e60779e94d27ba9b
Follow up for related change:
- fix typos
- use common helper methods
- refactor some tests to reduce duplicate code
Related-Change: Idd155401982a2c48110c30b480966a863f6bd305
Change-Id: I2f91a2f31e4c1b11f3d685fa8166c1a25eb87429
This patch enables efficient PUT/GET for globally distributed clusters[1].
Problem:
Erasure coding can decrease the amount of actually stored data
compared to the replicated model. For example, with ec_k=6, ec_m=3 the
stored volume can be 1.5x the original data, which is smaller than the
3x of triple replication. However, unlike replication, erasure coding
requires at least ec_k fragments of the total ec_k + ec_m fragments to
be available to service a read (e.g. 6 of 9 in the case above). As
such, if we store an EC object into a swift cluster spanning 2
geographically distributed data centers which have the same volume of
disks, the fragments will likely be spread evenly (about 4 and 5), so
we still need to access a faraway data center to decode the original
object. In addition, if one of the data centers is lost in a disaster,
the stored objects are lost forever, and we have to cry a lot. To
ensure highly durable storage, you might think of making *more* parity
fragments (e.g. ec_k=6, ec_m=10); unfortunately this causes
*significant* performance degradation due to the cost of the
mathematical calculations for erasure coding encode/decode.
How this resolves the problem:
EC Fragment Duplication extends the initial solution by adding *more*
fragments from which to rebuild an object, similar to the solution
described above. The difference is that it makes *copies* of the
encoded fragments. Experimental results[1][2] show that employing
small ec_k and ec_m gives sufficient performance to store/retrieve
objects.
On PUT:
- Encode the incoming object with small ec_k and ec_m <- faster!
- Make duplicated copies of the encoded fragments. The # of copies
  is determined by 'ec_duplication_factor' in swift.conf
- Store all fragments in the Swift Global EC Cluster
The duplicated fragments increase pressure on existing requirements
when decoding objects in service to a read request. All fragments are
stored with their X-Object-Sysmeta-Ec-Frag-Index. In this change, the
X-Object-Sysmeta-Ec-Frag-Index represents the actual fragment index
encoded by PyECLib, so there *will* be duplicates. Any time we must
decode the original object data, we may only count the ec_k fragments
as unique according to their X-Object-Sysmeta-Ec-Frag-Index: no
duplicate X-Object-Sysmeta-Ec-Frag-Index may be used when decoding an
object, and duplicates should be expected and avoided where possible.
On GET:
This patch includes the following changes:
- Change the GET path to sort primary nodes into grouped subsets, so
  that each subset includes unique fragments
- Change the Reconstructor to be more aware of possibly duplicated
  fragments
For example, with this change, a policy could be configured in
swift.conf such that:
ec_num_data_fragments = 2
ec_num_parity_fragments = 1
ec_duplication_factor = 2
(object ring must have 6 replicas)
At Object-Server:
  node index (from object ring):  0 1 2 3 4 5 <- keep node index for
                                                 reconstruct decision
  X-Object-Sysmeta-Ec-Frag-Index: 0 1 2 0 1 2 <- each object keeps the
                                                 actual fragment index
                                                 for backend (PyECLib)
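Conceptually, the mapping from ring node index to backend fragment
index works like this (a minimal Python sketch; Swift's actual helper
names may differ):

  ec_num_data_fragments = 2
  ec_num_parity_fragments = 1
  ec_duplication_factor = 2
  ec_n_unique_fragments = ec_num_data_fragments + ec_num_parity_fragments

  def backend_frag_index(node_index):
      # duplicate fragments wrap around the unique fragment count
      return node_index % ec_n_unique_fragments

  # 3 unique fragments * duplication factor 2 = 6 ring replicas
  print([backend_frag_index(i) for i in range(6)])  # [0, 1, 2, 0, 1, 2]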
Additional improvements to Global EC Cluster Support will require
features such as Composite Rings and more efficient fragment
rebalance/reconstruction.
1: http://goo.gl/IYiNPk (Swift Design Spec Repository)
2: http://goo.gl/frgj6w (Slide Share for OpenStack Summit Tokyo)
DocImpact
Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com>
Change-Id: Idd155401982a2c48110c30b480966a863f6bd305
This patch fixes Swift's IO priority control on the AArch64 architecture
by getting the correct __NR_ioprio_set value.
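For reference, the syscall number differs per architecture; a hedged
Python sketch (syscall numbers from the kernel headers; the ioprio
value shown is illustrative):

  import ctypes
  import platform

  # __NR_ioprio_set: 251 on x86_64; 30 on AArch64 (asm-generic table)
  NR_IOPRIO_SET = {'x86_64': 251, 'aarch64': 30}[platform.machine()]
  IOPRIO_WHO_PROCESS = 1
  ioprio = (2 << 13) | 4  # best-effort class, priority 4

  libc = ctypes.CDLL(None, use_errno=True)
  libc.syscall(NR_IOPRIO_SET, IOPRIO_WHO_PROCESS, 0, ioprio)  # pid 0 = self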
Change-Id: Ic93ce80fde223074e7d1a5338c8cf88863c6ddeb
Closes-Bug: #1658405
Instead of printing an error message and calling sys.exit() when a
section does not exist or reading the file fails, raise an Exception
from readconf. Depending on the ValueError or IOError, the caller can
decide whether it wants to exit or continue.
If an Exception reaches the wsgi utilities
it bubbles all the way up.
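A caller can now handle the failure itself, e.g. (a minimal sketch;
the path and section name are illustrative):

  from swift.common.utils import readconf

  try:
      conf = readconf('/etc/swift/object-server.conf', 'app:object-server')
  except (ValueError, IOError) as err:
      # ValueError: missing section; IOError: unreadable file
      print('Unable to load config: %s' % err)
      raise SystemExit(1)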
Change-Id: Ieb444f8c34e37f49bea21c3caf1c6c2d7bee5fb4
Closes-Bug: 1578321
There were several implementations of hashing the content
of a file in cli/recon.py and common/middleware/recon.py.
This patch relocates one implementation (_hash_for_ringfile,
introduced in the Related Change) to common/utils.py and
refactors recon cli and middleware to use that function.
Also improves use of mocking in the unit tests to eliminate passing
custom file opener functions to the ReconMiddleware get_ring_md5
and get_swift_conf_md5 methods.
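The relocated helper is essentially a chunked file hasher along these
lines (a sketch; the exact name and signature in common/utils.py are
assumptions):

  import hashlib

  def md5_hash_for_file(fname):
      # read in blocks so large ring files aren't loaded into memory
      md5sum = hashlib.md5()
      with open(fname, 'rb') as f:
          for block in iter(lambda: f.read(4096), b''):
              md5sum.update(block)
      return md5sum.hexdigest()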
Related-Change: I9623752c3cd2361f57864f3e938e1baf5e9292d7
Change-Id: Iaad88e49aadeb28f614aafa1e9596fe07ce9793a
Fixes this problem:
* swift-drive-audit needs to be run by root, because only root has
  "umount" permission
* swift-object servers typically run as user swift
* if swift-drive-audit is run by root, /var/cache/swift/drive.recon is
  owned by root, with mode 0o600
* recon middleware (inside swift-object-server) can't read this cache
  file: "swift-object: Error reading recon cache file"
This patch adds a "user" option to the drive-audit config file. The
recon cache is chowned to this user.
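For example (values illustrative):

  [drive-audit]
  user = swift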
Change-Id: Ibf20543ee690b7c5a37fabd1540fd5c0c7b638c9
Also clean up a comment and some exception text
Change-Id: I1e7755cc0468f9a3ba96a0dd24868f09a10c3df0
Related-Change: I24716e3271cf3370642e3755447e717fd7d9957c
This patch improves EC GET response handling:
- The proxy no longer requires all object servers to have a
durable file for the fragment archive that they return in
response to a GET. The proxy will now be satisfied if just
one object server has a durable file at the same timestamp
as fragments from other object servers.
This means that the proxy can now successfully GET an
object that had missing durable files when it was PUT.
- The proxy will now ensure that it has a quorum of *unique*
fragment indexes from object servers before considering a
GET to be successful.
- The proxy is now able to fetch multiple fragment archives
having different indexes from the same node. This enables
the proxy to successfully GET an object that has some
fragments that have landed on the same node, for example
after a rebalance.
This new behavior is facilitated by an exchange of new
headers on a GET request and response between the proxy and
object servers.
An object server now includes with a GET (or HEAD) response:
- X-Backend-Fragments: the value of this describes all
fragment archive indexes that the server has for the
object by encoding a map of the form: timestamp -> <list
of fragment indexes>
- X-Backend-Durable-Timestamp: the value of this is the
internal form of the timestamp of the newest durable file
that was found, if any.
- X-Backend-Data-Timestamp: the value of this is the
internal form of the timestamp of the data file that was
used to construct the diskfile.
A proxy server now includes with a GET request:
- X-Backend-Fragment-Preferences: the value of this
describes the proxy's current preference with respect to
those fragments that it would have object servers
return. It encodes a list of timestamps and, for each
timestamp, a list of fragment indexes that the proxy does
NOT require (because it already has them).
The presence of an X-Backend-Fragment-Preferences header
(even one with an empty list as its value) will cause the
object server to search for the most appropriate fragment
to return, disregarding the existence or not of any
durable file. The object server assumes that the proxy
knows best.
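Roughly, the exchange looks like this (a hedged sketch; the JSON
encodings and timestamp values shown are assumptions, not the exact
wire format):

  import json

  # object server GET/HEAD response headers
  resp_headers = {
      'X-Backend-Fragments': json.dumps({'1469094000.00000': [0, 2]}),
      'X-Backend-Durable-Timestamp': '1469094000.00000',
      'X-Backend-Data-Timestamp': '1469094000.00000',
  }

  # proxy GET request header: per-timestamp fragment indexes that
  # the proxy does NOT require again
  req_headers = {
      'X-Backend-Fragment-Preferences': json.dumps(
          [{'timestamp': '1469094000.00000', 'exclude': [1]}]),
  }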
Closes-Bug: 1469094
Closes-Bug: 1484598
Change-Id: I2310981fd1c4622ff5d1a739cbcc59637ffe3fc3
Co-Authored-By: Paul Luse <paul.e.luse@intel.com>
Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com>
Make sure to close the underlying iterator in string_along. What is
currently happening when using the InternalClient is that "Client
disconnected" warnings are generated and resources are tied up until
GC runs.
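The general shape of the fix (a generic sketch; string_along's real
signature may differ):

  def string_along(app_iter, to_close, logger):
      # drain the useful iterator, then make sure the wrapped
      # iterator's close() runs instead of waiting for GC
      try:
          for chunk in app_iter:
              yield chunk
      finally:
          if hasattr(to_close, 'close'):
              to_close.close()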
Change-Id: If1f6c0c756aee95f53f99371439533a97d347eab
Linux 3.11 introduced O_TMPFILE as a flag to open() sys call. This would
enable users to get a fd to an unnamed temporary file. As it's unnamed,
it does not require the caller to devise unique names. It is also not
accessible through any path. Hence, file creation is race-free.
This file is initially unreachable. It is then populated with data
(write), metadata (fsetxattr) and fsync'd before being atomically
linked into the filesystem in a fully formed state using the linkat()
sys call. Only after a successful linkat() will the object file be
available for reference.
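In Python 3 terms the pattern is roughly (a sketch; the path is
illustrative and the real implementation calls linkat() directly):

  import os

  dirpath = '/srv/node/d1/objects'  # illustrative
  fd = os.open(dirpath, os.O_TMPFILE | os.O_WRONLY, 0o600)
  try:
      os.write(fd, b'object data')
      # fsetxattr metadata here, then make the data durable
      os.fsync(fd)
      # linkat() by way of /proc gives the unnamed file a name,
      # atomically, in its fully formed state
      os.link('/proc/self/fd/%d' % fd,
              os.path.join(dirpath, 'obj.data'),
              follow_symlinks=True)
  finally:
      os.close(fd)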
Caveats
* Unlike os.rename(), linkat() cannot overwrite the destination path
if it already exists. If the path exists, we unlink it and try again.
* XFS support for O_TMPFILE was only added in Linux 3.15.
* If the client disconnects during object upload, although there is no
incomplete/stale file on disk, the object directory would persist
and is not cleaned up immediately.
Change-Id: I8402439fab3aba5d7af449b5e465f89332f606ec
Signed-off-by: Prashanth Pai <ppai@redhat.com>
Bonus consistency: 416 responses now always have a body. Before, if
you had "swob.HTTPRequestedRangeNotSatisfiable()", you'd get a body,
but if you had "swob.Response(..., conditional_response=True)", then
you'd get a length-0 response body. Now you always get a response
body. It's just the default <html><h1>..., but at least it's always
there.
Bonus efficiency: do a little caching of sub-SLO manifests to avoid
needless re-fetches. This kicks in when there are multiple references
to the same sub-SLO in a given manifest. The caching only holds 20
sub-SLOs so that a malicious user can't build a giant SLO tree and use
it to run the proxy out of memory (we're already holding up to 10
manifests in memory at a time since a SLO can include another SLO to a
depth of 10; this doesn't make the situation too much worse).
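The bound works along these lines (a toy sketch, not the actual
implementation):

  from collections import OrderedDict

  class BoundedCache(OrderedDict):
      """FIFO-evicting cache capped at max_entries items."""

      def __init__(self, max_entries=20):
          super().__init__()
          self.max_entries = max_entries

      def __setitem__(self, key, value):
          if key not in self and len(self) >= self.max_entries:
              self.popitem(last=False)  # evict the oldest entry
          super().__setitem__(key, value)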
Change-Id: I24716e3271cf3370642e3755447e717fd7d9957c
The goal is to allow the scheduling priority and the I/O scheduling
class and priority of a daemon/server to be modified via
configuration. The setting is optional; the default keeps the current
behaviour.
Use case:
Prioritize the object-server over the object-auditor, because all user
requests need to be served in peak hours while auditing can wait.
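For example (a hedged sketch; option names and values are
illustrative):

  [object-auditor]
  # let user-facing object-server traffic win over auditing
  nice_priority = 10
  ionice_class = IOPRIO_CLASS_BE
  ionice_priority = 7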
Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com>
DocImpact
Change-Id: I1018a18f4706daabdb84574ffd9a58d831e68396
Validating IP addresses for IPv4 and IPv6 formats has more generic
use cases outside of rings. swift-get-nodes and other utilities that
need to handle IPv6 addresses often have to import the IP validation
methods from swift/common/ring/utils (see Related-Change). Also, the
expand_ipv6 method already exists in swift/common/utils. Hence, this
moves the validation of IPs from swift/common/ring/utils into
swift/common/utils as well.
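After the move, callers can import the helpers directly from common
utils (a sketch; function names follow the ring utils they replace):

  from swift.common.utils import is_valid_ip, is_valid_ipv4, is_valid_ipv6

  assert is_valid_ip('127.0.0.1')
  assert is_valid_ipv6('fe80::1')
  assert not is_valid_ipv4('not-an-ip')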
Related-Change: I6551d65241950c65e7160587cc414deb4a2122f5
Change-Id: I720a9586469cf55acab74b4b005907ce106b3da4
Relocates some test infrastructure in preparation for
use with encryption tests, in particular moves the test
server setup code from test/unit/proxy/test_server.py
to a new helpers.py so that it can be re-used, and adds
the ability to specify additional config options for the
test servers (used in encryption tests).
Adds unit test coverage for extract_swift_bytes and functional
test coverage for container listings. Adds a check on the content
and metadata of reconciled objects in probe tests.
Change-Id: I9bfbf4e47cb0eb370e7a74d18c78d67b6b9d6645
Currently when disable_fallocate is true it disables calling the
fallocate syscall, but it doesn't disable fallocate_reserve. This
patch fixes that.
This problem has caused functional tests to fail in our SAIOs, since
SAIOs have disable_fallocate set but the fallocate_reserve free-space
check was still being run, creating 507 responses. This is due to the
fallocate_reserve default changing from 0 to 1%.
Because fallocate_reserve and disable_fallocate together cause SAIO
functional tests to fail, a section called 'Known Issues' has been
added to the SAIO developer documentation which includes a warning
about using fallocate_reserve on SAIOs.
Change-Id: I727bfb0861ea26fe2f16ad55f4d36ae088864d8f
With the removal of threads_per_disk there is no longer a need to use
run_in_thread() at all; it was just calling the function itself when
running with 0 threads.
Similarly for force_run_in_thread() - with 0 threads it was basically
doing the same as tpool_reraise(), so the call is replaced and the
complete ThreadPool class finally removed.
Note that this might break external consumers that inherit from
BaseDiskFileManager; in that case you will need to adapt to this
change in your codebase.
Change-Id: I39489dd660935bdbfbc26b92af86814369369fb5
Previously SwiftLogFormatter made two checks: one to see if the
transaction id was already in the message field, and another to make
sure the log level wasn't set to info. If either was true, it would
not record the transaction ID in the transaction ID field.
This commit removes the check for the info log level. Now transaction
IDs will be recorded in all cases that have them.
Change-Id: Ic06538ab55a75d298169ae1745671573ee9c09e8
Closes-Bug: #1504344
Requiring 2/2 backends for PUT requests means that the cluster can't
tolerate a single failure. Likewise, if you have 4 replicas in 2
regions, requiring 3/4 on a POST request means you cannot POST with
your inter-region link down or congested.
This changes the (replication) quorum size in the proxy to be at least
half the nodes instead of a majority of the nodes.
Daemons that were looking for a majority remain unchanged. The
container reconciler, replicator, and updater still require majorities
so their functioning is unchanged.
Odd numbers of replicas are unaffected by this commit.
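In code terms the distinction is roughly (a sketch; the helper names
are assumptions):

  def quorum_size(n):
      # proxy replication quorum: at least half the nodes
      return (n + 1) // 2

  def majority_size(n):
      # daemons still require a strict majority
      return (n // 2) + 1

  # n=4: quorum_size -> 2, majority_size -> 3
  # n=3: both -> 2 (odd replica counts unaffected)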
Change-Id: I3b07ff0222aba6293ad7d60afe1747acafbe6ce4
Add the ability to set the fallocate_reserve value as a percentage.
This happens automatically when a '%' is added at the end of the
value. Having the ability to set a % of free space rather than a byte
value is useful, especially when drive sizes are heterogeneous.
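For example (values illustrative):

  fallocate_reserve = 1%           # reserve 1% of each drive
  fallocate_reserve = 10737418240  # or an absolute number of bytes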
The default for fallocate_reserve has been adjusted to 1%; having
fallocate_reserve set seems sensible for all deploys, and percentages
are far safer to default to than byte values (across drives of any
size).
Tests added for using fallocate_reserve as a percentage.
Duplicate tests for fallocate_reserve have been removed.
Docs updated to reflect the fallocate_reserve change.
Change-Id: I4aea613a708205c917e81d6b2861396655e73238
Previously, fallocate_reserve could result in a traceback. The
OSError being raised didn't have the proper errno set. This patch
sets the errno to ENOSPC.
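I.e., roughly (a sketch):

  import errno
  import os

  # raise OSError with errno properly set so callers can inspect it
  raise OSError(errno.ENOSPC, os.strerror(errno.ENOSPC))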
Change-Id: I017b0584972ca8832f3b160bbcdff335ae9a1aa6
DiskFile already fills in the _ondisk_info attribute when it tries to open
a diskfile - even if the DiskFile's fileset is not valid or deleted.
During this process the rsync tempfiles would be discovered and logged,
but no-one would attempt to clean them up - even if they were really old.
Instead of logging and ignoring unexpected files when validating a
DiskFile fileset, we'll add unexpected files to the 'unexpected' key
in the _ondisk_info attribute.
With a little bit of re-organization in the auditor's object_audit method
to get things into a single return path we can add an unconditional check
for unexpected files and remove those that are "old enough".
Since the replicator will kill any rsync processes that are running longer
than the configured rsync_timeout we know that any rsync tempfiles older
than this can be deleted.
Split unlink_older_than in common.utils into two functions to allow an
explicit list of previously discovered paths to be passed in to avoid an
extra listdir. Since the getmtime handling already ignores OSError,
there's less concern of a race condition where a previously discovered
unexpected file is reaped by rsync while we're attempting to clean it
up.
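The split looks roughly like this (a sketch; the new function's name
is an assumption):

  import os

  def unlink_paths_older_than(filepaths, mtime):
      # reap each path whose mtime is older than the cutoff; OSError
      # is ignored (e.g. the file was already reaped by rsync)
      for fpath in filepaths:
          try:
              if os.path.getmtime(fpath) < mtime:
                  os.unlink(fpath)
          except OSError:
              pass

  def unlink_older_than(path, mtime):
      # original behaviour: discover paths with a listdir, then reap
      unlink_paths_older_than(
          (os.path.join(path, f) for f in os.listdir(path)), mtime)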
Update the docs for the new config option.
Closes-Bug: #1554005
Change-Id: Id67681cb77f605e3491b8afcb9c69d769e154283
This change adds 2 new parameters to enable and control concurrent
GETs in swift: 'concurrent_gets' and 'concurrency_timeout'.
'concurrent_gets' allows you to turn concurrent GETs on or off; when
on, it sets the GET/HEAD concurrency to the replica count, and in
the case of EC HEADs it sets it to ndata.
The proxy will then serve only the first valid source to respond.
This applies to all account, container and object GETs except
for EC. For EC, only HEAD requests are affected.
It achieves this by changing the request sending mechanism to use
GreenAsyncPile and green threads with a timeout between each
request.
'concurrency_timeout' is related to concurrent_gets and is the
amount of time to wait before firing the next thread. A value of 0
will fire all threads at the same time (fully concurrent); setting
another value will stagger the firing, giving a node a shorter
chance to respond before the next thread fires. This value is a
float and should be somewhere between 0 and node_timeout. The
default is conn_timeout, meaning that by default the firing is
staggered.
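For example (illustrative values):

  [proxy-server]
  concurrent_gets = on
  concurrency_timeout = 0.5  # seconds before firing the next request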
DocImpact
Implements: blueprint concurrent-reads
Change-Id: I789d39472ec48b22415ff9d9821b1eefab7da867
There was a function in swift.common.utils that was importing
swob.HeaderKeyDict at call time. It couldn't import it at compilation
time since utils can't import from swob or else it blows up with a
circular import error.
This commit just moves HeaderKeyDict into swift.common.header_key_dict
so that we can remove the inline import.
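Callers now import it from its own module:

  # old (only importable lazily, inside a function):
  #     from swift.common.swob import HeaderKeyDict
  # new:
  from swift.common.header_key_dict import HeaderKeyDict

  headers = HeaderKeyDict({'Content-Length': '0'})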
Change-Id: I656fde8cc2e125327c26c589cf1045cb81ffc7e5
This patch makes a number of changes to enable content-type
metadata to be updated when using the fast-POST mode of
operation, as proposed in the associated spec [1].
* the object server and diskfile are modified to allow
content-type to be updated by a POST and the updated value
to be stored in .meta files.
* the object server accepts PUTs and DELETEs with older
timestamps than existing .meta files. This is to be
consistent with replication that will leave a later .meta
file in place when replicating a .data file.
* the diskfile interface is modified to provide accessor
methods for the content-type and its timestamp.
* the naming of .meta files is modified to encode two
timestamps when the .meta file contains a content-type value
that was set prior to the latest metadata update; this
enables consistency to be achieved when rsync is used for
replication.
* ssync is modified to sync meta files when content-type
differs between local and remote copies of objects.
* the object server issues container updates when handling
POST requests, notifying the container server of the current
immutable metadata (etag, size, hash, swift_bytes),
content-type with their respective timestamps, and the
mutable metadata timestamp.
* the container server maintains the most recently reported
values for immutable metadata, content-type and mutable
metadata, each with their respective timestamps, in a single
db row.
* new probe tests verify that replication achieves eventual
consistency of containers and objects after discrete updates
to content-type and mutable metadata, and that container-sync
syncs objects after fast-post updates.
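As a hedged sketch of the new diskfile accessors described above
(names and call pattern are assumptions):

  # given a diskfile 'df' obtained from a DiskFileManager:
  with df.open():
      print(df.content_type)            # latest content-type value
      print(df.content_type_timestamp)  # timestamp at which it was set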
[1] spec change-id: I60688efc3df692d3a39557114dca8c5490f7837e
Change-Id: Ia597cd460bb5fd40aa92e886e3e18a7542603d01
The proxy-server currently requires Content-Length in the response
header when getting an object and does not support chunked transfer
with "Transfer-Encoding: chunked".
This doesn't matter in normal swift, but prohibits us from adding any
middleware that does something like streaming processing of objects,
which can't calculate the length of its response body before it
starts to send the response.
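For example, a middleware response like this can't know its length up
front (a toy sketch; generate_chunks is hypothetical):

  def streaming_app(env, start_response):
      # no Content-Length: the body length isn't known until the
      # generator is exhausted, so the response must be streamed
      start_response('200 OK',
                     [('Content-Type', 'application/octet-stream')])
      return generate_chunks()  # hypothetical chunk generator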
Change-Id: I60fc6c86338d734e39b7e5f1e48a2647995045ef
In common/test_utils.py, TestStatsdLogging had the majority of its
test cases calling the real socket.getaddrinfo(), which uses real
DNS. This is very slightly slower than using a mock getaddrinfo() when
the machine running the tests has functioning DNS, but on a machine
with no network connection at all, the tests are excruciatingly slow
due to timeouts.
This commit mocks things out as appropriate. There's still one user of
the real getaddrinfo(), but it's for ::1, so that's just local
resolution based on /etc/hosts.
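The mocking follows the usual pattern (a sketch; address values are
illustrative):

  import socket
  from unittest import mock

  fake_addrinfo = [(socket.AF_INET, socket.SOCK_DGRAM, 0, '',
                    ('127.0.0.1', 8125))]
  with mock.patch('socket.getaddrinfo', return_value=fake_addrinfo):
      pass  # exercise the statsd client without touching real DNS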
Timing numbers for "./.unittests test.unit.common.test_utils:TestStatsdLogging":
* network, without this patch: 1.8s
* no network, without this patch: 221.2s (ouch)
* network, with this patch: 1.1s
* no network, with this patch: 1.1s
Change-Id: I1a2d6f24fc9bb928894fb1fd8383516250e29e0c
As swift no longer supports Python 2.6, replace assertEqual(None, *)
with assertIsNone in tests to have clearer messages in case of
failure.
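I.e.:

  # before
  self.assertEqual(None, result)
  # after: failure messages name the actual value
  self.assertIsNone(result)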
Change-Id: I94af3e8156ef40465d4f7a2cb79fb99fc7bbda56
Closes-Bug: #1280522