Since openstackdocstheme 1.18.0, valid Git URLs can be retrieved by
openstackdocstheme [1], so we no longer need the giturl option.
[1] https://review.openstack.org/532163
Change-Id: I579cebae6486d250915f936f0c1c61008471c089
If you ran this probe test with ssync before the related change, it
would demonstrate the related bug. The hack isn't harmful, but it isn't
needed anymore.
Related-Change-Id: I7f90b732c3268cb852b64f17555c631d668044a8
Related-Bug: 1652323
Change-Id: I09e3984a0500a0f4eceec392e7970b84070a5b39
Currently, on deletion of an expiring object, each object server
writes an async_pending to update the expirer queue and remove the row
for that object. Each async_pending is processed by the object updater
and results in all container replicas being updated. This is also true
for PUT and POST requests for existing expiring objects.
If you have Rc container replicas and Ro object replicas (or EC
pieces), then the number of expirer-queue requests made is Rc * Ro [1].
For a 3-replica cluster, that number is 9, which is not terrible. For
a cluster with 3 container replicas and a 15+4 EC scheme, that number
is 57, which is terrible.
This commit makes it so at most two object servers will write out the
async_pending files needed to update the queue, dropping the request
count to 2 * Rc [2]. The object server now looks for a header
"X-Backend-Clean-Expiring-Object-Queue: <true|false>" and writes or
does not write expirer-queue async_pendings as appropriate. The proxy
sends that header to 2 object servers.
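A minimal sketch of how the proxy-side marking could look (the helper
and its arguments are illustrative, not Swift's actual proxy code):

    # Ask just enough object servers to write the expirer-queue
    # async_pendings; tell the rest to skip that work.
    def mark_expirer_queue_cleaners(backend_headers, num_cleaners=2):
        for i, headers in enumerate(backend_headers):
            headers['X-Backend-Clean-Expiring-Object-Queue'] = (
                'true' if i < num_cleaners else 'false')
        return backend_headers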
The queue update is not necessary for the proper functioning of the
object expirer; if the queue update fails, then the object expirer
will try to delete the object, receive 404s or 412s, and remove the
queue entry. Removal on object PUT/POST/DELETE is helpful but not
required.
[1] assuming no retries needed by the object updater
[2] or Rc, if a cluster has only one object replica
Change-Id: I4d64f4d1d107c437fd3c23e19160157fdafbcd42
Displaying an empty ring should not be an error, so change the exit
code back to the former value of 0.
Closes-Bug: 1742417
Change-Id: I779c30cff1b4d24483f993221a8c6d944b7ae98d
When the proxy is putting X-Container headers into object PUT
requests, it should put out just enough to make the container update
durable in the worst case. It shouldn't do more, since that results in
extra work for the container servers; and it shouldn't do less, since
that results in objects not showing up in listings.
The current code gets the number right as long as you have 3 container
replicas and an odd number of object replicas, but it comes up with
some bogus numbers in other cases. The number it computes is
(object-quorum + 1).
This patch changes the number to (container-quorum +
max_put_failures).
Example: given an EC 12+5 policy and 3 container replicas, you can
lose up to 4 connections and still succeed. Since you need to have 2
container updates happen for durability, you need 6 connections to
have X-Container headers. That way, you can lose 4 and still have 2
left. The current code would put X-Container headers on 14 of the
connections, resulting in more than double the workload on the
container servers; this patch changes the number to 6.
Example 2: given a (crazy) EC 3+6 policy and 3 container replicas, you
can lose up to 5 connections, so you need X-Container headers on
7. The current code only sends 5, so in the worst case a PUT succeeds
but the object never shows up in the container listings. This patch
changes the number to 7.
Other examples:
                         | current | this change |
-------------------------+---------+-------------+
EC 10+4, 3x container    |      12 |           5 |
EC 10+4, 5x container    |      12 |           6 |
EC 15+4, 3x container    |      17 |           5 |
EC 15+4, 5x container    |      17 |           6 |
EC 4+8, 3x container     |       6 |           9 |
7x object, 3x container  |       5 |           5 |
6x object, 3x container  |       4 |           5 |
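A rough sketch of the new arithmetic (the helper below is illustrative,
not Swift's actual quorum API):

    def num_container_update_headers(container_replicas,
                                     object_backends, object_quorum):
        # Container updates are durable once a simple majority of
        # container replicas have them.
        container_quorum = container_replicas // 2 + 1
        # A PUT can still succeed after losing this many object
        # connections.
        max_put_failures = object_backends - object_quorum
        return container_quorum + max_put_failures

    # EC 12+5 with 3 container replicas: 17 backends, object quorum 13
    print(num_container_update_headers(3, 17, 13))  # -> 6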
Change-Id: I34efd48655b890340912810ab111bb63445e5c8b
The DELETE method repeats inline the same behaviour as provided by
_update_x_timestamp, so just call the method.
Also add unit tests for the behaviour of _update_x_timestamp.
Change-Id: I8b6cfdbfb54b6d43ac507f23d84309ab543374aa
This is a follow-up patch to 530258, which shows extra information on
empty rings.
This patch goes one step further. On a completely empty ring:
$ swift-ring-builder my.builder create 8 3 1
$ swift-ring-builder my.builder
my.builder, build version 0, id 33b4e117056340feae7d40430180c6bb
256 partitions, 3.000000 replicas, 0 regions, 0 zones, 0 devices, 0.00 balance, 0.00 dispersion
The minimum number of hours before a partition can be reassigned is 1 (0:00:00 remaining)
The overload factor is 0.00% (0.000000)
Ring file my.ring.gz not found, probably it hasn't been written yet
Devices: id region zone ip address:port replication ip:port name weight partitions balance flags meta
There are no devices in this ring, or all devices have been deleted
It will still print the device list header and then say there are no
devices. Why? Let's see what happens now on an empty ring whose devices
are still present but marked as deleted:
$ swift-ring-builder my.builder add r1z1-127.0.0.1:6010/sdb1 1
Device d0r1z1-127.0.0.1:6010R127.0.0.1:6010/sdb1_"" with 1.0 weight got id 0
$ swift-ring-builder my.builder add r1z1-127.0.0.1:6010/sdb2 1
Device d1r1z1-127.0.0.1:6010R127.0.0.1:6010/sdb2_"" with 1.0 weight got id 1
$ swift-ring-builder my.builder remove r1z1-127.0.0.1
Matched more than one device:
d0r1z1-127.0.0.1:6010R127.0.0.1:6010/sdb1_""
d1r1z1-127.0.0.1:6010R127.0.0.1:6010/sdb2_""
Are you sure you want to remove these 2 devices? (y/N) y
d0r1z1-127.0.0.1:6010R127.0.0.1:6010/sdb1_"" marked for removal and will be removed next rebalance.
d1r1z1-127.0.0.1:6010R127.0.0.1:6010/sdb2_"" marked for removal and will be removed next rebalance.
$ swift-ring-builder my.builder
my.builder, build version 4, id 33b4e117056340feae7d40430180c6bb
256 partitions, 3.000000 replicas, 1 regions, 1 zones, 2 devices, 0.00 balance, 0.00 dispersion
The minimum number of hours before a partition can be reassigned is 1 (0:00:00 remaining)
The overload factor is 0.00% (0.000000)
Ring file my.ring.gz not found, probably it hasn't been written yet
Devices: id region zone ip address:port replication ip:port name weight partitions balance flags meta
0 1 1 127.0.0.1:6010 127.0.0.1:6010 sdb1 0.00 0 0.00 DEL
1 1 1 127.0.0.1:6010 127.0.0.1:6010 sdb2 0.00 0 0.00 DEL
There are no devices in this ring, or all devices have been deleted
Now, even when all devices are removed, we can still see them, because
they are still present in the builder, only marked as deleted.
Change-Id: Ib39f734deb67ad50bcdad5333cba716161a47e95
These were likely written before the first related change, or created
from an existing ring file.
Also, tolerate missing dispersion when rebalancing -- that may not exist
in the builder file.
Change-Id: I26e3b4429c747c23206e4671f7c86543bb182a15
Related-Change: Ib165cf974c865d47c2d9e8f7b3641971d2e9f404
Related-Change: Ie239b958fc7e0547ffda2bebf61546bd4ef3d829
Related-Change: I551fcaf274876861feb12848749590f220842d68
Use a utils.Timestamp object to set a more realistic x-timestamp
header, avoiding intermittent failures when str(time.time()) results
in a rounded value.
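For illustration, a minimal sketch of the difference (the values in
the comments are made up; on Python 2, str() of a float may round):

    import time
    from swift.common.utils import Timestamp

    now = time.time()
    header_old = str(now)               # may round, e.g. '1515640000.1'
    header_new = Timestamp(now).normal  # fixed precision, e.g.
                                        # '1515640000.09999'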
Closes-Bug: 1741912
Change-Id: I0c54d07e30ecb391f9429e7bcfb782f965ece1ea
Previously, an X-Delete-After header with value '0' would almost
certainly result in a 400 response, but the response body would
report a problem with X-Delete-At. Now the response correctly
blames the X-Delete-After header.
Related-Change: I9a1b6826c4c553f0442cfe2bb78cdf49508fa4a5
Change-Id: Ia8d00fcef8893e3b3dd5720da2c8a5ae1e6e4cb8
The docs used an incorrect path starting with /home/swift, but
/home/<your-user-name> is correct for common users.
Change-Id: Ia81b2119c87dd88417428e55c82dac1ab7c028b3
Closes-Bug: 1741378
When a client sent "X-Delete-After: <n>", the proxy and all object
servers would each compute X-Delete-At as "int(time.time() +
n)". However, since they don't all compute it at exactly the same
time, the objects stored on disk can end up with differing values for
X-Delete-At, and in that case, the object-expirer queue has multiple
entries for the same object (one for each distinct X-Delete-At value).
This commit makes two changes, either one of which is sufficient to
fix the bug.
First, after computing X-Delete-At from X-Delete-After, X-Delete-After
is removed from the request's headers. Thus, the proxy computes
X-Delete-At, and the object servers don't, so there's only a single
value.
Second, computation of X-Delete-At now uses the request's X-Timestamp
instead of time.time(). In the proxy, these values are essentially the
same; the proxy is responsible for setting X-Timestamp. In the object
server, this ensures that all computed X-Delete-At values are
identical, even if the object servers' clocks are not, or if one
object server takes an extra second to respond to a PUT request.
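A minimal sketch of the idea with illustrative names (not the exact
proxy or object-server code):

    # Derive X-Delete-At from the request's own X-Timestamp so every
    # server handling the request computes the same value, then drop
    # X-Delete-After so backends don't recompute it.
    def resolve_delete_after(headers):
        delete_after = int(headers.pop('X-Delete-After'))
        request_time = float(headers['X-Timestamp'])
        headers['X-Delete-At'] = str(int(request_time) + delete_after)
        return headers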
Co-Authored-By: Alistair Coles <alistairncoles@gmail.com>
Change-Id: I9a1b6826c4c553f0442cfe2bb78cdf49508fa4a5
Closes-Bug: 1741371
After running the functional tests, this directory shows up. I don't
know what's in it, but I'm fairly certain I don't want to commit it.
Change-Id: If9179330c337daf2ae0a01e6c8aa8d349969e737
Since dispersion info is cached, this can easily happen if we make
changes to how dispersion info is calculated or stored (e.g. we extend
the dispersion calculation to consider dispersion of all part-replicas
in the related change).
Related-Change-Id: Ifefff0260deac0c3e8b369a1e158686c89936686
Change-Id: I714deb9e349cd114a21ec591216a9496aaf9e0d1
When the default display command for swift-ring-builder encounters an
EmptyRingError while trying to calculate balance, it should not raise
the exception and display a traceback in a command-line environment.
Instead, handle the exceptional condition and provide the user with
useful feedback.
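Roughly the shape of the fix, with the call site and import path
assumed for illustration:

    from swift.common.exceptions import EmptyRingError  # path assumed

    def display_balance(builder):
        try:
            return builder.get_balance()  # illustrative call site
        except EmptyRingError:
            # Don't let a traceback escape to the CLI user; report
            # something useful instead.
            print('Note: balance is meaningless because the ring is '
                  'empty')
            return 0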
Closes-Bug: #1737068
Change-Id: Ibadaf64748728a47a8f3f861ec1af601dbfeb9e0
Sometimes, when handling an EC GET request with a Range header, the
object servers reply 206 to the proxy, but the proxy (correctly)
replies 416 to the client [1]. In that case, the connections to the object
servers were not being closed. This was due to improper error handling
in ECAppIter.
Since ECAppIter is intended to be a WSGI iterable, it expects to have
its close() method called when the caller is done with it. In this
particular case, the caller (ECAppIter.kickoff()) was not calling
close() when an exception was raised. Now it is.
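The fix follows the usual pattern for cleaning up a WSGI iterable that
fails before being handed to its consumer; a minimal sketch with an
illustrative helper name:

    def kickoff(self, req, resp):
        try:
            self._start_response(req, resp)  # illustrative helper
        except Exception:
            # Nobody downstream will iterate this response, so nobody
            # else will call close(); do it here to release the
            # backend connections before re-raising.
            self.close()
            raise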
[1] consider a 4+2 EC policy with segment size 1024, a 20-byte
object, and a request with "Range: bytes=21-50". The proxy needs whole
fragments to decode, so it asks the object server for "Range:
bytes=0-255" [2], the object server says 206, and then the proxy
realizes that the client's request is unsatisfiable and tells the
client 416.
[2] segment size 1024 and 4 data fragments means the fragments have
size 1024 / 4 = 256, hence "bytes=0-255" asks for the first whole
fragment
Change-Id: Ide2edf8c449c97d45f48c2dbbbff7aebefa4b158
Closes-Bug: 1738804
With a sufficiently undispersed ring it's possible to move an entire
replica's worth of parts and yet the value of dispersion may not get
any better (even though in reality dispersion has dramatically
improved). The problem is that dispersion currently only represents up
to one whole replica's worth of undispersed parts.
However, with EC rings it's possible for more than one whole replica's
worth of partitions to be undispersed; in these cases the builder will
require multiple rebalance operations to fully disperse replicas, but
the dispersion value should improve with every rebalance.
N.B. with this change it's possible for rings with a bad dispersion
value to measure as having a significantly smaller dispersion value
after a rebalance (even though their dispersion may not have changed),
because the total amount of bad dispersion we can measure has been
increased while we are normalizing within a similar range.
Closes-Bug: #1697543
Change-Id: Ifefff0260deac0c3e8b369a1e158686c89936686
The functional tests for symlink and versioned writes run and result
in failure even if symlink is not enabled.
This patch fixes the functional tests to run only if both symlink and
versioned writes are enabled.
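A sketch of the kind of guard this implies, assuming the functional
tests' cluster_info capability dict (exact keys and the helper name
are illustrative):

    import unittest

    def skip_unless_both_enabled(cluster_info):
        if not ('symlink' in cluster_info and
                'versioned_writes' in cluster_info):
            raise unittest.SkipTest(
                'symlink and versioned_writes must both be enabled')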
Change-Id: I5ffd0b6436e56a805784baf5ceb722effdf74884
This patch adds a 'prefix' argument to the iter_containers and
iter_objects methods of InternalClient.
This change will be used in general task queue feature [1].
[1]: https://review.openstack.org/#/c/517389/
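A small usage sketch (the config path, user-agent string and account
name are placeholders):

    from swift.common.internal_client import InternalClient

    client = InternalClient('/etc/swift/internal-client.conf',
                            'example-daemon', request_tries=3)
    # Only containers whose names start with the prefix are listed.
    for container in client.iter_containers('.expiring_objects',
                                            prefix='15156'):
        print(container['name'])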
Change-Id: I8c2067c07fe35681fdc9403da771f451c21136d3
In the current manpage docs, some of the daemon names in the
concurrency explanation are wrong.
This patch fixes the daemon names.
Change-Id: I2a505c9590ee3a3a7e37e8d949a10db36206faec
You've got two test classes: TestContainer and TestContainerUTF8. They
each try to create the same set of containers with names of varying
lengths to make sure the container-name length limit is being honored.
Also, each test class tries to clean up pre-existing data in its
setUpClass method. If TestContainerUTF8 fails to delete a container
that TestContainer made, then its testContainerNameLimit method will
fail because the container PUT response has status 202 instead of 201,
which is because the container still existed from the prior test.
I've made the test consider both 201 and 202 as success. For purposes
of testing the maximum container name length, any 2xx is fine.
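In other words, the check amounts to something like this (the
surrounding test code is illustrative):

    def is_acceptable_create_status(status):
        # 201: container created; 202: it already existed (e.g. left
        # over from another test class). Either proves the
        # maximum-length name was accepted.
        return status in (201, 202)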
Change-Id: I7b343a8ed0d12537659c051ddf29226cefa78a8f