swift/test/probe/test_object_expirer.py

#!/usr/bin/python -u
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
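"""
Probe test for the object-expirer.

An expiring object is created while its container is split-brained across
two storage policies; the test verifies that the expirer keeps retrying
until the object is tombstoned in the correct policy and removed from the
container listing.
"""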
import random
import uuid
import unittest
from nose import SkipTest
from swift.common.internal_client import InternalClient
from swift.common.manager import Manager
from swift.common.utils import Timestamp
from test.probe.common import ReplProbeTest, ENABLED_POLICIES
from test.probe.test_container_merge_policy_index import BrainSplitter
from swiftclient import client


class TestObjectExpirer(ReplProbeTest):
    def setUp(self):
        if len(ENABLED_POLICIES) < 2:
            raise SkipTest('Need more than one policy')

        self.expirer = Manager(['object-expirer'])
        self.expirer.start()
        err = self.expirer.stop()
        if err:
            raise SkipTest('Unable to verify object-expirer service')

        conf_files = []
        for server in self.expirer.servers:
            conf_files.extend(server.conf_files())
        conf_file = conf_files[0]
        self.client = InternalClient(conf_file, 'probe-test', 3)

        super(TestObjectExpirer, self).setUp()
        self.container_name = 'container-%s' % uuid.uuid4()
        self.object_name = 'object-%s' % uuid.uuid4()
        self.brain = BrainSplitter(self.url, self.token, self.container_name,
                                   self.object_name)

    def test_expirer_object_split_brain(self):
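        """
        Create an object with X-Delete-After while half of the servers are
        stopped, then recreate the container in a different (wrong) policy.
        The first expirer pass cannot delete the object, so the queue entry
        has to survive until replication settles and a later pass can
        tombstone it in the correct policy.
        """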
        old_policy = random.choice(ENABLED_POLICIES)
        wrong_policy = random.choice([p for p in ENABLED_POLICIES
                                      if p != old_policy])
        # create an expiring object and a container with the wrong policy
        self.brain.stop_primary_half()
        self.brain.put_container(int(old_policy))
        self.brain.put_object(headers={'X-Delete-After': 2})
        # get the object timestamp
        metadata = self.client.get_object_metadata(
            self.account, self.container_name, self.object_name,
            headers={'X-Backend-Storage-Policy-Index': int(old_policy)})
        create_timestamp = Timestamp(metadata['x-timestamp'])
        self.brain.start_primary_half()

        # get the expiring object updates in their queue, while we have all
        # the servers up
        Manager(['object-updater']).once()

        self.brain.stop_handoff_half()
        self.brain.put_container(int(wrong_policy))
        # don't start handoff servers, only wrong policy is available

        # make sure auto-created containers get in the account listing
        Manager(['container-updater']).once()
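
        # with only the wrong-policy container visible, the object servers
        # that receive the expirer's X-If-Delete-At DELETE cannot find the
        # data, so they return 404 and the expirer keeps the queue entry
        # around for a later retry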
        # this guy should no-op since it's unable to expire the object
        self.expirer.once()

        self.brain.start_handoff_half()
        self.get_to_final_state()

        # validate object is expired
        found_in_policy = None
        metadata = self.client.get_object_metadata(
            self.account, self.container_name, self.object_name,
            acceptable_statuses=(4,),
            headers={'X-Backend-Storage-Policy-Index': int(old_policy)})
        self.assertTrue('x-backend-timestamp' in metadata)
        self.assertEqual(Timestamp(metadata['x-backend-timestamp']),
                         create_timestamp)
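        # the 404 plus the unchanged backend timestamp show the data file is
        # still the original PUT - no tombstone has been written yet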

        # but it is still in the listing
        for obj in self.client.iter_objects(self.account,
                                            self.container_name):
            if self.object_name == obj['name']:
                break
        else:
            self.fail('Did not find listing for %s' % self.object_name)

        # clear proxy cache
        client.post_container(self.url, self.token, self.container_name, {})
        # run the expirer again after replication
        self.expirer.once()

        # object is not in the listing
        for obj in self.client.iter_objects(self.account,
                                            self.container_name):
            if self.object_name == obj['name']:
                self.fail('Found listing for %s' % self.object_name)

        # and validate object is tombstoned
        found_in_policy = None
        for policy in ENABLED_POLICIES:
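            # exactly one policy should hold a record for the object, and it
            # must be newer than the original PUT (i.e. the tombstone written
            # by the successful expirer pass)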
            metadata = self.client.get_object_metadata(
                self.account, self.container_name, self.object_name,
                acceptable_statuses=(4,),
                headers={'X-Backend-Storage-Policy-Index': int(policy)})
            if 'x-backend-timestamp' in metadata:
                if found_in_policy:
                    self.fail('found object in %s and also %s' %
                              (found_in_policy, policy))
                found_in_policy = policy
                self.assertTrue('x-backend-timestamp' in metadata)
                self.assertTrue(Timestamp(metadata['x-backend-timestamp']) >
                                create_timestamp)


if __name__ == "__main__":
    unittest.main()