Object replication ssync (an rsync alternative)

For this commit, ssync is just a direct replacement for how
we use rsync. Assuming we switch over to ssync completely
someday and drop rsync, we will then be able to improve the
algorithms even further (removing local objects as we
successfully transfer each one rather than waiting for whole
partitions, using an index.db with hash-trees, etc.).

For easier review, this commit can be thought of in distinct
parts:

1) New global_conf_callback functionality for allowing
   services to perform setup code before workers, etc. are
   launched. (This is then used by ssync in the object
   server to create a cross-worker semaphore to restrict
   concurrent incoming replication; a sketch of the pattern
   follows this list.)
2) A bit of shifting of items up from the object server and
   replicator to the diskfile or DEFAULT conf sections for
   better sharing of the same settings: conn_timeout,
   node_timeout, client_timeout, network_chunk_size,
   disk_chunk_size.
3) Modifications to the object server and replicator to
   optionally use ssync in place of rsync. This is done in
   a generic enough way that switching to FutureSync should
   be easy someday.
4) The biggest part, and (at least for now) a completely
   optional one: the new ssync_sender and ssync_receiver
   files. They are nicely isolated for easier testing and
   visibility into test coverage, etc.
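To make part 1 concrete, here is a minimal sketch of the
cross-worker semaphore pattern, assuming a
global_conf_callback(preloaded_app_conf, global_conf) hook that
the WSGI runner invokes once in the parent process before forking
workers. The 'replication_semaphore' key and the
'replication_concurrency' option are illustrative names used for
this sketch, not a guaranteed API.

    import multiprocessing

    def global_conf_callback(preloaded_app_conf, global_conf):
        # Created in the parent process so every forked worker
        # shares the same semaphore; wrapped in a list so it can
        # ride along in the global_conf dict.
        if 'replication_semaphore' not in global_conf:
            limit = int(preloaded_app_conf.get(
                'replication_concurrency', 4))
            global_conf['replication_semaphore'] = [
                multiprocessing.BoundedSemaphore(limit)]

A receiver can then call acquire(False) on that semaphore around
each incoming replication request and respond 503 when the
cluster-wide limit is reached.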
All the usual logging, statsd, recon, etc. instrumentation
is still there when using ssync, just as it is when using
rsync.

Beyond the essential error and exceptional-condition
logging, I have not added any additional instrumentation at
this time. Unless someone finds something super pressing to
add to the logging, I think such additions would be better
as separate change reviews.

FOR NOW, IT IS NOT RECOMMENDED TO USE SSYNC ON PRODUCTION
CLUSTERS. Some of us will be running it in a limited
fashion to look for any subtle issues, do tuning, etc., but
generally ssync is an experimental feature. In its current
implementation it is probably going to be a bit slower than
rsync, but if all goes according to plan it will end up
much faster.

There are no comparisons yet between ssync and rsync other
than some raw virtual machine testing I've done to show it
should compete well enough once we can put it to use in the
real world.

If you Tweet, Google+, or whatever, be sure to indicate
that it's experimental. It'd be best to keep it out of
deployment guides, howtos, etc. until we all figure out
whether we like it and find it to be stable.
Change-Id: If003dcc6f4109e2d2a42f4873a0779110fff16d6
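The tests below pin down the sender's side of the SSYNC
handshake. As a reading aid, here is a simplified sketch of what
the test_connect* expectations amount to; it is distilled from
those assertions, not the full Sender.connect() implementation,
and omits timeouts and error handling.

    from swift.common import bufferedhttp

    def connect(node, job):
        # Open a connection to the remote node and start an SSYNC
        # request for the job's device/partition, chunked so the
        # missing_check and updates phases can stream.
        # (int(policy) yielding the policy index is an assumption
        # of this sketch.)
        conn = bufferedhttp.BufferedHTTPConnection(
            '%s:%s' % (node['replication_ip'],
                       node['replication_port']))
        conn.putrequest('SSYNC', '/%s/%s' % (node['device'],
                                             job['partition']))
        conn.putheader('Transfer-Encoding', 'chunked')
        conn.putheader('X-Backend-Storage-Policy-Index',
                       int(job['policy']))
        conn.endheaders()
        return conn, conn.getresponse()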
# Copyright (c) 2013 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os
import time
import unittest

import eventlet
import mock
import six

from swift.common import exceptions, utils
from swift.common.storage_policy import POLICIES
from swift.common.utils import Timestamp
from swift.obj import ssync_sender, diskfile, ssync_receiver

from test.unit import patch_policies, make_timestamp_iter
from test.unit.obj.common import FakeReplicator, BaseTest


class NullBufferedHTTPConnection(object):
    """No-op stand-in for bufferedhttp.BufferedHTTPConnection."""

    def __init__(self, *args, **kwargs):
        pass

    def putrequest(self, *args, **kwargs):
        pass

    def putheader(self, *args, **kwargs):
        pass

    def endheaders(self, *args, **kwargs):
        pass

    def getresponse(self, *args, **kwargs):
        pass

    def close(self, *args, **kwargs):
        pass


class FakeResponse(object):

    def __init__(self, chunk_body=''):
        self.status = 200
        self.close_called = False
        if chunk_body:
            # The body is preformatted as a single HTTP
            # chunked-encoding frame: hex length, CRLF, payload,
            # CRLF, then the zero-length terminating chunk.
            self.fp = six.StringIO(
                '%x\r\n%s\r\n0\r\n\r\n' % (len(chunk_body), chunk_body))

    def read(self, *args, **kwargs):
        return ''

    def close(self):
        self.close_called = True


class FakeConnection(object):

    def __init__(self):
        self.sent = []
        self.closed = False

    def send(self, data):
        self.sent.append(data)

    def close(self):
        self.closed = True


@patch_policies()
class TestSender(BaseTest):

    def setUp(self):
        super(TestSender, self).setUp()
        self.testdir = os.path.join(self.tmpdir, 'tmp_test_ssync_sender')
        utils.mkdirs(os.path.join(self.testdir, 'dev'))
        self.daemon = FakeReplicator(self.testdir)
        self.sender = ssync_sender.Sender(self.daemon, None, None, None)

    def test_call_catches_MessageTimeout(self):

        def connect(self):
            exc = exceptions.MessageTimeout(1, 'test connect')
            # Cancels Eventlet's raising of this since we're about to do it.
            exc.cancel()
            raise exc

        with mock.patch.object(ssync_sender.Sender, 'connect', connect):
            node = dict(replication_ip='1.2.3.4', replication_port=5678,
                        device='sda1')
            job = dict(partition='9', policy=POLICIES.legacy)
            self.sender = ssync_sender.Sender(self.daemon, node, job, None)
            self.sender.suffixes = ['abc']
            success, candidates = self.sender()
            self.assertFalse(success)
            self.assertEqual(candidates, {})
        error_lines = self.daemon.logger.get_lines_for_level('error')
        self.assertEqual(1, len(error_lines))
        self.assertEqual('1.2.3.4:5678/sda1/9 1 second: test connect',
                         error_lines[0])

    def test_call_catches_ReplicationException(self):

        def connect(self):
            raise exceptions.ReplicationException('test connect')

        with mock.patch.object(ssync_sender.Sender, 'connect', connect):
            node = dict(replication_ip='1.2.3.4', replication_port=5678,
                        device='sda1')
            job = dict(partition='9', policy=POLICIES.legacy)
            self.sender = ssync_sender.Sender(self.daemon, node, job, None)
            self.sender.suffixes = ['abc']
            success, candidates = self.sender()
            self.assertFalse(success)
            self.assertEqual(candidates, {})
        error_lines = self.daemon.logger.get_lines_for_level('error')
        self.assertEqual(1, len(error_lines))
        self.assertEqual('1.2.3.4:5678/sda1/9 test connect',
                         error_lines[0])

    def test_call_catches_other_exceptions(self):
        node = dict(replication_ip='1.2.3.4', replication_port=5678,
                    device='sda1')
        job = dict(partition='9', policy=POLICIES.legacy)
        self.sender = ssync_sender.Sender(self.daemon, node, job, None)
        self.sender.suffixes = ['abc']
        self.sender.connect = 'cause exception'
        success, candidates = self.sender()
        self.assertFalse(success)
        self.assertEqual(candidates, {})
        error_lines = self.daemon.logger.get_lines_for_level('error')
        for line in error_lines:
            self.assertTrue(line.startswith(
                '1.2.3.4:5678/sda1/9 EXCEPTION in replication.Sender:'))

    def test_call_catches_exception_handling_exception(self):
        job = node = None  # Will cause the inside exception handler to fail
        self.sender = ssync_sender.Sender(self.daemon, node, job, None)
        self.sender.suffixes = ['abc']
        self.sender.connect = 'cause exception'
        success, candidates = self.sender()
        self.assertFalse(success)
        self.assertEqual(candidates, {})
        error_lines = self.daemon.logger.get_lines_for_level('error')
        for line in error_lines:
            self.assertTrue(line.startswith(
                'EXCEPTION in replication.Sender'))

    def test_call_calls_others(self):
        self.sender.suffixes = ['abc']
        self.sender.connect = mock.MagicMock()
        self.sender.missing_check = mock.MagicMock()
        self.sender.updates = mock.MagicMock()
        self.sender.disconnect = mock.MagicMock()
        success, candidates = self.sender()
        self.assertTrue(success)
        self.assertEqual(candidates, {})
        self.sender.connect.assert_called_once_with()
        self.sender.missing_check.assert_called_once_with()
        self.sender.updates.assert_called_once_with()
        self.sender.disconnect.assert_called_once_with()

    def test_call_calls_others_returns_failure(self):
        self.sender.suffixes = ['abc']
        self.sender.connect = mock.MagicMock()
        self.sender.missing_check = mock.MagicMock()
        self.sender.updates = mock.MagicMock()
        self.sender.disconnect = mock.MagicMock()
        self.sender.failures = 1
        success, candidates = self.sender()
        self.assertFalse(success)
        self.assertEqual(candidates, {})
        self.sender.connect.assert_called_once_with()
        self.sender.missing_check.assert_called_once_with()
        self.sender.updates.assert_called_once_with()
        self.sender.disconnect.assert_called_once_with()

    def test_connect(self):
        node = dict(replication_ip='1.2.3.4', replication_port=5678,
                    device='sda1', index=0)
        job = dict(partition='9', policy=POLICIES[1])
        self.sender = ssync_sender.Sender(self.daemon, node, job, None)
        self.sender.suffixes = ['abc']
        with mock.patch(
                'swift.obj.ssync_sender.bufferedhttp.BufferedHTTPConnection'
        ) as mock_conn_class:
            mock_conn = mock_conn_class.return_value
            mock_resp = mock.MagicMock()
            mock_resp.status = 200
            mock_conn.getresponse.return_value = mock_resp
            self.sender.connect()
        mock_conn_class.assert_called_once_with('1.2.3.4:5678')
        expectations = {
            'putrequest': [
                mock.call('SSYNC', '/sda1/9'),
            ],
            'putheader': [
                mock.call('Transfer-Encoding', 'chunked'),
                mock.call('X-Backend-Storage-Policy-Index', 1),
                mock.call('X-Backend-Ssync-Frag-Index', 0),
                mock.call('X-Backend-Ssync-Node-Index', 0),
            ],
            'endheaders': [mock.call()],
        }
        for method_name, expected_calls in expectations.items():
            mock_method = getattr(mock_conn, method_name)
            self.assertEqual(expected_calls, mock_method.mock_calls,
                             'connection method "%s" got %r not %r' % (
                                 method_name, mock_method.mock_calls,
                                 expected_calls))
|
2015-06-25 01:35:07 -07:00
|
|
|
|
|
|
|
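    # A handoff job supplies an explicit frag_index (here 9) while the node
    # dict has no ring index, so X-Backend-Ssync-Frag-Index carries the job's
    # value and X-Backend-Ssync-Node-Index is sent blank.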
    def test_connect_handoff(self):
        node = dict(replication_ip='1.2.3.4', replication_port=5678,
                    device='sda1')
        job = dict(partition='9', policy=POLICIES[1], frag_index=9)
        self.sender = ssync_sender.Sender(self.daemon, node, job, None)
        self.sender.suffixes = ['abc']
        with mock.patch(
                'swift.obj.ssync_sender.bufferedhttp.BufferedHTTPConnection'
        ) as mock_conn_class:
            mock_conn = mock_conn_class.return_value
            mock_resp = mock.MagicMock()
            mock_resp.status = 200
            mock_conn.getresponse.return_value = mock_resp
            self.sender.connect()
        mock_conn_class.assert_called_once_with('1.2.3.4:5678')
        expectations = {
            'putrequest': [
                mock.call('SSYNC', '/sda1/9'),
            ],
            'putheader': [
                mock.call('Transfer-Encoding', 'chunked'),
                mock.call('X-Backend-Storage-Policy-Index', 1),
                mock.call('X-Backend-Ssync-Frag-Index', 9),
                mock.call('X-Backend-Ssync-Node-Index', ''),
            ],
            'endheaders': [mock.call()],
        }
        for method_name, expected_calls in expectations.items():
            mock_method = getattr(mock_conn, method_name)
            self.assertEqual(expected_calls, mock_method.mock_calls,
                             'connection method "%s" got %r not %r' % (
                                 method_name, mock_method.mock_calls,
                                 expected_calls))

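    # With POLICIES[0] and no frag_index in the job at all, both ssync index
    # headers are expected to be sent blank.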
    def test_connect_handoff_no_frag(self):
        node = dict(replication_ip='1.2.3.4', replication_port=5678,
                    device='sda1')
        job = dict(partition='9', policy=POLICIES[0])
        self.sender = ssync_sender.Sender(self.daemon, node, job, None)
        self.sender.suffixes = ['abc']
        with mock.patch(
                'swift.obj.ssync_sender.bufferedhttp.BufferedHTTPConnection'
        ) as mock_conn_class:
            mock_conn = mock_conn_class.return_value
            mock_resp = mock.MagicMock()
            mock_resp.status = 200
            mock_conn.getresponse.return_value = mock_resp
            self.sender.connect()
        mock_conn_class.assert_called_once_with('1.2.3.4:5678')
        expectations = {
            'putrequest': [
                mock.call('SSYNC', '/sda1/9'),
            ],
            'putheader': [
                mock.call('Transfer-Encoding', 'chunked'),
                mock.call('X-Backend-Storage-Policy-Index', 0),
                mock.call('X-Backend-Ssync-Frag-Index', ''),
                mock.call('X-Backend-Ssync-Node-Index', ''),
            ],
            'endheaders': [mock.call()],
        }
        for method_name, expected_calls in expectations.items():
            mock_method = getattr(mock_conn, method_name)
            self.assertEqual(expected_calls, mock_method.mock_calls,
                             'connection method "%s" got %r not %r' % (
                                 method_name, mock_method.mock_calls,
                                 expected_calls))

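    # An explicit frag_index=None must behave exactly like an absent
    # frag_index: None must not leak into the header, which is again blank.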
    def test_connect_handoff_none_frag(self):
        node = dict(replication_ip='1.2.3.4', replication_port=5678,
                    device='sda1')
        job = dict(partition='9', policy=POLICIES[1], frag_index=None)
        self.sender = ssync_sender.Sender(self.daemon, node, job, None)
        self.sender.suffixes = ['abc']
        with mock.patch(
                'swift.obj.ssync_sender.bufferedhttp.BufferedHTTPConnection'
        ) as mock_conn_class:
            mock_conn = mock_conn_class.return_value
            mock_resp = mock.MagicMock()
            mock_resp.status = 200
            mock_conn.getresponse.return_value = mock_resp
            self.sender.connect()
        mock_conn_class.assert_called_once_with('1.2.3.4:5678')
        expectations = {
            'putrequest': [
                mock.call('SSYNC', '/sda1/9'),
            ],
            'putheader': [
                mock.call('Transfer-Encoding', 'chunked'),
                mock.call('X-Backend-Storage-Policy-Index', 1),
                mock.call('X-Backend-Ssync-Frag-Index', ''),
                mock.call('X-Backend-Ssync-Node-Index', ''),
            ],
            'endheaders': [mock.call()],
        }
        for method_name, expected_calls in expectations.items():
            mock_method = getattr(mock_conn, method_name)
            self.assertEqual(expected_calls, mock_method.mock_calls,
                             'connection method "%s" got %r not %r' % (
                                 method_name, mock_method.mock_calls,
                                 expected_calls))

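    # Same blank-index expectations for a replication-style job dict that
    # simply omits frag_index, as noted in the inline comment below.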
    def test_connect_handoff_replicated(self):
        node = dict(replication_ip='1.2.3.4', replication_port=5678,
                    device='sda1')
        # no frag_index in rsync job
        job = dict(partition='9', policy=POLICIES[1])
        self.sender = ssync_sender.Sender(self.daemon, node, job, None)
        self.sender.suffixes = ['abc']
        with mock.patch(
                'swift.obj.ssync_sender.bufferedhttp.BufferedHTTPConnection'
        ) as mock_conn_class:
            mock_conn = mock_conn_class.return_value
            mock_resp = mock.MagicMock()
            mock_resp.status = 200
            mock_conn.getresponse.return_value = mock_resp
            self.sender.connect()
        mock_conn_class.assert_called_once_with('1.2.3.4:5678')
        expectations = {
            'putrequest': [
                mock.call('SSYNC', '/sda1/9'),
            ],
            'putheader': [
                mock.call('Transfer-Encoding', 'chunked'),
                mock.call('X-Backend-Storage-Policy-Index', 1),
                mock.call('X-Backend-Ssync-Frag-Index', ''),
                mock.call('X-Backend-Ssync-Node-Index', ''),
            ],
            'endheaders': [mock.call()],
        }
        for method_name, expected_calls in expectations.items():
            mock_method = getattr(mock_conn, method_name)
            self.assertEqual(expected_calls, mock_method.mock_calls,
                             'connection method "%s" got %r not %r' % (
                                 method_name, mock_method.mock_calls,
                                 expected_calls))

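    # Calling a Sender instance drives the whole exchange and returns a
    # (success, candidates) tuple, where candidates maps the object hashes
    # believed to be in sync on the remote to their timestamps. The scenarios
    # below cover: no suffixes (nothing to do), everything already in sync, a
    # faked send_map entry (still returned once "sent"), and a
    # remote_check_objs pass in which the wanted-but-unsynced hash is dropped
    # from the returned map.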
    def test_call(self):
        def patch_sender(sender):
            sender.connect = mock.MagicMock()
            sender.missing_check = mock.MagicMock()
            sender.updates = mock.MagicMock()
            sender.disconnect = mock.MagicMock()

        node = dict(replication_ip='1.2.3.4', replication_port=5678,
                    device='sda1')
        job = {
            'device': 'dev',
            'partition': '9',
            'policy': POLICIES.legacy,
            'frag_index': 0,
        }
        available_map = dict([('9d41d8cd98f00b204e9800998ecf0abc',
                               '1380144470.00000'),
                              ('9d41d8cd98f00b204e9800998ecf0def',
                               '1380144472.22222'),
                              ('9d41d8cd98f00b204e9800998ecf1def',
                               '1380144474.44444')])

        # no suffixes -> no work done
        sender = ssync_sender.Sender(
            self.daemon, node, job, [], remote_check_objs=None)
        patch_sender(sender)
        sender.available_map = available_map
        success, candidates = sender()
        self.assertTrue(success)
        self.assertEqual({}, candidates)

        # all objs in sync
        sender = ssync_sender.Sender(
            self.daemon, node, job, ['ignored'], remote_check_objs=None)
        patch_sender(sender)
        sender.available_map = available_map
        success, candidates = sender()
        self.assertTrue(success)
        self.assertEqual(available_map, candidates)

        # one obj not in sync, sync'ing faked, all objs should be in return set
        wanted = '9d41d8cd98f00b204e9800998ecf0def'
        sender = ssync_sender.Sender(
            self.daemon, node, job, ['ignored'],
            remote_check_objs=None)
        patch_sender(sender)
        sender.send_map = {wanted: []}
        sender.available_map = available_map
        success, candidates = sender()
        self.assertTrue(success)
        self.assertEqual(available_map, candidates)

        # one obj not in sync, remote check only so that obj is not sync'd
        # and should not be in the return set
        wanted = '9d41d8cd98f00b204e9800998ecf0def'
        remote_check_objs = set(available_map.keys())
        sender = ssync_sender.Sender(
            self.daemon, node, job, ['ignored'],
            remote_check_objs=remote_check_objs)
        patch_sender(sender)
        sender.send_map = {wanted: []}
        sender.available_map = available_map
        success, candidates = sender()
        self.assertTrue(success)
        expected_map = dict([('9d41d8cd98f00b204e9800998ecf0abc',
                              '1380144470.00000'),
                             ('9d41d8cd98f00b204e9800998ecf1def',
                              '1380144474.44444')])
        self.assertEqual(expected_map, candidates)

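    # A legacy receiver replies to the missing check with bare hashes and no
    # 'd'/'m' want-flags. The sender is expected to fall back to sending full
    # PUTs for such hashes and never to attempt a metadata-only POST, which
    # is what the found_put/found_post flags assert below.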
    def test_call_and_missing_check_metadata_legacy_response(self):
        def yield_hashes(device, partition, policy, suffixes=None, **kwargs):
            if device == 'dev' and partition == '9' and suffixes == ['abc'] \
                    and policy == POLICIES.legacy:
                yield (
                    '/srv/node/dev/objects/9/abc/'
                    '9d41d8cd98f00b204e9800998ecf0abc',
                    '9d41d8cd98f00b204e9800998ecf0abc',
                    {'ts_data': Timestamp(1380144470.00000),
                     'ts_meta': Timestamp(1380155570.00005)})
            else:
                raise Exception(
                    'No match for %r %r %r' % (device, partition, suffixes))

        self.sender.connection = FakeConnection()
        self.sender.node = {}
        self.sender.job = {
            'device': 'dev',
            'partition': '9',
            'policy': POLICIES.legacy,
            'frag_index': 0,
        }
        self.sender.suffixes = ['abc']
        self.sender.response = FakeResponse(
            chunk_body=(
                ':MISSING_CHECK: START\r\n'
                '9d41d8cd98f00b204e9800998ecf0abc\r\n'
                ':MISSING_CHECK: END\r\n'
                ':UPDATES: START\r\n'
                ':UPDATES: END\r\n'
            ))
        self.sender.daemon._diskfile_mgr.yield_hashes = yield_hashes
        self.sender.connect = mock.MagicMock()
        self.sender.df_mgr.get_diskfile_from_hash = mock.MagicMock()
        self.sender.disconnect = mock.MagicMock()
        success, candidates = self.sender()
        self.assertTrue(success)
        found_post = found_put = False
        for chunk in self.sender.connection.sent:
            if 'POST' in chunk:
                found_post = True
            if 'PUT' in chunk:
                found_put = True
        self.assertFalse(found_post)
        self.assertTrue(found_put)
        self.assertEqual(self.sender.failures, 0)

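    # A current receiver flags each wanted hash ('d' requests the data); the
    # returned candidates should carry the timestamps that yield_hashes
    # reported for the local diskfiles.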
    def test_call_and_missing_check(self):
        def yield_hashes(device, partition, policy, suffixes=None, **kwargs):
            if device == 'dev' and partition == '9' and suffixes == ['abc'] \
                    and policy == POLICIES.legacy:
                yield (
                    '/srv/node/dev/objects/9/abc/'
                    '9d41d8cd98f00b204e9800998ecf0abc',
                    '9d41d8cd98f00b204e9800998ecf0abc',
                    {'ts_data': Timestamp(1380144470.00000)})
            else:
                raise Exception(
                    'No match for %r %r %r' % (device, partition, suffixes))

        self.sender.connection = FakeConnection()
        self.sender.node = {}
        self.sender.job = {
            'device': 'dev',
            'partition': '9',
            'policy': POLICIES.legacy,
            'frag_index': 0,
        }
        self.sender.suffixes = ['abc']
        self.sender.response = FakeResponse(
            chunk_body=(
                ':MISSING_CHECK: START\r\n'
                '9d41d8cd98f00b204e9800998ecf0abc d\r\n'
                ':MISSING_CHECK: END\r\n'))
        self.sender.daemon._diskfile_mgr.yield_hashes = yield_hashes
        self.sender.connect = mock.MagicMock()
        self.sender.updates = mock.MagicMock()
        self.sender.disconnect = mock.MagicMock()
        success, candidates = self.sender()
        self.assertTrue(success)
        self.assertEqual(candidates,
                         dict([('9d41d8cd98f00b204e9800998ecf0abc',
                                {'ts_data': Timestamp(1380144470.00000)})]))
        self.assertEqual(self.sender.failures, 0)

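    # When a list of hashes is passed as remote_check_objs, the run only
    # verifies presence on the remote: an empty missing-check response means
    # the object is in sync, so it comes back as a candidate with no updates
    # sent.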
    def test_call_and_missing_check_with_obj_list(self):
        def yield_hashes(device, partition, policy, suffixes=None, **kwargs):
            if device == 'dev' and partition == '9' and suffixes == ['abc'] \
                    and policy == POLICIES.legacy:
                yield (
                    '/srv/node/dev/objects/9/abc/'
                    '9d41d8cd98f00b204e9800998ecf0abc',
                    '9d41d8cd98f00b204e9800998ecf0abc',
                    {'ts_data': Timestamp(1380144470.00000)})
            else:
                raise Exception(
                    'No match for %r %r %r' % (device, partition, suffixes))
        job = {
            'device': 'dev',
            'partition': '9',
            'policy': POLICIES.legacy,
            'frag_index': 0,
        }
        self.sender = ssync_sender.Sender(self.daemon, None, job, ['abc'],
                                          ['9d41d8cd98f00b204e9800998ecf0abc'])
        self.sender.connection = FakeConnection()
        self.sender.response = FakeResponse(
            chunk_body=(
                ':MISSING_CHECK: START\r\n'
                ':MISSING_CHECK: END\r\n'))
        self.sender.daemon._diskfile_mgr.yield_hashes = yield_hashes
        self.sender.connect = mock.MagicMock()
        self.sender.updates = mock.MagicMock()
        self.sender.disconnect = mock.MagicMock()
        success, candidates = self.sender()
        self.assertTrue(success)
        self.assertEqual(candidates,
                         dict([('9d41d8cd98f00b204e9800998ecf0abc',
                                {'ts_data': Timestamp(1380144470.00000)})]))
        self.assertEqual(self.sender.failures, 0)

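    # Same setup, but the receiver replies that it does want the object; with
    # remote_check_objs in effect the hash must be dropped from the returned
    # candidates instead of being synced.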
    def test_call_and_missing_check_with_obj_list_but_required(self):
        def yield_hashes(device, partition, policy, suffixes=None, **kwargs):
            if device == 'dev' and partition == '9' and suffixes == ['abc'] \
                    and policy == POLICIES.legacy:
                yield (
                    '/srv/node/dev/objects/9/abc/'
                    '9d41d8cd98f00b204e9800998ecf0abc',
                    '9d41d8cd98f00b204e9800998ecf0abc',
                    {'ts_data': Timestamp(1380144470.00000)})
            else:
                raise Exception(
                    'No match for %r %r %r' % (device, partition, suffixes))
        job = {
            'device': 'dev',
            'partition': '9',
            'policy': POLICIES.legacy,
            'frag_index': 0,
        }
        self.sender = ssync_sender.Sender(self.daemon, {}, job, ['abc'],
                                          ['9d41d8cd98f00b204e9800998ecf0abc'])
        self.sender.connection = FakeConnection()
        self.sender.response = FakeResponse(
            chunk_body=(
                ':MISSING_CHECK: START\r\n'
                '9d41d8cd98f00b204e9800998ecf0abc d\r\n'
                ':MISSING_CHECK: END\r\n'))
        self.sender.daemon._diskfile_mgr.yield_hashes = yield_hashes
        self.sender.connect = mock.MagicMock()
        self.sender.updates = mock.MagicMock()
        self.sender.disconnect = mock.MagicMock()
        success, candidates = self.sender()
        self.assertTrue(success)
        self.assertEqual(candidates, {})

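    # Timeout handling: conn_timeout bounds the connect/send phase, so a
    # putrequest stalled past the 0.01 second limit should fail the run and
    # log 'connect send' timeout errors matching the pattern asserted below.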
    def test_connect_send_timeout(self):
        self.daemon.node_timeout = 0.01  # make disconnect fail fast
        self.daemon.conn_timeout = 0.01
        node = dict(replication_ip='1.2.3.4', replication_port=5678,
                    device='sda1')
        job = dict(partition='9', policy=POLICIES.legacy)
        self.sender = ssync_sender.Sender(self.daemon, node, job, None)
        self.sender.suffixes = ['abc']

        def putrequest(*args, **kwargs):
            eventlet.sleep(0.1)

        with mock.patch.object(
                ssync_sender.bufferedhttp.BufferedHTTPConnection,
                'putrequest', putrequest):
            success, candidates = self.sender()
        self.assertFalse(success)
        self.assertEqual(candidates, {})
        error_lines = self.daemon.logger.get_lines_for_level('error')
        for line in error_lines:
            self.assertTrue(line.startswith(
                '1.2.3.4:5678/sda1/9 0.01 seconds: connect send'))

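    # node_timeout bounds the wait for the SSYNC response; a getresponse that
    # sleeps past the 0.02 second limit should surface as 'connect receive'
    # timeout errors.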
    def test_connect_receive_timeout(self):
        self.daemon.node_timeout = 0.02
        node = dict(replication_ip='1.2.3.4', replication_port=5678,
                    device='sda1', index=0)
        job = dict(partition='9', policy=POLICIES.legacy)
        self.sender = ssync_sender.Sender(self.daemon, node, job, None)
        self.sender.suffixes = ['abc']

        class FakeBufferedHTTPConnection(NullBufferedHTTPConnection):

            def getresponse(*args, **kwargs):
                eventlet.sleep(0.1)

        with mock.patch.object(
                ssync_sender.bufferedhttp, 'BufferedHTTPConnection',
                FakeBufferedHTTPConnection):
            success, candidates = self.sender()
        self.assertFalse(success)
        self.assertEqual(candidates, {})
        error_lines = self.daemon.logger.get_lines_for_level('error')
        for line in error_lines:
            self.assertTrue(line.startswith(
                '1.2.3.4:5678/sda1/9 0.02 seconds: connect receive'))

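    # A non-200 handshake response aborts the run early: the logged error
    # carries both the status and the response body, and the sender must not
    # proceed to the missing_check exchange (hence the mock on
    # Sender.missing_check).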
    def test_connect_bad_status(self):
        self.daemon.node_timeout = 0.02
        node = dict(replication_ip='1.2.3.4', replication_port=5678,
                    device='sda1', index=0)
        job = dict(partition='9', policy=POLICIES.legacy)

        class FakeBufferedHTTPConnection(NullBufferedHTTPConnection):
            def getresponse(*args, **kwargs):
                response = FakeResponse()
                response.status = 503
                response.read = lambda: 'an error message'
                return response

        missing_check_fn = 'swift.obj.ssync_sender.Sender.missing_check'
        with mock.patch(missing_check_fn) as mock_missing_check:
            with mock.patch.object(
                    ssync_sender.bufferedhttp, 'BufferedHTTPConnection',
                    FakeBufferedHTTPConnection):
                self.sender = ssync_sender.Sender(
                    self.daemon, node, job, ['abc'])
                success, candidates = self.sender()
        self.assertFalse(success)
        self.assertEqual(candidates, {})
        error_lines = self.daemon.logger.get_lines_for_level('error')
        for line in error_lines:
            self.assertTrue(line.startswith(
                '1.2.3.4:5678/sda1/9 Expected status 200; got 503'))
            self.assertIn('an error message', line)
        # sanity check that Sender did not proceed to missing_check exchange
        self.assertFalse(mock_missing_check.called)

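    # The remaining tests exercise Sender.readline(), which reassembles
    # CRLF-terminated lines from an HTTP chunked response body. In chunked
    # encoding each chunk is '<hex size>[; extension]\r\n<payload>\r\n' and
    # '0\r\n\r\n' terminates the body, so '2\r\nx\n\r\n' is a two-byte chunk
    # holding 'x\n'.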
    def test_readline_newline_in_buffer(self):
        self.sender.response_buffer = 'Has a newline already.\r\nOkay.'
        self.assertEqual(self.sender.readline(), 'Has a newline already.\r\n')
        self.assertEqual(self.sender.response_buffer, 'Okay.')

    def test_readline_buffer_exceeds_network_chunk_size_somehow(self):
        self.daemon.network_chunk_size = 2
        self.sender.response_buffer = '1234567890'
        self.assertEqual(self.sender.readline(), '1234567890')
        self.assertEqual(self.sender.response_buffer, '')

    def test_readline_at_start_of_chunk(self):
        self.sender.response = FakeResponse()
        self.sender.response.fp = six.StringIO('2\r\nx\n\r\n')
        self.assertEqual(self.sender.readline(), 'x\n')

    def test_readline_chunk_with_extension(self):
        self.sender.response = FakeResponse()
        self.sender.response.fp = six.StringIO(
            '2 ; chunk=extension\r\nx\n\r\n')
        self.assertEqual(self.sender.readline(), 'x\n')

    def test_readline_broken_chunk(self):
        self.sender.response = FakeResponse()
        self.sender.response.fp = six.StringIO('q\r\nx\n\r\n')
        self.assertRaises(
            exceptions.ReplicationException, self.sender.readline)
        self.assertTrue(self.sender.response.close_called)

    def test_readline_terminated_chunk(self):
        self.sender.response = FakeResponse()
        self.sender.response.fp = six.StringIO('b\r\nnot enough')
        self.assertRaises(
            exceptions.ReplicationException, self.sender.readline)
        self.assertTrue(self.sender.response.close_called)


    def test_readline_all(self):
        self.sender.response = FakeResponse()
        self.sender.response.fp = six.StringIO('2\r\nx\n\r\n0\r\n\r\n')
        self.assertEqual(self.sender.readline(), 'x\n')
        self.assertEqual(self.sender.readline(), '')
        self.assertEqual(self.sender.readline(), '')

    def test_readline_all_trailing_not_newline_termed(self):
        self.sender.response = FakeResponse()
        self.sender.response.fp = six.StringIO(
            '2\r\nx\n\r\n3\r\n123\r\n0\r\n\r\n')
        self.assertEqual(self.sender.readline(), 'x\n')
        self.assertEqual(self.sender.readline(), '123')
        self.assertEqual(self.sender.readline(), '')
        self.assertEqual(self.sender.readline(), '')
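
    # Not part of the original suite: a minimal sketch of the HTTP chunked
    # framing that readline() consumes in the tests above. Each frame is
    # '<hex length>\r\n<data>\r\n', and a zero-length chunk ('0\r\n\r\n')
    # terminates the stream, after which readline() returns ''.
    def test_readline_chunk_framing_sketch(self):
        data = 'hello\n'
        chunk = '%x\r\n%s\r\n' % (len(data), data)
        self.sender.response = FakeResponse()
        self.sender.response.fp = six.StringIO(chunk + '0\r\n\r\n')
        self.assertEqual(self.sender.readline(), 'hello\n')
        self.assertEqual(self.sender.readline(), '')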

    def test_missing_check_timeout(self):
        self.sender.connection = FakeConnection()
        self.sender.connection.send = lambda d: eventlet.sleep(1)
        self.sender.daemon.node_timeout = 0.01
        self.assertRaises(exceptions.MessageTimeout, self.sender.missing_check)
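
    # Not part of the original suite: a sketch of the eventlet timeout
    # behavior the test above leans on (assumption: the sender guards each
    # send with a timeout derived from daemon.node_timeout, so a send that
    # blocks for 1s against a 0.01s limit fires the timeout).
    def test_node_timeout_mechanism_sketch(self):
        with self.assertRaises(eventlet.Timeout):
            with eventlet.Timeout(0.01):
                eventlet.sleep(1)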

    def test_missing_check_has_empty_suffixes(self):
        def yield_hashes(device, partition, policy, suffixes=None, **kwargs):
            if (device != 'dev' or partition != '9' or
                    policy != POLICIES.legacy or
                    suffixes != ['abc', 'def']):
                yield  # Just here to make this a generator
                raise Exception(
                    'No match for %r %r %r %r' % (device, partition,
                                                  policy, suffixes))

        self.sender.connection = FakeConnection()
        self.sender.job = {
            'device': 'dev',
            'partition': '9',
            'policy': POLICIES.legacy,
        }
        self.sender.suffixes = ['abc', 'def']
        self.sender.response = FakeResponse(
            chunk_body=(
                ':MISSING_CHECK: START\r\n'
                ':MISSING_CHECK: END\r\n'))
        self.sender.daemon._diskfile_mgr.yield_hashes = yield_hashes
        self.sender.missing_check()
        self.assertEqual(
            ''.join(self.sender.connection.sent),
            '17\r\n:MISSING_CHECK: START\r\n\r\n'
            '15\r\n:MISSING_CHECK: END\r\n\r\n')
        self.assertEqual(self.sender.send_map, {})
        self.assertEqual(self.sender.available_map, {})
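
    # Not part of the original suite: a sketch showing where the '17' and
    # '15' hex prefixes asserted above come from. Each MISSING_CHECK
    # protocol line travels as its own chunk, with the line's byte length
    # (CRLF included) rendered in hex.
    def test_missing_check_frame_sizes_sketch(self):
        start = ':MISSING_CHECK: START\r\n'
        end = ':MISSING_CHECK: END\r\n'
        self.assertEqual('%x\r\n%s\r\n' % (len(start), start),
                         '17\r\n:MISSING_CHECK: START\r\n\r\n')
        self.assertEqual('%x\r\n%s\r\n' % (len(end), end),
                         '15\r\n:MISSING_CHECK: END\r\n\r\n')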

    def test_missing_check_has_suffixes(self):
        def yield_hashes(device, partition, policy, suffixes=None, **kwargs):
            if (device == 'dev' and partition == '9' and
                    policy == POLICIES.legacy and
                    suffixes == ['abc', 'def']):
                yield (
                    '/srv/node/dev/objects/9/abc/'
                    '9d41d8cd98f00b204e9800998ecf0abc',
                    '9d41d8cd98f00b204e9800998ecf0abc',
                    {'ts_data': Timestamp(1380144470.00000)})
                yield (
                    '/srv/node/dev/objects/9/def/'
                    '9d41d8cd98f00b204e9800998ecf0def',
                    '9d41d8cd98f00b204e9800998ecf0def',
                    {'ts_data': Timestamp(1380144472.22222)})
                yield (
                    '/srv/node/dev/objects/9/def/'
                    '9d41d8cd98f00b204e9800998ecf1def',
                    '9d41d8cd98f00b204e9800998ecf1def',
                    {'ts_data': Timestamp(1380144474.44444),
                     'ts_meta': Timestamp(1380144475.44444)})
            else:
                raise Exception(
                    'No match for %r %r %r %r' % (device, partition,
                                                  policy, suffixes))

        self.sender.connection = FakeConnection()
        self.sender.job = {
            'device': 'dev',
            'partition': '9',
            'policy': POLICIES.legacy,
        }
        self.sender.suffixes = ['abc', 'def']
        self.sender.response = FakeResponse(
            chunk_body=(
                ':MISSING_CHECK: START\r\n'
                ':MISSING_CHECK: END\r\n'))
        self.sender.daemon._diskfile_mgr.yield_hashes = yield_hashes
        self.sender.missing_check()
        self.assertEqual(
            ''.join(self.sender.connection.sent),
            '17\r\n:MISSING_CHECK: START\r\n\r\n'
            '33\r\n9d41d8cd98f00b204e9800998ecf0abc 1380144470.00000\r\n\r\n'
            '33\r\n9d41d8cd98f00b204e9800998ecf0def 1380144472.22222\r\n\r\n'
            '3b\r\n9d41d8cd98f00b204e9800998ecf1def 1380144474.44444 '
            'm:186a0\r\n\r\n'
            '15\r\n:MISSING_CHECK: END\r\n\r\n')
        self.assertEqual(self.sender.send_map, {})
        candidates = [('9d41d8cd98f00b204e9800998ecf0abc',
                       dict(ts_data=Timestamp(1380144470.00000))),
                      ('9d41d8cd98f00b204e9800998ecf0def',
                       dict(ts_data=Timestamp(1380144472.22222))),
                      ('9d41d8cd98f00b204e9800998ecf1def',
                       dict(ts_data=Timestamp(1380144474.44444),
                            ts_meta=Timestamp(1380144475.44444)))]
        self.assertEqual(self.sender.available_map, dict(candidates))
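
    # Not part of the original suite: a sketch of the 'm:186a0' suffix
    # asserted above (assumption: when an object has a newer .meta, the
    # delta ts_meta - ts_data is appended as a hex count of 10**-5 second
    # units, so the 1.00000s delta above appears as 0x186a0 == 100000).
    def test_missing_check_meta_delta_sketch(self):
        delta_units = int(round(
            (1380144475.44444 - 1380144474.44444) * 100000))
        self.assertEqual('m:%x' % delta_units, 'm:186a0')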

    def test_missing_check_far_end_disconnect(self):
        def yield_hashes(device, partition, policy, suffixes=None, **kwargs):
            if (device == 'dev' and partition == '9' and
                    policy == POLICIES.legacy and
                    suffixes == ['abc']):
                yield (
                    '/srv/node/dev/objects/9/abc/'
                    '9d41d8cd98f00b204e9800998ecf0abc',
                    '9d41d8cd98f00b204e9800998ecf0abc',
                    {'ts_data': Timestamp(1380144470.00000)})
            else:
                raise Exception(
                    'No match for %r %r %r %r' % (device, partition,
                                                  policy, suffixes))

        self.sender.connection = FakeConnection()
        self.sender.job = {
            'device': 'dev',
            'partition': '9',
            'policy': POLICIES.legacy,
        }
        self.sender.suffixes = ['abc']
        self.sender.daemon._diskfile_mgr.yield_hashes = yield_hashes
        self.sender.response = FakeResponse(chunk_body='\r\n')
        exc = None
        try:
            self.sender.missing_check()
        except exceptions.ReplicationException as err:
            exc = err
        self.assertEqual(str(exc), 'Early disconnect')
        self.assertEqual(
            ''.join(self.sender.connection.sent),
            '17\r\n:MISSING_CHECK: START\r\n\r\n'
            '33\r\n9d41d8cd98f00b204e9800998ecf0abc 1380144470.00000\r\n\r\n'
            '15\r\n:MISSING_CHECK: END\r\n\r\n')
        self.assertEqual(self.sender.available_map,
                         dict([('9d41d8cd98f00b204e9800998ecf0abc',
                                dict(ts_data=Timestamp(1380144470.00000)))]))
    def test_missing_check_far_end_disconnect2(self):
        def yield_hashes(device, partition, policy, suffixes=None, **kwargs):
            if (device == 'dev' and partition == '9' and
                    policy == POLICIES.legacy and
                    suffixes == ['abc']):
                yield (
                    '/srv/node/dev/objects/9/abc/'
                    '9d41d8cd98f00b204e9800998ecf0abc',
                    '9d41d8cd98f00b204e9800998ecf0abc',
                    {'ts_data': Timestamp(1380144470.00000)})
            else:
                raise Exception(
                    'No match for %r %r %r %r' % (device, partition,
                                                  policy, suffixes))

        self.sender.connection = FakeConnection()
        self.sender.job = {
            'device': 'dev',
            'partition': '9',
            'policy': POLICIES.legacy,
        }
        self.sender.suffixes = ['abc']
        self.sender.daemon._diskfile_mgr.yield_hashes = yield_hashes
        self.sender.response = FakeResponse(
            chunk_body=':MISSING_CHECK: START\r\n')
        exc = None
        try:
            self.sender.missing_check()
        except exceptions.ReplicationException as err:
            exc = err
        self.assertEqual(str(exc), 'Early disconnect')
        self.assertEqual(
            ''.join(self.sender.connection.sent),
            '17\r\n:MISSING_CHECK: START\r\n\r\n'
            '33\r\n9d41d8cd98f00b204e9800998ecf0abc 1380144470.00000\r\n\r\n'
            '15\r\n:MISSING_CHECK: END\r\n\r\n')
        self.assertEqual(self.sender.available_map,
                         dict([('9d41d8cd98f00b204e9800998ecf0abc',
                                {'ts_data': Timestamp(1380144470.00000)})]))

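    # The disconnect tests exercise the MISSING_CHECK exchange: the sender
    # streams ':MISSING_CHECK: START', one '<hash> <timestamp>' line per
    # local object, then ':MISSING_CHECK: END', and expects the receiver
    # to reply with its own START/END block naming the hashes it wants.
    # A reply that stops short of that is surfaced as 'Early disconnect'.
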
    def test_missing_check_far_end_unexpected(self):
        def yield_hashes(device, partition, policy, suffixes=None, **kwargs):
            if (device == 'dev' and partition == '9' and
                    policy == POLICIES.legacy and
                    suffixes == ['abc']):
                yield (
                    '/srv/node/dev/objects/9/abc/'
                    '9d41d8cd98f00b204e9800998ecf0abc',
                    '9d41d8cd98f00b204e9800998ecf0abc',
                    {'ts_data': Timestamp(1380144470.00000)})
            else:
                raise Exception(
                    'No match for %r %r %r %r' % (device, partition,
                                                  policy, suffixes))

        self.sender.connection = FakeConnection()
        self.sender.job = {
            'device': 'dev',
            'partition': '9',
            'policy': POLICIES.legacy,
        }
        self.sender.suffixes = ['abc']
        self.sender.daemon._diskfile_mgr.yield_hashes = yield_hashes
        self.sender.response = FakeResponse(chunk_body='OH HAI\r\n')
        exc = None
        try:
            self.sender.missing_check()
        except exceptions.ReplicationException as err:
            exc = err
        self.assertEqual(str(exc), "Unexpected response: 'OH HAI'")
        self.assertEqual(
            ''.join(self.sender.connection.sent),
            '17\r\n:MISSING_CHECK: START\r\n\r\n'
            '33\r\n9d41d8cd98f00b204e9800998ecf0abc 1380144470.00000\r\n\r\n'
            '15\r\n:MISSING_CHECK: END\r\n\r\n')
        self.assertEqual(self.sender.available_map,
                         dict([('9d41d8cd98f00b204e9800998ecf0abc',
                                {'ts_data': Timestamp(1380144470.00000)})]))

    def test_missing_check_send_map(self):
        def yield_hashes(device, partition, policy, suffixes=None, **kwargs):
            if (device == 'dev' and partition == '9' and
                    policy == POLICIES.legacy and
                    suffixes == ['abc']):
                yield (
                    '/srv/node/dev/objects/9/abc/'
                    '9d41d8cd98f00b204e9800998ecf0abc',
                    '9d41d8cd98f00b204e9800998ecf0abc',
                    {'ts_data': Timestamp(1380144470.00000)})
            else:
                raise Exception(
                    'No match for %r %r %r %r' % (device, partition,
                                                  policy, suffixes))

        self.sender.connection = FakeConnection()
        self.sender.job = {
            'device': 'dev',
            'partition': '9',
            'policy': POLICIES.legacy,
        }
        self.sender.suffixes = ['abc']
        self.sender.response = FakeResponse(
            chunk_body=(
                ':MISSING_CHECK: START\r\n'
                '0123abc dm\r\n'
                ':MISSING_CHECK: END\r\n'))
        self.sender.daemon._diskfile_mgr.yield_hashes = yield_hashes
        self.sender.missing_check()
        self.assertEqual(
            ''.join(self.sender.connection.sent),
            '17\r\n:MISSING_CHECK: START\r\n\r\n'
            '33\r\n9d41d8cd98f00b204e9800998ecf0abc 1380144470.00000\r\n\r\n'
            '15\r\n:MISSING_CHECK: END\r\n\r\n')
        self.assertEqual(
            self.sender.send_map, {'0123abc': {'data': True, 'meta': True}})
        self.assertEqual(self.sender.available_map,
                         dict([('9d41d8cd98f00b204e9800998ecf0abc',
                                {'ts_data': Timestamp(1380144470.00000)})]))

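    # The flag characters after the hash in the receiver's reply say which
    # parts of the object are wanted: 'dm' above requests both data and
    # metadata, hence send_map records {'data': True, 'meta': True}.
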
    def test_missing_check_extra_line_parts(self):
        # check that sender tolerates extra parts in missing check
        # line responses to allow for protocol upgrades
        def yield_hashes(device, partition, policy, suffixes=None, **kwargs):
            if (device == 'dev' and partition == '9' and
                    policy == POLICIES.legacy and
                    suffixes == ['abc']):
                yield (
                    '/srv/node/dev/objects/9/abc/'
                    '9d41d8cd98f00b204e9800998ecf0abc',
                    '9d41d8cd98f00b204e9800998ecf0abc',
                    {'ts_data': Timestamp(1380144470.00000)})
            else:
                raise Exception(
                    'No match for %r %r %r %r' % (device, partition,
                                                  policy, suffixes))

        self.sender.connection = FakeConnection()
        self.sender.job = {
            'device': 'dev',
            'partition': '9',
            'policy': POLICIES.legacy,
        }
        self.sender.suffixes = ['abc']
        self.sender.response = FakeResponse(
            chunk_body=(
                ':MISSING_CHECK: START\r\n'
                '0123abc d extra response parts\r\n'
                ':MISSING_CHECK: END\r\n'))
        self.sender.daemon._diskfile_mgr.yield_hashes = yield_hashes
        self.sender.missing_check()
        self.assertEqual(self.sender.send_map,
                         {'0123abc': {'data': True}})
        self.assertEqual(self.sender.available_map,
                         dict([('9d41d8cd98f00b204e9800998ecf0abc',
                                {'ts_data': Timestamp(1380144470.00000)})]))

    def test_updates_timeout(self):
        self.sender.connection = FakeConnection()
        self.sender.connection.send = lambda d: eventlet.sleep(1)
        self.sender.daemon.node_timeout = 0.01
        self.assertRaises(exceptions.MessageTimeout, self.sender.updates)

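    # A minimal sketch of the pattern this exercises (an assumption about
    # the sender's internals, not a quote of them): each network write is
    # guarded by a timeout context sized by node_timeout, roughly
    #
    #     with exceptions.MessageTimeout(self.daemon.node_timeout, 'send'):
    #         self.connection.send(data)
    #
    # so a send stalled for 1 second trips the 0.01 second limit above.
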
    def test_updates_empty_send_map(self):
        self.sender.connection = FakeConnection()
        self.sender.send_map = {}
        self.sender.response = FakeResponse(
            chunk_body=(
                ':UPDATES: START\r\n'
                ':UPDATES: END\r\n'))
        self.sender.updates()
        self.assertEqual(
            ''.join(self.sender.connection.sent),
            '11\r\n:UPDATES: START\r\n\r\n'
            'f\r\n:UPDATES: END\r\n\r\n')

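    # After missing_check, updates() transmits the wanted objects between
    # ':UPDATES: START' and ':UPDATES: END' markers; with an empty
    # send_map, as above, only the two markers cross the wire
    # (0x11 == 17 and 0xf == 15 payload bytes).
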
    def test_updates_unexpected_response_lines1(self):
        self.sender.connection = FakeConnection()
        self.sender.send_map = {}
        self.sender.response = FakeResponse(
            chunk_body=(
                'abc\r\n'
                ':UPDATES: START\r\n'
                ':UPDATES: END\r\n'))
        exc = None
        try:
            self.sender.updates()
        except exceptions.ReplicationException as err:
            exc = err
        self.assertEqual(str(exc), "Unexpected response: 'abc'")
        self.assertEqual(
            ''.join(self.sender.connection.sent),
            '11\r\n:UPDATES: START\r\n\r\n'
            'f\r\n:UPDATES: END\r\n\r\n')

    def test_updates_unexpected_response_lines2(self):
        self.sender.connection = FakeConnection()
        self.sender.send_map = {}
        self.sender.response = FakeResponse(
            chunk_body=(
                ':UPDATES: START\r\n'
                'abc\r\n'
                ':UPDATES: END\r\n'))
        exc = None
        try:
            self.sender.updates()
        except exceptions.ReplicationException as err:
            exc = err
        self.assertEqual(str(exc), "Unexpected response: 'abc'")
        self.assertEqual(
            ''.join(self.sender.connection.sent),
            '11\r\n:UPDATES: START\r\n\r\n'
            'f\r\n:UPDATES: END\r\n\r\n')

    def test_updates_is_deleted(self):
        device = 'dev'
        part = '9'
        object_parts = ('a', 'c', 'o')
        df = self._make_open_diskfile(device, part, *object_parts)
        object_hash = utils.hash_path(*object_parts)
        delete_timestamp = utils.normalize_timestamp(time.time())
        df.delete(delete_timestamp)
        self.sender.connection = FakeConnection()
        self.sender.job = {
            'device': device,
            'partition': part,
            'policy': POLICIES.legacy,
            'frag_index': 0,
        }
        self.sender.node = {}
        self.sender.send_map = {object_hash: {'data': True}}
        self.sender.send_delete = mock.MagicMock()
        self.sender.send_put = mock.MagicMock()
        self.sender.response = FakeResponse(
            chunk_body=(
                ':UPDATES: START\r\n'
                ':UPDATES: END\r\n'))
        self.sender.updates()
        self.sender.send_delete.assert_called_once_with(
            '/a/c/o', delete_timestamp)
        self.assertEqual(self.sender.send_put.mock_calls, [])
        # note that the delete line isn't actually sent since we mock
        # send_delete; send_delete is tested separately.
        self.assertEqual(
            ''.join(self.sender.connection.sent),
            '11\r\n:UPDATES: START\r\n\r\n'
            'f\r\n:UPDATES: END\r\n\r\n')

    def test_update_send_delete(self):
        device = 'dev'
        part = '9'
        object_parts = ('a', 'c', 'o')
        df = self._make_open_diskfile(device, part, *object_parts)
        object_hash = utils.hash_path(*object_parts)
        delete_timestamp = utils.normalize_timestamp(time.time())
        df.delete(delete_timestamp)
        self.sender.connection = FakeConnection()
        self.sender.job = {
            'device': device,
            'partition': part,
            'policy': POLICIES.legacy,
            'frag_index': 0,
        }
        self.sender.node = {}
        self.sender.send_map = {object_hash: {'data': True}}
        self.sender.response = FakeResponse(
            chunk_body=(
                ':UPDATES: START\r\n'
                ':UPDATES: END\r\n'))
        self.sender.updates()
        self.assertEqual(
            ''.join(self.sender.connection.sent),
            '11\r\n:UPDATES: START\r\n\r\n'
            '30\r\n'
            'DELETE /a/c/o\r\n'
            'X-Timestamp: %s\r\n\r\n\r\n'
            'f\r\n:UPDATES: END\r\n\r\n'
            % delete_timestamp
        )
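
    # The framing asserted above is plain HTTP chunked transfer encoding:
    # each ssync line goes out as the payload length in hex, CRLF, the
    # payload, then CRLF. This extra test is an illustrative sketch added
    # here (not part of the original suite; the helper and test name are
    # ours) to make the hard-coded byte counts easy to verify:
    # ':UPDATES: START\r\n' is 17 (0x11) bytes, and the DELETE subrequest
    # in test_update_send_delete is 48 (0x30) bytes given the
    # 16-character normalized timestamp.
    def test_updates_chunk_framing_sketch(self):
        def chunk(payload):
            # hex length, CRLF, payload, CRLF -- one chunk per line
            return '%x\r\n%s\r\n' % (len(payload), payload)
        self.assertEqual('11\r\n:UPDATES: START\r\n\r\n',
                         chunk(':UPDATES: START\r\n'))
        self.assertEqual('f\r\n:UPDATES: END\r\n\r\n',
                         chunk(':UPDATES: END\r\n'))
        delete_timestamp = utils.normalize_timestamp(time.time())
        subrequest = ('DELETE /a/c/o\r\n'
                      'X-Timestamp: %s\r\n\r\n' % delete_timestamp)
        self.assertEqual('30\r\n' + subrequest + '\r\n', chunk(subrequest))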

    def test_updates_put(self):
        # sender has data file and meta file
        ts_iter = make_timestamp_iter()
        device = 'dev'
        part = '9'
        object_parts = ('a', 'c', 'o')
        t1 = next(ts_iter)
        df = self._make_open_diskfile(
            device, part, *object_parts, timestamp=t1)
        t2 = next(ts_iter)
        metadata = {'X-Timestamp': t2.internal, 'X-Object-Meta-Fruit': 'kiwi'}
        df.write_metadata(metadata)
        object_hash = utils.hash_path(*object_parts)
        df.open()
        expected = df.get_metadata()
        self.sender.connection = FakeConnection()
        self.sender.job = {
            'device': device,
            'partition': part,
            'policy': POLICIES.legacy,
            'frag_index': 0,
        }
        self.sender.node = {}
        # receiver requested data only
        self.sender.send_map = {object_hash: {'data': True}}
        self.sender.send_delete = mock.MagicMock()
        self.sender.send_put = mock.MagicMock()
        self.sender.send_post = mock.MagicMock()
        self.sender.response = FakeResponse(
            chunk_body=(
                ':UPDATES: START\r\n'
                ':UPDATES: END\r\n'))
        self.sender.updates()
        self.assertEqual(self.sender.send_delete.mock_calls, [])
        self.assertEqual(self.sender.send_post.mock_calls, [])
        self.assertEqual(1, len(self.sender.send_put.mock_calls))
        args, _kwargs = self.sender.send_put.call_args
        path, df = args
        self.assertEqual(path, '/a/c/o')
        self.assertIsInstance(df, diskfile.DiskFile)
        self.assertEqual(expected, df.get_metadata())
        # note that the put line isn't actually sent since we mock send_put;
        # send_put is tested separately.
        self.assertEqual(
            ''.join(self.sender.connection.sent),
            '11\r\n:UPDATES: START\r\n\r\n'
            'f\r\n:UPDATES: END\r\n\r\n')

    def test_updates_post(self):
        ts_iter = make_timestamp_iter()
        device = 'dev'
        part = '9'
        object_parts = ('a', 'c', 'o')
        t1 = next(ts_iter)
        df = self._make_open_diskfile(
            device, part, *object_parts, timestamp=t1)
        t2 = next(ts_iter)
        metadata = {'X-Timestamp': t2.internal, 'X-Object-Meta-Fruit': 'kiwi'}
        df.write_metadata(metadata)
        object_hash = utils.hash_path(*object_parts)
        df.open()
        expected = df.get_metadata()
        self.sender.connection = FakeConnection()
        self.sender.job = {
            'device': device,
            'partition': part,
            'policy': POLICIES.legacy,
            'frag_index': 0,
        }
        self.sender.node = {}
        # receiver requested only meta
        self.sender.send_map = {object_hash: {'meta': True}}
        self.sender.send_delete = mock.MagicMock()
        self.sender.send_put = mock.MagicMock()
        self.sender.send_post = mock.MagicMock()
        self.sender.response = FakeResponse(
            chunk_body=(
                ':UPDATES: START\r\n'
                ':UPDATES: END\r\n'))
        self.sender.updates()
        self.assertEqual(self.sender.send_delete.mock_calls, [])
        self.assertEqual(self.sender.send_put.mock_calls, [])
        self.assertEqual(1, len(self.sender.send_post.mock_calls))
        args, _kwargs = self.sender.send_post.call_args
        path, df = args
        self.assertEqual(path, '/a/c/o')
        self.assertIsInstance(df, diskfile.DiskFile)
        self.assertEqual(expected, df.get_metadata())
        # note that the post line isn't actually sent since we mock send_post;
        # send_post is tested separately.
        self.assertEqual(
            ''.join(self.sender.connection.sent),
            '11\r\n:UPDATES: START\r\n\r\n'
            'f\r\n:UPDATES: END\r\n\r\n')

    def test_updates_put_and_post(self):
        ts_iter = make_timestamp_iter()
        device = 'dev'
        part = '9'
        object_parts = ('a', 'c', 'o')
        t1 = next(ts_iter)
        df = self._make_open_diskfile(
            device, part, *object_parts, timestamp=t1)
        t2 = next(ts_iter)
        metadata = {'X-Timestamp': t2.internal, 'X-Object-Meta-Fruit': 'kiwi'}
        df.write_metadata(metadata)
        object_hash = utils.hash_path(*object_parts)
        df.open()
        expected = df.get_metadata()
        self.sender.connection = FakeConnection()
        self.sender.job = {
            'device': device,
            'partition': part,
            'policy': POLICIES.legacy,
            'frag_index': 0,
        }
        self.sender.node = {}
        # receiver requested data and meta
        self.sender.send_map = {object_hash: {'meta': True, 'data': True}}
        self.sender.send_delete = mock.MagicMock()
        self.sender.send_put = mock.MagicMock()
        self.sender.send_post = mock.MagicMock()
        self.sender.response = FakeResponse(
            chunk_body=(
                ':UPDATES: START\r\n'
                ':UPDATES: END\r\n'))
        self.sender.updates()
        self.assertEqual(self.sender.send_delete.mock_calls, [])
        self.assertEqual(1, len(self.sender.send_put.mock_calls))
        self.assertEqual(1, len(self.sender.send_post.mock_calls))

        args, _kwargs = self.sender.send_put.call_args
        path, df = args
        self.assertEqual(path, '/a/c/o')
        self.assertIsInstance(df, diskfile.DiskFile)
        self.assertEqual(expected, df.get_metadata())

        args, _kwargs = self.sender.send_post.call_args
        path, df = args
        self.assertEqual(path, '/a/c/o')
        self.assertIsInstance(df, diskfile.DiskFile)
        self.assertEqual(expected, df.get_metadata())
        self.assertEqual(
            ''.join(self.sender.connection.sent),
            '11\r\n:UPDATES: START\r\n\r\n'
            'f\r\n:UPDATES: END\r\n\r\n')
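
    # Taken together, test_updates_put, test_updates_post and
    # test_updates_put_and_post pin down how updates() dispatches on the
    # receiver's send_map entries: wanting 'data' yields a send_put,
    # wanting 'meta' yields a send_post, and wanting both yields one of
    # each. The sketch below restates that mapping; it is an illustrative
    # addition (the helper is hypothetical, not ssync_sender code), and
    # the PUT-then-POST ordering is assumed here rather than asserted by
    # the tests above:
    def test_updates_dispatch_sketch(self):
        def expected_calls(want):
            calls = []
            if want.get('data'):
                calls.append('send_put')
            if want.get('meta'):
                calls.append('send_post')
            return calls
        self.assertEqual(['send_put'], expected_calls({'data': True}))
        self.assertEqual(['send_post'], expected_calls({'meta': True}))
        self.assertEqual(['send_put', 'send_post'],
                         expected_calls({'data': True, 'meta': True}))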

    def test_updates_storage_policy_index(self):
        device = 'dev'
        part = '9'
        object_parts = ('a', 'c', 'o')
        df = self._make_open_diskfile(device, part, *object_parts,
                                      policy=POLICIES[0])
        object_hash = utils.hash_path(*object_parts)
        expected = df.get_metadata()
        self.sender.connection = FakeConnection()
        self.sender.job = {
            'device': device,
            'partition': part,
            'policy': POLICIES[0],
            'frag_index': 0}
        self.sender.node = {}
        self.sender.send_map = {object_hash: {'data': True}}
        self.sender.send_delete = mock.MagicMock()
        self.sender.send_put = mock.MagicMock()
        self.sender.response = FakeResponse(
            chunk_body=(
                ':UPDATES: START\r\n'
                ':UPDATES: END\r\n'))
        self.sender.updates()
        args, _kwargs = self.sender.send_put.call_args
        path, df = args
        self.assertEqual(path, '/a/c/o')
        self.assertIsInstance(df, diskfile.DiskFile)
        self.assertEqual(expected, df.get_metadata())
        self.assertEqual(os.path.join(self.testdir, 'dev/objects/9/',
                                      object_hash[-3:], object_hash),
                         df._datadir)
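
    # The _datadir assertion above encodes the on-disk layout for the
    # legacy (index 0) policy: <devices>/<device>/objects/<partition>/
    # <last three characters of the name hash>/<name hash>. Below is a
    # minimal sketch of that construction, added for illustration only
    # (diskfile owns the real path logic; we believe non-zero policies
    # use 'objects-<index>' datadirs, but that is not exercised here):
    def test_updates_datadir_layout_sketch(self):
        object_hash = utils.hash_path('a', 'c', 'o')
        suffix = object_hash[-3:]
        datadir = os.path.join(
            self.testdir, 'dev', 'objects', '9', suffix, object_hash)
        # the suffix dir is just the tail of the hash, so an object's
        # datadir is derived from (device, partition, name) alone
        self.assertTrue(datadir.endswith(
            '/'.join(['objects', '9', suffix, object_hash])))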

    def test_updates_read_response_timeout_start(self):
        self.sender.connection = FakeConnection()
        self.sender.send_map = {}
        self.sender.response = FakeResponse(
            chunk_body=(
                ':UPDATES: START\r\n'
                ':UPDATES: END\r\n'))
        orig_readline = self.sender.readline

        def delayed_readline():
            eventlet.sleep(1)
            return orig_readline()

        self.sender.readline = delayed_readline
        self.sender.daemon.http_timeout = 0.01
        self.assertRaises(exceptions.MessageTimeout, self.sender.updates)

    def test_updates_read_response_disconnect_start(self):
        self.sender.connection = FakeConnection()
        self.sender.send_map = {}
        self.sender.response = FakeResponse(chunk_body='\r\n')
        exc = None
        try:
            self.sender.updates()
        except exceptions.ReplicationException as err:
            exc = err
        self.assertEqual(str(exc), 'Early disconnect')
        self.assertEqual(
            ''.join(self.sender.connection.sent),
            '11\r\n:UPDATES: START\r\n\r\n'
            'f\r\n:UPDATES: END\r\n\r\n')

    def test_updates_read_response_unexp_start(self):
        self.sender.connection = FakeConnection()
        self.sender.send_map = {}
        self.sender.response = FakeResponse(
            chunk_body=(
                'anything else\r\n'
                ':UPDATES: START\r\n'
                ':UPDATES: END\r\n'))
        exc = None
        try:
            self.sender.updates()
        except exceptions.ReplicationException as err:
            exc = err
        self.assertEqual(str(exc), "Unexpected response: 'anything else'")
        self.assertEqual(
            ''.join(self.sender.connection.sent),
            '11\r\n:UPDATES: START\r\n\r\n'
            'f\r\n:UPDATES: END\r\n\r\n')

    def test_updates_read_response_timeout_end(self):
        self.sender.connection = FakeConnection()
        self.sender.send_map = {}
        self.sender.response = FakeResponse(
            chunk_body=(
                ':UPDATES: START\r\n'
                ':UPDATES: END\r\n'))
        orig_readline = self.sender.readline

        def delayed_readline():
            rv = orig_readline()
            if rv == ':UPDATES: END\r\n':
                eventlet.sleep(1)
            return rv

        self.sender.readline = delayed_readline
        self.sender.daemon.http_timeout = 0.01
        self.assertRaises(exceptions.MessageTimeout, self.sender.updates)
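
    # Both timeout tests use the same recipe: wrap readline so it stalls
    # longer than the daemon's http_timeout -- before the first line in
    # timeout_start, after the END line in timeout_end -- then assert
    # that updates() raises MessageTimeout. A reusable form of the second
    # variant might look like this illustrative helper (our addition; the
    # original tests inline the wrapper):
    def _stall_readline_after(self, trigger_line, delay=1):
        orig_readline = self.sender.readline

        def delayed_readline():
            rv = orig_readline()
            if rv == trigger_line:
                eventlet.sleep(delay)
            return rv

        self.sender.readline = delayed_readline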

    def test_updates_read_response_disconnect_end(self):
        self.sender.connection = FakeConnection()
        self.sender.send_map = {}
        self.sender.response = FakeResponse(
            chunk_body=(
                ':UPDATES: START\r\n'
                '\r\n'))
        exc = None
        try:
            self.sender.updates()
        except exceptions.ReplicationException as err:
            exc = err
        self.assertEqual(str(exc), 'Early disconnect')
        self.assertEqual(
            ''.join(self.sender.connection.sent),
            '11\r\n:UPDATES: START\r\n\r\n'
            'f\r\n:UPDATES: END\r\n\r\n')
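        # The received body ends with a bare '\r\n' instead of the
        # ':UPDATES: END' marker, which updates() reports as an early
        # disconnect. On the sent side the transcript uses HTTP chunked
        # framing, each message preceded by its hex byte length, e.g.
        # len(':UPDATES: START\r\n') == 17 == 0x11 and
        # len(':UPDATES: END\r\n') == 15 == 0xf.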

    def test_updates_read_response_unexp_end(self):
        self.sender.connection = FakeConnection()
        self.sender.send_map = {}
        self.sender.response = FakeResponse(
            chunk_body=(
                ':UPDATES: START\r\n'
                'anything else\r\n'
                ':UPDATES: END\r\n'))
        exc = None
        try:
            self.sender.updates()
        except exceptions.ReplicationException as err:
            exc = err
        self.assertEqual(str(exc), "Unexpected response: 'anything else'")
        self.assertEqual(
            ''.join(self.sender.connection.sent),
            '11\r\n:UPDATES: START\r\n\r\n'
            'f\r\n:UPDATES: END\r\n\r\n')

    def test_send_delete_timeout(self):
        self.sender.connection = FakeConnection()
        self.sender.connection.send = lambda d: eventlet.sleep(1)
        self.sender.daemon.node_timeout = 0.01
        exc = None
        try:
            self.sender.send_delete('/a/c/o',
                                    utils.Timestamp('1381679759.90941'))
        except exceptions.MessageTimeout as err:
            exc = err
        self.assertEqual(str(exc), '0.01 seconds: send_delete')
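        # MessageTimeout stringifies as '<timeout> seconds: <label>', so
        # the label pinpoints which send step timed out.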

    def test_send_delete(self):
        self.sender.connection = FakeConnection()
        self.sender.send_delete('/a/c/o',
                                utils.Timestamp('1381679759.90941'))
        self.assertEqual(
            ''.join(self.sender.connection.sent),
            '30\r\n'
            'DELETE /a/c/o\r\n'
            'X-Timestamp: 1381679759.90941\r\n'
            '\r\n\r\n')
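        # Framing check (a sketch, not asserted separately): '30' is the
        # hex length of the chunk payload that follows it:
        #
        #   payload = ('DELETE /a/c/o\r\n'
        #              'X-Timestamp: 1381679759.90941\r\n'
        #              '\r\n')
        #   '%x' % len(payload)  # => '30' (48 bytes)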

    def test_send_put_initial_timeout(self):
        df = self._make_open_diskfile()
        df._disk_chunk_size = 2
        self.sender.connection = FakeConnection()
        self.sender.connection.send = lambda d: eventlet.sleep(1)
        self.sender.daemon.node_timeout = 0.01
        exc = None
        try:
            self.sender.send_put('/a/c/o', df)
        except exceptions.MessageTimeout as err:
            exc = err
        self.assertEqual(str(exc), '0.01 seconds: send_put')
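        # Every send() blocks here, so the very first write of the PUT
        # subrequest trips node_timeout and the label is plain 'send_put';
        # the per-chunk variant is exercised in the next test.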

    def test_send_put_chunk_timeout(self):
        df = self._make_open_diskfile()
        self.sender.connection = FakeConnection()
        self.sender.daemon.node_timeout = 0.01

        one_shot = [None]

        def mock_send(data):
            try:
                one_shot.pop()
            except IndexError:
                eventlet.sleep(1)

        self.sender.connection.send = mock_send
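        # one_shot lets exactly one send() through (popping its single
        # sentinel), so the initial header send returns immediately and
        # every later chunk send sleeps past node_timeout; that is why the
        # expected failure below is labelled 'send_put chunk' rather than
        # 'send_put'.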
        exc = None
        try:
            self.sender.send_put('/a/c/o', df)
        except exceptions.MessageTimeout as err:
            exc = err
        self.assertEqual(str(exc), '0.01 seconds: send_put chunk')

    def test_send_put(self):
        ts_iter = make_timestamp_iter()
        t1 = next(ts_iter)
        body = 'test'
        extra_metadata = {'Some-Other-Header': 'value'}
        df = self._make_open_diskfile(body=body, timestamp=t1,
                                      extra_metadata=extra_metadata)
        expected = dict(df.get_metadata())
        expected['body'] = body
        expected['chunk_size'] = len(body)
        # .meta file metadata is not included in expected for data only PUT
        t2 = next(ts_iter)
        metadata = {'X-Timestamp': t2.internal, 'X-Object-Meta-Fruit': 'kiwi'}
        df.write_metadata(metadata)
        df.open()
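        # 'expected' carries the diskfile's own metadata plus the body and
        # its chunk size for %-substitution into the transcript below; the
        # newer .meta values (t2, the Fruit header) must not appear in a
        # data-only PUT.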
        self.sender.connection = FakeConnection()
        self.sender.send_put('/a/c/o', df)
        self.assertEqual(
            ''.join(self.sender.connection.sent),
            '82\r\n'
            'PUT /a/c/o\r\n'
            'Content-Length: %(Content-Length)s\r\n'
            'ETag: %(ETag)s\r\n'
            'Some-Other-Header: value\r\n'
            'X-Timestamp: %(X-Timestamp)s\r\n'
            '\r\n'
            '\r\n'
            '%(chunk_size)s\r\n'
            '%(body)s\r\n' % expected)
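        # Transcript shape (a sketch): the PUT subrequest headers are sent
        # as a single chunk whose hex length here is '82' (130 bytes once
        # the Content-Length, ETag and X-Timestamp values are substituted),
        # followed by the body framed the same way, '%x\r\n<data>\r\n',
        # here one 4-byte chunk carrying 'test'.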

    def test_send_post(self):
        ts_iter = make_timestamp_iter()
        # create .data file
        extra_metadata = {'X-Object-Meta-Foo': 'old_value',
                          'X-Object-Sysmeta-Test': 'test_sysmeta',
                          'Content-Type': 'test_content_type'}
        ts_0 = next(ts_iter)
        df = self._make_open_diskfile(extra_metadata=extra_metadata,
                                      timestamp=ts_0)
        # create .meta file
        ts_1 = next(ts_iter)
        newer_metadata = {'X-Object-Meta-Foo': 'new_value',
                          'X-Timestamp': ts_1.internal}
        df.write_metadata(newer_metadata)

        self.sender.connection = FakeConnection()
        with df.open():
            self.sender.send_post('/a/c/o', df)
        self.assertEqual(
            ''.join(self.sender.connection.sent),
            '4c\r\n'
            'POST /a/c/o\r\n'
            'X-Object-Meta-Foo: new_value\r\n'
            'X-Timestamp: %s\r\n'
            '\r\n'
            '\r\n' % ts_1.internal)
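        # Framing note (a sketch, not asserted separately): only headers
        # from the newer .meta file travel with the POST, and '4c' is the
        # hex length of the substituted payload:
        #   13 ('POST /a/c/o\r\n') + 30 ('X-Object-Meta-Foo: new_value\r\n')
        #   + 31 ('X-Timestamp: <16-char internal>\r\n') + 2 ('\r\n')
        #   == 76 == 0x4c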

    def test_disconnect_timeout(self):
        self.sender.connection = FakeConnection()
        self.sender.connection.send = lambda d: eventlet.sleep(1)
        self.sender.daemon.node_timeout = 0.01
        self.sender.disconnect()
        self.assertEqual(''.join(self.sender.connection.sent), '')
        self.assertTrue(self.sender.connection.closed)

    def test_disconnect(self):
        self.sender.connection = FakeConnection()
        self.sender.disconnect()
        self.assertEqual(''.join(self.sender.connection.sent), '0\r\n\r\n')
        self.assertTrue(self.sender.connection.closed)
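        # '0\r\n\r\n' is the standard zero-length terminating chunk of HTTP
        # chunked transfer encoding. In the timeout case above, the blocked
        # send() is abandoned before anything goes out, but the connection
        # is closed either way.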


class TestModuleMethods(unittest.TestCase):
    def test_encode_missing(self):
        object_hash = '9d41d8cd98f00b204e9800998ecf0abc'
        ts_iter = make_timestamp_iter()
        t_data = next(ts_iter)
        t_meta = next(ts_iter)
        d_meta_data = t_meta.raw - t_data.raw

        # equal data and meta timestamps -> legacy single timestamp string
        expected = '%s %s' % (object_hash, t_data.internal)
        self.assertEqual(
            expected,
            ssync_sender.encode_missing(object_hash, t_data, ts_meta=t_data))

        # newer meta timestamp -> hex data delta encoded as extra message part
        expected = '%s %s m:%x' % (object_hash, t_data.internal, d_meta_data)
        self.assertEqual(
            expected,
            ssync_sender.encode_missing(object_hash, t_data, ts_meta=t_meta))

        # test encode and decode functions invert
        expected = {'object_hash': object_hash, 'ts_meta': t_meta,
                    'ts_data': t_data}
        msg = ssync_sender.encode_missing(**expected)
        actual = ssync_receiver.decode_missing(msg)
        self.assertEqual(expected, actual)

    def test_decode_wanted(self):
        parts = ['d']
        expected = {'data': True}
        self.assertEqual(ssync_sender.decode_wanted(parts), expected)

        parts = ['m']
        expected = {'meta': True}
        self.assertEqual(ssync_sender.decode_wanted(parts), expected)

        parts = ['dm']
        expected = {'data': True, 'meta': True}
        self.assertEqual(ssync_sender.decode_wanted(parts), expected)

        # you don't really expect these next few...
        parts = ['md']
        expected = {'data': True, 'meta': True}
        self.assertEqual(ssync_sender.decode_wanted(parts), expected)

        parts = ['xcy', 'funny', {'business': True}]
        expected = {'data': True}
        self.assertEqual(ssync_sender.decode_wanted(parts), expected)
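

# Wire format recap (a sketch inferred from the tests above): a missing_check
# line is '<object_hash> <ts_data>' when the data and meta timestamps match,
# gaining an ' m:<hex offset>' part when a newer .meta exists; the receiver's
# reply parts are decoded with 'd' wanting data and 'm' wanting meta, and
# anything unrecognized falls back to wanting the data, for legacy peers.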


if __name__ == '__main__':
    unittest.main()