Allow configuration of a back end specific availability zone
"storage_availability_zone" in the [DEFAULT] section of manila's configuration
file has allowed deployers to configure and manage both service (scheduler,
share manager) and storage system availability. However, manila's services
(api, scheduler, share and data managers) are often run on a dedicated control
plane that is a different failure domain from that of the storage that manila
manages. Also, when using share replication, deployers would need to run
multiple manila share manager services with different configuration files,
each with their own "storage_availability_zone".

To allow separating service and storage availability zones, we introduce a
new configuration option, "backend_availability_zone", within the share
driver/backend section. When set, this option overrides the value of
"storage_availability_zone" from the [DEFAULT] section.

Change-Id: Ice99a880dd7be7af94dea86b31a6db88be3d7d9b
Implements: bp per-backend-availability-zones
parent 8f88779778
commit a75fe3d7cc
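As a rough illustration of the new option, a minimal multi-backend ``manila.conf`` might look like the sketch below. The section names, AZ names, and driver path are illustrative, not taken from this change:

```ini
[DEFAULT]
# Fallback AZ for any backend that does not set its own:
storage_availability_zone = manila-zone-0
enabled_share_backends = zfsonlinux1,zfsonlinux2

[zfsonlinux1]
share_driver = manila.share.drivers.zfsonlinux.driver.ZFSonLinuxShareDriver
# Overrides storage_availability_zone for this backend only:
backend_availability_zone = availability_zone_1

[zfsonlinux2]
share_driver = manila.share.drivers.zfsonlinux.driver.ZFSonLinuxShareDriver
backend_availability_zone = availability_zone_2
```

With this layout, one share manager process can serve backends in two different availability zones instead of requiring one configuration file per AZ.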
@@ -122,7 +122,7 @@ The following examples have been implemented with the ZFSonLinux driver that
 is a reference implementation in the Shared File Systems service. It operates
 in ``driver_handles_share_servers=False`` mode and supports the ``readable``
 type of replication. In the example, we assume a configuration of two
-Availability Zones (configuration option: ``storage_availability_zone``),
+Availability Zones [1]_,
 called `availability_zone_1` and `availability_zone_2`.

 Multiple availability zones are not necessary to use the replication feature.
@@ -598,3 +598,14 @@ replica's ID to delete a share replica.
 .. note::
    You cannot delete the last ``active`` replica with this command. You should
    use the :command:`manila delete` command to remove the share.
+
+
+.. [1] When running in a multi-backend configuration, until the Stein
+   release, deployers could only configure one Availability Zone per manila
+   configuration file. This is achieved with the option
+   ``storage_availability_zone`` defined under the ``[DEFAULT]`` section.
+
+   Beyond the Stein release, the option ``backend_availability_zone``
+   can be specified in each back end stanza. The value of this
+   configuration option will override any configuration of the
+   ``storage_availability_zone`` from the ``[DEFAULT]`` section.
@@ -56,8 +56,9 @@ pools, manila would allow for replication between two pools on the same
 backend.

 The ``replication_domain`` option is meant to be used in conjunction with the
-``storage_availability_zone`` option to utilize this solution for Data
-Protection/Disaster Recovery.
+``storage_availability_zone`` (or back end specific
+``backend_availability_zone``) option to utilize
+this solution for Data Protection/Disaster Recovery.


 Replication types
@@ -91,6 +91,7 @@ class Manager(base.Base, PeriodicTasks):
             host = CONF.host
         self.host = host
         self.additional_endpoints = []
+        self.availability_zone = CONF.storage_availability_zone
         super(Manager, self).__init__(db_driver)

     def periodic_tasks(self, context, raise_on_error=False):
@@ -92,6 +92,7 @@ class Service(service.Service):
         self.manager = manager_class(host=self.host,
                                      service_name=service_name,
                                      *args, **kwargs)
+        self.availability_zone = self.manager.availability_zone
         self.report_interval = report_interval
         self.periodic_interval = periodic_interval
         self.periodic_fuzzy_delay = periodic_fuzzy_delay
@@ -145,13 +146,14 @@ class Service(service.Service):
             self.timers.append(periodic)

     def _create_service_ref(self, context):
-        zone = CONF.storage_availability_zone
-        service_ref = db.service_create(context,
-                                        {'host': self.host,
-                                         'binary': self.binary,
-                                         'topic': self.topic,
-                                         'report_count': 0,
-                                         'availability_zone': zone})
+        service_args = {
+            'host': self.host,
+            'binary': self.binary,
+            'topic': self.topic,
+            'report_count': 0,
+            'availability_zone': self.availability_zone
+        }
+        service_ref = db.service_create(context, service_args)
         self.service_id = service_ref['id']

     def __getattr__(self, key):
@@ -244,7 +246,6 @@ class Service(service.Service):
     def report_state(self):
         """Update the state of this service in the datastore."""
         ctxt = context.get_admin_context()
-        zone = CONF.storage_availability_zone
         state_catalog = {}
         try:
             try:
@@ -256,8 +257,9 @@ class Service(service.Service):
                 service_ref = db.service_get(ctxt, self.service_id)

             state_catalog['report_count'] = service_ref['report_count'] + 1
-            if zone != service_ref['availability_zone']['name']:
-                state_catalog['availability_zone'] = zone
+            if (self.availability_zone !=
+                    service_ref['availability_zone']['name']):
+                state_catalog['availability_zone'] = self.availability_zone

             db.service_update(ctxt,
                               self.service_id, state_catalog)
@@ -126,6 +126,11 @@ share_opts = [
                     "replication between each other. If this option is not "
                     "specified in the group, it means that replication is not "
                     "enabled on the backend."),
+    cfg.StrOpt('backend_availability_zone',
+               default=None,
+               help='Availability zone for this share backend. If not set, '
+                    'the ``storage_availability_zone`` option from the '
+                    '``[DEFAULT]`` section is used.'),
     cfg.StrOpt('filter_function',
                help='String representation for an equation that will be '
                     'used to filter hosts.'),
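The fallback behavior this option introduces (back-end value wins, ``[DEFAULT]`` value otherwise) can be sketched in plain Python. ``FakeConfiguration`` and ``effective_az`` are illustrative stand-ins, not manila classes; ``safe_get`` here only mimics the real config object's behavior of returning ``None`` for an unset option:

```python
class FakeConfiguration:
    """Hypothetical stand-in for a per-backend configuration object."""

    def __init__(self, **opts):
        self._opts = opts

    def safe_get(self, key):
        # Mimics oslo-style safe_get: None when the option is unset.
        return self._opts.get(key)


# Stands in for [DEFAULT] storage_availability_zone:
DEFAULT_STORAGE_AZ = "manila-zone-0"


def effective_az(configuration):
    # Back-end-specific value wins; otherwise fall back to [DEFAULT].
    return configuration.safe_get("backend_availability_zone") or DEFAULT_STORAGE_AZ


print(effective_az(FakeConfiguration()))  # -> manila-zone-0
print(effective_az(FakeConfiguration(backend_availability_zone="az-east")))  # -> az-east
```

This is the same ``x or default`` precedence the hunks below apply in the service-instance and share managers.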
@@ -219,6 +219,9 @@ class ServiceInstanceManager(object):
         self.max_time_to_build_instance = self.get_config_option(
             "max_time_to_build_instance")

+        self.availability_zone = self.get_config_option(
+            'backend_availability_zone') or CONF.storage_availability_zone
+
         if self.get_config_option("driver_handles_share_servers"):
             self.path_to_public_key = self.get_config_option(
                 "path_to_public_key")
@@ -564,7 +567,7 @@ class ServiceInstanceManager(object):
             flavor=self.get_config_option("service_instance_flavor_id"),
             key_name=key_name,
             nics=network_data['nics'],
-            availability_zone=CONF.storage_availability_zone,
+            availability_zone=self.availability_zone,
             **create_kwargs)

         fail_safe_data['instance_id'] = service_instance['id']
@@ -239,6 +239,12 @@ class ShareManager(manager.SchedulerDependentManager):
             configuration=self.configuration,
         )

+        backend_availability_zone = self.driver.configuration.safe_get(
+            'backend_availability_zone')
+        self.availability_zone = (
+            backend_availability_zone or CONF.storage_availability_zone
+        )
+
         self.access_helper = access.ShareInstanceAccess(self.db, self.driver)
         self.snapshot_access_helper = (
             snapshot_access.ShareSnapshotInstanceAccess(self.db, self.driver))
@@ -1659,7 +1665,7 @@ class ShareManager(manager.SchedulerDependentManager):
         if not share_instance['availability_zone']:
             share_instance = self.db.share_instance_update(
                 context, share_instance_id,
-                {'availability_zone': CONF.storage_availability_zone},
+                {'availability_zone': self.availability_zone},
                 with_share_data=True
             )

@@ -1834,7 +1840,7 @@ class ShareManager(manager.SchedulerDependentManager):
         if not share_replica['availability_zone']:
             share_replica = self.db.share_replica_update(
                 context, share_replica['id'],
-                {'availability_zone': CONF.storage_availability_zone},
+                {'availability_zone': self.availability_zone},
                 with_share_data=True
             )

@@ -2400,7 +2406,7 @@ class ShareManager(manager.SchedulerDependentManager):
         share_update.update({
             'status': constants.STATUS_AVAILABLE,
             'launched_at': timeutils.utcnow(),
-            'availability_zone': CONF.storage_availability_zone,
+            'availability_zone': self.availability_zone,
         })

         # If the share was managed with `replication_type` extra-spec, the
@@ -3865,7 +3871,7 @@ class ShareManager(manager.SchedulerDependentManager):
     def _get_az_for_share_group(self, context, share_group_ref):
         if not share_group_ref['availability_zone_id']:
             return self.db.availability_zone_get(
-                context, CONF.storage_availability_zone)['id']
+                context, self.availability_zone)['id']
         return share_group_ref['availability_zone_id']

     @utils.require_driver_initialized
@@ -75,6 +75,8 @@ def fake_get_config_option(key):
         return None
     elif key == 'admin_subnet_id':
         return None
+    elif key == 'backend_availability_zone':
+        return None
     else:
         return mock.Mock()

@@ -990,7 +990,7 @@ class ShareManagerTestCase(test.TestCase):
         replica_2 = fake_replica(id='fake2')
         self.mock_object(db, 'share_replicas_get_all_by_share',
                          mock.Mock(return_value=[replica, replica_2]))
-        manager.CONF.set_default('storage_availability_zone', 'fake_az')
+        self.share_manager.availability_zone = 'fake_az'
         fake_access_rules = [{'id': '1'}, {'id': '2'}, {'id': '3'}]
         self.mock_object(db, 'share_replica_get',
                          mock.Mock(return_value=replica))
releasenotes/notes/per-backend-az-590c68be0e2cb4bd.yaml (new file, 14 lines)
@@ -0,0 +1,14 @@
+---
+features:
+  - |
+    Availability zones may now be configured per backend in a multi-backend
+    configuration. Individual back end sections can now have the configuration
+    option ``backend_availability_zone`` set. If set, this value will override
+    the ``storage_availability_zone`` option from the [DEFAULT] section.
+upgrade:
+  - The ``storage_availability_zone`` option can now be overridden per
+    backend by using the ``backend_availability_zone`` option within the
+    backend stanza. This allows enabling multiple storage backends that may
+    be deployed in different AZs in the same ``manila.conf`` file if
+    desired, simplifying service architecture around the Share Replication
+    feature.