Optional separate database for placement API
If 'connection' is set in the 'placement_database' conf group, use that as
the connection URL for the placement database. Otherwise, if it is None (the
default), use the entire api_database conf group to configure a database
connection. When placement_database.connection is not None, a replica of the
structure of the API database is used, created with the same migrations
used for the API database.

A placement_context_manager is added and used by the OVO objects in
nova.api.openstack.placement.objects.*. If there is no separate placement
database, this is still used, but points to the API database.

nova.test and nova.test.fixtures are adjusted to add awareness of the
placement database.

This functionality is being provided to allow deployers to choose between
establishing a new database now or requiring a migration later. The default
is migration later. A reno is added to explain the existence of the
configuration setting.

This change returns the behavior removed by the revert in commit
39fb302fd9c8fc57d3e4bea1c60a02ad5067163f, but does so in a more appropriate
way.

Note that with the advent of the nova-status command, which checks to see
whether placement is "ready", the tests here had to be adjusted. If a
separate database is configured, the code will now check that database, but
nothing is done with regard to migrating data from the api database to the
placement database, or checking that such a migration has happened.

blueprint placement-extract

Change-Id: I7e1e89cd66397883453935dcf7172d977bf82e84
Implements: blueprint optional-placement-database
Co-Authored-By: Roman Podoliaka <rpodolyaka@mirantis.com>
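The fallback rule described above (fall back to the api_database settings whenever placement_database.connection is unset) can be sketched in isolation. This is a hypothetical stand-in using plain dicts; the helper name and dict-based conf are illustrative, not nova's oslo.config-based implementation:

```python
# Hypothetical sketch of the configuration fallback described in the
# commit message: if [placement_database]/connection is unset, the
# placement database is configured from the [api_database] group.
def pick_placement_db_group(conf):
    """Return the conf group that should configure the placement DB."""
    placement = conf["placement_database"]
    if placement.get("connection") is None:
        # No separate placement DB configured: reuse the API DB settings.
        return conf["api_database"]
    return placement

conf = {
    "api_database": {"connection": "mysql+pymysql://nova_api"},
    "placement_database": {"connection": None},  # the default
}
# Falls back to the API database settings.
print(pick_placement_db_group(conf)["connection"])
```

This mirrors the `if conf.placement_database.connection is None` branch added to `configure()` in the diff below.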
parent 567479ff07
commit 1429760d65
@@ -25,7 +25,7 @@ CONSUMER_TBL = models.Consumer.__table__
 _ALLOC_TBL = models.Allocation.__table__
 
 
-@db_api.api_context_manager.writer
+@db_api.placement_context_manager.writer
 def create_incomplete_consumers(ctx, batch_size):
     """Finds all the consumer records that are missing for allocations and
     creates consumer records for them, using the "incomplete consumer" project

@@ -62,7 +62,7 @@ def create_incomplete_consumers(ctx, batch_size):
     return res.rowcount, res.rowcount
 
 
-@db_api.api_context_manager.reader
+@db_api.placement_context_manager.reader
 def _get_consumer_by_uuid(ctx, uuid):
     # The SQL for this looks like the following:
     # SELECT

@@ -136,7 +136,7 @@ class Consumer(base.VersionedObject, base.TimestampedObject):
         return cls._from_db_object(ctx, cls(ctx), res)
 
     def create(self):
-        @db_api.api_context_manager.writer
+        @db_api.placement_context_manager.writer
         def _create_in_db(ctx):
             db_obj = models.Consumer(
                 uuid=self.uuid, project_id=self.project.id,
@@ -24,7 +24,7 @@ CONF = cfg.CONF
 PROJECT_TBL = models.Project.__table__
 
 
-@db_api.api_context_manager.writer
+@db_api.placement_context_manager.writer
 def ensure_incomplete_project(ctx):
     """Ensures that a project record is created for the "incomplete consumer
     project". Returns the internal ID of that record.

@@ -40,7 +40,7 @@ def ensure_incomplete_project(ctx):
     return res.inserted_primary_key[0]
 
 
-@db_api.api_context_manager.reader
+@db_api.placement_context_manager.reader
 def _get_project_by_external_id(ctx, external_id):
     projects = sa.alias(PROJECT_TBL, name="p")
     cols = [

@@ -81,7 +81,7 @@ class Project(base.VersionedObject):
         return cls._from_db_object(ctx, cls(ctx), res)
 
     def create(self):
-        @db_api.api_context_manager.writer
+        @db_api.placement_context_manager.writer
         def _create_in_db(ctx):
             db_obj = models.Project(external_id=self.external_id)
             try:
@@ -67,7 +67,7 @@ CONF = cfg.CONF
 LOG = logging.getLogger(__name__)
 
 
-@db_api.api_context_manager.reader
+@db_api.placement_context_manager.reader
 def _ensure_rc_cache(ctx):
     """Ensures that a singleton resource class cache has been created in the
     module's scope.

@@ -84,7 +84,7 @@ def _ensure_rc_cache(ctx):
 @oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True)
 # Bug #1760322: If the caller raises an exception, we don't want the trait
 # sync rolled back; so use an .independent transaction
-@db_api.api_context_manager.writer.independent
+@db_api.placement_context_manager.writer.independent
 def _trait_sync(ctx):
     """Sync the os_traits symbols to the database.

@@ -282,7 +282,7 @@ def _increment_provider_generation(ctx, rp):
     return new_generation
 
 
-@db_api.api_context_manager.writer
+@db_api.placement_context_manager.writer
 def _add_inventory(context, rp, inventory):
     """Add one Inventory that wasn't already on the provider.

@@ -297,7 +297,7 @@ def _add_inventory(context, rp, inventory):
     rp.generation = _increment_provider_generation(context, rp)
 
 
-@db_api.api_context_manager.writer
+@db_api.placement_context_manager.writer
 def _update_inventory(context, rp, inventory):
     """Update an inventory already on the provider.

@@ -313,7 +313,7 @@ def _update_inventory(context, rp, inventory):
     return exceeded
 
 
-@db_api.api_context_manager.writer
+@db_api.placement_context_manager.writer
 def _delete_inventory(context, rp, resource_class):
     """Delete up to one Inventory of the given resource_class string.

@@ -329,7 +329,7 @@ def _delete_inventory(context, rp, resource_class):
     rp.generation = _increment_provider_generation(context, rp)
 
 
-@db_api.api_context_manager.writer
+@db_api.placement_context_manager.writer
 def _set_inventory(context, rp, inv_list):
     """Given an InventoryList object, replaces the inventory of the
     resource provider in a safe, atomic fashion using the resource

@@ -385,7 +385,7 @@ def _set_inventory(context, rp, inv_list):
     return exceeded
 
 
-@db_api.api_context_manager.reader
+@db_api.placement_context_manager.reader
 def _get_provider_by_uuid(context, uuid):
     """Given a UUID, return a dict of information about the resource provider
     from the database.

@@ -419,7 +419,7 @@ def _get_provider_by_uuid(context, uuid):
     return dict(res)
 
 
-@db_api.api_context_manager.reader
+@db_api.placement_context_manager.reader
 def _get_aggregates_by_provider_id(context, rp_id):
     join_statement = sa.join(
         _AGG_TBL, _RP_AGG_TBL, sa.and_(

@@ -429,7 +429,7 @@ def _get_aggregates_by_provider_id(context, rp_id):
     return [r[0] for r in context.session.execute(sel).fetchall()]
 
 
-@db_api.api_context_manager.reader
+@db_api.placement_context_manager.reader
 def _anchors_for_sharing_providers(context, rp_ids, get_id=False):
     """Given a list of internal IDs of sharing providers, returns a set of
     tuples of (sharing provider UUID, anchor provider UUID), where each of

@@ -485,7 +485,7 @@ def _anchors_for_sharing_providers(context, rp_ids, get_id=False):
     return set([(r[0], r[1]) for r in context.session.execute(sel).fetchall()])
 
 
-@db_api.api_context_manager.writer
+@db_api.placement_context_manager.writer
 def _set_aggregates(context, resource_provider, provided_aggregates,
                     increment_generation=False):
     rp_id = resource_provider.id

@@ -550,7 +550,7 @@ def _set_aggregates(context, resource_provider, provided_aggregates,
             context, resource_provider)
 
 
-@db_api.api_context_manager.reader
+@db_api.placement_context_manager.reader
 def _get_traits_by_provider_id(context, rp_id):
     t = sa.alias(_TRAIT_TBL, name='t')
     rpt = sa.alias(_RP_TRAIT_TBL, name='rpt')

@@ -602,7 +602,7 @@ def _delete_traits_from_provider(ctx, rp_id, to_delete):
     ctx.session.execute(del_stmt)
 
 
-@db_api.api_context_manager.writer
+@db_api.placement_context_manager.writer
 def _set_traits(context, rp, traits):
     """Given a ResourceProvider object and a TraitList object, replaces the set
     of traits associated with the resource provider.

@@ -632,7 +632,7 @@ def _set_traits(context, rp, traits):
     rp.generation = _increment_provider_generation(context, rp)
 
 
-@db_api.api_context_manager.reader
+@db_api.placement_context_manager.reader
 def _has_child_providers(context, rp_id):
     """Returns True if the supplied resource provider has any child providers,
     False otherwise

@@ -645,7 +645,7 @@ def _has_child_providers(context, rp_id):
     return False
 
 
-@db_api.api_context_manager.writer
+@db_api.placement_context_manager.writer
 def _set_root_provider_id(context, rp_id, root_id):
     """Simply sets the root_provider_id value for a provider identified by
     rp_id. Used in online data migration.
@@ -783,7 +783,7 @@ def _provider_ids_matching_aggregates(context, member_of):
     return [r[0] for r in context.session.execute(sel).fetchall()]
 
 
-@db_api.api_context_manager.writer
+@db_api.placement_context_manager.writer
 def _delete_rp_record(context, _id):
     return context.session.query(models.ResourceProvider).\
         filter(models.ResourceProvider.id == _id).\

@@ -917,7 +917,7 @@ class ResourceProvider(base.VersionedObject, base.TimestampedObject):
         _set_traits(self._context, self, traits)
         self.obj_reset_changes()
 
-    @db_api.api_context_manager.writer
+    @db_api.placement_context_manager.writer
     def _create_in_db(self, context, updates):
         parent_id = None
         root_id = None

@@ -962,7 +962,7 @@ class ResourceProvider(base.VersionedObject, base.TimestampedObject):
             self.root_provider_uuid = self.uuid
 
     @staticmethod
-    @db_api.api_context_manager.writer
+    @db_api.placement_context_manager.writer
     def _delete(context, _id):
         # Do a quick check to see if the provider is a parent. If it is, don't
         # allow deleting the provider. Note that the foreign key constraint on

@@ -1007,7 +1007,7 @@ class ResourceProvider(base.VersionedObject, base.TimestampedObject):
         if not result:
             raise exception.NotFound()
 
-    @db_api.api_context_manager.writer
+    @db_api.placement_context_manager.writer
     def _update_in_db(self, context, id, updates):
         if 'parent_provider_uuid' in updates:
             # TODO(jaypipes): For now, "re-parenting" and "un-parenting" are

@@ -1060,7 +1060,7 @@ class ResourceProvider(base.VersionedObject, base.TimestampedObject):
                 reason=_('parent provider UUID does not exist.'))
 
     @staticmethod
-    @db_api.api_context_manager.writer  # Needed for online data migration
+    @db_api.placement_context_manager.writer  # For online data migration
     def _from_db_object(context, resource_provider, db_resource_provider):
         # Online data migration to populate root_provider_id
         # TODO(jaypipes): Remove when all root_provider_id values are NOT NULL

@@ -1076,7 +1076,7 @@ class ResourceProvider(base.VersionedObject, base.TimestampedObject):
         return resource_provider
 
 
-@db_api.api_context_manager.reader
+@db_api.placement_context_manager.reader
 def _get_providers_with_shared_capacity(ctx, rc_id, amount, member_of=None):
     """Returns a list of resource provider IDs (internal IDs, not UUIDs)
     that have capacity for a requested amount of a resource and indicate that

@@ -1220,7 +1220,7 @@ class ResourceProviderList(base.ObjectListBase, base.VersionedObject):
     }
 
     @staticmethod
-    @db_api.api_context_manager.reader
+    @db_api.placement_context_manager.reader
     def _get_all_by_filters_from_db(context, filters):
         # Eg. filters can be:
         # filters = {

@@ -1469,7 +1469,7 @@ class Inventory(base.VersionedObject, base.TimestampedObject):
         return int((self.total - self.reserved) * self.allocation_ratio)
 
 
-@db_api.api_context_manager.reader
+@db_api.placement_context_manager.reader
 def _get_inventory_by_provider_id(ctx, rp_id):
     inv = sa.alias(_INV_TBL, name="i")
     cols = [

@@ -1545,7 +1545,7 @@ class Allocation(base.VersionedObject, base.TimestampedObject):
     }
 
 
-@db_api.api_context_manager.writer
+@db_api.placement_context_manager.writer
 def _delete_allocations_for_consumer(ctx, consumer_id):
     """Deletes any existing allocations that correspond to the allocations to
     be written. This is wrapped in a transaction, so if the write subsequently

@@ -1715,7 +1715,7 @@ def _check_capacity_exceeded(ctx, allocs):
     return res_providers
 
 
-@db_api.api_context_manager.reader
+@db_api.placement_context_manager.reader
 def _get_allocations_by_provider_id(ctx, rp_id):
     allocs = sa.alias(_ALLOC_TBL, name="a")
     consumers = sa.alias(_CONSUMER_TBL, name="c")

@@ -1748,7 +1748,7 @@ def _get_allocations_by_provider_id(ctx, rp_id):
     return [dict(r) for r in ctx.session.execute(sel)]
 
 
-@db_api.api_context_manager.reader
+@db_api.placement_context_manager.reader
 def _get_allocations_by_consumer_uuid(ctx, consumer_uuid):
     allocs = sa.alias(_ALLOC_TBL, name="a")
     rp = sa.alias(_RP_TBL, name="rp")

@@ -1786,7 +1786,7 @@ def _get_allocations_by_consumer_uuid(ctx, consumer_uuid):
     return [dict(r) for r in ctx.session.execute(sel)]
 
 
-@db_api.api_context_manager.writer.independent
+@db_api.placement_context_manager.writer.independent
 def _create_incomplete_consumers_for_provider(ctx, rp_id):
     # TODO(jaypipes): Remove in Stein after a blocker migration is added.
     """Creates consumer record if consumer relationship between allocations ->

@@ -1831,7 +1831,7 @@ def _create_incomplete_consumers_for_provider(ctx, rp_id):
               res.rowcount)
 
 
-@db_api.api_context_manager.writer.independent
+@db_api.placement_context_manager.writer.independent
 def _create_incomplete_consumer(ctx, consumer_id):
     # TODO(jaypipes): Remove in Stein after a blocker migration is added.
     """Creates consumer record if consumer relationship between allocations ->

@@ -1870,7 +1870,7 @@ class AllocationList(base.ObjectListBase, base.VersionedObject):
     }
 
     @oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True)
-    @db_api.api_context_manager.writer
+    @db_api.placement_context_manager.writer
     def _set_allocations(self, context, allocs):
         """Write a set of allocations.

@@ -2082,7 +2082,7 @@ class UsageList(base.ObjectListBase, base.VersionedObject):
     }
 
     @staticmethod
-    @db_api.api_context_manager.reader
+    @db_api.placement_context_manager.reader
     def _get_all_by_resource_provider_uuid(context, rp_uuid):
         query = (context.session.query(models.Inventory.resource_class_id,
                  func.coalesce(func.sum(models.Allocation.used), 0))

@@ -2101,7 +2101,7 @@ class UsageList(base.ObjectListBase, base.VersionedObject):
         return result
 
     @staticmethod
-    @db_api.api_context_manager.reader
+    @db_api.placement_context_manager.reader
     def _get_all_by_project_user(context, project_id, user_id=None):
         query = (context.session.query(models.Allocation.resource_class_id,
                  func.coalesce(func.sum(models.Allocation.used), 0))

@@ -2181,7 +2181,7 @@ class ResourceClass(base.VersionedObject, base.TimestampedObject):
         return obj
 
     @staticmethod
-    @db_api.api_context_manager.reader
+    @db_api.placement_context_manager.reader
     def _get_next_id(context):
         """Utility method to grab the next resource class identifier to use for
         user-defined resource classes.

@@ -2240,7 +2240,7 @@ class ResourceClass(base.VersionedObject, base.TimestampedObject):
             raise exception.MaxDBRetriesExceeded(action=msg)
 
     @staticmethod
-    @db_api.api_context_manager.writer
+    @db_api.placement_context_manager.writer
     def _create_in_db(context, updates):
         next_id = ResourceClass._get_next_id(context)
         rc = models.ResourceClass()

@@ -2264,7 +2264,7 @@ class ResourceClass(base.VersionedObject, base.TimestampedObject):
         _RC_CACHE.clear()
 
     @staticmethod
-    @db_api.api_context_manager.writer
+    @db_api.placement_context_manager.writer
     def _destroy(context, _id, name):
         # Don't delete the resource class if it is referred to in the
         # inventories table.

@@ -2293,7 +2293,7 @@ class ResourceClass(base.VersionedObject, base.TimestampedObject):
         _RC_CACHE.clear()
 
     @staticmethod
-    @db_api.api_context_manager.writer
+    @db_api.placement_context_manager.writer
     def _save(context, id, name, updates):
         db_rc = context.session.query(models.ResourceClass).filter_by(
             id=id).first()

@@ -2312,7 +2312,7 @@ class ResourceClassList(base.ObjectListBase, base.VersionedObject):
     }
 
     @staticmethod
-    @db_api.api_context_manager.reader
+    @db_api.placement_context_manager.reader
     def _get_all(context):
         _ensure_rc_cache(context)
         customs = list(context.session.query(models.ResourceClass).all())
@@ -2349,7 +2349,7 @@ class Trait(base.VersionedObject, base.TimestampedObject):
         return trait
 
     @staticmethod
-    @db_api.api_context_manager.writer
+    @db_api.placement_context_manager.writer
     def _create_in_db(context, updates):
         trait = models.Trait()
         trait.update(updates)

@@ -2374,7 +2374,7 @@ class Trait(base.VersionedObject, base.TimestampedObject):
         self._from_db_object(self._context, self, db_trait)
 
     @staticmethod
-    @db_api.api_context_manager.writer  # trait sync can cause a write
+    @db_api.placement_context_manager.writer  # trait sync can cause a write
     def _get_by_name_from_db(context, name):
         _ensure_trait_sync(context)
         result = context.session.query(models.Trait).filter_by(

@@ -2389,7 +2389,7 @@ class Trait(base.VersionedObject, base.TimestampedObject):
         return cls._from_db_object(context, cls(), db_trait)
 
     @staticmethod
-    @db_api.api_context_manager.writer
+    @db_api.placement_context_manager.writer
     def _destroy_in_db(context, _id, name):
         num = context.session.query(models.ResourceProviderTrait).filter(
             models.ResourceProviderTrait.trait_id == _id).count()

@@ -2424,7 +2424,7 @@ class TraitList(base.ObjectListBase, base.VersionedObject):
     }
 
     @staticmethod
-    @db_api.api_context_manager.writer  # trait sync can cause a write
+    @db_api.placement_context_manager.writer  # trait sync can cause a write
     def _get_all_from_db(context, filters):
         _ensure_trait_sync(context)
         if not filters:

@@ -2535,7 +2535,7 @@ class ProviderSummary(base.VersionedObject):
         return set(res.resource_class for res in self.resources)
 
 
-@db_api.api_context_manager.reader
+@db_api.placement_context_manager.reader
 def _get_usages_by_provider(ctx, rp_ids):
     """Returns a row iterator of usage records grouped by resource provider ID
     and resource class ID for all resource providers

@@ -2606,7 +2606,7 @@ def _get_usages_by_provider(ctx, rp_ids):
     return ctx.session.execute(query).fetchall()
 
 
-@db_api.api_context_manager.reader
+@db_api.placement_context_manager.reader
 def _get_provider_ids_having_any_trait(ctx, traits):
     """Returns a list of resource provider internal IDs that have ANY of the
     supplied traits.

@@ -2627,7 +2627,7 @@ def _get_provider_ids_having_any_trait(ctx, traits):
     return [r[0] for r in ctx.session.execute(sel)]
 
 
-@db_api.api_context_manager.reader
+@db_api.placement_context_manager.reader
 def _get_provider_ids_having_all_traits(ctx, required_traits):
     """Returns a list of resource provider internal IDs that have ALL of the
     required traits.

@@ -2656,7 +2656,7 @@ def _get_provider_ids_having_all_traits(ctx, required_traits):
     return [r[0] for r in ctx.session.execute(sel)]
 
 
-@db_api.api_context_manager.reader
+@db_api.placement_context_manager.reader
 def _has_provider_trees(ctx):
     """Simple method that returns whether provider trees (i.e. nested resource
     providers) are in use in the deployment at all. This information is used to

@@ -2673,7 +2673,7 @@ def _has_provider_trees(ctx):
     return len(res) > 0
 
 
-@db_api.api_context_manager.reader
+@db_api.placement_context_manager.reader
 def _get_provider_ids_matching(ctx, resources, required_traits,
                                forbidden_traits, member_of=None):
     """Returns a list of tuples of (internal provider ID, root provider ID)

@@ -2805,7 +2805,7 @@ def _get_provider_ids_matching(ctx, resources, required_traits,
     return [(r[0], r[1]) for r in ctx.session.execute(sel)]
 
 
-@db_api.api_context_manager.reader
+@db_api.placement_context_manager.reader
 def _provider_aggregates(ctx, rp_ids):
     """Given a list of resource provider internal IDs, returns a dict,
     keyed by those provider IDs, of sets of aggregate ids associated

@@ -2830,7 +2830,7 @@ def _provider_aggregates(ctx, rp_ids):
     return res
 
 
-@db_api.api_context_manager.reader
+@db_api.placement_context_manager.reader
 def _get_providers_with_resource(ctx, rc_id, amount):
     """Returns a set of tuples of (provider ID, root provider ID) of providers
     that satisfy the request for a single resource class.

@@ -2889,7 +2889,7 @@ def _get_providers_with_resource(ctx, rc_id, amount):
     return res
 
 
-@db_api.api_context_manager.reader
+@db_api.placement_context_manager.reader
 def _get_trees_with_traits(ctx, rp_ids, required_traits, forbidden_traits):
     """Given a list of provider IDs, filter them to return a set of tuples of
     (provider ID, root provider ID) of providers which belong to a tree that

@@ -2978,7 +2978,7 @@ def _get_trees_with_traits(ctx, rp_ids, required_traits, forbidden_traits):
     return [(rp_id, root_id) for rp_id, root_id in res]
 
 
-@db_api.api_context_manager.reader
+@db_api.placement_context_manager.reader
 def _get_trees_matching_all(ctx, resources, required_traits, forbidden_traits,
                             sharing, member_of):
     """Returns a list of two-tuples (provider internal ID, root provider

@@ -3458,7 +3458,7 @@ def _alloc_candidates_multiple_providers(ctx, requested_resources,
     return alloc_requests, list(summaries.values())
 
 
-@db_api.api_context_manager.reader
+@db_api.placement_context_manager.reader
 def _provider_traits(ctx, rp_ids):
     """Given a list of resource provider internal IDs, returns a dict, keyed by
     those provider IDs, of string trait names associated with that provider.

@@ -3483,7 +3483,7 @@ def _provider_traits(ctx, rp_ids):
     return res
 
 
-@db_api.api_context_manager.reader
+@db_api.placement_context_manager.reader
 def _trait_ids_from_names(ctx, names):
     """Given a list of string trait names, returns a dict, keyed by those
     string names, of the corresponding internal integer trait ID.

@@ -3813,8 +3813,8 @@ class AllocationCandidates(base.VersionedObject):
     def _get_by_one_request(context, request):
         """Get allocation candidates for one RequestGroup.
 
-        Must be called from within an api_context_manager.reader (or writer)
-        context.
+        Must be called from within an placement_context_manager.reader
+        (or writer) context.
 
         :param context: Nova RequestContext.
         :param request: One nova.api.openstack.placement.util.RequestGroup

@@ -3895,7 +3895,7 @@ class AllocationCandidates(base.VersionedObject):
     # resource_providers table via ResourceProvider.get_by_uuid, which does
     # data migration to populate the root_provider_uuid. Change this back to a
     # reader when that migration is no longer happening.
-    @db_api.api_context_manager.writer
+    @db_api.placement_context_manager.writer
     def _get_by_requests(cls, context, requests, limit=None,
                          group_policy=None):
         candidates = {}
@@ -24,7 +24,7 @@ CONF = cfg.CONF
 USER_TBL = models.User.__table__
 
 
-@db_api.api_context_manager.writer
+@db_api.placement_context_manager.writer
 def ensure_incomplete_user(ctx):
     """Ensures that a user record is created for the "incomplete consumer
     user". Returns the internal ID of that record.

@@ -40,7 +40,7 @@ def ensure_incomplete_user(ctx):
     return res.inserted_primary_key[0]
 
 
-@db_api.api_context_manager.reader
+@db_api.placement_context_manager.reader
 def _get_user_by_external_id(ctx, external_id):
     users = sa.alias(USER_TBL, name="u")
     cols = [

@@ -81,7 +81,7 @@ class User(base.VersionedObject):
         return cls._from_db_object(ctx, cls(ctx), res)
 
     def create(self):
-        @db_api.api_context_manager.writer
+        @db_api.placement_context_manager.writer
         def _create_in_db(ctx):
             db_obj = models.User(external_id=self.external_id)
             try:
@@ -104,10 +104,61 @@ def enrich_help_text(alt_db_opts):
             # texts here if needed.
             alt_db_opt.help = db_opt.help + alt_db_opt.help
 
+# NOTE(cdent): See the note above on api_db_group. The same issues
+# apply here.
+placement_db_group = cfg.OptGroup('placement_database',
+                                  title='Placement API database options',
+                                  help="""
+The *Placement API Database* is a separate database which can be used with the
+placement service. This database is optional: if the connection option is not
+set, the nova api database will be used instead.
+""")
+
+placement_db_opts = [
+    cfg.StrOpt('connection',
+        help='',
+        secret=True),
+    cfg.StrOpt('connection_parameters',
+        default='',
+        help=''),
+    cfg.BoolOpt('sqlite_synchronous',
+        default=True,
+        help=''),
+    cfg.StrOpt('slave_connection',
+        secret=True,
+        help=''),
+    cfg.StrOpt('mysql_sql_mode',
+        default='TRADITIONAL',
+        help=''),
+    cfg.IntOpt('connection_recycle_time',
+        default=3600,
+        help=''),
+    cfg.IntOpt('max_pool_size',
+        help=''),
+    cfg.IntOpt('max_retries',
+        default=10,
+        help=''),
+    cfg.IntOpt('retry_interval',
+        default=10,
+        help=''),
+    cfg.IntOpt('max_overflow',
+        help=''),
+    cfg.IntOpt('connection_debug',
+        default=0,
+        help=''),
+    cfg.BoolOpt('connection_trace',
+        default=False,
+        help=''),
+    cfg.IntOpt('pool_timeout',
+        help=''),
+]  # noqa
+
 
 def register_opts(conf):
     oslo_db_options.set_defaults(conf, connection=_DEFAULT_SQL_CONNECTION)
     conf.register_opts(api_db_opts, group=api_db_group)
+    conf.register_opts(placement_db_opts, group=placement_db_group)
 
 
 def list_opts():

@@ -119,6 +170,8 @@ def list_opts():
     # in the "sample.conf" file, I omit the listing of the "oslo_db_options"
     # here.
     enrich_help_text(api_db_opts)
+    enrich_help_text(placement_db_opts)
     return {
-        api_db_group: api_db_opts
+        api_db_group: api_db_opts,
+        placement_db_group: placement_db_opts,
     }
@@ -79,6 +79,7 @@ LOG = logging.getLogger(__name__)
 
 main_context_manager = enginefacade.transaction_context()
 api_context_manager = enginefacade.transaction_context()
+placement_context_manager = enginefacade.transaction_context()
 
 
 def _get_db_conf(conf_group, connection=None):

@@ -99,6 +100,12 @@ def _context_manager_from_context(context):
 def configure(conf):
     main_context_manager.configure(**_get_db_conf(conf.database))
     api_context_manager.configure(**_get_db_conf(conf.api_database))
+    if conf.placement_database.connection is None:
+        placement_context_manager.configure(
+            **_get_db_conf(conf.api_database))
+    else:
+        placement_context_manager.configure(
+            **_get_db_conf(conf.placement_database))
 
     if profiler_sqlalchemy and CONF.profiler.enabled \
             and CONF.profiler.trace_sqlalchemy:

@@ -141,6 +148,10 @@ def get_api_engine():
     return api_context_manager.get_legacy_facade().get_engine()
 
 
+def get_placement_engine():
+    return placement_context_manager.get_legacy_facade().get_engine()
+
+
 _SHADOW_TABLE_PREFIX = 'shadow_'
 _DEFAULT_QUOTA_NAME = 'default'
 PER_PROJECT_QUOTAS = ['fixed_ips', 'floating_ips', 'networks']
@@ -31,6 +31,7 @@ from nova.i18n import _
 INIT_VERSION = {}
 INIT_VERSION['main'] = 215
 INIT_VERSION['api'] = 0
+INIT_VERSION['placement'] = 0
 _REPOSITORY = {}
 
 LOG = logging.getLogger(__name__)

@@ -41,6 +42,8 @@ def get_engine(database='main', context=None):
         return db_session.get_engine(context=context)
     if database == 'api':
         return db_session.get_api_engine()
+    if database == 'placement':
+        return db_session.get_placement_engine()
 
 
 def db_sync(version=None, database='main', context=None):

@@ -169,7 +172,10 @@ def _find_migrate_repo(database='main'):
     """Get the path for the migrate repository."""
     global _REPOSITORY
     rel_path = 'migrate_repo'
-    if database == 'api':
+    if database == 'api' or database == 'placement':
+        # NOTE(cdent): For the time being the placement database (if
+        # it is being used) is a replica (in structure) of the api
+        # database.
         rel_path = os.path.join('api_migrations', 'migrate_repo')
     path = os.path.join(os.path.abspath(os.path.dirname(__file__)),
                         rel_path)
@@ -290,6 +290,7 @@ class TestCase(testtools.TestCase):
             # NOTE(danms): Full database setup involves a cell0, cell1,
             # and the relevant mappings.
             self.useFixture(nova_fixtures.Database(database='api'))
+            self.useFixture(nova_fixtures.Database(database='placement'))
             self._setup_cells()
             self.useFixture(nova_fixtures.DefaultFlavorsFixture())
         elif not self.USES_DB_SELF:
@ -58,7 +58,7 @@ from nova.tests import uuidsentinel
|
||||
_TRUE_VALUES = ('True', 'true', '1', 'yes')
|
||||
|
||||
CONF = cfg.CONF
|
||||
DB_SCHEMA = {'main': "", 'api': ""}
|
||||
DB_SCHEMA = {'main': "", 'api': "", 'placement': ""}
|
||||
SESSION_CONFIGURED = False
|
||||
|
||||
|
||||
@@ -571,7 +571,7 @@ class Database(fixtures.Fixture):
     def __init__(self, database='main', connection=None):
         """Create a database fixture.
 
-        :param database: The type of database, 'main' or 'api'
+        :param database: The type of database, 'main', 'api' or 'placement'
         :param connection: The connection string to use
         """
         super(Database, self).__init__()
@@ -592,6 +592,8 @@ class Database(fixtures.Fixture):
             self.get_engine = session.get_engine
         elif database == 'api':
             self.get_engine = session.get_api_engine
+        elif database == 'placement':
+            self.get_engine = session.get_placement_engine
 
     def _cache_schema(self):
         global DB_SCHEMA
@@ -625,7 +627,7 @@ class DatabaseAtVersion(fixtures.Fixture):
         """Create a database fixture.
 
         :param version: Max version to sync to (or None for current)
-        :param database: The type of database, 'main' or 'api'
+        :param database: The type of database, 'main', 'api', 'placement'
         """
         super(DatabaseAtVersion, self).__init__()
         self.database = database
@@ -634,6 +636,8 @@ class DatabaseAtVersion(fixtures.Fixture):
             self.get_engine = session.get_engine
         elif database == 'api':
             self.get_engine = session.get_api_engine
+        elif database == 'placement':
+            self.get_engine = session.get_placement_engine
 
     def cleanup(self):
         engine = self.get_engine()
@@ -1954,7 +1954,7 @@ class AllocationCandidatesTestCase(tb.PlacementDbBaseTestCase):
         names = map(six.text_type, names)
         sel = sa.select([rp_obj._RP_TBL.c.id])
         sel = sel.where(rp_obj._RP_TBL.c.name.in_(names))
-        with self.api_db.get_engine().connect() as conn:
+        with self.placement_db.get_engine().connect() as conn:
             rp_ids = set([r[0] for r in conn.execute(sel)])
         return rp_ids
@@ -53,7 +53,8 @@ class PlacementDbBaseTestCase(test.NoDBTestCase):
     def setUp(self):
         super(PlacementDbBaseTestCase, self).setUp()
         self.useFixture(fixtures.Database())
-        self.api_db = self.useFixture(fixtures.Database(database='api'))
+        self.placement_db = self.useFixture(
+            fixtures.Database(database='placement'))
         # Reset the _TRAITS_SYNCED global before we start and after
         # we are done since other tests (notably the gabbi tests)
         # may have caused it to change.
@@ -55,7 +55,7 @@ class ConsumerTestCase(tb.PlacementDbBaseTestCase):
         self.assertRaises(exception.ConsumerExists, c.create)
 
 
-@db_api.api_context_manager.reader
+@db_api.placement_context_manager.reader
 def _get_allocs_with_no_consumer_relationship(ctx):
     alloc_to_consumer = sa.outerjoin(
         ALLOC_TBL, CONSUMER_TBL,
@@ -75,10 +75,10 @@ class CreateIncompleteConsumersTestCase(test.NoDBTestCase):
     def setUp(self):
         super(CreateIncompleteConsumersTestCase, self).setUp()
         self.useFixture(fixtures.Database())
-        self.api_db = self.useFixture(fixtures.Database(database='api'))
+        self.api_db = self.useFixture(fixtures.Database(database='placement'))
         self.ctx = context.RequestContext('fake-user', 'fake-project')
 
-    @db_api.api_context_manager.writer
+    @db_api.placement_context_manager.writer
     def _create_incomplete_allocations(self, ctx):
         # Create some allocations with consumers that don't exist in the
         # consumers table to represent old allocations that we expect to be
@@ -105,7 +105,7 @@ class CreateIncompleteConsumersTestCase(test.NoDBTestCase):
         res = ctx.session.execute(sel).fetchall()
         self.assertEqual(0, len(res))
 
-    @db_api.api_context_manager.reader
+    @db_api.placement_context_manager.reader
     def _check_incomplete_consumers(self, ctx):
         incomplete_external_id = CONF.placement.incomplete_consumer_project_id
@@ -26,7 +26,7 @@ class TestResourceClassCache(test.TestCase):
 
     def setUp(self):
         super(TestResourceClassCache, self).setUp()
-        self.db = self.useFixture(fixtures.Database(database='api'))
+        self.db = self.useFixture(fixtures.Database(database='placement'))
         self.context = mock.Mock()
         sess_mock = mock.Mock()
         sess_mock.connection.side_effect = self.db.get_engine().connect
@@ -120,7 +120,7 @@ class ResourceProviderTestCase(tb.PlacementDbBaseTestCase):
         the provider's UUID.
         """
         rp_tbl = rp_obj._RP_TBL
-        conn = self.api_db.get_engine().connect()
+        conn = self.placement_db.get_engine().connect()
 
         # First, set up a record for an "old-style" resource provider with no
         # root provider UUID.
@@ -396,7 +396,7 @@ class ResourceProviderTestCase(tb.PlacementDbBaseTestCase):
         self.assertEqual([], rps.objects)
 
         rp_tbl = rp_obj._RP_TBL
-        conn = self.api_db.get_engine().connect()
+        conn = self.placement_db.get_engine().connect()
 
         # First, set up a record for an "old-style" resource provider with no
         # root provider UUID.
@@ -1736,7 +1736,7 @@ class ResourceClassListTestCase(tb.PlacementDbBaseTestCase):
             ('CUSTOM_IRON_NFV', 10001),
             ('CUSTOM_IRON_ENTERPRISE', 10002),
         ]
-        with self.api_db.get_engine().connect() as conn:
+        with self.placement_db.get_engine().connect() as conn:
             for custom in customs:
                 c_name, c_id = custom
                 ins = rp_obj._RC_TBL.insert().values(id=c_id, name=c_name)
@@ -2198,7 +2198,7 @@ class ResourceProviderTraitTestCase(tb.PlacementDbBaseTestCase):
         list all traits, os_traits have been synchronized.
         """
         std_traits = os_traits.get_traits()
-        conn = self.api_db.get_engine().connect()
+        conn = self.placement_db.get_engine().connect()
 
         def _db_traits(conn):
             sel = sa.select([rp_obj._TRAIT_TBL.c.name])
@@ -61,6 +61,8 @@ class APIFixture(fixture.GabbiFixture):
         self.conf.set_override('connection', "sqlite://", group='database')
         self.conf.set_override('connection', "sqlite://",
                                group='api_database')
+        self.conf.set_override('connection', "sqlite://",
+                               group='placement_database')
 
         # Register CORS opts, but do not set config. This has the
         # effect of exercising the "don't use cors" path in
@@ -73,12 +75,14 @@ class APIFixture(fixture.GabbiFixture):
         config.parse_args([], default_config_files=[], configure_db=False,
                           init_rpc=False)
 
-        # NOTE(cdent): The main database is not used but we still need to
-        # manage it to make the fixtures work correctly and not cause
+        # NOTE(cdent): All three database fixtures need to be
+        # managed for database handling to work and not cause
         # conflicts with other tests in the same process.
         self._reset_db_flags()
+        self.placement_db_fixture = fixtures.Database('placement')
         self.api_db_fixture = fixtures.Database('api')
         self.main_db_fixture = fixtures.Database('main')
+        self.placement_db_fixture.reset()
         self.api_db_fixture.reset()
         self.main_db_fixture.reset()
@@ -96,6 +100,7 @@ class APIFixture(fixture.GabbiFixture):
         os.environ['ALT_PARENT_PROVIDER_UUID'] = uuidutils.generate_uuid()
 
     def stop_fixture(self):
+        self.placement_db_fixture.cleanup()
         self.api_db_fixture.cleanup()
         self.main_db_fixture.cleanup()
@@ -29,7 +29,7 @@ class TestDirect(test.NoDBTestCase):
 
     def setUp(self):
         super(TestDirect, self).setUp()
-        self.api_db = self.useFixture(fixtures.Database(database='api'))
+        self.api_db = self.useFixture(fixtures.Database(database='placement'))
         self._reset_traits_synced()
         self.context = context.get_admin_context()
         self.addCleanup(self._reset_traits_synced)
releasenotes/notes/placement-database-2e087f379273535d.yaml (new file, 23 lines)
@@ -0,0 +1,23 @@
+---
+features:
+  - |
+    An optional configuration group ``placement_database`` can be used in
+    nova.conf to configure a separate database for use with the placement
+    API.
+
+    If ``placement_database.connection`` has a value then the
+    ``placement_database`` configuration group will be used to configure a
+    separate placement database, including using ``connection`` to identify
+    the target database. That database will have a schema that is a replica
+    of all the tables used in the API database. The new database will be
+    created and synchronized when the ``nova-manage api_db sync`` command is
+    run.
+
+    When the ``placement_database.connection`` setting is omitted the
+    existing settings for the ``api_database`` will be used for hosting
+    placement data.
+
+    Setting ``placement_database.connection`` and calling
+    ``nova-manage api_db sync`` will only create tables. No data will be
+    migrated. In an existing OpenStack deployment, if there is existing
+    placement data in the ``nova_api`` database this will not be copied. It
+    is up to the deployment to manually replicate that data in a fashion
+    that works best for the environment.
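The fallback the release note describes — use ``placement_database.connection`` when set, otherwise fall back to the ``api_database`` settings — can be sketched roughly as follows (plain dicts stand in for oslo.config groups here; this is an illustration, not nova's actual API):

```python
def pick_placement_db_config(conf):
    """Choose which config group backs placement data.

    `conf` is a plain dict standing in for nova's oslo.config groups;
    the structure is illustrative only.
    """
    placement = conf.get('placement_database', {})
    if placement.get('connection') is not None:
        # A separate placement database has been configured.
        return 'placement_database', placement
    # Default: placement data continues to live in the API database.
    return 'api_database', conf['api_database']
```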