
Use external placement in functional tests

Adjust the fixtures used by the functional tests so they
use the placement database and web fixtures defined by the
placement code. To avoid making redundant changes, the unit and
functional tests that are solely placement-related are removed,
but the placement code itself is not (yet).

openstack-placement is required by the functional tests. It is not
added to test-requirements as we do not want unit tests to depend
on placement in any way, and we enforce this by not having placement
in the test env.

The concept of tox-siblings is used to ensure that the
placement requirement will be satisfied correctly if there is a
Depends-On. To make this happen, the functional jobs defined in
.zuul.yaml are updated to require openstack/placement.

tox.ini has to be updated to use an envdir with the same name
as the job, otherwise the tox-siblings role in Ansible cannot work.

The handling of the placement fixtures is moved out of nova/test.py
into the functional tests that actually use it because we do not
want unit tests (which get the base test class out of test.py) to
have anything to do with placement. This requires adjusting some
test files to use absolute imports.
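
A minimal sketch of what an opted-in functional test looks like after
this change. The module path nova/tests/functional/fixtures.py is added
by this patch, but the class and attribute names below are illustrative
rather than a copy of the new code:

    from nova import test
    from nova.tests.functional import fixtures as func_fixtures


    class TestServersWithPlacement(test.TestCase):

        def setUp(self):
            super(TestServersWithPlacement, self).setUp()
            # The base TestCase no longer touches placement, so a functional
            # test that needs it sets up the fixture itself and talks to
            # placement through the fixture's API client.
            self.placement = self.useFixture(func_fixtures.PlacementFixture())
            self.placement_api = self.placement.api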

Similarly, a test of the comparison function for the api samples tests
is moved into the functional tests, because it depends on placement
functionality.

TestUpgradeCheckResourceProviders in unit.cmd.test_status is moved into
a new test file: nova/tests/functional/test_nova_status.py. This is done
because it requires the PlacementFixture, which is only available to
functional tests. A MonkeyPatch is required in the test to make sure that
the right context managers are used at the right time in the command
itself (otherwise some tables do not exist). In the test itself, to avoid
speaking directly to the placement database, which would require
manipulating the RequestContext objects, resource providers are now
created over the API.
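
As an illustration (not the actual test code), a hedged sketch of
creating a resource provider over the placement REST API instead of
writing to the placement database directly. The helper name and the
fixture's client attribute are assumptions; the request bodies follow
the placement API's POST /resource_providers and
PUT /resource_providers/{uuid}/inventories:

    import uuid


    def create_rp_with_vcpu(placement_api, total=8):
        """Create a provider with VCPU inventory via the placement API.

        The upgrade check counts inventories records for a resource class,
        so a provider with some inventory created over the API is enough
        for the check to see.
        """
        rp_uuid = str(uuid.uuid4())
        placement_api.post('/resource_providers',
                           {'name': 'cn-%s' % rp_uuid, 'uuid': rp_uuid})
        placement_api.put('/resource_providers/%s/inventories' % rp_uuid,
                          {'resource_provider_generation': 0,
                           'inventories': {'VCPU': {'total': total}}})
        return rp_uuid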

Co-Authored-By: Balazs Gibizer <balazs.gibizer@ericsson.com>
Change-Id: Idaed39629095f86d24a54334c699a26c218c6593
changes/41/617941/31
Chris Dent
commit 787bb33606
Changed files (total changed lines in parentheses):

  1. .zuul.yaml (6)
  2. nova/cmd/manage.py (3)
  3. nova/cmd/status.py (1)
  4. nova/test.py (10)
  5. nova/tests/fixtures.py (145)
  6. nova/tests/functional/api/openstack/placement/__init__.py (0)
  7. nova/tests/functional/api/openstack/placement/base.py (69)
  8. nova/tests/functional/api/openstack/placement/db/__init__.py (0)
  9. nova/tests/functional/api/openstack/placement/db/test_allocation_candidates.py (2800)
  10. nova/tests/functional/api/openstack/placement/db/test_base.py (129)
  11. nova/tests/functional/api/openstack/placement/db/test_consumer.py (329)
  12. nova/tests/functional/api/openstack/placement/db/test_project.py (31)
  13. nova/tests/functional/api/openstack/placement/db/test_reshape.py (359)
  14. nova/tests/functional/api/openstack/placement/db/test_resource_class_cache.py (145)
  15. nova/tests/functional/api/openstack/placement/db/test_resource_provider.py (2391)
  16. nova/tests/functional/api/openstack/placement/db/test_user.py (31)
  17. nova/tests/functional/api/openstack/placement/fixtures/__init__.py (0)
  18. nova/tests/functional/api/openstack/placement/fixtures/capture.py (81)
  19. nova/tests/functional/api/openstack/placement/fixtures/gabbits.py (431)
  20. nova/tests/functional/api/openstack/placement/fixtures/placement.py (49)
  21. nova/tests/functional/api/openstack/placement/gabbits/aggregate-policy.yaml (39)
  22. nova/tests/functional/api/openstack/placement/gabbits/aggregate.yaml (204)
  23. nova/tests/functional/api/openstack/placement/gabbits/allocation-bad-class.yaml (77)
  24. nova/tests/functional/api/openstack/placement/gabbits/allocation-candidates-member-of.yaml (141)
  25. nova/tests/functional/api/openstack/placement/gabbits/allocation-candidates-policy.yaml (18)
  26. nova/tests/functional/api/openstack/placement/gabbits/allocation-candidates.yaml (416)
  27. nova/tests/functional/api/openstack/placement/gabbits/allocations-1-12.yaml (130)
  28. nova/tests/functional/api/openstack/placement/gabbits/allocations-1-8.yaml (152)
  29. nova/tests/functional/api/openstack/placement/gabbits/allocations-1.28.yaml (255)
  30. nova/tests/functional/api/openstack/placement/gabbits/allocations-bug-1714072.yaml (97)
  31. nova/tests/functional/api/openstack/placement/gabbits/allocations-bug-1778591.yaml (71)
  32. nova/tests/functional/api/openstack/placement/gabbits/allocations-bug-1778743.yaml (70)
  33. nova/tests/functional/api/openstack/placement/gabbits/allocations-bug-1779717.yaml (102)
  34. nova/tests/functional/api/openstack/placement/gabbits/allocations-policy.yaml (76)
  35. nova/tests/functional/api/openstack/placement/gabbits/allocations-post.yaml (399)
  36. nova/tests/functional/api/openstack/placement/gabbits/allocations.yaml (509)
  37. nova/tests/functional/api/openstack/placement/gabbits/basic-http.yaml (207)
  38. nova/tests/functional/api/openstack/placement/gabbits/bug-1674694.yaml (38)
  39. nova/tests/functional/api/openstack/placement/gabbits/confirm-auth.yaml (32)
  40. nova/tests/functional/api/openstack/placement/gabbits/cors.yaml (47)
  41. nova/tests/functional/api/openstack/placement/gabbits/ensure-consumer.yaml (41)
  42. nova/tests/functional/api/openstack/placement/gabbits/granular.yaml (474)
  43. nova/tests/functional/api/openstack/placement/gabbits/inventory-policy.yaml (85)
  44. nova/tests/functional/api/openstack/placement/gabbits/inventory.yaml (812)
  45. nova/tests/functional/api/openstack/placement/gabbits/microversion-bug-1724065.yaml (22)
  46. nova/tests/functional/api/openstack/placement/gabbits/microversion.yaml (90)
  47. nova/tests/functional/api/openstack/placement/gabbits/non-cors.yaml (25)
  48. nova/tests/functional/api/openstack/placement/gabbits/reshaper-policy.yaml (20)
  49. nova/tests/functional/api/openstack/placement/gabbits/reshaper.yaml (558)
  50. nova/tests/functional/api/openstack/placement/gabbits/resource-class-in-use.yaml (80)
  51. nova/tests/functional/api/openstack/placement/gabbits/resource-classes-1-6.yaml (21)
  52. nova/tests/functional/api/openstack/placement/gabbits/resource-classes-1-7.yaml (49)
  53. nova/tests/functional/api/openstack/placement/gabbits/resource-classes-last-modified.yaml (117)
  54. nova/tests/functional/api/openstack/placement/gabbits/resource-classes-policy.yaml (40)
  55. nova/tests/functional/api/openstack/placement/gabbits/resource-classes.yaml (325)
  56. nova/tests/functional/api/openstack/placement/gabbits/resource-provider-aggregates.yaml (181)
  57. nova/tests/functional/api/openstack/placement/gabbits/resource-provider-bug-1779818.yaml (123)
  58. nova/tests/functional/api/openstack/placement/gabbits/resource-provider-duplication.yaml (48)
  59. nova/tests/functional/api/openstack/placement/gabbits/resource-provider-links.yaml (106)
  60. nova/tests/functional/api/openstack/placement/gabbits/resource-provider-policy.yaml (48)
  61. nova/tests/functional/api/openstack/placement/gabbits/resource-provider-resources-query.yaml (156)
  62. nova/tests/functional/api/openstack/placement/gabbits/resource-provider.yaml (775)
  63. nova/tests/functional/api/openstack/placement/gabbits/shared-resources.yaml (143)
  64. nova/tests/functional/api/openstack/placement/gabbits/traits-policy.yaml (55)
  65. nova/tests/functional/api/openstack/placement/gabbits/traits.yaml (487)
  66. nova/tests/functional/api/openstack/placement/gabbits/unicode.yaml (40)
  67. nova/tests/functional/api/openstack/placement/gabbits/usage-policy.yaml (33)
  68. nova/tests/functional/api/openstack/placement/gabbits/usage.yaml (120)
  69. nova/tests/functional/api/openstack/placement/gabbits/with-allocations.yaml (159)
  70. nova/tests/functional/api/openstack/placement/test_direct.py (77)
  71. nova/tests/functional/api/openstack/placement/test_placement_api.py (44)
  72. nova/tests/functional/api/openstack/placement/test_verify_policy.py (50)
  73. nova/tests/functional/api_paste_fixture.py (1)
  74. nova/tests/functional/api_sample_tests/api_sample_base.py (2)
  75. nova/tests/functional/api_sample_tests/test_compare_result.py (0)
  76. nova/tests/functional/compute/test_resource_tracker.py (4)
  77. nova/tests/functional/fixtures.py (150)
  78. nova/tests/functional/integrated_helpers.py (6)
  79. nova/tests/functional/libvirt/base.py (4)
  80. nova/tests/functional/notification_sample_tests/notification_sample_base.py (3)
  81. nova/tests/functional/regressions/test_bug_1595962.py (3)
  82. nova/tests/functional/regressions/test_bug_1671648.py (3)
  83. nova/tests/functional/regressions/test_bug_1675570.py (3)
  84. nova/tests/functional/regressions/test_bug_1679750.py (22)
  85. nova/tests/functional/regressions/test_bug_1682693.py (3)
  86. nova/tests/functional/regressions/test_bug_1702454.py (3)
  87. nova/tests/functional/regressions/test_bug_1713783.py (3)
  88. nova/tests/functional/regressions/test_bug_1718455.py (3)
  89. nova/tests/functional/regressions/test_bug_1718512.py (3)
  90. nova/tests/functional/regressions/test_bug_1719730.py (3)
  91. nova/tests/functional/regressions/test_bug_1735407.py (3)
  92. nova/tests/functional/regressions/test_bug_1741307.py (3)
  93. nova/tests/functional/regressions/test_bug_1746483.py (3)
  94. nova/tests/functional/regressions/test_bug_1764883.py (3)
  95. nova/tests/functional/regressions/test_bug_1780373.py (3)
  96. nova/tests/functional/regressions/test_bug_1781710.py (3)
  97. nova/tests/functional/regressions/test_bug_1784353.py (3)
  98. nova/tests/functional/regressions/test_bug_1797580.py (3)
  99. nova/tests/functional/regressions/test_bug_1806064.py (3)
  100. nova/tests/functional/test_aggregates.py (5)

.zuul.yaml (6)

@@ -48,6 +48,8 @@
Run tox-based functional tests for the OpenStack Nova project with Nova
specific irrelevant-files list. Uses tox with the ``functional``
environment.
required-projects:
- openstack/placement
irrelevant-files: &functional-irrelevant-files
- ^.*\.rst$
- ^api-.*$
@@ -56,6 +58,7 @@
- ^releasenotes/.*$
vars:
tox_envlist: functional
tox_install_siblings: true
timeout: 3600
- job:
@@ -65,9 +68,12 @@
Run tox-based functional tests for the OpenStack Nova project
under cPython version 3.5. with Nova specific irrelevant-files list.
Uses tox with the ``functional-py35`` environment.
required-projects:
- openstack/placement
irrelevant-files: *functional-irrelevant-files
vars:
tox_envlist: functional-py35
tox_install_siblings: true
timeout: 3600
- job:

nova/cmd/manage.py (3)

@@ -45,6 +45,7 @@ import six
import six.moves.urllib.parse as urlparse
from sqlalchemy.engine import url as sqla_url
# FIXME(cdent): This is a speedbump in the extraction process
from nova.api.openstack.placement.objects import consumer as consumer_obj
from nova.cmd import common as cmd_common
from nova.compute import api as compute_api
@@ -416,6 +417,7 @@ class DbCommands(object):
# need to be populated if it was not specified during boot time.
instance_obj.populate_missing_availability_zones,
# Added in Rocky
# FIXME(cdent): This is a factor that needs to be addressed somehow
consumer_obj.create_incomplete_consumers,
# Added in Rocky
instance_mapping_obj.populate_queued_for_delete,
@@ -1987,6 +1989,7 @@ class PlacementCommands(object):
return num_processed
# FIXME(cdent): This needs to be addressed as part of extraction.
@action_description(
_("Iterates over non-cell0 cells looking for instances which do "
"not have allocations in the Placement service, or have incomplete "

nova/cmd/status.py (1)

@@ -251,6 +251,7 @@ class UpgradeCommands(object):
# and resource class, so we can simply count the number of inventories
# records for the given resource class and those will uniquely identify
# the number of resource providers we care about.
# FIXME(cdent): This will be a different project soon.
meta = MetaData(bind=placement_db.get_placement_engine())
inventories = Table('inventories', meta, autoload=True)
return select([sqlfunc.count()]).select_from(

nova/test.py (10)

@@ -49,7 +49,6 @@ from oslotest import moxstubout
import six
import testtools
from nova.api.openstack.placement.objects import resource_provider
from nova import context
from nova.db import api as db
from nova import exception
@@ -260,7 +259,6 @@ class TestCase(testtools.TestCase):
# NOTE(danms): Full database setup involves a cell0, cell1,
# and the relevant mappings.
self.useFixture(nova_fixtures.Database(database='api'))
self.useFixture(nova_fixtures.Database(database='placement'))
self._setup_cells()
self.useFixture(nova_fixtures.DefaultFlavorsFixture())
elif not self.USES_DB_SELF:
@@ -281,12 +279,6 @@ class TestCase(testtools.TestCase):
# caching of that value.
utils._IS_NEUTRON = None
# Reset the traits sync and rc cache flags
def _reset_traits():
resource_provider._TRAITS_SYNCED = False
_reset_traits()
self.addCleanup(_reset_traits)
resource_provider._RC_CACHE = None
# Reset the global QEMU version flag.
images.QEMU_VERSION = None
@@ -296,8 +288,6 @@
self.addCleanup(self._clear_attrs)
self.useFixture(fixtures.EnvironmentVariable('http_proxy'))
self.policy = self.useFixture(policy_fixture.PolicyFixture())
self.placement_policy = self.useFixture(
policy_fixture.PlacementPolicyFixture())
self.useFixture(nova_fixtures.PoisonFunctions())

nova/tests/fixtures.py (145)

@@ -26,8 +26,6 @@ import random
import warnings
import fixtures
from keystoneauth1 import adapter as ka
from keystoneauth1 import session as ks
import mock
from neutronclient.common import exceptions as neutron_client_exc
from oslo_concurrency import lockutils
@@ -41,7 +39,6 @@ from requests import adapters
from wsgi_intercept import interceptor
from nova.api.openstack.compute import tenant_networks
from nova.api.openstack.placement import db_api as placement_db
from nova.api.openstack import wsgi_app
from nova.api import wsgi
from nova.compute import rpcapi as compute_rpcapi
@@ -57,12 +54,11 @@ from nova import quota as nova_quota
from nova import rpc
from nova import service
from nova.tests.functional.api import client
from nova.tests.functional.api.openstack.placement.fixtures import placement
_TRUE_VALUES = ('True', 'true', '1', 'yes')
CONF = cfg.CONF
DB_SCHEMA = {'main': "", 'api': "", 'placement': ""}
DB_SCHEMA = {'main': "", 'api': ""}
SESSION_CONFIGURED = False
@@ -631,7 +627,7 @@ class Database(fixtures.Fixture):
def __init__(self, database='main', connection=None):
"""Create a database fixture.
:param database: The type of database, 'main', 'api' or 'placement'
:param database: The type of database, 'main', or 'api'
:param connection: The connection string to use
"""
super(Database, self).__init__()
@@ -640,7 +636,6 @@
global SESSION_CONFIGURED
if not SESSION_CONFIGURED:
session.configure(CONF)
placement_db.configure(CONF)
SESSION_CONFIGURED = True
self.database = database
if database == 'main':
@@ -652,8 +647,6 @@
self.get_engine = session.get_engine
elif database == 'api':
self.get_engine = session.get_api_engine
elif database == 'placement':
self.get_engine = placement_db.get_placement_engine
def _cache_schema(self):
global DB_SCHEMA
@@ -687,7 +680,7 @@ class DatabaseAtVersion(fixtures.Fixture):
"""Create a database fixture.
:param version: Max version to sync to (or None for current)
:param database: The type of database, 'main', 'api', 'placement'
:param database: The type of database, 'main', 'api'
"""
super(DatabaseAtVersion, self).__init__()
self.database = database
@@ -696,8 +689,6 @@
self.get_engine = session.get_engine
elif database == 'api':
self.get_engine = session.get_api_engine
elif database == 'placement':
self.get_engine = placement_db.get_placement_engine
def cleanup(self):
engine = self.get_engine()
@@ -1853,136 +1844,6 @@ class CinderFixtureNewAttachFlow(fixtures.Fixture):
fake_get_all_volume_types)
class PlacementApiClient(object):
def __init__(self, placement_fixture):
self.fixture = placement_fixture
def get(self, url, **kwargs):
return client.APIResponse(self.fixture._fake_get(None, url, **kwargs))
def put(self, url, body, **kwargs):
return client.APIResponse(
self.fixture._fake_put(None, url, body, **kwargs))
def post(self, url, body, **kwargs):
return client.APIResponse(
self.fixture._fake_post(None, url, body, **kwargs))
class PlacementFixture(placement.PlacementFixture):
"""A fixture to placement operations.
Runs a local WSGI server bound on a free port and having the Placement
application with NoAuth middleware.
This fixture also prevents calling the ServiceCatalog for getting the
endpoint.
It's possible to ask for a specific token when running the fixtures so
all calls would be passing this token.
Most of the time users of this fixture will also want the placement
database fixture (called first) as well:
self.useFixture(nova_fixtures.Database(database='placement'))
That is left as a manual step so tests may have fine grain control, and
because it is likely that these fixtures will continue to evolve as
the separation of nova and placement continues.
"""
def setUp(self):
super(PlacementFixture, self).setUp()
# Turn off manipulation of socket_options in TCPKeepAliveAdapter
# to keep wsgi-intercept happy. Replace it with the method
# from its superclass.
self.useFixture(fixtures.MonkeyPatch(
'keystoneauth1.session.TCPKeepAliveAdapter.init_poolmanager',
adapters.HTTPAdapter.init_poolmanager))
self._client = ka.Adapter(ks.Session(auth=None), raise_exc=False)
# NOTE(sbauza): We need to mock the scheduler report client because
# we need to fake Keystone by directly calling the endpoint instead
# of looking up the service catalog, like we did for the OSAPIFixture.
self.useFixture(fixtures.MonkeyPatch(
'nova.scheduler.client.report.SchedulerReportClient.get',
self._fake_get))
self.useFixture(fixtures.MonkeyPatch(
'nova.scheduler.client.report.SchedulerReportClient.post',
self._fake_post))
self.useFixture(fixtures.MonkeyPatch(
'nova.scheduler.client.report.SchedulerReportClient.put',
self._fake_put))
self.useFixture(fixtures.MonkeyPatch(
'nova.scheduler.client.report.SchedulerReportClient.delete',
self._fake_delete))
self.api = PlacementApiClient(self)
@staticmethod
def _update_headers_with_version(headers, **kwargs):
version = kwargs.get("version")
if version is not None:
# TODO(mriedem): Perform some version discovery at some point.
headers.update({
'OpenStack-API-Version': 'placement %s' % version
})
def _fake_get(self, *args, **kwargs):
(url,) = args[1:]
# TODO(sbauza): The current placement NoAuthMiddleware returns a 401
# in case a token is not provided. We should change that by creating
# a fake token so we could remove adding the header below.
headers = {'x-auth-token': self.token}
self._update_headers_with_version(headers, **kwargs)
return self._client.get(
url,
endpoint_override=self.endpoint,
headers=headers)
def _fake_post(self, *args, **kwargs):
(url, data) = args[1:]
# NOTE(sdague): using json= instead of data= sets the
# media type to application/json for us. Placement API is
# more sensitive to this than other APIs in the OpenStack
# ecosystem.
# TODO(sbauza): The current placement NoAuthMiddleware returns a 401
# in case a token is not provided. We should change that by creating
# a fake token so we could remove adding the header below.
headers = {'x-auth-token': self.token}
self._update_headers_with_version(headers, **kwargs)
return self._client.post(
url, json=data,
endpoint_override=self.endpoint,
headers=headers)
def _fake_put(self, *args, **kwargs):
(url, data) = args[1:]
# NOTE(sdague): using json= instead of data= sets the
# media type to application/json for us. Placement API is
# more sensitive to this than other APIs in the OpenStack
# ecosystem.
# TODO(sbauza): The current placement NoAuthMiddleware returns a 401
# in case a token is not provided. We should change that by creating
# a fake token so we could remove adding the header below.
headers = {'x-auth-token': self.token}
self._update_headers_with_version(headers, **kwargs)
return self._client.put(
url, json=data,
endpoint_override=self.endpoint,
headers=headers)
def _fake_delete(self, *args, **kwargs):
(url,) = args[1:]
# TODO(sbauza): The current placement NoAuthMiddleware returns a 401
# in case a token is not provided. We should change that by creating
# a fake token so we could remove adding the header below.
return self._client.delete(
url,
endpoint_override=self.endpoint,
headers={'x-auth-token': self.token})
class UnHelperfulClientChannel(privsep_daemon._ClientChannel):
def __init__(self, context):
raise Exception('You have attempted to start a privsep helper. '

nova/tests/functional/api/openstack/placement/__init__.py (0)

nova/tests/functional/api/openstack/placement/base.py (69)

@@ -1,69 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
from oslo_config import fixture as config_fixture
from oslotest import output
import testtools
from nova.api.openstack.placement import context
from nova.api.openstack.placement import deploy
from nova.api.openstack.placement.objects import resource_provider
from nova.tests import fixtures
from nova.tests.functional.api.openstack.placement.fixtures import capture
from nova.tests.unit import policy_fixture
CONF = cfg.CONF
class TestCase(testtools.TestCase):
"""A base test case for placement functional tests.
Sets up minimum configuration for database and policy handling
and establishes the placement database.
"""
def setUp(self):
super(TestCase, self).setUp()
# Manage required configuration
conf_fixture = self.useFixture(config_fixture.Config(CONF))
# The Database fixture will get confused if only one of the databases
# is configured.
for group in ('placement_database', 'api_database', 'database'):
conf_fixture.config(
group=group,
connection='sqlite://',
sqlite_synchronous=False)
CONF([], default_config_files=[])
self.useFixture(policy_fixture.PlacementPolicyFixture())
self.useFixture(capture.Logging())
self.useFixture(output.CaptureOutput())
# Filter ignorable warnings during test runs.
self.useFixture(capture.WarningsFixture())
self.placement_db = self.useFixture(
fixtures.Database(database='placement'))
self._reset_database()
self.context = context.RequestContext()
# Do database syncs, such as traits sync.
deploy.update_database()
self.addCleanup(self._reset_database)
@staticmethod
def _reset_database():
"""Reset database sync flags to base state."""
resource_provider._TRAITS_SYNCED = False
resource_provider._RC_CACHE = None

nova/tests/functional/api/openstack/placement/db/__init__.py (0)

nova/tests/functional/api/openstack/placement/db/test_allocation_candidates.py (2800)

File diff suppressed because it is too large

nova/tests/functional/api/openstack/placement/db/test_base.py (129)

@@ -1,129 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Base class and convenience utilities for functional placement tests."""
from oslo_utils.fixture import uuidsentinel as uuids
from oslo_utils import uuidutils
from nova.api.openstack.placement import exception
from nova.api.openstack.placement.objects import consumer as consumer_obj
from nova.api.openstack.placement.objects import project as project_obj
from nova.api.openstack.placement.objects import resource_provider as rp_obj
from nova.api.openstack.placement.objects import user as user_obj
from nova.tests.functional.api.openstack.placement import base
def create_provider(context, name, *aggs, **kwargs):
parent = kwargs.get('parent')
root = kwargs.get('root')
uuid = kwargs.get('uuid', getattr(uuids, name))
rp = rp_obj.ResourceProvider(context, name=name, uuid=uuid)
if parent:
rp.parent_provider_uuid = parent
if root:
rp.root_provider_uuid = root
rp.create()
if aggs:
rp.set_aggregates(aggs)
return rp
def add_inventory(rp, rc, total, **kwargs):
kwargs.setdefault('max_unit', total)
inv = rp_obj.Inventory(rp._context, resource_provider=rp,
resource_class=rc, total=total, **kwargs)
inv.obj_set_defaults()
rp.add_inventory(inv)
return inv
def set_traits(rp, *traits):
tlist = []
for tname in traits:
try:
trait = rp_obj.Trait.get_by_name(rp._context, tname)
except exception.TraitNotFound:
trait = rp_obj.Trait(rp._context, name=tname)
trait.create()
tlist.append(trait)
rp.set_traits(rp_obj.TraitList(objects=tlist))
return tlist
def ensure_consumer(ctx, user, project, consumer_id=None):
# NOTE(efried): If not specified, use a random consumer UUID - we don't
# want to override any existing allocations from the test case.
consumer_id = consumer_id or uuidutils.generate_uuid()
try:
consumer = consumer_obj.Consumer.get_by_uuid(ctx, consumer_id)
except exception.NotFound:
consumer = consumer_obj.Consumer(
ctx, uuid=consumer_id, user=user, project=project)
consumer.create()
return consumer
def set_allocation(ctx, rp, consumer, rc_used_dict):
alloc = [
rp_obj.Allocation(
ctx, resource_provider=rp, resource_class=rc,
consumer=consumer, used=used)
for rc, used in rc_used_dict.items()
]
alloc_list = rp_obj.AllocationList(ctx, objects=alloc)
alloc_list.replace_all()
return alloc_list
class PlacementDbBaseTestCase(base.TestCase):
def setUp(self):
super(PlacementDbBaseTestCase, self).setUp()
# we use context in some places and ctx in other. We should only use
# context, but let's paper over that for now.
self.ctx = self.context
self.user_obj = user_obj.User(self.ctx, external_id='fake-user')
self.user_obj.create()
self.project_obj = project_obj.Project(
self.ctx, external_id='fake-project')
self.project_obj.create()
# For debugging purposes, populated by _create_provider and used by
# _validate_allocation_requests to make failure results more readable.
self.rp_uuid_to_name = {}
def _create_provider(self, name, *aggs, **kwargs):
rp = create_provider(self.ctx, name, *aggs, **kwargs)
self.rp_uuid_to_name[rp.uuid] = name
return rp
def allocate_from_provider(self, rp, rc, used, consumer_id=None,
consumer=None):
if consumer is None:
consumer = ensure_consumer(
self.ctx, self.user_obj, self.project_obj, consumer_id)
alloc_list = set_allocation(self.ctx, rp, consumer, {rc: used})
return alloc_list
def _make_allocation(self, inv_dict, alloc_dict):
rp = self._create_provider('allocation_resource_provider')
disk_inv = rp_obj.Inventory(context=self.ctx,
resource_provider=rp, **inv_dict)
inv_list = rp_obj.InventoryList(objects=[disk_inv])
rp.set_inventory(inv_list)
consumer_id = alloc_dict['consumer_id']
consumer = ensure_consumer(
self.ctx, self.user_obj, self.project_obj, consumer_id)
alloc = rp_obj.Allocation(self.ctx, resource_provider=rp,
consumer=consumer, **alloc_dict)
alloc_list = rp_obj.AllocationList(self.ctx, objects=[alloc])
alloc_list.replace_all()
return rp, alloc

nova/tests/functional/api/openstack/placement/db/test_consumer.py (329)

@@ -1,329 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
from oslo_utils.fixture import uuidsentinel as uuids
import sqlalchemy as sa
from nova.api.openstack.placement import db_api
from nova.api.openstack.placement import exception
from nova.api.openstack.placement.objects import consumer as consumer_obj
from nova.api.openstack.placement.objects import project as project_obj
from nova.api.openstack.placement.objects import resource_provider as rp_obj
from nova.api.openstack.placement.objects import user as user_obj
from nova import rc_fields as fields
from nova.tests.functional.api.openstack.placement import base
from nova.tests.functional.api.openstack.placement.db import test_base as tb
CONF = cfg.CONF
CONSUMER_TBL = consumer_obj.CONSUMER_TBL
PROJECT_TBL = project_obj.PROJECT_TBL
USER_TBL = user_obj.USER_TBL
ALLOC_TBL = rp_obj._ALLOC_TBL
class ConsumerTestCase(tb.PlacementDbBaseTestCase):
def test_non_existing_consumer(self):
self.assertRaises(exception.ConsumerNotFound,
consumer_obj.Consumer.get_by_uuid, self.ctx,
uuids.non_existing_consumer)
def test_create_and_get(self):
u = user_obj.User(self.ctx, external_id='another-user')
u.create()
p = project_obj.Project(self.ctx, external_id='another-project')
p.create()
c = consumer_obj.Consumer(
self.ctx, uuid=uuids.consumer, user=u, project=p)
c.create()
c = consumer_obj.Consumer.get_by_uuid(self.ctx, uuids.consumer)
self.assertEqual(1, c.id)
# Project ID == 1 is fake-project created in setup
self.assertEqual(2, c.project.id)
# User ID == 1 is fake-user created in setup
self.assertEqual(2, c.user.id)
self.assertRaises(exception.ConsumerExists, c.create)
def test_update(self):
"""Tests the scenario where a user supplies a different project/user ID
for an allocation's consumer and we call Consumer.update() to save that
information to the consumers table.
"""
# First, create the consumer with the "fake-user" and "fake-project"
# user/project in the base test class's setUp
c = consumer_obj.Consumer(
self.ctx, uuid=uuids.consumer, user=self.user_obj,
project=self.project_obj)
c.create()
c = consumer_obj.Consumer.get_by_uuid(self.ctx, uuids.consumer)
self.assertEqual(self.project_obj.id, c.project.id)
self.assertEqual(self.user_obj.id, c.user.id)
# Now change the consumer's project and user to a different project
another_user = user_obj.User(self.ctx, external_id='another-user')
another_user.create()
another_proj = project_obj.Project(
self.ctx, external_id='another-project')
another_proj.create()
c.project = another_proj
c.user = another_user
c.update()
c = consumer_obj.Consumer.get_by_uuid(self.ctx, uuids.consumer)
self.assertEqual(another_proj.id, c.project.id)
self.assertEqual(another_user.id, c.user.id)
@db_api.placement_context_manager.reader
def _get_allocs_with_no_consumer_relationship(ctx):
alloc_to_consumer = sa.outerjoin(
ALLOC_TBL, CONSUMER_TBL,
ALLOC_TBL.c.consumer_id == CONSUMER_TBL.c.uuid)
sel = sa.select([ALLOC_TBL.c.consumer_id])
sel = sel.select_from(alloc_to_consumer)
sel = sel.where(CONSUMER_TBL.c.id.is_(None))
return ctx.session.execute(sel).fetchall()
# NOTE(jaypipes): The tb.PlacementDbBaseTestCase creates a project and user
# which is why we don't base off that. We want a completely bare DB for this
# test.
class CreateIncompleteConsumersTestCase(base.TestCase):
def setUp(self):
super(CreateIncompleteConsumersTestCase, self).setUp()
self.ctx = self.context
@db_api.placement_context_manager.writer
def _create_incomplete_allocations(self, ctx, num_of_consumer_allocs=1):
# Create some allocations with consumers that don't exist in the
# consumers table to represent old allocations that we expect to be
# "cleaned up" with consumers table records that point to the sentinel
# project/user records.
c1_missing_uuid = uuids.c1_missing
c2_missing_uuid = uuids.c2_missing
c3_missing_uuid = uuids.c3_missing
for c_uuid in (c1_missing_uuid, c2_missing_uuid, c3_missing_uuid):
# Create $num_of_consumer_allocs allocations per consumer with
# different resource classes.
for resource_class_id in range(num_of_consumer_allocs):
ins_stmt = ALLOC_TBL.insert().values(
resource_provider_id=1,
resource_class_id=resource_class_id,
consumer_id=c_uuid, used=1)
ctx.session.execute(ins_stmt)
# Verify there are no records in the projects/users table
project_count = ctx.session.scalar(
sa.select([sa.func.count('*')]).select_from(PROJECT_TBL))
self.assertEqual(0, project_count)
user_count = ctx.session.scalar(
sa.select([sa.func.count('*')]).select_from(USER_TBL))
self.assertEqual(0, user_count)
# Verify there are no consumer records for the missing consumers
sel = CONSUMER_TBL.select(
CONSUMER_TBL.c.uuid.in_([c1_missing_uuid, c2_missing_uuid]))
res = ctx.session.execute(sel).fetchall()
self.assertEqual(0, len(res))
@db_api.placement_context_manager.reader
def _check_incomplete_consumers(self, ctx):
incomplete_project_id = CONF.placement.incomplete_consumer_project_id
# Verify we have a record in projects for the missing sentinel
sel = PROJECT_TBL.select(
PROJECT_TBL.c.external_id == incomplete_project_id)
rec = ctx.session.execute(sel).first()
self.assertEqual(incomplete_project_id, rec['external_id'])
incomplete_proj_id = rec['id']
# Verify we have a record in users for the missing sentinel
incomplete_user_id = CONF.placement.incomplete_consumer_user_id
sel = user_obj.USER_TBL.select(
USER_TBL.c.external_id == incomplete_user_id)
rec = ctx.session.execute(sel).first()
self.assertEqual(incomplete_user_id, rec['external_id'])
incomplete_user_id = rec['id']
# Verify there are records in the consumers table for our old
# allocation records created in the pre-migration setup and that the
# projects and users referenced in those consumer records point to the
# incomplete project/user
sel = CONSUMER_TBL.select(CONSUMER_TBL.c.uuid == uuids.c1_missing)
missing_c1 = ctx.session.execute(sel).first()
self.assertEqual(incomplete_proj_id, missing_c1['project_id'])
self.assertEqual(incomplete_user_id, missing_c1['user_id'])
sel = CONSUMER_TBL.select(CONSUMER_TBL.c.uuid == uuids.c2_missing)
missing_c2 = ctx.session.execute(sel).first()
self.assertEqual(incomplete_proj_id, missing_c2['project_id'])
self.assertEqual(incomplete_user_id, missing_c2['user_id'])
# Ensure there are no more allocations with incomplete consumers
res = _get_allocs_with_no_consumer_relationship(ctx)
self.assertEqual(0, len(res))
def test_create_incomplete_consumers(self):
"""Test the online data migration that creates incomplete consumer
records along with the incomplete consumer project/user records.
"""
self._create_incomplete_allocations(self.ctx)
# We do a "really online" online data migration for incomplete
# consumers when calling AllocationList.get_all_by_consumer_id() and
# AllocationList.get_all_by_resource_provider() and there are still
# incomplete consumer records. So, to simulate a situation where the
# operator has yet to run the nova-manage online_data_migration CLI
# tool completely, we first call
# consumer_obj.create_incomplete_consumers() with a batch size of 1.
# This should mean there will be two allocation records still remaining
# with a missing consumer record (since we create 3 total to begin
# with). We then query the allocations table directly to grab that
# consumer UUID in the allocations table that doesn't refer to a
# consumer table record and call
# AllocationList.get_all_by_consumer_id() with that consumer UUID. This
# should create the remaining missing consumer record "inline" in the
# AllocationList.get_all_by_consumer_id() method.
# After that happens, there should still be a single allocation record
# that is missing a relation to the consumers table. We call the
# AllocationList.get_all_by_resource_provider() method and verify that
# method cleans up the remaining incomplete consumers relationship.
res = consumer_obj.create_incomplete_consumers(self.ctx, 1)
self.assertEqual((1, 1), res)
# Grab the consumer UUID for the allocation record with a
# still-incomplete consumer record.
res = _get_allocs_with_no_consumer_relationship(self.ctx)
self.assertEqual(2, len(res))
still_missing = res[0][0]
rp_obj.AllocationList.get_all_by_consumer_id(self.ctx, still_missing)
# There should still be a single missing consumer relationship. Let's
# grab that and call AllocationList.get_all_by_resource_provider()
# which should clean that last one up for us.
res = _get_allocs_with_no_consumer_relationship(self.ctx)
self.assertEqual(1, len(res))
still_missing = res[0][0]
rp1 = rp_obj.ResourceProvider(self.ctx, id=1)
rp_obj.AllocationList.get_all_by_resource_provider(self.ctx, rp1)
# get_all_by_resource_provider() should have auto-completed the still
# missing consumer record and _check_incomplete_consumers() should
# assert correctly that there are no more incomplete consumer records.
self._check_incomplete_consumers(self.ctx)
res = consumer_obj.create_incomplete_consumers(self.ctx, 10)
self.assertEqual((0, 0), res)
def test_create_incomplete_consumers_multiple_allocs_per_consumer(self):
"""Tests that missing consumer records are created when listing
allocations against a resource provider or running the online data
migration routine when the consumers have multiple allocations on the
same provider.
"""
self._create_incomplete_allocations(self.ctx, num_of_consumer_allocs=2)
# Run the online data migration to migrate one consumer. The batch size
# needs to be large enough to hit more than one consumer for this test
# where each consumer has two allocations.
res = consumer_obj.create_incomplete_consumers(self.ctx, 2)
self.assertEqual((2, 2), res)
# Migrate the rest by listing allocations on the resource provider.
rp1 = rp_obj.ResourceProvider(self.ctx, id=1)
rp_obj.AllocationList.get_all_by_resource_provider(self.ctx, rp1)
self._check_incomplete_consumers(self.ctx)
res = consumer_obj.create_incomplete_consumers(self.ctx, 10)
self.assertEqual((0, 0), res)
class DeleteConsumerIfNoAllocsTestCase(tb.PlacementDbBaseTestCase):
def test_delete_consumer_if_no_allocs(self):
"""AllocationList.replace_all() should attempt to delete consumers that
no longer have any allocations. Due to the REST API not having any way
to query for consumers directly (only via the GET
/allocations/{consumer_uuid} endpoint which returns an empty dict even
when no consumer record exists for the {consumer_uuid}) we need to do
this functional test using only the object layer.
"""
# We will use two consumers in this test, only one of which will get
# all of its allocations deleted in a transaction (and we expect that
# consumer record to be deleted)
c1 = consumer_obj.Consumer(
self.ctx, uuid=uuids.consumer1, user=self.user_obj,
project=self.project_obj)
c1.create()
c2 = consumer_obj.Consumer(
self.ctx, uuid=uuids.consumer2, user=self.user_obj,
project=self.project_obj)
c2.create()
# Create some inventory that we will allocate
cn1 = self._create_provider('cn1')
tb.add_inventory(cn1, fields.ResourceClass.VCPU, 8)
tb.add_inventory(cn1, fields.ResourceClass.MEMORY_MB, 2048)
tb.add_inventory(cn1, fields.ResourceClass.DISK_GB, 2000)
# Now allocate some of that inventory to two different consumers
allocs = [
rp_obj.Allocation(
self.ctx, consumer=c1, resource_provider=cn1,
resource_class=fields.ResourceClass.VCPU, used=1),
rp_obj.Allocation(
self.ctx, consumer=c1, resource_provider=cn1,
resource_class=fields.ResourceClass.MEMORY_MB, used=512),
rp_obj.Allocation(
self.ctx, consumer=c2, resource_provider=cn1,
resource_class=fields.ResourceClass.VCPU, used=1),
rp_obj.Allocation(
self.ctx, consumer=c2, resource_provider=cn1,
resource_class=fields.ResourceClass.MEMORY_MB, used=512),
]
alloc_list = rp_obj.AllocationList(self.ctx, objects=allocs)
alloc_list.replace_all()
# Validate that we have consumer records for both consumers
for c_uuid in (uuids.consumer1, uuids.consumer2):
c_obj = consumer_obj.Consumer.get_by_uuid(self.ctx, c_uuid)
self.assertIsNotNone(c_obj)
# OK, now "remove" the allocation for consumer2 by setting the used
# value for both allocated resources to 0 and re-running the
# AllocationList.replace_all(). This should end up deleting the
# consumer record for consumer2
allocs = [
rp_obj.Allocation(
self.ctx, consumer=c2, resource_provider=cn1,
resource_class=fields.ResourceClass.VCPU, used=0),
rp_obj.Allocation(
self.ctx, consumer=c2, resource_provider=cn1,
resource_class=fields.ResourceClass.MEMORY_MB, used=0),
]
alloc_list = rp_obj.AllocationList(self.ctx, objects=allocs)
alloc_list.replace_all()
# consumer1 should still exist...
c_obj = consumer_obj.Consumer.get_by_uuid(self.ctx, uuids.consumer1)
self.assertIsNotNone(c_obj)
# but not consumer2...
self.assertRaises(
exception.NotFound, consumer_obj.Consumer.get_by_uuid,
self.ctx, uuids.consumer2)
# DELETE /allocations/{consumer_uuid} is the other place where we
# delete all allocations for a consumer. Let's delete all for consumer1
# and check that the consumer record is deleted
alloc_list = rp_obj.AllocationList.get_all_by_consumer_id(
self.ctx, uuids.consumer1)
alloc_list.delete_all()
# consumer1 should no longer exist in the DB since we just deleted all
# of its allocations
self.assertRaises(
exception.NotFound, consumer_obj.Consumer.get_by_uuid,
self.ctx, uuids.consumer1)

nova/tests/functional/api/openstack/placement/db/test_project.py (31)

@@ -1,31 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_utils.fixture import uuidsentinel as uuids
from nova.api.openstack.placement import exception
from nova.api.openstack.placement.objects import project as project_obj
from nova.tests.functional.api.openstack.placement.db import test_base as tb
class ProjectTestCase(tb.PlacementDbBaseTestCase):
def test_non_existing_project(self):
self.assertRaises(
exception.ProjectNotFound, project_obj.Project.get_by_external_id,
self.ctx, uuids.non_existing_project)
def test_create_and_get(self):
p = project_obj.Project(self.ctx, external_id='another-project')
p.create()
p = project_obj.Project.get_by_external_id(self.ctx, 'another-project')
# Project ID == 1 is fake-project created in setup
self.assertEqual(2, p.id)
self.assertRaises(exception.ProjectExists, p.create)

nova/tests/functional/api/openstack/placement/db/test_reshape.py (359)

@@ -1,359 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_utils.fixture import uuidsentinel as uuids
from nova.api.openstack.placement import exception
from nova.api.openstack.placement.objects import consumer as consumer_obj
from nova.api.openstack.placement.objects import resource_provider as rp_obj
from nova.tests.functional.api.openstack.placement.db import test_base as tb
def alloc_for_rc(alloc_list, rc):
for alloc in alloc_list:
if alloc.resource_class == rc:
return alloc
class ReshapeTestCase(tb.PlacementDbBaseTestCase):
"""Test 'replace the world' reshape transaction."""
def test_reshape(self):
"""We set up the following scenario:
BEFORE: single compute node setup
A single compute node with:
- VCPU, MEMORY_MB, DISK_GB inventory
- Two instances consuming CPU, RAM and DISK from that compute node
AFTER: hierarchical + shared storage setup
A compute node parent provider with:
- MEMORY_MB
Two NUMA node child providers containing:
- VCPU
Shared storage provider with:
- DISK_GB
Both instances have their resources split among the providers and
shared storage accordingly
"""
# First create our consumers
i1_uuid = uuids.instance1
i1_consumer = consumer_obj.Consumer(
self.ctx, uuid=i1_uuid, user=self.user_obj,
project=self.project_obj)
i1_consumer.create()
i2_uuid = uuids.instance2
i2_consumer = consumer_obj.Consumer(
self.ctx, uuid=i2_uuid, user=self.user_obj,
project=self.project_obj)
i2_consumer.create()
cn1 = self._create_provider('cn1')
tb.add_inventory(cn1, 'VCPU', 16)
tb.add_inventory(cn1, 'MEMORY_MB', 32768)
tb.add_inventory(cn1, 'DISK_GB', 1000)
# Allocate both instances against the single compute node
for consumer in (i1_consumer, i2_consumer):
allocs = [
rp_obj.Allocation(
self.ctx, resource_provider=cn1,
resource_class='VCPU', consumer=consumer, used=2),
rp_obj.Allocation(
self.ctx, resource_provider=cn1,
resource_class='MEMORY_MB', consumer=consumer, used=1024),
rp_obj.Allocation(
self.ctx, resource_provider=cn1,
resource_class='DISK_GB', consumer=consumer, used=100),
]
alloc_list = rp_obj.AllocationList(self.ctx, objects=allocs)
alloc_list.replace_all()
# Verify we have the allocations we expect for the BEFORE scenario
before_allocs_i1 = rp_obj.AllocationList.get_all_by_consumer_id(
self.ctx, i1_uuid)
self.assertEqual(3, len(before_allocs_i1))
self.assertEqual(cn1.uuid, before_allocs_i1[0].resource_provider.uuid)
before_allocs_i2 = rp_obj.AllocationList.get_all_by_consumer_id(
self.ctx, i2_uuid)
self.assertEqual(3, len(before_allocs_i2))
self.assertEqual(cn1.uuid, before_allocs_i2[2].resource_provider.uuid)
# Before we issue the actual reshape() call, we need to first create
# the child providers and sharing storage provider. These are actions
# that the virt driver or external agent is responsible for performing
# *before* attempting any reshape activity.
cn1_numa0 = self._create_provider('cn1_numa0', parent=cn1.uuid)
cn1_numa1 = self._create_provider('cn1_numa1', parent=cn1.uuid)
ss = self._create_provider('ss')
# OK, now emulate the call to POST /reshaper that will be triggered by
# a virt driver wanting to replace the world and change its modeling
# from a single provider to a nested provider tree along with a sharing
# storage provider.
after_inventories = {
# cn1 keeps the RAM only
cn1: rp_obj.InventoryList(self.ctx, objects=[
rp_obj.Inventory(
self.ctx, resource_provider=cn1,
resource_class='MEMORY_MB', total=32768, reserved=0,
max_unit=32768, min_unit=1, step_size=1,
allocation_ratio=1.0),
]),
# each NUMA node gets half of the CPUs
cn1_numa0: rp_obj.InventoryList(self.ctx, objects=[
rp_obj.Inventory(
self.ctx, resource_provider=cn1_numa0,
resource_class='VCPU', total=8, reserved=0,
max_unit=8, min_unit=1, step_size=1,
allocation_ratio=1.0),
]),
cn1_numa1: rp_obj.InventoryList(self.ctx, objects=[
rp_obj.Inventory(
self.ctx, resource_provider=cn1_numa1,
resource_class='VCPU', total=8, reserved=0,
max_unit=8, min_unit=1, step_size=1,
allocation_ratio=1.0),
]),
# The sharing provider gets a bunch of disk
ss: rp_obj.InventoryList(self.ctx, objects=[
rp_obj.Inventory(
self.ctx, resource_provider=ss,
resource_class='DISK_GB', total=100000, reserved=0,
max_unit=1000, min_unit=1, step_size=1,
allocation_ratio=1.0),
]),
}
# We do a fetch from the DB for each instance to get its latest
# generation. This would be done by the resource tracker or scheduler
# report client before issuing the call to reshape() because the
# consumers representing the two instances above will have had their
# generations incremented in the original call to PUT
# /allocations/{consumer_uuid}
i1_consumer = consumer_obj.Consumer.get_by_uuid(self.ctx, i1_uuid)
i2_consumer = consumer_obj.Consumer.get_by_uuid(self.ctx, i2_uuid)
after_allocs = rp_obj.AllocationList(self.ctx, objects=[
# instance1 gets VCPU from NUMA0, MEMORY_MB from cn1 and DISK_GB
# from the sharing storage provider
rp_obj.Allocation(
self.ctx, resource_provider=cn1_numa0, resource_class='VCPU',
consumer=i1_consumer, used=2),
rp_obj.Allocation(
self.ctx, resource_provider=cn1, resource_class='MEMORY_MB',
consumer=i1_consumer, used=1024),
rp_obj.Allocation(
self.ctx, resource_provider=ss, resource_class='DISK_GB',
consumer=i1_consumer, used=100),
# instance2 gets VCPU from NUMA1, MEMORY_MB from cn1 and DISK_GB
# from the sharing storage provider
rp_obj.Allocation(
self.ctx, resource_provider=cn1_numa1, resource_class='VCPU',
consumer=i2_consumer, used=2),
rp_obj.Allocation(
self.ctx, resource_provider=cn1, resource_class='MEMORY_MB',
consumer=i2_consumer, used=1024),
rp_obj.Allocation(
self.ctx, resource_provider=ss, resource_class='DISK_GB',
consumer=i2_consumer, used=100),
])
rp_obj.reshape(self.ctx, after_inventories, after_allocs)
# Verify that the inventories have been moved to the appropriate
# providers in the AFTER scenario
# The root compute node should only have MEMORY_MB, nothing else
cn1_inv = rp_obj.InventoryList.get_all_by_resource_provider(
self.ctx, cn1)
self.assertEqual(1, len(cn1_inv))
self.assertEqual('MEMORY_MB', cn1_inv[0].resource_class)
self.assertEqual(32768, cn1_inv[0].total)
# Each NUMA node should only have half the original VCPU, nothing else
numa0_inv = rp_obj.InventoryList.get_all_by_resource_provider(
self.ctx, cn1_numa0)
self.assertEqual(1, len(numa0_inv))
self.assertEqual('VCPU', numa0_inv[0].resource_class)
self.assertEqual(8, numa0_inv[0].total)
numa1_inv = rp_obj.InventoryList.get_all_by_resource_provider(
self.ctx, cn1_numa1)
self.assertEqual(1, len(numa1_inv))
self.assertEqual('VCPU', numa1_inv[0].resource_class)
self.assertEqual(8, numa1_inv[0].total)
# The sharing storage provider should only have DISK_GB, nothing else
ss_inv = rp_obj.InventoryList.get_all_by_resource_provider(
self.ctx, ss)
self.assertEqual(1, len(ss_inv))
self.assertEqual('DISK_GB', ss_inv[0].resource_class)
self.assertEqual(100000, ss_inv[0].total)
# Verify we have the allocations we expect for the AFTER scenario
after_allocs_i1 = rp_obj.AllocationList.get_all_by_consumer_id(
self.ctx, i1_uuid)
self.assertEqual(3, len(after_allocs_i1))
# Our VCPU allocation should be in the NUMA0 node
vcpu_alloc = alloc_for_rc(after_allocs_i1, 'VCPU')
self.assertIsNotNone(vcpu_alloc)
self.assertEqual(cn1_numa0.uuid, vcpu_alloc.resource_provider.uuid)
# Our DISK_GB allocation should be in the sharing provider
disk_alloc = alloc_for_rc(after_allocs_i1, 'DISK_GB')
self.assertIsNotNone(disk_alloc)
self.assertEqual(ss.uuid, disk_alloc.resource_provider.uuid)
# And our MEMORY_MB should remain on the root compute node
ram_alloc = alloc_for_rc(after_allocs_i1, 'MEMORY_MB')
self.assertIsNotNone(ram_alloc)
self.assertEqual(cn1.uuid, ram_alloc.resource_provider.uuid)
after_allocs_i2 = rp_obj.AllocationList.get_all_by_consumer_id(
self.ctx, i2_uuid)
self.assertEqual(3, len(after_allocs_i2))
# Our VCPU allocation should be in the NUMA1 node
vcpu_alloc = alloc_for_rc(after_allocs_i2, 'VCPU')
self.assertIsNotNone(vcpu_alloc)
self.assertEqual(cn1_numa1.uuid, vcpu_alloc.resource_provider.uuid)
# Our DISK_GB allocation should be in the sharing provider
disk_alloc = alloc_for_rc(after_allocs_i2, 'DISK_GB')
self.assertIsNotNone(disk_alloc)
self.assertEqual(ss.uuid, disk_alloc.resource_provider.uuid)
# And our MEMORY_MB should remain on the root compute node
ram_alloc = alloc_for_rc(after_allocs_i2, 'MEMORY_MB')
self.assertIsNotNone(ram_alloc)
self.assertEqual(cn1.uuid, ram_alloc.resource_provider.uuid)
def test_reshape_concurrent_inventory_update(self):
"""Valid failure scenario for reshape(). We test a situation where the
virt driver has constructed its "after inventories and allocations"
and sent those to the POST /reshape endpoint. The reshape POST handler
does a quick check of the resource provider generations sent in the
payload and they all check out.
However, right before the call to resource_provider.reshape(), another
thread legitimately changes the inventory of one of the providers
involved in the reshape transaction. We should get a
ConcurrentUpdateDetected in this case.
"""
# First create our consumers
i1_uuid = uuids.instance1
i1_consumer = consumer_obj.Consumer(
self.ctx, uuid=i1_uuid, user=self.user_obj,
project=self.project_obj)
i1_consumer.create()
# then all our original providers
cn1 = self._create_provider('cn1')
tb.add_inventory(cn1, 'VCPU', 16)
tb.add_inventory(cn1, 'MEMORY_MB', 32768)
tb.add_inventory(cn1, 'DISK_GB', 1000)
# Allocate an instance on our compute node
allocs = [
rp_obj.Allocation(
self.ctx, resource_provider=cn1,
resource_class='VCPU', consumer=i1_consumer, used=2),
rp_obj.Allocation(
self.ctx, resource_provider=cn1,
resource_class='MEMORY_MB', consumer=i1_consumer, used=1024),
rp_obj.Allocation(
self.ctx, resource_provider=cn1,
resource_class='DISK_GB', consumer=i1_consumer, used=100),
]
alloc_list = rp_obj.AllocationList(self.ctx, objects=allocs)
alloc_list.replace_all()
# Before we issue the actual reshape() call, we need to first create
# the child providers and sharing storage provider. These are actions
# that the virt driver or external agent is responsible for performing
# *before* attempting any reshape activity.
cn1_numa0 = self._create_provider('cn1_numa0', parent=cn1.uuid)
cn1_numa1 = self._create_provider('cn1_numa1', parent=cn1.uuid)
ss = self._create_provider('ss')
# OK, now emulate the call to POST /reshaper that will be triggered by
# a virt driver wanting to replace the world and change its modeling
# from a single provider to a nested provider tree along with a sharing
# storage provider.
after_inventories = {
# cn1 keeps the RAM only
cn1: rp_obj.InventoryList(self.ctx, objects=[
rp_obj.Inventory(
self.ctx, resource_provider=cn1,
resource_class='MEMORY_MB', total=32768, reserved=0,
max_unit=32768, min_unit=1, step_size=1,
allocation_ratio=1.0),
]),
# each NUMA node gets half of the CPUs
cn1_numa0: rp_obj.InventoryList(self.ctx, objects=[
rp_obj.Inventory(
self.ctx, resource_provider=cn1_numa0,
resource_class='VCPU', total=8, reserved=0,
max_unit=8, min_unit=1, step_size=1,
allocation_ratio=1.0),
]),
cn1_numa1: rp_obj.InventoryList(self.ctx, objects=[
rp_obj.Inventory(
self.ctx, resource_provider=cn1_numa1,
resource_class='VCPU', total=8, reserved=0,
max_unit=8, min_unit=1, step_size=1,
allocation_ratio=1.0),
]),
# The sharing provider gets a bunch of disk
ss: rp_obj.InventoryList(self.ctx, objects=[
rp_obj.Inventory(
self.ctx, resource_provider=ss,
resource_class='DISK_GB', total=100000, reserved=0,
max_unit=1000, min_unit=1, step_size=1,
allocation_ratio=1.0),
]),
}
# We do a fetch from the DB for each instance to get its latest
# generation. This would be done by the resource tracker or scheduler
# report client before issuing the call to reshape() because the
# consumers representing the two instances above will have had their
# generations incremented in the original call to PUT
# /allocations/{consumer_uuid}
i1_consumer = consumer_obj.Consumer.get_by_uuid(self.ctx, i1_uuid)
after_allocs = rp_obj.AllocationList(self.ctx, objects=[
# instance1 gets VCPU from NUMA0, MEMORY_MB from cn1 and DISK_GB
# from the sharing storage provider
rp_obj.Allocation(
self.ctx, resource_provider=cn1_numa0, resource_class='VCPU',
consumer=i1_consumer, used=2),
rp_obj.Allocation(
self.ctx, resource_provider=cn1, resource_class='MEMORY_MB',
consumer=i1_consumer, used=1024),
rp_obj.Allocation(
self.ctx, resource_provider=ss, resource_class='DISK_GB',
consumer=i1_consumer, used=100),
])
# OK, now before we call reshape(), here we emulate another thread
# changing the inventory for the sharing storage provider in between
# the time in the REST handler when the sharing storage provider's
# generation was validated and the actual call to reshape()
ss_threadB = rp_obj.ResourceProvider.get_by_uuid(self.ctx, ss.uuid)
# Reduce the amount of storage to 2000, from 100000.
new_ss_inv = rp_obj.InventoryList(self.ctx, objects=[
rp_obj.Inventory(
self.ctx, resource_provider=ss_threadB,
resource_class='DISK_GB', total=2000, reserved=0,
max_unit=1000, min_unit=1, step_size=1,
allocation_ratio=1.0)])
ss_threadB.set_inventory(new_ss_inv)
# Double check our storage provider's generation is now greater than
# the original storage provider record being sent to reshape()
self.assertGreater(ss_threadB.generation, ss.generation)
# And we should legitimately get a failure now to reshape() due to
# another thread updating one of the involved provider's generations
self.assertRaises(
exception.ConcurrentUpdateDetected,
rp_obj.reshape, self.ctx, after_inventories, after_allocs)

nova/tests/functional/api/openstack/placement/db/test_resource_class_cache.py (145)

@@ -1,145 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import datetime
import mock
from oslo_utils import timeutils
from nova.api.openstack.placement import exception
from nova.api.openstack.placement import resource_class_cache as rc_cache
from nova import rc_fields as fields
from nova.tests.functional.api.openstack.placement import base
class TestResourceClassCache(base.TestCase):
def setUp(self):
super(TestResourceClassCache, self).setUp()
db = self.placement_db
self.context = mock.Mock()
sess_mock = mock.Mock()
sess_mock.connection.side_effect = db.get_engine().connect
self.context.session = sess_mock
@mock.patch('sqlalchemy.select')
def test_rc_cache_std_no_db(self, sel_mock):
"""Test that looking up either an ID or a string in the resource class
cache for a standardized resource class does not result in a DB
call.
"""
cache = rc_cache.ResourceClassCache(self.context)
self.assertEqual('VCPU', cache.string_from_id(0))
self.assertEqual('MEMORY_MB', cache.string_from_id(1))
self.assertEqual(0, cache.id_from_string('VCPU'))
self.assertEqual(1, cache.id_from_string('MEMORY_MB'))
self.assertFalse(sel_mock.called)
def test_standards(self):
cache = rc_cache.ResourceClassCache(self.context)
standards = cache.STANDARDS
self.assertEqual(len(standards), len(fields.ResourceClass.STANDARD))
names = (rc['name'] for rc in standards)
for name in fields.ResourceClass.STANDARD:
self.assertIn(name, names)
cache = rc_cache.ResourceClassCache(self.context)
standards2 = cache.STANDARDS
self.assertEqual(id(standards), id(standards2))
def test_standards_have_time_fields(self):
cache = rc_cache.ResourceClassCache(self.context)
standards = cache.STANDARDS
first_standard = standards[0]
self.assertIn('updated_at', first_standard)
self.assertIn('created_at', first_standard)
self.assertIsNone(first_standard['updated_at'])
self.assertIsNone(first_standard['created_at'])
def test_standard_has_time_fields(self):
cache = rc_cache.ResourceClassCache(self.context)
vcpu_class = cache.all_from_string('VCPU')
expected = {'id': 0, 'name': 'VCPU', 'updated_at': None,
'created_at': None}
self.assertEqual(expected, vcpu_class)
def test_rc_cache_custom(self):
"""Test that non-standard, custom resource classes hit the database and
return appropriate results, caching the results after a single
query.
"""
cache = rc_cache.ResourceClassCache(self.context)
# Haven't added anything to the DB yet, so should raise
# ResourceClassNotFound
self.assertRaises(exception.ResourceClassNotFound,
cache.string_from_id, 1001)
self.assertRaises(exception.ResourceClassNotFound,
cache.id_from_string, "IRON_NFV")
# Now add to the database and verify appropriate results...
with self.context.session.connection() as conn:
ins_stmt = rc_cache._RC_TBL.insert().values(
id=1001,
name='IRON_NFV'
)
conn.execute(ins_stmt)
self.assertEqual('IRON_NFV', cache.string_from_id(1001))
self.assertEqual(1001, cache.id_from_string('IRON_NFV'))
# Try same again and verify we don't hit the DB.
with mock.patch('sqlalchemy.select') as sel_mock:
self.assertEqual('IRON_NFV', cache.string_from_id(1001))
self.assertEqual(1001, cache.id_from_string('IRON_NFV'))
self.assertFalse(sel_mock.called)
# Verify all fields available from all_from_string
iron_nfv_class = cache.all_from_string('IRON_NFV')
self.assertEqual(1001, iron_nfv_class['id'])
self.assertEqual('IRON_NFV', iron_nfv_class['name'])
# updated_at not set on insert
self.assertIsNone(iron_nfv_class['updated_at'])
self.assertIsInstance(iron_nfv_class['created_at'], datetime.datetime)
# Update IRON_NFV (this is a no-op but will set updated_at)
with self.context.session.connection() as conn:
# NOTE(cdent): When using explict SQL that names columns,
# the automatic timestamp handling provided by the oslo_db
# TimestampMixin is not provided. created_at is a default
# but updated_at is an onupdate.
upd_stmt = rc_cache._RC_TBL.update().where(
rc_cache._RC_TBL.c.id == 1001).values(
name='IRON_NFV', updated_at=timeutils.utcnow())
conn.execute(upd_stmt)
# reset cache
cache = rc_cache.ResourceClassCache(self.