
Use external placement in functional tests

Adjust the fixtures used by the functional tests so they
use the placement database and web fixtures defined by placement
code. To avoid making redundant changes, the unit and functional
tests that are solely placement-related are removed, but the
placement code itself is not (yet).

openstack-placement is required by the functional tests. It is not
added to test-requirements as we do not want unit tests to depend
on placement in any way, and we enforce this by not having placement
in the test env.

The tox-siblings mechanism is used to ensure that the placement
requirement is satisfied correctly when a change has a Depends-On.
To make this happen, the functional jobs defined in .zuul.yaml are
updated to require openstack/placement.

tox.ini has to be updated to use an envdir with the same name as the
job; otherwise the tox-siblings Ansible role cannot work.
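
As a rough illustration only (the section name, envdir value, and command
here are assumptions, not quoted from the actual tox.ini change), the
resulting stanza is shaped something like:

    [testenv:functional]
    # The envdir must match the Zuul job name so the tox-siblings
    # Ansible role can locate the virtualenv and reinstall sibling
    # checkouts (such as openstack/placement) into it.
    envdir = {toxworkdir}/functional
    commands =
      stestr --test-path=./nova/tests/functional run {posargs}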

The handling of the placement fixtures is moved out of nova/test.py
into the functional tests that actually use it because we do not
want unit tests (which get the base test class out of test.py) to
have anything to do with placement. This requires adjusting some
test files to use absolute imports.
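
For illustration, a minimal sketch of what a functional test's setup looks
like after the move, assuming the relocated module keeps the
PlacementFixture name (the test class here is hypothetical, not an excerpt
from this change):

    from nova import test
    from nova.tests.functional import fixtures as func_fixtures


    class ExampleFunctionalTest(test.TestCase):
        def setUp(self):
            super(ExampleFunctionalTest, self).setUp()
            # The placement database and web service now come from the
            # external openstack-placement package via the functional-only
            # fixture, instead of from nova/test.py.
            self.useFixture(func_fixtures.PlacementFixture())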

Similarly, a test of the comparison function for the api samples tests
is moved into functional, because it depends on placement functionality.

TestUpgradeCheckResourceProviders in unit.cmd.test_status is moved into
a new test file: nova/tests/functional/test_nova_status.py. This is done
because it requires the PlacementFixture, which is only available to
functional tests. A MonkeyPatch is required in the test to make sure that
the right context managers are used at the right time in the command
itself (otherwise some tables do not exist). In the test itself, to avoid
speaking directly to the placement database, which would require
manipulating the RequestContext objects, resource providers are now
created over the API.
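
A minimal sketch of that API-driven setup (the helper name, client handle,
and microversion below are illustrative assumptions, not an excerpt from
the new test file):

    def _create_provider_with_vcpu(placement_api, name, uuid):
        # Create the resource provider through the placement REST API
        # rather than writing to the placement database directly.
        placement_api.post(
            '/resource_providers', {'name': name, 'uuid': uuid},
            version='1.20')
        # Give it some VCPU inventory so the upgrade check counts it as
        # a provider with resources.
        placement_api.put(
            '/resource_providers/%s/inventories' % uuid,
            {'resource_provider_generation': 0,
             'inventories': {'VCPU': {'total': 8}}},
            version='1.20')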

Co-Authored-By: Balazs Gibizer <balazs.gibizer@ericsson.com>
Change-Id: Idaed39629095f86d24a54334c699a26c218c6593
Author: Chris Dent
Commit: 787bb33606 (tag: 19.0.0.0rc1)
100 changed files with 227 additions and 15513 deletions
  1. +6 -0  .zuul.yaml
  2. +3 -0  nova/cmd/manage.py
  3. +1 -0  nova/cmd/status.py
  4. +0 -10  nova/test.py
  5. +3 -142  nova/tests/fixtures.py
  6. +0 -0  nova/tests/functional/api/openstack/placement/__init__.py
  7. +0 -69  nova/tests/functional/api/openstack/placement/base.py
  8. +0 -0  nova/tests/functional/api/openstack/placement/db/__init__.py
  9. +0 -2800  nova/tests/functional/api/openstack/placement/db/test_allocation_candidates.py
  10. +0 -129  nova/tests/functional/api/openstack/placement/db/test_base.py
  11. +0 -329  nova/tests/functional/api/openstack/placement/db/test_consumer.py
  12. +0 -31  nova/tests/functional/api/openstack/placement/db/test_project.py
  13. +0 -359  nova/tests/functional/api/openstack/placement/db/test_reshape.py
  14. +0 -145  nova/tests/functional/api/openstack/placement/db/test_resource_class_cache.py
  15. +0 -2391  nova/tests/functional/api/openstack/placement/db/test_resource_provider.py
  16. +0 -31  nova/tests/functional/api/openstack/placement/db/test_user.py
  17. +0 -0  nova/tests/functional/api/openstack/placement/fixtures/__init__.py
  18. +0 -81  nova/tests/functional/api/openstack/placement/fixtures/capture.py
  19. +0 -431  nova/tests/functional/api/openstack/placement/fixtures/gabbits.py
  20. +0 -49  nova/tests/functional/api/openstack/placement/fixtures/placement.py
  21. +0 -39  nova/tests/functional/api/openstack/placement/gabbits/aggregate-policy.yaml
  22. +0 -204  nova/tests/functional/api/openstack/placement/gabbits/aggregate.yaml
  23. +0 -77  nova/tests/functional/api/openstack/placement/gabbits/allocation-bad-class.yaml
  24. +0 -141  nova/tests/functional/api/openstack/placement/gabbits/allocation-candidates-member-of.yaml
  25. +0 -18  nova/tests/functional/api/openstack/placement/gabbits/allocation-candidates-policy.yaml
  26. +0 -416  nova/tests/functional/api/openstack/placement/gabbits/allocation-candidates.yaml
  27. +0 -130  nova/tests/functional/api/openstack/placement/gabbits/allocations-1-12.yaml
  28. +0 -152  nova/tests/functional/api/openstack/placement/gabbits/allocations-1-8.yaml
  29. +0 -255  nova/tests/functional/api/openstack/placement/gabbits/allocations-1.28.yaml
  30. +0 -97  nova/tests/functional/api/openstack/placement/gabbits/allocations-bug-1714072.yaml
  31. +0 -71  nova/tests/functional/api/openstack/placement/gabbits/allocations-bug-1778591.yaml
  32. +0 -70  nova/tests/functional/api/openstack/placement/gabbits/allocations-bug-1778743.yaml
  33. +0 -102  nova/tests/functional/api/openstack/placement/gabbits/allocations-bug-1779717.yaml
  34. +0 -76  nova/tests/functional/api/openstack/placement/gabbits/allocations-policy.yaml
  35. +0 -399  nova/tests/functional/api/openstack/placement/gabbits/allocations-post.yaml
  36. +0 -509  nova/tests/functional/api/openstack/placement/gabbits/allocations.yaml
  37. +0 -207  nova/tests/functional/api/openstack/placement/gabbits/basic-http.yaml
  38. +0 -38  nova/tests/functional/api/openstack/placement/gabbits/bug-1674694.yaml
  39. +0 -32  nova/tests/functional/api/openstack/placement/gabbits/confirm-auth.yaml
  40. +0 -47  nova/tests/functional/api/openstack/placement/gabbits/cors.yaml
  41. +0 -41  nova/tests/functional/api/openstack/placement/gabbits/ensure-consumer.yaml
  42. +0 -474  nova/tests/functional/api/openstack/placement/gabbits/granular.yaml
  43. +0 -85  nova/tests/functional/api/openstack/placement/gabbits/inventory-policy.yaml
  44. +0 -812  nova/tests/functional/api/openstack/placement/gabbits/inventory.yaml
  45. +0 -22  nova/tests/functional/api/openstack/placement/gabbits/microversion-bug-1724065.yaml
  46. +0 -90  nova/tests/functional/api/openstack/placement/gabbits/microversion.yaml
  47. +0 -25  nova/tests/functional/api/openstack/placement/gabbits/non-cors.yaml
  48. +0 -20  nova/tests/functional/api/openstack/placement/gabbits/reshaper-policy.yaml
  49. +0 -558  nova/tests/functional/api/openstack/placement/gabbits/reshaper.yaml
  50. +0 -80  nova/tests/functional/api/openstack/placement/gabbits/resource-class-in-use.yaml
  51. +0 -21  nova/tests/functional/api/openstack/placement/gabbits/resource-classes-1-6.yaml
  52. +0 -49  nova/tests/functional/api/openstack/placement/gabbits/resource-classes-1-7.yaml
  53. +0 -117  nova/tests/functional/api/openstack/placement/gabbits/resource-classes-last-modified.yaml
  54. +0 -40  nova/tests/functional/api/openstack/placement/gabbits/resource-classes-policy.yaml
  55. +0 -325  nova/tests/functional/api/openstack/placement/gabbits/resource-classes.yaml
  56. +0 -181  nova/tests/functional/api/openstack/placement/gabbits/resource-provider-aggregates.yaml
  57. +0 -123  nova/tests/functional/api/openstack/placement/gabbits/resource-provider-bug-1779818.yaml
  58. +0 -48  nova/tests/functional/api/openstack/placement/gabbits/resource-provider-duplication.yaml
  59. +0 -106  nova/tests/functional/api/openstack/placement/gabbits/resource-provider-links.yaml
  60. +0 -48  nova/tests/functional/api/openstack/placement/gabbits/resource-provider-policy.yaml
  61. +0 -156  nova/tests/functional/api/openstack/placement/gabbits/resource-provider-resources-query.yaml
  62. +0 -775  nova/tests/functional/api/openstack/placement/gabbits/resource-provider.yaml
  63. +0 -143  nova/tests/functional/api/openstack/placement/gabbits/shared-resources.yaml
  64. +0 -55  nova/tests/functional/api/openstack/placement/gabbits/traits-policy.yaml
  65. +0 -487  nova/tests/functional/api/openstack/placement/gabbits/traits.yaml
  66. +0 -40  nova/tests/functional/api/openstack/placement/gabbits/unicode.yaml
  67. +0 -33  nova/tests/functional/api/openstack/placement/gabbits/usage-policy.yaml
  68. +0 -120  nova/tests/functional/api/openstack/placement/gabbits/usage.yaml
  69. +0 -159  nova/tests/functional/api/openstack/placement/gabbits/with-allocations.yaml
  70. +0 -77  nova/tests/functional/api/openstack/placement/test_direct.py
  71. +0 -44  nova/tests/functional/api/openstack/placement/test_placement_api.py
  72. +0 -50  nova/tests/functional/api/openstack/placement/test_verify_policy.py
  73. +1 -0  nova/tests/functional/api_paste_fixture.py
  74. +0 -2  nova/tests/functional/api_sample_tests/api_sample_base.py
  75. +0 -0  nova/tests/functional/api_sample_tests/test_compare_result.py
  76. +2 -2  nova/tests/functional/compute/test_resource_tracker.py
  77. +150 -0  nova/tests/functional/fixtures.py
  78. +3 -3  nova/tests/functional/integrated_helpers.py
  79. +2 -2  nova/tests/functional/libvirt/base.py
  80. +2 -1  nova/tests/functional/notification_sample_tests/notification_sample_base.py
  81. +2 -1  nova/tests/functional/regressions/test_bug_1595962.py
  82. +2 -1  nova/tests/functional/regressions/test_bug_1671648.py
  83. +2 -1  nova/tests/functional/regressions/test_bug_1675570.py
  84. +18 -4  nova/tests/functional/regressions/test_bug_1679750.py
  85. +2 -1  nova/tests/functional/regressions/test_bug_1682693.py
  86. +2 -1  nova/tests/functional/regressions/test_bug_1702454.py
  87. +2 -1  nova/tests/functional/regressions/test_bug_1713783.py
  88. +2 -1  nova/tests/functional/regressions/test_bug_1718455.py
  89. +2 -1  nova/tests/functional/regressions/test_bug_1718512.py
  90. +2 -1  nova/tests/functional/regressions/test_bug_1719730.py
  91. +2 -1  nova/tests/functional/regressions/test_bug_1735407.py
  92. +2 -1  nova/tests/functional/regressions/test_bug_1741307.py
  93. +2 -1  nova/tests/functional/regressions/test_bug_1746483.py
  94. +2 -1  nova/tests/functional/regressions/test_bug_1764883.py
  95. +2 -1  nova/tests/functional/regressions/test_bug_1780373.py
  96. +2 -1  nova/tests/functional/regressions/test_bug_1781710.py
  97. +2 -1  nova/tests/functional/regressions/test_bug_1784353.py
  98. +2 -1  nova/tests/functional/regressions/test_bug_1797580.py
  99. +2 -1  nova/tests/functional/regressions/test_bug_1806064.py
  100. +0 -0  nova/tests/functional/test_aggregates.py

.zuul.yaml (+6 -0)

@@ -48,6 +48,8 @@
       Run tox-based functional tests for the OpenStack Nova project with Nova
       specific irrelevant-files list. Uses tox with the ``functional``
       environment.
+    required-projects:
+      - openstack/placement
     irrelevant-files: &functional-irrelevant-files
       - ^.*\.rst$
       - ^api-.*$
@@ -56,6 +58,7 @@
       - ^releasenotes/.*$
     vars:
       tox_envlist: functional
+      tox_install_siblings: true
     timeout: 3600

 - job:
@@ -65,9 +68,12 @@
       Run tox-based functional tests for the OpenStack Nova project
       under cPython version 3.5. with Nova specific irrelevant-files list.
       Uses tox with the ``functional-py35`` environment.
+    required-projects:
+      - openstack/placement
     irrelevant-files: *functional-irrelevant-files
     vars:
       tox_envlist: functional-py35
+      tox_install_siblings: true
     timeout: 3600

 - job:

nova/cmd/manage.py (+3 -0)

@@ -45,6 +45,7 @@ import six
 import six.moves.urllib.parse as urlparse
 from sqlalchemy.engine import url as sqla_url
 
+# FIXME(cdent): This is a speedbump in the extraction process
 from nova.api.openstack.placement.objects import consumer as consumer_obj
 from nova.cmd import common as cmd_common
 from nova.compute import api as compute_api
@@ -416,6 +417,7 @@ class DbCommands(object):
         # need to be populated if it was not specified during boot time.
         instance_obj.populate_missing_availability_zones,
         # Added in Rocky
+        # FIXME(cdent): This is a factor that needs to be addressed somehow
         consumer_obj.create_incomplete_consumers,
         # Added in Rocky
         instance_mapping_obj.populate_queued_for_delete,
@@ -1987,6 +1989,7 @@ class PlacementCommands(object):
 
         return num_processed
 
+    # FIXME(cdent): This needs to be addressed as part of extraction.
     @action_description(
         _("Iterates over non-cell0 cells looking for instances which do "
           "not have allocations in the Placement service, or have incomplete "

nova/cmd/status.py (+1 -0)

@@ -251,6 +251,7 @@ class UpgradeCommands(object):
         # and resource class, so we can simply count the number of inventories
         # records for the given resource class and those will uniquely identify
         # the number of resource providers we care about.
+        # FIXME(cdent): This will be a different project soon.
         meta = MetaData(bind=placement_db.get_placement_engine())
         inventories = Table('inventories', meta, autoload=True)
         return select([sqlfunc.count()]).select_from(

nova/test.py (+0 -10)

@@ -49,7 +49,6 @@ from oslotest import moxstubout
 import six
 import testtools
 
-from nova.api.openstack.placement.objects import resource_provider
 from nova import context
 from nova.db import api as db
 from nova import exception
@@ -260,7 +259,6 @@ class TestCase(testtools.TestCase):
             # NOTE(danms): Full database setup involves a cell0, cell1,
             # and the relevant mappings.
             self.useFixture(nova_fixtures.Database(database='api'))
-            self.useFixture(nova_fixtures.Database(database='placement'))
             self._setup_cells()
             self.useFixture(nova_fixtures.DefaultFlavorsFixture())
         elif not self.USES_DB_SELF:
@@ -281,12 +279,6 @@ class TestCase(testtools.TestCase):
         # caching of that value.
         utils._IS_NEUTRON = None
 
-        # Reset the traits sync and rc cache flags
-        def _reset_traits():
-            resource_provider._TRAITS_SYNCED = False
-        _reset_traits()
-        self.addCleanup(_reset_traits)
-        resource_provider._RC_CACHE = None
         # Reset the global QEMU version flag.
         images.QEMU_VERSION = None
 
@@ -296,8 +288,6 @@ class TestCase(testtools.TestCase):
         self.addCleanup(self._clear_attrs)
         self.useFixture(fixtures.EnvironmentVariable('http_proxy'))
         self.policy = self.useFixture(policy_fixture.PolicyFixture())
-        self.placement_policy = self.useFixture(
-            policy_fixture.PlacementPolicyFixture())
 
         self.useFixture(nova_fixtures.PoisonFunctions())
 

nova/tests/fixtures.py (+3 -142)

@@ -26,8 +26,6 @@ import random
 import warnings
 
 import fixtures
-from keystoneauth1 import adapter as ka
-from keystoneauth1 import session as ks
 import mock
 from neutronclient.common import exceptions as neutron_client_exc
 from oslo_concurrency import lockutils
@@ -41,7 +39,6 @@ from requests import adapters
 from wsgi_intercept import interceptor
 
 from nova.api.openstack.compute import tenant_networks
-from nova.api.openstack.placement import db_api as placement_db
 from nova.api.openstack import wsgi_app
 from nova.api import wsgi
 from nova.compute import rpcapi as compute_rpcapi
@@ -57,12 +54,11 @@ from nova import quota as nova_quota
 from nova import rpc
 from nova import service
 from nova.tests.functional.api import client
-from nova.tests.functional.api.openstack.placement.fixtures import placement
 
 _TRUE_VALUES = ('True', 'true', '1', 'yes')
 
 CONF = cfg.CONF
-DB_SCHEMA = {'main': "", 'api': "", 'placement': ""}
+DB_SCHEMA = {'main': "", 'api': ""}
 SESSION_CONFIGURED = False
 
 
@@ -631,7 +627,7 @@ class Database(fixtures.Fixture):
     def __init__(self, database='main', connection=None):
         """Create a database fixture.
 
-        :param database: The type of database, 'main', 'api' or 'placement'
+        :param database: The type of database, 'main', or 'api'
         :param connection: The connection string to use
         """
         super(Database, self).__init__()
@@ -640,7 +636,6 @@
         global SESSION_CONFIGURED
         if not SESSION_CONFIGURED:
             session.configure(CONF)
-            placement_db.configure(CONF)
            SESSION_CONFIGURED = True
         self.database = database
         if database == 'main':
@@ -652,8 +647,6 @@
                self.get_engine = session.get_engine
         elif database == 'api':
             self.get_engine = session.get_api_engine
-        elif database == 'placement':
-            self.get_engine = placement_db.get_placement_engine
 
     def _cache_schema(self):
         global DB_SCHEMA
@@ -687,7 +680,7 @@ class DatabaseAtVersion(fixtures.Fixture):
         """Create a database fixture.
 
         :param version: Max version to sync to (or None for current)
-        :param database: The type of database, 'main', 'api', 'placement'
+        :param database: The type of database, 'main', 'api'
         """
         super(DatabaseAtVersion, self).__init__()
         self.database = database
@@ -696,8 +689,6 @@
            self.get_engine = session.get_engine
         elif database == 'api':
            self.get_engine = session.get_api_engine
-        elif database == 'placement':
-            self.get_engine = placement_db.get_placement_engine
 
     def cleanup(self):
         engine = self.get_engine()
@@ -1853,136 +1844,6 @@ class CinderFixtureNewAttachFlow(fixtures.Fixture):
                            fake_get_all_volume_types)
 
 
-class PlacementApiClient(object):
-    def __init__(self, placement_fixture):
-        self.fixture = placement_fixture
-
-    def get(self, url, **kwargs):
-        return client.APIResponse(self.fixture._fake_get(None, url, **kwargs))
-
-    def put(self, url, body, **kwargs):
-        return client.APIResponse(
-            self.fixture._fake_put(None, url, body, **kwargs))
-
-    def post(self, url, body, **kwargs):
-        return client.APIResponse(
-            self.fixture._fake_post(None, url, body, **kwargs))
-
-
-class PlacementFixture(placement.PlacementFixture):
-    """A fixture to placement operations.
-
-    Runs a local WSGI server bound on a free port and having the Placement
-    application with NoAuth middleware.
-    This fixture also prevents calling the ServiceCatalog for getting the
-    endpoint.
-
-    It's possible to ask for a specific token when running the fixtures so
-    all calls would be passing this token.
-
-    Most of the time users of this fixture will also want the placement
-    database fixture (called first) as well:
-
-        self.useFixture(nova_fixtures.Database(database='placement'))
-
-    That is left as a manual step so tests may have fine grain control, and
-    because it is likely that these fixtures will continue to evolve as
-    the separation of nova and placement continues.
-    """
-
-    def setUp(self):
-        super(PlacementFixture, self).setUp()
-
-        # Turn off manipulation of socket_options in TCPKeepAliveAdapter
-        # to keep wsgi-intercept happy. Replace it with the method
-        # from its superclass.
-        self.useFixture(fixtures.MonkeyPatch(
-            'keystoneauth1.session.TCPKeepAliveAdapter.init_poolmanager',
-            adapters.HTTPAdapter.init_poolmanager))
-
-        self._client = ka.Adapter(ks.Session(auth=None), raise_exc=False)
-        # NOTE(sbauza): We need to mock the scheduler report client because
-        # we need to fake Keystone by directly calling the endpoint instead
-        # of looking up the service catalog, like we did for the OSAPIFixture.
-        self.useFixture(fixtures.MonkeyPatch(
-            'nova.scheduler.client.report.SchedulerReportClient.get',
-            self._fake_get))
-        self.useFixture(fixtures.MonkeyPatch(
-            'nova.scheduler.client.report.SchedulerReportClient.post',
-            self._fake_post))
-        self.useFixture(fixtures.MonkeyPatch(
-            'nova.scheduler.client.report.SchedulerReportClient.put',
-            self._fake_put))
-        self.useFixture(fixtures.MonkeyPatch(
-            'nova.scheduler.client.report.SchedulerReportClient.delete',
-            self._fake_delete))
-
-        self.api = PlacementApiClient(self)
-
-    @staticmethod
-    def _update_headers_with_version(headers, **kwargs):
-        version = kwargs.get("version")
-        if version is not None:
-            # TODO(mriedem): Perform some version discovery at some point.
-            headers.update({
-                'OpenStack-API-Version': 'placement %s' % version
-            })
-
-    def _fake_get(self, *args, **kwargs):
-        (url,) = args[1:]
-        # TODO(sbauza): The current placement NoAuthMiddleware returns a 401
-        # in case a token is not provided. We should change that by creating
-        # a fake token so we could remove adding the header below.
-        headers = {'x-auth-token': self.token}
-        self._update_headers_with_version(headers, **kwargs)
-        return self._client.get(
-            url,
-            endpoint_override=self.endpoint,
-            headers=headers)
-
-    def _fake_post(self, *args, **kwargs):
-        (url, data) = args[1:]
-        # NOTE(sdague): using json= instead of data= sets the
-        # media type to application/json for us. Placement API is
-        # more sensitive to this than other APIs in the OpenStack
-        # ecosystem.
-        # TODO(sbauza): The current placement NoAuthMiddleware returns a 401
-        # in case a token is not provided. We should change that by creating
-        # a fake token so we could remove adding the header below.
-        headers = {'x-auth-token': self.token}
-        self._update_headers_with_version(headers, **kwargs)
-        return self._client.post(
-            url, json=data,
-            endpoint_override=self.endpoint,
-            headers=headers)
-
-    def _fake_put(self, *args, **kwargs):
-        (url, data) = args[1:]
-        # NOTE(sdague): using json= instead of data= sets the
-        # media type to application/json for us. Placement API is
-        # more sensitive to this than other APIs in the OpenStack
-        # ecosystem.
-        # TODO(sbauza): The current placement NoAuthMiddleware returns a 401
-        # in case a token is not provided. We should change that by creating
-        # a fake token so we could remove adding the header below.
-        headers = {'x-auth-token': self.token}
-        self._update_headers_with_version(headers, **kwargs)
-        return self._client.put(
-            url, json=data,
-            endpoint_override=self.endpoint,
-            headers=headers)
-
-    def _fake_delete(self, *args, **kwargs):
-        (url,) = args[1:]
-        # TODO(sbauza): The current placement NoAuthMiddleware returns a 401
-        # in case a token is not provided. We should change that by creating
-        # a fake token so we could remove adding the header below.
-        return self._client.delete(
-            url,
-            endpoint_override=self.endpoint,
-            headers={'x-auth-token': self.token})
-
-
 class UnHelperfulClientChannel(privsep_daemon._ClientChannel):
     def __init__(self, context):
         raise Exception('You have attempted to start a privsep helper. '

nova/tests/functional/api/openstack/placement/__init__.py (+0 -0)


nova/tests/functional/api/openstack/placement/base.py (+0 -69)

@@ -1,69 +0,0 @@
-#    Licensed under the Apache License, Version 2.0 (the "License"); you may
-#    not use this file except in compliance with the License. You may obtain
-#    a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-#    Unless required by applicable law or agreed to in writing, software
-#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-#    License for the specific language governing permissions and limitations
-#    under the License.
-
-from oslo_config import cfg
-from oslo_config import fixture as config_fixture
-from oslotest import output
-import testtools
-
-from nova.api.openstack.placement import context
-from nova.api.openstack.placement import deploy
-from nova.api.openstack.placement.objects import resource_provider
-from nova.tests import fixtures
-from nova.tests.functional.api.openstack.placement.fixtures import capture
-from nova.tests.unit import policy_fixture
-
-
-CONF = cfg.CONF
-
-
-class TestCase(testtools.TestCase):
-    """A base test case for placement functional tests.
-
-    Sets up minimum configuration for database and policy handling
-    and establishes the placement database.
-    """
-
-    def setUp(self):
-        super(TestCase, self).setUp()
-
-        # Manage required configuration
-        conf_fixture = self.useFixture(config_fixture.Config(CONF))
-        # The Database fixture will get confused if only one of the databases
-        # is configured.
-        for group in ('placement_database', 'api_database', 'database'):
-            conf_fixture.config(
-                group=group,
-                connection='sqlite://',
-                sqlite_synchronous=False)
-        CONF([], default_config_files=[])
-
-        self.useFixture(policy_fixture.PlacementPolicyFixture())
-
-        self.useFixture(capture.Logging())
-        self.useFixture(output.CaptureOutput())
-        # Filter ignorable warnings during test runs.
-        self.useFixture(capture.WarningsFixture())
-
-        self.placement_db = self.useFixture(
-            fixtures.Database(database='placement'))
-        self._reset_database()
-        self.context = context.RequestContext()
-        # Do database syncs, such as traits sync.
-        deploy.update_database()
-        self.addCleanup(self._reset_database)
-
-    @staticmethod
-    def _reset_database():
-        """Reset database sync flags to base state."""
-        resource_provider._TRAITS_SYNCED = False
-        resource_provider._RC_CACHE = None

nova/tests/functional/api/openstack/placement/db/__init__.py (+0 -0)


nova/tests/functional/api/openstack/placement/db/test_allocation_candidates.py (+0 -2800)
(File diff suppressed because it is too large)


nova/tests/functional/api/openstack/placement/db/test_base.py (+0 -129)

@@ -1,129 +0,0 @@
1
-#    Licensed under the Apache License, Version 2.0 (the "License"); you may
2
-#    not use this file except in compliance with the License. You may obtain
3
-#    a copy of the License at
4
-#
5
-#         http://www.apache.org/licenses/LICENSE-2.0
6
-#
7
-#    Unless required by applicable law or agreed to in writing, software
8
-#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
9
-#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
10
-#    License for the specific language governing permissions and limitations
11
-#    under the License.
12
-"""Base class and convenience utilities for functional placement tests."""
13
-
14
-from oslo_utils.fixture import uuidsentinel as uuids
15
-from oslo_utils import uuidutils
16
-
17
-from nova.api.openstack.placement import exception
18
-from nova.api.openstack.placement.objects import consumer as consumer_obj
19
-from nova.api.openstack.placement.objects import project as project_obj
20
-from nova.api.openstack.placement.objects import resource_provider as rp_obj
21
-from nova.api.openstack.placement.objects import user as user_obj
22
-from nova.tests.functional.api.openstack.placement import base
23
-
24
-
25
-def create_provider(context, name, *aggs, **kwargs):
26
-    parent = kwargs.get('parent')
27
-    root = kwargs.get('root')
28
-    uuid = kwargs.get('uuid', getattr(uuids, name))
29
-    rp = rp_obj.ResourceProvider(context, name=name, uuid=uuid)
30
-    if parent:
31
-        rp.parent_provider_uuid = parent
32
-    if root:
33
-        rp.root_provider_uuid = root
34
-    rp.create()
35
-    if aggs:
36
-        rp.set_aggregates(aggs)
37
-    return rp
38
-
39
-
40
-def add_inventory(rp, rc, total, **kwargs):
41
-    kwargs.setdefault('max_unit', total)
42
-    inv = rp_obj.Inventory(rp._context, resource_provider=rp,
43
-                           resource_class=rc, total=total, **kwargs)
44
-    inv.obj_set_defaults()
45
-    rp.add_inventory(inv)
46
-    return inv
47
-
48
-
49
-def set_traits(rp, *traits):
50
-    tlist = []
51
-    for tname in traits:
52
-        try:
53
-            trait = rp_obj.Trait.get_by_name(rp._context, tname)
54
-        except exception.TraitNotFound:
55
-            trait = rp_obj.Trait(rp._context, name=tname)
56
-            trait.create()
57
-        tlist.append(trait)
58
-    rp.set_traits(rp_obj.TraitList(objects=tlist))
59
-    return tlist
60
-
61
-
62
-def ensure_consumer(ctx, user, project, consumer_id=None):
63
-    # NOTE(efried): If not specified, use a random consumer UUID - we don't
64
-    # want to override any existing allocations from the test case.
65
-    consumer_id = consumer_id or uuidutils.generate_uuid()
66
-    try:
67
-        consumer = consumer_obj.Consumer.get_by_uuid(ctx, consumer_id)
68
-    except exception.NotFound:
69
-        consumer = consumer_obj.Consumer(
70
-            ctx, uuid=consumer_id, user=user, project=project)
71
-        consumer.create()
72
-    return consumer
73
-
74
-
75
-def set_allocation(ctx, rp, consumer, rc_used_dict):
76
-    alloc = [
77
-        rp_obj.Allocation(
78
-            ctx, resource_provider=rp, resource_class=rc,
79
-            consumer=consumer, used=used)
80
-        for rc, used in rc_used_dict.items()
81
-    ]
82
-    alloc_list = rp_obj.AllocationList(ctx, objects=alloc)
83
-    alloc_list.replace_all()
84
-    return alloc_list
85
-
86
-
87
-class PlacementDbBaseTestCase(base.TestCase):
88
-
89
-    def setUp(self):
90
-        super(PlacementDbBaseTestCase, self).setUp()
91
-        # we use context in some places and ctx in other. We should only use
92
-        # context, but let's paper over that for now.
93
-        self.ctx = self.context
94
-        self.user_obj = user_obj.User(self.ctx, external_id='fake-user')
95
-        self.user_obj.create()
96
-        self.project_obj = project_obj.Project(
97
-            self.ctx, external_id='fake-project')
98
-        self.project_obj.create()
99
-        # For debugging purposes, populated by _create_provider and used by
100
-        # _validate_allocation_requests to make failure results more readable.
101
-        self.rp_uuid_to_name = {}
102
-
103
-    def _create_provider(self, name, *aggs, **kwargs):
104
-        rp = create_provider(self.ctx, name, *aggs, **kwargs)
105
-        self.rp_uuid_to_name[rp.uuid] = name
106
-        return rp
107
-
108
-    def allocate_from_provider(self, rp, rc, used, consumer_id=None,
109
-                               consumer=None):
110
-        if consumer is None:
111
-            consumer = ensure_consumer(
112
-                self.ctx, self.user_obj, self.project_obj, consumer_id)
113
-        alloc_list = set_allocation(self.ctx, rp, consumer, {rc: used})
114
-        return alloc_list
115
-
116
-    def _make_allocation(self, inv_dict, alloc_dict):
117
-        rp = self._create_provider('allocation_resource_provider')
118
-        disk_inv = rp_obj.Inventory(context=self.ctx,
119
-                resource_provider=rp, **inv_dict)
120
-        inv_list = rp_obj.InventoryList(objects=[disk_inv])
121
-        rp.set_inventory(inv_list)
122
-        consumer_id = alloc_dict['consumer_id']
123
-        consumer = ensure_consumer(
124
-            self.ctx, self.user_obj, self.project_obj, consumer_id)
125
-        alloc = rp_obj.Allocation(self.ctx, resource_provider=rp,
126
-                consumer=consumer, **alloc_dict)
127
-        alloc_list = rp_obj.AllocationList(self.ctx, objects=[alloc])
128
-        alloc_list.replace_all()
129
-        return rp, alloc

nova/tests/functional/api/openstack/placement/db/test_consumer.py (+0 -329)

@@ -1,329 +0,0 @@
1
-#    Licensed under the Apache License, Version 2.0 (the "License"); you may
2
-#    not use this file except in compliance with the License. You may obtain
3
-#    a copy of the License at
4
-#
5
-#         http://www.apache.org/licenses/LICENSE-2.0
6
-#
7
-#    Unless required by applicable law or agreed to in writing, software
8
-#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
9
-#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
10
-#    License for the specific language governing permissions and limitations
11
-#    under the License.
12
-
13
-from oslo_config import cfg
14
-from oslo_utils.fixture import uuidsentinel as uuids
15
-import sqlalchemy as sa
16
-
17
-from nova.api.openstack.placement import db_api
18
-from nova.api.openstack.placement import exception
19
-from nova.api.openstack.placement.objects import consumer as consumer_obj
20
-from nova.api.openstack.placement.objects import project as project_obj
21
-from nova.api.openstack.placement.objects import resource_provider as rp_obj
22
-from nova.api.openstack.placement.objects import user as user_obj
23
-from nova import rc_fields as fields
24
-from nova.tests.functional.api.openstack.placement import base
25
-from nova.tests.functional.api.openstack.placement.db import test_base as tb
26
-
27
-
28
-CONF = cfg.CONF
29
-CONSUMER_TBL = consumer_obj.CONSUMER_TBL
30
-PROJECT_TBL = project_obj.PROJECT_TBL
31
-USER_TBL = user_obj.USER_TBL
32
-ALLOC_TBL = rp_obj._ALLOC_TBL
33
-
34
-
35
-class ConsumerTestCase(tb.PlacementDbBaseTestCase):
36
-    def test_non_existing_consumer(self):
37
-        self.assertRaises(exception.ConsumerNotFound,
38
-            consumer_obj.Consumer.get_by_uuid, self.ctx,
39
-            uuids.non_existing_consumer)
40
-
41
-    def test_create_and_get(self):
42
-        u = user_obj.User(self.ctx, external_id='another-user')
43
-        u.create()
44
-        p = project_obj.Project(self.ctx, external_id='another-project')
45
-        p.create()
46
-        c = consumer_obj.Consumer(
47
-            self.ctx, uuid=uuids.consumer, user=u, project=p)
48
-        c.create()
49
-        c = consumer_obj.Consumer.get_by_uuid(self.ctx, uuids.consumer)
50
-        self.assertEqual(1, c.id)
51
-        # Project ID == 1 is fake-project created in setup
52
-        self.assertEqual(2, c.project.id)
53
-        # User ID == 1 is fake-user created in setup
54
-        self.assertEqual(2, c.user.id)
55
-        self.assertRaises(exception.ConsumerExists, c.create)
56
-
57
-    def test_update(self):
58
-        """Tests the scenario where a user supplies a different project/user ID
59
-        for an allocation's consumer and we call Consumer.update() to save that
60
-        information to the consumers table.
61
-        """
62
-        # First, create the consumer with the "fake-user" and "fake-project"
63
-        # user/project in the base test class's setUp
64
-        c = consumer_obj.Consumer(
65
-            self.ctx, uuid=uuids.consumer, user=self.user_obj,
66
-            project=self.project_obj)
67
-        c.create()
68
-        c = consumer_obj.Consumer.get_by_uuid(self.ctx, uuids.consumer)
69
-        self.assertEqual(self.project_obj.id, c.project.id)
70
-        self.assertEqual(self.user_obj.id, c.user.id)
71
-
72
-        # Now change the consumer's project and user to a different project
73
-        another_user = user_obj.User(self.ctx, external_id='another-user')
74
-        another_user.create()
75
-        another_proj = project_obj.Project(
76
-            self.ctx, external_id='another-project')
77
-        another_proj.create()
78
-
79
-        c.project = another_proj
80
-        c.user = another_user
81
-        c.update()
82
-        c = consumer_obj.Consumer.get_by_uuid(self.ctx, uuids.consumer)
83
-        self.assertEqual(another_proj.id, c.project.id)
84
-        self.assertEqual(another_user.id, c.user.id)
85
-
86
-
87
-@db_api.placement_context_manager.reader
88
-def _get_allocs_with_no_consumer_relationship(ctx):
89
-    alloc_to_consumer = sa.outerjoin(
90
-        ALLOC_TBL, CONSUMER_TBL,
91
-        ALLOC_TBL.c.consumer_id == CONSUMER_TBL.c.uuid)
92
-    sel = sa.select([ALLOC_TBL.c.consumer_id])
93
-    sel = sel.select_from(alloc_to_consumer)
94
-    sel = sel.where(CONSUMER_TBL.c.id.is_(None))
95
-    return ctx.session.execute(sel).fetchall()
96
-
97
-
98
-# NOTE(jaypipes): The tb.PlacementDbBaseTestCase creates a project and user
99
-# which is why we don't base off that. We want a completely bare DB for this
100
-# test.
101
-class CreateIncompleteConsumersTestCase(base.TestCase):
102
-
103
-    def setUp(self):
104
-        super(CreateIncompleteConsumersTestCase, self).setUp()
105
-        self.ctx = self.context
106
-
107
-    @db_api.placement_context_manager.writer
108
-    def _create_incomplete_allocations(self, ctx, num_of_consumer_allocs=1):
109
-        # Create some allocations with consumers that don't exist in the
110
-        # consumers table to represent old allocations that we expect to be
111
-        # "cleaned up" with consumers table records that point to the sentinel
112
-        # project/user records.
113
-        c1_missing_uuid = uuids.c1_missing
114
-        c2_missing_uuid = uuids.c2_missing
115
-        c3_missing_uuid = uuids.c3_missing
116
-        for c_uuid in (c1_missing_uuid, c2_missing_uuid, c3_missing_uuid):
117
-            # Create $num_of_consumer_allocs allocations per consumer with
118
-            # different resource classes.
119
-            for resource_class_id in range(num_of_consumer_allocs):
120
-                ins_stmt = ALLOC_TBL.insert().values(
121
-                    resource_provider_id=1,
122
-                    resource_class_id=resource_class_id,
123
-                    consumer_id=c_uuid, used=1)
124
-                ctx.session.execute(ins_stmt)
125
-        # Verify there are no records in the projects/users table
126
-        project_count = ctx.session.scalar(
127
-            sa.select([sa.func.count('*')]).select_from(PROJECT_TBL))
128
-        self.assertEqual(0, project_count)
129
-        user_count = ctx.session.scalar(
130
-            sa.select([sa.func.count('*')]).select_from(USER_TBL))
131
-        self.assertEqual(0, user_count)
132
-        # Verify there are no consumer records for the missing consumers
133
-        sel = CONSUMER_TBL.select(
134
-            CONSUMER_TBL.c.uuid.in_([c1_missing_uuid, c2_missing_uuid]))
135
-        res = ctx.session.execute(sel).fetchall()
136
-        self.assertEqual(0, len(res))
137
-
138
-    @db_api.placement_context_manager.reader
139
-    def _check_incomplete_consumers(self, ctx):
140
-        incomplete_project_id = CONF.placement.incomplete_consumer_project_id
141
-
142
-        # Verify we have a record in projects for the missing sentinel
143
-        sel = PROJECT_TBL.select(
144
-            PROJECT_TBL.c.external_id == incomplete_project_id)
145
-        rec = ctx.session.execute(sel).first()
146
-        self.assertEqual(incomplete_project_id, rec['external_id'])
147
-        incomplete_proj_id = rec['id']
148
-
149
-        # Verify we have a record in users for the missing sentinel
150
-        incomplete_user_id = CONF.placement.incomplete_consumer_user_id
151
-        sel = user_obj.USER_TBL.select(
152
-            USER_TBL.c.external_id == incomplete_user_id)
153
-        rec = ctx.session.execute(sel).first()
154
-        self.assertEqual(incomplete_user_id, rec['external_id'])
155
-        incomplete_user_id = rec['id']
156
-
157
-        # Verify there are records in the consumers table for our old
158
-        # allocation records created in the pre-migration setup and that the
159
-        # projects and users referenced in those consumer records point to the
160
-        # incomplete project/user
161
-        sel = CONSUMER_TBL.select(CONSUMER_TBL.c.uuid == uuids.c1_missing)
162
-        missing_c1 = ctx.session.execute(sel).first()
163
-        self.assertEqual(incomplete_proj_id, missing_c1['project_id'])
164
-        self.assertEqual(incomplete_user_id, missing_c1['user_id'])
165
-        sel = CONSUMER_TBL.select(CONSUMER_TBL.c.uuid == uuids.c2_missing)
166
-        missing_c2 = ctx.session.execute(sel).first()
167
-        self.assertEqual(incomplete_proj_id, missing_c2['project_id'])
168
-        self.assertEqual(incomplete_user_id, missing_c2['user_id'])
169
-
170
-        # Ensure there are no more allocations with incomplete consumers
171
-        res = _get_allocs_with_no_consumer_relationship(ctx)
172
-        self.assertEqual(0, len(res))
173
-
174
-    def test_create_incomplete_consumers(self):
175
-        """Test the online data migration that creates incomplete consumer
176
-        records along with the incomplete consumer project/user records.
177
-        """
178
-        self._create_incomplete_allocations(self.ctx)
179
-        # We do a "really online" online data migration for incomplete
180
-        # consumers when calling AllocationList.get_all_by_consumer_id() and
181
-        # AllocationList.get_all_by_resource_provider() and there are still
182
-        # incomplete consumer records. So, to simulate a situation where the
183
-        # operator has yet to run the nova-manage online_data_migration CLI
184
-        # tool completely, we first call
185
-        # consumer_obj.create_incomplete_consumers() with a batch size of 1.
186
-        # This should mean there will be two allocation records still remaining
187
-        # with a missing consumer record (since we create 3 total to begin
188
-        # with). We then query the allocations table directly to grab that
189
-        # consumer UUID in the allocations table that doesn't refer to a
190
-        # consumer table record and call
191
-        # AllocationList.get_all_by_consumer_id() with that consumer UUID. This
192
-        # should create the remaining missing consumer record "inline" in the
193
-        # AllocationList.get_all_by_consumer_id() method.
194
-        # After that happens, there should still be a single allocation record
195
-        # that is missing a relation to the consumers table. We call the
196
-        # AllocationList.get_all_by_resource_provider() method and verify that
197
-        # method cleans up the remaining incomplete consumers relationship.
198
-        res = consumer_obj.create_incomplete_consumers(self.ctx, 1)
199
-        self.assertEqual((1, 1), res)
200
-
201
-        # Grab the consumer UUID for the allocation record with a
202
-        # still-incomplete consumer record.
203
-        res = _get_allocs_with_no_consumer_relationship(self.ctx)
204
-        self.assertEqual(2, len(res))
205
-        still_missing = res[0][0]
206
-        rp_obj.AllocationList.get_all_by_consumer_id(self.ctx, still_missing)
207
-
208
-        # There should still be a single missing consumer relationship. Let's
209
-        # grab that and call AllocationList.get_all_by_resource_provider()
210
-        # which should clean that last one up for us.
211
-        res = _get_allocs_with_no_consumer_relationship(self.ctx)
212
-        self.assertEqual(1, len(res))
213
-        still_missing = res[0][0]
214
-        rp1 = rp_obj.ResourceProvider(self.ctx, id=1)
215
-        rp_obj.AllocationList.get_all_by_resource_provider(self.ctx, rp1)
216
-
217
-        # get_all_by_resource_provider() should have auto-completed the still
218
-        # missing consumer record and _check_incomplete_consumers() should
219
-        # assert correctly that there are no more incomplete consumer records.
220
-        self._check_incomplete_consumers(self.ctx)
221
-        res = consumer_obj.create_incomplete_consumers(self.ctx, 10)
222
-        self.assertEqual((0, 0), res)
223
-
224
-    def test_create_incomplete_consumers_multiple_allocs_per_consumer(self):
225
-        """Tests that missing consumer records are created when listing
226
-        allocations against a resource provider or running the online data
227
-        migration routine when the consumers have multiple allocations on the
228
-        same provider.
229
-        """
230
-        self._create_incomplete_allocations(self.ctx, num_of_consumer_allocs=2)
231
-        # Run the online data migration to migrate one consumer. The batch size
232
-        # needs to be large enough to hit more than one consumer for this test
233
-        # where each consumer has two allocations.
234
-        res = consumer_obj.create_incomplete_consumers(self.ctx, 2)
235
-        self.assertEqual((2, 2), res)
236
-        # Migrate the rest by listing allocations on the resource provider.
237
-        rp1 = rp_obj.ResourceProvider(self.ctx, id=1)
238
-        rp_obj.AllocationList.get_all_by_resource_provider(self.ctx, rp1)
239
-        self._check_incomplete_consumers(self.ctx)
240
-        res = consumer_obj.create_incomplete_consumers(self.ctx, 10)
241
-        self.assertEqual((0, 0), res)
242
-
243
-
244
-class DeleteConsumerIfNoAllocsTestCase(tb.PlacementDbBaseTestCase):
245
-    def test_delete_consumer_if_no_allocs(self):
246
-        """AllocationList.replace_all() should attempt to delete consumers that
247
-        no longer have any allocations. Due to the REST API not having any way
248
-        to query for consumers directly (only via the GET
249
-        /allocations/{consumer_uuid} endpoint which returns an empty dict even
250
-        when no consumer record exists for the {consumer_uuid}) we need to do
251
-        this functional test using only the object layer.
252
-        """
253
-        # We will use two consumers in this test, only one of which will get
254
-        # all of its allocations deleted in a transaction (and we expect that
255
-        # consumer record to be deleted)
256
-        c1 = consumer_obj.Consumer(
257
-            self.ctx, uuid=uuids.consumer1, user=self.user_obj,
258
-            project=self.project_obj)
259
-        c1.create()
260
-        c2 = consumer_obj.Consumer(
261
-            self.ctx, uuid=uuids.consumer2, user=self.user_obj,
262
-            project=self.project_obj)
263
-        c2.create()
264
-
265
-        # Create some inventory that we will allocate
266
-        cn1 = self._create_provider('cn1')
267
-        tb.add_inventory(cn1, fields.ResourceClass.VCPU, 8)
268
-        tb.add_inventory(cn1, fields.ResourceClass.MEMORY_MB, 2048)
269
-        tb.add_inventory(cn1, fields.ResourceClass.DISK_GB, 2000)
270
-
271
-        # Now allocate some of that inventory to two different consumers
272
-        allocs = [
273
-            rp_obj.Allocation(
274
-                self.ctx, consumer=c1, resource_provider=cn1,
275
-                resource_class=fields.ResourceClass.VCPU, used=1),
276
-            rp_obj.Allocation(
277
-                self.ctx, consumer=c1, resource_provider=cn1,
278
-                resource_class=fields.ResourceClass.MEMORY_MB, used=512),
279
-            rp_obj.Allocation(
280
-                self.ctx, consumer=c2, resource_provider=cn1,
281
-                resource_class=fields.ResourceClass.VCPU, used=1),
282
-            rp_obj.Allocation(
283
-                self.ctx, consumer=c2, resource_provider=cn1,
284
-                resource_class=fields.ResourceClass.MEMORY_MB, used=512),
285
-        ]
286
-        alloc_list = rp_obj.AllocationList(self.ctx, objects=allocs)
287
-        alloc_list.replace_all()
288
-
289
-        # Validate that we have consumer records for both consumers
290
-        for c_uuid in (uuids.consumer1, uuids.consumer2):
291
-            c_obj = consumer_obj.Consumer.get_by_uuid(self.ctx, c_uuid)
292
-            self.assertIsNotNone(c_obj)
293
-
294
-        # OK, now "remove" the allocation for consumer2 by setting the used
295
-        # value for both allocated resources to 0 and re-running the
296
-        # AllocationList.replace_all(). This should end up deleting the
297
-        # consumer record for consumer2
298
-        allocs = [
299
-            rp_obj.Allocation(
300
-                self.ctx, consumer=c2, resource_provider=cn1,
301
-                resource_class=fields.ResourceClass.VCPU, used=0),
302
-            rp_obj.Allocation(
303
-                self.ctx, consumer=c2, resource_provider=cn1,
304
-                resource_class=fields.ResourceClass.MEMORY_MB, used=0),
305
-        ]
306
-        alloc_list = rp_obj.AllocationList(self.ctx, objects=allocs)
307
-        alloc_list.replace_all()
308
-
309
-        # consumer1 should still exist...
310
-        c_obj = consumer_obj.Consumer.get_by_uuid(self.ctx, uuids.consumer1)
311
-        self.assertIsNotNone(c_obj)
312
-
313
-        # but not consumer2...
314
-        self.assertRaises(
315
-            exception.NotFound, consumer_obj.Consumer.get_by_uuid,
316
-            self.ctx, uuids.consumer2)
317
-
318
-        # DELETE /allocations/{consumer_uuid} is the other place where we
319
-        # delete all allocations for a consumer. Let's delete all for consumer1
320
-        # and check that the consumer record is deleted
321
-        alloc_list = rp_obj.AllocationList.get_all_by_consumer_id(
322
-            self.ctx, uuids.consumer1)
323
-        alloc_list.delete_all()
324
-
325
-        # consumer1 should no longer exist in the DB since we just deleted all
326
-        # of its allocations
327
-        self.assertRaises(
328
-            exception.NotFound, consumer_obj.Consumer.get_by_uuid,
329
-            self.ctx, uuids.consumer1)

nova/tests/functional/api/openstack/placement/db/test_project.py (+0 -31)

@@ -1,31 +0,0 @@
-#    Licensed under the Apache License, Version 2.0 (the "License"); you may
-#    not use this file except in compliance with the License. You may obtain
-#    a copy of the License at
-#
-#         http://www.apache.org/licenses/LICENSE-2.0
-#
-#    Unless required by applicable law or agreed to in writing, software
-#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-#    License for the specific language governing permissions and limitations
-#    under the License.
-from oslo_utils.fixture import uuidsentinel as uuids
-
-from nova.api.openstack.placement import exception
-from nova.api.openstack.placement.objects import project as project_obj
-from nova.tests.functional.api.openstack.placement.db import test_base as tb
-
-
-class ProjectTestCase(tb.PlacementDbBaseTestCase):
-    def test_non_existing_project(self):
-        self.assertRaises(
-            exception.ProjectNotFound, project_obj.Project.get_by_external_id,
-            self.ctx, uuids.non_existing_project)
-
-    def test_create_and_get(self):
-        p = project_obj.Project(self.ctx, external_id='another-project')
-        p.create()
-        p = project_obj.Project.get_by_external_id(self.ctx, 'another-project')
-        # Project ID == 1 is fake-project created in setup
-        self.assertEqual(2, p.id)
-        self.assertRaises(exception.ProjectExists, p.create)

nova/tests/functional/api/openstack/placement/db/test_reshape.py (+0 -359)

@@ -1,359 +0,0 @@
1
-#    Licensed under the Apache License, Version 2.0 (the "License"); you may
2
-#    not use this file except in compliance with the License. You may obtain
3
-#    a copy of the License at
4
-#
5
-#         http://www.apache.org/licenses/LICENSE-2.0
6
-#
7
-#    Unless required by applicable law or agreed to in writing, software
8
-#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
9
-#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
10
-#    License for the specific language governing permissions and limitations
11
-#    under the License.
12
-from oslo_utils.fixture import uuidsentinel as uuids
13
-
14
-from nova.api.openstack.placement import exception
15
-from nova.api.openstack.placement.objects import consumer as consumer_obj
16
-from nova.api.openstack.placement.objects import resource_provider as rp_obj
17
-from nova.tests.functional.api.openstack.placement.db import test_base as tb
18
-
19
-
20
-def alloc_for_rc(alloc_list, rc):
21
-    for alloc in alloc_list:
22
-        if alloc.resource_class == rc:
23
-            return alloc
24
-
25
-
26
-class ReshapeTestCase(tb.PlacementDbBaseTestCase):
27
-    """Test 'replace the world' reshape transaction."""
28
-
29
-    def test_reshape(self):
30
-        """We set up the following scenario:
31
-
32
-        BEFORE: single compute node setup
33
-
34
-          A single compute node with:
35
-            - VCPU, MEMORY_MB, DISK_GB inventory
36
-            - Two instances consuming CPU, RAM and DISK from that compute node
37
-
38
-        AFTER: hierarchical + shared storage setup
39
-
40
-          A compute node parent provider with:
41
-            - MEMORY_MB
42
-          Two NUMA node child providers containing:
43
-            - VCPU
44
-          Shared storage provider with:
45
-            - DISK_GB
46
-          Both instances have their resources split among the providers and
47
-          shared storage accordingly
48
-        """
49
-        # First create our consumers
50
-        i1_uuid = uuids.instance1
51
-        i1_consumer = consumer_obj.Consumer(
52
-            self.ctx, uuid=i1_uuid, user=self.user_obj,
53
-            project=self.project_obj)
54
-        i1_consumer.create()
55
-
56
-        i2_uuid = uuids.instance2
57
-        i2_consumer = consumer_obj.Consumer(
58
-            self.ctx, uuid=i2_uuid, user=self.user_obj,
59
-            project=self.project_obj)
60
-        i2_consumer.create()
61
-
62
-        cn1 = self._create_provider('cn1')
63
-        tb.add_inventory(cn1, 'VCPU', 16)
64
-        tb.add_inventory(cn1, 'MEMORY_MB', 32768)
65
-        tb.add_inventory(cn1, 'DISK_GB', 1000)
66
-
67
-        # Allocate both instances against the single compute node
68
-        for consumer in (i1_consumer, i2_consumer):
69
-            allocs = [
70
-                rp_obj.Allocation(
71
-                    self.ctx, resource_provider=cn1,
72
-                    resource_class='VCPU', consumer=consumer, used=2),
73
-                rp_obj.Allocation(
74
-                    self.ctx, resource_provider=cn1,
75
-                    resource_class='MEMORY_MB', consumer=consumer, used=1024),
76
-                rp_obj.Allocation(
77
-                    self.ctx, resource_provider=cn1,
78
-                    resource_class='DISK_GB', consumer=consumer, used=100),
79
-            ]
80
-            alloc_list = rp_obj.AllocationList(self.ctx, objects=allocs)
81
-            alloc_list.replace_all()
82
-
83
-        # Verify we have the allocations we expect for the BEFORE scenario
84
-        before_allocs_i1 = rp_obj.AllocationList.get_all_by_consumer_id(
85
-            self.ctx, i1_uuid)
86
-        self.assertEqual(3, len(before_allocs_i1))
87
-        self.assertEqual(cn1.uuid, before_allocs_i1[0].resource_provider.uuid)
88
-        before_allocs_i2 = rp_obj.AllocationList.get_all_by_consumer_id(
89
-            self.ctx, i2_uuid)
90
-        self.assertEqual(3, len(before_allocs_i2))
91
-        self.assertEqual(cn1.uuid, before_allocs_i2[2].resource_provider.uuid)
92
-
93
-        # Before we issue the actual reshape() call, we need to first create
94
-        # the child providers and sharing storage provider. These are actions
95
-        # that the virt driver or external agent is responsible for performing
96
-        # *before* attempting any reshape activity.
97
-        cn1_numa0 = self._create_provider('cn1_numa0', parent=cn1.uuid)
98
-        cn1_numa1 = self._create_provider('cn1_numa1', parent=cn1.uuid)
99
-        ss = self._create_provider('ss')
100
-
101
-        # OK, now emulate the call to POST /reshaper that will be triggered by
102
-        # a virt driver wanting to replace the world and change its modeling
103
-        # from a single provider to a nested provider tree along with a sharing
104
-        # storage provider.
105
-        after_inventories = {
106
-            # cn1 keeps the RAM only
107
-            cn1: rp_obj.InventoryList(self.ctx, objects=[
108
-                rp_obj.Inventory(
109
-                    self.ctx, resource_provider=cn1,
110
-                    resource_class='MEMORY_MB', total=32768, reserved=0,
111
-                    max_unit=32768, min_unit=1, step_size=1,
112
-                    allocation_ratio=1.0),
113
-            ]),
114
-            # each NUMA node gets half of the CPUs
115
-            cn1_numa0: rp_obj.InventoryList(self.ctx, objects=[
116
-                rp_obj.Inventory(
117
-                    self.ctx, resource_provider=cn1_numa0,
118
-                    resource_class='VCPU', total=8, reserved=0,
119
-                    max_unit=8, min_unit=1, step_size=1,
120
-                    allocation_ratio=1.0),
121
-            ]),
122
-            cn1_numa1: rp_obj.InventoryList(self.ctx, objects=[
123
-                rp_obj.Inventory(
124
-                    self.ctx, resource_provider=cn1_numa1,
125
-                    resource_class='VCPU', total=8, reserved=0,
126
-                    max_unit=8, min_unit=1, step_size=1,
127
-                    allocation_ratio=1.0),
128
-            ]),
129
-            # The sharing provider gets a bunch of disk
130
-            ss: rp_obj.InventoryList(self.ctx, objects=[
131
-                rp_obj.Inventory(
132
-                    self.ctx, resource_provider=ss,
133
-                    resource_class='DISK_GB', total=100000, reserved=0,
134
-                    max_unit=1000, min_unit=1, step_size=1,
135
-                    allocation_ratio=1.0),
136
-            ]),
137
-        }
138
-        # We do a fetch from the DB for each instance to get its latest
139
-        # generation. This would be done by the resource tracker or scheduler
140
-        # report client before issuing the call to reshape() because the
141
-        # consumers representing the two instances above will have had their
142
-        # generations incremented in the original call to PUT
143
-        # /allocations/{consumer_uuid}
144
-        i1_consumer = consumer_obj.Consumer.get_by_uuid(self.ctx, i1_uuid)
145
-        i2_consumer = consumer_obj.Consumer.get_by_uuid(self.ctx, i2_uuid)
146
-        after_allocs = rp_obj.AllocationList(self.ctx, objects=[
147
-            # instance1 gets VCPU from NUMA0, MEMORY_MB from cn1 and DISK_GB
148
-            # from the sharing storage provider
149
-            rp_obj.Allocation(
150
-                self.ctx, resource_provider=cn1_numa0, resource_class='VCPU',
151
-                consumer=i1_consumer, used=2),
152
-            rp_obj.Allocation(
153
-                self.ctx, resource_provider=cn1, resource_class='MEMORY_MB',
154
-                consumer=i1_consumer, used=1024),
155
-            rp_obj.Allocation(
156
-                self.ctx, resource_provider=ss, resource_class='DISK_GB',
157
-                consumer=i1_consumer, used=100),
158
-            # instance2 gets VCPU from NUMA1, MEMORY_MB from cn1 and DISK_GB
159
-            # from the sharing storage provider
160
-            rp_obj.Allocation(
161
-                self.ctx, resource_provider=cn1_numa1, resource_class='VCPU',
162
-                consumer=i2_consumer, used=2),
163
-            rp_obj.Allocation(
164
-                self.ctx, resource_provider=cn1, resource_class='MEMORY_MB',
165
-                consumer=i2_consumer, used=1024),
166
-            rp_obj.Allocation(
167
-                self.ctx, resource_provider=ss, resource_class='DISK_GB',
168
-                consumer=i2_consumer, used=100),
169
-        ])
170
-        rp_obj.reshape(self.ctx, after_inventories, after_allocs)
171
-
172
-        # Verify that the inventories have been moved to the appropriate
173
-        # providers in the AFTER scenario
174
-
175
-        # The root compute node should only have MEMORY_MB, nothing else
176
-        cn1_inv = rp_obj.InventoryList.get_all_by_resource_provider(
177
-            self.ctx, cn1)
178
-        self.assertEqual(1, len(cn1_inv))
179
-        self.assertEqual('MEMORY_MB', cn1_inv[0].resource_class)
180
-        self.assertEqual(32768, cn1_inv[0].total)
181
-        # Each NUMA node should only have half the original VCPU, nothing else
182
-        numa0_inv = rp_obj.InventoryList.get_all_by_resource_provider(
183
-            self.ctx, cn1_numa0)
184
-        self.assertEqual(1, len(numa0_inv))
185
-        self.assertEqual('VCPU', numa0_inv[0].resource_class)
186
-        self.assertEqual(8, numa0_inv[0].total)
187
-        numa1_inv = rp_obj.InventoryList.get_all_by_resource_provider(
188
-            self.ctx, cn1_numa1)
189
-        self.assertEqual(1, len(numa1_inv))
190
-        self.assertEqual('VCPU', numa1_inv[0].resource_class)
191
-        self.assertEqual(8, numa1_inv[0].total)
192
-        # The sharing storage provider should only have DISK_GB, nothing else
193
-        ss_inv = rp_obj.InventoryList.get_all_by_resource_provider(
194
-            self.ctx, ss)
195
-        self.assertEqual(1, len(ss_inv))
196
-        self.assertEqual('DISK_GB', ss_inv[0].resource_class)
197
-        self.assertEqual(100000, ss_inv[0].total)
198
-
199
-        # Verify we have the allocations we expect for the AFTER scenario
200
-        after_allocs_i1 = rp_obj.AllocationList.get_all_by_consumer_id(
201
-            self.ctx, i1_uuid)
202
-        self.assertEqual(3, len(after_allocs_i1))
203
-        # Our VCPU allocation should be in the NUMA0 node
204
-        vcpu_alloc = alloc_for_rc(after_allocs_i1, 'VCPU')
205
-        self.assertIsNotNone(vcpu_alloc)
206
-        self.assertEqual(cn1_numa0.uuid, vcpu_alloc.resource_provider.uuid)
207
-        # Our DISK_GB allocation should be in the sharing provider
208
-        disk_alloc = alloc_for_rc(after_allocs_i1, 'DISK_GB')
209
-        self.assertIsNotNone(disk_alloc)
210
-        self.assertEqual(ss.uuid, disk_alloc.resource_provider.uuid)
211
-        # And our MEMORY_MB should remain on the root compute node
212
-        ram_alloc = alloc_for_rc(after_allocs_i1, 'MEMORY_MB')
213
-        self.assertIsNotNone(ram_alloc)
214
-        self.assertEqual(cn1.uuid, ram_alloc.resource_provider.uuid)
215
-
216
-        after_allocs_i2 = rp_obj.AllocationList.get_all_by_consumer_id(
217
-            self.ctx, i2_uuid)
218
-        self.assertEqual(3, len(after_allocs_i2))
219
-        # Our VCPU allocation should be in the NUMA1 node
220
-        vcpu_alloc = alloc_for_rc(after_allocs_i2, 'VCPU')
221
-        self.assertIsNotNone(vcpu_alloc)
222
-        self.assertEqual(cn1_numa1.uuid, vcpu_alloc.resource_provider.uuid)
223
-        # Our DISK_GB allocation should be in the sharing provider
224
-        disk_alloc = alloc_for_rc(after_allocs_i2, 'DISK_GB')
225
-        self.assertIsNotNone(disk_alloc)
226
-        self.assertEqual(ss.uuid, disk_alloc.resource_provider.uuid)
227
-        # And our MEMORY_MB should remain on the root compute node
228
-        ram_alloc = alloc_for_rc(after_allocs_i2, 'MEMORY_MB')
229
-        self.assertIsNotNone(ram_alloc)
230
-        self.assertEqual(cn1.uuid, ram_alloc.resource_provider.uuid)
231
-
232
-    def test_reshape_concurrent_inventory_update(self):
233
-        """Valid failure scenario for reshape(). We test a situation where the
234
-        virt driver has constructed its "after inventories and allocations"
235
-        and sent those to the POST /reshaper endpoint. The reshape POST handler
236
-        does a quick check of the resource provider generations sent in the
237
-        payload and they all check out.
238
-
239
-        However, right before the call to resource_provider.reshape(), another
240
-        thread legitimately changes the inventory of one of the providers
241
-        involved in the reshape transaction. We should get a
242
-        ConcurrentUpdateDetected in this case.
243
-        """
244
-        # First create our consumers
245
-        i1_uuid = uuids.instance1
246
-        i1_consumer = consumer_obj.Consumer(
247
-            self.ctx, uuid=i1_uuid, user=self.user_obj,
248
-            project=self.project_obj)
249
-        i1_consumer.create()
250
-
251
-        # then all our original providers
252
-        cn1 = self._create_provider('cn1')
253
-        tb.add_inventory(cn1, 'VCPU', 16)
254
-        tb.add_inventory(cn1, 'MEMORY_MB', 32768)
255
-        tb.add_inventory(cn1, 'DISK_GB', 1000)
256
-
257
-        # Allocate an instance on our compute node
258
-        allocs = [
259
-            rp_obj.Allocation(
260
-                self.ctx, resource_provider=cn1,
261
-                resource_class='VCPU', consumer=i1_consumer, used=2),
262
-            rp_obj.Allocation(
263
-                self.ctx, resource_provider=cn1,
264
-                resource_class='MEMORY_MB', consumer=i1_consumer, used=1024),
265
-            rp_obj.Allocation(
266
-                self.ctx, resource_provider=cn1,
267
-                resource_class='DISK_GB', consumer=i1_consumer, used=100),
268
-        ]
269
-        alloc_list = rp_obj.AllocationList(self.ctx, objects=allocs)
270
-        alloc_list.replace_all()
271
-
272
-        # Before we issue the actual reshape() call, we need to first create
273
-        # the child providers and sharing storage provider. These are actions
274
-        # that the virt driver or external agent is responsible for performing
275
-        # *before* attempting any reshape activity.
276
-        cn1_numa0 = self._create_provider('cn1_numa0', parent=cn1.uuid)
277
-        cn1_numa1 = self._create_provider('cn1_numa1', parent=cn1.uuid)
278
-        ss = self._create_provider('ss')
279
-
280
-        # OK, now emulate the call to POST /reshaper that will be triggered by
281
-        # a virt driver wanting to replace the world and change its modeling
282
-        # from a single provider to a nested provider tree along with a sharing
283
-        # storage provider.
284
-        after_inventories = {
285
-            # cn1 keeps the RAM only
286
-            cn1: rp_obj.InventoryList(self.ctx, objects=[
287
-                rp_obj.Inventory(
288
-                    self.ctx, resource_provider=cn1,
289
-                    resource_class='MEMORY_MB', total=32768, reserved=0,
290
-                    max_unit=32768, min_unit=1, step_size=1,
291
-                    allocation_ratio=1.0),
292
-            ]),
293
-            # each NUMA node gets half of the CPUs
294
-            cn1_numa0: rp_obj.InventoryList(self.ctx, objects=[
295
-                rp_obj.Inventory(
296
-                    self.ctx, resource_provider=cn1_numa0,
297
-                    resource_class='VCPU', total=8, reserved=0,
298
-                    max_unit=8, min_unit=1, step_size=1,
299
-                    allocation_ratio=1.0),
300
-            ]),
301
-            cn1_numa1: rp_obj.InventoryList(self.ctx, objects=[
302
-                rp_obj.Inventory(
303
-                    self.ctx, resource_provider=cn1_numa1,
304
-                    resource_class='VCPU', total=8, reserved=0,
305
-                    max_unit=8, min_unit=1, step_size=1,
306
-                    allocation_ratio=1.0),
307
-            ]),
308
-            # The sharing provider gets a bunch of disk
309
-            ss: rp_obj.InventoryList(self.ctx, objects=[
310
-                rp_obj.Inventory(
311
-                    self.ctx, resource_provider=ss,
312
-                    resource_class='DISK_GB', total=100000, reserved=0,
313
-                    max_unit=1000, min_unit=1, step_size=1,
314
-                    allocation_ratio=1.0),
315
-            ]),
316
-        }
317
-        # We do a fetch from the DB for each instance to get its latest
318
-        # generation. This would be done by the resource tracker or scheduler
319
-        # report client before issuing the call to reshape() because the
320
-        # consumers representing the two instances above will have had their
321
-        # generations incremented in the original call to PUT
322
-        # /allocations/{consumer_uuid}
323
-        i1_consumer = consumer_obj.Consumer.get_by_uuid(self.ctx, i1_uuid)
324
-        after_allocs = rp_obj.AllocationList(self.ctx, objects=[
325
-            # instance1 gets VCPU from NUMA0, MEMORY_MB from cn1 and DISK_GB
326
-            # from the sharing storage provider
327
-            rp_obj.Allocation(
328
-                self.ctx, resource_provider=cn1_numa0, resource_class='VCPU',
329
-                consumer=i1_consumer, used=2),
330
-            rp_obj.Allocation(
331
-                self.ctx, resource_provider=cn1, resource_class='MEMORY_MB',
332
-                consumer=i1_consumer, used=1024),
333
-            rp_obj.Allocation(
334
-                self.ctx, resource_provider=ss, resource_class='DISK_GB',
335
-                consumer=i1_consumer, used=100),
336
-        ])
337
-
338
-        # OK, now before we call reshape(), here we emulate another thread
339
-        # changing the inventory for the sharing storage provider in between
340
-        # the time in the REST handler when the sharing storage provider's
341
-        # generation was validated and the actual call to reshape()
342
-        ss_threadB = rp_obj.ResourceProvider.get_by_uuid(self.ctx, ss.uuid)
343
-        # Reduce the amount of storage to 2000, from 100000.
344
-        new_ss_inv = rp_obj.InventoryList(self.ctx, objects=[
345
-            rp_obj.Inventory(
346
-                self.ctx, resource_provider=ss_threadB,
347
-                resource_class='DISK_GB', total=2000, reserved=0,
348
-                    max_unit=1000, min_unit=1, step_size=1,
349
-                    allocation_ratio=1.0)])
350
-        ss_threadB.set_inventory(new_ss_inv)
351
-        # Double check our storage provider's generation is now greater than
352
-        # the original storage provider record being sent to reshape()
353
-        self.assertGreater(ss_threadB.generation, ss.generation)
354
-
355
-        # And we should legitimately get a failure now to reshape() due to
356
-        # another thread updating one of the involved provider's generations
357
-        self.assertRaises(
358
-            exception.ConcurrentUpdateDetected,
359
-            rp_obj.reshape, self.ctx, after_inventories, after_allocs)

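For context when reading the removed test above: the DB-level rp_obj.reshape()
call it exercises is what backs the placement reshaper API. As a rough,
illustrative sketch only (the field names are recalled from the placement API,
not taken from this change, and the uuids, generations and amounts are
placeholders), the HTTP payload a virt driver would send to POST /reshaper
looks approximately like:

    # Approximate shape of a POST /reshaper body; every uuid, generation and
    # resource amount below is a placeholder.
    reshaper_body = {
        'inventories': {
            '<cn1-uuid>': {
                'resource_provider_generation': 4,
                'inventories': {'MEMORY_MB': {'total': 32768}},
            },
            '<numa0-uuid>': {
                'resource_provider_generation': 0,
                'inventories': {'VCPU': {'total': 8}},
            },
        },
        'allocations': {
            '<instance-uuid>': {
                'allocations': {
                    '<numa0-uuid>': {'resources': {'VCPU': 2}},
                    '<cn1-uuid>': {'resources': {'MEMORY_MB': 1024}},
                },
                'project_id': '<project-id>',
                'user_id': '<user-id>',
                'consumer_generation': 2,
            },
        },
    }

The whole mapping is applied as a single transaction, which is why the
concurrent-update test above expects ConcurrentUpdateDetected when a provider
generation changes between validation and the reshape() call.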
+ 0
- 145
nova/tests/functional/api/openstack/placement/db/test_resource_class_cache.py View File

@@ -1,145 +0,0 @@
1
-#    Licensed under the Apache License, Version 2.0 (the "License"); you may
2
-#    not use this file except in compliance with the License. You may obtain
3
-#    a copy of the License at
4
-#
5
-#         http://www.apache.org/licenses/LICENSE-2.0
6
-#
7
-#    Unless required by applicable law or agreed to in writing, software
8
-#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
9
-#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
10
-#    License for the specific language governing permissions and limitations
11
-#    under the License.
12
-
13
-import datetime
14
-import mock
15
-
16
-from oslo_utils import timeutils
17
-
18
-from nova.api.openstack.placement import exception
19
-from nova.api.openstack.placement import resource_class_cache as rc_cache
20
-from nova import rc_fields as fields
21
-from nova.tests.functional.api.openstack.placement import base
22
-
23
-
24
-class TestResourceClassCache(base.TestCase):
25
-
26
-    def setUp(self):
27
-        super(TestResourceClassCache, self).setUp()
28
-        db = self.placement_db
29
-        self.context = mock.Mock()
30
-        sess_mock = mock.Mock()
31
-        sess_mock.connection.side_effect = db.get_engine().connect
32
-        self.context.session = sess_mock
33
-
34
-    @mock.patch('sqlalchemy.select')
35
-    def test_rc_cache_std_no_db(self, sel_mock):
36
-        """Test that looking up either an ID or a string in the resource class
37
-        cache for a standardized resource class does not result in a DB
38
-        call.
39
-        """
40
-        cache = rc_cache.ResourceClassCache(self.context)
41
-
42
-        self.assertEqual('VCPU', cache.string_from_id(0))
43
-        self.assertEqual('MEMORY_MB', cache.string_from_id(1))
44
-        self.assertEqual(0, cache.id_from_string('VCPU'))
45
-        self.assertEqual(1, cache.id_from_string('MEMORY_MB'))
46
-
47
-        self.assertFalse(sel_mock.called)
48
-
49
-    def test_standards(self):
50
-        cache = rc_cache.ResourceClassCache(self.context)
51
-        standards = cache.STANDARDS
52
-
53
-        self.assertEqual(len(standards), len(fields.ResourceClass.STANDARD))
54
-        names = (rc['name'] for rc in standards)
55
-        for name in fields.ResourceClass.STANDARD:
56
-            self.assertIn(name, names)
57
-
58
-        cache = rc_cache.ResourceClassCache(self.context)
59
-        standards2 = cache.STANDARDS
60
-        self.assertEqual(id(standards), id(standards2))
61
-
62
-    def test_standards_have_time_fields(self):
63
-        cache = rc_cache.ResourceClassCache(self.context)
64
-        standards = cache.STANDARDS
65
-
66
-        first_standard = standards[0]
67
-        self.assertIn('updated_at', first_standard)
68
-        self.assertIn('created_at', first_standard)
69
-        self.assertIsNone(first_standard['updated_at'])
70
-        self.assertIsNone(first_standard['created_at'])
71
-
72
-    def test_standard_has_time_fields(self):
73
-        cache = rc_cache.ResourceClassCache(self.context)
74
-
75
-        vcpu_class = cache.all_from_string('VCPU')
76
-        expected = {'id': 0, 'name': 'VCPU', 'updated_at': None,
77
-                    'created_at': None}
78
-        self.assertEqual(expected, vcpu_class)
79
-
80
-    def test_rc_cache_custom(self):
81
-        """Test that non-standard, custom resource classes hit the database and
82
-        return appropriate results, caching the results after a single
83
-        query.
84
-        """
85
-        cache = rc_cache.ResourceClassCache(self.context)
86
-
87
-        # Haven't added anything to the DB yet, so should raise
88
-        # ResourceClassNotFound
89
-        self.assertRaises(exception.ResourceClassNotFound,
90
-                          cache.string_from_id, 1001)
91
-        self.assertRaises(exception.ResourceClassNotFound,
92
-                          cache.id_from_string, "IRON_NFV")
93
-
94
-        # Now add to the database and verify appropriate results...
95
-        with self.context.session.connection() as conn:
96
-            ins_stmt = rc_cache._RC_TBL.insert().values(
97
-                id=1001,
98
-                name='IRON_NFV'
99
-            )
100
-            conn.execute(ins_stmt)
101
-
102
-        self.assertEqual('IRON_NFV', cache.string_from_id(1001))
103
-        self.assertEqual(1001, cache.id_from_string('IRON_NFV'))
104
-
105
-        # Try same again and verify we don't hit the DB.
106
-        with mock.patch('sqlalchemy.select') as sel_mock:
107
-            self.assertEqual('IRON_NFV', cache.string_from_id(1001))
108
-            self.assertEqual(1001, cache.id_from_string('IRON_NFV'))
109
-            self.assertFalse(sel_mock.called)
110
-
111
-        # Verify all fields available from all_from_string
112
-        iron_nfv_class = cache.all_from_string('IRON_NFV')
113
-        self.assertEqual(1001, iron_nfv_class['id'])
114
-        self.assertEqual('IRON_NFV', iron_nfv_class['name'])
115
-        # updated_at not set on insert
116
-        self.assertIsNone(iron_nfv_class['updated_at'])
117
-        self.assertIsInstance(iron_nfv_class['created_at'], datetime.datetime)
118
-
119
-        # Update IRON_NFV (this is a no-op but will set updated_at)
120
-        with self.context.session.connection() as conn:
121
-            # NOTE(cdent): When using explicit SQL that names columns,
122
-            # the automatic timestamp handling provided by the oslo_db
123
-            # TimestampMixin is not provided. created_at is a default
124
-            # but updated_at is an onupdate.
125
-            upd_stmt = rc_cache._RC_TBL.update().where(
126
-                rc_cache._RC_TBL.c.id == 1001).values(
127
-                    name='IRON_NFV', updated_at=timeutils.utcnow())
128
-            conn.execute(upd_stmt)
129
-
130
-        # reset cache
131
-        cache = rc_cache.ResourceClassCache(self.context)
132
-
133
-        iron_nfv_class = cache.all_from_string('IRON_NFV')
134
-        # updated_at set on update
135
-        self.assertIsInstance(iron_nfv_class['updated_at'], datetime.datetime)
136
-
137
-    def test_rc_cache_miss(self):
138
-        """Test that we raise ResourceClassNotFound if an unknown resource
139
-        class ID or string is searched for.
140
-        """
141
-        cache = rc_cache.ResourceClassCache(self.context)
142
-        self.assertRaises(exception.ResourceClassNotFound,
143
-                          cache.string_from_id, 99999999)
144
-        self.assertRaises(exception.ResourceClassNotFound,
145
-                          cache.id_from_string, 'UNKNOWN')

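The caching behaviour the removed tests above verify can be summarised with a
small standalone sketch (an illustration of the pattern only, not the actual
ResourceClassCache implementation): standard resource classes resolve from an
in-memory table and never touch the database, while custom classes are looked
up once and then served from the cache.

    # Minimal illustration of the lookup/caching pattern tested above.
    STANDARD_CLASSES = {0: 'VCPU', 1: 'MEMORY_MB'}  # static; no DB query needed


    class MiniResourceClassCache(object):
        def __init__(self, db_lookup):
            # db_lookup is a callable that actually queries the database.
            self._db_lookup = db_lookup
            self._id_to_string = dict(STANDARD_CLASSES)

        def string_from_id(self, rc_id):
            if rc_id not in self._id_to_string:
                # Only custom classes reach the DB, and each only once.
                self._id_to_string[rc_id] = self._db_lookup(rc_id)
            return self._id_to_string[rc_id]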
+ 0
- 2391
nova/tests/functional/api/openstack/placement/db/test_resource_provider.py
File diff suppressed because it is too large
View File


+ 0
- 31
nova/tests/functional/api/openstack/placement/db/test_user.py View File

@@ -1,31 +0,0 @@
1
-#    Licensed under the Apache License, Version 2.0 (the "License"); you may
2
-#    not use this file except in compliance with the License. You may obtain
3
-#    a copy of the License at
4
-#
5
-#         http://www.apache.org/licenses/LICENSE-2.0
6
-#
7
-#    Unless required by applicable law or agreed to in writing, software
8
-#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
9
-#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
10
-#    License for the specific language governing permissions and limitations
11
-#    under the License.
12
-from oslo_utils.fixture import uuidsentinel as uuids
13
-
14
-from nova.api.openstack.placement import exception
15
-from nova.api.openstack.placement.objects import user as user_obj
16
-from nova.tests.functional.api.openstack.placement.db import test_base as tb
17
-
18
-
19
-class UserTestCase(tb.PlacementDbBaseTestCase):
20
-    def test_non_existing_user(self):
21
-        self.assertRaises(
22
-            exception.UserNotFound, user_obj.User.get_by_external_id,
23
-            self.ctx, uuids.non_existing_user)
24
-
25
-    def test_create_and_get(self):
26
-        u = user_obj.User(self.ctx, external_id='another-user')
27
-        u.create()
28
-        u = user_obj.User.get_by_external_id(self.ctx, 'another-user')
29
-        # User ID == 1 is the 'fake-user' created in setup
30
-        self.assertEqual(2, u.id)
31
-        self.assertRaises(exception.UserExists, u.create)

+ 0
- 0
nova/tests/functional/api/openstack/placement/fixtures/__init__.py View File


+ 0
- 81
nova/tests/functional/api/openstack/placement/fixtures/capture.py View File

@@ -1,81 +0,0 @@
1
-# All Rights Reserved.
2
-#
3
-#    Licensed under the Apache License, Version 2.0 (the "License"); you may
4
-#    not use this file except in compliance with the License. You may obtain
5
-#    a copy of the License at
6
-#
7
-#         http://www.apache.org/licenses/LICENSE-2.0
8
-#
9
-#    Unless required by applicable law or agreed to in writing, software
10
-#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
11
-#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
12
-#    License for the specific language governing permissions and limitations
13
-#    under the License.
14
-
15
-import logging
16
-import warnings
17
-
18
-import fixtures
19
-from oslotest import log
20
-
21
-
22
-class NullHandler(logging.Handler):
23
-    """custom default NullHandler to attempt to format the record.
24
-
25
-    Used in conjunction with Logging below to detect formatting errors
26
-    in debug logs.
27
-    """
28
-    def handle(self, record):
29
-        self.format(record)
30
-
31
-    def emit(self, record):
32
-        pass
33
-
34
-    def createLock(self):
35
-        self.lock = None
36
-
37
-
38
-class Logging(log.ConfigureLogging):
39
-    """A logging fixture providing two important fixtures.
40
-
41
-    One is to capture logs for later inspection.
42
-
43
-    The other is to make sure that DEBUG logs, even if not captured,
44
-    are formatted.
45
-    """
46
-
47
-    def __init__(self):
48
-        super(Logging, self).__init__()
49
-        # If level was not otherwise set, default to INFO.
50
-        if self.level is None:
51
-            self.level = logging.INFO
52
-        # Always capture logs, unlike the parent.
53
-        self.capture_logs = True
54
-
55
-    def setUp(self):
56
-        super(Logging, self).setUp()
57
-        if self.level > logging.DEBUG:
58
-            handler = NullHandler()
59
-            self.useFixture(fixtures.LogHandler(handler, nuke_handlers=False))
60
-            handler.setLevel(logging.DEBUG)
61
-
62
-
63
-class WarningsFixture(fixtures.Fixture):
64
-    """Filter or escalates certain warnings during test runs.
65
-
66
-    Add additional entries as required. Remove when obsolete.
67
-    """
68
-
69
-    def setUp(self):
70
-        super(WarningsFixture, self).setUp()
71
-
72
-        # Ignore policy scope warnings.
73
-        warnings.filterwarnings('ignore',
74
-                                message="Policy .* failed scope check",
75
-                                category=UserWarning)
76
-        # The UUIDFields emit a warning if the value is not a valid UUID.
77
-        # Let's escalate that to an exception in the test to prevent adding
78
-        # violations.
79
-        warnings.filterwarnings('error', message=".*invalid UUID.*")
80
-
81
-        self.addCleanup(warnings.resetwarnings)

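The NullHandler/Logging pair above exists to force-format DEBUG records so
that broken format strings are caught even when the message is never emitted.
A standalone illustration of the same idea, independent of oslotest and of the
fixture above:

    import logging


    class FormattingNullHandler(logging.Handler):
        # Formats every record (which evaluates the % arguments) but emits
        # nothing.
        def handle(self, record):
            self.format(record)

        def emit(self, record):
            pass


    log = logging.getLogger('demo')
    log.setLevel(logging.DEBUG)
    log.addHandler(FormattingNullHandler(level=logging.DEBUG))

    try:
        log.debug('two args expected: %s %s', 'only-one-given')
    except TypeError:
        print('broken debug format string detected')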
+ 0
- 431
nova/tests/functional/api/openstack/placement/fixtures/gabbits.py View File

@@ -1,431 +0,0 @@
1
-#    Licensed under the Apache License, Version 2.0 (the "License"); you may
2
-#    not use this file except in compliance with the License. You may obtain
3
-#    a copy of the License at
4
-#
5
-#         http://www.apache.org/licenses/LICENSE-2.0
6
-#
7
-#    Unless required by applicable law or agreed to in writing, software
8
-#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
9
-#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
10
-#    License for the specific language governing permissions and limitations
11
-#    under the License.
12
-
13
-import os
14
-
15
-from gabbi import fixture
16
-from oslo_config import cfg
17
-from oslo_config import fixture as config_fixture
18
-from oslo_middleware import cors
19
-from oslo_policy import opts as policy_opts
20
-from oslo_utils.fixture import uuidsentinel as uuids
21
-from oslo_utils import uuidutils
22
-from oslotest import output
23
-
24
-from nova.api.openstack.placement import context
25
-from nova.api.openstack.placement import deploy
26
-from nova.api.openstack.placement.objects import project as project_obj
27
-from nova.api.openstack.placement.objects import resource_provider as rp_obj
28
-from nova.api.openstack.placement.objects import user as user_obj
29
-from nova.api.openstack.placement import policies
30
-from nova import rc_fields as fields
31
-from nova.tests import fixtures
32
-from nova.tests.functional.api.openstack.placement.db import test_base as tb
33
-from nova.tests.functional.api.openstack.placement.fixtures import capture
34
-from nova.tests.unit import policy_fixture
35
-
36
-
37
-CONF = cfg.CONF
38
-
39
-
40
-def setup_app():
41
-    return deploy.loadapp(CONF)
42
-
43
-
44
-class APIFixture(fixture.GabbiFixture):
45
-    """Setup the required backend fixtures for a basic placement service."""
46
-
47
-    def start_fixture(self):
48
-        # Set up stderr and stdout captures by directly driving the
49
-        # existing nova fixtures that do that. This captures the
50
-        # output that happens outside individual tests (for
51
-        # example database migrations).
52
-        self.standard_logging_fixture = capture.Logging()
53
-        self.standard_logging_fixture.setUp()
54
-        self.output_stream_fixture = output.CaptureOutput()
55
-        self.output_stream_fixture.setUp()
56
-        # Filter ignorable warnings during test runs.
57
-        self.warnings_fixture = capture.WarningsFixture()
58
-        self.warnings_fixture.setUp()
59
-
60
-        self.conf_fixture = config_fixture.Config(CONF)
61
-        self.conf_fixture.setUp()
62
-        # The Database fixture will get confused if only one of the databases
63
-        # is configured.
64
-        for group in ('placement_database', 'api_database', 'database'):
65
-            self.conf_fixture.config(
66
-                group=group,
67
-                connection='sqlite://',
68
-                sqlite_synchronous=False)
69
-        self.conf_fixture.config(
70
-            group='api', auth_strategy='noauth2')
71
-
72
-        self.context = context.RequestContext()
73
-
74
-        # Register CORS opts, but do not set config. This has the
75
-        # effect of exercising the "don't use cors" path in
76
-        # deploy.py. Without setting some config the group will not
77
-        # be present.
78
-        CONF.register_opts(cors.CORS_OPTS, 'cors')
79
-        # Set default policy opts, otherwise the deploy module can raise
81
-        # NoSuchOptError.
81
-        policy_opts.set_defaults(CONF)
82
-
83
-        # Make sure default_config_files is an empty list, not None.
84
-        # If it is None, /etc/nova/nova.conf is read and confuses results.
85
-        CONF([], default_config_files=[])
86
-
87
-        self._reset_db_flags()
88
-        self.placement_db_fixture = fixtures.Database('placement')
89
-        self.placement_db_fixture.setUp()
90
-        # Do this now instead of waiting for the WSGI app to start so that
91
-        # fixtures can have traits.
92
-        deploy.update_database()
93
-
94
-        os.environ['RP_UUID'] = uuidutils.generate_uuid()
95
-        os.environ['RP_NAME'] = uuidutils.generate_uuid()
96
-        os.environ['CUSTOM_RES_CLASS'] = 'CUSTOM_IRON_NFV'
97
-        os.environ['PROJECT_ID'] = uuidutils.generate_uuid()
98
-        os.environ['USER_ID'] = uuidutils.generate_uuid()
99
-        os.environ['PROJECT_ID_ALT'] = uuidutils.generate_uuid()
100
-        os.environ['USER_ID_ALT'] = uuidutils.generate_uuid()
101
-        os.environ['INSTANCE_UUID'] = uuidutils.generate_uuid()
102
-        os.environ['MIGRATION_UUID'] = uuidutils.generate_uuid()
103
-        os.environ['CONSUMER_UUID'] = uuidutils.generate_uuid()
104
-        os.environ['PARENT_PROVIDER_UUID'] = uuidutils.generate_uuid()
105
-        os.environ['ALT_PARENT_PROVIDER_UUID'] = uuidutils.generate_uuid()
106
-
107
-    def stop_fixture(self):
108
-        self.placement_db_fixture.cleanUp()
109
-
110
-        # Since we clean up the DB, we need to reset the traits sync
111
-        # flag to make sure the next run will recreate the traits and
112
-        # reset the _RC_CACHE so that any cached resource classes
113
-        # are flushed.
114
-        self._reset_db_flags()
115
-
116
-        self.warnings_fixture.cleanUp()
117
-        self.output_stream_fixture.cleanUp()
118
-        self.standard_logging_fixture.cleanUp()
119
-        self.conf_fixture.cleanUp()
120
-
121
-    @staticmethod
122
-    def _reset_db_flags():
123
-        rp_obj._TRAITS_SYNCED = False
124
-        rp_obj._RC_CACHE = None
125
-
126
-
127
-class AllocationFixture(APIFixture):
128
-    """An APIFixture that has some pre-made Allocations.
129
-
130
-         +----- same user----+          alt_user
131
-         |                   |             |
132
-    +----+----------+ +------+-----+ +-----+---------+
133
-    | consumer1     | | consumer2  | | alt_consumer  |
134
-    |  DISK_GB:1000 | |   VCPU: 6  | |  VCPU: 1      |
135
-    |               | |            | |  DISK_GB:20   |
136
-    +-------------+-+ +------+-----+ +-+-------------+
137
-                  |          |         |
138
-                +-+----------+---------+-+
139
-                |     rp                 |
140
-                |      VCPU: 10          |
141
-                |      DISK_GB:2048      |
142
-                +------------------------+
143
-    """
144
-    def start_fixture(self):
145
-        super(AllocationFixture, self).start_fixture()
146
-
147
-        # For use creating and querying allocations/usages
148
-        os.environ['ALT_USER_ID'] = uuidutils.generate_uuid()
149
-        project_id = os.environ['PROJECT_ID']
150
-        user_id = os.environ['USER_ID']
151
-        alt_user_id = os.environ['ALT_USER_ID']
152
-
153
-        user = user_obj.User(self.context, external_id=user_id)
154
-        user.create()
155
-        alt_user = user_obj.User(self.context, external_id=alt_user_id)
156
-        alt_user.create()
157
-        project = project_obj.Project(self.context, external_id=project_id)
158
-        project.create()
159
-
160
-        # Stealing from the super
161
-        rp_name = os.environ['RP_NAME']
162
-        rp_uuid = os.environ['RP_UUID']
163
-        # Create the rp with VCPU and DISK_GB inventory
164
-        rp = tb.create_provider(self.context, rp_name, uuid=rp_uuid)
165
-        tb.add_inventory(rp, 'DISK_GB', 2048,
166
-                         step_size=10, min_unit=10, max_unit=1000)
167
-        tb.add_inventory(rp, 'VCPU', 10, max_unit=10)
168
-
169
-        # Create a first consumer for the DISK_GB allocations
170
-        consumer1 = tb.ensure_consumer(self.context, user, project)
171
-        tb.set_allocation(self.context, rp, consumer1, {'DISK_GB': 1000})
172
-        os.environ['CONSUMER_0'] = consumer1.uuid
173
-
174
-        # Create a second consumer for the VCPU allocations
175
-        consumer2 = tb.ensure_consumer(self.context, user, project)
176
-        tb.set_allocation(self.context, rp, consumer2, {'VCPU': 6})
177
-        os.environ['CONSUMER_ID'] = consumer2.uuid
178
-
179
-        # Create a consumer object for a different user
180
-        alt_consumer = tb.ensure_consumer(self.context, alt_user, project)
181
-        os.environ['ALT_CONSUMER_ID'] = alt_consumer.uuid
182
-
183
-        # Create a couple of allocations for a different user.
184
-        tb.set_allocation(self.context, rp, alt_consumer,
185
-                          {'DISK_GB': 20, 'VCPU': 1})
186
-
187
-        # The ALT_RP_XXX variables are for a resource provider that has
188
-        # not been created in the Allocation fixture
189
-        os.environ['ALT_RP_UUID'] = uuidutils.generate_uuid()
190
-        os.environ['ALT_RP_NAME'] = uuidutils.generate_uuid()
191
-
192
-
193
-class SharedStorageFixture(APIFixture):
194
-    """An APIFixture that has some two compute nodes without local storage
195
-    associated by aggregate to a provider of shared storage. Both compute
196
-    nodes have respectively two numa node resource providers, each of
197
-    which has a pf resource provider.
198
-
199
-                     +-------------------------------------+
200
-                     |  sharing storage (ss)               |
201
-                     |   DISK_GB:2000                      |
202
-                     |   traits: MISC_SHARES_VIA_AGGREGATE |
203
-                     +-----------------+-------------------+
204
-                                       | aggregate
205
-        +--------------------------+   |   +------------------------+
206
-        | compute node (cn1)       |---+---| compute node (cn2)     |
207
-        |  CPU: 24                 |       |  CPU: 24               |
208
-        |  MEMORY_MB: 128*1024     |       |  MEMORY_MB: 128*1024   |
209
-        |  traits: HW_CPU_X86_SSE, |       |                        |
210
-        |          HW_CPU_X86_SSE2 |       |                        |
211
-        +--------------------------+       +------------------------+
212
-             |               |                 |                |
213
-        +---------+      +---------+      +---------+      +---------+
214
-        | numa1_1 |      | numa1_2 |      | numa2_1 |      | numa2_2 |
215
-        +---------+      +---------+      +---------+      +---------+
216
-             |                |                |                |
217
-     +---------------++---------------++---------------++----------------+
218
-     | pf1_1         || pf1_2         || pf2_1         || pf2_2          |
219
-     | SRIOV_NET_VF:8|| SRIOV_NET_VF:8|| SRIOV_NET_VF:8|| SRIOV_NET_VF:8 |
220
-     +---------------++---------------++---------------++----------------+
221
-    """
222
-
223
-    def start_fixture(self):
224
-        super(SharedStorageFixture, self).start_fixture()
225
-
226
-        agg_uuid = uuidutils.generate_uuid()
227
-
228
-        cn1 = tb.create_provider(self.context, 'cn1', agg_uuid)
229
-        cn2 = tb.create_provider(self.context, 'cn2', agg_uuid)
230
-        ss = tb.create_provider(self.context, 'ss', agg_uuid)
231
-
232
-        numa1_1 = tb.create_provider(self.context, 'numa1_1', parent=cn1.uuid)
233
-        numa1_2 = tb.create_provider(self.context, 'numa1_2', parent=cn1.uuid)
234
-        numa2_1 = tb.create_provider(self.context, 'numa2_1', parent=cn2.uuid)
235
-        numa2_2 = tb.create_provider(self.context, 'numa2_2', parent=cn2.uuid)
236
-
237
-        pf1_1 = tb.create_provider(self.context, 'pf1_1', parent=numa1_1.uuid)
238
-        pf1_2 = tb.create_provider(self.context, 'pf1_2', parent=numa1_2.uuid)
239
-        pf2_1 = tb.create_provider(self.context, 'pf2_1', parent=numa2_1.uuid)
240
-        pf2_2 = tb.create_provider(self.context, 'pf2_2', parent=numa2_2.uuid)
241
-
242
-        os.environ['AGG_UUID'] = agg_uuid
243
-
244
-        os.environ['CN1_UUID'] = cn1.uuid
245
-        os.environ['CN2_UUID'] = cn2.uuid
246
-        os.environ['SS_UUID'] = ss.uuid
247
-
248
-        os.environ['NUMA1_1_UUID'] = numa1_1.uuid
249
-        os.environ['NUMA1_2_UUID'] = numa1_2.uuid
250
-        os.environ['NUMA2_1_UUID'] = numa2_1.uuid
251
-        os.environ['NUMA2_2_UUID'] = numa2_2.uuid
252
-
253
-        os.environ['PF1_1_UUID'] = pf1_1.uuid
254
-        os.environ['PF1_2_UUID'] = pf1_2.uuid
255
-        os.environ['PF2_1_UUID'] = pf2_1.uuid
256
-        os.environ['PF2_2_UUID'] = pf2_2.uuid
257
-
258
-        # Populate compute node inventory for VCPU and RAM
259
-        for cn in (cn1, cn2):
260
-            tb.add_inventory(cn, fields.ResourceClass.VCPU, 24,
261
-                             allocation_ratio=16.0)
262
-            tb.add_inventory(cn, fields.ResourceClass.MEMORY_MB, 128 * 1024,
263
-                             allocation_ratio=1.5)
264
-        tb.set_traits(cn1, 'HW_CPU_X86_SSE', 'HW_CPU_X86_SSE2')
265
-
266
-        # Populate shared storage provider with DISK_GB inventory and
267
-        # mark it shared among any provider associated via aggregate
268
-        tb.add_inventory(ss, fields.ResourceClass.DISK_GB, 2000,
269
-                         reserved=100, allocation_ratio=1.0)
270
-        tb.set_traits(ss, 'MISC_SHARES_VIA_AGGREGATE')
271
-
272
-        # Populate PF inventory for VF
273
-        for pf in (pf1_1, pf1_2, pf2_1, pf2_2):
274
-            tb.add_inventory(pf, fields.ResourceClass.SRIOV_NET_VF,
275
-                             8, allocation_ratio=1.0)
276
-
277
-
278
-class NonSharedStorageFixture(APIFixture):
279
-    """An APIFixture that has two compute nodes with local storage that do not
280
-    use shared storage.
281
-    """
282
-    def start_fixture(self):
283
-        super(NonSharedStorageFixture, self).start_fixture()
284
-
285
-        aggA_uuid = uuidutils.generate_uuid()
286
-        aggB_uuid = uuidutils.generate_uuid()
287
-        aggC_uuid = uuidutils.generate_uuid()
288
-        os.environ['AGGA_UUID'] = aggA_uuid
289
-        os.environ['AGGB_UUID'] = aggB_uuid
290
-        os.environ['AGGC_UUID'] = aggC_uuid
291
-
292
-        cn1 = tb.create_provider(self.context, 'cn1')
293
-        cn2 = tb.create_provider(self.context, 'cn2')
294
-
295
-        os.environ['CN1_UUID'] = cn1.uuid
296
-        os.environ['CN2_UUID'] = cn2.uuid
297
-
298
-        # Populate compute node inventory for VCPU, RAM and DISK
299
-        for cn in (cn1, cn2):
300
-            tb.add_inventory(cn, 'VCPU', 24)
301
-            tb.add_inventory(cn, 'MEMORY_MB', 128 * 1024)
302
-            tb.add_inventory(cn, 'DISK_GB', 2000)
303
-
304
-
305
-class CORSFixture(APIFixture):
306
-    """An APIFixture that turns on CORS."""
307
-
308
-    def start_fixture(self):
309
-        super(CORSFixture, self).start_fixture()
310
-        # NOTE(cdent): If we remove this override, then the cors
311
-        # group ends up not existing in the conf, so when deploy.py
312
-        # wants to load the CORS middleware, it will not.
313
-        self.conf_fixture.config(
314
-            group='cors',
315
-            allowed_origin='http://valid.example.com')
316
-
317
-
318
-class GranularFixture(APIFixture):
319
-    """An APIFixture that sets up the following provider environment for
320
-    testing granular resource requests.
321
-
322
-+========================++========================++========================+
323
-|cn_left                 ||cn_middle               ||cn_right                |
324
-|VCPU: 8                 ||VCPU: 8                 ||VCPU: 8                 |
325
-|MEMORY_MB: 4096         ||MEMORY_MB: 4096         ||MEMORY_MB: 4096         |
326
-|DISK_GB: 500            ||SRIOV_NET_VF: 8         ||DISK_GB: 500            |
327
-|VGPU: 8                 ||CUSTOM_NET_MBPS: 4000   ||VGPU: 8                 |
328
-|SRIOV_NET_VF: 8         ||traits: HW_CPU_X86_AVX, ||  - max_unit: 2         |
329
-|CUSTOM_NET_MBPS: 4000   ||        HW_CPU_X86_AVX2,||traits: HW_CPU_X86_MMX, |
330
-|traits: HW_CPU_X86_AVX, ||        HW_CPU_X86_SSE, ||        HW_GPU_API_DXVA,|
331
-|        HW_CPU_X86_AVX2,||        HW_NIC_ACCEL_TLS||        CUSTOM_DISK_SSD,|
332
-|        HW_GPU_API_DXVA,|+=+=====+================++==+========+============+
333
-|        HW_NIC_DCB_PFC, |  :     :                    :        : a
334
-|        CUSTOM_FOO      +..+     +--------------------+        : g
335
-+========================+  : a   :                             : g
336
-                            : g   :                             : C
337
-+========================+  : g   :             +===============+======+
338
-|shr_disk_1              |  : A   :             |shr_net               |
339
-|DISK_GB: 1000           +..+     :             |SRIOV_NET_VF: 16      |
340
-|traits: CUSTOM_DISK_SSD,|  :     : a           |CUSTOM_NET_MBPS: 40000|
341
-|  MISC_SHARES_VIA_AGG...|  :     : g           |traits: MISC_SHARES...|
342
-+========================+  :     : g           +======================+
343
-+=======================+   :     : B
344
-|shr_disk_2             +...+     :
345
-|DISK_GB: 1000          |         :
346
-|traits: MISC_SHARES... +.........+
347
-+=======================+
348
-    """
349
-    def start_fixture(self):
350
-        super(GranularFixture, self).start_fixture()
351
-
352
-        rp_obj.ResourceClass(
353
-            context=self.context, name='CUSTOM_NET_MBPS').create()
354
-
355
-        os.environ['AGGA'] = uuids.aggA
356
-        os.environ['AGGB'] = uuids.aggB
357
-        os.environ['AGGC'] = uuids.aggC
358
-
359
-        cn_left = tb.create_provider(self.context, 'cn_left', uuids.aggA)
360
-        os.environ['CN_LEFT'] = cn_left.uuid
361
-        tb.add_inventory(cn_left, 'VCPU', 8)
362
-        tb.add_inventory(cn_left, 'MEMORY_MB', 4096)
363
-        tb.add_inventory(cn_left, 'DISK_GB', 500)
364
-        tb.add_inventory(cn_left, 'VGPU', 8)
365
-        tb.add_inventory(cn_left, 'SRIOV_NET_VF', 8)
366
-        tb.add_inventory(cn_left, 'CUSTOM_NET_MBPS', 4000)
367
-        tb.set_traits(cn_left, 'HW_CPU_X86_AVX', 'HW_CPU_X86_AVX2',
368
-                      'HW_GPU_API_DXVA', 'HW_NIC_DCB_PFC', 'CUSTOM_FOO')
369
-
370
-        cn_middle = tb.create_provider(
371
-            self.context, 'cn_middle', uuids.aggA, uuids.aggB)
372
-        os.environ['CN_MIDDLE'] = cn_middle.uuid
373
-        tb.add_inventory(cn_middle, 'VCPU', 8)
374
-        tb.add_inventory(cn_middle, 'MEMORY_MB', 4096)
375
-        tb.add_inventory(cn_middle, 'SRIOV_NET_VF', 8)
376
-        tb.add_inventory(cn_middle, 'CUSTOM_NET_MBPS', 4000)
377
-        tb.set_traits(cn_middle, 'HW_CPU_X86_AVX', 'HW_CPU_X86_AVX2',
378
-                      'HW_CPU_X86_SSE', 'HW_NIC_ACCEL_TLS')
379
-
380
-        cn_right = tb.create_provider(
381
-            self.context, 'cn_right', uuids.aggB, uuids.aggC)
382
-        os.environ['CN_RIGHT'] = cn_right.uuid
383
-        tb.add_inventory(cn_right, 'VCPU', 8)
384
-        tb.add_inventory(cn_right, 'MEMORY_MB', 4096)
385
-        tb.add_inventory(cn_right, 'DISK_GB', 500)
386
-        tb.add_inventory(cn_right, 'VGPU', 8, max_unit=2)
387
-        tb.set_traits(cn_right, 'HW_CPU_X86_MMX', 'HW_GPU_API_DXVA',
388
-                      'CUSTOM_DISK_SSD')
389
-
390
-        shr_disk_1 = tb.create_provider(self.context, 'shr_disk_1', uuids.aggA)
391
-        os.environ['SHR_DISK_1'] = shr_disk_1.uuid
392
-        tb.add_inventory(shr_disk_1, 'DISK_GB', 1000)
393
-        tb.set_traits(shr_disk_1, 'MISC_SHARES_VIA_AGGREGATE',
394
-                      'CUSTOM_DISK_SSD')
395
-
396
-        shr_disk_2 = tb.create_provider(
397
-            self.context, 'shr_disk_2', uuids.aggA, uuids.aggB)
398
-        os.environ['SHR_DISK_2'] = shr_disk_2.uuid
399
-        tb.add_inventory(shr_disk_2, 'DISK_GB', 1000)
400
-        tb.set_traits(shr_disk_2, 'MISC_SHARES_VIA_AGGREGATE')
401
-
402
-        shr_net = tb.create_provider(self.context, 'shr_net', uuids.aggC)
403
-        os.environ['SHR_NET'] = shr_net.uuid
404
-        tb.add_inventory(shr_net, 'SRIOV_NET_VF', 16)
405
-        tb.add_inventory(shr_net, 'CUSTOM_NET_MBPS', 40000)
406
-        tb.set_traits(shr_net, 'MISC_SHARES_VIA_AGGREGATE')
407
-
408
-
409
-class OpenPolicyFixture(APIFixture):
410
-    """An APIFixture that changes all policy rules to allow non-admins."""
411
-
412
-    def start_fixture(self):
413
-        super(OpenPolicyFixture, self).start_fixture()
414
-        self.placement_policy_fixture = policy_fixture.PlacementPolicyFixture()
415
-        self.placement_policy_fixture.setUp()
416
-        # Get all of the registered rules and set them to '@' to allow any
417
-        # user to have access. The nova policy "admin_or_owner" concept does
418
-        # not really apply to most of placement resources since they do not
419
-        # have a user_id/project_id attribute.
420
-        rules = {}
421
-        for rule in policies.list_rules():
422
-            name = rule.name
423
-            # Ignore "base" rules for role:admin.
424
-            if name in ['placement', 'admin_api']:
425
-                continue
426
-            rules[name] = '@'
427
-        self.placement_policy_fixture.set_rules(rules)
428
-
429
-    def stop_fixture(self):
430
-        super(OpenPolicyFixture, self).stop_fixture()
431
-        self.placement_policy_fixture.cleanUp()

+ 0
- 49
nova/tests/functional/api/openstack/placement/fixtures/placement.py View File

@@ -1,49 +0,0 @@
1
-#    Licensed under the Apache License, Version 2.0 (the "License"); you may
2
-#    not use this file except in compliance with the License. You may obtain
3
-#    a copy of the License at
4
-#
5
-#         http://www.apache.org/licenses/LICENSE-2.0
6
-#
7
-#    Unless required by applicable law or agreed to in writing, software
8
-#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
9
-#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
10
-#    License for the specific language governing permissions and limitations
11
-#    under the License.
12
-
13
-import fixtures
14
-from oslo_config import cfg
15
-from oslo_config import fixture as config_fixture
16
-from oslo_utils import uuidutils
17
-from wsgi_intercept import interceptor
18
-
19
-from nova.api.openstack.placement import deploy
20
-
21
-
22
-CONF = cfg.CONF
23
-
24
-
25
-class PlacementFixture(fixtures.Fixture):
26
-    """A fixture to placement operations.
27
-
28
-    Runs a local WSGI server bound to a free port, serving the Placement
29
-    application with NoAuth middleware.
30
-    This fixture also prevents calling the ServiceCatalog for getting the
31
-    endpoint.
32
-
33
-    It is possible to ask for a specific token when running the fixture so
34
-    that all calls pass this token.
35
-    """
36
-    def __init__(self, token='admin'):
37
-        self.token = token
38
-
39
-    def setUp(self):
40
-        super(PlacementFixture, self).setUp()
41
-
42
-        conf_fixture = config_fixture.Config(CONF)
43
-        conf_fixture.config(group='api', auth_strategy='noauth2')
44
-        loader = deploy.loadapp(CONF)
45
-        app = lambda: loader
46
-        self.endpoint = 'http://%s/placement' % uuidutils.generate_uuid()
47
-        intercept = interceptor.RequestsInterceptor(app, url=self.endpoint)
48
-        intercept.install_intercept()
49
-        self.addCleanup(intercept.uninstall_intercept)

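For reference, a functional test would enable the fixture above with
useFixture(). A minimal, hypothetical usage sketch follows; the test class
name is invented and the import of PlacementFixture is assumed to point at
wherever the fixture lives after this change:

    import testtools

    # Assumed import; adjust to the real location of PlacementFixture.
    # from <placement test fixtures module> import PlacementFixture


    class PlacementUsingTest(testtools.TestCase):  # hypothetical test class
        def setUp(self):
            super(PlacementUsingTest, self).setUp()
            placement = self.useFixture(PlacementFixture(token='admin'))
            # placement.endpoint now points at the in-process placement WSGI
            # app; HTTP calls to that URL are intercepted, so no real network
            # service is involved.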
+ 0
- 39
nova/tests/functional/api/openstack/placement/gabbits/aggregate-policy.yaml View File

@@ -1,39 +0,0 @@
1
-# This tests the individual CRUD operations on
2
-# /resource_providers/{uuid}/aggregates* using a non-admin user with an
3
-# open policy configuration. The response validation is intentionally minimal.
4
-fixtures:
5
-    - OpenPolicyFixture
6
-
7
-defaults:
8
-    request_headers:
9
-        x-auth-token: user
10
-        accept: application/json
11
-        content-type: application/json
12
-        openstack-api-version: placement latest
13
-
14
-vars:
15
-    - &agg_1 f918801a-5e54-4bee-9095-09a9d0c786b8
16
-    - &agg_2 a893eb5c-e2a0-4251-ab26-f71d3b0cfc0b
17
-
18
-tests:
19
-
20
-- name: post new resource provider
21
-  POST: /resource_providers
22
-  data:
23
-      name: $ENVIRON['RP_NAME']
24
-      uuid: $ENVIRON['RP_UUID']
25
-  status: 200
26
-
27
-- name: put some aggregates
28
-  PUT: /resource_providers/$ENVIRON['RP_UUID']/aggregates
29
-  data:
30
-      resource_provider_generation: 0
31
-      aggregates:
32
-        - *agg_1
33
-        - *agg_2
34
-  status: 200
35
-
36
-- name: get those aggregates
37
-  GET: $LAST_URL
38
-  response_json_paths:
39
-      $.aggregates.`len`: 2

+ 0
- 204
nova/tests/functional/api/openstack/placement/gabbits/aggregate.yaml View File

@@ -1,204 +0,0 @@
1
-
2
-fixtures:
3
-    - APIFixture
4
-
5
-defaults:
6
-    request_headers:
7
-        x-auth-token: admin
8
-        accept: application/json
9
-        content-type: application/json
10
-        openstack-api-version: placement latest
11
-
12
-vars:
13
-    - &agg_1 f918801a-5e54-4bee-9095-09a9d0c786b8
14
-    - &agg_2 a893eb5c-e2a0-4251-ab26-f71d3b0cfc0b
15
-
16
-tests:
17
-- name: get aggregates for bad resource provider
18
-  GET: /resource_providers/6984bb2d-830d-4c8d-ac64-c5a8103664be/aggregates
19
-  status: 404
20
-  response_json_paths:
21
-      $.errors[0].title: Not Found
22
-
23
-- name: put aggregates for bad resource provider
24
-  PUT: /resource_providers/6984bb2d-830d-4c8d-ac64-c5a8103664be/aggregates
-  data: []
-  status: 404
-  response_json_paths:
-      $.errors[0].title: Not Found
-
-- name: post new resource provider
-  POST: /resource_providers
-  data:
-      name: $ENVIRON['RP_NAME']
-      uuid: $ENVIRON['RP_UUID']
-  status: 200
-  response_headers:
-      location: //resource_providers/[a-f0-9-]+/
-
-- name: get empty aggregates
-  GET: /resource_providers/$ENVIRON['RP_UUID']/aggregates
-  response_json_paths:
-      $.aggregates: []
-
-- name: aggregates 404 for out of date microversion get
-  GET: /resource_providers/$ENVIRON['RP_UUID']/aggregates
-  request_headers:
-      openstack-api-version: placement 1.0
-  status: 404
-  response_json_paths:
-      $.errors[0].title: Not Found
-
-- name: aggregates 404 for out of date microversion put
-  PUT: /resource_providers/$ENVIRON['RP_UUID']/aggregates
-  request_headers:
-      openstack-api-version: placement 1.0
-  status: 404
-  response_json_paths:
-      $.errors[0].title: Not Found
-
-- name: put some aggregates - old payload and new microversion
-  PUT: $LAST_URL
-  data:
-      - *agg_1
-      - *agg_2
-  status: 400
-  response_strings:
-      - JSON does not validate
-  response_json_paths:
-      $.errors[0].title: Bad Request
-
-- name: put some aggregates - new payload and old microversion
-  PUT: $LAST_URL
-  request_headers:
-      openstack-api-version: placement 1.18
-  data:
-      resource_provider_generation: 0
-      aggregates:
-        - *agg_1
-        - *agg_2
-  status: 400
-  response_strings:
-      - JSON does not validate
-  response_json_paths:
-      $.errors[0].title: Bad Request
-
-- name: put some aggregates - new payload and new microversion
-  PUT: $LAST_URL
-  data:
-      resource_provider_generation: 0
-      aggregates:
-        - *agg_1
-        - *agg_2
-  status: 200
-  response_headers:
-      content-type: /application/json/
-      cache-control: no-cache
-      # Does last-modified look like a legit timestamp?
-      last-modified:  /^\w+, \d+ \w+ \d{4} [\d:]+ GMT$/
-  response_json_paths:
-      $.aggregates[0]: *agg_1
-      $.aggregates[1]: *agg_2
-      $.resource_provider_generation: 1
-
-- name: get those aggregates
-  GET: $LAST_URL
-  response_headers:
-      cache-control: no-cache
-      # Does last-modified look like a legit timestamp?
-      last-modified:  /^\w+, \d+ \w+ \d{4} [\d:]+ GMT$/
-  response_json_paths:
-      $.aggregates.`len`: 2
-
-- name: clear those aggregates - generation conflict
-  PUT: $LAST_URL
-  data:
-      resource_provider_generation: 0
-      aggregates: []
-  status: 409
-  response_json_paths:
-      $.errors[0].code: placement.concurrent_update
-
-- name: clear those aggregates
-  PUT: $LAST_URL
-  data:
-      resource_provider_generation: 1
-      aggregates: []
-  status: 200
-  response_json_paths:
-      $.aggregates: []
-
-- name: get empty aggregates again
-  GET: /resource_providers/$ENVIRON['RP_UUID']/aggregates
-  response_json_paths:
-      $.aggregates: []
-
-- name: put non json
-  PUT: $LAST_URL
-  data: '{"bad", "not json"}'
-  status: 400
-  response_strings:
-      - Malformed JSON
-  response_json_paths:
-      $.errors[0].title: Bad Request
-
-- name: put invalid json no generation
-  PUT: $LAST_URL
-  data:
-      aggregates:
-          - *agg_1
-          - *agg_2
-  status: 400
-  response_strings:
-      - JSON does not validate
-  response_json_paths:
-      $.errors[0].title: Bad Request
-
-- name: put invalid json not uuids
-  PUT: $LAST_URL
-  data:
-      aggregates:
-        - harry
-        - sally
-      resource_provider_generation: 2
-  status: 400
-  response_strings:
-      - "is not a 'uuid'"
-  response_json_paths:
-      $.errors[0].title: Bad Request
-
-- name: put same aggregates twice
-  PUT: $LAST_URL
-  data:
-      aggregates:
-          - *agg_1
-          - *agg_1
-      resource_provider_generation: 2
-  status: 400
-  response_strings:
-      - has non-unique elements
-  response_json_paths:
-      $.errors[0].title: Bad Request
-
-# The next two tests confirm that prior to version 1.15 we do
-# not set the cache-control or last-modified headers on either
-# PUT or GET.
-
-- name: put some aggregates v1.14
-  PUT: $LAST_URL
-  request_headers:
-      openstack-api-version: placement 1.14
-  data:
-      - *agg_1
-      - *agg_2
-  response_forbidden_headers:
-      - last-modified
-      - cache-control
-
-- name: get those aggregates v1.14
-  GET: $LAST_URL
-  request_headers:
-      openstack-api-version: placement 1.14
-  response_forbidden_headers:
-      - last-modified
-      - cache-control
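
The "new payload" tests above exercise the aggregates format introduced in placement microversion 1.19, where the PUT body pairs the aggregate list with the provider's current generation and a stale generation is rejected with 409 placement.concurrent_update. A minimal client-side sketch of that request is below; the endpoint, token, provider UUID, and aggregate UUIDs are illustrative assumptions, not values taken from the removed test file.

    # Sketch only: endpoint, token, provider and aggregate UUIDs are assumptions.
    import requests

    PLACEMENT = "http://localhost:8778"   # assumed placement endpoint
    RP_UUID = "REPLACE-WITH-AN-EXISTING-PROVIDER-UUID"
    HEADERS = {
        "x-auth-token": "admin",
        "openstack-api-version": "placement 1.19",
        "accept": "application/json",
        "content-type": "application/json",
    }

    # From 1.19 onward the body must carry the provider generation alongside
    # the aggregate list; a stale generation yields 409.
    payload = {
        "resource_provider_generation": 0,
        "aggregates": [
            "018b3e03-7c24-4ac6-9a37-c1a715bc3090",   # assumed aggregate UUIDs
            "4d490ba3-8da7-4c71-b2ae-c8b57bf92852",
        ],
    }

    resp = requests.put(
        "%s/resource_providers/%s/aggregates" % (PLACEMENT, RP_UUID),
        json=payload, headers=HEADERS)
    print(resp.status_code, resp.json())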

+ 0
- 77
nova/tests/functional/api/openstack/placement/gabbits/allocation-bad-class.yaml

@@ -1,77 +0,0 @@
-
-fixtures:
-    - APIFixture
-
-defaults:
-    request_headers:
-        x-auth-token: admin
-        accept: application/json
-        content-type: application/json
-        # Using <= 1.11 allows the PUT /allocations/{uuid} below
-        # to work with the older request form.
-        openstack-api-version: placement 1.11
-
-tests:
-
-- name: create a resource provider
-  POST: /resource_providers
-  data:
-      name: an rp
-  status: 201
-
-- name: get resource provider
-  GET: $LOCATION
-  status: 200
-
-- name: create a resource class
-  PUT: /resource_classes/CUSTOM_GOLD
-  status: 201
-
-- name: add inventory to an rp
-  PUT: /resource_providers/$HISTORY['get resource provider'].$RESPONSE['$.uuid']/inventories
-  data:
-      resource_provider_generation: 0
-      inventories:
-          VCPU:
-              total: 24
-          CUSTOM_GOLD:
-              total: 5
-  status: 200
-
-- name: allocate some of it two
-  desc: this is the one that used to raise a 500
-  PUT: /allocations/6d9f83db-6eb5-49f6-84b0-5d03c6aa9fc8
-  data:
-      allocations:
-          - resource_provider:
-                uuid: $HISTORY['get resource provider'].$RESPONSE['$.uuid']
-            resources:
-                DISK_GB: 5
-                CUSTOM_GOLD: 1
-      project_id: 42a32c07-3eeb-4401-9373-68a8cdca6784
-      user_id: 66cb2f29-c86d-47c3-8af5-69ae7b778c70
-  status: 409
-
-- name: allocate some of it custom
-  PUT: /allocations/6d9f83db-6eb5-49f6-84b0-5d03c6aa9fc8
-  data:
-      allocations:
-          - resource_provider:
-                uuid: $HISTORY['get resource provider'].$RESPONSE['$.uuid']
-            resources:
-                CUSTOM_GOLD: 1
-      project_id: 42a32c07-3eeb-4401-9373-68a8cdca6784
-      user_id: 66cb2f29-c86d-47c3-8af5-69ae7b778c70
-  status: 204
-
-- name: allocate some of it standard
-  PUT: /allocations/6d9f83db-6eb5-49f6-84b0-5d03c6aa9fc8
-  data:
-      allocations:
-          - resource_provider:
-                uuid: $HISTORY['get resource provider'].$RESPONSE['$.uuid']
-            resources:
-                DISK_GB: 1
-      project_id: 42a32c07-3eeb-4401-9373-68a8cdca6784
-      user_id: 66cb2f29-c86d-47c3-8af5-69ae7b778c70
-  status: 409
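
The three allocation requests above use the pre-1.12 PUT /allocations/{consumer_uuid} form, a list of per-provider allocations, and show that requesting a class the provider has no inventory of (DISK_GB here) is rejected with 409 while the CUSTOM_GOLD-only allocation succeeds with 204. A minimal sketch of that older request form follows; the endpoint, token, and provider UUID are illustrative assumptions.

    # Sketch only: endpoint, token and the provider UUID are assumptions.
    import requests

    PLACEMENT = "http://localhost:8778"    # assumed placement endpoint
    CONSUMER = "6d9f83db-6eb5-49f6-84b0-5d03c6aa9fc8"
    RP_UUID = "REPLACE-WITH-THE-PROVIDER-UUID"
    HEADERS = {
        "x-auth-token": "admin",
        "openstack-api-version": "placement 1.11",
        "accept": "application/json",
        "content-type": "application/json",
    }

    # Microversions <= 1.11 take a list of per-provider allocations keyed by
    # resource_provider.uuid; only classes the provider has inventory for
    # can be allocated.
    payload = {
        "allocations": [
            {
                "resource_provider": {"uuid": RP_UUID},
                "resources": {"CUSTOM_GOLD": 1},
            },
        ],
        "project_id": "42a32c07-3eeb-4401-9373-68a8cdca6784",
        "user_id": "66cb2f29-c86d-47c3-8af5-69ae7b778c70",
    }

    resp = requests.put("%s/allocations/%s" % (PLACEMENT, CONSUMER),
                        json=payload, headers=HEADERS)
    print(resp.status_code)   # 204 when the allocation is accepted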

+ 0
- 141
nova/tests/functional/api/openstack/placement/gabbits/allocation-candidates-member-of.yaml

@@ -1,141 +0,0 @@
-# Tests of allocation candidates API
-
-fixtures:
-    - NonSharedStorageFixture
-
-defaults:
-    request_headers:
-        x-auth-token: admin
-        content-type: application/json
-        accept: application/json
-        openstack-api-version: placement 1.24
-
-tests:
-
-- name: get bad member_of microversion
-  GET: /allocation_candidates?resources=VCPU:1&member_of=in:$ENVIRON['AGGA_UUID'],$ENVIRON['AGGB_UUID']
-  request_headers:
-      openstack-api-version: placement 1.18
-  status: 400
-  response_strings:
-      - Invalid query string parameters
-      - "'member_of' was unexpected"
-
-- name: get allocation candidates invalid member_of value
-  GET: /allocation_candidates?resources=VCPU:1,MEMORY_MB:1024,DISK_GB:100&member_of=INVALID_UUID
-  status: 400
-  response_strings:
-      - Expected 'member_of' parameter to contain valid UUID(s).
-
-- name: get allocation candidates no 'in:' for multiple member_of
-  GET: /allocation_candidates?resources=VCPU:1,MEMORY_MB:1024,DISK_GB:100&member_of=$ENVIRON['AGGA_UUID'],$ENVIRON['AGGB_UUID']
-  status: 400
-  response_strings:
-      - Multiple values for 'member_of' must be prefixed with the 'in:' keyword
-
-- name: get allocation candidates multiple member_of with 'in:' but invalid values