nova-manage: Add flavor scanning to migrate_to_unified_limits

This makes 'nova-manage limits migrate_to_unified_limits' scan the API
database for flavors and detect if any resource classes are missing
registered limits in Keystone.

Related to blueprint unified-limits-nova-unset-limits

Change-Id: I431176fd4d09201c551d8f82c71515cd4616cfea
melanie witt 2024-07-13 08:23:37 +00:00
parent 3b530ac15b
commit 294e21c803
9 changed files with 636 additions and 75 deletions


@ -1813,7 +1813,8 @@ limits migrate_to_unified_limits
.. code-block:: shell
nova-manage limits migrate_to_unified_limits [--project-id <project-id>]
[--region-id <region-id>] [--verbose] [--dry-run]
[--region-id <region-id>] [--verbose] [--dry-run] [--quiet]
[--no-embedded-flavor-scan]
Migrate quota limits from the Nova database to unified limits in Keystone.
@ -1821,26 +1822,25 @@ This command is useful for operators to migrate from legacy quotas to unified
limits. Limits are migrated by copying them from the Nova database to Keystone
by creating them using the Keystone API.
The Nova configuration file used by ``nova-manage`` must have a ``[keystone]``
section that contains authentication settings in order for the Keystone API
calls to succeed. As an example:
The Nova configuration file used by ``nova-manage`` must have a
:oslo.config:group:`keystone_authtoken` section that contains authentication
settings in order for the Keystone API calls to succeed. As an example:
.. code-block:: ini
[keystone]
region_name = RegionOne
[keystone_authtoken]
project_domain_name = Default
project_name = service
user_domain_name = Default
username = nova
auth_url = http://127.0.0.1/identity
auth_type = password
username = admin
password = <password>
system_scope = all
Under the default `Keystone policy configuration`_, access to create, update, and
delete in the `unified limits API`_ is restricted to callers with
`system-scoped authorization tokens`_. The ``system_scope = all`` setting
indicates the scope for system operations. You will need to ensure that the
user configured under ``[keystone]`` has the necessary role and scope.
delete in the `unified limits API`_ is restricted to callers with the ``admin``
role. You will need to ensure that the user configured under
:oslo.config:group:`keystone_authtoken` has the necessary role and scope.
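For illustration only (the user, project, and domain names below are
assumptions and should be adjusted for your deployment), the role could be
granted with the Keystone CLI:

.. code-block:: shell

   # Example only: substitute the user, project, and domain names used in
   # your deployment.
   openstack role add --user nova --user-domain Default \
       --project service --project-domain Default admin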
.. warning::
@ -1859,10 +1859,14 @@ user configured under ``[keystone]`` has the necessary role and scope.
.. _Keystone policy configuration: https://docs.openstack.org/keystone/latest/configuration/policy.html
.. _unified limits API: https://docs.openstack.org/api-ref/identity/v3/index.html#unified-limits
.. _system-scoped authorization tokens: https://docs.openstack.org/keystone/latest/admin/tokens-overview.html#system-scoped-tokens
.. versionadded:: 28.0.0 (2023.2 Bobcat)
.. versionchanged:: 31.0.0 (2025.1 Epoxy)
Added flavor scanning for resource classes missing limits along with the
--quiet and --no-embedded-flavor-scan options.
.. rubric:: Options
.. option:: --project-id <project-id>
@ -1879,7 +1883,16 @@ user configured under ``[keystone]`` has the necessary role and scope.
.. option:: --dry-run
Show what limits would be created without actually creating them.
Show what limits would be created without actually creating them. Flavors
will still be scanned for resource classes missing limits.
.. option:: --quiet
Do not output anything during execution.
.. option:: --no-embedded-flavor-scan
Do not scan instances' embedded flavors for resource classes missing limits.
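For example, a dry run scoped to a single region that skips the embedded
flavor scan could be invoked as follows (the region name is illustrative):

.. code-block:: shell

   # Example invocation; "RegionOne" is a placeholder region name.
   nova-manage limits migrate_to_unified_limits \
       --region-id RegionOne --dry-run --no-embedded-flavor-scan --verbose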
.. rubric:: Return codes
@ -1895,6 +1908,8 @@ user configured under ``[keystone]`` has the necessary role and scope.
- An unexpected error occurred
* - 2
- Failed to connect to the database
* - 3
- Missing registered limits were identified
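As a sketch only, a wrapper script could key off the new return code to prompt
for follow-up action before re-running the migration:

.. code-block:: shell

   # Hypothetical wrapper logic; adapt to your tooling.
   nova-manage limits migrate_to_unified_limits --region-id RegionOne
   rc=$?
   if [[ ${rc} -eq 3 ]]; then
       echo "Create registered limits for the resource classes reported" \
            "above, then re-run the migration."
   fi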
See Also


@ -273,3 +273,102 @@ openstack --os-compute-api-version 2.37 \
# Delete the servers.
openstack server delete metadata-items-test1 metadata-items-test2
# Test 'nova-manage limits migrate_to_unified_limits' by creating a test region
# with no registered limits in it, run the nova-manage command, and verify the
# expected limits were created and warned about.
echo "Testing nova-manage limits migrate_to_unified_limits"
ul_test_region=RegionTestNovaUnifiedLimits
openstack --os-cloud devstack-admin region create $ul_test_region
# Verify there are no registered limits in the test region.
registered_limits=$(openstack --os-cloud devstack registered limit list \
--region $ul_test_region -f value)
if [[ "$registered_limits" != "" ]]; then
echo "There should be no registered limits in the test region; failing"
exit 2
fi
# Get existing legacy quota limits to use for verification.
legacy_limits=$(openstack --os-cloud devstack quota show --compute -f value -c "Resource" -c "Limit")
# Requires Bash 4.
declare -A legacy_name_limit_map
while read name limit
do legacy_name_limit_map["$name"]="$limit"
done <<< "$legacy_limits"
set +e
set -o pipefail
$MANAGE limits migrate_to_unified_limits --region-id $ul_test_region --verbose | tee /tmp/output
rc=$?
set +o pipefail
set -e
if [[ ${rc} -eq 0 ]]; then
echo "nova-manage should have warned about unset registered limits; failing"
exit 2
fi
# Verify there are now registered limits in the test region.
registered_limits=$(openstack --os-cloud devstack registered limit list \
--region $ul_test_region -f value -c "Resource Name" -c "Default Limit")
if [[ "$registered_limits" == "" ]]; then
echo "There should be registered limits in the test region now; failing"
exit 2
fi
# Get the new unified limits to use for verification.
declare -A unified_name_limit_map
while read name limit
do unified_name_limit_map["$name"]="$limit"
done <<< "$registered_limits"
declare -A old_to_new_name_map
old_to_new_name_map["instances"]="servers"
old_to_new_name_map["cores"]="class:VCPU"
old_to_new_name_map["ram"]="class:MEMORY_MB"
old_to_new_name_map["properties"]="server_metadata_items"
old_to_new_name_map["injected-files"]="server_injected_files"
old_to_new_name_map["injected-file-size"]="server_injected_file_content_bytes"
old_to_new_name_map["injected-path-size"]="server_injected_file_path_bytes"
old_to_new_name_map["key-pairs"]="server_key_pairs"
old_to_new_name_map["server-groups"]="server_groups"
old_to_new_name_map["server-group-members"]="server_group_members"
for old_name in "${!old_to_new_name_map[@]}"; do
new_name="${old_to_new_name_map[$old_name]}"
if [[ "${legacy_name_limit_map[$old_name]}" != "${unified_name_limit_map[$new_name]}" ]]; then
echo "Legacy limit value does not match unified limit value; failing"
exit 2
fi
done
# Create the missing registered limits that were warned about earlier.
missing_limits=$(grep missing /tmp/output | awk '{print $2}')
while read limit
do openstack --os-cloud devstack-system-admin registered limit create \
--region $ul_test_region --service nova --default-limit 5 $limit
done <<< "$missing_limits"
# Run migrate_to_unified_limits again. There should be a success message in the
# output because there should be no resources found that are missing registered
# limits.
$MANAGE limits migrate_to_unified_limits --region-id $ul_test_region --verbose
rc=$?
if [[ ${rc} -ne 0 ]]; then
echo "nova-manage should have output a success message; failing"
exit 2
fi
registered_limit_ids=$(openstack --os-cloud devstack registered limit list \
--region $ul_test_region -f value -c "ID")
openstack --os-cloud devstack-system-admin registered limit delete $registered_limit_ids
openstack --os-cloud devstack-admin region delete $ul_test_region


@ -27,6 +27,7 @@ import functools
import os
import re
import sys
import textwrap
import time
import traceback
import typing as ty
@ -50,8 +51,10 @@ from sqlalchemy.engine import url as sqla_url
from nova.cmd import common as cmd_common
from nova.compute import api
from nova.compute import instance_actions
from nova.compute import instance_list as list_instances
from nova.compute import rpcapi
import nova.conf
from nova.conf import utils as conf_utils
from nova import config
from nova import context
from nova.db import constants as db_const
@ -3439,8 +3442,9 @@ class ImagePropertyCommands:
class LimitsCommands():
def _create_unified_limits(self, ctxt, legacy_defaults, project_id,
region_id, output, dry_run):
def _create_unified_limits(self, ctxt, keystone_api, service_id,
legacy_defaults, project_id, region_id, output,
dry_run):
return_code = 0
# Create registered (default) limits first.
@ -3462,24 +3466,6 @@ class LimitsCommands():
unified_to_legacy_names['class:PCPU'] = 'pcores'
legacy_to_unified_names['pcores'] = 'class:PCPU'
# For auth, a section for [keystone] is required in the config:
#
# [keystone]
# region_name = RegionOne
# user_domain_name = Default
# password = <password>
# username = <username>
# auth_url = http://127.0.0.1/identity
# auth_type = password
# system_scope = all
#
# The configured user needs 'role:admin and system_scope:all' by
# default in order to create limits in Keystone.
keystone_api = utils.get_sdk_adapter('identity')
# Service ID is required in unified limits APIs.
service_id = keystone_api.find_service('nova').id
# Retrieve the existing resource limits from Keystone.
registered_limits = keystone_api.registered_limits(region_id=region_id)
@ -3567,6 +3553,133 @@ class LimitsCommands():
return return_code
@staticmethod
def _get_resources_from_flavor(flavor, warn_output):
resources = set()
for spec in [
s for s in flavor.extra_specs if s.startswith('resources:')]:
resources.add('class:' + spec.lstrip('resources:'))
try:
for resource in scheduler_utils.resources_for_limits(flavor,
is_bfv=False):
resources.add('class:' + resource)
except Exception as e:
# This is to be resilient about potential extra spec translation
# bugs like https://bugs.launchpad.net/nova/+bug/2088831
msg = _('An exception was raised: %s, skipping flavor %s'
% (str(e), flavor.flavorid))
warn_output(msg)
return resources
def _get_resources_from_api_flavors(self, ctxt, output, warn_output):
msg = _('Scanning flavors in API database for resource classes ...')
output(msg)
resources = set()
marker = None
while True:
flavors = objects.FlavorList.get_all(ctxt, limit=500,
marker=marker)
for flavor in flavors:
resources |= self._get_resources_from_flavor(
flavor, warn_output)
if not flavors:
break
marker = flavors[-1].flavorid
return resources
def _get_resources_from_embedded_flavors(self, ctxt, project_id, output,
warn_output):
project_str = f' project {project_id}' if project_id else ''
msg = _('Scanning%s non-deleted instances embedded flavors for '
'resource classes ...' % project_str)
output(msg)
resources = set()
down_cell_uuids = set()
marker = None
while True:
filters = {'deleted': False}
if project_id:
filters['project_id'] = project_id
instances, cells = list_instances.get_instance_objects_sorted(
ctxt, filters=filters, limit=500, marker=marker,
expected_attrs=['flavor'], sort_keys=None, sort_dirs=None)
down_cell_uuids |= set(cells)
for instance in instances:
resources |= self._get_resources_from_flavor(
instance.flavor, warn_output)
if not instances:
break
marker = instances[-1].uuid
return resources, down_cell_uuids
def _scan_flavors(self, ctxt, keystone_api, service_id, project_id,
region_id, output, warn_output, verbose,
no_embedded_flavor_scan):
return_code = 0
# We already know we need to check class:DISK_GB because it is not a
# legacy resource from a quota perspective.
flavor_resources = set(['class:DISK_GB'])
# Scan existing flavors to check whether any requestable resources are
# missing registered limits in Keystone.
flavor_resources |= self._get_resources_from_api_flavors(
ctxt, output, warn_output)
down_cell_uuids = None
if not no_embedded_flavor_scan:
# Scan the embedded flavors of non-deleted instances.
resources, down_cell_uuids = (
self._get_resources_from_embedded_flavors(
ctxt, project_id, output, warn_output))
flavor_resources |= resources
# Retrieve the existing resource limits from Keystone (we may have
# added new ones above).
registered_limits = keystone_api.registered_limits(
service_id=service_id, region_id=region_id)
existing_limits = {
li.resource_name: li.default_limit for li in registered_limits}
table = prettytable.PrettyTable()
table.align = 'l'
table.field_names = ['Resource', 'Registered Limit']
table.sortby = 'Resource'
found_missing = False
for resource in flavor_resources:
if resource in existing_limits:
if verbose:
table.add_row([resource, existing_limits[resource]])
else:
found_missing = True
table.add_row([resource, 'missing'])
if table.rows:
msg = _(
'The following resource classes were found during the scan:\n')
warn_output(msg)
warn_output(table)
if down_cell_uuids:
msg = _(
'NOTE: Cells %s did not respond and their data is not '
'included in this table.' % down_cell_uuids)
warn_output('\n' + textwrap.fill(msg, width=80))
if found_missing:
msg = _(
'WARNING: It is strongly recommended to create registered '
'limits for resource classes missing limits in Keystone '
'before proceeding.')
warn_output('\n' + textwrap.fill(msg, width=80))
return_code = 3
else:
msg = _(
'SUCCESS: All resource classes have registered limits set.')
warn_output(msg)
return return_code
@action_description(
_("Copy quota limits from the Nova API database to Keystone."))
@args('--project-id', metavar='<project-id>', dest='project_id',
@ -3577,21 +3690,37 @@ class LimitsCommands():
help='Provide verbose output during execution.')
@args('--dry-run', action='store_true', dest='dry_run', default=False,
help='Show what limits would be created without actually '
'creating them.')
'creating them. Flavors will still be scanned for resource '
'classes missing limits.')
@args('--quiet', action='store_true', dest='quiet', default=False,
help='Do not output anything during execution.')
@args('--no-embedded-flavor-scan', action='store_true',
dest='no_embedded_flavor_scan', default=False,
help='Do not scan instances embedded flavors for resource classes '
'missing limits.')
def migrate_to_unified_limits(self, project_id=None, region_id=None,
verbose=False, dry_run=False):
verbose=False, dry_run=False, quiet=False,
no_embedded_flavor_scan=False):
"""Migrate quota limits from legacy quotas to unified limits.
Return codes:
* 0: Command completed successfully.
* 1: An unexpected error occurred.
* 2: Failed to connect to the database.
* 3: Missing registered limits were identified.
"""
if verbose and quiet:
print('--verbose and --quiet are mutually exclusive')
return 1
ctxt = context.get_admin_context()
output = lambda msg: None
if verbose:
output = lambda msg: print(msg)
# Verbose output is optional details.
output = lambda msg: print(msg) if verbose else None
# In general, we always want to show important warning output (for
# example, warning about missing registered limits). Only suppress
# warning output if --quiet was specified by the caller.
warn_output = lambda msg: None if quiet else print(msg)
output(_('Reading default limits from the Nova API database ...'))
@ -3619,9 +3748,33 @@ class LimitsCommands():
f'Found default limits in the database: {legacy_defaults} ...')
output(_(msg))
# For auth, reuse the [keystone_authtoken] section.
if not hasattr(CONF, 'keystone_authtoken'):
conf_utils.register_ksa_opts(
CONF, 'keystone_authtoken', 'identity')
keystone_api = utils.get_sdk_adapter(
'identity', conf_group='keystone_authtoken')
# Service ID is required in unified limits APIs.
service_id = keystone_api.find_service('nova').id
try:
return self._create_unified_limits(
ctxt, legacy_defaults, project_id, region_id, output, dry_run)
result = self._create_unified_limits(
ctxt, keystone_api, service_id, legacy_defaults, project_id,
region_id, output, dry_run)
if result:
# If there was an error, just return now.
return result
result = self._scan_flavors(
ctxt, keystone_api, service_id, project_id, region_id,
output, warn_output, verbose, no_embedded_flavor_scan)
return result
except db_exc.CantStartEngineError:
print(_('Failed to connect to the database so aborting this '
'migration attempt. Please check your config file to make '
'sure that [api_database]/connection and '
'[database]/connection are set and run this '
'command again.'))
return 2
except Exception as e:
msg = (f'Unexpected error, see nova-manage.log for the full '
f'trace: {str(e)}')


@ -28,7 +28,7 @@ keystone_group = cfg.OptGroup(
def register_opts(conf):
conf.register_group(keystone_group)
confutils.register_ksa_opts(conf, keystone_group.name,
DEFAULT_SERVICE_TYPE, include_auth=True)
DEFAULT_SERVICE_TYPE, include_auth=False)
def list_opts():


@ -19,6 +19,7 @@
import collections
import contextlib
from contextlib import contextmanager
import copy
import functools
from importlib.abc import MetaPathFinder
import logging as std_logging
@ -1020,6 +1021,7 @@ class OSAPIFixture(fixtures.Fixture):
self.other_api = client.TestOpenStackClient(
'other', base_url, project_id=self.project_id,
roles=['other'])
self.base_url = base_url
# Provide a way to access the wsgi application to tests using
# the fixture.
self.app = app
@ -2035,6 +2037,14 @@ class GreenThreadPoolShutdownWait(fixtures.Fixture):
class UnifiedLimitsFixture(fixtures.Fixture):
"""A fixture that models Keystone unified limits for testing.
Although there exists a LimitFixture in oslo.limit, we need a fixture that
both oslo.limit and bare OpenStack SDK calls could hook into for unified
limits testing. We do some of our own logic outside of oslo.limit and call
the OpenStack SDK directly and we need them both to see the same limits.
"""
def setUp(self):
super().setUp()
self.mock_sdk_adapter = mock.Mock()
@ -2047,7 +2057,17 @@ class UnifiedLimitsFixture(fixtures.Fixture):
self.useFixture(fixtures.MockPatch(
'nova.utils.get_sdk_adapter', fake_get_sdk_adapter))
self.useFixture(fixtures.MockPatch(
'oslo_limit.limit._get_keystone_connection',
return_value=self.mock_sdk_adapter))
# These are needed by oslo.limit.
self.mock_sdk_adapter.get.return_value.json.return_value = {
'model': {'name': 'flat'}}
self.mock_sdk_adapter.get_endpoint.return_value.service_id = None
self.mock_sdk_adapter.get_endpoint.return_value.region_id = None
# These are Keystone API calls that oslo.limit will also use.
self.mock_sdk_adapter.registered_limits.side_effect = (
self.registered_limits)
self.mock_sdk_adapter.limits.side_effect = self.limits
@ -2058,21 +2078,33 @@ class UnifiedLimitsFixture(fixtures.Fixture):
self.registered_limits_list = []
self.limits_list = []
def registered_limits(self, region_id=None):
def registered_limits(
self, region_id=None, resource_name=None, service_id=None):
registered_limits_list = copy.deepcopy(self.registered_limits_list)
if region_id:
return [rl for rl in self.registered_limits_list
if rl.region_id == region_id]
return self.registered_limits_list
registered_limits_list = [rl for rl in registered_limits_list
if rl.region_id == region_id]
if resource_name:
registered_limits_list = [rl for rl in registered_limits_list
if rl.resource_name == resource_name]
for registered_limit in registered_limits_list:
yield registered_limit
def limits(self, project_id=None, region_id=None):
limits_list = self.limits_list
def limits(
self, project_id=None, region_id=None, resource_name=None,
service_id=None):
limits_list = copy.deepcopy(self.limits_list)
if project_id:
limits_list = [pl for pl in limits_list
if pl.project_id == project_id]
if region_id:
limits_list = [pl for pl in limits_list
if pl.region_id == region_id]
return limits_list
if resource_name:
limits_list = [pl for pl in limits_list
if pl.resource_name == resource_name]
for limit in limits_list:
yield limit
def create_registered_limit(self, **attrs):
rl = collections.namedtuple(


@ -396,6 +396,9 @@ class InstanceHelperMixin:
return flavor['id']
def _delete_flavor(self, flavor_id):
self.api_fixture.admin_api.delete_flavor(flavor_id)
def _create_image(self, metadata):
image = {
'id': 'c456eb30-91d7-4f43-8f46-2efd9eccd744',


@ -26,6 +26,7 @@ from oslo_utils.fixture import uuidsentinel as uuids
from oslo_utils import timeutils
from nova.cmd import manage
from nova.compute import instance_list as list_instances
from nova import config
from nova import context
from nova import exception
@ -33,6 +34,7 @@ from nova.network import constants
from nova import objects
from nova import test
from nova.tests import fixtures as nova_fixtures
from nova.tests.functional.api import client as api_client
from nova.tests.functional import fixtures as func_fixtures
from nova.tests.functional import integrated_helpers
from nova.tests.functional import test_servers_resource_request as test_res_req
@ -2472,7 +2474,11 @@ class TestDBArchiveDeletedRowsMultiCellTaskLog(
self.output.getvalue(), r'\| %s.task_log\s+\| 2' % cell_name)
class TestNovaManageLimits(test.TestCase):
class TestNovaManageLimits(integrated_helpers.ProviderUsageBaseTestCase):
# This is required by the parent class.
compute_driver = 'fake.MediumFakeDriver'
NUMBER_OF_CELLS = 2
def setUp(self):
super().setUp()
@ -2481,6 +2487,11 @@ class TestNovaManageLimits(test.TestCase):
self.output = StringIO()
self.useFixture(fixtures.MonkeyPatch('sys.stdout', self.output))
self.ul_api = self.useFixture(nova_fixtures.UnifiedLimitsFixture())
# Start two compute services, one per cell
self.compute1 = self.start_service('compute', host='host1',
cell_name='cell1')
self.compute2 = self.start_service('compute', host='host2',
cell_name='cell2')
@mock.patch('nova.quota.QUOTAS.get_defaults')
def test_migrate_to_unified_limits_no_db_access(self, mock_get_defaults):
@ -2579,18 +2590,18 @@ class TestNovaManageLimits(test.TestCase):
objects.Quotas.create_limit(self.ctxt, uuids.project, 'instances', 25)
# Verify there are no unified limits yet.
registered_limits = self.ul_api.registered_limits()
registered_limits = list(self.ul_api.registered_limits())
self.assertEqual(0, len(registered_limits))
limits = self.ul_api.limits(project_id=uuids.project)
limits = list(self.ul_api.limits(project_id=uuids.project))
self.assertEqual(0, len(limits))
# Verify that --dry-run works to not actually create limits.
self.cli.migrate_to_unified_limits(dry_run=True)
# There should still be no unified limits yet.
registered_limits = self.ul_api.registered_limits()
registered_limits = list(self.ul_api.registered_limits())
self.assertEqual(0, len(registered_limits))
limits = self.ul_api.limits(project_id=uuids.project)
limits = list(self.ul_api.limits(project_id=uuids.project))
self.assertEqual(0, len(limits))
# Migrate the limits.
@ -2615,7 +2626,7 @@ class TestNovaManageLimits(test.TestCase):
'server_group_members': 12,
}
registered_limits = self.ul_api.registered_limits()
registered_limits = list(self.ul_api.registered_limits())
self.assertEqual(11, len(registered_limits))
for rl in registered_limits:
self.assertEqual(
@ -2627,42 +2638,49 @@ class TestNovaManageLimits(test.TestCase):
'servers': 25,
}
limits = self.ul_api.limits(project_id=uuids.project)
limits = list(self.ul_api.limits(project_id=uuids.project))
self.assertEqual(2, len(limits))
for pl in limits:
self.assertEqual(
expected_limits[pl.resource_name], pl.resource_limit)
# Verify there are no project limits for a different project.
other_project_limits = self.ul_api.limits(
project_id=uuids.otherproject)
other_project_limits = list(self.ul_api.limits(
project_id=uuids.otherproject))
self.assertEqual(0, len(other_project_limits))
# Try migrating limits for a specific region.
region_registered_limits = self.ul_api.registered_limits(
region_id=uuids.region)
region_registered_limits = list(self.ul_api.registered_limits(
region_id=uuids.region))
self.assertEqual(0, len(region_registered_limits))
self.cli.migrate_to_unified_limits(
result = self.cli.migrate_to_unified_limits(
region_id=uuids.region, verbose=True)
region_registered_limits = self.ul_api.registered_limits(
region_id=uuids.region)
# There is a missing registered limit for class:DISK_GB.
self.assertEqual(3, result)
region_registered_limits = list(self.ul_api.registered_limits(
region_id=uuids.region))
self.assertEqual(11, len(region_registered_limits))
for rl in region_registered_limits:
self.assertEqual(
expected_registered_limits[rl.resource_name], rl.default_limit)
# Create a registered limit for class:DISK_GB.
self.ul_api.create_registered_limit(
resource_name='class:DISK_GB', default_limit=10)
# Try migrating project limits for that region.
region_limits = self.ul_api.limits(
project_id=uuids.project, region_id=uuids.region)
region_limits = list(self.ul_api.limits(
project_id=uuids.project, region_id=uuids.region))
self.assertEqual(0, len(region_limits))
self.cli.migrate_to_unified_limits(
project_id=uuids.project, region_id=uuids.region, verbose=True)
region_limits = self.ul_api.limits(
project_id=uuids.project, region_id=uuids.region)
region_limits = list(self.ul_api.limits(
project_id=uuids.project, region_id=uuids.region))
self.assertEqual(2, len(region_limits))
for pl in region_limits:
self.assertEqual(
@ -2671,16 +2689,243 @@ class TestNovaManageLimits(test.TestCase):
# Verify no --verbose outputs nothing, migrate limits for a different
# project after clearing stdout.
self.output = StringIO()
self.assertEqual('', self.output.getvalue())
self.useFixture(fixtures.MonkeyPatch('sys.stdout', self.output))
# Create a limit for the other project.
objects.Quotas.create_limit(self.ctxt, uuids.otherproject, 'ram', 2048)
self.cli.migrate_to_unified_limits(project_id=uuids.otherproject)
other_project_limits = self.ul_api.limits(
result = self.cli.migrate_to_unified_limits(
project_id=uuids.otherproject)
other_project_limits = list(self.ul_api.limits(
project_id=uuids.otherproject))
self.assertEqual(1, len(other_project_limits))
# Output should still be empty after migrating.
self.assertEqual('', self.output.getvalue())
# Output should show success after migrating.
self.assertIn('SUCCESS', self.output.getvalue())
self.assertEqual(0, result)
def _add_to_inventory(self, resource):
# Add resource to inventory for both computes.
for rp in self._get_all_providers():
inv = self._get_provider_inventory(rp['uuid'])
inv[resource] = {'total': 10}
self._update_inventory(
rp['uuid'], {'inventories': inv,
'resource_provider_generation': rp['generation']})
def _create_flavor_and_add_to_inventory(self, resource):
# Create a flavor for the resource.
flavor_id = self._create_flavor(
vcpu=1, memory_mb=512, disk=1, ephemeral=0,
extra_spec={f'resources:{resource}': 1})
self._add_to_inventory(resource)
return flavor_id
def test_migrate_to_unified_limits_flavor_scanning(self):
# Create a few flavors in the API database.
for resource in ('NUMA_CORE', 'PCPU', 'NUMA_SOCKET'):
self._create_flavor(
vcpu=1, memory_mb=512, disk=1, ephemeral=0,
extra_spec={f'resources:{resource}': 1})
# Create a few instances with embedded flavors that are *not* in the
# API database.
self._create_resource_class('CUSTOM_BAREMETAL_SMALL')
# Create servers on both computes (and cells).
hosts = ('host1', 'host2')
for i, resource in enumerate(
('VGPU', 'CUSTOM_BAREMETAL_SMALL', 'PGPU')):
flavor_id = self._create_flavor_and_add_to_inventory(resource)
# Create servers on both computes (and thus cells) and two
# projects: nova.tests.fixtures.nova.PROJECT_ID and 'other'.
server = self._create_server(
flavor_id=flavor_id, host=hosts[i % 2], networks='none')
# Delete the flavor so it can only be detected by scanning
# embedded flavors.
self._delete_flavor(flavor_id)
# Delete the last instance which has resources:PGPU. It should not be
# included because the instance is deleted.
self._delete_server(server)
result = self.cli.migrate_to_unified_limits()
# PCPU will have had a registered limit created for it based on VCPU,
# so it should also not be included in the list.
self.assertIn('WARNING', self.output.getvalue())
self.assertIn('class:CUSTOM_BAREMETAL_SMALL', self.output.getvalue())
self.assertIn('class:DISK_GB', self.output.getvalue())
self.assertIn('class:NUMA_CORE', self.output.getvalue())
self.assertIn('class:NUMA_SOCKET', self.output.getvalue())
self.assertIn('class:VGPU', self.output.getvalue())
self.assertEqual(5, self.output.getvalue().count('class:'))
self.assertEqual(3, result)
# Now create registered limits for all of the resources in the list.
resources = (
'CUSTOM_BAREMETAL_SMALL', 'DISK_GB', 'NUMA_CORE', 'NUMA_SOCKET',
'VGPU')
for resource in resources:
self.ul_api.create_registered_limit(
resource_name='class:' + resource, default_limit=10)
# Reset the output and run the migrate command again.
self.output = StringIO()
self.useFixture(fixtures.MonkeyPatch('sys.stdout', self.output))
result = self.cli.migrate_to_unified_limits()
# The output should be the success message because there are no longer
# any resources missing registered limits.
self.assertIn('SUCCESS', self.output.getvalue())
# Return code should be 0 for success.
self.assertEqual(0, result)
def test_migrate_to_unified_limits_flavor_scanning_resource_request(self):
# Create one server that has extra specs that will get translated into
# resource classes.
extra_spec = {
'hw:mem_encryption': 'true',
'hw:cpu_policy': 'dedicated',
}
flavor_id = self._create_flavor(
name='fakeflavor', vcpu=1, memory_mb=512, disk=1, ephemeral=0,
extra_spec=extra_spec)
self._add_to_inventory('MEM_ENCRYPTION_CONTEXT')
image_id = self._create_image(
metadata={'hw_firmware_type': 'uefi'})['id']
self._create_server(
flavor_id=flavor_id, networks='none', image_uuid=image_id)
result = self.cli.migrate_to_unified_limits()
# FIXME(melwitt): Update this to remove the exception messages and add
# class:MEM_ENCRYPTION_CONTEXT to the table when
# https://bugs.launchpad.net/nova/+bug/2088831 is fixed.
# Message is output two times: one for API database scan and one for
# embedded flavor scan.
self.assertEqual(2, self.output.getvalue().count('exception'))
self.assertIn('WARNING', self.output.getvalue())
self.assertIn('class:DISK_GB', self.output.getvalue())
self.assertEqual(1, self.output.getvalue().count('class:'))
self.assertEqual(3, result)
def test_migrate_to_unified_limits_flavor_scanning_project(self):
# Create a client that uses a different project.
other_api = api_client.TestOpenStackClient(
'other', self.api.base_url, project_id='other',
roles=['reader', 'member'])
other_api.microversion = '2.74'
self._create_resource_class('CUSTOM_GOLD')
apis = (self.api, other_api)
for i, resource in enumerate(('VGPU', 'CUSTOM_GOLD')):
flavor_id = self._create_flavor_and_add_to_inventory(resource)
# Create servers for two projects:
# nova.tests.fixtures.nova.PROJECT_ID and 'other'.
self._create_server(
flavor_id=flavor_id, api=apis[i % 2], networks='none')
# Delete the flavor so it can only be detected by scanning embedded
# flavors.
self._delete_flavor(flavor_id)
# Scope the command to project 'other'. This should cause
# VGPU to not be detected in the embedded flavors.
result = self.cli.migrate_to_unified_limits(project_id='other')
# DISK_GB will also be found because it's a known standard resource
# class that we know will be allocated.
self.assertIn('WARNING', self.output.getvalue())
self.assertIn('class:CUSTOM_GOLD', self.output.getvalue())
self.assertIn('class:DISK_GB', self.output.getvalue())
self.assertEqual(2, self.output.getvalue().count('class:'))
self.assertEqual(3, result)
@mock.patch.object(
manage.LimitsCommands, '_get_resources_from_embedded_flavors',
new=mock.NonCallableMock())
def test_migrate_to_unified_limits_no_embedded_flavor_scan(self):
# Create a few flavors in the API database.
for resource in ('NUMA_CORE', 'PCPU', 'NUMA_SOCKET'):
self._create_flavor(
vcpu=1, memory_mb=512, disk=1, ephemeral=0,
extra_spec={f'resources:{resource}': 1})
# Create a few instances with embedded flavors that are *not* in the
# API database.
self._create_resource_class('CUSTOM_BAREMETAL_SMALL')
# Create servers on both computes (and cells).
hosts = ('host1', 'host2')
for i, resource in enumerate(
('VGPU', 'CUSTOM_BAREMETAL_SMALL', 'PGPU')):
flavor_id = self._create_flavor_and_add_to_inventory(resource)
# Create servers on both computes (and thus cells) and two
# projects: nova.tests.fixtures.nova.PROJECT_ID and 'other'.
self._create_server(
flavor_id=flavor_id, host=hosts[i % 2], networks='none')
# Delete the flavor so it can only be detected by scanning embedded
# flavors.
self._delete_flavor(flavor_id)
result = self.cli.migrate_to_unified_limits(
no_embedded_flavor_scan=True)
# VGPU, CUSTOM_BAREMETAL_SMALL, and PGPU should not be included in the
# output because the embedded flavor scan should have been skipped.
self.assertIn('WARNING', self.output.getvalue())
self.assertIn('class:DISK_GB', self.output.getvalue())
self.assertIn('class:NUMA_CORE', self.output.getvalue())
self.assertIn('class:NUMA_SOCKET', self.output.getvalue())
self.assertEqual(3, self.output.getvalue().count('class:'))
self.assertEqual(3, result)
def test_migrate_to_unified_limits_flavor_scanning_down_cell(self):
# Fake a down cell returned from the instance list.
real_get_instance_objects_sorted = (
list_instances.get_instance_objects_sorted)
def fake_get_instance_objects_sorted(*args, **kwargs):
instances, down_cells = real_get_instance_objects_sorted(
*args, **kwargs)
return instances, [uuids.down_cell]
self.useFixture(fixtures.MockPatchObject(
list_instances, 'get_instance_objects_sorted',
fake_get_instance_objects_sorted))
self._create_resource_class('CUSTOM_GOLD')
for i, resource in enumerate(('VGPU', 'CUSTOM_GOLD')):
flavor_id = self._create_flavor_and_add_to_inventory(resource)
# Create servers for two projects:
# nova.tests.fixtures.nova.PROJECT_ID and 'other'.
self._create_server(flavor_id=flavor_id, networks='none')
# Delete the flavor so it can only be detected by scanning embedded
# flavors.
self._delete_flavor(flavor_id)
result = self.cli.migrate_to_unified_limits()
# DISK_GB will also be found because it's a known standard resource
# class that we know will be allocated.
self.assertIn('WARNING', self.output.getvalue())
self.assertIn("Cells {'%s'}" % uuids.down_cell, self.output.getvalue())
self.assertIn('class:CUSTOM_GOLD', self.output.getvalue())
self.assertIn('class:DISK_GB', self.output.getvalue())
self.assertIn('class:VGPU', self.output.getvalue())
self.assertEqual(3, self.output.getvalue().count('class:'))
self.assertEqual(3, result)


@ -965,7 +965,7 @@ def get_ksa_adapter(service_type, ksa_auth=None, ksa_session=None,
min_version=min_version, max_version=max_version, raise_exc=False)
def get_sdk_adapter(service_type, check_service=False):
def get_sdk_adapter(service_type, check_service=False, conf_group=None):
"""Construct an openstacksdk-brokered Adapter for a given service type.
We expect to find a conf group whose name corresponds to the service_type's
@ -976,12 +976,14 @@ def get_sdk_adapter(service_type, check_service=False):
is to be constructed.
:param check_service: If True, we will query the endpoint to make sure the
service is alive, raising ServiceUnavailable if it is not.
:param conf_group: String name of the conf group to use, otherwise the name
of the service_type will be used.
:return: An openstack.proxy.Proxy object for the specified service_type.
:raise: ConfGroupForServiceTypeNotFound If no conf group name could be
found for the specified service_type.
:raise: ServiceUnavailable if check_service is True and the service is down
"""
confgrp = _get_conf_group(service_type)
confgrp = conf_group or _get_conf_group(service_type)
sess = _get_auth_and_session(confgrp)[1]
try:
conn = connection.Connection(


@ -0,0 +1,12 @@
features:
- |
The ``nova-manage limits migrate_to_unified_limits`` command will now scan
the API and cell databases to detect resource classes that do not have
registered limits set in Keystone and report them to the console.
The purpose of the flavor scan is to assist operators who are migrating
from legacy quotas to unified limits quotas. The current behavior with
unified limits is to fail quota checks if resources requested are missing
registered limits in Keystone. With flavor scanning in
``migrate_to_unified_limits``, operators can easily determine which resource
classes they need to create registered limits for.
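As an illustrative follow-up (the resource class and limit value below are
placeholders), a missing registered limit reported by the scan could be
created with the Keystone CLI before re-running the command:

.. code-block:: shell

   # Placeholder resource class and limit value shown for illustration.
   openstack registered limit create --service nova \
       --default-limit 8 class:VGPU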