Merge "Add [quota]unified_limits_resource_(strategy|list)"
commit 68eb977654
@ -393,3 +393,40 @@ corresponding unified limits.
 quotas are not supported in unified limits.
 
 .. _nova-manage: https://docs.openstack.org/nova/latest/cli/nova-manage.html#limits-migrate-to-unified-limits
+
+
+Require or ignore resources
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The :oslo.config:option:`quota.unified_limits_resource_strategy` and
+:oslo.config:option:`quota.unified_limits_resource_list` configuration options
+are available for operators to specify which cloud resources they will require
+to have registered limits set in Keystone. The default strategy is ``require``
+and the default resource list contains the ``servers`` resource.
+
+When ``unified_limits_resource_strategy = require``, if a resource in
+``unified_limits_resource_list`` is requested and has no registered limit set,
+the quota limit for that resource will be considered to be 0 and all requests
+to allocate that resource will be rejected for being over quota. Any resource
+not in the list will be considered to have unlimited quota.
+
+When ``unified_limits_resource_strategy = ignore``, if a resource in
+``unified_limits_resource_list`` is requested and has no registered limit set,
+the quota limit for that resource will be considered to be unlimited and all
+requests to allocate that resource will be accepted. Any resource not in the
+list will be considered to have 0 quota.
+
+The options should be configured for the :program:`nova-api` and
+:program:`nova-conductor` services. The :program:`nova-conductor` service
+performs quota enforcement when :oslo.config:option:`quota.recheck_quota` is
+``True`` (the default).
+
+The ``unified_limits_resource_list`` list can also be set to an empty list.
+
+Example configuration values:
+
+.. code-block:: ini
+
+    [quota]
+    unified_limits_resource_strategy = require
+    unified_limits_resource_list = servers,class:VCPU,class:MEMORY_MB,class:DISK_GB
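The example values above can be sanity-checked with only the Python stdlib. This sketch uses `configparser` and a plain `split(',')` as a simplification of oslo.config's `ListOpt` parsing (the real option additionally validates each item against known resource class names):

```python
import configparser

# The ini content mirrors the docs example above.
INI = """
[quota]
unified_limits_resource_strategy = require
unified_limits_resource_list = servers,class:VCPU,class:MEMORY_MB,class:DISK_GB
"""

cp = configparser.ConfigParser()
cp.read_string(INI)
strategy = cp['quota']['unified_limits_resource_strategy']
# oslo.config's ListOpt splits on commas; a bare split approximates it here.
resources = cp['quota']['unified_limits_resource_list'].split(',')
```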
@ -14,8 +14,37 @@
 # License for the specific language governing permissions and limitations
 # under the License.
 
+import re
+
+import os_resource_classes as orc
 from oslo_config import cfg
+from oslo_config import types as cfg_types
+
+
+class UnifiedLimitsResource(cfg_types.String):
+
+    # NOTE(melwitt): Attempting to import nova.limit.(local|placement) for
+    # LEGACY_LIMITS resource names results in error:
+    # AttributeError: module 'nova' has no attribute 'conf'
+    resources = {
+        'server_metadata_items', 'server_injected_files',
+        'server_injected_file_content_bytes',
+        'server_injected_file_path_bytes', 'server_key_pairs', 'server_groups',
+        'server_group_members', 'servers'}
+
+    def __call__(self, value):
+        super().__call__(value)
+        valid_resources = self.resources
+        valid_resources |= {f'class:{cls}' for cls in orc.STANDARDS}
+        custom_regex = r'^class:CUSTOM_[A-Z0-9_]+$'
+        if value in valid_resources or re.fullmatch(custom_regex, value):
+            return value
+        msg = (
+            f'Value {value} is not a valid resource class name. Must be '
+            f'one of: {valid_resources} or a custom resource class name '
+            f'of the form {custom_regex[1:-1]}')
+        raise ValueError(msg)
+
+
 quota_group = cfg.OptGroup(
     name='quota',
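The validator above accepts a fixed set of quota names plus `class:`-prefixed Placement resource classes. A standalone sketch of the same rule, where the small `KNOWN` set stands in for the real resource names and the `os_resource_classes` standard classes:

```python
import re

# Stand-in for the real resource name set and orc.STANDARDS classes.
KNOWN = {'servers', 'server_key_pairs', 'class:VCPU', 'class:MEMORY_MB'}
CUSTOM = r'^class:CUSTOM_[A-Z0-9_]+$'


def validate(value):
    # Accept known names, or custom resource classes like class:CUSTOM_GOLD.
    if value in KNOWN or re.fullmatch(CUSTOM, value):
        return value
    raise ValueError(f'{value} is not a valid resource class name')
```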
@ -269,6 +298,87 @@ query during each quota check, if this configuration option is set to True.
 Operators who want to avoid the performance hit from the EXISTS queries should
 wait to set this configuration option to True until after they have completed
 their online data migrations via ``nova-manage db online_data_migrations``.
 """),
+    cfg.StrOpt(
+        'unified_limits_resource_strategy',
+        default='require',
+        choices=[
+            ('require', 'Require the resources in '
+             '``unified_limits_resource_list`` to have registered limits set '
+             'in Keystone'),
+            ('ignore', 'Ignore the resources in '
+             '``unified_limits_resource_list`` if they do not have registered '
+             'limits set in Keystone'),
+        ],
+        help="""
+Specify the semantics of the ``unified_limits_resource_list``.
+
+When the quota driver is set to the ``UnifiedLimitsDriver``, resources may be
+specified to either require registered limits set in Keystone or ignore if they
+do not have registered limits set.
+
+When set to ``require``, if a resource in ``unified_limits_resource_list`` is
+requested and has no registered limit set, the quota limit for that resource
+will be considered to be 0 and all requests to allocate that resource will be
+rejected for being over quota.
+
+When set to ``ignore``, if a resource in ``unified_limits_resource_list`` is
+requested and has no registered limit set, the quota limit for that resource
+will be considered to be unlimited and all requests to allocate that resource
+will be accepted.
+
+Related options:
+
+* ``unified_limits_resource_list``: This must contain either resources for
+  which to require registered limits set or resources to ignore if they do not
+  have registered limits set. It can also be set to an empty list.
+"""),
+    cfg.ListOpt(
+        'unified_limits_resource_list',
+        item_type=UnifiedLimitsResource(),
+        default=['servers'],
+        help="""
+Specify a list of resources to require or ignore registered limits.
+
+When the quota driver is set to the ``UnifiedLimitsDriver``, require or ignore
+resources in this list to have registered limits set in Keystone.
+
+When ``unified_limits_resource_strategy`` is ``require``, if a resource in this
+list is requested and has no registered limit set, the quota limit for that
+resource will be considered to be 0 and all requests to allocate that resource
+will be rejected for being over quota.
+
+When ``unified_limits_resource_strategy`` is ``ignore``, if a resource in this
+list is requested and has no registered limit set, the quota limit for that
+resource will be considered to be unlimited and all requests to allocate that
+resource will be accepted.
+
+The list can also be set to an empty list.
+
+Valid list item values are:
+
+* ``servers``
+
+* ``class:<Placement resource class name>``
+
+* ``server_key_pairs``
+
+* ``server_groups``
+
+* ``server_group_members``
+
+* ``server_metadata_items``
+
+* ``server_injected_files``
+
+* ``server_injected_file_content_bytes``
+
+* ``server_injected_file_path_bytes``
+
+Related options:
+
+* ``unified_limits_resource_strategy``: This must be set to ``require`` or
+  ``ignore``
+"""),
 ]
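The two strategies described in the option help reduce to a small decision table. A minimal sketch (hypothetical helper, not Nova code), where `None` stands for "no registered limit" on input and "unlimited" on output:

```python
def effective_limit(strategy, resource_list, resource, registered_limit):
    """Return the quota limit the configured strategy implies.

    registered_limit is None when no registered limit is set in Keystone;
    a return value of None means unlimited.
    """
    if registered_limit is not None:
        return registered_limit
    if strategy == 'require':
        # Listed resources must have limits; a missing limit acts as 0 quota.
        return 0 if resource in resource_list else None
    if strategy == 'ignore':
        # Listed resources are forgiven; unlisted missing limits act as 0.
        return None if resource in resource_list else 0
    raise ValueError(f'unknown strategy: {strategy}')
```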
@ -139,9 +139,10 @@ def enforce_api_limit(entity_type: str, count: int) -> None:
     try:
         enforcer.enforce(None, {entity_type: count})
     except limit_exceptions.ProjectOverLimit as e:
-        # Copy the exception message to a OverQuota to propagate to the
-        # API layer.
-        raise EXCEPTIONS.get(entity_type, exception.OverQuota)(str(e))
+        if nova_limit_utils.should_enforce(e):
+            # Copy the exception message to a OverQuota to propagate to the
+            # API layer.
+            raise EXCEPTIONS.get(entity_type, exception.OverQuota)(str(e))
 
 
 def enforce_db_limit(
@ -188,9 +189,10 @@ def enforce_db_limit(
     try:
         enforcer.enforce(None, {entity_type: delta})
     except limit_exceptions.ProjectOverLimit as e:
-        # Copy the exception message to a OverQuota to propagate to the
-        # API layer.
-        raise EXCEPTIONS.get(entity_type, exception.OverQuota)(str(e))
+        if nova_limit_utils.should_enforce(e):
+            # Copy the exception message to a OverQuota to propagate to the
+            # API layer.
+            raise EXCEPTIONS.get(entity_type, exception.OverQuota)(str(e))
 
 
 def _convert_keys_to_legacy_name(
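The `EXCEPTIONS.get(entity_type, exception.OverQuota)` pattern in the hunk above maps an oslo.limit failure onto a per-entity exception class with a generic fallback. A minimal sketch with stand-in exception classes (names here are hypothetical, not Nova's):

```python
class OverQuota(Exception):
    """Generic fallback exception."""


class KeypairLimitExceeded(OverQuota):
    """Entity-specific exception."""


# Per-entity exception map with a generic fallback, mirroring
# EXCEPTIONS.get(entity_type, exception.OverQuota) in the diff.
EXCEPTIONS = {'server_key_pairs': KeypairLimitExceeded}


def raise_over_quota(entity_type, message):
    # Look up the specific class, falling back to the generic one.
    raise EXCEPTIONS.get(entity_type, OverQuota)(message)
```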
@ -177,18 +177,19 @@ def enforce_num_instances_and_flavor(
     try:
         enforcer.enforce(project_id, deltas)
     except limit_exceptions.ProjectOverLimit as e:
-        # NOTE(johngarbutt) we can do better, but this is very simple
-        LOG.debug("Limit check failed with count %s retrying with count %s",
-                  max_count, max_count - 1)
-        try:
-            return enforce_num_instances_and_flavor(context, project_id,
-                                                    flavor, is_bfvm, min_count,
-                                                    max_count - 1,
-                                                    enforcer=enforcer)
-        except ValueError:
-            # Copy the *original* exception message to a OverQuota to
-            # propagate to the API layer
-            raise exception.TooManyInstances(str(e))
+        if limit_utils.should_enforce(e):
+            # NOTE(johngarbutt) we can do better, but this is very simple
+            LOG.debug(
+                "Limit check failed with count %s retrying with count %s",
+                max_count, max_count - 1)
+            try:
+                return enforce_num_instances_and_flavor(
+                    context, project_id, flavor, is_bfvm, min_count,
+                    max_count - 1, enforcer=enforcer)
+            except ValueError:
+                # Copy the *original* exception message to a OverQuota to
+                # propagate to the API layer
+                raise exception.TooManyInstances(str(e))
 
     # no problems with max_count, so we return max count
     return max_count
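The retry above decrements `max_count` by one on each over-limit failure until enforcement passes or the range is exhausted. The same search expressed iteratively as a sketch, with a `check` callable standing in for `enforcer.enforce` (this is an illustration of the idea, not Nova's actual recursive implementation):

```python
def fit_count(check, min_count, max_count):
    """Return the largest count in [min_count, max_count] that passes check.

    check(count) raises ValueError when count would be over quota.
    """
    for count in range(max_count, min_count - 1, -1):
        try:
            check(count)
            return count  # no problems with this count
        except ValueError:
            continue  # retry with count - 1
    raise ValueError('over quota even at min_count')
```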
@ -12,11 +12,125 @@
 # License for the specific language governing permissions and limitations
 # under the License.
 
+from oslo_limit import exception as limit_exceptions
+from oslo_limit import limit
+from oslo_log import log as logging
+
 import nova.conf
 
+LOG = logging.getLogger(__name__)
 CONF = nova.conf.CONF
 
 UNIFIED_LIMITS_DRIVER = "nova.quota.UnifiedLimitsDriver"
+ENDPOINT = None
 
 
 def use_unified_limits():
     return CONF.quota.driver == UNIFIED_LIMITS_DRIVER
+
+
+def _endpoint():
+    global ENDPOINT
+    if ENDPOINT is None:
+        # This is copied from oslo_limit/limit.py
+        endpoint_id = CONF.oslo_limit.endpoint_id
+        if not endpoint_id:
+            raise ValueError("endpoint_id is not configured")
+        enforcer = limit.Enforcer(lambda: None)
+        ENDPOINT = enforcer.connection.get_endpoint(endpoint_id)
+    return ENDPOINT
+
+
+def should_enforce(exc: limit_exceptions.ProjectOverLimit) -> bool:
+    """Whether the exceeded resource limit should be enforced.
+
+    Given a ProjectOverLimit exception from oslo.limit, check whether the
+    involved limit(s) should be enforced. This is needed if we need more logic
+    than is available by default in oslo.limit.
+
+    :param exc: An oslo.limit ProjectOverLimit exception instance, which
+        contains a list of OverLimitInfo. Each OverLimitInfo includes a
+        resource_name, limit, current_usage, and delta.
+    """
+    # If any exceeded limit is greater than zero, it means an explicitly set
+    # limit has been enforced. And if any explicitly set limit has gone over
+    # quota, the enforcement should be upheld and there is no need to consider
+    # the potential for unset limits.
+    if any(info.limit > 0 for info in exc.over_limit_info_list):
+        return True
+
+    # Next, if all of the exceeded limits are -1, we don't need to enforce and
+    # we can avoid calling Keystone for the list of registered limits.
+    #
+    # A value of -1 is documented in Keystone as meaning unlimited:
+    #
+    # "Note
+    # The default limit of registered limit and the resource limit of project
+    # limit now are limited from -1 to 2147483647 (integer). -1 means no limit
+    # and 2147483647 is the max value for user to define limits."
+    #
+    # https://docs.openstack.org/keystone/latest/admin/unified-limits.html#what-is-a-limit
+    #
+    # but oslo.limit enforce does not treat -1 as unlimited at this time and
+    # instead uses its literal integer value. We will consider any negative
+    # limit value as unlimited.
+    if all(info.limit < 0 for info in exc.over_limit_info_list):
+        return False
+
+    # Only resources with exceeded limits of "0" are candidates for
+    # enforcement.
+    #
+    # A limit of "0" in the over_limit_info_list means that oslo.limit is
+    # telling us the limit is 0. But oslo.limit returns 0 for two cases:
+    # a) it found a limit of 0 in Keystone or b) it did not find a limit in
+    # Keystone at all.
+    #
+    # We will need to query the list of registered limits from Keystone in
+    # order to determine whether each "0" limit is case a) or case b).
+    enforce_candidates = {
+        info.resource_name for info in exc.over_limit_info_list
+        if info.limit == 0}
+
+    # Get a list of all the registered limits. There is not a way to filter by
+    # resource names, however this will do one API call whereas the alternative
+    # is calling GET /registered_limits/{registered_limit_id} for each resource
+    # name.
+    enforcer = limit.Enforcer(lambda: None)
+    registered_limits = list(enforcer.connection.registered_limits(
+        service_id=_endpoint().service_id, region_id=_endpoint().region_id))
+
+    # Make a set of resource names of the registered limits.
+    have_limits_set = {limit.resource_name for limit in registered_limits}
+
+    # If any candidates have limits set, enforce. It means at least one limit
+    # has been explicitly set to 0.
+    if enforce_candidates & have_limits_set:
+        return True
+
+    # The resource list will be either a require list or an ignore list.
+    require_or_ignore = CONF.quota.unified_limits_resource_list
+
+    strategy = CONF.quota.unified_limits_resource_strategy
+    enforced = enforce_candidates
+    if strategy == 'require':
+        # Resources that are in both the candidate list and in the require
+        # list should be enforced.
+        enforced = enforce_candidates & set(require_or_ignore)
+    elif strategy == 'ignore':
+        # Resources that are in the candidate list but are not in the ignore
+        # list should be enforced.
+        enforced = enforce_candidates - set(require_or_ignore)
+    else:
+        LOG.error(
+            f'Invalid strategy value: {strategy} is specified in the '
+            '[quota]unified_limits_resource_strategy config option, so '
+            f'enforcing for resources {enforced}')
+    # Log in case we need to debug unexpected enforcement or non-enforcement.
+    msg = (
+        f'enforcing for resources {enforced}' if enforced else 'not enforcing')
+    LOG.debug(
+        f'Resources {enforce_candidates} have no registered limits set in '
+        f'Keystone. [quota]unified_limits_resource_strategy is {strategy} and '
+        f'[quota]unified_limits_resource_list is {require_or_ignore}, '
+        f'so {msg}')
+    return bool(enforced)
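Stripped of the Keystone round trip, `should_enforce` is a pure decision over the exceeded limit values, the registered-limit names, and the configured strategy. A sketch of that core logic (registered names are passed in directly here rather than fetched, and the function name is hypothetical):

```python
def should_enforce_sketch(over_limits, registered, strategy, resource_list):
    """Decide enforcement; over_limits maps resource name -> exceeded limit.

    registered is the set of resource names with registered limits set.
    """
    if any(v > 0 for v in over_limits.values()):
        return True  # an explicitly set positive limit was exceeded
    if all(v < 0 for v in over_limits.values()):
        return False  # negative limits are treated as unlimited
    # Limits of 0 are ambiguous: explicit 0 in Keystone, or no limit at all.
    candidates = {r for r, v in over_limits.items() if v == 0}
    if candidates & registered:
        return True  # at least one limit was explicitly set to 0
    if strategy == 'require':
        return bool(candidates & set(resource_list))
    if strategy == 'ignore':
        return bool(candidates - set(resource_list))
    return bool(candidates)  # unknown strategy: enforce, like the LOG.error path
```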
@ -329,6 +329,9 @@ class TestCase(base.BaseTestCase):
         # Reset the global key manager
         nova.crypto._KEYMGR = None
 
+        # Reset the global endpoint
+        nova.limit.utils.ENDPOINT = None
+
     def _setup_cells(self):
         """Setup a normal cellsv2 environment.
@ -10,6 +10,8 @@
 # License for the specific language governing permissions and limitations
 # under the License.
 
+from unittest import mock
+
 from oslo_limit import fixture as limit_fixture
 from oslo_serialization import base64
 from oslo_utils.fixture import uuidsentinel as uuids
@ -18,6 +20,7 @@ from nova import context as nova_context
 from nova.limit import local as local_limit
 from nova.objects import flavor as flavor_obj
 from nova.objects import instance_group as group_obj
+from nova.tests import fixtures as nova_fixtures
 from nova.tests.functional.api import client
 from nova.tests.functional import integrated_helpers
@ -220,3 +223,212 @@ class UnifiedLimitsTest(integrated_helpers._IntegratedTestBase):
         self.assertIn('server_group_members', e.response.text)
 
         self.admin_api.delete_server(server['id'])
+
+
+class ResourceStrategyTest(integrated_helpers._IntegratedTestBase):
+
+    def setUp(self):
+        super().setUp()
+        # Use different project_ids for non-admin and admin.
+        self.api.project_id = 'fake'
+        self.admin_api.project_id = 'admin'
+
+        self.flags(driver="nova.quota.UnifiedLimitsDriver", group='quota')
+        self.ctx = nova_context.get_admin_context()
+        self.ul_api = self.useFixture(nova_fixtures.UnifiedLimitsFixture())
+
+    def test_invalid_value_in_resource_list(self):
+        # First two have casing issues, next doesn't have the "class:" prefix,
+        # last is a typo.
+        invalid_names = (
+            'class:vcpu', 'class:CUSTOM_thing', 'VGPU', 'class:MEMRY_MB')
+        for name in invalid_names:
+            e = self.assertRaises(
+                ValueError, self.flags, unified_limits_resource_list=[name],
+                group='quota')
+            self.assertIsInstance(e, ValueError)
+            self.assertIn('not a valid resource class name', str(e))
+
+    def test_valid_custom_resource_classes(self):
+        valid_names = ('class:CUSTOM_GOLD', 'class:CUSTOM_A5_1')
+        for name in valid_names:
+            self.flags(unified_limits_resource_list=[name], group='quota')
+
+    @mock.patch('nova.limit.utils.LOG.error')
+    def test_invalid_strategy_configuration(self, mock_log_error):
+        # Quota should be enforced and fail the check if there is somehow an
+        # invalid strategy value.
+        self.stub_out(
+            'nova.limit.utils.CONF.quota.unified_limits_resource_strategy',
+            'bogus')
+        e = self.assertRaises(
+            client.OpenStackApiException, self._create_server)
+        self.assertEqual(403, e.response.status_code)
+        expected = (
+            'Invalid strategy value: bogus is specified in the '
+            '[quota]unified_limits_resource_strategy config option, so '
+            'enforcing for resources')
+        mock_log_error.assert_called()
+        self.assertIn(expected, mock_log_error.call_args.args[0])
+
+    @mock.patch('nova.limit.utils.LOG.debug')
+    def test_required_limits_set(self, mock_log_debug):
+        # Required resources configured are: 'servers', MEMORY_MB, and VCPU
+        self.flags(unified_limits_resource_strategy='require', group='quota')
+        require = ['servers', 'class:MEMORY_MB', 'class:VCPU']
+        self.flags(unified_limits_resource_list=require, group='quota')
+        # Purposely not setting any quota for DISK_GB.
+        self.ul_api.create_registered_limit(
+            resource_name='servers', default_limit=4)
+        self.ul_api.create_registered_limit(
+            resource_name='class:VCPU', default_limit=8)
+        self.ul_api.create_registered_limit(
+            resource_name='class:MEMORY_MB', default_limit=32768)
+        # Server create should succeed because required resources VCPU,
+        # MEMORY_MB, and 'servers' have registered limits.
+        self._create_server()
+        unset_limits = set(['class:DISK_GB'])
+        call = mock.call(
+            f'Resources {unset_limits} have no registered limits set in '
+            f'Keystone. [quota]unified_limits_resource_strategy is require '
+            f'and [quota]unified_limits_resource_list is {require}, so not '
+            'enforcing')
+        # The message will be logged twice -- once in nova-api and once in
+        # nova-conductor because of the quota recheck after resource creation.
+        self.assertEqual([call, call], mock_log_debug.mock_calls)
+
+    def test_some_required_limits_not_set(self):
+        # Now add DISK_GB as a required resource.
+        self.flags(unified_limits_resource_strategy='require', group='quota')
+        self.flags(unified_limits_resource_list=[
+            'servers', 'class:MEMORY_MB', 'class:VCPU', 'class:DISK_GB'],
+            group='quota')
+        # Purposely not setting any quota for DISK_GB.
+        self.ul_api.create_registered_limit(
+            resource_name='servers', default_limit=-1)
+        self.ul_api.create_registered_limit(
+            resource_name='class:VCPU', default_limit=8)
+        self.ul_api.create_registered_limit(
+            resource_name='class:MEMORY_MB', default_limit=32768)
+        # Server create should fail because required resource DISK_GB does not
+        # have a registered limit set.
+        e = self.assertRaises(
+            client.OpenStackApiException, self._create_server, api=self.api)
+        self.assertEqual(403, e.response.status_code)
+
+    @mock.patch('nova.limit.utils.LOG.debug')
+    def test_no_required_limits(self, mock_log_debug):
+        # Configured to not require any resource limits.
+        self.flags(unified_limits_resource_strategy='require', group='quota')
+        self.flags(unified_limits_resource_list=[], group='quota')
+        # Server create should succeed because no resource registered limits
+        # are required to be set.
+        self._create_server()
+        # The message will be logged twice -- once in nova-api and once in
+        # nova-conductor because of the quota recheck after resource creation.
+        self.assertEqual(2, mock_log_debug.call_count)
+
+    @mock.patch('nova.limit.utils.LOG.debug')
+    def test_ignored_limits_set(self, mock_log_debug):
+        # Ignored unset limit resources configured is DISK_GB.
+        self.flags(unified_limits_resource_strategy='ignore', group='quota')
+        ignore = ['class:DISK_GB']
+        self.flags(unified_limits_resource_list=ignore, group='quota')
+        # Purposely not setting any quota for DISK_GB.
+        self.ul_api.create_registered_limit(
+            resource_name='servers', default_limit=4)
+        self.ul_api.create_registered_limit(
+            resource_name='class:VCPU', default_limit=8)
+        self.ul_api.create_registered_limit(
+            resource_name='class:MEMORY_MB', default_limit=32768)
+        # Server create should succeed because class:DISK_GB is specified in
+        # the ignore unset limit list.
+        self._create_server()
+        unset_limits = set(['class:DISK_GB'])
+        call = mock.call(
+            f'Resources {unset_limits} have no registered limits set in '
+            f'Keystone. [quota]unified_limits_resource_strategy is ignore and '
+            f'[quota]unified_limits_resource_list is {ignore}, so not '
+            'enforcing')
+        # The message will be logged twice -- once in nova-api and once in
+        # nova-conductor because of the quota recheck after resource creation.
+        self.assertEqual([call, call], mock_log_debug.mock_calls)
+
+    def test_some_ignored_limits_not_set(self):
+        # Configured to ignore only one unset resource limit.
+        self.flags(unified_limits_resource_strategy='ignore', group='quota')
+        self.flags(unified_limits_resource_list=[
+            'class:DISK_GB'], group='quota')
+        # Purposely not setting any quota for servers.
+        self.ul_api.create_registered_limit(
+            resource_name='class:VCPU', default_limit=8)
+        self.ul_api.create_registered_limit(
+            resource_name='class:MEMORY_MB', default_limit=32768)
+        # Server create should fail because although resource DISK_GB does not
+        # have a registered limit set and it is in the ignore list, resource
+        # 'servers' does not have a limit set and it is not in the ignore list.
+        e = self.assertRaises(
+            client.OpenStackApiException, self._create_server, api=self.api)
+        self.assertEqual(403, e.response.status_code)
+
+    def test_no_ignored_limits(self):
+        # Configured to not ignore any unset resource limits.
+        self.flags(unified_limits_resource_strategy='ignore', group='quota')
+        self.flags(unified_limits_resource_list=[], group='quota')
+        # Server create should fail because resource DISK_GB does not have a
+        # registered limit set and it is not in the ignore list.
+        e = self.assertRaises(
+            client.OpenStackApiException, self._create_server, api=self.api)
+        self.assertEqual(403, e.response.status_code)
+
+    def test_all_unlimited(self):
+        # -1 is documented in Keystone as meaning unlimited:
+        #
+        # https://docs.openstack.org/keystone/latest/admin/unified-limits.html#what-is-a-limit
+        #
+        # but oslo.limit enforce does not treat -1 as unlimited at this time
+        # and instead uses its literal integer value.
+        #
+        # Test that we consider -1 to be unlimited in Nova and the server
+        # create should succeed.
+        for resource in (
+                'servers', 'class:VCPU', 'class:MEMORY_MB', 'class:DISK_GB'):
+            self.ul_api.create_registered_limit(
+                resource_name=resource, default_limit=-1)
+        self._create_server()
+
+    def test_default_unlimited_but_project_limited(self):
+        # If the default limit is set to -1 unlimited but the project has a
+        # limit, quota should be enforced at the project level.
+        # Note that it is not valid to set a project limit without first
+        # setting a registered limit -- Keystone will not allow it.
+        self.ul_api.create_registered_limit(
+            resource_name='servers', default_limit=-1)
+        self.ul_api.create_limit(
+            project_id='fake', resource_name='servers', resource_limit=1)
+        # First server should succeed because we have a project limit of 1.
+        self._create_server()
+        # Second server should fail because it would exceed the project limit
+        # of 1.
+        e = self.assertRaises(
+            client.OpenStackApiException, self._create_server, api=self.api)
+        self.assertEqual(403, e.response.status_code)
+
+    def test_default_limited_but_project_unlimited(self):
+        # If the default limit is set to a value but the project has a limit
+        # set to -1 unlimited, quota should be enforced at the project level.
+        # Note that it is not valid to set a project limit without first
+        # setting a registered limit -- Keystone will not allow it.
+        self.ul_api.create_registered_limit(
+            resource_name='servers', default_limit=0)
+        self.ul_api.create_limit(
+            project_id='fake', resource_name='servers', resource_limit=-1)
+        # First server should succeed because we have a default limit of 0 and
+        # a project limit of -1 unlimited.
+        self._create_server()
+        # Try to create a server in a different project (admin project) -- this
+        # should fail because the default limit has been explicitly set to 0.
+        e = self.assertRaises(
+            client.OpenStackApiException, self._create_server,
+            api=self.admin_api)
+        self.assertEqual(403, e.response.status_code)
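The last two tests hinge on how a project-scoped limit overrides the registered default, with negative values read as unlimited. A hypothetical resolution helper capturing that precedence (not Nova or Keystone code):

```python
def resolve_limit(default_limit, project_limit=None):
    """Project limit overrides the registered default when present.

    Negative values are read as unlimited, returned as None.
    """
    value = project_limit if project_limit is not None else default_limit
    return None if value < 0 else value
```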
@ -16,7 +16,6 @@
 from unittest import mock
 
 from oslo_config import cfg
-from oslo_limit import fixture as limit_fixture
 from oslo_utils.fixture import uuidsentinel as uuids
 from oslo_utils import uuidutils
 import webob
@ -27,6 +26,7 @@ from nova import exception
 from nova.limit import local as local_limit
 from nova import objects
 from nova import test
+from nova.tests import fixtures as nova_fixtures
 from nova.tests.unit.api.openstack import fakes
@ -210,8 +210,9 @@ class ServerGroupQuotasUnifiedLimitsTestV21(ServerGroupQuotasTestV21):
         self.flags(driver='nova.quota.UnifiedLimitsDriver', group='quota')
         self.req = fakes.HTTPRequest.blank('')
         self.controller = sg_v21.ServerGroupController()
-        self.limit_fixture = self.useFixture(
-            limit_fixture.LimitFixture({'server_groups': 10}, {}))
+        self.ul_api = self.useFixture(nova_fixtures.UnifiedLimitsFixture())
+        self.ul_api.create_registered_limit(
+            resource_name='server_groups', default_limit=10)
 
     @mock.patch('nova.limit.local.enforce_db_limit')
     def test_create_server_group_during_recheck(self, mock_enforce):
@ -238,7 +239,10 @@ class ServerGroupQuotasUnifiedLimitsTestV21(ServerGroupQuotasTestV21):
             delta=1)
 
     def test_create_group_fails_with_zero_quota(self):
-        self.limit_fixture.reglimits = {'server_groups': 0}
+        # UnifiedLimitsFixture doesn't support update of a limit.
+        self.ul_api.registered_limits_list.clear()
+        self.ul_api.create_registered_limit(
+            resource_name='server_groups', default_limit=0)
         sgroup = {'name': 'test', 'policies': ['anti-affinity']}
         exc = self.assertRaises(webob.exc.HTTPForbidden,
                                 self.controller.create,
@ -247,7 +251,10 @@ class ServerGroupQuotasUnifiedLimitsTestV21(ServerGroupQuotasTestV21):
         self.assertIn(msg, str(exc))
 
     def test_create_only_one_group_when_limit_is_one(self):
-        self.limit_fixture.reglimits = {'server_groups': 1}
+        # UnifiedLimitsFixture doesn't support update of a limit.
+        self.ul_api.registered_limits_list.clear()
+        self.ul_api.create_registered_limit(
+            resource_name='server_groups', default_limit=1)
         policies = ['anti-affinity']
         sgroup = {'name': 'test', 'policies': policies}
         res_dict = self.controller.create(
@ -26,6 +26,7 @@ from nova import exception
 from nova.limit import local as local_limit
 from nova.objects import keypair as keypair_obj
 from nova import quota
+from nova.tests import fixtures as nova_fixtures
 from nova.tests.unit.compute import test_compute
 from nova.tests.unit import fake_crypto
 from nova.tests.unit.objects import test_keypair
@ -165,8 +166,9 @@ class CreateImportSharedTestMixIn(object):
 
     def test_quota_unified_limits(self):
         self.flags(driver="nova.quota.UnifiedLimitsDriver", group="quota")
-        self.useFixture(limit_fixture.LimitFixture(
-            {'server_key_pairs': 0}, {}))
+        ul_api = self.useFixture(nova_fixtures.UnifiedLimitsFixture())
+        ul_api.create_registered_limit(
+            resource_name='server_key_pairs', default_limit=0)
         msg = ("Resource %s is over limit" % local_limit.KEY_PAIRS)
         self.assertKeypairRaises(exception.KeypairLimitExceeded, msg, 'foo')
 
@ -177,8 +179,9 @@ class CreateImportSharedTestMixIn(object):
         recheck because a parallel request filled up the quota first.
         """
         self.flags(driver="nova.quota.UnifiedLimitsDriver", group="quota")
-        self.useFixture(limit_fixture.LimitFixture(
-            {'server_key_pairs': 100}, {}))
+        ul_api = self.useFixture(nova_fixtures.UnifiedLimitsFixture())
+        ul_api.create_registered_limit(
+            resource_name='server_key_pairs', default_limit=100)
         # First quota check succeeds, second (recheck) fails.
         mock_enforce.side_effect = [
             None, exception.KeypairLimitExceeded('oslo.limit message')]
@ -26,6 +26,7 @@ from nova.limit import local as local_limit
 from nova.limit import utils as limit_utils
 from nova import objects
 from nova import test
+from nova.tests import fixtures as nova_fixtures
 
 CONF = cfg.CONF
 
@ -72,7 +73,8 @@ class TestLocalLimits(test.NoDBTestCase):
         self.assertEqual(expected, str(e))
 
     def test_enforce_api_limit_no_registered_limit_found(self):
-        self.useFixture(limit_fixture.LimitFixture({}, {}))
+        self.flags(unified_limits_resource_strategy='ignore', group='quota')
+        self.useFixture(nova_fixtures.UnifiedLimitsFixture())
         e = self.assertRaises(exception.MetadataLimitExceeded,
                               local_limit.enforce_api_limit,
                               local_limit.SERVER_METADATA_ITEMS, 42)
@ -160,7 +162,8 @@ class TestLocalLimits(test.NoDBTestCase):
 
     @mock.patch.object(objects.KeyPairList, "get_count_by_user")
     def test_enforce_db_limit_no_registered_limit_found(self, mock_count):
-        self.useFixture(limit_fixture.LimitFixture({}, {}))
+        self.flags(unified_limits_resource_strategy='ignore', group='quota')
+        self.useFixture(nova_fixtures.UnifiedLimitsFixture())
         mock_count.return_value = 5
         e = self.assertRaises(exception.KeypairLimitExceeded,
                               local_limit.enforce_db_limit, self.context,
@ -0,0 +1,36 @@
+features:
+  - |
+    New configuration options ``[quota]unified_limits_resource_strategy`` and
+    ``[quota]unified_limits_resource_list`` have been added to enable
+    operators to specify a list of resources that are either required or
+    ignored to have registered limits set. The default strategy is ``require``
+    and the default resource list contains ``servers``. The configured list is
+    only used when ``[quota]driver`` is set to the ``UnifiedLimitsDriver``.
+
+    When ``unified_limits_resource_strategy = require``, if a resource in
+    ``unified_limits_resource_list`` is requested and has no registered limit
+    set, the quota limit for that resource will be considered to be 0 and all
+    requests to allocate that resource will be rejected for being over quota.
+    Any resource not in the list will be considered to have unlimited quota.
+
+    When ``unified_limits_resource_strategy = ignore``, if a resource in
+    ``unified_limits_resource_list`` is requested and has no registered limit
+    set, the quota limit for that resource will be considered to be unlimited
+    and all requests to allocate that resource will be accepted. Any resource
+    not in the list will be considered to have 0 quota.
+
+    The options should be configured for the :program:`nova-api` and
+    :program:`nova-conductor` services. The :program:`nova-conductor` service
+    performs quota enforcement when ``[quota]recheck_quota`` is ``True`` (the
+    default).
+
+    The ``unified_limits_resource_list`` list can also be set to an empty
+    list.
+upgrade:
+  - |
+    When the ``[quota]driver`` configuration option is set to the
+    ``UnifiedLimitsDriver``, a limit of ``-1`` in Keystone will now be
+    considered as unlimited and the ``servers`` resource will be considered to
+    be required to have a registered limit set in Keystone because of the
+    default values for ``[quota]unified_limits_resource_strategy`` and
+    ``[quota]unified_limits_resource_list``.