Merge "Microversion 1.35: root_required"
@@ -42,6 +42,7 @@ Request
   - in_treeN: allocation_candidates_in_tree_granular
   - group_policy: allocation_candidates_group_policy
   - limit: allocation_candidates_limit
+  - root_required: allocation_candidates_root_required

 Response (microversions 1.12 - )
 --------------------------------
@@ -173,6 +173,21 @@ allocation_candidates_member_of_granular:
     It is an error to specify a ``member_ofN`` parameter without a
     corresponding ``resourcesN`` parameter with the same suffix.
   min_version: 1.25
+allocation_candidates_root_required:
+  type: string
+  in: query
+  required: false
+  min_version: 1.35
+  description: |
+    A comma-separated list of trait requirements that the root provider of the
+    (non-sharing) tree must satisfy::
+
+      root_required=COMPUTE_SUPPORTS_MULTI_ATTACH,!CUSTOM_WINDOWS_LICENSED
+
+    Allocation requests in the response will be limited to those whose
+    (non-sharing) tree's root provider satisfies the specified trait
+    requirements. Traits which are forbidden (must **not** be present on the
+    root provider) are expressed by prefixing the trait with a ``!``.
 project_id: &project_id
   type: string
   in: query
@@ -384,8 +384,8 @@ resource of ``VCPU``, and not applied to the suffixed resource, ``DISK_GB``.
 When you want to have ``VCPU`` from wherever and ``DISK_GB`` from ``SS1``,
 the request may look like::

-  GET: /allocation_candidates?resources=VCPU:1
+  GET /allocation_candidates?resources=VCPU:1
          &resources1=DISK_GB:10&in_tree1=<SS1 uuid>

 which will stick to the first sharing provider for ``DISK_GB``.

@@ -397,15 +397,101 @@ which will stick to the first sharing provider for ``DISK_GB``.
 When you want to have ``VCPU`` from ``CN1`` and ``DISK_GB`` from ``SS1``,
 the request may look like::

-  GET: /allocation_candidates?resources1=VCPU:1&in_tree1=<CN1 uuid>
+  GET /allocation_candidates?resources1=VCPU:1&in_tree1=<CN1 uuid>
         &resources2=DISK_GB:10&in_tree2=<SS1 uuid>
         &group_policy=isolate

 which will return only 2 candidates.

 1. ``NUMA1_1`` (``VCPU``) + ``SS1`` (``DISK_GB``)
 2. ``NUMA1_2`` (``VCPU``) + ``SS1`` (``DISK_GB``)

+.. _`filtering by root provider traits`:
+
+Filtering by Root Provider Traits
+=================================
+
+When traits are associated with a particular resource, the provider tree should
+be constructed such that the traits are associated with the provider possessing
+the inventory of that resource. For example, trait ``HW_CPU_X86_AVX2`` is a
+trait associated with the ``VCPU`` resource, so it should be placed on the
+resource provider with ``VCPU`` inventory, wherever that provider is positioned
+in the tree structure. (A NUMA-aware host may model ``VCPU`` inventory in a
+child provider, whereas a non-NUMA-aware host may model it in the root
+provider.)
+
+On the other hand, some traits are associated not with a resource, but with the
+provider itself. For example, a compute host may be capable of
+``COMPUTE_VOLUME_MULTI_ATTACH``, or be associated with a
+``CUSTOM_WINDOWS_LICENSE_POOL``. In this case it is recommended that the root
+resource provider be used to represent the concept of the "compute host"; so
+these kinds of traits should always be placed on the root resource provider.
+
+The following environment illustrates the above concepts::
+
+  +---------------------------------+ +-------------------------------------------+
+  |+-------------------------------+| |    +-------------------------------+      |
+  || Compute Node (NON_NUMA_CN)    || |    | Compute Node (NUMA_CN)        |      |
+  ||  VCPU: 8,                     || |    |  DISK_GB: 1000                |      |
+  ||  MEMORY_MB: 1024              || |    |  traits:                      |      |
+  ||  DISK_GB: 1000                || |    |   STORAGE_DISK_SSD,           |      |
+  ||  traits:                      || |    |   COMPUTE_VOLUME_MULTI_ATTACH |      |
+  ||   HW_CPU_X86_AVX2,            || |    +-------+-------------+---------+      |
+  ||   STORAGE_DISK_SSD,           || |     nested |             | nested         |
+  ||   COMPUTE_VOLUME_MULTI_ATTACH,|| |+-----------+-------+ +---+---------------+|
+  ||   CUSTOM_WINDOWS_LICENSE_POOL || || NUMA1             | | NUMA2             ||
+  |+-------------------------------+| ||  VCPU: 4          | |  VCPU: 4          ||
+  +---------------------------------+ ||  MEMORY_MB: 1024  | |  MEMORY_MB: 1024  ||
+                                      ||                   | |  traits:          ||
+                                      ||                   | |   HW_CPU_X86_AVX2 ||
+                                      |+-------------------+ +-------------------+|
+                                      +-------------------------------------------+
+
+A tree modeled in this fashion can take advantage of the `root_required`_
+query parameter to return only allocation candidates from trees which possess
+(or do not possess) specific traits on their root provider. For example,
+to return allocation candidates including ``VCPU`` with the ``HW_CPU_X86_AVX2``
+instruction set from hosts capable of ``COMPUTE_VOLUME_MULTI_ATTACH``, a
+request may look like::
+
+  GET /allocation_candidates
+          ?resources1=VCPU:1,MEMORY_MB:512&required1=HW_CPU_X86_AVX2
+          &resources2=DISK_GB:100
+          &group_policy=none
+          &root_required=COMPUTE_VOLUME_MULTI_ATTACH
+
+This will return results from both ``NUMA_CN`` and ``NON_NUMA_CN`` because
+both have the ``COMPUTE_VOLUME_MULTI_ATTACH`` trait on the root provider; but
+only ``NUMA2`` has ``HW_CPU_X86_AVX2`` so there will only be one result from
+``NUMA_CN``.
+
+1. ``NON_NUMA_CN`` (``VCPU``, ``MEMORY_MB``, ``DISK_GB``)
+2. ``NUMA_CN`` (``DISK_GB``) + ``NUMA2`` (``VCPU``, ``MEMORY_MB``)
+
+To restrict allocation candidates to only those not in your
+``CUSTOM_WINDOWS_LICENSE_POOL``, a request may look like::
+
+  GET /allocation_candidates
+          ?resources1=VCPU:1,MEMORY_MB:512
+          &resources2=DISK_GB:100
+          &group_policy=none
+          &root_required=!CUSTOM_WINDOWS_LICENSE_POOL
+
+This will return results only from ``NUMA_CN`` because ``NON_NUMA_CN`` has the
+forbidden ``CUSTOM_WINDOWS_LICENSE_POOL`` on the root provider.
+
+1. ``NUMA_CN`` (``DISK_GB``) + ``NUMA1`` (``VCPU``, ``MEMORY_MB``)
+2. ``NUMA_CN`` (``DISK_GB``) + ``NUMA2`` (``VCPU``, ``MEMORY_MB``)
+
+The syntax of the ``root_required`` query parameter is identical to that of
+``required[$S]``: multiple trait strings may be specified, separated by commas,
+each optionally prefixed with ``!`` to indicate that it is forbidden.
+
+.. note:: ``root_required`` may not be suffixed, and may be specified only
+          once, as it applies only to the root provider.
+
+.. note:: When sharing providers are involved in the request, ``root_required``
+          applies only to the root of the non-sharing provider tree.
+
 .. _`Nested Resource Providers`: https://specs.openstack.org/openstack/nova-specs/specs/queens/approved/nested-resource-providers.html
 .. _`POST /resource_providers`: https://developer.openstack.org/api-ref/placement/
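(Aside, not part of the change: the ``!`` syntax described above is simple to mimic on the client side. A minimal sketch, assuming plain Python and no placement helpers, that splits a ``root_required`` value into required and forbidden sets::

  value = 'COMPUTE_SUPPORTS_MULTI_ATTACH,!CUSTOM_WINDOWS_LICENSED'
  traits = value.split(',')
  required = {t for t in traits if not t.startswith('!')}
  forbidden = {t[1:] for t in traits if t.startswith('!')}
  # required  -> {'COMPUTE_SUPPORTS_MULTI_ATTACH'}
  # forbidden -> {'CUSTOM_WINDOWS_LICENSED'}

A trait appearing in both sets is a contradiction, which the API rejects with HTTP 400.)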
@@ -418,3 +504,4 @@ which will return only 2 candidates.
 .. _`Granular Resource Request`: https://specs.openstack.org/openstack/nova-specs/specs/rocky/implemented/granular-resource-requests.html
 .. _`Filter Allocation Candidates by Provider Tree`: https://specs.openstack.org/openstack/nova-specs/specs/stein/implemented/alloc-candidates-in-tree.html
 .. _`Support subtree filter`: https://review.opendev.org/#/c/595236/
+.. _`root_required`: https://review.opendev.org/#/c/662191/5/doc/source/specs/train/approved/2005575-nested-magic-1.rst@304
@@ -46,3 +46,6 @@ PROVIDER_IN_USE = 'placement.resource_provider.inuse'
 PROVIDER_CANNOT_DELETE_PARENT = (
     'placement.resource_provider.cannot_delete_parent')
 RESOURCE_PROVIDER_NOT_FOUND = 'placement.resource_provider.not_found'
+ILLEGAL_DUPLICATE_QUERYPARAM = 'placement.query.duplicate_key'
+# Failure of a post-schema value check
+QUERYPARAM_BAD_VALUE = 'placement.query.bad_value'
@@ -256,7 +256,9 @@ def list_allocation_candidates(req):
     context.can(policies.LIST)
     want_version = req.environ[microversion.MICROVERSION_ENVIRON]
     get_schema = schema.GET_SCHEMA_1_10
-    if want_version.matches((1, 33)):
+    if want_version.matches((1, 35)):
+        get_schema = schema.GET_SCHEMA_1_35
+    elif want_version.matches((1, 33)):
         get_schema = schema.GET_SCHEMA_1_33
     elif want_version.matches((1, 31)):
         get_schema = schema.GET_SCHEMA_1_31
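(Aside: ``want_version.matches((1, 35))`` is true for any request at or above 1.35, so the dispatch above must check the newest version first or an older schema would shadow the new one. A simplified stand-in, assuming the same ``schema`` module names used in the diff; this is an illustration, not the real handler::

  def _select_schema(want_version):
      # Each matches() call is a floor check, so order from newest to oldest.
      if want_version.matches((1, 35)):
          return schema.GET_SCHEMA_1_35   # adds root_required
      if want_version.matches((1, 33)):
          return schema.GET_SCHEMA_1_33
      if want_version.matches((1, 31)):
          return schema.GET_SCHEMA_1_31
      return schema.GET_SCHEMA_1_10
)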
@@ -18,6 +18,7 @@ import re

 import webob

+from placement import errors
 from placement import microversion
 from placement.schemas import common
 from placement import util
@@ -38,6 +39,14 @@ _QS_KEY_PATTERN_1_33 = re.compile(
     common.GROUP_PAT_1_33))


+def _fix_one_forbidden(traits):
+    forbidden = [trait for trait in traits if trait.startswith('!')]
+    required = traits - set(forbidden)
+    forbidden = set(trait.lstrip('!') for trait in forbidden)
+    conflicts = forbidden & required
+    return required, forbidden, conflicts
+
+
 class RequestGroup(object):
     def __init__(self, use_same_provider=True, resources=None,
                  required_traits=None, forbidden_traits=None, member_of=None,
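(Aside, illustration only: given a mixed trait set, a helper like ``_fix_one_forbidden`` above separates the required and forbidden names and reports any overlap, which the callers turn into an HTTP 400::

  traits = {'HW_CPU_X86_AVX2', '!CUSTOM_WINDOWS_LICENSED', '!HW_CPU_X86_AVX2'}
  required, forbidden, conflicts = _fix_one_forbidden(traits)
  # required  == {'HW_CPU_X86_AVX2'}
  # forbidden == {'CUSTOM_WINDOWS_LICENSED', 'HW_CPU_X86_AVX2'}
  # conflicts == {'HW_CPU_X86_AVX2'}
)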
@@ -151,12 +160,8 @@ class RequestGroup(object):
         def _fix_forbidden(by_suffix):
             conflicting_traits = []
             for suff, group in by_suffix.items():
-                forbidden = [trait for trait in group.required_traits
-                             if trait.startswith('!')]
-                group.required_traits = group.required_traits - set(forbidden)
-                group.forbidden_traits = set([trait.lstrip('!') for trait in
-                                              forbidden])
-                conflicts = group.forbidden_traits & group.required_traits
+                group.required_traits, group.forbidden_traits, conflicts = (
+                    _fix_one_forbidden(group.required_traits))
                 if conflicts:
                     conflicting_traits.append('required%s: (%s)'
                                               % (suff, ', '.join(conflicts)))
@@ -164,6 +169,7 @@ class RequestGroup(object):
                 msg = (
                     'Conflicting required and forbidden traits found in the '
                     'following traits keys: %s')
+                # TODO(efried): comment=errors.QUERYPARAM_BAD_VALUE
                 raise webob.exc.HTTPBadRequest(
                     msg % ', '.join(conflicting_traits))

@@ -280,7 +286,8 @@ class RequestWideParams(object):
     This is in contrast with individual request groups (list of RequestGroup
     above).
     """
-    def __init__(self, limit=None, group_policy=None):
+    def __init__(self, limit=None, group_policy=None,
+                 anchor_required_traits=None, anchor_forbidden_traits=None):
         """Create a RequestWideParams.

         :param limit: An integer, N, representing the maximum number of
@@ -294,23 +301,54 @@ class RequestWideParams(object):
             use_same_provider=True should interact with each other. If the
             value is "isolate", we will filter out allocation requests
             where any such RequestGroups are satisfied by the same RP.
+        :param anchor_required_traits: Set of trait names which the anchor of
+            each returned allocation candidate must possess, regardless of
+            any RequestGroup filters.
+        :param anchor_forbidden_traits: Set of trait names which the anchor of
+            each returned allocation candidate must NOT possess, regardless
+            of any RequestGroup filters.
         """
         self.limit = limit
         self.group_policy = group_policy
+        self.anchor_required_traits = anchor_required_traits
+        self.anchor_forbidden_traits = anchor_forbidden_traits

     @classmethod
     def from_request(cls, req):
+        # TODO(efried): Make it an error to specify limit more than once -
+        # maybe when we make group_policy optional.
        limit = req.GET.getall('limit')
        # JSONschema has already confirmed that limit has the form
        # of an integer.
        if limit:
            limit = int(limit[0])

+        # TODO(efried): Make it an error to specify group_policy more than once
+        # - maybe when we make it optional.
        group_policy = req.GET.getall('group_policy') or None
        # Schema ensures we get either "none" or "isolate"
        if group_policy:
            group_policy = group_policy[0]

+        anchor_required_traits = None
+        anchor_forbidden_traits = None
+        root_required = req.GET.getall('root_required')
+        if root_required:
+            if len(root_required) > 1:
+                raise webob.exc.HTTPBadRequest(
+                    "Query parameter 'root_required' may be specified only "
+                    "once.", comment=errors.ILLEGAL_DUPLICATE_QUERYPARAM)
+            anchor_required_traits, anchor_forbidden_traits, conflicts = (
+                _fix_one_forbidden(util.normalize_traits_qs_param(
+                    root_required[0], allow_forbidden=True)))
+            if conflicts:
+                raise webob.exc.HTTPBadRequest(
+                    'Conflicting required and forbidden traits found in '
+                    'root_required: %s' % ', '.join(conflicts),
+                    comment=errors.QUERYPARAM_BAD_VALUE)

        return cls(
            limit=limit,
-            group_policy=group_policy)
+            group_policy=group_policy,
+            anchor_required_traits=anchor_required_traits,
+            anchor_forbidden_traits=anchor_forbidden_traits)
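(Aside: the request-wide parsing above can be pictured with a bare query string. The sketch below uses ``urllib.parse`` purely as a stand-in for ``req.GET`` and is not the real handler code; error handling is condensed to comments::

  from urllib.parse import parse_qs

  query = parse_qs('resources=VCPU:1&root_required=CUSTOM_FOO,!HW_NUMA_ROOT')
  values = query.get('root_required', [])
  if len(values) > 1:
      # The API answers 400 with code placement.query.duplicate_key.
      raise ValueError("root_required may be specified only once")
  required, forbidden, conflicts = _fix_one_forbidden(set(values[0].split(',')))
  # A non-empty conflicts set yields 400 with code placement.query.bad_value.
)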
@@ -85,6 +85,7 @@ VERSIONS = [
     # [A-Za-z0-9_-]{1,64}.
     '1.34',  # Include a mappings key in allocation requests that shows which
              # resource providers satisfied which request group suffix.
+    '1.35',  # Add a `root_required` queryparam on `GET /allocation_candidates`
 ]


@@ -72,21 +72,25 @@ class AllocationCandidates(object):
         and provider_summaries satisfying `requests`, limited
         according to `limit`.
         """
-        alloc_reqs, provider_summaries = cls._get_by_requests(
-            context, groups, rqparams, nested_aware=nested_aware)
+        try:
+            alloc_reqs, provider_summaries = cls._get_by_requests(
+                context, groups, rqparams, nested_aware=nested_aware)
+        except exception.ResourceProviderNotFound:
+            alloc_reqs, provider_summaries = [], []
         return cls(
             allocation_requests=alloc_reqs,
             provider_summaries=provider_summaries,
         )

     @staticmethod
-    def _get_by_one_request(rg_ctx):
+    def _get_by_one_request(rg_ctx, rw_ctx):
         """Get allocation candidates for one RequestGroup.

         Must be called from within an placement_context_manager.reader
         (or writer) context.

         :param rg_ctx: RequestGroupSearchContext.
+        :param rw_ctx: RequestWideSearchContext.
         """
         if not rg_ctx.use_same_provider and (
                 rg_ctx.exists_sharing or rg_ctx.exists_nested):
@@ -105,7 +109,7 @@ class AllocationCandidates(object):
                     rg_ctx.context, rg_ctx.required_trait_map)
                 if not trait_rps:
                     return [], []
-            rp_candidates = res_ctx.get_trees_matching_all(rg_ctx)
+            rp_candidates = res_ctx.get_trees_matching_all(rg_ctx, rw_ctx)
             return _alloc_candidates_multiple_providers(rg_ctx, rp_candidates)

         # Either we are processing a single-RP request group, or there are no
@@ -114,7 +118,7 @@ class AllocationCandidates(object):
         # the requested resources and more efficiently construct the
         # allocation requests.
         rp_tuples = res_ctx.get_provider_ids_matching(rg_ctx)
-        return _alloc_candidates_single_provider(rg_ctx, rp_tuples)
+        return _alloc_candidates_single_provider(rg_ctx, rw_ctx, rp_tuples)

     @classmethod
     @db_api.placement_context_manager.reader
@@ -122,16 +126,17 @@ class AllocationCandidates(object):
         rw_ctx = res_ctx.RequestWideSearchContext(
             context, rqparams, nested_aware)
         sharing = res_ctx.get_sharing_providers(context)
+        # TODO(efried): If we ran anchors_for_sharing_providers here, we could
+        # narrow to only sharing providers associated with our filtered trees.
+        # Unclear whether this would be cheaper than waiting until we've
+        # filtered sharing providers for other things (like resources).

         candidates = {}
         for suffix, group in groups.items():
-            try:
-                rg_ctx = res_ctx.RequestGroupSearchContext(
-                    context, group, rw_ctx.has_trees, sharing, suffix)
-            except exception.ResourceProviderNotFound:
-                return [], []
+            rg_ctx = res_ctx.RequestGroupSearchContext(
+                context, group, rw_ctx.has_trees, sharing, suffix)

-            alloc_reqs, summaries = cls._get_by_one_request(rg_ctx)
+            alloc_reqs, summaries = cls._get_by_one_request(rg_ctx, rw_ctx)
             LOG.debug("%s (suffix '%s') returned %d matches",
                       str(group), str(suffix), len(alloc_reqs))
             if not alloc_reqs:
@@ -335,7 +340,7 @@ def _alloc_candidates_multiple_providers(rg_ctx, rp_candidates):
     return list(alloc_requests), list(summaries.values())


-def _alloc_candidates_single_provider(rg_ctx, rp_tuples):
+def _alloc_candidates_single_provider(rg_ctx, rw_ctx, rp_tuples):
     """Returns a tuple of (allocation requests, provider summaries) for a
     supplied set of requested resource amounts and resource providers. The
     supplied resource providers have capacity to satisfy ALL of the resources
@@ -351,6 +356,7 @@ def _alloc_candidates_single_provider(rg_ctx, rp_tuples):
     determine requests across multiple providers.

     :param rg_ctx: RequestGroupSearchContext
+    :param rw_ctx: RequestWideSearchContext
     :param rp_tuples: List of two-tuples of (provider ID, root provider ID)s
         for providers that matched the requested resources
     """
@@ -381,7 +387,10 @@ def _alloc_candidates_single_provider(rg_ctx, rp_tuples):
         req_obj = _allocation_request_for_provider(
             rg_ctx.context, rg_ctx.resources, rp_summary.resource_provider,
             suffix=rg_ctx.suffix)
-        alloc_requests.append(req_obj)
+        # Exclude this if its anchor (which is its root) isn't in our
+        # prefiltered list of anchors
+        if rw_ctx.in_filtered_anchors(root_id):
+            alloc_requests.append(req_obj)
         # If this is a sharing provider, we have to include an extra
         # AllocationRequest for every possible anchor.
         traits = rp_summary.traits
@@ -392,6 +401,9 @@ def _alloc_candidates_single_provider(rg_ctx, rp_tuples):
                 # We already added self
                 if anchor.anchor_id == root_id:
                     continue
+                # Only include if anchor is viable
+                if not rw_ctx.in_filtered_anchors(anchor.anchor_id):
+                    continue
                 req_obj = copy.copy(req_obj)
                 req_obj.anchor_root_provider_uuid = anchor.anchor_uuid
                 alloc_requests.append(req_obj)
@@ -178,6 +178,48 @@ class RequestWideSearchContext(object):
         self.group_policy = rqparams.group_policy
         self._nested_aware = nested_aware
         self.has_trees = _has_provider_trees(context)
+        # This is set up by _process_anchor_* below. It remains None if no
+        # anchor filters were requested. Otherwise it becomes a set of internal
+        # IDs of root providers that conform to the requested filters.
+        self.anchor_root_ids = None
+        self._process_anchor_traits(rqparams)
+
+    def _process_anchor_traits(self, rqparams):
+        """Set or filter self.anchor_root_ids according to anchor
+        required/forbidden traits.
+
+        :param rqparams: RequestWideParams.
+        :raises TraitNotFound: If any named trait does not exist in the
+            database.
+        :raises ResourceProviderNotFound: If anchor trait filters were
+            specified, but we find no matching providers.
+        """
+        required, forbidden = (
+            rqparams.anchor_required_traits, rqparams.anchor_forbidden_traits)
+
+        if not (required or forbidden):
+            return
+
+        required = set(trait_obj.ids_from_names(
+            self._ctx, required).values()) if required else None
+        forbidden = set(trait_obj.ids_from_names(
+            self._ctx, forbidden).values()) if forbidden else None
+
+        self.anchor_root_ids = _get_roots_with_traits(
+            self._ctx, required, forbidden)
+
+        if not self.anchor_root_ids:
+            raise exception.ResourceProviderNotFound()
+
+    def in_filtered_anchors(self, anchor_root_id):
+        """Returns whether anchor_root_id is present in filtered anchors. (If
+        we don't have filtered anchors, that implicitly means "all possible
+        anchors", so we return True.)
+        """
+        if self.anchor_root_ids is None:
+            # Not filtering anchors
+            return True
+        return anchor_root_id in self.anchor_root_ids
+
     def exclude_nested_providers(
             self, allocation_requests, provider_summaries):
@@ -503,7 +545,7 @@ def get_provider_ids_matching(rg_ctx):


 @db_api.placement_context_manager.reader
-def get_trees_matching_all(rg_ctx):
+def get_trees_matching_all(rg_ctx, rw_ctx):
     """Returns a RPCandidates object representing the providers that satisfy
     the request for resources.

@@ -530,6 +572,7 @@ def get_trees_matching_all(rg_ctx):
     providers to satisfy different resources involved in a single RequestGroup.

     :param rg_ctx: RequestGroupSearchContext
+    :param rw_ctx: RequestWideSearchContext
     """
     if rg_ctx.forbidden_aggs:
         rps_bad_aggs = provider_ids_matching_aggregates(
@@ -575,6 +618,17 @@ def get_trees_matching_all(rg_ctx):
                 len(sharing_providers), amount, rc_name,
                 len(provs_with_inv_rc.trees))

+        # If we have a list of viable anchor roots, filter to those
+        if rw_ctx.anchor_root_ids:
+            provs_with_inv_rc.filter_by_tree(rw_ctx.anchor_root_ids)
+            LOG.debug(
+                "found %d providers under %d trees after applying anchor root "
+                "filter",
+                len(provs_with_inv_rc.rps), len(provs_with_inv_rc.trees))
+        # If that left nothing, we're done
+        if not provs_with_inv_rc:
+            return rp_candidates.RPCandidateList()
+
         if rg_ctx.member_of:
             # Aggregate on root spans the whole tree, so the rp itself
             # *or its root* should be in the aggregate
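(Aside: conceptually, the anchor filter added above is a late set-membership check on each candidate tree's root. A rough sketch in plain Python, under the assumption of made-up ids and names rather than the real SQL-backed path::

  viable_anchor_roots = {1, 7}               # root provider ids matching root_required
  candidate_trees = {1: 'cn1', 5: 'cn2', 7: 'cn3'}
  kept = {root: name for root, name in candidate_trees.items()
          if root in viable_anchor_roots}
  # kept == {1: 'cn1', 7: 'cn3'}; when root_required is absent, no filter runs.
)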
@@ -638,3 +638,16 @@ those resource providers that satisfy the identified request group. For
 convenience, this mapping can be included in the request payload for
 ``POST /allocations``, ``PUT /allocations/{consumer_uuid}``, and
 ``POST /reshaper``, but it will be ignored.
+
+1.35 - Support 'root_required' queryparam on GET /allocation_candidates
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. versionadded:: Train
+
+Add support for the ``root_required`` query parameter to the ``GET
+/allocation_candidates`` API. It accepts a comma-delimited list of trait names,
+each optionally prefixed with ``!`` to indicate a forbidden trait, in the same
+format as the ``required`` query parameter. This restricts allocation requests
+in the response to only those whose (non-sharing) tree's root resource provider
+satisfies the specified trait requirements. See
+:ref:`filtering by root provider traits` for details.
@@ -90,3 +90,9 @@ _GROUP_PAT_FMT_1_33 = "^%s(" + common.GROUP_PAT_1_33 + ")?$"
 GET_SCHEMA_1_33["patternProperties"] = {
     _GROUP_PAT_FMT_1_33 % group_type: {"type": "string"}
     for group_type in ('resources', 'required', 'member_of', 'in_tree')}
+
+# Microversion 1.35 supports root_required.
+GET_SCHEMA_1_35 = copy.deepcopy(GET_SCHEMA_1_33)
+GET_SCHEMA_1_35["properties"]['root_required'] = {
+    "type": ["string"]
+}
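(Aside: because ``root_required`` is declared as a plain string property, schema validation only checks its type; the trait-level checks happen later in the handler. A small illustration with the ``jsonschema`` library, where the literal schema below is a cut-down stand-in for ``GET_SCHEMA_1_35``::

  import jsonschema

  schema_1_35 = {'type': 'object',
                 'properties': {'root_required': {'type': 'string'}}}
  jsonschema.validate({'root_required': 'CUSTOM_FOO,!HW_NUMA_ROOT'}, schema_1_35)
  # Passes; a list or number for root_required would raise ValidationError.
)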
@@ -424,7 +424,9 @@ class ProviderTreeDBHelperTestCase(tb.PlacementDbBaseTestCase):
             except Exception:
                 pass
         rg_ctx = _req_group_search_context(self.ctx, **kwargs)
-        results = res_ctx.get_trees_matching_all(rg_ctx)
+        rw_ctx = res_ctx.RequestWideSearchContext(
+            self.ctx, placement_lib.RequestWideParams(), True)
+        results = res_ctx.get_trees_matching_all(rg_ctx, rw_ctx)

         tree_ids = self._get_rp_ids_matching_names(expected_trees)
         rp_ids = self._get_rp_ids_matching_names(expected_rps)
@@ -907,7 +909,8 @@ class ProviderTreeDBHelperTestCase(tb.PlacementDbBaseTestCase):
             expected=(3,))

         # Required & forbidden overlap. No results because it is impossible for
-        # one provider to both have and not have a trait.
+        # one provider to both have and not have a trait. (Unreachable in real
+        # life due to conflict check in the handler.)
         do_test(required=[avx2_t, ssd_t], forbidden=[ssd_t, geneve_t])


@@ -417,6 +417,7 @@ class NUMANetworkFixture(APIFixture):
     >>....(from ss1)........| min_unit: 1024     |
                             | step_size: 128     |
                             | DISK_GB: 1000      |
+                            | traits: FOO        |
                             | agg: [aggA]        |
                             +---------+----------+
                                       |
@@ -515,6 +516,7 @@ class NUMANetworkFixture(APIFixture):
         tb.add_inventory(
             cn2, orc.MEMORY_MB, 2048, min_unit=1024, step_size=128)
         tb.add_inventory(cn2, orc.DISK_GB, 1000)
+        tb.set_traits(cn2, 'CUSTOM_FOO')
         os.environ['CN2_UUID'] = cn2.uuid

         nics = []
@@ -0,0 +1,292 @@
+# Tests of allocation candidates API with root_required
+
+fixtures:
+  - NUMANetworkFixture
+
+defaults:
+  request_headers:
+    x-auth-token: admin
+    accept: application/json
+    openstack-api-version: placement 1.35
+
+tests:
+
+- name: root_required before microversion
+  GET: /allocation_candidates?resources=VCPU:1&root_required=HW_CPU_X86_AVX2
+  request_headers:
+    openstack-api-version: placement 1.34
+  status: 400
+  response_strings:
+    - Invalid query string parameters
+    - "'root_required' does not match any of the regexes"
+
+- name: conflicting required and forbidden
+  GET: /allocation_candidates?resources=VCPU:1&root_required=HW_CPU_X86_AVX2,HW_CPU_X86_SSE,!HW_CPU_X86_AVX2
+  status: 400
+  response_strings:
+    - "Conflicting required and forbidden traits found in root_required: HW_CPU_X86_AVX2"
+  response_json_paths:
+    errors[0].code: placement.query.bad_value
+
+- name: nonexistent required
+  GET: /allocation_candidates?resources=VCPU:1&root_required=CUSTOM_NO_EXIST,HW_CPU_X86_SSE,!HW_CPU_X86_AVX
+  status: 400
+  response_strings:
+    - "No such trait(s): CUSTOM_NO_EXIST"
+
+- name: nonexistent forbidden
+  GET: /allocation_candidates?resources=VCPU:1&root_required=!CUSTOM_NO_EXIST,HW_CPU_X86_SSE,!HW_CPU_X86_AVX
+  status: 400
+  response_strings:
+    - "No such trait(s): CUSTOM_NO_EXIST"
+
+- name: multiple root_required is an error
+  GET: /allocation_candidates?resources=VCPU:1&root_required=MISC_SHARES_VIA_AGGREGATE&root_required=!HW_NUMA_ROOT
+  status: 400
+  response_strings:
+    - Query parameter 'root_required' may be specified only once.
+  response_json_paths:
+    errors[0].code: placement.query.duplicate_key
+
+- name: no hits for a required trait that is on children in one tree and absent from the other
+  GET: /allocation_candidates?resources=VCPU:1&root_required=HW_NUMA_ROOT
+  status: 200
+  response_json_paths:
+    # No root has HW_NUMA_ROOT
+    $.allocation_requests.`len`: 0
+
+- name: required trait on a sharing root
+  GET: /allocation_candidates?resources=VCPU:1,MEMORY_MB:1024,DISK_GB:100&root_required=MISC_SHARES_VIA_AGGREGATE
+  status: 200
+  response_json_paths:
+    # MISC_SHARES is on the sharing root, but not on any of the anchor roots
+    $.allocation_requests.`len`: 0
+
+- name: root_required trait on children
+  GET: /allocation_candidates?resources=VCPU:1,MEMORY_MB:1024,DISK_GB:100&root_required=HW_NUMA_ROOT
+  status: 200
+  response_json_paths:
+    # HW_NUMA_ROOT is on child providers, not on any root
+    $.allocation_requests.`len`: 0
+
+- name: required trait not on any provider
+  GET: /allocation_candidates?resources=VCPU:1,MEMORY_MB:1024,DISK_GB:100&root_required=HW_CPU_X86_AVX2
+  status: 200
+  response_json_paths:
+    # HW_CPU_X86_AVX2 isn't anywhere in the env.
+    $.allocation_requests.`len`: 0
+
+- name: limit to multiattach-capable unsuffixed no sharing
+  GET: /allocation_candidates?resources=VCPU:1,MEMORY_MB:1024&root_required=COMPUTE_VOLUME_MULTI_ATTACH
+  status: 200
+  response_json_paths:
+    # We only get results from cn1 because only it has MULTI_ATTACH
+    # We get candidates where VCPU and MEMORY_MB are provided by the same or
+    # alternate NUMA roots.
+    $.allocation_requests.`len`: 4
+    $.allocation_requests..allocations["$ENVIRON['NUMA0_UUID']"].resources.VCPU: [1, 1]
+    $.allocation_requests..allocations["$ENVIRON['NUMA1_UUID']"].resources.VCPU: [1, 1]
+    $.allocation_requests..allocations["$ENVIRON['NUMA0_UUID']"].resources.MEMORY_MB: [1024, 1024]
+    $.allocation_requests..allocations["$ENVIRON['NUMA1_UUID']"].resources.MEMORY_MB: [1024, 1024]
+
+- name: limit to multiattach-capable separate granular no isolate no sharing
+  GET: /allocation_candidates?resources1=VCPU:1&resources2=MEMORY_MB:1024&group_policy=none&root_required=COMPUTE_VOLUME_MULTI_ATTACH
+  status: 200
+  response_json_paths:
+    # Same as above
+    $.allocation_requests.`len`: 4
+    # Prove we didn't break provider summaries
+    $.provider_summaries["$ENVIRON['NUMA0_UUID']"].resources[VCPU][capacity]: 4
+    $.provider_summaries["$ENVIRON['NUMA1_UUID']"].resources[MEMORY_MB][capacity]: 2048
+
+- name: limit to multiattach-capable separate granular isolate no sharing
+  GET: /allocation_candidates?resources1=VCPU:1&resources2=MEMORY_MB:1024&group_policy=isolate&root_required=COMPUTE_VOLUME_MULTI_ATTACH
+  status: 200
+  response_json_paths:
+    # Now we (perhaps unrealistically) only get candidates where VCPU and
+    # MEMORY_MB are on alternate NUMA roots.
+    $.allocation_requests.`len`: 2
+    $.allocation_requests..allocations["$ENVIRON['NUMA0_UUID']"].resources.VCPU: 1
+    $.allocation_requests..allocations["$ENVIRON['NUMA1_UUID']"].resources.VCPU: 1
+    $.allocation_requests..allocations["$ENVIRON['NUMA0_UUID']"].resources.MEMORY_MB: 1024
+    $.allocation_requests..allocations["$ENVIRON['NUMA1_UUID']"].resources.MEMORY_MB: 1024
+
+- name: limit to multiattach-capable unsuffixed sharing
+  GET: /allocation_candidates?resources=VCPU:1,MEMORY_MB:1024,DISK_GB:100&root_required=COMPUTE_VOLUME_MULTI_ATTACH
+  status: 200
+  response_json_paths:
+    # We only get results from cn1 because only it has MULTI_ATTACH
+    # We get candidates where VCPU and MEMORY_MB are provided by the same or
+    # alternate NUMA roots. DISK_GB is always provided by the sharing provider.
+    $.allocation_requests.`len`: 4
+    $.provider_summaries["$ENVIRON['NUMA0_UUID']"].traits:
+      - HW_NUMA_ROOT
+    $.provider_summaries["$ENVIRON['NUMA1_UUID']"].traits:
+      - HW_NUMA_ROOT
+      - CUSTOM_FOO
+
+- name: limit to multiattach-capable granular sharing
+  GET: /allocation_candidates?resources1=VCPU:1,MEMORY_MB:1024&resources2=DISK_GB:100&&group_policy=none&root_required=COMPUTE_VOLUME_MULTI_ATTACH
+  status: 200
+  response_json_paths:
+    # We only get results from cn1 because only it has MULTI_ATTACH
+    # We only get candidates where VCPU and MEMORY_MB are provided by the same
+    # NUMA root, because requested in the same suffixed group. DISK_GB is
+    # always provided by the sharing provider.
+    $.allocation_requests.`len`: 2
+    $.allocation_requests..allocations["$ENVIRON['NUMA0_UUID']"].resources.VCPU: 1
+    $.allocation_requests..allocations["$ENVIRON['NUMA1_UUID']"].resources.VCPU: 1
+    $.allocation_requests..allocations["$ENVIRON['NUMA0_UUID']"].resources.MEMORY_MB: 1024
+    $.allocation_requests..allocations["$ENVIRON['NUMA1_UUID']"].resources.MEMORY_MB: 1024
+
+- name: trait exists on root and child in separate trees case 1 unsuffixed required
+  GET: /allocation_candidates?resources=VCPU:1,DISK_GB:100&required=CUSTOM_FOO
+  status: 200
+  response_json_paths:
+    # We get a candidates from cn2 and cn2+ss1 because cn2 has all the
+    # resources and the trait.
+    # We get a candidate from numa1+ss1 because (even in the unsuffixed group)
+    # regular `required` is tied to the resource in that group.
+    $.allocation_requests.`len`: 3
+    $.allocation_requests..allocations["$ENVIRON['NUMA1_UUID']"].resources.VCPU: 1
+    $.allocation_requests..allocations["$ENVIRON['CN2_UUID']"].resources.VCPU: [1, 1]
+    $.allocation_requests..allocations["$ENVIRON['SS1_UUID']"].resources.DISK_GB: [100, 100]
+    $.allocation_requests..allocations["$ENVIRON['CN2_UUID']"].resources.DISK_GB: 100
+
+- name: trait exists on root and child in separate trees case 2 unsuffixed root_required
+  GET: /allocation_candidates?resources=VCPU:1,DISK_GB:100&root_required=CUSTOM_FOO
+  status: 200
+  response_json_paths:
+    # We only get candidates from cn2 and cn2+ss1 because only cn2 has FOO on
+    # the root
+    $.allocation_requests.`len`: 2
+    $.allocation_requests..allocations["$ENVIRON['CN2_UUID']"].resources.VCPU: [1, 1]
+    $.allocation_requests..allocations["$ENVIRON['SS1_UUID']"].resources.DISK_GB: 100
+    $.allocation_requests..allocations["$ENVIRON['CN2_UUID']"].resources.DISK_GB: 100
+
+- name: trait exists on root and child in separate trees case 3 suffixed required
+  GET: /allocation_candidates?resources1=VCPU:1&required1=CUSTOM_FOO&resources2=DISK_GB:100&group_policy=none
+  status: 200
+  response_json_paths:
+    # We get a candidates from cn2 because has all the resources and the trait;
+    # and from cn2+ss1 because group_policy=none and the required trait is on
+    # the group with the VCPU.
+    # We get a candidate from numa1+ss1 because the required trait is on the
+    # group with the VCPU.
+    $.allocation_requests.`len`: 3
+    $.allocation_requests..allocations["$ENVIRON['NUMA1_UUID']"].resources.VCPU: 1
+    $.allocation_requests..allocations["$ENVIRON['CN2_UUID']"].resources.VCPU: [1, 1]
+    $.allocation_requests..allocations["$ENVIRON['SS1_UUID']"].resources.DISK_GB: [100, 100]
+    $.allocation_requests..allocations["$ENVIRON['CN2_UUID']"].resources.DISK_GB: 100
+
+- name: trait exists on root and child in separate trees case 4 suffixed root_required
+  GET: /allocation_candidates?resources1=VCPU:1&resources2=DISK_GB:100&group_policy=none&root_required=CUSTOM_FOO
+  status: 200
+  response_json_paths:
+    # We only get candidates from cn2 and cn2+ss1 because only cn2 has FOO on
+    # the root
+    $.allocation_requests.`len`: 2
+    $.allocation_requests..allocations["$ENVIRON['CN2_UUID']"].resources.VCPU: [1, 1]
+    $.allocation_requests..allocations["$ENVIRON['SS1_UUID']"].resources.DISK_GB: 100
+    $.allocation_requests..allocations["$ENVIRON['CN2_UUID']"].resources.DISK_GB: 100
+
+- name: no filtering for a forbidden trait that is on children in one tree and absent from the other
+  GET: /allocation_candidates?resources=VCPU:1&root_required=!HW_NUMA_ROOT
+  status: 200
+  response_json_paths:
+    # No root has HW_NUMA_ROOT, so we hit all providers of VCPU
+    $.allocation_requests.`len`: 3
+    $.allocation_requests..allocations["$ENVIRON['NUMA0_UUID']"].resources.VCPU: 1
+    $.allocation_requests..allocations["$ENVIRON['NUMA1_UUID']"].resources.VCPU: 1
+    $.allocation_requests..allocations["$ENVIRON['CN2_UUID']"].resources.VCPU: 1
+
+- name: forbidden trait on a sharing root
+  GET: /allocation_candidates?resources=VCPU:1,MEMORY_MB:1024,DISK_GB:100&root_required=!MISC_SHARES_VIA_AGGREGATE
+  status: 200
+  response_json_paths:
+    # This does not filter out candidates including the sharing provider, of
+    # which there are five (four from the combinations of VCPU+MEMORY_MB on cn1
+    # because non-isolated; one using VCPU+MEMORY_MB from cn2). The sixth is
+    # where cn2 provides all the resources.
+    $.allocation_requests.`len`: 6
+    $.allocation_requests..allocations["$ENVIRON['SS1_UUID']"].resources.DISK_GB: [100, 100, 100, 100, 100]
+
+- name: combine required with irrelevant forbidden
+  # This time the irrelevant forbidden is on a child provider
+  GET: /allocation_candidates?resources=VCPU:1,MEMORY_MB:1024,DISK_GB:100&root_required=CUSTOM_FOO,!HW_NUMA_ROOT
+  status: 200
+  response_json_paths:
+    # This is as above, but filtered to the candidates involving cn2, which has
+    # CUSTOM_FOO on the root.
+    $.allocation_requests.`len`: 2
+    $.allocation_requests..allocations["$ENVIRON['CN2_UUID']"].resources.VCPU: [1, 1]
+    $.allocation_requests..allocations["$ENVIRON['CN2_UUID']"].resources.MEMORY_MB: [1024, 1024]
+    $.allocation_requests..allocations["$ENVIRON['CN2_UUID']"].resources.DISK_GB: 100
+    $.allocation_requests..allocations["$ENVIRON['SS1_UUID']"].resources.DISK_GB: 100
+
+- name: redundant required and forbidden
+  GET: /allocation_candidates?resources=VCPU:1,MEMORY_MB:1024,DISK_GB:100&root_required=CUSTOM_FOO,!COMPUTE_VOLUME_MULTI_ATTACH
+  status: 200
+  response_json_paths:
+    # Same result as above. The forbidden multi-attach and the required foo are
+    # both doing the same thing.
+    $.allocation_requests.`len`: 2
+    $.allocation_requests..allocations["$ENVIRON['CN2_UUID']"].resources.VCPU: [1, 1]
+    $.allocation_requests..allocations["$ENVIRON['CN2_UUID']"].resources.MEMORY_MB: [1024, 1024]
+    $.allocation_requests..allocations["$ENVIRON['CN2_UUID']"].resources.DISK_GB: 100
+    $.allocation_requests..allocations["$ENVIRON['SS1_UUID']"].resources.DISK_GB: 100
+
+- name: forbiddens cancel each other
+  GET: /allocation_candidates?resources=VCPU:1,MEMORY_MB:1024,DISK_GB:100&root_required=!CUSTOM_FOO,!COMPUTE_VOLUME_MULTI_ATTACH
+  status: 200
+  response_json_paths:
+    # !foo gets rid of cn2; !multi-attach gets rid of cn1.
+    $.allocation_requests.`len`: 0
+
+- name: isolate foo granular sharing
+  GET: /allocation_candidates?resources1=VCPU:1,MEMORY_MB:1024&resources2=DISK_GB:100&&group_policy=none&root_required=!CUSTOM_FOO
+  status: 200
+  response_json_paths:
+    # We only get results from cn1 because cn2 has the forbidden foo trait.
+    # We only get candidates where VCPU and MEMORY_MB are provided by the same
+    # NUMA root, because requested in the same suffixed group. DISK_GB is
+    # always provided by the sharing provider.
+    $.allocation_requests.`len`: 2
+    $.allocation_requests..allocations["$ENVIRON['NUMA0_UUID']"].resources.VCPU: 1
+    $.allocation_requests..allocations["$ENVIRON['NUMA1_UUID']"].resources.VCPU: 1
+    $.allocation_requests..allocations["$ENVIRON['SS1_UUID']"].resources.DISK_GB: [100, 100]
+
+- name: unsuffixed required and root_required same trait
+  GET: /allocation_candidates?resources=VCPU:1&required=CUSTOM_FOO&root_required=CUSTOM_FOO
+  status: 200
+  response_json_paths:
+    # required=FOO would have limited us to getting VCPU from numa1 and cn2
+    # BUT root_required=FOO should further restrict us to just cn2
+    $.allocation_requests.`len`: 1
+    $.allocation_requests..allocations["$ENVIRON['CN2_UUID']"].resources.VCPU: 1
+
+- name: granular required and root_required same trait
+  GET: /allocation_candidates?resources1=VCPU:1&required1=CUSTOM_FOO&root_required=CUSTOM_FOO
+  status: 200
+  response_json_paths:
+    # same as above
+    $.allocation_requests.`len`: 1
+    $.allocation_requests..allocations["$ENVIRON['CN2_UUID']"].resources.VCPU: 1
+
+- name: required positive and root_required negative same trait
+  GET: /allocation_candidates?resources1=VCPU:1&required1=CUSTOM_FOO&root_required=!CUSTOM_FOO
+  status: 200
+  response_json_paths:
+    # Both numa1 and cn2 match required1=FOO, but since we're forbidding FOO on
+    # the root, we should only get numa1
+    $.allocation_requests.`len`: 1
+    $.allocation_requests..allocations["$ENVIRON['NUMA1_UUID']"].resources.VCPU: 1
+
+- name: required negative and root_required positive same trait
+  GET: /allocation_candidates?resources1=VCPU:1&required1=!CUSTOM_FOO&root_required=CUSTOM_FOO
+  status: 200
+  response_json_paths:
+    # The only provider of VCPU that doesn't have FOO is numa0. But numa0 is on
+    # cn1, which doesn't have the required FOO on the root.
+    $.allocation_requests.`len`: 0
@@ -41,13 +41,13 @@ tests:
   response_json_paths:
     $.errors[0].title: Not Acceptable

-- name: latest microversion is 1.34
+- name: latest microversion is 1.35
  GET: /
  request_headers:
    openstack-api-version: placement latest
  response_headers:
    vary: /openstack-api-version/
-    openstack-api-version: placement 1.34
+    openstack-api-version: placement 1.35

 - name: other accept header bad version
   GET: /
@@ -0,0 +1,12 @@
+---
+features:
+  - |
+    Microversion 1.35_ adds support for the ``root_required`` query parameter
+    to the ``GET /allocation_candidates`` API. It accepts a comma-delimited
+    list of trait names, each optionally prefixed with ``!`` to indicate a
+    forbidden trait, in the same format as the ``required`` query parameter.
+    This restricts allocation requests in the response to only those whose
+    (non-sharing) tree's root resource provider satisfies the specified trait
+    requirements.
+
+    .. _1.35: https://docs.openstack.org/placement/latest/placement-api-microversion-history.html#support-root_required-queryparam-on-get-allocation_candidates