Add microversion 1.39 to support any-trait queries

The new microversion adds support for the ``in:`` syntax in the ``required``
query parameter in the ``GET /resource_providers`` API as well as to the
``required`` and ``requiredN`` query params of the
``GET /allocation_candidates`` API. Also adds support for repeating the
``required`` and ``requiredN`` parameters in the respective APIs. So

  required=in:T3,T4&required=T1,!T2

is supported and it means T1 and not T2 and (T3 or T4).
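
As an illustration, a client can build such a query string with standard URL
encoding; the trait names T1..T4 above are placeholders:

    from urllib.parse import urlencode

    # Repeating the parameter produces two separate 'required' query parts;
    # placement ANDs the repeated values together.
    params = [
        ('required', 'in:T3,T4'),  # any of T3, T4
        ('required', 'T1,!T2'),    # and T1, and not T2
    ]
    print('GET /resource_providers?' + urlencode(params))
    # GET /resource_providers?required=in%3AT3%2CT4&required=T1%2C%21T2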

Story: 2005345
Story: 2005346
Change-Id: I66543c0c5509739d1461af2fb2c327a003202d74
Balazs Gibizer 2022-01-27 17:10:50 +01:00
parent 8245f9e5ec
commit b2afade159
11 changed files with 641 additions and 31 deletions

@ -246,6 +246,15 @@ required_traits_granular:
It is an error to specify a ``requiredN`` parameter without a corresponding
``resourcesN`` parameter with the same suffix.
**Starting from microversion 1.39** the granular ``requiredN`` query
parameter gained support for the ``in:`` syntax as well as the repetition
of the parameter. So::

  requiredN=in:T3,T4&requiredN=T1,!T2

is supported and it means T1 and not T2 and (T3 or T4).
min_version: 1.25
required_traits_unnumbered:
type: string
@ -264,6 +273,30 @@ required_traits_unnumbered:
associated via aggregate. **Starting from microversion 1.22** traits which
are forbidden from any resource provider may be expressed by prefixing a
trait with a ``!``.
**Starting from microversion 1.39** the ``required`` query parameter can be
repeated. The trait lists from the repeated parameters are ANDed together.
So::

  required=T1,!T2&required=T3

means T1 and not T2 and T3.

Also **starting from microversion 1.39** the ``required`` parameter
supports the syntax::

  required=in:T1,T2,T3

which means T1 or T2 or T3.

Mixing forbidden traits into an ``in:`` prefixed value is not supported and
is rejected. But mixing a normal trait list and an ``in:`` prefixed trait
list in two query params within the same request is supported. So::

  required=in:T3,T4&required=T1,!T2

is supported and it means T1 and not T2 and (T3 or T4).
resource_provider_member_of:
type: string
in: query
@ -325,11 +358,35 @@ resource_provider_required_query:
type: string
in: query
required: false
description: >
description: |
A comma-delimited list of string trait names. Results will be filtered to
include only resource providers having all the specified traits. **Starting
from microversion 1.22** traits which are forbidden from any resource
provider may be expressed by prefixing a trait with a ``!``.
**Starting from microversion 1.39** the ``required`` query parameter can be
repeated. The trait lists from the repeated parameters are ANDed together.
So::

  required=T1,!T2&required=T3

means T1 and not T2 and T3.

Also **starting from microversion 1.39** the ``required`` parameter
supports the syntax::

  required=in:T1,T2,T3

which means T1 or T2 or T3.

Mixing forbidden traits into an ``in:`` prefixed value is not supported and
is rejected. But mixing a normal trait list and an ``in:`` prefixed trait
list in two query params within the same request is supported. So::

  required=in:T3,T4&required=T1,!T2

is supported and it means T1 and not T2 and (T3 or T4).
min_version: 1.18
resource_provider_tree_query:
type: string

@ -205,6 +205,11 @@ spec for details. The ``required`` query parameter also supports negative
expression, via the ``!`` prefix, for forbidden traits. If a forbidden trait
is specified, none of the resource providers that appear in the allocation
candidate may have that trait. See the `Forbidden Traits`_ spec for details.
The ``required`` parameter also supports the syntax ``in:T1,T2,...`` which
means we are looking for resource providers that have either the T1 or the T2
trait. The two trait query syntaxes can be combined by repeating the
``required`` query parameter. So querying providers having (T1 or T2) and T3
and not T4 can be expressed with ``required=in:T1,T2&required=T3,!T4``.
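
The combined expression can be read as a simple boolean predicate over the
trait set of a candidate provider tree. A rough sketch of that reading, with
placeholder trait names::

    def matches(traits):
        # required=in:T1,T2  -> any of T1, T2
        # required=T3,!T4    -> T3 required, T4 forbidden
        return (('T1' in traits or 'T2' in traits)
                and 'T3' in traits
                and 'T4' not in traits)

    assert matches({'T1', 'T3'})
    assert not matches({'T1', 'T3', 'T4'})  # forbidden trait present
    assert not matches({'T3'})              # neither T1 nor T2
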
For example, let's say we have the following environment::
@ -493,6 +498,10 @@ each optionally prefixed with ``!`` to indicate that it is forbidden.
.. note:: When sharing providers are involved in the request, ``root_required``
applies only to the root of the non-sharing provider tree.
.. note:: While the ``required`` param supports the any-traits query with the
``in:`` prefix syntax since microversion 1.39, the ``root_required``
parameter does not support it yet.
Filtering by Same Subtree
=========================

@ -99,6 +99,10 @@ VERSIONS = [
# type irrespective of whether the ``consumer_type`` was specified
# in the request. The corresponding changes to ``/reshaper`` are
# included.
'1.39', # Adds support for the ``in:`` syntax in the ``required`` query
# parameter in the ``GET /resource_providers`` API as well as to
# the ``required`` and ``requiredN`` query params of the
# ``GET /allocation_candidates`` API.
]

@ -698,3 +698,18 @@ considered to have an ``unknown`` ``consumer_type``. If an ``unknown``
``unknown``.
The corresponding changes to ``POST /reshaper`` are included.
1.39 - Support for the any-traits syntax in the ``required`` parameter
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. versionadded:: Yoga
Adds support for the ``in:`` syntax in the ``required`` query parameter in the
``GET /resource_providers`` API as well as to the ``required`` and
``requiredN`` query params of the ``GET /allocation_candidates`` API. It also
adds support for repeating the ``required`` and ``requiredN`` parameters in
the respective APIs. So::

  required=in:T3,T4&required=T1,!T2

is supported and it means T1 and not T2 and (T3 or T4).
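
Conceptually, each ``required`` value is normalized into trait sets that are
ANDed together, with forbidden traits collected separately. A simplified
sketch of that normalization (illustrative only, not the actual placement
code)::

    def normalize(values):
        required = []   # list of trait sets; each set is an any-of group
        forbidden = set()
        for value in values:
            if value.startswith('in:'):
                required.append(set(value[3:].split(',')))
            else:
                for trait in value.split(','):
                    if trait.startswith('!'):
                        forbidden.add(trait[1:])
                    else:
                        required.append({trait})
        return required, forbidden

    # normalize(['in:T3,T4', 'T1,!T2']) == ([{'T3', 'T4'}, {'T1'}], {'T2'})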

@ -772,3 +772,339 @@ class LegacyRBACPolicyFixture(APIFixture):
"""An APIFixture that enforce deprecated policies."""
_secure_rbac = False
class NeutronQoSMultiSegmentFixture(APIFixture):
"""A Gabbi API fixture that creates compute trees simulating Neutron
configured with QoS min bw and min packet rate features in multisegment
networks.
"""
# We have 4 trees. 3 of them have the following structure:
#
# compute
# \ VCPU:8, MEMORY_MB:4096, DISK_GB:500
# |\
# | - Open vSwitch agent
# | | NET_PACKET_RATE_KILOPACKET_PER_SEC: 1000
# | \ CUSTOM_VNIC_TYPE_NORMAL
# | \
# | - br-ex
# | NET_BW_EGR_KILOBIT_PER_SEC: 5000
# | NET_BW_IGR_KILOBIT_PER_SEC: 5000
# | CUSTOM_VNIC_TYPE_NORMAL
# | CUSTOM_PHYSNET_???
# \
# - NIC Switch agent
# | CUSTOM_VNIC_TYPE_DIRECT
# | CUSTOM_VNIC_TYPE_DIRECT_PHYSICAL
# | CUSTOM_VNIC_TYPE_MACVTAP
# \
# \
# - enp129s0f0
# NET_BW_EGR_KILOBIT_PER_SEC: 10000
# NET_BW_IGR_KILOBIT_PER_SEC: 10000
# CUSTOM_VNIC_TYPE_DIRECT
# CUSTOM_VNIC_TYPE_DIRECT_PHYSICAL
# CUSTOM_VNIC_TYPE_MACVTAP
# 'CUSTOM_PHYSNET_???'
#
# The CUSTOM_PHYSNET_??? placeholder defines the network segment connectivity:
# compute0: CUSTOM_PHYSNET_OTHER
# compute1: CUSTOM_PHYSNET_MSN_S1
# compute2: CUSTOM_PHYSNET_MSN_S2
#
# There is a 4th compute that has duplicate network connectivity:
# compute3-br-ex is connected to CUSTOM_PHYSNET_MSN_S1
# compute3-br-ex2 is connected to CUSTOM_PHYSNET_MSN_S2
# compute3-enp129s0f0 is connected to CUSTOM_PHYSNET_MSN_S1
# compute3-enp129s0f1 is connected to CUSTOM_PHYSNET_MSN_S2
# but compute3 also has limited bandwidth capacity
def start_fixture(self):
super(NeutronQoSMultiSegmentFixture, self).start_fixture()
# compute 0 with no connectivity to the multi-segment network
compute0 = tb.create_provider(self.context, 'compute0')
os.environ['compute0'] = compute0.uuid
tb.add_inventory(compute0, 'VCPU', 8)
tb.add_inventory(compute0, 'MEMORY_MB', 4096)
tb.add_inventory(compute0, 'DISK_GB', 500)
# OVS agent subtree
compute0_ovs_agent = tb.create_provider(
self.context, 'compute0:Open vSwitch agent', parent=compute0.uuid)
os.environ['compute0:ovs_agent'] = compute0_ovs_agent.uuid
tb.add_inventory(
compute0_ovs_agent, 'NET_PACKET_RATE_KILOPACKET_PER_SEC', 1000)
tb.set_traits(
compute0_ovs_agent,
'CUSTOM_VNIC_TYPE_NORMAL',
)
compute0_br_ex = tb.create_provider(
self.context,
'compute0:Open vSwitch agent:br-ex',
parent=compute0_ovs_agent.uuid
)
os.environ['compute0:br_ex'] = compute0_br_ex.uuid
tb.add_inventory(
compute0_br_ex, 'NET_BW_EGR_KILOBIT_PER_SEC', 5000)
tb.add_inventory(
compute0_br_ex, 'NET_BW_IGR_KILOBIT_PER_SEC', 5000)
tb.set_traits(
compute0_br_ex,
'CUSTOM_VNIC_TYPE_NORMAL',
'CUSTOM_PHYSNET_OTHER',
)
# SRIOV agent subtree
compute0_sriov_agent = tb.create_provider(
self.context, 'compute0:NIC Switch agent', parent=compute0.uuid)
os.environ['compute0:sriov_agent'] = compute0_sriov_agent.uuid
tb.set_traits(
compute0_sriov_agent,
'CUSTOM_VNIC_TYPE_DIRECT',
'CUSTOM_VNIC_TYPE_DIRECT_PHYSICAL',
'CUSTOM_VNIC_TYPE_MACVTAP',
)
compute0_pf0 = tb.create_provider(
self.context,
'compute0:NIC Switch agent:enp129s0f0',
parent=compute0_sriov_agent.uuid
)
os.environ['compute0:pf0'] = compute0_pf0.uuid
tb.add_inventory(
compute0_pf0, 'NET_BW_EGR_KILOBIT_PER_SEC', 10000)
tb.add_inventory(
compute0_pf0, 'NET_BW_IGR_KILOBIT_PER_SEC', 10000)
tb.set_traits(
compute0_pf0,
'CUSTOM_VNIC_TYPE_DIRECT',
'CUSTOM_VNIC_TYPE_DIRECT_PHYSICAL',
'CUSTOM_VNIC_TYPE_MACVTAP',
'CUSTOM_PHYSNET_OTHER',
)
# compute 1 with network connectivity to segment 1
compute1 = tb.create_provider(self.context, 'compute1')
os.environ['compute1'] = compute1.uuid
tb.add_inventory(compute1, 'VCPU', 8)
tb.add_inventory(compute1, 'MEMORY_MB', 4096)
tb.add_inventory(compute1, 'DISK_GB', 500)
# OVS agent subtree
compute1_ovs_agent = tb.create_provider(
self.context, 'compute1:Open vSwitch agent', parent=compute1.uuid)
os.environ['compute1:ovs_agent'] = compute1_ovs_agent.uuid
tb.add_inventory(
compute1_ovs_agent, 'NET_PACKET_RATE_KILOPACKET_PER_SEC', 1000)
tb.set_traits(
compute1_ovs_agent,
'CUSTOM_VNIC_TYPE_NORMAL',
)
compute1_br_ex = tb.create_provider(
self.context,
'compute1:Open vSwitch agent:br-ex',
parent=compute1_ovs_agent.uuid
)
os.environ['compute1:br_ex'] = compute1_br_ex.uuid
tb.add_inventory(
compute1_br_ex, 'NET_BW_EGR_KILOBIT_PER_SEC', 5000)
tb.add_inventory(
compute1_br_ex, 'NET_BW_IGR_KILOBIT_PER_SEC', 5000)
tb.set_traits(
compute1_br_ex,
'CUSTOM_VNIC_TYPE_NORMAL',
'CUSTOM_PHYSNET_MSN_S1',
)
# SRIOV agent subtree
compute1_sriov_agent = tb.create_provider(
self.context, 'compute1:NIC Switch agent', parent=compute1.uuid)
os.environ['compute1:sriov_agent'] = compute1_sriov_agent.uuid
tb.set_traits(
compute1_sriov_agent,
'CUSTOM_VNIC_TYPE_DIRECT',
'CUSTOM_VNIC_TYPE_DIRECT_PHYSICAL',
'CUSTOM_VNIC_TYPE_MACVTAP',
)
compute1_pf0 = tb.create_provider(
self.context,
'compute1:NIC Switch agent:enp129s0f0',
parent=compute1_sriov_agent.uuid
)
os.environ['compute1:pf0'] = compute1_pf0.uuid
tb.add_inventory(
compute1_pf0, 'NET_BW_EGR_KILOBIT_PER_SEC', 10000)
tb.add_inventory(
compute1_pf0, 'NET_BW_IGR_KILOBIT_PER_SEC', 10000)
tb.set_traits(
compute1_pf0,
'CUSTOM_VNIC_TYPE_DIRECT',
'CUSTOM_VNIC_TYPE_DIRECT_PHYSICAL',
'CUSTOM_VNIC_TYPE_MACVTAP',
'CUSTOM_PHYSNET_MSN_S1',
)
# compute 2 with network connectivity to segment 2
compute2 = tb.create_provider(self.context, 'compute2')
os.environ['compute2'] = compute2.uuid
tb.add_inventory(compute2, 'VCPU', 8)
tb.add_inventory(compute2, 'MEMORY_MB', 4096)
tb.add_inventory(compute2, 'DISK_GB', 500)
# OVS agent subtree
compute2_ovs_agent = tb.create_provider(
self.context, 'compute2:Open vSwitch agent', parent=compute2.uuid)
os.environ['compute2:ovs_agent'] = compute2_ovs_agent.uuid
tb.add_inventory(
compute2_ovs_agent, 'NET_PACKET_RATE_KILOPACKET_PER_SEC', 1000)
tb.set_traits(
compute2_ovs_agent,
'CUSTOM_VNIC_TYPE_NORMAL',
)
compute2_br_ex = tb.create_provider(
self.context,
'compute2:Open vSwitch agent:br-ex',
parent=compute2_ovs_agent.uuid
)
os.environ['compute2:br_ex'] = compute2_br_ex.uuid
tb.add_inventory(
compute2_br_ex, 'NET_BW_EGR_KILOBIT_PER_SEC', 5000)
tb.add_inventory(
compute2_br_ex, 'NET_BW_IGR_KILOBIT_PER_SEC', 5000)
tb.set_traits(
compute2_br_ex,
'CUSTOM_VNIC_TYPE_NORMAL',
'CUSTOM_PHYSNET_MSN_S2',
)
# SRIOV agent subtree
compute2_sriov_agent = tb.create_provider(
self.context, 'compute2:NIC Switch agent', parent=compute2.uuid)
os.environ['compute2:sriov_agent'] = compute2_sriov_agent.uuid
tb.set_traits(
compute2_sriov_agent,
'CUSTOM_VNIC_TYPE_DIRECT',
'CUSTOM_VNIC_TYPE_DIRECT_PHYSICAL',
'CUSTOM_VNIC_TYPE_MACVTAP',
)
compute2_pf0 = tb.create_provider(
self.context,
'compute2:NIC Switch agent:enp129s0f0',
parent=compute2_sriov_agent.uuid
)
os.environ['compute2:pf0'] = compute2_pf0.uuid
tb.add_inventory(
compute2_pf0, 'NET_BW_EGR_KILOBIT_PER_SEC', 10000)
tb.add_inventory(
compute2_pf0, 'NET_BW_IGR_KILOBIT_PER_SEC', 10000)
tb.set_traits(
compute2_pf0,
'CUSTOM_VNIC_TYPE_DIRECT',
'CUSTOM_VNIC_TYPE_DIRECT_PHYSICAL',
'CUSTOM_VNIC_TYPE_MACVTAP',
'CUSTOM_PHYSNET_MSN_S2',
)
# compute 3 with network connectivity to both segment 1 and 2
compute3 = tb.create_provider(self.context, 'compute3')
os.environ['compute3'] = compute3.uuid
tb.add_inventory(compute3, 'VCPU', 8)
tb.add_inventory(compute3, 'MEMORY_MB', 4096)
tb.add_inventory(compute3, 'DISK_GB', 500)
# OVS agent subtree
compute3_ovs_agent = tb.create_provider(
self.context, 'compute3:Open vSwitch agent', parent=compute3.uuid)
os.environ['compute3:ovs_agent'] = compute3_ovs_agent.uuid
tb.add_inventory(
compute3_ovs_agent, 'NET_PACKET_RATE_KILOPACKET_PER_SEC', 1000)
tb.set_traits(
compute3_ovs_agent,
'CUSTOM_VNIC_TYPE_NORMAL',
)
compute3_br_ex = tb.create_provider(
self.context,
'compute3:Open vSwitch agent:br-ex',
parent=compute3_ovs_agent.uuid
)
os.environ['compute3:br_ex'] = compute3_br_ex.uuid
tb.add_inventory(
compute3_br_ex, 'NET_BW_EGR_KILOBIT_PER_SEC', 1000)
tb.add_inventory(
compute3_br_ex, 'NET_BW_IGR_KILOBIT_PER_SEC', 1000)
tb.set_traits(
compute3_br_ex,
'CUSTOM_VNIC_TYPE_NORMAL',
'CUSTOM_PHYSNET_MSN_S1',
)
compute3_br_ex2 = tb.create_provider(
self.context,
'compute3:Open vSwitch agent:br-ex2',
parent=compute3_ovs_agent.uuid
)
os.environ['compute3:br_ex2'] = compute3_br_ex2.uuid
tb.add_inventory(
compute3_br_ex2, 'NET_BW_EGR_KILOBIT_PER_SEC', 1000)
tb.add_inventory(
compute3_br_ex2, 'NET_BW_IGR_KILOBIT_PER_SEC', 1000)
tb.set_traits(
compute3_br_ex2,
'CUSTOM_VNIC_TYPE_NORMAL',
'CUSTOM_PHYSNET_MSN_S2',
)
# SRIOV agent subtree
compute3_sriov_agent = tb.create_provider(
self.context, 'compute3:NIC Switch agent', parent=compute3.uuid)
os.environ['compute3:sriov_agent'] = compute3_sriov_agent.uuid
tb.set_traits(
compute3_sriov_agent,
'CUSTOM_VNIC_TYPE_DIRECT',
'CUSTOM_VNIC_TYPE_DIRECT_PHYSICAL',
'CUSTOM_VNIC_TYPE_MACVTAP',
)
compute3_pf0 = tb.create_provider(
self.context,
'compute3:NIC Switch agent:enp129s0f0',
parent=compute3_sriov_agent.uuid
)
os.environ['compute3:pf0'] = compute3_pf0.uuid
tb.add_inventory(
compute3_pf0, 'NET_BW_EGR_KILOBIT_PER_SEC', 1000)
tb.add_inventory(
compute3_pf0, 'NET_BW_IGR_KILOBIT_PER_SEC', 1000)
tb.set_traits(
compute3_pf0,
'CUSTOM_VNIC_TYPE_DIRECT',
'CUSTOM_VNIC_TYPE_DIRECT_PHYSICAL',
'CUSTOM_VNIC_TYPE_MACVTAP',
'CUSTOM_PHYSNET_MSN_S1',
)
compute3_pf1 = tb.create_provider(
self.context,
'compute3:NIC Switch agent:enp129s0f1',
parent=compute3_sriov_agent.uuid
)
os.environ['compute3:pf1'] = compute3_pf1.uuid
tb.add_inventory(
compute3_pf1, 'NET_BW_EGR_KILOBIT_PER_SEC', 1000)
tb.add_inventory(
compute3_pf1, 'NET_BW_IGR_KILOBIT_PER_SEC', 1000)
tb.set_traits(
compute3_pf1,
'CUSTOM_VNIC_TYPE_DIRECT',
'CUSTOM_VNIC_TYPE_DIRECT_PHYSICAL',
'CUSTOM_VNIC_TYPE_MACVTAP',
'CUSTOM_PHYSNET_MSN_S2',
)

@ -0,0 +1,159 @@
fixtures:
- NeutronQoSMultiSegmentFixture
defaults:
request_headers:
x-auth-token: admin
accept: application/json
openstack-api-version: placement latest
tests:
- name: a VM with a single port on a non-multi-segment network
# only compute0 has access to the non-multi-segment network
GET: >-
/allocation_candidates?resources=VCPU:1,MEMORY_MB:1024,DISK_GB:10
&resources-port-normal-pps=NET_PACKET_RATE_KILOPACKET_PER_SEC:1000
&required-port-normal-pps=CUSTOM_VNIC_TYPE_NORMAL
&resources-port-normal-bw=NET_BW_EGR_KILOBIT_PER_SEC:1000,NET_BW_IGR_KILOBIT_PER_SEC:1000
&required-port-normal-bw=CUSTOM_VNIC_TYPE_NORMAL,CUSTOM_PHYSNET_OTHER
&same_subtree=-port-normal-pps,-port-normal-bw
&group_policy=none
status: 200
response_json_paths:
$.allocation_requests.`len`: 1
$.allocation_requests..allocations["$ENVIRON['compute0']"].resources[VCPU]: 1
$.allocation_requests..allocations["$ENVIRON['compute0']"].resources[MEMORY_MB]: 1024
$.allocation_requests..allocations["$ENVIRON['compute0']"].resources[DISK_GB]: 10
$.allocation_requests..allocations["$ENVIRON['compute0:ovs_agent']"].resources[NET_PACKET_RATE_KILOPACKET_PER_SEC]: 1000
$.allocation_requests..allocations["$ENVIRON['compute0:br_ex']"].resources[NET_BW_IGR_KILOBIT_PER_SEC]: 1000
$.allocation_requests..allocations["$ENVIRON['compute0:br_ex']"].resources[NET_BW_EGR_KILOBIT_PER_SEC]: 1000
- name: a VM with a single port on the multi-segment network
# compute1 and compute2 each have access to one segment while compute3 has
# access to two segments, so compute1 and compute2 will each have one
# candidate while compute3 will have two
GET: >-
/allocation_candidates?resources=VCPU:1,MEMORY_MB:1024,DISK_GB:10
&resources-port-msn-pps=NET_PACKET_RATE_KILOPACKET_PER_SEC:1000
&required-port-msn-pps=CUSTOM_VNIC_TYPE_NORMAL
&resources-port-msn-bw=NET_BW_EGR_KILOBIT_PER_SEC:1000,NET_BW_IGR_KILOBIT_PER_SEC:1000
&required-port-msn-bw=CUSTOM_VNIC_TYPE_NORMAL
&required-port-msn-bw=in:CUSTOM_PHYSNET_MSN_S1,CUSTOM_PHYSNET_MSN_S2
&same_subtree=-port-msn-pps,-port-msn-bw
&group_policy=none
status: 200
response_json_paths:
$.allocation_requests.`len`: 4
$.allocation_requests..allocations["$ENVIRON['compute1']"].resources[VCPU]: 1
$.allocation_requests..allocations["$ENVIRON['compute1']"].resources[MEMORY_MB]: 1024
$.allocation_requests..allocations["$ENVIRON['compute1']"].resources[DISK_GB]: 10
$.allocation_requests..allocations["$ENVIRON['compute1:ovs_agent']"].resources[NET_PACKET_RATE_KILOPACKET_PER_SEC]: 1000
$.allocation_requests..allocations["$ENVIRON['compute1:br_ex']"].resources[NET_BW_IGR_KILOBIT_PER_SEC]: 1000
$.allocation_requests..allocations["$ENVIRON['compute1:br_ex']"].resources[NET_BW_EGR_KILOBIT_PER_SEC]: 1000
$.allocation_requests..allocations["$ENVIRON['compute2']"].resources[VCPU]: 1
$.allocation_requests..allocations["$ENVIRON['compute2']"].resources[MEMORY_MB]: 1024
$.allocation_requests..allocations["$ENVIRON['compute2']"].resources[DISK_GB]: 10
$.allocation_requests..allocations["$ENVIRON['compute2:ovs_agent']"].resources[NET_PACKET_RATE_KILOPACKET_PER_SEC]: 1000
$.allocation_requests..allocations["$ENVIRON['compute2:br_ex']"].resources[NET_BW_IGR_KILOBIT_PER_SEC]: 1000
$.allocation_requests..allocations["$ENVIRON['compute2:br_ex']"].resources[NET_BW_EGR_KILOBIT_PER_SEC]: 1000
$.allocation_requests..allocations["$ENVIRON['compute3']"].resources[VCPU]: [1, 1]
$.allocation_requests..allocations["$ENVIRON['compute3']"].resources[MEMORY_MB]: [1024, 1024]
$.allocation_requests..allocations["$ENVIRON['compute3']"].resources[DISK_GB]: [10, 10]
$.allocation_requests..allocations["$ENVIRON['compute3:ovs_agent']"].resources[NET_PACKET_RATE_KILOPACKET_PER_SEC]: [1000, 1000]
$.allocation_requests..allocations["$ENVIRON['compute3:br_ex']"].resources[NET_BW_IGR_KILOBIT_PER_SEC]: 1000
$.allocation_requests..allocations["$ENVIRON['compute3:br_ex']"].resources[NET_BW_EGR_KILOBIT_PER_SEC]: 1000
$.allocation_requests..allocations["$ENVIRON['compute3:br_ex2']"].resources[NET_BW_IGR_KILOBIT_PER_SEC]: 1000
$.allocation_requests..allocations["$ENVIRON['compute3:br_ex2']"].resources[NET_BW_EGR_KILOBIT_PER_SEC]: 1000
- name: a VM with two ports on the multi-segment network with limited bandwidth
# Similarly to the single port test, compute1 and compute2 can each offer one
# allocation candidate as both ports fit into the one segment of each compute.
# However, compute3 only has enough bandwidth capacity for one port per
# connected network segment. So either we allocate port1-segment1 and
# port2-segment2 OR port1-segment2 and port2-segment1
GET: >-
/allocation_candidates?resources=VCPU:1,MEMORY_MB:1024,DISK_GB:10
&resources-port1-msn-pps=NET_PACKET_RATE_KILOPACKET_PER_SEC:100
&required-port1-msn-pps=CUSTOM_VNIC_TYPE_NORMAL
&resources-port1-msn-bw=NET_BW_EGR_KILOBIT_PER_SEC:1000,NET_BW_IGR_KILOBIT_PER_SEC:1000
&required-port1-msn-bw=CUSTOM_VNIC_TYPE_NORMAL
&required-port1-msn-bw=in:CUSTOM_PHYSNET_MSN_S1,CUSTOM_PHYSNET_MSN_S2
&same_subtree=-port1-msn-pps,-port1-msn-bw
&resources-port2-msn-pps=NET_PACKET_RATE_KILOPACKET_PER_SEC:100
&required-port2-msn-pps=CUSTOM_VNIC_TYPE_NORMAL
&resources-port2-msn-bw=NET_BW_EGR_KILOBIT_PER_SEC:1000,NET_BW_IGR_KILOBIT_PER_SEC:1000
&required-port2-msn-bw=CUSTOM_VNIC_TYPE_NORMAL
&required-port2-msn-bw=in:CUSTOM_PHYSNET_MSN_S1,CUSTOM_PHYSNET_MSN_S2
&same_subtree=-port2-msn-pps,-port2-msn-bw
&group_policy=none
status: 200
response_json_paths:
$.allocation_requests.`len`: 4
$.allocation_requests..allocations["$ENVIRON['compute1']"].resources[VCPU]: 1
$.allocation_requests..allocations["$ENVIRON['compute1']"].resources[MEMORY_MB]: 1024
$.allocation_requests..allocations["$ENVIRON['compute1']"].resources[DISK_GB]: 10
$.allocation_requests..allocations["$ENVIRON['compute1:ovs_agent']"].resources[NET_PACKET_RATE_KILOPACKET_PER_SEC]: 200
$.allocation_requests..allocations["$ENVIRON['compute1:br_ex']"].resources[NET_BW_IGR_KILOBIT_PER_SEC]: 2000
$.allocation_requests..allocations["$ENVIRON['compute2']"].resources[VCPU]: 1
$.allocation_requests..allocations["$ENVIRON['compute2']"].resources[MEMORY_MB]: 1024
$.allocation_requests..allocations["$ENVIRON['compute2']"].resources[DISK_GB]: 10
$.allocation_requests..allocations["$ENVIRON['compute2:ovs_agent']"].resources[NET_PACKET_RATE_KILOPACKET_PER_SEC]: 200
$.allocation_requests..allocations["$ENVIRON['compute2:br_ex']"].resources[NET_BW_IGR_KILOBIT_PER_SEC]: 2000
$.allocation_requests..allocations["$ENVIRON['compute3']"].resources[VCPU]: [1, 1]
$.allocation_requests..allocations["$ENVIRON['compute3']"].resources[MEMORY_MB]: [1024, 1024]
$.allocation_requests..allocations["$ENVIRON['compute3']"].resources[DISK_GB]: [10, 10]
$.allocation_requests..allocations["$ENVIRON['compute3:ovs_agent']"].resources[NET_PACKET_RATE_KILOPACKET_PER_SEC]: [200, 200]
$.allocation_requests..allocations["$ENVIRON['compute3:br_ex']"].resources[NET_BW_IGR_KILOBIT_PER_SEC]: [1000, 1000]
$.allocation_requests..allocations["$ENVIRON['compute3:br_ex2']"].resources[NET_BW_IGR_KILOBIT_PER_SEC]: [1000, 1000]
- name: a VM with two ports on the multi-segment network
# A similar test to the previous one, but the bandwidth request is decreased
# so that compute3 can now fit both ports into one segment. This means
# compute3 now has 4 candidates
GET: >-
/allocation_candidates?resources=VCPU:1,MEMORY_MB:1024,DISK_GB:10
&resources-port1-msn-pps=NET_PACKET_RATE_KILOPACKET_PER_SEC:100
&required-port1-msn-pps=CUSTOM_VNIC_TYPE_NORMAL
&resources-port1-msn-bw=NET_BW_EGR_KILOBIT_PER_SEC:100,NET_BW_IGR_KILOBIT_PER_SEC:100
&required-port1-msn-bw=CUSTOM_VNIC_TYPE_NORMAL
&required-port1-msn-bw=in:CUSTOM_PHYSNET_MSN_S1,CUSTOM_PHYSNET_MSN_S2
&same_subtree=-port1-msn-pps,-port1-msn-bw
&resources-port2-msn-pps=NET_PACKET_RATE_KILOPACKET_PER_SEC:100
&required-port2-msn-pps=CUSTOM_VNIC_TYPE_NORMAL
&resources-port2-msn-bw=NET_BW_EGR_KILOBIT_PER_SEC:100,NET_BW_IGR_KILOBIT_PER_SEC:100
&required-port2-msn-bw=CUSTOM_VNIC_TYPE_NORMAL
&required-port2-msn-bw=in:CUSTOM_PHYSNET_MSN_S1,CUSTOM_PHYSNET_MSN_S2
&same_subtree=-port2-msn-pps,-port2-msn-bw
&group_policy=none
status: 200
response_json_paths:
$.allocation_requests.`len`: 6
$.allocation_requests..allocations["$ENVIRON['compute1']"].resources[VCPU]: 1
$.allocation_requests..allocations["$ENVIRON['compute1']"].resources[MEMORY_MB]: 1024
$.allocation_requests..allocations["$ENVIRON['compute1']"].resources[DISK_GB]: 10
$.allocation_requests..allocations["$ENVIRON['compute1:ovs_agent']"].resources[NET_PACKET_RATE_KILOPACKET_PER_SEC]: 200
$.allocation_requests..allocations["$ENVIRON['compute1:br_ex']"].resources[NET_BW_IGR_KILOBIT_PER_SEC]: 200
$.allocation_requests..allocations["$ENVIRON['compute1:br_ex']"].resources[NET_BW_EGR_KILOBIT_PER_SEC]: 200
$.allocation_requests..allocations["$ENVIRON['compute2']"].resources[VCPU]: 1
$.allocation_requests..allocations["$ENVIRON['compute2']"].resources[MEMORY_MB]: 1024
$.allocation_requests..allocations["$ENVIRON['compute2']"].resources[DISK_GB]: 10
$.allocation_requests..allocations["$ENVIRON['compute2:ovs_agent']"].resources[NET_PACKET_RATE_KILOPACKET_PER_SEC]: 200
$.allocation_requests..allocations["$ENVIRON['compute2:br_ex']"].resources[NET_BW_IGR_KILOBIT_PER_SEC]: 200
$.allocation_requests..allocations["$ENVIRON['compute2:br_ex']"].resources[NET_BW_EGR_KILOBIT_PER_SEC]: 200
$.allocation_requests..allocations["$ENVIRON['compute3']"].resources[VCPU]: [1, 1, 1, 1]
$.allocation_requests..allocations["$ENVIRON['compute3']"].resources[MEMORY_MB]: [1024, 1024, 1024, 1024]
$.allocation_requests..allocations["$ENVIRON['compute3']"].resources[DISK_GB]: [10, 10, 10, 10]
$.allocation_requests..allocations["$ENVIRON['compute3:ovs_agent']"].resources[NET_PACKET_RATE_KILOPACKET_PER_SEC]: [200, 200, 200, 200]
# So the 4 candidates from compute3 are
# * both ports allocate from br_ex so br_ex has a consumption of 100 + 100,
# then br_ex2 is not in the candidate (this is why the br_ex2 lists have only 3 items)
# * both ports allocate from br_ex2 then br_ex is not in the candidate (this is why the br_ex lists have only 3 items)
# * port1 allocates 100 from br_ex, port2 allocates 100 from br_ex2
# * port2 allocates 100 from br_ex, port1 allocates 100 from br_ex2
# As the candidates are in random order, the right-hand side needs to list all possible permutations
$.allocation_requests..allocations["$ENVIRON['compute3:br_ex']"].resources[NET_BW_IGR_KILOBIT_PER_SEC]: /[100, 100, 200]|[100, 200, 100]|[200, 100, 100]/
$.allocation_requests..allocations["$ENVIRON['compute3:br_ex']"].resources[NET_BW_EGR_KILOBIT_PER_SEC]: /[100, 100, 200]|[100, 200, 100]|[200, 100, 100]/
$.allocation_requests..allocations["$ENVIRON['compute3:br_ex2']"].resources[NET_BW_IGR_KILOBIT_PER_SEC]: /[100, 100, 200]|[100, 200, 100]|[200, 100, 100]/
$.allocation_requests..allocations["$ENVIRON['compute3:br_ex2']"].resources[NET_BW_EGR_KILOBIT_PER_SEC]: /[100, 100, 200]|[100, 200, 100]|[200, 100, 100]/

@ -46,3 +46,31 @@ tests:
status: 200
response_json_paths:
$.allocation_requests.`len`: 1
- name: get candidates with OR, AND, and NOT trait queries
# DXVA or TLS would allow all the trees, AVX filters that down to the left
# and the middle, but FOO forbids the left so the middle remains. The middle
# has access to two shared disk providers so the query returns two candidates
GET: /allocation_candidates?required=in:HW_GPU_API_DXVA,HW_NIC_ACCEL_TLS&required=HW_CPU_X86_AVX,!CUSTOM_FOO&resources=VCPU:1,DISK_GB:1
status: 200
response_json_paths:
$.allocation_requests.`len`: 2
$.allocation_requests..allocations["$ENVIRON['CN_MIDDLE']"].resources[VCPU]: [1, 1]
$.allocation_requests..allocations["$ENVIRON['SHR_DISK_1']"].resources[DISK_GB]: 1
$.allocation_requests..allocations["$ENVIRON['SHR_DISK_2']"].resources[DISK_GB]: 1
- name: get candidates with multiple OR queries
# The left tree has neither MMX nor TLS, so it is out. The middle tree has
# TLS and SSD via shr_disk_1 so that is a match. The right tree has MMX and
# SSD on the root so that is a match, but it can also get DISK from shr_disk_2
# even if it is not SSD (the SSD trait and the DISK_GB resource are not tied
# together in any way in placement)
GET: /allocation_candidates?required=in:HW_CPU_X86_MMX,HW_NIC_ACCEL_TLS&required=in:CUSTOM_DISK_SSD,CUSTOM_FOO&resources=VCPU:1,DISK_GB:1
status: 200
response_json_paths:
$.allocation_requests.`len`: 3
$.allocation_requests..allocations["$ENVIRON['CN_MIDDLE']"].resources[VCPU]: 1
$.allocation_requests..allocations["$ENVIRON['SHR_DISK_1']"].resources[DISK_GB]: 1
$.allocation_requests..allocations["$ENVIRON['CN_RIGHT']"].resources[VCPU]: [1, 1]
$.allocation_requests..allocations["$ENVIRON['CN_RIGHT']"].resources[DISK_GB]: 1
$.allocation_requests..allocations["$ENVIRON['SHR_DISK_2']"].resources[DISK_GB]: 1

@ -41,13 +41,13 @@ tests:
response_json_paths:
$.errors[0].title: Not Acceptable
- name: latest microversion is 1.38
- name: latest microversion is 1.39
GET: /
request_headers:
openstack-api-version: placement latest
response_headers:
vary: /openstack-api-version/
openstack-api-version: placement 1.38
openstack-api-version: placement 1.39
- name: other accept header bad version
GET: /

@ -27,3 +27,21 @@ tests:
status: 200
response_json_paths:
$.resource_providers.`len`: 1
- name: list providers with OR, AND, and NOT trait queries
# DXVA or TLS would allow all the RPs, AVX filters that down to the left and
# the middle, but FOO forbids the left so the middle remains
GET: /resource_providers?required=in:HW_GPU_API_DXVA,HW_NIC_ACCEL_TLS&required=HW_CPU_X86_AVX,!CUSTOM_FOO
status: 200
response_json_paths:
$.resource_providers.`len`: 1
$.resource_providers[0].name: cn_middle
- name: list providers with multiple OR queries
# MMX or TLS matches middle and right, SSD or FOO matches left, right and
# shr_disk_1. So only right is a total match.
GET: /resource_providers?required=in:HW_CPU_X86_MMX,HW_NIC_ACCEL_TLS&required=in:CUSTOM_DISK_SSD,CUSTOM_FOO
status: 200
response_json_paths:
$.resource_providers.`len`: 1
$.resource_providers[0].name: cn_right

@ -614,10 +614,6 @@ class TestNormalizeTraitsQsParams(testtools.TestCase):
str(ex),
)
# TODO(gibi): remove the mock when microversion 1.39 is fully added
@mock.patch(
'placement.microversion.max_version_string',
new=mock.Mock(return_value='1.39'))
def test_allow_any_traits_1_39(self):
req = self._get_req('required=in:FOO,BAZ', (1, 39))
@ -626,10 +622,6 @@ class TestNormalizeTraitsQsParams(testtools.TestCase):
self.assertEqual([{'FOO', 'BAZ'}], required)
self.assertEqual(set(), forbidden)
# TODO(gibi): remove the mock when microversion 1.39 is fully added
@mock.patch(
'placement.microversion.max_version_string',
new=mock.Mock(return_value='1.39'))
def test_repeated_param_1_39(self):
req = self._get_req(
'required=in:T1,T2'
@ -1168,10 +1160,6 @@ class TestParseQsRequestGroups(testtools.TestCase):
"microversion 1.39.",
str(exc))
# TODO(gibi): remove the mock when microversion 1.39 is fully added
@mock.patch(
'placement.microversion.max_version_string',
new=mock.Mock(return_value='1.39'))
def test_any_traits_1_39(self):
qs = 'resources1=RABBIT:1&required1=in:WHITE,BLACK'
expected = [
@ -1189,10 +1177,6 @@ class TestParseQsRequestGroups(testtools.TestCase):
self.assertRequestGroupsEqual(
expected, self.do_parse(qs, version=(1, 39)))
# TODO(gibi): remove the mock when microversion 1.39 is fully added
@mock.patch(
'placement.microversion.max_version_string',
new=mock.Mock(return_value='1.39'))
def test_any_traits_repeated(self):
qs = 'resources1=CUSTOM_MAGIC:1&required1=in:T1,T2&required1=T3,!T4'
expected = [
@ -1214,10 +1198,6 @@ class TestParseQsRequestGroups(testtools.TestCase):
self.assertRequestGroupsEqual(
expected, self.do_parse(qs, version=(1, 39)))
# TODO(gibi): remove the mock when microversion 1.39 is fully added
@mock.patch(
'placement.microversion.max_version_string',
new=mock.Mock(return_value='1.39'))
def test_any_traits_multiple_groups(self):
qs = ('resources=RABBIT:1&required=in:WHITE,BLACK&'
'resources2=CAT:2&required2=in:SILVER,RED&required2=!SPOTTED')
@ -1250,10 +1230,6 @@ class TestParseQsRequestGroups(testtools.TestCase):
self.assertRequestGroupsEqual(
expected, self.do_parse(qs, version=(1, 39)))
# TODO(gibi): remove the mock when microversion 1.39 is fully added
@mock.patch(
'placement.microversion.max_version_string',
new=mock.Mock(return_value='1.39'))
def test_any_traits_forbidden_conflict(self):
# going against one part of an OR expression is not a conflict as the
# other parts still can match and fulfill the query
@ -1278,10 +1254,6 @@ class TestParseQsRequestGroups(testtools.TestCase):
webob.exc.HTTPBadRequest, self.do_parse, qs, version=(1, 39))
self.assertEqual(expected_message, str(exc))
# TODO(gibi): remove the mock when microversion 1.39 is fully added
@mock.patch(
'placement.microversion.max_version_string',
new=mock.Mock(return_value='1.39'))
def test_stringification(self):
agg1 = uuidsentinel.agg1
agg2 = uuidsentinel.agg2

@ -0,0 +1,12 @@
---
features:
- |
Microversion 1.39 adds support for the ``in:`` syntax in the ``required``
query parameter in the ``GET /resource_providers`` API as well as to the
``required`` and ``requiredN`` query params of the
``GET /allocation_candidates`` API. It also adds support for repeating the
``required`` and ``requiredN`` parameters in the respective APIs. So::

  required=in:T3,T4&required=T1,!T2

is supported and it means T1 and not T2 and (T3 or T4).
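
As a rough usage sketch, the new syntax can be exercised against a running
placement service with any HTTP client; the endpoint URL and token below are
placeholders::

    import requests

    PLACEMENT_URL = 'http://placement.example.test'  # placeholder endpoint
    TOKEN = 'keystone-token'                         # placeholder token

    resp = requests.get(
        PLACEMENT_URL + '/resource_providers',
        headers={
            'X-Auth-Token': TOKEN,
            # opt in to the new microversion
            'OpenStack-API-Version': 'placement 1.39',
        },
        # repeated 'required' values are ANDed together by placement
        params=[('required', 'in:T3,T4'), ('required', 'T1,!T2')],
    )
    resp.raise_for_status()
    print([rp['name'] for rp in resp.json()['resource_providers']])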