support pci numa affinity policies in flavor and image
This addresses bug #1795920 by adding support for defining a PCI NUMA affinity policy via the flavor extra specs or image metadata properties, enabling the policies to be applied to neutron SR-IOV ports, including hardware-offloaded OVS.

Closes-Bug: #1795920
Related-Bug: #1805891
Implements: blueprint vm-scoped-sriov-numa-affinity
Change-Id: Ibd62b24c2bd2dd208d0f804378d4e4f2bbfdaed6
This commit is contained in:
parent
3f9411071d
commit
8c72241726
@@ -4,5 +4,5 @@
     "hw_architecture": "x86_64"
   },
   "nova_object.name": "ImageMetaPropsPayload",
-  "nova_object.version": "1.1"
+  "nova_object.version": "1.2"
 }
@@ -171,9 +171,10 @@ found on the compute nodes. For example:

 .. code-block:: ini

    [pci]
-   alias = { "vendor_id":"8086", "product_id":"154d", "device_type":"type-PF", "name":"a1" }
+   alias = { "vendor_id":"8086", "product_id":"154d", "device_type":"type-PF", "name":"a1", "numa_policy":"preferred" }

 Refer to :oslo.config:option:`pci.alias` for syntax information.
+Refer to :ref:`Affinity <pci_numa_affinity_policy>` for ``numa_policy`` information.

 Once configured, restart the :program:`nova-api` service.
@@ -480,6 +480,42 @@ CPU pinning policy

 The ``hw:cpu_thread_policy`` option is only valid if ``hw:cpu_policy`` is
 set to ``dedicated``.

+.. _pci_numa_affinity_policy:
+
+PCI NUMA Affinity Policy
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+For the libvirt driver, you can specify the NUMA affinity policy for
+PCI passthrough devices and neutron SR-IOV interfaces via the
+``hw:pci_numa_affinity_policy`` flavor extra spec or
+``hw_pci_numa_affinity_policy`` image property. The allowed values are
+``required``, ``preferred`` or ``legacy`` (the default).
+
+**required**
+    This value means that nova will boot instances with PCI devices
+    **only** if at least one of the NUMA nodes of the instance is
+    associated with these PCI devices. If the NUMA node info for some
+    PCI devices cannot be determined, those PCI devices will not be
+    consumable by the instance. This provides maximum performance.
+
+**preferred**
+    This value means that ``nova-scheduler`` will choose a compute host
+    with minimal consideration for the NUMA affinity of PCI devices.
+    ``nova-compute`` will attempt a best-effort selection of PCI devices
+    based on NUMA affinity; however, if this is not possible,
+    ``nova-compute`` will fall back to scheduling on a NUMA node that is
+    not associated with the PCI device.
+
+**legacy**
+    This is the default value and it describes the current nova behavior.
+    Usually we have information about the association of PCI devices with
+    NUMA nodes. However, some PCI devices do not provide such information.
+    The ``legacy`` value means that nova will boot instances with PCI
+    devices if either:
+
+    * The PCI device is associated with at least one NUMA node on which
+      the instance will be booted
+
+    * There is no information about PCI-NUMA affinity available
+
 .. _extra-specs-numa-topology:

 NUMA topology
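The flavor extra spec and image property added by this doc section are set with the standard OpenStack client. The flavor and image names below are placeholders (hypothetical, not from this change); the property keys are the documented ones. This requires a deployed cloud and credentials, so it is shown as a usage fragment only:

```shell
# Hypothetical flavor/image names; the property keys are the ones
# documented above. Requires an OpenStack cloud and credentials.
openstack flavor set sriov.large --property hw:pci_numa_affinity_policy=required
openstack image set sriov-guest --property hw_pci_numa_affinity_policy=preferred
```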
@@ -915,6 +915,9 @@ class API(base.Base):

         system_metadata = {}

+        pci_numa_affinity_policy = hardware.get_pci_numa_policy_constraint(
+            instance_type, image_meta)
+
         # PCI requests come from two sources: instance flavor and
         # requested_networks. The first call below returns an
         # InstancePCIRequests object which is a list of InstancePCIRequest
@@ -922,9 +925,10 @@ class API(base.Base):
         # object for each SR-IOV port, and append it to the list in the
         # InstancePCIRequests object
         pci_request_info = pci_request.get_pci_requests_from_flavor(
-            instance_type)
+            instance_type, affinity_policy=pci_numa_affinity_policy)
         result = self.network_api.create_resource_requests(
-            context, requested_networks, pci_request_info)
+            context, requested_networks, pci_request_info,
+            affinity_policy=pci_numa_affinity_policy)
         network_metadata, port_resource_requests = result

         # Creating servers with ports that have resource requests, like QoS
@@ -1861,6 +1861,12 @@ class ImageNUMATopologyRebuildConflict(Invalid):
         "The image provided is invalid for this instance.")


+class ImagePCINUMAPolicyForbidden(Forbidden):
+    msg_fmt = _("Image property 'hw_pci_numa_affinity_policy' is not "
+                "permitted to override the 'hw:pci_numa_affinity_policy' "
+                "flavor extra spec.")
+
+
 class ImageNUMATopologyAsymmetric(Invalid):
     msg_fmt = _("Instance CPUs and/or memory cannot be evenly distributed "
                 "across instance NUMA nodes. Explicit assignment of CPUs "
@@ -2254,6 +2260,10 @@ class InvalidNetworkNUMAAffinity(Invalid):
     msg_fmt = _("Invalid NUMA network affinity configured: %(reason)s")


+class InvalidPCINUMAAffinity(Invalid):
+    msg_fmt = _("Invalid PCI NUMA affinity configured: %(policy)s")
+
+
 class PowerVMAPIFailed(NovaException):
     msg_fmt = _("PowerVM API failed to complete for instance=%(inst_name)s. "
                 "%(reason)s")
@@ -355,8 +355,9 @@ class API(base_api.NetworkAPI):
         # the requested number in this case.
         return num_instances

-    def create_resource_requests(self, context, requested_networks,
-                                 pci_requests=None):
+    def create_resource_requests(
+            self, context, requested_networks, pci_requests=None,
+            affinity_policy=None):
         """Retrieve all information for the networks passed at the time of
         creating the server.

@@ -366,6 +367,8 @@ class API(base_api.NetworkAPI):
         :param pci_requests: The list of PCI requests to which additional PCI
             requests created here will be added.
         :type pci_requests: nova.objects.InstancePCIRequests
+        :param affinity_policy: requested pci numa affinity policy
+        :type affinity_policy: nova.objects.fields.PCINUMAAffinityPolicy

         :returns: A tuple with an instance of ``objects.NetworkMetadata`` for
             use by the scheduler or None and a list of RequestGroup
@@ -1971,8 +1971,9 @@ class API(base_api.NetworkAPI):
         resource_request = port.get(constants.RESOURCE_REQUEST, None)
         return vnic_type, trusted, network_id, resource_request

-    def create_resource_requests(self, context, requested_networks,
-                                 pci_requests=None):
+    def create_resource_requests(
+            self, context, requested_networks, pci_requests=None,
+            affinity_policy=None):
         """Retrieve all information for the networks passed at the time of
         creating the server.

@@ -1982,6 +1983,8 @@ class API(base_api.NetworkAPI):
         :param pci_requests: The list of PCI requests to which additional PCI
             requests created here will be added.
         :type pci_requests: nova.objects.InstancePCIRequests
+        :param affinity_policy: requested pci numa affinity policy
+        :type affinity_policy: nova.objects.fields.PCINUMAAffinityPolicy

         :returns: A tuple with an instance of ``objects.NetworkMetadata`` for
             use by the scheduler or None and a list of RequestGroup
@@ -2062,6 +2065,8 @@ class API(base_api.NetworkAPI):
                     spec=[spec],
                     request_id=uuidutils.generate_uuid(),
                     requester_id=requester_id)
+                if affinity_policy:
+                    request.numa_policy = affinity_policy
                 pci_requests.requests.append(request)
                 pci_request_id = request.request_id
@@ -106,7 +106,8 @@ class ImageMetaPayload(base.NotificationPayloadBase):
 class ImageMetaPropsPayload(base.NotificationPayloadBase):
     # Version 1.0: Initial version
     # Version 1.1: Added 'gop', 'virtio' and 'none' to hw_video_model field
-    VERSION = '1.1'
+    # Version 1.2: Added hw_pci_numa_affinity_policy field
+    VERSION = '1.2'

     SCHEMA = {
         'hw_architecture': ('image_meta_props', 'hw_architecture'),
@@ -133,6 +134,8 @@ class ImageMetaPropsPayload(base.NotificationPayloadBase):
         'hw_numa_nodes': ('image_meta_props', 'hw_numa_nodes'),
         'hw_numa_cpus': ('image_meta_props', 'hw_numa_cpus'),
         'hw_numa_mem': ('image_meta_props', 'hw_numa_mem'),
+        'hw_pci_numa_affinity_policy': ('image_meta_props',
+                                        'hw_pci_numa_affinity_policy'),
         'hw_pointer_model': ('image_meta_props', 'hw_pointer_model'),
         'hw_qemu_guest_agent': ('image_meta_props', 'hw_qemu_guest_agent'),
         'hw_rescue_bus': ('image_meta_props', 'hw_rescue_bus'),
@@ -210,6 +213,7 @@ class ImageMetaPropsPayload(base.NotificationPayloadBase):
         'hw_numa_nodes': fields.IntegerField(),
         'hw_numa_cpus': fields.ListOfSetsOfIntegersField(),
         'hw_numa_mem': fields.ListOfIntegersField(),
+        'hw_pci_numa_affinity_policy': fields.PCINUMAAffinityPolicyField(),
         'hw_pointer_model': fields.PointerModelField(),
         'hw_qemu_guest_agent': fields.FlexibleBooleanField(),
         'hw_rescue_bus': fields.DiskBusField(),
@@ -174,12 +174,15 @@ class ImageMetaProps(base.NovaObject):
     # Version 1.22: Added 'gop', 'virtio' and 'none' to hw_video_model field
     # Version 1.23: Added 'hw_pmu' field
     # Version 1.24: Added 'hw_mem_encryption' field
-    VERSION = '1.24'
+    # Version 1.25: Added 'hw_pci_numa_affinity_policy' field
+    VERSION = '1.25'

     def obj_make_compatible(self, primitive, target_version):
         super(ImageMetaProps, self).obj_make_compatible(primitive,
                                                         target_version)
         target_version = versionutils.convert_version_to_tuple(target_version)
+        if target_version < (1, 25):
+            primitive.pop('hw_pci_numa_affinity_policy', None)
         if target_version < (1, 24):
             primitive.pop('hw_mem_encryption', None)
         if target_version < (1, 23):
@@ -334,6 +337,9 @@ class ImageMetaProps(base.NovaObject):
     # list value indicates the memory size of that node.
     'hw_numa_mem': fields.ListOfIntegersField(),

+    # Enum field to specify pci device NUMA affinity.
+    'hw_pci_numa_affinity_policy': fields.PCINUMAAffinityPolicyField(),
+
     # Generic property to specify the pointer model type.
     'hw_pointer_model': fields.PointerModelField(),
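The backlevel pattern in ``obj_make_compatible`` above can be sketched standalone: fields introduced in a later object version are popped from the serialized primitive when targeting an older version. The helper below is a hypothetical illustration using a plain dict, not Nova's versioned-object machinery:

```python
# Hypothetical sketch of the version-downgrade pattern used by
# obj_make_compatible: drop any field newer than the target version.
FIELD_ADDED_IN = {
    'hw_mem_encryption': (1, 24),
    'hw_pci_numa_affinity_policy': (1, 25),
}

def make_compatible(primitive, target_version):
    for field, added_in in FIELD_ADDED_IN.items():
        # Version tuples compare lexicographically: (1, 24) < (1, 25).
        if target_version < added_in:
            primitive.pop(field, None)
    return primitive
```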
@@ -148,7 +148,7 @@ def _get_alias_from_config():
     return aliases


-def _translate_alias_to_requests(alias_spec):
+def _translate_alias_to_requests(alias_spec, affinity_policy=None):
     """Generate complete pci requests from pci aliases in extra_spec."""
     pci_aliases = _get_alias_from_config()

@@ -160,6 +160,7 @@ def _translate_alias_to_requests(alias_spec):

         count = int(count)
         numa_policy, spec = pci_aliases[name]
+        policy = affinity_policy or numa_policy

         # NOTE(gibi): InstancePCIRequest has a requester_id field that could
         # be filled with the flavor.flavorid but currently there is no special
@@ -169,7 +170,7 @@ def _translate_alias_to_requests(alias_spec):
             count=count,
             spec=spec,
             alias_name=name,
-            numa_policy=numa_policy))
+            numa_policy=policy))
     return pci_requests


@@ -227,7 +228,7 @@ def get_instance_pci_request_from_vif(context, instance, vif):
         node_id=cn_id)


-def get_pci_requests_from_flavor(flavor):
+def get_pci_requests_from_flavor(flavor, affinity_policy=None):
     """Validate and return PCI requests.

     The ``pci_passthrough:alias`` extra spec describes the flavor's PCI
@@ -265,6 +266,7 @@ def get_pci_requests_from_flavor(flavor):
     }]

     :param flavor: The flavor to be checked
+    :param affinity_policy: pci numa affinity policy
     :returns: A list of PCI requests
     :rtype: nova.objects.InstancePCIRequests
     :raises: exception.PciRequestAliasNotDefined if an invalid PCI alias is
@@ -276,6 +278,7 @@ def get_pci_requests_from_flavor(flavor):
     if ('extra_specs' in flavor and
             'pci_passthrough:alias' in flavor['extra_specs']):
         pci_requests = _translate_alias_to_requests(
-            flavor['extra_specs']['pci_passthrough:alias'])
+            flavor['extra_specs']['pci_passthrough:alias'],
+            affinity_policy=affinity_policy)

     return objects.InstancePCIRequests(requests=pci_requests)
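The ``alias_spec`` string consumed by ``_translate_alias_to_requests`` is a comma-separated list of ``name:count`` pairs (e.g. ``a1:1``). A minimal standalone parser for that format might look like the following; this is a hypothetical sketch, not Nova's implementation, which additionally validates each name against the ``[pci] alias`` config:

```python
# Hypothetical sketch: parse an alias spec such as "QuicAssist:3, IntelNIC:1"
# into (name, count) pairs. Nova's real code also checks each alias name
# against the configured [pci] alias entries and carries a numa_policy.
def parse_alias_spec(alias_spec):
    requests = []
    for item in alias_spec.split(','):
        name, _, count = item.partition(':')
        # A missing count is treated as 1 in this sketch.
        requests.append((name.strip(), int(count or 1)))
    return requests
```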
@@ -13,8 +13,10 @@
 # License for the specific language governing permissions and limitations
 # under the License.

+import ddt
 import fixtures
 import mock

 from oslo_log import log as logging
 from oslo_serialization import jsonutils
@@ -397,3 +399,121 @@ class PCIServersWithRequiredNUMATest(PCIServersWithPreferredNUMATest):
         }
     )]
     end_status = 'ERROR'
+
+
+@ddt.ddt
+class PCIServersWithSRIOVAffinityPoliciesTest(_PCIServersTestBase):
+
+    # The order of the filters is required to make the assertion that the
+    # PciPassthroughFilter is invoked in _run_build_test pass in the
+    # NUMA affinity tests; otherwise the NUMATopologyFilter will eliminate
+    # all hosts before we execute the PciPassthroughFilter.
+    ADDITIONAL_FILTERS = ['PciPassthroughFilter', 'NUMATopologyFilter']
+    ALIAS_NAME = 'a1'
+    PCI_PASSTHROUGH_WHITELIST = [jsonutils.dumps(
+        {
+            'vendor_id': fakelibvirt.PCI_VEND_ID,
+            'product_id': fakelibvirt.PCI_PROD_ID,
+        }
+    )]
+    # We set the numa_affinity policy to required to ensure strict affinity
+    # between PCI devices and the guest CPU and memory will be enforced.
+    PCI_ALIAS = [jsonutils.dumps(
+        {
+            'vendor_id': fakelibvirt.PCI_VEND_ID,
+            'product_id': fakelibvirt.PCI_PROD_ID,
+            'name': ALIAS_NAME,
+            'device_type': fields.PciDeviceType.STANDARD,
+            'numa_policy': fields.PCINUMAAffinityPolicy.REQUIRED,
+        }
+    )]
+
+    # NOTE(sean-k-mooney): I could just apply the ddt decorators
+    # to this function for the most part but I have chosen to
+    # keep one top-level function per policy to make documenting
+    # the test cases simpler.
+    def _test_policy(self, pci_numa_node, status, policy):
+        host_info = fakelibvirt.HostInfo(cpu_nodes=2, cpu_sockets=1,
+                                         cpu_cores=2, cpu_threads=2,
+                                         kB_mem=15740000)
+        pci_info = fakelibvirt.HostPCIDevicesInfo(
+            num_pci=1, numa_node=pci_numa_node)
+        fake_connection = self._get_connection(host_info, pci_info)
+        self.mock_conn.return_value = fake_connection
+
+        # only allow cpus on numa node 1 to be used for pinning
+        self.flags(cpu_dedicated_set='4-7', group='compute')
+
+        # Request CPU pinning to create a NUMA topology and allow the test
+        # to force which NUMA node the VM would have to be pinned to.
+        extra_spec = {
+            'hw:cpu_policy': 'dedicated',
+            'pci_passthrough:alias': '%s:1' % self.ALIAS_NAME,
+            'hw:pci_numa_affinity_policy': policy
+        }
+        flavor_id = self._create_flavor(extra_spec=extra_spec)
+        self._run_build_test(flavor_id, end_status=status)
+
+    @ddt.unpack  # unpacks each sub-tuple e.g. *(pci_numa_node, status)
+    # the preferred policy should always pass regardless of numa affinity
+    @ddt.data((-1, 'ACTIVE'), (0, 'ACTIVE'), (1, 'ACTIVE'))
+    def test_create_server_with_sriov_numa_affinity_policy_preferred(
+            self, pci_numa_node, status):
+        """Validate behavior of 'preferred' PCI NUMA affinity policy.
+
+        This test ensures that it *is* possible to allocate CPU and memory
+        resources from one NUMA node and a PCI device from another *if*
+        the SR-IOV NUMA affinity policy is set to preferred.
+        """
+        self._test_policy(pci_numa_node, status, 'preferred')
+
+    @ddt.unpack  # unpacks each sub-tuple e.g. *(pci_numa_node, status)
+    # The legacy policy allows a PCI device to be used if it has NUMA
+    # affinity or if no NUMA info is available, so we set the NUMA
+    # node for this device to -1, which is the sentinel value used by the
+    # Linux kernel for a device with no NUMA affinity.
+    @ddt.data((-1, 'ACTIVE'), (0, 'ERROR'), (1, 'ACTIVE'))
+    def test_create_server_with_sriov_numa_affinity_policy_legacy(
+            self, pci_numa_node, status):
+        """Validate behavior of 'legacy' PCI NUMA affinity policy.
+
+        This test ensures that it *is* possible to allocate CPU and memory
+        resources from one NUMA node and a PCI device from another *if*
+        the SR-IOV NUMA affinity policy is set to legacy and the device
+        does not report NUMA information.
+        """
+        self._test_policy(pci_numa_node, status, 'legacy')
+
+    @ddt.unpack  # unpacks each sub-tuple e.g. *(pci_numa_node, status)
+    # The required policy requires a PCI device to both report a NUMA node
+    # and for the guest CPUs and RAM to be affinitized to the same
+    # NUMA node, so we create 1 PCI device in the first NUMA node.
+    @ddt.data((-1, 'ERROR'), (0, 'ERROR'), (1, 'ACTIVE'))
+    def test_create_server_with_sriov_numa_affinity_policy_required(
+            self, pci_numa_node, status):
+        """Validate behavior of 'required' PCI NUMA affinity policy.
+
+        This test ensures that it *is not* possible to allocate CPU and
+        memory resources from one NUMA node and a PCI device from another
+        *if* the SR-IOV NUMA affinity policy is set to required and the
+        device reports NUMA information.
+        """
+
+        # We set the numa_affinity policy to preferred to allow the PCI
+        # device to be selected from any NUMA node so we can prove the
+        # flavor overrides the alias.
+        alias = [jsonutils.dumps(
+            {
+                'vendor_id': fakelibvirt.PCI_VEND_ID,
+                'product_id': fakelibvirt.PCI_PROD_ID,
+                'name': self.ALIAS_NAME,
+                'device_type': fields.PciDeviceType.STANDARD,
+                'numa_policy': fields.PCINUMAAffinityPolicy.PREFERRED,
+            }
+        )]
+
+        self.flags(passthrough_whitelist=self.PCI_PASSTHROUGH_WHITELIST,
+                   alias=alias,
+                   group='pci')
+
+        self._test_policy(pci_numa_node, status, 'required')
@@ -1262,7 +1262,7 @@ class TestInstanceNotificationSample(
             'nova_object.data': {},
             'nova_object.name': 'ImageMetaPropsPayload',
             'nova_object.namespace': 'nova',
-            'nova_object.version': u'1.1'},
+            'nova_object.version': u'1.2'},
         'image.size': 58145823,
         'image.tags': [],
         'scheduler_hints': {'_nova_check_type': ['rebuild']},
@@ -1359,7 +1359,7 @@ class TestInstanceNotificationSample(
             'nova_object.data': {},
             'nova_object.name': 'ImageMetaPropsPayload',
             'nova_object.namespace': 'nova',
-            'nova_object.version': u'1.1'},
+            'nova_object.version': u'1.2'},
         'image.size': 58145823,
         'image.tags': [],
         'scheduler_hints': {'_nova_check_type': ['rebuild']},
@@ -202,8 +202,9 @@ def stub_out_nw_api(test, cls=None, private=None, publics=None):
     def validate_networks(self, context, networks, max_count):
         return max_count

-    def create_resource_requests(self, context, requested_networks,
-                                 pci_requests):
+    def create_resource_requests(
+            self, context, requested_networks,
+            pci_requests=None, affinity_policy=None):
         return None, []

     if cls is None:
@@ -383,7 +383,7 @@ notification_object_data = {
     'FlavorNotification': '1.0-a73147b93b520ff0061865849d3dfa56',
     'FlavorPayload': '1.4-2e7011b8b4e59167fe8b7a0a81f0d452',
     'ImageMetaPayload': '1.0-0e65beeacb3393beed564a57bc2bc989',
-    'ImageMetaPropsPayload': '1.1-789c420945f2cae6ac64ca8dffbcb1b8',
+    'ImageMetaPropsPayload': '1.2-f237f65e1f14f05a73481dc4192df3ba',
     'InstanceActionNotification': '1.0-a73147b93b520ff0061865849d3dfa56',
     'InstanceActionPayload': '1.8-4fa3da9cbf0761f1f700ae578f36dc2f',
     'InstanceActionRebuildNotification':
@@ -1069,7 +1069,7 @@ object_data = {
    'HyperVLiveMigrateData': '1.4-e265780e6acfa631476c8170e8d6fce0',
    'IDEDeviceBus': '1.0-29d4c9f27ac44197f01b6ac1b7e16502',
    'ImageMeta': '1.8-642d1b2eb3e880a367f37d72dd76162d',
-   'ImageMetaProps': '1.24-f92fa09d54185499da98f5430524964e',
+   'ImageMetaProps': '1.25-66fc973af215eb5701ed4034bb6f0685',
    'Instance': '2.7-d187aec68cad2e4d8b8a03a68e4739ce',
    'InstanceAction': '1.2-9a5abc87fdd3af46f45731960651efb5',
    'InstanceActionEvent': '1.3-c749e1b3589e7117c81cb2aa6ac438d5',
@@ -23,6 +23,7 @@ from nova import context
 from nova import exception
 from nova.network import model
 from nova import objects
+from nova.objects import fields
 from nova.pci import request
 from nova import test
 from nova.tests.unit.api.openstack import fakes
@@ -189,6 +190,23 @@ class PciRequestTestCase(test.NoDBTestCase):
         self.assertRaises(exception.PciInvalidAlias,
                           request._get_alias_from_config)

+    def test_valid_numa_policy(self):
+        for policy in fields.PCINUMAAffinityPolicy.ALL:
+            self.flags(alias=[
+                """{
+                    "name": "xxx",
+                    "capability_type": "pci",
+                    "product_id": "1111",
+                    "vendor_id": "8086",
+                    "device_type": "type-PCI",
+                    "numa_policy": "%s"
+                }""" % policy],
+                group='pci')
+            aliases = request._get_alias_from_config()
+            self.assertIsNotNone(aliases)
+            self.assertIn("xxx", aliases)
+            self.assertEqual(policy, aliases["xxx"][0])
+
     def test_conflicting_device_type(self):
         """Check behavior when device_type conflicts occur."""
         self.flags(alias=[
@@ -268,6 +286,37 @@ class PciRequestTestCase(test.NoDBTestCase):
                           request._translate_alias_to_requests,
                           "QuicAssistX : 3")

+    def test_alias_2_request_affinity_policy(self):
+        # _fake_alias1 requests the legacy policy and _fake_alias3
+        # has no numa_policy set so it will default to legacy.
+        self.flags(alias=[_fake_alias1, _fake_alias3], group='pci')
+        # To test that the flavor/image policy takes precedence,
+        # we use the preferred policy.
+        policy = fields.PCINUMAAffinityPolicy.PREFERRED
+        expect_request = [
+            {'count': 3,
+             'requester_id': None,
+             'spec': [{'vendor_id': '8086', 'product_id': '4443',
+                       'dev_type': 'type-PCI',
+                       'capability_type': 'pci'}],
+             'alias_name': 'QuicAssist',
+             'numa_policy': policy
+             },
+
+            {'count': 1,
+             'requester_id': None,
+             'spec': [{'vendor_id': '8086', 'product_id': '1111',
+                       'dev_type': "type-PF",
+                       'capability_type': 'pci'}],
+             'alias_name': 'IntelNIC',
+             'numa_policy': policy
+             }, ]
+
+        requests = request._translate_alias_to_requests(
+            "QuicAssist : 3, IntelNIC: 1", affinity_policy=policy)
+        self.assertEqual(set([p['count'] for p in requests]), set([1, 3]))
+        self._verify_result(expect_request, requests)
+
     @mock.patch.object(objects.compute_node.ComputeNode,
                        'get_by_host_and_nodename')
     def test_get_instance_pci_request_from_vif_invalid(
@@ -410,3 +459,14 @@ class PciRequestTestCase(test.NoDBTestCase):
         flavor = {}
         requests = request.get_pci_requests_from_flavor(flavor)
         self.assertEqual([], requests.requests)
+
+    @mock.patch.object(
+        request, "_translate_alias_to_requests", return_value=[])
+    def test_get_pci_requests_from_flavor_affinity_policy(
+            self, mock_translate):
+        self.flags(alias=[_fake_alias1, _fake_alias3], group='pci')
+        flavor = {'extra_specs': {"pci_passthrough:alias":
+                                  "QuicAssist:3, IntelNIC: 1"}}
+        policy = fields.PCINUMAAffinityPolicy.PREFERRED
+        request.get_pci_requests_from_flavor(flavor, affinity_policy=policy)
+        mock_translate.assert_called_with(mock.ANY, affinity_policy=policy)
@@ -305,6 +305,11 @@ class FakePCIDevice(object):
             'iommu_group': iommu_group,
             'numa_node': numa_node,
         }
+        # -1 is the sentinel set in /sys/bus/pci/devices/*/numa_node
+        # for no NUMA affinity. When numa_node is set to -1 on a device,
+        # libvirt omits the NUMA element, so we remove it.
+        if numa_node == -1:
+            self.pci_device = self.pci_device.replace("<numa node='-1'/>", "")

     def XMLDesc(self, flags):
         return self.pci_device
@@ -16,6 +16,7 @@ import collections
 import copy

 import mock
+import testtools

 from nova import exception
 from nova import objects
@@ -4184,3 +4185,73 @@ class MemEncryptionRequiredTestCase(test.NoDBTestCase):
                 "hw_mem_encryption property of image %s" %
                 (self.flavor_name, self.image_name)
             )
+
+
+class PCINUMAAffinityPolicyTest(test.NoDBTestCase):
+
+    def test_get_pci_numa_policy_flavor(self):
+        for policy in fields.PCINUMAAffinityPolicy.ALL:
+            extra_specs = {
+                "hw:pci_numa_affinity_policy": policy,
+            }
+            image_meta = objects.ImageMeta.from_dict({"properties": {}})
+            flavor = objects.Flavor(
+                vcpus=16, memory_mb=2048, extra_specs=extra_specs)
+            self.assertEqual(
+                policy, hw.get_pci_numa_policy_constraint(flavor, image_meta))
+
+    def test_get_pci_numa_policy_image(self):
+        for policy in fields.PCINUMAAffinityPolicy.ALL:
+            props = {
+                "hw_pci_numa_affinity_policy": policy,
+            }
+            image_meta = objects.ImageMeta.from_dict({"properties": props})
+            flavor = objects.Flavor(
+                vcpus=16, memory_mb=2048, extra_specs={})
+            self.assertEqual(
+                policy, hw.get_pci_numa_policy_constraint(flavor, image_meta))
+
+    def test_get_pci_numa_policy_no_conflict(self):
+        for policy in fields.PCINUMAAffinityPolicy.ALL:
+            extra_specs = {
+                "hw:pci_numa_affinity_policy": policy,
+            }
+            flavor = objects.Flavor(
+                vcpus=16, memory_mb=2048, extra_specs=extra_specs)
+            props = {
+                "hw_pci_numa_affinity_policy": policy,
+            }
+            image_meta = objects.ImageMeta.from_dict({"properties": props})
+            self.assertEqual(
+                policy, hw.get_pci_numa_policy_constraint(flavor, image_meta))
+
+    def test_get_pci_numa_policy_conflict(self):
+        extra_specs = {
+            "hw:pci_numa_affinity_policy":
+                fields.PCINUMAAffinityPolicy.LEGACY,
+        }
+        flavor = objects.Flavor(
+            vcpus=16, memory_mb=2048, extra_specs=extra_specs)
+        props = {
+            "hw_pci_numa_affinity_policy":
+                fields.PCINUMAAffinityPolicy.REQUIRED,
+        }
+        image_meta = objects.ImageMeta.from_dict({"properties": props})
+        self.assertRaises(
+            exception.ImagePCINUMAPolicyForbidden,
+            hw.get_pci_numa_policy_constraint, flavor, image_meta)
+
+    def test_get_pci_numa_policy_invalid(self):
+        extra_specs = {
+            "hw:pci_numa_affinity_policy": "fake",
+        }
+        flavor = objects.Flavor(
+            vcpus=16, memory_mb=2048, extra_specs=extra_specs)
+        image_meta = objects.ImageMeta.from_dict({"properties": {}})
+        self.assertRaises(
+            exception.InvalidPCINUMAAffinity,
+            hw.get_pci_numa_policy_constraint, flavor, image_meta)
+        with testtools.ExpectedException(ValueError):
+            image_meta.properties.hw_pci_numa_affinity_policy = "fake"
@@ -1715,6 +1715,28 @@ def get_emulator_thread_policy_constraint(flavor):
     return emu_threads_policy


+def get_pci_numa_policy_constraint(flavor, image_meta):
+    """Return the PCI NUMA affinity policy or None.
+
+    :param flavor: a flavor object to read extra specs from
+    :param image_meta: nova.objects.ImageMeta object instance
+    :raises: nova.exception.ImagePCINUMAPolicyForbidden
+    :raises: nova.exception.InvalidPCINUMAAffinity
+    """
+    flavor_policy, image_policy = _get_flavor_image_meta(
+        'pci_numa_affinity_policy', flavor, image_meta)
+
+    if flavor_policy and image_policy and flavor_policy != image_policy:
+        raise exception.ImagePCINUMAPolicyForbidden()
+
+    policy = flavor_policy or image_policy
+
+    if policy and policy not in fields.PCINUMAAffinityPolicy.ALL:
+        raise exception.InvalidPCINUMAAffinity(policy=policy)
+
+    return policy
+
+
 # TODO(sahid): Move numa related to hardware/numa.py
 def numa_get_constraints(flavor, image_meta):
     """Return topology related to input request.
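The precedence rules implemented by ``get_pci_numa_policy_constraint`` can be sketched standalone: when both flavor and image set a policy they must agree, an unset image value defers to the flavor (and vice versa), and unknown values are rejected. The helper below is a hypothetical illustration using plain strings and built-in exceptions rather than Nova's objects and exception classes:

```python
# Hypothetical standalone sketch of the policy-resolution rules above,
# using plain strings instead of Nova's Flavor/ImageMeta objects.
ALL_POLICIES = ('required', 'preferred', 'legacy')

def resolve_pci_numa_policy(flavor_policy, image_policy):
    # An image may not override a conflicting flavor policy.
    if flavor_policy and image_policy and flavor_policy != image_policy:
        raise PermissionError('image may not override flavor policy')
    policy = flavor_policy or image_policy
    # Reject unknown policy values.
    if policy and policy not in ALL_POLICIES:
        raise ValueError('invalid policy: %s' % policy)
    return policy
```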
@@ -0,0 +1,13 @@
+---
+features:
+  - |
+    Added support for instance-level PCI NUMA policies using the
+    ``hw:pci_numa_affinity_policy`` flavor extra spec and
+    ``hw_pci_numa_affinity_policy`` image metadata property.
+    These apply to both PCI passthrough and SR-IOV devices,
+    unlike host-level PCI NUMA policies configured via the
+    ``numa_policy`` key of the ``[pci] alias`` config option.
+    See the `VM Scoped SR-IOV NUMA Affinity`_ spec for more
+    info.
+
+    .. _`VM Scoped SR-IOV NUMA Affinity`: http://specs.openstack.org/openstack/nova-specs/specs/ussuri/approved/vm-scoped-sriov-numa-affinity.html