vCPU model selection

Rename the existing config attribute [libvirt]/cpu_model to
[libvirt]/cpu_models, which is an ordered list of CPU models the host
supports. The values in the list are case-insensitive.

Change the logic of the '_get_guest_cpu_model_config' method: if
cpu_mode is custom and cpu_models is set, it will parse the required
traits associated with the CPU flags from the flavor extra_specs and
select the most appropriate CPU model.

Add a new method 'get_cpu_model_names' to host.py. It returns a list
of the CPU models that the host CPU architecture supports.

Update the docs of hypervisor-kvm.

Change-Id: I06e1f7429c056c4ce8506b10359762e457dbb2a0
Implements: blueprint cpu-model-selection
This commit is contained in:
ya.wang 2019-07-11 13:25:53 +08:00
parent 7020196aaa
commit f80e5f989d
9 changed files with 561 additions and 57 deletions


@ -283,7 +283,7 @@ determine which models are supported by your local installation.
Two Compute configuration options in the :oslo.config:group:`libvirt` group
of ``nova.conf`` define which type of CPU model is exposed to the hypervisor
when using KVM: :oslo.config:option:`libvirt.cpu_mode` and
:oslo.config:option:`libvirt.cpu_model`.
:oslo.config:option:`libvirt.cpu_models`.
The :oslo.config:option:`libvirt.cpu_mode` option can take one of the following
values: ``none``, ``host-passthrough``, ``host-model``, and ``custom``.
@ -337,27 +337,54 @@ may even include the running kernel. Use this mode only if
Custom
------
If your ``nova.conf`` file contains ``cpu_mode=custom``, you can explicitly
specify one of the supported named models using the cpu_model configuration
option. For example, to configure the KVM guests to expose Nehalem CPUs, your
``nova.conf`` file should contain:
If :file:`nova.conf` contains :oslo.config:option:`libvirt.cpu_mode`\ =custom,
you can explicitly specify an ordered list of supported named models using
the :oslo.config:option:`libvirt.cpu_models` configuration option. The list
is expected to be ordered so that more common and less advanced CPU models
appear earlier.
An end user can specify required CPU features through traits. When specified,
the libvirt driver will select the first CPU model in the
:oslo.config:option:`libvirt.cpu_models` list that can provide the requested
feature traits. If no CPU feature traits are specified, the instance will
be configured with the first CPU model in the list.
For example, if specifying CPU features ``avx`` and ``avx2`` as follows:
.. code-block:: console
$ openstack flavor set FLAVOR_ID --property trait:HW_CPU_X86_AVX=required \
--property trait:HW_CPU_X86_AVX2=required
and :oslo.config:option:`libvirt.cpu_models` is configured like this:
.. code-block:: ini
[libvirt]
cpu_mode = custom
cpu_model = Nehalem
[libvirt]
cpu_mode = custom
cpu_models = Penryn,IvyBridge,Haswell,Broadwell,Skylake-Client
Then ``Haswell``, the first CPU model supporting both ``avx`` and ``avx2``,
will be chosen by libvirt.
In selecting the ``custom`` mode, along with a
:oslo.config:option:`libvirt.cpu_model` that matches the oldest of your compute
:oslo.config:option:`libvirt.cpu_models` that matches the oldest of your compute
node CPUs, you can ensure that live migration between compute nodes will always
be possible. However, you should ensure that the
:oslo.config:option:`libvirt.cpu_model` you select passes the correct CPU
:oslo.config:option:`libvirt.cpu_models` you select passes the correct CPU
feature flags to the guest.
If you need to further tweak your CPU feature flags in the ``custom``
mode, see `Set CPU feature flags`_.
.. note::
If :oslo.config:option:`libvirt.cpu_models` is configured,
the CPU models in the list need to be compatible with the host CPU. Also, if
:oslo.config:option:`libvirt.cpu_model_extra_flags` is configured, all flags
need to be compatible with the host CPU. If incompatible CPU models or flags
are specified, the nova service will raise an error and fail to start.
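The case-insensitive handling of configured model names can be illustrated with a small sketch. This mirrors the idea of mapping lower-cased names from the config to the canonical, case-sensitive names libvirt expects; the model list here is an example, not a real host's:

```python
# Sketch of case-insensitive model lookup: build a mapping from the
# lower-cased model name to libvirt's canonical spelling.
host_models = ["Penryn", "IvyBridge", "Haswell"]  # example host-supported list
canonical = {m.lower(): m for m in host_models}

def resolve_model(configured_name):
    """Return libvirt's canonical spelling, or None if unsupported."""
    return canonical.get(configured_name.lower())
```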
None (default for all libvirt-driven hypervisors other than KVM & QEMU)
-----------------------------------------------------------------------
@ -379,7 +406,7 @@ not enable the ``pcid`` feature flag --- but you do want to pass
[libvirt]
cpu_mode = custom
cpu_model = IvyBridge
cpu_models = IvyBridge
cpu_model_extra_flags = pcid
Nested guest support
@ -451,7 +478,7 @@ Custom
[libvirt]
cpu_mode = custom
cpu_model = IvyBridge
cpu_models = IvyBridge
cpu_model_extra_flags = vmx,pcid
Nested guest support limitations


@ -34,23 +34,24 @@ as follows.
three following ways, given that Nova supports three distinct CPU
modes:
a. ``[libvirt]/cpu_mode = host-model``
a. :oslo.config:option:`libvirt.cpu_mode`\ =host-model
When using ``host-model`` CPU mode, the ``md-clear`` CPU flag
will be passed through to the Nova guests automatically.
This mode is the default, when ``virt_type=kvm|qemu`` is
set in ``/etc/nova/nova-cpu.conf`` on compute nodes.
This mode is the default, when
:oslo.config:option:`libvirt.virt_type`\ =kvm|qemu is set in
``/etc/nova/nova-cpu.conf`` on compute nodes.
b. ``[libvirt]/cpu_mode = host-passthrough``
b. :oslo.config:option:`libvirt.cpu_mode`\ =host-passthrough
When using ``host-passthrough`` CPU mode, the ``md-clear`` CPU
flag will be passed through to the Nova guests automatically.
c. A specific custom CPU model — this can be enabled using the
Nova config attributes: ``[libvirt]/cpu_mode = custom`` plus a
particular named CPU model, e.g. ``[libvirt]/cpu_model =
IvyBridge``
c. Specific custom CPU models — this can be enabled using the
Nova config attributes :oslo.config:option:`libvirt.cpu_mode`\ =custom
plus particular named CPU models, e.g.
:oslo.config:option:`libvirt.cpu_models`\ =IvyBridge.
(The list of all valid named CPU models that are supported by
your host, QEMU, and libvirt can be found by running the
@ -59,11 +60,11 @@ as follows.
When using a custom CPU mode, you must *explicitly* enable the
CPU flag ``md-clear`` to the Nova instances, in addition to the
flags required for previous vulnerabilities, using the
``cpu_model_extra_flags``. E.g.::
:oslo.config:option:`libvirt.cpu_model_extra_flags`. E.g.::
[libvirt]
cpu_mode = custom
cpu_model = IvyBridge
cpu_models = IvyBridge
cpu_model_extra_flags = spec-ctrl,ssbd,md-clear
(3) Reboot the compute node for the fixes to take effect. (To minimize
@ -73,7 +74,7 @@ as follows.
Once the above steps have been taken on every vulnerable compute
node in the deployment, each running guest in the cluster must be
fully powered down, and cold-booted (i.e. an explicit stop followed
by a start), in order to activate the new CPU model. This can be done
by a start), in order to activate the new CPU models. This can be done
by the guest administrators at a time of their choosing.


@ -117,7 +117,7 @@ Related options:
* ``connection_uri``: depends on this
* ``disk_prefix``: depends on this
* ``cpu_mode``: depends on this
* ``cpu_model``: depends on this
* ``cpu_models``: depends on this
"""),
cfg.StrOpt('connection_uri',
default='',
@ -527,7 +527,7 @@ Related options:
choices=[
('host-model', 'Clone the host CPU feature flags'),
('host-passthrough', 'Use the host CPU model exactly'),
('custom', 'Use the CPU model in ``[libvirt]cpu_model``'),
('custom', 'Use the CPU model in ``[libvirt]cpu_models``'),
('none', "Don't set a specific CPU model. For instances with "
"``[libvirt] virt_type`` as KVM/QEMU, the default CPU model from "
"QEMU will be used, which provides a basic set of CPU features "
@ -541,13 +541,20 @@ will default to ``none``.
Related options:
* ``cpu_model``: This should be set ONLY when ``cpu_mode`` is set to
* ``cpu_models``: This should be set ONLY when ``cpu_mode`` is set to
``custom``. Otherwise, it would result in an error and the instance launch
will fail.
"""),
cfg.StrOpt('cpu_model',
help="""
Set the name of the libvirt CPU model the instance should use.
cfg.ListOpt('cpu_models',
deprecated_name='cpu_model',
default=[],
help="""
An ordered list of CPU models the host supports.
The list is expected to be ordered so that more common and less advanced CPU
models appear earlier. Here is an example:
``SandyBridge,IvyBridge,Haswell,Broadwell``, where each successive CPU model's
feature set is richer than the previous one's.
Possible values:
@ -558,9 +565,13 @@ Possible values:
Related options:
* ``cpu_mode``: This should be set to ``custom`` ONLY when you want to
configure (via ``cpu_model``) a specific named CPU model. Otherwise, it
configure (via ``cpu_models``) a specific named CPU model. Otherwise, it
would result in an error and the instance launch will fail.
* ``virt_type``: Only the virtualization types ``kvm`` and ``qemu`` use this.
.. note::
Be careful to only specify models which can be fully supported in
hardware.
"""),
cfg.ListOpt(
'cpu_model_extra_flags',
@ -578,7 +589,7 @@ to address the guest performance degradation as a result of applying the
[libvirt]
cpu_mode = custom
cpu_model = IvyBridge
cpu_models = IvyBridge
cpu_model_extra_flags = pcid
To specify multiple CPU flags (e.g. the Intel ``VMX`` to expose the
@ -587,13 +598,13 @@ huge pages for CPU models that do not provide it)::
[libvirt]
cpu_mode = custom
cpu_model = Haswell-noTSX-IBRS
cpu_models = Haswell-noTSX-IBRS
cpu_model_extra_flags = PCID, VMX, pdpe1gb
As can be seen above, the ``cpu_model_extra_flags`` config
attribute is case-insensitive. Specifying extra flags is valid in
combination with all three possible values of ``cpu_mode``:
``custom`` (this also requires an explicit ``cpu_model`` to be
``custom`` (this also requires an explicit ``cpu_models`` to be
specified), ``host-model``, or ``host-passthrough``. A valid example
for allowing extra CPU flags even for ``host-passthrough`` mode is that
sometimes QEMU may disable certain CPU features -- e.g. Intel's
@ -630,7 +641,7 @@ need to use the ``cpu_model_extra_flags``.
Related options:
* cpu_mode
* cpu_model
* cpu_models
"""),
cfg.StrOpt('snapshots_directory',
default='$instances_path/snapshots',


@ -1344,6 +1344,67 @@ class Connection(object):
raise Exception("fakelibvirt doesn't support getDomainCapabilities "
"for %s architecture" % arch)
def getCPUModelNames(self, arch):
mapping = {
'x86_64': [
'486',
'pentium',
'pentium2',
'pentium3',
'pentiumpro',
'coreduo',
'n270',
'core2duo',
'qemu32',
'kvm32',
'cpu64-rhel5',
'cpu64-rhel6',
'qemu64',
'kvm64',
'Conroe',
'Penryn',
'Nehalem',
'Nehalem-IBRS',
'Westmere',
'Westmere-IBRS',
'SandyBridge',
'SandyBridge-IBRS',
'IvyBridge',
'IvyBridge-IBRS',
'Haswell-noTSX',
'Haswell-noTSX-IBRS',
'Haswell',
'Haswell-IBRS',
'Broadwell-noTSX',
'Broadwell-noTSX-IBRS',
'Broadwell',
'Broadwell-IBRS',
'Skylake-Client',
'Skylake-Client-IBRS',
'Skylake-Server',
'Skylake-Server-IBRS',
'Cascadelake-Server',
'Icelake-Client',
'Icelake-Server',
'athlon',
'phenom',
'Opteron_G1',
'Opteron_G2',
'Opteron_G3',
'Opteron_G4',
'Opteron_G5',
'EPYC',
'EPYC-IBPB'],
'ppc64': [
'POWER6',
'POWER7',
'POWER8',
'POWER9',
'POWERPC_e5500',
'POWERPC_e6500']
}
return mapping.get(arch, [])
# Features are kept separately so that the tests can patch this
# class variable with alternate values.
_domain_capability_features = ''' <features>


@ -385,6 +385,49 @@ _fake_qemu64_cpu_feature = """
</cpu>
"""
_fake_sandy_bridge_cpu_feature = """<cpu mode='custom' match='exact'>
<model fallback='forbid'>SandyBridge</model>
<feature policy='require' name='aes'/>
<feature policy='require' name='apic'/>
<feature policy='require' name='avx'/>
<feature policy='require' name='clflush'/>
<feature policy='require' name='cmov'/>
<feature policy='require' name='cx16'/>
<feature policy='require' name='cx8'/>
<feature policy='require' name='de'/>
<feature policy='require' name='fpu'/>
<feature policy='require' name='fxsr'/>
<feature policy='require' name='lahf_lm'/>
<feature policy='require' name='lm'/>
<feature policy='require' name='mca'/>
<feature policy='require' name='mce'/>
<feature policy='require' name='mmx'/>
<feature policy='require' name='msr'/>
<feature policy='require' name='mtrr'/>
<feature policy='require' name='nx'/>
<feature policy='require' name='pae'/>
<feature policy='require' name='pat'/>
<feature policy='require' name='pclmuldq'/>
<feature policy='require' name='pge'/>
<feature policy='require' name='pni'/>
<feature policy='require' name='popcnt'/>
<feature policy='require' name='pse'/>
<feature policy='require' name='pse36'/>
<feature policy='require' name='rdtscp'/>
<feature policy='require' name='sep'/>
<feature policy='require' name='sse'/>
<feature policy='require' name='sse2'/>
<feature policy='require' name='sse4.1'/>
<feature policy='require' name='sse4.2'/>
<feature policy='require' name='ssse3'/>
<feature policy='require' name='syscall'/>
<feature policy='require' name='tsc'/>
<feature policy='require' name='tsc-deadline'/>
<feature policy='require' name='x2apic'/>
<feature policy='require' name='xsave'/>
</cpu>
"""
_fake_broadwell_cpu_feature = """
<cpu mode='custom' match='exact'>
<model fallback='forbid'>Broadwell-noTSX</model>
@ -6651,9 +6694,61 @@ class LibvirtConnTestCase(test.NoDBTestCase,
</cpu>
"""
def fake_getCPUModelNames(arch):
return [
'486',
'pentium',
'pentium2',
'pentium3',
'pentiumpro',
'coreduo',
'n270',
'core2duo',
'qemu32',
'kvm32',
'cpu64-rhel5',
'cpu64-rhel6',
'qemu64',
'kvm64',
'Conroe',
'Penryn',
'Nehalem',
'Nehalem-IBRS',
'Westmere',
'Westmere-IBRS',
'SandyBridge',
'SandyBridge-IBRS',
'IvyBridge',
'IvyBridge-IBRS',
'Haswell-noTSX',
'Haswell-noTSX-IBRS',
'Haswell',
'Haswell-IBRS',
'Broadwell-noTSX',
'Broadwell-noTSX-IBRS',
'Broadwell',
'Broadwell-IBRS',
'Skylake-Client',
'Skylake-Client-IBRS',
'Skylake-Server',
'Skylake-Server-IBRS',
'Cascadelake-Server',
'Icelake-Client',
'Icelake-Server',
'athlon',
'phenom',
'Opteron_G1',
'Opteron_G2',
'Opteron_G3',
'Opteron_G4',
'Opteron_G5',
'EPYC',
'EPYC-IBPB']
# Make sure the host arch is mocked as x86_64
self.create_fake_libvirt_mock(getCapabilities=fake_getCapabilities,
baselineCPU=fake_baselineCPU,
getCPUModelNames=fake_getCPUModelNames,
getVersion=lambda: 1005001)
drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
@ -6880,7 +6975,7 @@ class LibvirtConnTestCase(test.NoDBTestCase,
image_meta = objects.ImageMeta.from_dict(self.test_image_meta)
self.flags(cpu_mode="custom",
cpu_model="Penryn",
cpu_models=["Penryn"],
group='libvirt')
disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
instance_ref,
@ -6904,7 +6999,7 @@ class LibvirtConnTestCase(test.NoDBTestCase,
image_meta = objects.ImageMeta.from_dict(self.test_image_meta)
self.flags(cpu_mode="custom",
cpu_model="IvyBridge",
cpu_models=["IvyBridge"],
cpu_model_extra_flags="pcid",
group='libvirt')
disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
@ -6931,7 +7026,7 @@ class LibvirtConnTestCase(test.NoDBTestCase,
image_meta = objects.ImageMeta.from_dict(self.test_image_meta)
self.flags(cpu_mode="custom",
cpu_model="IvyBridge",
cpu_models=["IvyBridge"],
cpu_model_extra_flags="PCID",
group='libvirt')
disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
@ -6960,7 +7055,7 @@ class LibvirtConnTestCase(test.NoDBTestCase,
image_meta = objects.ImageMeta.from_dict(self.test_image_meta)
self.flags(cpu_mode="custom",
cpu_model="IvyBridge",
cpu_models=["IvyBridge"],
cpu_model_extra_flags=['pcid', 'vmx'],
group='libvirt')
disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
@ -6981,6 +7076,188 @@ class LibvirtConnTestCase(test.NoDBTestCase,
self.assertEqual(conf.cpu.threads, 1)
mock_warn.assert_not_called()
def test_get_guest_cpu_config_custom_upper_cpu_model(self):
drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
instance_ref = objects.Instance(**self.test_instance)
image_meta = objects.ImageMeta.from_dict(self.test_image_meta)
disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
instance_ref,
image_meta)
self.flags(cpu_mode="custom",
cpu_models=["PENRYN", "IVYBRIDGE"],
group="libvirt")
conf = drvr._get_guest_config(instance_ref,
_fake_network_info(self, 1),
image_meta, disk_info)
self.assertEqual(conf.cpu.mode, "custom")
self.assertEqual(conf.cpu.model, "Penryn")
self.assertEqual(conf.cpu.sockets, instance_ref.flavor.vcpus)
self.assertEqual(conf.cpu.cores, 1)
self.assertEqual(conf.cpu.threads, 1)
def test_get_guest_cpu_config_custom_without_traits(self):
drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
instance_ref = objects.Instance(**self.test_instance)
image_meta = objects.ImageMeta.from_dict(self.test_image_meta)
disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
instance_ref,
image_meta)
self.flags(cpu_mode="custom",
cpu_models=["SandyBridge", "IvyBridge"],
group="libvirt")
conf = drvr._get_guest_config(instance_ref,
_fake_network_info(self, 1),
image_meta, disk_info)
self.assertEqual(conf.cpu.mode, "custom")
self.assertEqual(conf.cpu.model, "SandyBridge")
self.assertEqual(conf.cpu.sockets, instance_ref.flavor.vcpus)
self.assertEqual(conf.cpu.cores, 1)
self.assertEqual(conf.cpu.threads, 1)
@mock.patch('nova.virt.libvirt.host.libvirt.Connection.baselineCPU')
def test_get_guest_cpu_config_custom_with_traits(self, mocked_baseline):
mocked_baseline.side_effect = ('', _fake_qemu64_cpu_feature,
_fake_broadwell_cpu_feature)
drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
extra_specs = {
"trait:HW_CPU_X86_AVX": "required",
"trait:HW_CPU_X86_AVX2": "required"
}
test_instance = _create_test_instance()
test_instance["flavor"]["extra_specs"] = extra_specs
instance_ref = objects.Instance(**test_instance)
image_meta = objects.ImageMeta.from_dict(self.test_image_meta)
disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
instance_ref,
image_meta)
self.flags(cpu_mode="custom",
cpu_models=["qemu64", "Broadwell-noTSX"],
group="libvirt")
conf = drvr._get_guest_config(instance_ref,
_fake_network_info(self, 1),
image_meta, disk_info)
self.assertEqual(conf.cpu.mode, "custom")
self.assertEqual(conf.cpu.model, "Broadwell-noTSX")
self.assertEqual(conf.cpu.sockets, instance_ref.flavor.vcpus)
self.assertEqual(conf.cpu.cores, 1)
self.assertEqual(conf.cpu.threads, 1)
@mock.patch('nova.virt.libvirt.host.libvirt.Connection.baselineCPU')
def test_get_guest_cpu_config_custom_with_traits_multi_models(self,
mocked_baseline):
mocked_baseline.side_effect = ('', _fake_qemu64_cpu_feature,
_fake_broadwell_cpu_feature)
drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
extra_specs = {
"trait:HW_CPU_X86_SSE41": "required",
"trait:HW_CPU_X86_SSE42": "required"
}
test_instance = _create_test_instance()
test_instance["flavor"]["extra_specs"] = extra_specs
instance_ref = objects.Instance(**test_instance)
image_meta = objects.ImageMeta.from_dict(self.test_image_meta)
disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
instance_ref,
image_meta)
self.flags(cpu_mode="custom",
cpu_models=["qemu64", "SandyBridge", "Broadwell-noTSX"],
group="libvirt")
conf = drvr._get_guest_config(instance_ref,
_fake_network_info(self, 1),
image_meta, disk_info)
self.assertEqual(conf.cpu.mode, "custom")
self.assertEqual(conf.cpu.model, "SandyBridge")
self.assertEqual(conf.cpu.sockets, instance_ref.flavor.vcpus)
self.assertEqual(conf.cpu.cores, 1)
self.assertEqual(conf.cpu.threads, 1)
@mock.patch('nova.virt.libvirt.host.libvirt.Connection.baselineCPU')
def test_get_guest_cpu_config_custom_with_traits_none_model(self,
mocked_baseline):
mocked_baseline.side_effect = ('', _fake_qemu64_cpu_feature,
_fake_sandy_bridge_cpu_feature)
drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
extra_specs = {
"trait:HW_CPU_X86_AVX": "required",
"trait:HW_CPU_X86_AVX2": "required"
}
test_instance = _create_test_instance()
test_instance["flavor"]["extra_specs"] = extra_specs
instance_ref = objects.Instance(**test_instance)
image_meta = objects.ImageMeta.from_dict(self.test_image_meta)
disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
instance_ref,
image_meta)
self.flags(cpu_mode="custom",
cpu_models=["qemu64", "SandyBridge"],
group="libvirt")
self.assertRaises(exception.InvalidCPUInfo,
drvr._get_guest_config,
instance_ref,
_fake_network_info(self, 1),
image_meta,
disk_info)
@mock.patch('nova.virt.libvirt.host.libvirt.Connection.baselineCPU')
def test_get_guest_cpu_config_custom_with_progressive_model(self,
mocked_baseline):
"""Test progressive models
Given two required flags, flag1 and flag2, and three ordered CPU
models [model1, model2, model3], where model1 has only flag1, model2
has only flag2, and model3 has both flag1 and flag2, test that the
driver selects model3 and not model2.
"""
# Assume that qemu64 has the avx2 flag for this test.
fake_qemu64_cpu_feature_with_avx2 = _fake_qemu64_cpu_feature.split(
'\n')
fake_qemu64_cpu_feature_with_avx2.insert(
-2, " <feature policy='require' name='avx2'/>")
fake_qemu64_cpu_feature_with_avx2 = "\n".join(
fake_qemu64_cpu_feature_with_avx2)
mocked_baseline.side_effect = ('', fake_qemu64_cpu_feature_with_avx2,
_fake_sandy_bridge_cpu_feature,
_fake_broadwell_cpu_feature)
drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
extra_specs = {
"trait:HW_CPU_X86_AVX": "required",
"trait:HW_CPU_X86_AVX2": "required"
}
test_instance = _create_test_instance()
test_instance["flavor"]["extra_specs"] = extra_specs
instance_ref = objects.Instance(**test_instance)
image_meta = objects.ImageMeta.from_dict(self.test_image_meta)
disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
instance_ref,
image_meta)
self.flags(cpu_mode="custom",
cpu_models=["qemu64", "SandyBridge", "Broadwell-noTSX"],
group="libvirt")
conf = drvr._get_guest_config(instance_ref,
_fake_network_info(self, 1),
image_meta, disk_info)
self.assertEqual(conf.cpu.mode, "custom")
self.assertEqual(conf.cpu.model, "Broadwell-noTSX")
self.assertEqual(conf.cpu.sockets, instance_ref.flavor.vcpus)
self.assertEqual(conf.cpu.cores, 1)
self.assertEqual(conf.cpu.threads, 1)
@mock.patch.object(libvirt_driver.LOG, 'warning')
def test_get_guest_cpu_config_host_model_with_extra_flags(self,
mock_warn):
@ -12895,12 +13172,64 @@ class LibvirtConnTestCase(test.NoDBTestCase,
</cpu>
"""
def fake_getCPUModelNames(arch):
return [
'486',
'pentium',
'pentium2',
'pentium3',
'pentiumpro',
'coreduo',
'n270',
'core2duo',
'qemu32',
'kvm32',
'cpu64-rhel5',
'cpu64-rhel6',
'qemu64',
'kvm64',
'Conroe',
'Penryn',
'Nehalem',
'Nehalem-IBRS',
'Westmere',
'Westmere-IBRS',
'SandyBridge',
'SandyBridge-IBRS',
'IvyBridge',
'IvyBridge-IBRS',
'Haswell-noTSX',
'Haswell-noTSX-IBRS',
'Haswell',
'Haswell-IBRS',
'Broadwell-noTSX',
'Broadwell-noTSX-IBRS',
'Broadwell',
'Broadwell-IBRS',
'Skylake-Client',
'Skylake-Client-IBRS',
'Skylake-Server',
'Skylake-Server-IBRS',
'Cascadelake-Server',
'Icelake-Client',
'Icelake-Server',
'athlon',
'phenom',
'Opteron_G1',
'Opteron_G2',
'Opteron_G3',
'Opteron_G4',
'Opteron_G5',
'EPYC',
'EPYC-IBPB']
# _fake_network_info must be called before create_fake_libvirt_mock(),
# as _fake_network_info calls importutils.import_class() and
# create_fake_libvirt_mock() mocks importutils.import_class().
network_info = _fake_network_info(self, 1)
self.create_fake_libvirt_mock(getLibVersion=fake_getLibVersion,
getCapabilities=fake_getCapabilities,
getCPUModelNames=fake_getCPUModelNames,
getVersion=lambda: 1005001,
baselineCPU=fake_baselineCPU)
@ -21820,7 +22149,7 @@ class LibvirtDriverTestCase(test.NoDBTestCase, TraitsComparisonMixin):
_fake_broadwell_cpu_features.
"""
self.flags(cpu_mode='custom',
cpu_model='Broadwell-noTSX',
cpu_models=['Broadwell-noTSX'],
group='libvirt')
mock_baseline.return_value = _fake_broadwell_cpu_feature
@ -21908,7 +22237,7 @@ class LibvirtDriverTestCase(test.NoDBTestCase, TraitsComparisonMixin):
the feature, only kvm and qemu supports reporting CPU traits.
"""
self.flags(cpu_mode='custom',
cpu_model='IvyBridge',
cpu_models=['IvyBridge'],
virt_type='lxc',
group='libvirt'
)
@ -21948,7 +22277,7 @@ class LibvirtDriverTestCase(test.NoDBTestCase, TraitsComparisonMixin):
"""Test if extra flags are accounted when cpu_mode is set to custom.
"""
self.flags(cpu_mode='custom',
cpu_model='IvyBridge',
cpu_models=['IvyBridge'],
cpu_model_extra_flags='PCID',
group='libvirt')


@ -423,6 +423,12 @@ class LibvirtDriver(driver.ComputeDriver):
# intended to be updatable directly
self.provider_tree = None
# The CPU models in the configuration are case-insensitive, but the CPU
# model names in libvirt are case-sensitive, so create a mapping from the
# lower-cased CPU model name to the canonical CPU model name.
self.cpu_models_mapping = {}
self.cpu_model_flag_mapping = {}
def _get_volume_drivers(self):
driver_registry = dict()
@ -3957,9 +3963,17 @@ class LibvirtDriver(driver.ComputeDriver):
else:
mount.get_manager().host_down()
def _get_guest_cpu_model_config(self):
def _get_cpu_model_mapping(self, model):
if not self.cpu_models_mapping:
cpu_models = self._host.get_cpu_model_names()
for cpu_model in cpu_models:
self.cpu_models_mapping[cpu_model.lower()] = cpu_model
return self.cpu_models_mapping.get(model.lower())
def _get_guest_cpu_model_config(self, flavor=None):
mode = CONF.libvirt.cpu_mode
model = CONF.libvirt.cpu_model
models = [self._get_cpu_model_mapping(model)
for model in CONF.libvirt.cpu_models]
extra_flags = set([flag.lower() for flag in
CONF.libvirt.cpu_model_extra_flags])
@ -3996,24 +4010,31 @@ class LibvirtDriver(driver.ComputeDriver):
"support selecting CPU models") % CONF.libvirt.virt_type
raise exception.Invalid(msg)
if mode == "custom" and model is None:
msg = _("Config requested a custom CPU model, but no "
"model name was provided")
if mode == "custom" and not models:
msg = _("Config requested custom CPU models, but no "
"model names were provided")
raise exception.Invalid(msg)
elif mode != "custom" and model is not None:
msg = _("A CPU model name should not be set when a "
if mode != "custom" and models:
msg = _("CPU model names should not be set when a "
"host CPU model is requested")
raise exception.Invalid(msg)
LOG.debug("CPU mode '%(mode)s' model '%(model)s' was chosen, "
"with extra flags: '%(extra_flags)s'",
{'mode': mode,
'model': (model or ""),
'extra_flags': (extra_flags or "")})
cpu = vconfig.LibvirtConfigGuestCPU()
cpu.mode = mode
cpu.model = model
cpu.model = models[0] if models else None
# Compare flavor traits against the CPU models and select the first matched model
if flavor and mode == "custom":
flags = libvirt_utils.get_flags_by_flavor_specs(flavor)
if flags:
cpu.model = self._match_cpu_model_by_flags(models, flags)
LOG.debug("CPU mode '%(mode)s' models '%(models)s' was chosen, "
"with extra flags: '%(extra_flags)s'",
{'mode': mode,
'models': (cpu.model or ""),
'extra_flags': (extra_flags or "")})
# NOTE (kchamart): Currently there's no existing way to ask if a
# given CPU model + CPU flags combination is supported by KVM &
@ -4030,9 +4051,28 @@ class LibvirtDriver(driver.ComputeDriver):
return cpu
def _match_cpu_model_by_flags(self, models, flags):
for model in models:
if flags.issubset(self.cpu_model_flag_mapping.get(model, set([]))):
return model
cpu = vconfig.LibvirtConfigCPU()
cpu.arch = self._host.get_capabilities().host.cpu.arch
cpu.model = model
features_xml = self._get_guest_baseline_cpu_features(cpu.to_xml())
if features_xml:
cpu.parse_str(features_xml)
feature_names = [f.name for f in cpu.features]
self.cpu_model_flag_mapping[model] = feature_names
if flags.issubset(feature_names):
return model
msg = ('No CPU model matches the required traits; models: {models}, '
'required flags: {flags}'.format(models=models, flags=flags))
raise exception.InvalidCPUInfo(msg)
def _get_guest_cpu_config(self, flavor, image_meta,
guest_cpu_numa_config, instance_numa_topology):
cpu = self._get_guest_cpu_model_config()
cpu = self._get_guest_cpu_model_config(flavor)
if cpu is None:
return None
@ -9667,7 +9707,7 @@ class LibvirtDriver(driver.ComputeDriver):
CPU features.
2. if mode is None, choose a default CPU model based on CPU
architecture.
3. if mode is 'custom', use cpu_model to generate CPU features.
3. if mode is 'custom', use cpu_models to generate CPU features.
The code also accounts for cpu_model_extra_flags configuration when
cpu_mode is 'host-model', 'host-passthrough' or 'custom', this
ensures user specified CPU feature flags to be included.


@ -637,6 +637,14 @@ class Host(object):
return online_cpus
def get_cpu_model_names(self):
"""Get the CPU models based on the host CPU architecture
:returns: a list of CPU models supported by the host CPU arch
"""
arch = self.get_capabilities().host.cpu.arch
return self.get_connection().getCPUModelNames(arch)
def get_capabilities(self):
"""Returns the host capabilities information


@ -30,10 +30,12 @@ from oslo_utils import fileutils
import nova.conf
from nova.i18n import _
from nova import objects
from nova.objects import fields as obj_fields
import nova.privsep.fs
import nova.privsep.idmapshift
import nova.privsep.libvirt
from nova.scheduler import utils as scheduler_utils
from nova import utils
from nova.virt import images
from nova.virt.libvirt import config as vconfig
@ -82,6 +84,9 @@ CPU_TRAITS_MAPPING = {
'xop': os_traits.HW_CPU_X86_XOP
}
# Reverse CPU_TRAITS_MAPPING
TRAITS_CPU_MAPPING = {v: k for k, v in CPU_TRAITS_MAPPING.items()}
def create_image(disk_format, path, size):
"""Create a disk image
@ -585,3 +590,13 @@ def mdev_uuid2name(mdev_uuid):
mdev_<uuid_with_underscores>).
"""
return "mdev_" + mdev_uuid.replace('-', '_')
def get_flags_by_flavor_specs(flavor):
req_spec = objects.RequestSpec(flavor=flavor)
resource_request = scheduler_utils.ResourceRequest(req_spec)
required_traits = resource_request.all_required_traits
flags = [TRAITS_CPU_MAPPING.get(trait) for trait in required_traits]
return set(flags)
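The trait-to-flag reversal used by `get_flags_by_flavor_specs` can be sketched in isolation. The mapping entries below are a small excerpt chosen for illustration; the real `CPU_TRAITS_MAPPING` in Nova covers many more flags, and the helper name here is hypothetical:

```python
# Sketch of deriving required CPU flags from flavor extra_specs, using a
# reversed trait->flag mapping like Nova's TRAITS_CPU_MAPPING (excerpt only).
CPU_TRAITS_MAPPING = {
    "avx": "HW_CPU_X86_AVX",
    "avx2": "HW_CPU_X86_AVX2",
}
TRAITS_CPU_MAPPING = {v: k for k, v in CPU_TRAITS_MAPPING.items()}

def flags_from_extra_specs(extra_specs):
    """Collect the CPU flag for every 'trait:*' key marked 'required'."""
    flags = set()
    for key, val in extra_specs.items():
        if key.startswith("trait:") and val == "required":
            flag = TRAITS_CPU_MAPPING.get(key[len("trait:"):])
            if flag:
                flags.add(flag)
    return flags
```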


@ -0,0 +1,12 @@
---
features:
- |
It is now possible to specify an ordered list of CPU models in the
``[libvirt] cpu_models`` config option. If ``[libvirt] cpu_mode``
is set to ``custom``, the libvirt driver will select the first
CPU model in this list that can provide the required feature
traits.
upgrades:
- |
The ``[libvirt] cpu_model`` config option has been renamed to
``[libvirt] cpu_models`` and now accepts a list of CPU models.