278ab01c32
Map 'hw:cpu_policy' and 'hw:cpu_thread_policy' as follows:

  hw:cpu_policy
    dedicated -> resources:PCPU=${flavor.vcpus}
    shared    -> resources:VCPU=${flavor.vcpus}

  hw:cpu_thread_policy
    isolate -> trait:HW_CPU_HYPERTHREADING:forbidden
    require -> trait:HW_CPU_HYPERTHREADING:required
    prefer  -> (none, handled later during scheduling)

Ditto for the 'hw_cpu_policy' and 'hw_cpu_thread_policy' image metadata
equivalents. In addition, increment the requested 'resources:PCPU' by 1 if
the 'hw:emulator_threads_policy' extra spec is present and set to 'isolate'.

The scheduler will attempt to get PCPU allocations and fall back to VCPUs
if that fails. This is okay because the NUMA fitting code from the
'hardware' module used by both the 'NUMATopologyFilter' and the libvirt
driver protects us. That code doesn't know anything about PCPUs or VCPUs
but rather cares about the 'NUMATopology.pcpuset' field (starting in change
I492803eaacc34c69af073689f9159449557919db), which can be set to different
values depending on whether this is Train with new-style config, Train with
old-style config, or Stein:

- For Train compute nodes with new-style config, 'NUMATopology.pcpuset'
  will be explicitly set to the value of '[compute] cpu_dedicated_set' or,
  if only '[compute] cpu_shared_set' is configured, 'None' (it's nullable)
  by the virt driver, so the calls to 'hardware.numa_fit_instance_to_host'
  in the 'NUMATopologyFilter' or virt driver will fail if the instance
  can't actually fit.

- For Train compute nodes with old-style config, 'NUMATopology.pcpuset'
  will be set to the same value as 'NUMATopology.cpuset' by the virt
  driver.

- For Stein compute nodes, 'NUMATopology.pcpuset' will be unset and we'll
  detect this in 'hardware.numa_fit_instance_to_host' and simply set it to
  the same value as 'NUMATopology.cpuset'.

Part of blueprint cpu-resources

Change-Id: Ie38aa625dff543b5980fd437ad2febeba3b50079
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
---
features:
  - |
    Compute nodes using the libvirt driver can now report ``PCPU`` inventory.
    This is consumed by instances with dedicated (pinned) CPUs. This can be
    configured using the ``[compute] cpu_dedicated_set`` config option. The
    scheduler will automatically translate the legacy ``hw:cpu_policy`` flavor
    extra spec or ``hw_cpu_policy`` image metadata property to ``PCPU``
    requests, falling back to ``VCPU`` requests only if no ``PCPU`` candidates
    are found. Refer to the help text of the ``[compute] cpu_dedicated_set``,
    ``[compute] cpu_shared_set`` and ``vcpu_pin_set`` config options for more
    information.
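    For example, for an illustrative flavor with four vCPUs, the translation
    would be::

        hw:cpu_policy=dedicated  ->  resources:PCPU=4
        hw:cpu_policy=shared     ->  resources:VCPU=4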
  - |
    Compute nodes using the libvirt driver will now report the
    ``HW_CPU_HYPERTHREADING`` trait if the host has hyperthreading. The
    scheduler will automatically translate the legacy ``hw:cpu_thread_policy``
    flavor extra spec or ``hw_cpu_thread_policy`` image metadata property to
    either require or forbid this trait.
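    For example, an illustrative flavor extra spec of::

        hw:cpu_thread_policy=require

    will restrict the instance to hosts reporting the trait, while
    ``hw:cpu_thread_policy=isolate`` will restrict it to hosts that do not
    report it.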
  - |
    A new configuration option, ``[compute] cpu_dedicated_set``, has been
    added. This can be used to configure the host CPUs that should be used for
    ``PCPU`` inventory.
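    For example, to reserve host CPUs 4-15 for pinned instances (an
    illustrative layout; choose ranges to match your hardware)::

        [compute]
        cpu_dedicated_set = 4-15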
  - |
    A new configuration option, ``[workarounds] disable_fallback_pcpu_query``,
    has been added. When creating or moving pinned instances, the scheduler
    will attempt to provide a ``PCPU``-based allocation, but can also fall
    back to a legacy ``VCPU``-based allocation. This fallback behavior is
    enabled by default to ensure it is possible to upgrade without having to
    modify compute node configuration, but it results in an additional request
    for allocation candidates from placement. This can have a slight
    performance impact and is unnecessary on new or upgraded deployments where
    the compute nodes have been correctly configured to report ``PCPU``
    inventory. The ``[workarounds] disable_fallback_pcpu_query`` config option
    can be used to disable this fallback allocation candidate request, meaning
    only ``PCPU``-based allocation candidates will be retrieved.
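    Once all compute nodes have been configured to report ``PCPU`` inventory,
    the fallback can be disabled with::

        [workarounds]
        disable_fallback_pcpu_query = True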
deprecations:
  - |
    The ``vcpu_pin_set`` configuration option has been deprecated. You should
    migrate host CPU configuration to the ``[compute] cpu_dedicated_set`` or
    ``[compute] cpu_shared_set`` config options, or both. Refer to the help
    text of these config options for more information.
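    For example, if an illustrative legacy configuration of::

        [DEFAULT]
        vcpu_pin_set = 4-15

    was being used to host pinned instances, it would be migrated to::

        [compute]
        cpu_dedicated_set = 4-15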
upgrade:
  - |
    Previously, if ``vcpu_pin_set`` was not defined, the libvirt driver would
    count all available host CPUs when calculating ``VCPU`` inventory,
    regardless of whether those CPUs were online or not. The driver will now
    only report the total number of online CPUs. This should result in fewer
    build failures on hosts with offlined CPUs.