Map 'hw:cpu_policy' and 'hw:cpu_thread_policy' as follows:
hw:cpu_policy
  dedicated -> resources:PCPU=${flavor.vcpus}
  shared    -> resources:VCPU=${flavor.vcpus}
hw:cpu_thread_policy
  isolate -> trait:HW_CPU_HYPERTHREADING:forbidden
  require -> trait:HW_CPU_HYPERTHREADING:required
  prefer  -> (none; handled later during scheduling)
Ditto for the 'hw_cpu_policy' and 'hw_cpu_thread_policy' image metadata
equivalents.
In addition, increment the requested 'resources:PCPU' by 1 if the
'hw:emulator_threads_policy' extra spec is present and set to 'isolate'.
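The mapping above can be sketched roughly as follows. This is an
illustrative helper, not the actual nova translation code; the
'requested_resources' name and the plain-dict flavor are assumptions
made for the example:

```python
# Hypothetical sketch of the extra spec -> placement request mapping
# described above. 'flavor' is a plain dict standing in for a real
# Flavor object; the helper name is illustrative only.

def requested_resources(flavor):
    """Map CPU pinning extra specs to placement resources/traits."""
    specs = flavor['extra_specs']
    request = {}

    # hw:cpu_policy: dedicated CPUs become PCPU, shared become VCPU
    if specs.get('hw:cpu_policy') == 'dedicated':
        request['resources:PCPU'] = flavor['vcpus']
    else:
        request['resources:VCPU'] = flavor['vcpus']

    # hw:cpu_thread_policy: 'prefer' adds nothing here; it's handled
    # later during scheduling
    thread_policy = specs.get('hw:cpu_thread_policy')
    if thread_policy == 'isolate':
        request['trait:HW_CPU_HYPERTHREADING'] = 'forbidden'
    elif thread_policy == 'require':
        request['trait:HW_CPU_HYPERTHREADING'] = 'required'

    # one extra PCPU for the isolated emulator thread
    if specs.get('hw:emulator_threads_policy') == 'isolate':
        request['resources:PCPU'] = request.get('resources:PCPU', 0) + 1

    return request
```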
The scheduler will attempt to get PCPU allocations and fall back to
VCPUs if that fails. This is okay because the NUMA fitting code from
the 'hardware' module, used by both the 'NUMATopologyFilter' and the
libvirt driver, protects us. That code doesn't know anything about
PCPUs or VCPUs; rather, it cares about the 'NUMATopology.pcpuset'
field (starting in change I492803eaacc34c69af073689f9159449557919db),
which can be set to different values depending on whether this is
Train with new-style config, Train with old-style config, or Stein:
- For Train compute nodes with new-style config, 'NUMATopology.pcpuset'
  will be explicitly set by the virt driver to the value of
  '[compute] cpu_dedicated_set' or, if only '[compute] cpu_shared_set'
  is configured, 'None' (it's nullable), so the calls to
  'hardware.numa_fit_instance_to_host' in the 'NUMATopologyFilter' or
  virt driver will fail if the instance can't actually fit.
- For Train compute nodes with old-style config, 'NUMATopology.pcpuset'
will be set to the same value as 'NUMATopology.cpuset' by the virt
driver.
- For Stein compute nodes, 'NUMATopology.pcpuset' will be unset and
we'll detect this in 'hardware.numa_fit_instance_to_host' and simply
set it to the same value as 'NUMATopology.cpuset'.
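The three cases above amount to normalizing 'pcpuset' before fitting.
A minimal sketch, assuming a simplified stand-in for the NUMA topology
object (the '_UNSET' sentinel and 'effective_pcpuset' helper are
inventions for this example, not nova's actual API):

```python
# Hedged sketch of the pcpuset handling described above: a field that
# may be explicitly set (Train, possibly to None) or entirely unset
# (Stein), in which case we fall back to 'cpuset'.
from dataclasses import dataclass, field
from typing import Optional, Set

_UNSET = object()  # sentinel: distinguishes "never set" from None


@dataclass
class NUMATopology:
    cpuset: Set[int]                 # host CPUs usable for VCPU (shared)
    pcpuset: object = _UNSET         # host CPUs usable for PCPU, or None


def effective_pcpuset(topo: NUMATopology) -> Optional[Set[int]]:
    if topo.pcpuset is _UNSET:
        # Stein compute node: the virt driver never set the field, so
        # treat every CPU in 'cpuset' as pinnable (old-style behaviour).
        return topo.cpuset
    # Train compute node: the virt driver set it explicitly, either to
    # the dedicated set, to 'cpuset' (old-style config), or to None.
    return topo.pcpuset
```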
Part of blueprint cpu-resources
Change-Id: Ie38aa625dff543b5980fd437ad2febeba3b50079
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>