Start making use of the new fields added to the 'NUMACell' object and
store things differently. There are a lot of mechanical changes required
to fix these. The bulk of these fall into one of three categories:
- Migrating 'NUMACell.cpuset' values to 'NUMACell.pcpuset' where
  necessary (hint: if we were testing something to do with an instance
  NUMA topology with pinning, we need to migrate the field)
- Configuring the 'cpu_shared_set' and 'cpu_dedicated_set' config
options (from the '[compute]' group) and changing calls to
'nova.virt.hardware.get_vcpu_pin_set' to '_get_vcpu_available'
and/or '_get_pcpu_available'. This is necessary because the
'_get_guest_numa_config' function has changed to call the latter
instead of the former.
- Removing checks for 'NUMACell.cpu_usage' for pinned tests, since this
no longer counts usage of pinned CPUs (that's handled by the
'pinned_cpus' field on the same object)
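As an illustration of the first category, here is a minimal sketch of
what the migration looks like in a test fixture. The dataclass below is
a plain stand-in for the real 'nova.objects.NUMACell' (which has more
fields and versioned-object machinery), used only to show which field
the CPUs move to:

```python
from dataclasses import dataclass


# Plain stand-in for nova.objects.NUMACell, for illustration only; the
# real object is an oslo versioned object with additional fields.
@dataclass
class NUMACell:
    id: int
    cpuset: frozenset = frozenset()   # CPUs usable by unpinned (shared) guests
    pcpuset: frozenset = frozenset()  # CPUs usable by pinned guests (new field)
    pinned_cpus: frozenset = frozenset()


# Before this change, a pinning test would put the host CPUs in 'cpuset'
old_style = NUMACell(id=0, cpuset=frozenset([0, 1, 2, 3]))

# After this change, the same test must put them in 'pcpuset' instead
new_style = NUMACell(id=0, pcpuset=frozenset([0, 1, 2, 3]))
```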
The only significant deviation from this is the
'test_get_guest_config_numa_host_instance_cpu_pinning_realtime' test,
which needs additional configuration to ensure the guest actually
appears to have CPU pinning configured. This test was pretty much broken
before.
Understanding the lifecycle of the 'NUMATopology' object makes it
easier to see the upgrade impact of this change, insofar as it relates
to the libvirt virt driver. We generate a 'NUMATopology' object
on the compute node in the 'LibvirtDriver._get_host_numa_topology'
method and report that as part of the 'ComputeNode' object provided to
the resource tracker from the 'LibvirtDriver.get_available_resource'
method. The 'NUMATopology' object generated by the driver *does not*
include any usage information, meaning the fields 'cpu_usage',
'pinned_cpus' and 'memory_usage' are set to empty values. Instead, these
are calculated by calling the 'hardware.numa_usage_from_instance_numa'
function. This happens in two places: in the resource tracker, as part
of the 'ResourceTracker._update_usage' method, which is called by the
'ResourceTracker.update_usage' periodic task as well as by other
internal operations each time an instance is created, moved or
destroyed; and in the 'HostState._update_from_compute_node' method,
which is called by the 'HostState.update' method on the scheduler at
startup.
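The flow above can be sketched as follows. Plain dicts stand in for the
real 'NUMACell' objects, and the function bodies are illustrative
simplifications; only the method names in the comments come from the
source:

```python
# Sketch of the NUMATopology lifecycle described above, using plain
# dicts in place of the real NUMACell/NUMATopology objects.

def get_host_numa_topology():
    # LibvirtDriver._get_host_numa_topology (simplified): the driver
    # reports the host topology with the usage fields left empty
    return [{'id': 0, 'cpuset': {0, 1}, 'pcpuset': {2, 3},
             'cpu_usage': 0, 'pinned_cpus': set(), 'memory_usage': 0}]

def numa_usage_from_instance_numa(host_cells, instance_cells):
    # hardware.numa_usage_from_instance_numa (simplified): pinned CPUs
    # are recorded in 'pinned_cpus', while only unpinned CPUs are
    # counted in 'cpu_usage'
    for host_cell, inst_cell in zip(host_cells, instance_cells):
        if inst_cell['cpu_pinning']:
            host_cell['pinned_cpus'] |= set(inst_cell['cpu_pinning'])
        else:
            host_cell['cpu_usage'] += len(inst_cell['cpuset'])
        host_cell['memory_usage'] += inst_cell['memory']
    return host_cells

# Usage is layered on after the topology leaves the driver, e.g. for a
# pinned instance occupying host CPUs 2 and 3
cells = get_host_numa_topology()
cells = numa_usage_from_instance_numa(
    cells, [{'cpuset': set(), 'cpu_pinning': {2, 3}, 'memory': 512}])
```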
As a result of the above, there isn't a significant upgrade impact for
this and it remains possible to run older Stein-based compute nodes
alongside newer Train-based compute nodes. There are two things that
make this possible. Firstly, at no point in this entire process does a
'NUMATopology' object make its way back to the compute node once it has
left by way of a 'ComputeNode' object. That's true for objects in
general, and it means we don't need to worry about the compute node
seeing these new object fields and failing to understand them. Secondly,
the crucial 'hardware.numa_usage_from_instance_numa' function checks
whether 'pcpuset' is unset (meaning the data came from a pre-Train
compute node) and, if so, populates it locally. Because the virt driver
doesn't set the usage-related fields, 'pcpuset' is the only field we
need to care about.

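A minimal sketch of that compatibility check. The real logic lives in
'hardware.numa_usage_from_instance_numa'; the helper name and the dict
representation here are illustrative assumptions:

```python
def ensure_pcpuset(host_cell):
    # If 'pcpuset' was never set, the cell came from a pre-Train (Stein)
    # compute node, where pinnable CPUs were still reported in 'cpuset'.
    # Default it locally so the rest of the code can rely on the field.
    if host_cell.get('pcpuset') is None:
        host_cell['pcpuset'] = set(host_cell['cpuset'])
    return host_cell

# A cell as reported by an old Stein node: 'pcpuset' never set
stein_cell = {'id': 0, 'cpuset': {0, 1, 2, 3}, 'pcpuset': None}
stein_cell = ensure_pcpuset(stein_cell)
```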
Part of blueprint cpu-resources
Change-Id: I492803eaacc34c69af073689f9159449557919db
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>