
utilities
This file serves as documentation for the components and features included in the utilities repository.
PCI IRQ Affinity Agent
While in OpenStack it is possible to enable instances to use PCI devices, the interrupts generated by these devices may be handled by host CPUs that are unrelated to the instance, which can lead to lower performance than if the device interrupts were handled by the instance's own CPUs.
The agent only acts on instances with dedicated vCPUs; no action is taken for instances using shared vCPUs.
The expected outcome of the agent's operation is higher performance, achieved by assigning the instance's own cores to handle the interrupts from the PCI devices used by that instance, thereby preventing those interrupts from consuming excessive cycles on the platform cores.
Agent operation
The agent operates by listening to RabbitMQ notifications from Nova. When an instance is created on or moved to the host, the agent checks for a specific flavor spec (detailed below) and, if it is present, queries libvirt to map the instance's vCPUs to pCPUs on the host.
Once the agent has the CPU mapping, it determines the IRQ of each PCI device used by the instance. It then loops over all PCI devices, determines which host NUMA node each device is associated with and which pCPUs are associated with that NUMA node, and finally sets the CPU affinity of the device's IRQs based on the resulting pCPU list.
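A minimal sketch of that last step, assuming the standard Linux sysfs/procfs interfaces (/sys/bus/pci/devices/<addr>/numa_node, the msi_irqs directory and /proc/irq/<irq>/smp_affinity_list); the PCI address and pCPU list below are hypothetical, and this is not the agent's actual code:

    import os

    def pci_device_numa_node(pci_addr):
        # NUMA node the device is attached to (-1 when the platform does not report one)
        with open("/sys/bus/pci/devices/%s/numa_node" % pci_addr) as f:
            return int(f.read().strip())

    def pci_device_irqs(pci_addr):
        # IRQ numbers used by the device: MSI/MSI-X vectors if present, else the legacy IRQ
        dev = "/sys/bus/pci/devices/%s" % pci_addr
        msi_dir = os.path.join(dev, "msi_irqs")
        if os.path.isdir(msi_dir):
            return [int(name) for name in os.listdir(msi_dir)]
        with open(os.path.join(dev, "irq")) as f:
            irq = int(f.read().strip())
        return [irq] if irq > 0 else []

    def set_irq_affinity(irq, pcpus):
        # Pin the IRQ to the given pCPUs (writing here requires root)
        with open("/proc/irq/%d/smp_affinity_list" % irq, "w") as f:
            f.write(",".join(str(cpu) for cpu in sorted(pcpus)))

    # e.g. pin every IRQ of a (hypothetical) device to pCPUs 4-6:
    # for irq in pci_device_irqs("0000:81:00.0"):
    #     set_irq_affinity(irq, {4, 5, 6})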
There is also a periodic audit that runs every minute and loops over the existing IRQs: any new IRQs that were not mapped before are mapped, and if a PCI device is no longer associated with the instance it previously belonged to, its IRQ affinity is reset to the default value.
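The reset step can be sketched the same way, assuming the kernel's default mask published in /proc/irq/default_smp_affinity is the desired fallback (again a sketch, not the agent's code):

    def reset_irq_affinity(irq):
        # Restore the IRQ to the system default affinity mask (requires root)
        with open("/proc/irq/default_smp_affinity") as f:
            default_mask = f.read().strip()  # hex CPU mask, e.g. "ffff"
        with open("/proc/irq/%d/smp_affinity" % irq, "w") as f:
            f.write(default_mask)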
Flavor spec
The PCI IRQ Affinity Agent relies on a specific flavor spec for PCI interrupt affining, which determines which of the vCPUs assigned to the instance must handle the interrupts from the PCI devices:
hw:pci_irq_affinity_mask=<vcpus_cpulist>
Where vcpus_cpulist can assume a comma-separated list of values that can be expressed as:
- int: the vCPU expressed by int will be assigned to handle the interruptions from the PCI devices
- int1-int2: the vCPUs between int1 and int2 (inclusive) will be used to handle the interruptions from the PCI devices
- ^int: the vCPU expressed by int will not be assigned to handle the interruptions from the PCI devices; this form is used to exclude a vCPU that was included in a previous range
NOTE: int must be a value between 0 and flavor.vcpus - 1
Example: hw:pci_irq_affinity_mask=1-4,^3,6 means that the vCPUs with indexes 1, 2, 4 and 6 from the vCPU list that Nova allocates to the instance will be assigned to handle interruptions from the PCI devices.
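For illustration, a minimal sketch of a parser for this syntax (a hypothetical helper, not the agent's actual code) that also enforces the 0 .. flavor.vcpus - 1 range from the NOTE above:

    def parse_cpulist(mask, vcpus):
        # Expand e.g. "1-4,^3,6" into a set of vCPU indexes and validate the range
        include, exclude = set(), set()
        for term in mask.split(","):
            target = exclude if term.startswith("^") else include
            term = term.lstrip("^")
            if "-" in term:
                first, last = (int(v) for v in term.split("-", 1))
                target.update(range(first, last + 1))
            else:
                target.add(int(term))
        cpus = include - exclude
        if any(cpu < 0 or cpu >= vcpus for cpu in cpus):
            raise ValueError("vCPU index out of range 0..%d" % (vcpus - 1))
        return cpus

    # matches the example above (flavor.vcpus assumed to be 8):
    assert parse_cpulist("1-4,^3,6", vcpus=8) == {1, 2, 4, 6}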
Limitations
- No CPU affining is performed for instances using shared CPUs (i.e., when using the flavor spec hw:cpu_policy=shared)
- No CPU affining is performed when invalid ranges are specified in the flavor spec; instead, the agent logs error messages indicating the problem
Agent packaging
The agent code resides in the starlingx/utilities repo, along with the spec and docker_image files that are used to build a CentOS image with the agent wheel installed on it.
The agent is deployed by Armada along with the other OpenStack Helm charts; refer to the PCI IRQ Affinity Agent Helm chart in the starlingx/openstack-armada-app repository.