This file serves as documentation for the components and features included in the utilities repository.
PCI IRQ Affinity Agent
While OpenStack makes it possible for instances to use PCI devices, the interrupts generated by these devices may be handled by host CPUs that are unrelated to the instance. This can lead to lower performance than if the device interrupts were handled by the CPUs assigned to the instance.
The agent only acts on instances with dedicated vCPUs; for instances using shared vCPUs, no action is taken.
The expected outcome of the agent's operation is higher performance, achieved by assigning the instance's own cores to handle the interrupts from the PCI devices used by that instance and avoiding interrupts that consume excessive cycles from the platform cores.
The agent operates by listening to RabbitMQ notifications from Nova. When an instance is created on or moved to the host, the agent checks for a specific flavor spec (detailed below); if the spec is present, it queries libvirt to map the instance's vCPUs to pCPUs on the host.
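The mapping step can be sketched as below, assuming the pinning is read from the libvirt domain XML (``<cputune>/<vcpupin>`` elements, as produced by ``virsh dumpxml``). The function names are illustrative, not the agent's actual code:

```python
import xml.etree.ElementTree as ET

def parse_cpuset(cpuset: str) -> set[int]:
    """Expand a libvirt cpuset string such as '5' or '2-3,7' into a set."""
    cpus: set[int] = set()
    for token in cpuset.split(","):
        if "-" in token:
            lo, hi = (int(v) for v in token.split("-"))
            cpus.update(range(lo, hi + 1))
        else:
            cpus.add(int(token))
    return cpus

def vcpu_to_pcpu_map(domain_xml: str) -> dict[int, set[int]]:
    """Build the vCPU -> pCPU mapping from <cputune>/<vcpupin> elements.
    For dedicated-vCPU instances each vCPU is pinned to a single pCPU."""
    root = ET.fromstring(domain_xml)
    return {
        int(pin.get("vcpu")): parse_cpuset(pin.get("cpuset"))
        for pin in root.findall("./cputune/vcpupin")
    }

# Hypothetical domain XML fragment for a 2-vCPU dedicated instance:
XML = """
<domain>
  <cputune>
    <vcpupin vcpu="0" cpuset="5"/>
    <vcpupin vcpu="1" cpuset="6"/>
  </cputune>
</domain>
"""
print(vcpu_to_pcpu_map(XML))  # {0: {5}, 1: {6}}
```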
Once the agent has the CPU mapping, it determines the IRQs for each PCI device used by the instance. It then loops over all PCI devices, determines which host NUMA node each device is associated with and which pCPUs belong to that node, and finally sets the CPU affinity of the device's IRQs based on that pCPU list.
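The kernel exposes the pieces this step needs: a device's NUMA node in ``/sys/bus/pci/devices/<address>/numa_node`` and an IRQ's affinity in ``/proc/irq/<irq>/smp_affinity_list``. A minimal sketch of the write path, with illustrative helper names (not the agent's actual code):

```python
import os

def format_cpulist(pcpus: list[int]) -> str:
    """Render e.g. [1, 2, 3, 5] in the kernel's cpulist syntax: '1-3,5'."""
    pcpus = sorted(set(pcpus))
    parts, i = [], 0
    while i < len(pcpus):
        j = i
        # Extend j while the pCPU indexes remain consecutive.
        while j + 1 < len(pcpus) and pcpus[j + 1] == pcpus[j] + 1:
            j += 1
        parts.append(str(pcpus[i]) if i == j else f"{pcpus[i]}-{pcpus[j]}")
        i = j + 1
    return ",".join(parts)

def pci_numa_node(pci_addr: str, sys_root: str = "/sys") -> int:
    """Read the NUMA node of a PCI device, e.g. pci_addr='0000:03:00.0'."""
    with open(f"{sys_root}/bus/pci/devices/{pci_addr}/numa_node") as f:
        return int(f.read().strip())

def set_irq_affinity(irq: int, pcpus: list[int], proc_root: str = "/proc") -> None:
    """Pin an IRQ to the given pCPUs (requires root on a real host)."""
    path = os.path.join(proc_root, "irq", str(irq), "smp_affinity_list")
    with open(path, "w") as f:
        f.write(format_cpulist(pcpus))

print(format_cpulist([1, 2, 3, 5]))  # 1-3,5
```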
There is also a periodic audit, run every minute, that loops over the existing IRQs: IRQs that were not previously mapped are affined by the agent, and IRQs of PCI devices that are no longer associated with an instance have their affinity reset to the default value.
The PCI IRQ Affinity Agent uses a specific flavor spec for PCI interrupt affining, ``hw_pci_irq_affinity_mask=<vcpus_cpulist>``, which determines which of the vCPUs assigned to the instance must handle the interrupts from the PCI devices:
``vcpus_cpulist`` can assume a comma-separated list of values that can be expressed as:

- ``int``: the vCPU expressed by ``int`` will be assigned to handle the interruptions from the PCI devices
- ``int1-int2``: the vCPUs between ``int1`` and ``int2`` (inclusive) will be used to handle the interruptions from the PCI devices
- ``^int``: the vCPU expressed by ``int`` will not be assigned to handle the interruptions from the PCI devices; it is used to exclude a vCPU that was included in a previous range
NOTE: ``int`` must be a value between ``0`` and ``flavor.vcpus - 1``.
Example: ``hw_pci_irq_affinity_mask=1-4,^3,6`` means that the vCPUs with indexes 1, 2, 4 and 6 from the vCPU list that Nova allocates to the instance will be assigned to handle interruptions from the PCI devices.
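Parsing ``vcpus_cpulist`` according to the rules above can be sketched as follows; this is an illustrative implementation, not the agent's actual parser:

```python
def parse_irq_affinity_mask(mask: str, flavor_vcpus: int) -> list[int]:
    """Expand a mask like '1-4,^3,6' into the sorted vCPU index list,
    validating each index against the flavor's vCPU count."""
    included: set[int] = set()
    excluded: set[int] = set()
    for token in mask.split(","):
        token = token.strip()
        if token.startswith("^"):
            excluded.add(int(token[1:]))       # exclusion of a single vCPU
        elif "-" in token:
            start, end = (int(v) for v in token.split("-"))
            included.update(range(start, end + 1))  # inclusive range
        else:
            included.add(int(token))           # single vCPU
    vcpus = sorted(included - excluded)
    if any(v < 0 or v > flavor_vcpus - 1 for v in vcpus):
        raise ValueError(f"vCPU index out of range 0..{flavor_vcpus - 1}")
    return vcpus

print(parse_irq_affinity_mask("1-4,^3,6", 8))  # [1, 2, 4, 6]
```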
- No CPU affining is performed for instances using shared CPUs.
- No CPU affining will be performed when invalid ranges are specified in the flavor spec; instead, the agent will log error messages indicating the problem.
The agent code resides in the ``starlingx/utilities`` repo, along with the spec and docker_image files that are used to build a CentOS image with the agent wheel installed on it.
The agent is deployed by Armada along with the other OpenStack Helm charts; refer to the PCI IRQ Affinity Agent Helm chart on