StarlingX miscellaneous tools and utilities
Tara Subedi 5a79edbc59 Subcloud Enrolment failure with OAM IP Address change
With OAM reconfiguration during subcloud enrollment, both the old and
the new IP address are present on the OAM interface: the old one from
the original OAM configuration and the new one from cloud-init's
network-config. When oam-modify is triggered:
1) ifdown: the old OAM IP and the default OAM route get deleted,
2) the ifcfg file is reconfigured,
3) ifup: fails (silently), as the new IP setting is already present.
As an end result, the default OAM route is still missing, which leads
to other failures later, such as "kubeadm init phase upload-certs"
failing and "install cert failure".

Concrete example:
Initially the OAM interface (vlan112:3-7) had 2620:10a:a001:aa0c::128.
Enrollment with OAM reconfiguration requested 2620:10a:a001:aa0c::171.
Cloud-init applies "etc/network/interfaces.d/50-cloud-init" (derived
from network-config) and sets the new 2620:10a:a001:aa0c::171 IP on
"vlan112". Now both IPs are present: the new one on vlan112 and the
old one on vlan112:3-7. oam-modify is triggered and
apply_network_config.sh is called; "ifdown vlan112:3-7" removes the
old ::128 IP (and marks var/run/network/ifstate.vlan112:3-7 as
down/empty), but does not remove the new ::171 on vlan112. The address
is then changed (::128 -> ::171) in the ifcfg file (etc/network
/interfaces.d/ifcfg-vlan112:3-7), and "ifup vlan112:3-7" fails because
::171 is already present on vlan112. Thus the ifstate for vlan112:3-7
remains down/empty, and the deleted default route never gets
reinstalled.
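
A minimal reproduction sketch of the duplicate-address failure
(hypothetical; the addresses come from the example above and the /64
prefix length is assumed):

  import subprocess

  ADDR = "2620:10a:a001:aa0c::171/64"  # prefix length assumed
  # What cloud-init's 50-cloud-init effectively does on the base vlan:
  subprocess.run(["ip", "-6", "addr", "add", ADDR, "dev", "vlan112"],
                 check=True)
  # What "ifup vlan112:3-7" then attempts: re-adding the same address,
  # which iproute2 rejects ("RTNETLINK answers: File exists"), so ifup
  # gives up without reinstalling the default route.
  result = subprocess.run(["ip", "-6", "addr", "add", ADDR, "dev",
                           "vlan112", "label", "vlan112:3-7"])
  assert result.returncode != 0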

This commit fixes the issue by cleaning up the IP and route that the
old OAM configuration (applied through puppet) left on Linux, and by
bringing the OAM label/alias interface down, before doing oam-modify,
when the OAM reconfiguration does not change the interface/vlan with
respect to the factory-install OAM interface/vlan. When the
interface/vlan is not modified, the OAM reconfiguration is an
address-only change, which oam-modify itself supports. oam-modify
needs the OAM connection intact, so it relies entirely on cloud-init's
OAM IP and route. When oam-modify triggers the puppet runtime, step 1)
above (ifdown) does nothing, as the interface is already down, and the
default OAM route therefore does not get deleted.
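
A minimal sketch of the cleanup idea (not the actual change; the
interface and address values come from the example above, and the /64
prefix length is assumed):

  import subprocess

  OLD_ADDR = "2620:10a:a001:aa0c::128/64"  # from the old OAM config
  LABEL_IF = "vlan112:3-7"                 # OAM label/alias interface

  # Remove the IP and default route that the old (puppet-applied) OAM
  # config left behind; connectivity now relies on cloud-init's new
  # IP and route on vlan112.
  subprocess.run(["ip", "-6", "addr", "del", OLD_ADDR, "dev", "vlan112"],
                 check=False)
  subprocess.run(["ip", "-6", "route", "del", "default"], check=False)
  # Bring the label interface down so that step 1) (ifdown) during the
  # oam-modify puppet runtime becomes a no-op.
  subprocess.run(["ifdown", LABEL_IF], check=False)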

TEST PLAN:
  PASS: subcloud enrollment with oam-reconfig w/o interface/vlan change
        - checked /var/log/cloud-init-output.log for ip/route deletion
        - checked /var/log/user.log; "Failed bringing" may still appear
        - checked OAM has a single IP and the default route is present
        - OAM connection is based on cloud-init's new IP/route
  PASS: subcloud enrollment without oam-reconfig
  PASS: subcloud enrollment with oam-reconfig with interface change
  PASS: subcloud enrollment with oam-reconfig with vlan change
  PASS: all of the above enrollments with both IPv4 and IPv6 on OAM

Closes-bug: 2089689
Change-Id: If3b36dc8722263b9b66b7f51f62452f1056be124
Signed-off-by: Tara Nath Subedi <tara.subedi@windriver.com>
2024-11-26 16:28:15 -05:00
ceph Merge "Change stx-ceph-manager base image" 2024-08-05 15:02:58 +00:00
security Remove CentOS/OpenSUSE build support 2024-04-26 14:19:03 -04:00
tools Factory-install: Update stage service activation 2024-10-25 20:09:54 +00:00
utilities Subcloud Enrolment failure with OAM IP Address change 2024-11-26 16:28:15 -05:00
.gitignore Refactoring novaClient instantiation and unittests 2022-01-25 19:27:36 -03:00
.gitreview Add a .gitreview file to the new repo 2019-09-09 09:48:42 -05:00
.zuul.yaml Add python-ldap deps for zuul test 2023-09-15 18:41:49 +00:00
bindep.txt Add python-ldap deps for zuul test 2023-09-15 18:41:49 +00:00
CONTRIBUTING.rst Adding zuul jobs for new repo 2019-09-09 13:43:49 -05:00
debian_build_layer.cfg Add debian_build_layer.cfg file 2021-10-05 14:13:38 -04:00
debian_iso_image.inc debian: Remove debian-integration package 2022-12-06 08:01:23 -05:00
debian_pkg_dirs Add accel-config to stx-debian-tools-dev docker image 2024-03-21 12:20:34 -03:00
debian_stable_docker_images.inc Port stx-pci-irq-affinity-agent to stx-debian 2023-01-16 15:44:08 -03:00
debian_stable_wheels.inc Debian: Add build structure for utilities/pci-irq-affinity-agent 2023-01-16 15:43:53 -03:00
HACKING.rst Adding zuul jobs for new repo 2019-09-09 13:43:49 -05:00
pylint.rc Re-enable important py3k checks for utilities 2021-10-21 12:34:24 +00:00
README.rst Document PCI IRQ Affinity Agent operation 2021-11-04 16:39:08 -03:00
requirements.txt Turn off legacy resolver workaround in pip 2020-12-17 17:04:41 -06:00
test-requirements.txt Add flake8-import-order and use python3.9 on tox 2022-09-13 21:49:41 +00:00
tox.ini Add python-ldap deps for zuul test 2023-09-15 18:41:49 +00:00

utilities

This file serves as documentation for the components and features included in the utilities repository.

PCI IRQ Affinity Agent

While OpenStack makes it possible for instances to use PCI devices, the interrupts generated by these devices may be handled by host CPUs unrelated to the instance, which can lead to lower performance than if the device interrupts were handled by the instance's own CPUs.

The agent only acts on instances with dedicated vCPUs; it takes no action for instances using shared vCPUs.

The expected outcome of the agent's operation is higher performance, achieved by assigning the instance's own cores to handle the interrupts from the PCI devices used by that instance and by preventing those interrupts from consuming excessive cycles on the platform cores.

Agent operation

The agent operates by listening to RabbitMQ notifications from Nova. When an instance is created or moved to the host, the agent checks for a specific flavor spec (detailed below); if the spec is present, it queries libvirt to map the instance's vCPUs to pCPUs on the host.
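
As a rough illustration of the mapping step, a sketch using the libvirt Python bindings ("instance-00000001" is a hypothetical domain name, and this is not the agent's actual code):

  import libvirt

  conn = libvirt.open("qemu:///system")
  dom = conn.lookupByName("instance-00000001")
  # vcpuPinInfo() returns, per vCPU, a tuple of booleans with one entry
  # per host pCPU; True means the vCPU may run on that pCPU.
  for vcpu, cpumap in enumerate(dom.vcpuPinInfo(0)):
      pcpus = [pcpu for pcpu, pinned in enumerate(cpumap) if pinned]
      print(f"vCPU {vcpu} -> pCPUs {pcpus}")
  conn.close()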

Once the agent has the CPU mapping, it determines the IRQs for each PCI device used by the instance. It then loops over all PCI devices, determines which host NUMA node is associated with each device and which pCPUs are associated with that NUMA node, and finally sets the CPU affinity for the device's IRQs based on that pCPU list.
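
The affinity itself can be set through the standard kernel interface under /proc/irq. A minimal sketch (the IRQ number and pCPU list in the usage line are hypothetical):

  def set_irq_affinity(irq: int, pcpus: list[int]) -> None:
      # smp_affinity_list accepts a cpulist string such as "4,5" or "4-6"
      with open(f"/proc/irq/{irq}/smp_affinity_list", "w") as f:
          f.write(",".join(str(c) for c in pcpus))

  set_irq_affinity(142, [4, 5])  # affine IRQ 142 to pCPUs 4 and 5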

There is also a periodic audit that runs every minute and loops over the existing IRQs: new IRQs that were not mapped before get mapped, and PCI devices that are no longer associated with an instance have their IRQ affinity reset to the default value.
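
A hedged sketch of such an audit pass (the tracked/stale bookkeeping is an assumption, and set_irq_affinity is the helper sketched above):

  import os

  def audit(tracked: dict[int, list[int]], stale: set[int]) -> None:
      # tracked: IRQ -> pCPU list for devices still owned by an instance
      # stale: IRQs whose device no longer belongs to any instance
      for irq, pcpus in tracked.items():
          if os.path.exists(f"/proc/irq/{irq}"):
              set_irq_affinity(irq, pcpus)
      # Reset stale IRQs back to the system default affinity mask
      with open("/proc/irq/default_smp_affinity") as f:
          default_mask = f.read().strip()
      for irq in stale:
          with open(f"/proc/irq/{irq}/smp_affinity", "w") as f:
              f.write(default_mask)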

Flavor spec

The PCI IRQ Affinity Agent uses a specific flavor spec for PCI interrupt affining, which determines which of the vCPUs assigned to the instance must handle the interrupts from the PCI devices:

  • hw:pci_irq_affinity_mask=<vcpus_cpulist>

Where vcpus_cpulist is a comma-separated list of values, each of which can be expressed as:

  • int: the vCPU expressed by int will be assigned to handle the interrupts from the PCI devices
  • int1-int2: the vCPUs between int1 and int2 (inclusive) will be used to handle the interrupts from the PCI devices
  • ^int: the vCPU expressed by int will not be assigned to handle the interrupts from the PCI devices; this is used to exclude a vCPU that was included in a previous range

NOTE: int must be a value between 0 and flavor.vcpus - 1

Example: hw:pci_irq_affinity_mask=1-4,^3,6 means that the vCPUs with indexes 1, 2, 4 and 6 from the vCPU list that Nova allocates to the instance will be assigned to handle interrupts from the PCI devices.
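
A minimal parser for the vcpus_cpulist syntax above (a sketch, not the agent's actual implementation):

  def parse_irq_affinity_mask(mask: str, vcpus: int) -> set[int]:
      selected: set[int] = set()
      for term in mask.split(","):
          exclude = term.startswith("^")
          lo, _, hi = term.lstrip("^").partition("-")
          cpus = set(range(int(lo), int(hi or lo) + 1))
          if any(c < 0 or c >= vcpus for c in cpus):
              raise ValueError(f"vCPU out of range in {term!r}")
          if exclude:
              selected -= cpus
          else:
              selected |= cpus
      return selected

  # The example above: "1-4,^3,6" with an 8-vCPU flavor -> {1, 2, 4, 6}
  assert parse_irq_affinity_mask("1-4,^3,6", 8) == {1, 2, 4, 6}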

Limitations

  • No CPU affining is performed for instances using shared CPUs (i.e., when using the flavor spec hw:cpu_policy=shared)
  • No CPU affining is performed when invalid ranges are specified in the flavor spec; instead, the agent logs error messages indicating the problem

Agent packaging

The agent code resides in the starlingx/utilities repo, along with the spec and docker_image files that are used to build a CentOS image with the agent wheel installed on it.

The agent is deployed by Armada along with the other OpenStack helm charts; refer to the PCI IRQ Affinity Agent helm chart in the starlingx/openstack-armada-app repository.