Fix use of the OVS DPDK context by calling methods on the
context directly for OVS table values.
For modern OVS versions that require the PCI address of the
DPDK device for type=dpdk ports, derive the port name from a
hash of the PCI address rather than from the device's index in
the current list of devices to use; this is idempotent in the
event that the configuration changes and new devices appear in
the list of devices to use for DPDK.
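For illustration only (the prefix and hash length here are
assumptions, not necessarily the charm's exact implementation),
a stable name can be derived along these lines:

    import hashlib

    def dpdk_port_name(pci_address):
        # Hash the PCI address (e.g. '0000:04:00.0') so the port
        # name stays stable even if the list of DPDK devices to use
        # is later reordered or extended.
        digest = hashlib.sha1(pci_address.encode('UTF-8')).hexdigest()
        return 'dpdk-{}'.format(digest[:7])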
Only set OVS table values if the value has changed; OVS will
attempt to re-allocate hugepage memory on every set, irrespective
of whether the table value actually changed.
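A minimal sketch of the idea, assuming other_config keys in the
Open_vSwitch table (this is not the charm's actual helper):

    import subprocess

    def set_other_config(key, value):
        # Read the current value; ovs-vsctl exits non-zero when the
        # key is not yet present in the other_config map.
        try:
            current = subprocess.check_output(
                ['ovs-vsctl', 'get', 'Open_vSwitch', '.',
                 'other_config:{}'.format(key)],
                stderr=subprocess.DEVNULL).decode().strip().strip('"')
        except subprocess.CalledProcessError:
            current = None
        # Only write when the value actually changed; an unconditional
        # set triggers a hugepage re-allocation in OVS.
        if current != str(value):
            subprocess.check_call(
                ['ovs-vsctl', 'set', 'Open_vSwitch', '.',
                 'other_config:{}={}'.format(key, value)])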
Switch to using /run/libvirt-vhost-user for libvirt-created DPDK
sockets, allowing libvirt to create the socket directly as part
of instance creation; use systemd-tmpfiles to ensure that the
vhost-user subdirectory is re-created on boot with the correct
permissions.
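The tmpfiles.d entry is along these lines (the file name, ownership
and mode shown here are assumptions for illustration):

    # e.g. /etc/tmpfiles.d/nova-ovs-vhost-user.conf
    d /run/libvirt-vhost-user 0770 libvirt-qemu kvm - -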
Scan data-port and dpdk-bond-mappings for PCI devices to use
for DPDK, avoiding the need to replicate all PCI devices in the
data-port configuration when DPDK bonds are in use.
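Conceptually (a hypothetical sketch; the helper and option parsing
are illustrative, assuming '<bridge-or-bond>:<mac>' entries):

    def dpdk_macs(charm_config):
        # Collect MAC addresses from both data-port and
        # dpdk-bond-mappings, so devices used in DPDK bonds do not
        # also have to be listed in data-port.
        macs = set()
        for option in ('data-port', 'dpdk-bond-mappings'):
            for entry in (charm_config.get(option) or '').split():
                _, _, mac = entry.partition(':')
                if mac:
                    macs.add(mac.lower())
        return macs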
Change-Id: I2964046bc8681fa870d61c6cd23b6ad6fee47bf4
This patch adds support for reading the 'enable-qos' setting from the
neutron-plugin-api relation and adding 'qos' to the extension_drivers setting
if it is True. This is part of a wider set of changes to support QoS across the
neutron charms.
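A hedged sketch of the resulting behaviour (names are illustrative,
not the charm's actual context class):

    def qos_extension_drivers(relation_data):
        # 'enable-qos' arrives over the neutron-plugin-api relation
        # as a string; only add 'qos' when it is explicitly True.
        if relation_data.get('enable-qos') in (True, 'True', 'true'):
            return ['qos']
        return []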
A charmhelper sync was performed to pull in the QoS update to the
NeutronAPIContext.
Note: Amulet tests will fail until the corresponding neutron-api
change lands.
Depends-On: I1beba9bebdb7766fd95d47bf13b6f4ad86e762b5
Change-Id: I9d857a4f2a25c6080963a0f3f6e6592c0a77d133
Partial-Bug: #1705358
These options are set centrally in the neutron-api charm, and
this patch allows the neutron-openvswitch charm to continue
consuming them as follows (see the configuration excerpts after
the list):
1. polling_interval
   Used only by the neutron L2 agents, so the neutron-openvswitch
   charm gets it via its relations and sets it in [agent] of
   ml2_conf.ini or openvswitch_agent.ini (>= Mitaka).
2. rpc_response_timeout
   Used by all neutron agents, so both the neutron-gateway and
   neutron-openvswitch charms get it via their relations and set
   it in [DEFAULT] of neutron.conf.
3. report_interval
   Used by all neutron agents, so both the neutron-gateway and
   neutron-openvswitch charms get it via their relations and set
   it in [agent] of neutron.conf.
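For example, the rendered files end up with entries along these
lines (values are illustrative):

    # openvswitch_agent.ini (>= Mitaka) or ml2_conf.ini
    [agent]
    polling_interval = 2

    # neutron.conf
    [DEFAULT]
    rpc_response_timeout = 60

    [agent]
    report_interval = 30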
Change-Id: I76c0c75d5f3b4fdd1eb3242b53fde2e829fedca5
Partial-Bug: #1685788
Neutron has supported use of a native openvswitch firewall driver
for a few releases; OpenStack Mitaka on Ubuntu 16.04 has the
required kernel and openvswitch versions to support this feature.
Add a new firewall-driver configuration option to support use
of the openvswitch native firewall; the default remains the
iptables_hybrid driver, and users can switch to the openvswitch
driver if they are deployed on Ubuntu Xenial or later.
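For example, assuming the application is deployed as
'neutron-openvswitch':

    juju config neutron-openvswitch firewall-driver=openvswitch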
Change-Id: I4c228c5cbbff7f9673c1028ee4b075edba1fdc13
Closes-Bug: #1681890
Add full support for DPDK; this includes a number of configuration
options to allow the number of cores and memory allocated per
NUMA node to be changed. By default, the first core and 1024MB of
RAM of each NUMA node will be configured for DPDK use.
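For example, the per-NUMA-node allocation can be tuned via the new
options (the option names here are assumed; consult config.yaml for
the authoritative names):

    juju config neutron-openvswitch dpdk-socket-cores=2 dpdk-socket-memory=2048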
When DPDK is enabled, OVS bridges are configured with datapath
type 'netdev' rather than type 'system' to allow use of userspace
DPDK packet processing; security groups are also disabled, as
iptables-based rules cannot be applied against userspace sockets.
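At the OVS level this is equivalent to (bridge name illustrative):

    ovs-vsctl set Bridge br-phynet1 datapath_type=netdev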
DPDK device binding is undertaken using /etc/dpdk/interfaces and
the dpdk init script provided as part of the DPDK package; device
resolution is determined from the data-port configuration option
using the <bridge>:<MAC address> format - MAC addresses are used
to resolve underlying PCI device names for binding with DPDK.
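Illustrative example (bridge, MAC, PCI address and driver are
placeholders): a data-port entry of 'br-phynet1:a1:b2:c3:d4:e5:f6'
is resolved to the PCI device owning that MAC, which is then
recorded for binding along these lines:

    # /etc/dpdk/interfaces
    # <bus> <id>          <driver>
    pci     0000:04:00.0  uio_pci_generic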
It's assumed that hugepage memory configuration is either done
at system boot via kernel command line options (set via MAAS) or
via the hugepages configuration option on the nova-compute charm.
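For example, hugepages can be reserved at boot with kernel command
line options along these lines (sizes are illustrative):

    default_hugepagesz=1G hugepagesz=1G hugepages=16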
Change-Id: Ieb2ac522b07e495f1855e304d31eef59c316c0e4