SR-IOV networking for OpenStack releases later than Mitaka requires
the use of the neutron-sriov-agent to support management of SR-IOV PF
and VF interface state by Neutron; the interfaces themselves are still
consumed directly by nova-compute/libvirt via PCI device allocation
scheduling for instances.
Add new configuration options to the neutron-openvswitch charm to
support enablement of the SR-IOV agent; this could have been done
automatically from data presented by neutron-api, but it's possible
that cloud deployments may only have a subset of compute nodes with
SR-IOV capable hardware.
Enabling the 'enable-sriov' option will install and configure
the neutron-sriov-agent; SR-IOV PFs are configured using the
'sriov-numvfs' option, which by default automatically configures
all SR-IOV devices on every machine to the maximum number of VFs
supported by the device. This option can also be used to configure
devices individually, as in the example below.
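For example, assuming a Juju 2.x style client, SR-IOV support could be
enabled and a single device limited to a specific number of VFs with
something like the following (the device name, VF count and per-device
'<device>:<numvfs>' format are illustrative assumptions):

    juju config neutron-openvswitch enable-sriov=true
    juju config neutron-openvswitch sriov-numvfs="ens3f0:8"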
Finally, Neutron needs to know which underlying provider
network each SR-IOV device maps to; this is configured using the
sriov-device-mappings configuration option.
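As a sketch only, mirroring Neutron's physical_device_mappings format
(the provider network and device names are illustrative):

    juju config neutron-openvswitch sriov-device-mappings="physnet1:ens3f0"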
Change-Id: Ie185fd347ddc1b11e9ed13cefaf44fb7c8546ab0
All contributions to this charm were made under Canonical
copyright; switch to Apache-2.0 license as agreed so we
can move forward with official project status.
Change-Id: I7bd44dc15ad951bf2536e5ee10de01ec592b8970
Add full support for DPDK; this includes a number of configuration
options to change the number of cores and the amount of memory
allocated per NUMA node. By default, the first core and 1024MB of
RAM of each NUMA node will be configured for DPDK use.
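As a sketch, assuming the per-socket options are named
'dpdk-socket-cores' and 'dpdk-socket-memory' (the option names here
are an assumption), these defaults could be overridden with:

    juju config neutron-openvswitch dpdk-socket-cores=2
    juju config neutron-openvswitch dpdk-socket-memory=2048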
When DPDK is enabled, OVS bridges are configured with datapath type
'netdev' rather than type 'system' to allow use of userspace
DPDK packet processing; security groups are also disabled, as
iptables-based rules cannot be applied against userspace sockets.
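The datapath switch corresponds to something like the following
ovs-vsctl call for each managed bridge (bridge name illustrative):

    ovs-vsctl set Bridge br-phynet1 datapath_type=netdev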
DPDK device binding is undertaken using /etc/dpdk/interfaces and
the dpdk init script provided as part of the DPDK package; devices
are resolved from the data-port configuration option using the
<bridge>:<mac address> format - MAC addresses are used to resolve
the underlying PCI device names for binding with DPDK.
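As an illustrative sketch, a DPDK-backed bridge might be declared as
follows (bridge name and MAC address are made up); the MAC is then
resolved to a PCI address and a corresponding entry written to
/etc/dpdk/interfaces (the entry format shown is an assumption based
on the Ubuntu dpdk package):

    juju config neutron-openvswitch data-port="br-phynet1:a0:36:9f:dd:37:a8"
    # resulting /etc/dpdk/interfaces entry (illustrative)
    pci 0000:06:00.0 uio_pci_generic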
It's assumed that hugepage memory configuration is either performed
at system boot via kernel command line options (set via MAAS)
or using the hugepages configuration option on the nova-compute
charm.
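For example, hugepages could be reserved at boot with kernel command
line options such as (sizes illustrative):

    default_hugepagesz=1G hugepagesz=1G hugepages=16

or via the nova-compute charm, assuming its hugepages option accepts
a percentage of system memory:

    juju config nova-compute hugepages="50%"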
Change-Id: Ieb2ac522b07e495f1855e304d31eef59c316c0e4