This patch removes the prevent-arp-spoofing warning log messages when
the option is set to true and the OpenStack release is >= ocata. It
does not remove the configuration option itself.
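A minimal sketch of the intended guard, assuming the charm-helpers
helper names (os_release, CompareOpenStackReleases):

    from charmhelpers.contrib.openstack.utils import (
        CompareOpenStackReleases,
        os_release,
    )
    from charmhelpers.core.hookenv import WARNING, config, log

    release = os_release('neutron-common')
    if not (config('prevent-arp-spoofing') and
            CompareOpenStackReleases(release) >= 'ocata'):
        # only warn where the option still changes behaviour
        log('prevent-arp-spoofing is deprecated', level=WARNING)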
Change-Id: Ida1d67331e6d2df3d88d782125b8914f7dea5704
Closes-Bug: 1754386
Currently it is a requirement to have a network node with an l3 agent
running in dvr_snat mode, even for DVR deployments that do not use
SNAT or make only very limited use of SNAT.
It is not possible to disable SNAT completely:
https://bugs.launchpad.net/neutron/+bug/1761591
Neutron creates a network:router_centralized_snat port and if it is not
possible to find a dvr_snat agent to schedule it on there are various
side-effects which are not seen at first. For example, Designate stops
creating records for floating IPs and Neutron/Designate integration is,
therefore, not functional.
The Neutron DVR documentation says that dvr_snat should be used on
network nodes. However, there is nothing restricting a DVR deployment
from using dvr_snat l3 agents on every compute node and not having
dedicated network nodes.
This change modifies neutron-openvswitch to optionally enable the
dvr_snat l3 agent mode (this includes supporting L3HA routers if
enabled). As a result, it is possible to have deployments without
neutron-gateway, thus reducing the number of required nodes. Care
should be taken when a large number of L3HA routers is used; using
DVR routers without L3HA is recommended.
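A minimal sketch of the resulting agent mode selection; the option
name 'use-dvr-snat' is an assumption for illustration:

    from charmhelpers.core.hookenv import config

    def l3_agent_mode(dvr_enabled):
        # dvr_enabled comes from the neutron-plugin-api relation
        if not dvr_enabled:
            return 'legacy'
        if config('use-dvr-snat'):  # assumed option name
            return 'dvr_snat'       # no dedicated network node required
        return 'dvr'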
Change-Id: Iad3a64967f91c81312911f6db856ce2271b0e068
Closes-Bug: #1808045
Add support for enabling logging of security groups for
OpenStack Queens or later; this feature is enabled via
the neutron-api charm, with local charm configuration
options to allow control of rate and burst limits and to
set a local log output directory if required (allowing log
data to be written to a separate partition, for example).
The feature is only compatible with the openvswitch firewall
driver and will not be enabled if this configuration option
is not set.
Changes to the basic deployment tests are included here since
the nova-cloud-controller unit and relation were missing before,
which caused CI to fail constantly.
Corresponding charm-helpers change:
https://github.com/juju/charm-helpers/pull/228
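A minimal sketch of the consuming side; the relation key and charm
option names below are assumptions for illustration:

    from charmhelpers.contrib.openstack.context import NeutronAPIContext
    from charmhelpers.core.hookenv import config

    def security_group_log_context():
        ctxt = NeutronAPIContext()()
        if not ctxt.get('enable_security_group_logging'):
            return {}
        return {
            # rendered as the 'log' agent extension; only valid with
            # firewall-driver=openvswitch
            'rate_limit': config('security-group-log-rate-limit'),
            'burst_limit': config('security-group-log-burst-limit'),
            'output_base': config('security-group-log-output-base'),
        }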
Change-Id: Id6ed09f714981e87838186d51a4f5e693bedb1d3
Closes-Bug: #1787397
Depends-On: https://review.openstack.org/602355
Fix use of the OVS DPDK context by calling its methods directly
for OVS table values.
For modern OVS versions that require the PCI address of the
DPDK device for type=dpdk ports, use a hash of the PCI address
for the port name rather than the index of the PCI device in
the current list of devices to use; this is idempotent in the
event that the configuration changes and new devices appear
in the list of devices to use for DPDK.
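A minimal sketch of the naming scheme (helper name assumed; the point
is that the name is a stable function of the PCI address alone):

    import hashlib

    def dpdk_port_name(pci_address):
        # the name does not depend on the device's position in the
        # list, so it survives devices being added or removed
        return 'dpdk-{}'.format(
            hashlib.sha1(pci_address.encode('UTF-8')).hexdigest()[:7])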
Only set OVS table values if the value has changed; OVS will
try to re-allocate hugepage memory irrespective of whether the
table value actually changed.
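A minimal sketch of the check-before-set behaviour using ovs-vsctl:

    import subprocess

    def set_ovs_other_config(key, value):
        # reading first avoids a redundant write; any write to the
        # DPDK memory keys makes OVS re-allocate hugepages
        current = subprocess.run(
            ['ovs-vsctl', 'get', 'Open_vSwitch', '.',
             'other_config:{}'.format(key)],
            capture_output=True, text=True).stdout.strip().strip('"')
        if current != value:
            subprocess.check_call(
                ['ovs-vsctl', 'set', 'Open_vSwitch', '.',
                 'other_config:{}={}'.format(key, value)])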
Switch to using /run/libvirt-vhost-user for libvirt created DPDK
sockets, allowing libvirt to directly create the socket as part
of instance creation; Use systemd-tmpfiles to ensure that the
vhost-user subdirectory is re-created on boot with the correct
permissions.
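A minimal sketch of the boot-time handling via systemd-tmpfiles; the
conf file name, owner and mode here are illustrative assumptions:

    import subprocess

    # one tmpfiles.d entry: re-create the socket dir on every boot
    ENTRY = 'd /run/libvirt-vhost-user 0770 libvirt-qemu kvm -\n'
    CONF = '/etc/tmpfiles.d/nova-ovs-vhost-user.conf'

    with open(CONF, 'w') as f:
        f.write(ENTRY)
    # apply immediately rather than waiting for the next boot
    subprocess.check_call(['systemd-tmpfiles', '--create', CONF])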
Scan data-port and dpdk-bond-mappings for PCI devices to use
for DPDK to avoid having to replicate all PCI devices in data-port
configuration when DPDK bonds are in use.
Change-Id: I2964046bc8681fa870d61c6cd23b6ad6fee47bf4
This brings this charm inline with the neutron-gateway charm
in terms of configurability when using a local dhcp agent.
Change-Id: Idc4f7735aaa9236d8a476fd3bae6aaf52b9dc043
Closes-Bug: 1777888
The current charm does not support creating and managing bonded network
interfaces; they are managed externally. This is not possible when DPDK
is enabled: in this case OVS exposes the DPDK bond PMD, which enslaves
the corresponding attached bond interfaces.
The new dpdk-bond-mappings configuration option allows such a
configuration, where mac:bond is specified. When the data-port
configuration is processed, dpdk-bond-mappings are consulted to identify
whether the port belongs to a bond. If so, the bond is created with the
MAC-designated interface and added to the bridge. Subsequently, more
interfaces can be added to the same bond.
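A minimal sketch of parsing the mac:bond entries; since the MAC itself
contains colons, the bond name is split off from the right:

    def parse_dpdk_bond_mappings(value):
        # e.g. 'aa:bb:cc:dd:ee:ff:dpdk-bond0 11:22:33:44:55:66:dpdk-bond0'
        mappings = {}
        for entry in value.split():
            mac, bond = entry.rsplit(':', 1)
            mappings[mac.lower()] = bond
        return mappings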
Change-Id: I0224caaa1c2431c793c4f64caa7fc9e95b972fd7
Some NICs do not work with vfio-pci or uio_pci_generic, e.g. mlx4/mlx5,
which rely on the OFED or kernel (post-4.14) drivers.
In these cases we don't want to generate entries in /etc/dpdk/interfaces.
Here we change the configuration processing behavior: the charm will omit
adding entries to the aforementioned file when the value is not set.
The default value is changed to empty (i.e. None).
Change-Id: I2fb9f0404adbbee0f298729467794e172bae2d98
In some cases a cpulist doesn't contain '-' and instead lists all
cores one by one. For this kind of list, splitting by comma breaks
parse_cpu_list().
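A minimal sketch of the fixed parsing, accepting both ranged and
single-core entries:

    def parse_cpu_list(cpulist):
        # '0-3,8,10-11' -> [0, 1, 2, 3, 8, 10, 11]; a list with no
        # '-' at all, e.g. '0,1,2,3', now parses too
        cores = []
        for group in cpulist.split(','):
            if '-' in group:
                start, end = group.split('-')
                cores.extend(range(int(start), int(end) + 1))
            elif group.strip():
                cores.append(int(group.strip()))
        return cores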
Change-Id: Icc5fcf6408d76fdef34ccb18657624cfe5593f10
Closes-Bug: #1771527
This patch adds support for reading the 'enable-qos' setting from the
neutron-plugin-api relation and adding 'qos' to the extension_drivers setting
if it is True. This is part of a wider set of changes to support QoS across the
neutron charms.
A charm-helpers sync was performed to pull in the QoS update to the
NeutronAPIContext.
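A minimal sketch of the consuming logic, assuming the relation key
name exposed by NeutronAPIContext:

    from charmhelpers.contrib.openstack.context import NeutronAPIContext

    def extension_drivers():
        drivers = []
        if NeutronAPIContext()().get('enable_qos'):
            drivers.append('qos')  # rendered into [ml2] extension_drivers
        return ','.join(drivers)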
Note: Amulet tests will fail until the corresponding neutron-api change
lands
Depends-On: I1beba9bebdb7766fd95d47bf13b6f4ad86e762b5
Change-Id: I9d857a4f2a25c6080963a0f3f6e6592c0a77d133
Partial-Bug: #1705358
Adds a dns-servers config option for specifying the forwarding
DNS servers to be used by the dnsmasq services on the neutron
dhcp agent. This enables services using internal DNS to also
specify forwarding DNS servers in order to resolve hosts
outside of the neutron network space.
Note: this option only takes effect when the
enable-local-dhcp-and-metadata flag is set to True.
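A minimal sketch of mapping the charm option onto neutron's
dnsmasq_dns_servers setting in dhcp_agent.ini (option handling
assumed):

    from charmhelpers.core.hookenv import config

    def dnsmasq_dns_servers():
        # the charm option is space delimited; neutron's
        # dnsmasq_dns_servers wants a comma-separated list
        servers = config('dns-servers') or ''
        return ','.join(servers.split())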
Change-Id: I510d163dd9738477b15497b25266e73a50368539
Implements: blueprint internal-dns
Closes-Bug: #1713721
SR-IOV interfaces are currently only configured on charm
installation and not after subsequent reboots.
The VFs need to be configured before the Neutron SR-IOV
agent is started, and charms should really not be involved
in boot-time system configuration. Due to these factors,
this commit adds an init script and corresponding systemd
unit file and upstart job to handle the boot-time configuration.
Keep the configure_sriov function for runtime configuration. Add
a warning about runtime configuration disrupting network service.
Add a restart of the Neutron SR-IOV agent after runtime configuration.
Cap the value of sriov-numvfs at each interface's sriov_totalvfs
value, as sketched below.
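A minimal sketch of the capping logic against sysfs (device handling
simplified):

    import os

    def set_sriov_numvfs(device, requested):
        base = '/sys/class/net/{}/device'.format(device)
        with open(os.path.join(base, 'sriov_totalvfs')) as f:
            totalvfs = int(f.read())
        # never ask for more VFs than the NIC reports it supports
        numvfs = min(requested, totalvfs)
        with open(os.path.join(base, 'sriov_numvfs'), 'w') as f:
            f.write(str(numvfs))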
Change-Id: I7bde7217bf027db09ded35a262c214ccb11d6d86
Closes-Bug: #1697572
These options are set centrally in the neutron-api charm, and
this patch allows the neutron-openvswitch charm to consume them
as follows:
1. polling_interval
Used only by the neutron l2 agents, so the neutron-openvswitch
charm gets it via its relations and sets it in [agent] of
ml2_conf.ini or openvswitch_agent.ini (>= Mitaka).
2. rpc_response_timeout
Used by all neutron agents, so both the neutron-gateway and
neutron-openvswitch charms get it via their relations and set it
in [default] of neutron.conf.
3. report_interval
Used by all neutron agents, so both the neutron-gateway and
neutron-openvswitch charms get it via their relations and set it
in [agent] of neutron.conf.
Change-Id: I76c0c75d5f3b4fdd1eb3242b53fde2e829fedca5
Partial-Bug: #1685788
Specify the dns_domain value in dhcp_agent.ini configuration
file in order to indicate the dns search domain which should
be advertised by the dnsmasq DHCP server.
Note: for neutron-openvswitch this only takes effect when
the enable-local-dhcp-and-metadata flag is set to true.
Change-Id: If3529cf32a6e10d44c86423151cdacdad50445f8
Implements: blueprint charms-internal-dns
Add a new option to provide the ability to specify flags in the
dnsmasq.conf file. This allows users to configure the dnsmasq
processes used by the neutron-dhcp-agent when local dhcp and
metadata are enabled for provider networks.
Change-Id: I2bab8a00322afb0f81986001c86f0ef4fc535651
Closes-Bug: #1684231
Neutron has supported use of a native openvswitch firewall driver
for a few releases; OpenStack Mitaka on Ubuntu 16.04 has the
required kernel and openvswitch versions to support this feature.
Add a new firewall-driver configuration option to support use
of the openvswitch native firewall; the default remains as the
iptables_hybrid driver, and users can switch to the openvswitch
driver if they are deployed on Ubuntu Xenial or later.
Change-Id: I4c228c5cbbff7f9673c1028ee4b075edba1fdc13
Closes-Bug: 1681890
SR-IOV networking for OpenStack releases later than Mitaka requires
the use of the neutron-sriov-agent to support management of SR-IOV PF
and VF interface state by Neutron - said interfaces are still
consumed directly by nova-compute/libvirt via PCI device allocation
scheduling for instances.
Add new configuration options to the neutron-openvswitch charm to
support enablement of the SR-IOV agent; this could have been done
automatically from data presented by neutron-api, but it's possible
that cloud deployments may only have subsets of compute nodes that
are SR-IOV enabled in terms of hardware.
Enabling this option ('enable-sriov') will install and configure
the neutron-sriov-agent; configuration of SR-IOV PFs is done
using the 'sriov-numvfs' option, which by default automatically
configures all SR-IOV devices on every machine to the maximum number
of VFs supported by the device. This option can also be used to
configure devices individually, as sketched below.
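A minimal sketch of how a per-device 'sriov-numvfs' value could be
interpreted; the entry format shown is an illustrative assumption:

    def parse_sriov_numvfs(value):
        # 'auto' -> {} (configure every device to its maximum);
        # 'ens3f0:32 ens3f1:16' -> per-device VF counts
        if value == 'auto':
            return {}
        return {dev: int(n) for dev, n in
                (entry.split(':') for entry in value.split())}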
Finally, neutron needs to understand what underlying provider
network each SR-IOV device maps to - this is configured using the
sriov-device-mappings configuration option.
Change-Id: Ie185fd347ddc1b11e9ed13cefaf44fb7c8546ab0
I've added support for the 'availability_zone' parameter. I've added
a 'dhcp_agent.ini' template and implemented the parameter to be
consumed via 'neutron-plugin' relation settings.
Change-Id: I015a6dfcf89800043bd7dbf02b07da07d8a7d728
Closes-Bug: 1595937
Add neutron-control interface to allow charms to send triggers to
restart neutron services managed by this charm
Change-Id: I0e44f7cab99db4fb9b5d2764859e16b30705e6fe
All contributions to this charm were made under Canonical
copyright; switch to the Apache-2.0 license as agreed so we
can move forward with official project status.
Change-Id: I7bd44dc15ad951bf2536e5ee10de01ec592b8970
Note that this change only impacts use of this charm when
Distributed Virtual Routing is enabled in a deployment.
Switch the generated configuration to use "new" style external
networks when ext-port is not set. In this case we configure:
external_network_bridge = (intentionally blank)
gateway_external_network_id = (blank)
The current template configures external networks by using the default
external_network_bridge=br-ex (implied when not set). This activates
legacy code which assumes that a single external network exists on
that bridge and the L3 Agent directly plugs itself in.
provider:network_type, provider:physical_network and
provider:segmentation_id are ignored. You cannot create multiple
networks and you cannot use segmented networks (e.g. VLAN).
By setting external_network_bridge = (intentionally blank) the L2
Agent handles the configuration instead; this allows us to create
multiple networks and also to use more complex network configurations
such as VLAN. It is also possible to use the same physical connection
with different segmentation IDs for both internal and external
networks, as well as multiple external networks.
Legacy/existing configurations where ext-port is set generate the same
configuration as before and should continue to work unchanged.
Migration from legacy to new style configuration is not supported.
Change-Id: I3d06581850ccbe5ea77741c4a546e663b2957a91
Closes-Bug: #1536768
The shared secret context makes use of 'resolve_address' to
resolve the local_ip address of the unit; the resulting
value is not actually used in the metadata_agent.ini template
and breaks under Juju 2.0, where resolve_address attempts
to use network-get to resolve the public endpoint of the
service using extra bindings (which are not relevant for this
charm).
Drop use of resolve address and tidy templates; the default
127.0.0.1 address is fine for accessing the Nova Metadata
service from the Neutron Metadata agent proxy.
Change-Id: I03fc6d1c7c8ca832b02a7df5b1666c04aaecc589
Closes-Bug: 1580271
Check to see if a restart trigger has been sent by the principal;
if it has, then write the trigger UUID into neutron.conf to
trigger a service restart.
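A minimal sketch of the hook-side handling (relation key name
assumed):

    from charmhelpers.core.hookenv import relation_get

    def restart_trigger():
        # rendering a new UUID into neutron.conf changes the file
        # hash, which makes the restart_on_change machinery bounce
        # the managed services
        return relation_get('restart-trigger')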
Change-Id: I19649cb73dad94f4fe24412c0b8c37a28f30047d
Partial-Bug: 1571634
Add full support for DPDK; this includes a number of configuration
options to allow the number of cores and memory allocated per
NUMA node to be changed. By default, the first core and 1024MB of
RAM of each NUMA node will be configured for DPDK use.
When DPDK is enabled, OVS bridges are configured as datapath type
'netdev' rather than type 'system' to allow use of userspace
DPDK packet processing; Security groups are also disabled, as
iptables based rules cannot be applied against userspace sockets.
DPDK device binding is undertaken using /etc/dpdk/interfaces and
the dpdk init script provided as part of the DPDK package; device
resolution is driven by the data-port configuration option in
<bridge>:<mac address> format - MAC addresses are used to resolve
underlying PCI device names for binding with DPDK.
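A minimal sketch of MAC-to-PCI resolution against sysfs (virtual
devices without a PCI parent are simply skipped):

    import os

    def pci_address_for_mac(mac):
        # walk kernel netdevs and match on MAC; the device symlink
        # basename is the PCI address used for DPDK binding
        for dev in os.listdir('/sys/class/net'):
            try:
                with open('/sys/class/net/{}/address'.format(dev)) as f:
                    if f.read().strip().lower() == mac.lower():
                        link = os.readlink(
                            '/sys/class/net/{}/device'.format(dev))
                        return os.path.basename(link)  # '0000:03:00.0'
            except OSError:
                continue
        return None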
It's assumed that hugepage memory configuration is either done as
part of system boot as kernel command line options (set via MAAS)
or using the hugepages configuration option on the nova-compute
charm.
Change-Id: Ieb2ac522b07e495f1855e304d31eef59c316c0e4
Juju 2.0 provides support for network spaces, allowing
charm authors to support direct binding of relations and
extra-bindings onto underlying network spaces.
Resync charm-helpers to pick up support for the new hookenv
tools and add a 'data' extra-binding to the charm metadata.
This allows the local endpoint IP for overlay tunnels to
be configured using network spaces.
Any existing configuration of os-data-network is preferred
over the new binding support if already set.
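A minimal sketch of the preference order, using charm-helpers helper
names:

    from charmhelpers.contrib.network.ip import get_address_in_network
    from charmhelpers.core.hookenv import (
        config,
        network_get_primary_address,
    )

    def local_tunnel_ip():
        # explicit os-data-network config wins; otherwise fall back
        # to the address of the 'data' extra-binding
        if config('os-data-network'):
            return get_address_in_network(config('os-data-network'))
        return network_get_primary_address('data')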
Change-Id: I0e2e3f51106b6c6483f22ce4abd04bcb098b484e