These options are set centrally in the neutron-api charm, and
this patch allows the neutron-openvswitch charm to consume them
as follows:
1. polling_interval
Used only by the neutron L2 agents, so the neutron-openvswitch
charm gets it via its relations and sets it in [agent] of
ml2_conf.ini or openvswitch_agent.ini (>= Mitaka).
2. rpc_response_timeout
Used by all neutron agents, so both the neutron-gateway and
neutron-openvswitch charms get it via their relations and set it
in [DEFAULT] of neutron.conf.
3. report_interval
Used by all neutron agents, so both the neutron-gateway and
neutron-openvswitch charms get it via their relations and set it
in [agent] of neutron.conf.
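For illustration, the rendered configuration on a unit would carry
values along these lines (the numbers shown are example values, not
necessarily the charm defaults):

  # ml2_conf.ini or openvswitch_agent.ini (>= Mitaka)
  [agent]
  polling_interval = 2

  # neutron.conf
  [DEFAULT]
  rpc_response_timeout = 60

  [agent]
  report_interval = 30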
Change-Id: I76c0c75d5f3b4fdd1eb3242b53fde2e829fedca5
Partial-Bug: #1685788
Specify the dns_domain value in the dhcp_agent.ini configuration
file in order to indicate the DNS search domain which should
be advertised by the dnsmasq DHCP server.
Note that for neutron-openvswitch this only takes effect when
the enable-local-dhcp-and-metadata flag is set to true.
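As a rough sketch of the intended result (the domain value is a
placeholder and the exact key/section placement follows the charm's
dhcp_agent.ini template):

  # dhcp_agent.ini
  [DEFAULT]
  dns_domain = cloud.internal.example.com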
Change-Id: If3529cf32a6e10d44c86423151cdacdad50445f8
Implements: blueprint charms-internal-dns
Add a new option that allows flags to be specified in the
dnsmasq.conf file. This allows users to configure the dnsmasq
processes used by the neutron-dhcp-agent when local DHCP and
metadata are enabled for provider networks.
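For example, assuming the option is exposed under a key along the
lines of 'dnsmasq-flags' (check config.yaml for the exact name; the
value shown is an ordinary dnsmasq setting):

  juju config neutron-openvswitch dnsmasq-flags="cache-size=500"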
Change-Id: I2bab8a00322afb0f81986001c86f0ef4fc535651
Closes-Bug: #1684231
Neutron has supported use of a native openvswitch firewall driver
for a few releases; OpenStack Mitaka on Ubuntu 16.04 has the
required kernel and openvswitch versions to support this feature.
Add a new firewall-driver configuration option to support use
of the native openvswitch firewall; the default remains the
iptables_hybrid driver, and users can switch to the openvswitch
driver if they are deployed on Ubuntu Xenial or later.
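For example, on a Xenial or later deployment the native driver can
be selected with (application name assumed to be neutron-openvswitch):

  juju config neutron-openvswitch firewall-driver=openvswitch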
Change-Id: I4c228c5cbbff7f9673c1028ee4b075edba1fdc13
Closes-Bug: 1681890
SR-IOV networking for OpenStack releases later than Mitaka requires
the use of the neutron-sriov-agent to support management of SR-IOV PF
and VF interface state by Neutron - said interfaces are still
consumed directly by nova-compute/libvirt via PCI device allocation
scheduling for instances.
Add new configuration options to the neutron-openvswitch charm to
support enablement of the SR-IOV agent; this could have been done
automatically from data presented by neutron-api, but it's possible
that cloud deployments may only have subsets of compute nodes that
are SR-IOV enabled in terms of hardware.
Enabling this option ('enable-sriov') will install and configure
the neutron-sriov-agent; configuration of SR-IOV PFs is made
using the 'sriov-numvfs' option, which by default automatically
configures all SR-IOV devices on every machine to the maximum number
of VFs supported by the device. This option can also be used to
configure devices at an individual level.
Finally, neutron needs to understand what underlying provider
network each SR-IOV device maps to - this is configured using the
sriov-device-mappings configuration option.
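A minimal sketch of enabling the agent and mapping a PF to a
provider network follows; the interface name, VF count, physnet name
and value formats are illustrative only, so check the option
descriptions in config.yaml for the exact syntax:

  juju config neutron-openvswitch enable-sriov=true
  juju config neutron-openvswitch sriov-numvfs="ens3f0:16"
  juju config neutron-openvswitch sriov-device-mappings="physnet2:ens3f0"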
Change-Id: Ie185fd347ddc1b11e9ed13cefaf44fb7c8546ab0
Add support for the 'availability_zone' parameter: a
'dhcp_agent.ini' template is added and the parameter is consumed
via 'neutron-plugin' relation settings.
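As a sketch, the rendered template then carries the availability
zone received over the relation, along the lines of (the zone name
is a placeholder):

  # dhcp_agent.ini
  [AGENT]
  availability_zone = nova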
Change-Id: I015a6dfcf89800043bd7dbf02b07da07d8a7d728
Closes-Bug: 1595937
Add a neutron-control interface to allow charms to send triggers
to restart neutron services managed by this charm.
Change-Id: I0e44f7cab99db4fb9b5d2764859e16b30705e6fe
All contributions to this charm were made under Canonical
copyright; switch to the Apache-2.0 license as agreed so we
can move forward with official project status.
Change-Id: I7bd44dc15ad951bf2536e5ee10de01ec592b8970
Note that this change only impacts use of this charm when
Distributed Virtual Routing is enabled in a deployment.
Switch the generated configuration to use "new" style external
networks when ext-port is not set. In this case we configure:
external_network_bridge = (intentionally blank)
gateway_external_network_id = (blank)
The current template configures external networks by using the default
external_network_bridge=br-ex (implied when not set). This activates
legacy code which assumes that a single external network exists on
that bridge and the L3 Agent directly plugs itself in.
provider:network_type, provider:physical_network and
provider:segmentation_id are ignored. You cannot create multiple
networks and you cannot use segmented networks (e.g. VLAN).
By setting external_network_bridge = (intentionally blank), the L2
Agent handles the configuration instead; this allows us to create
multiple networks and also to use more complex network configurations
such as VLAN. It is also possible to use the same physical connection
with different segmentation IDs for both internal and external
networks, as well as multiple external networks.
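For example, with the new style configuration an operator can
create a VLAN-segmented external network where the provider
attributes are honoured (network name, physnet and VLAN ID below
are placeholders):

  neutron net-create ext-net --router:external \
      --provider:network_type vlan \
      --provider:physical_network physnet1 \
      --provider:segmentation_id 200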
Legacy/existing configurations where ext-port is set generate the
same configuration as previously and should continue to work as before.
Migration from legacy to new style configuration is not supported.
Change-Id: I3d06581850ccbe5ea77741c4a546e663b2957a91
Closes-Bug: #1536768
The shared secret context makes use of 'resolve_address' to
resolve the local_ip address of the unit; the resulting
value is not actually used in the metadata_agent.ini template
and breaks under Juju 2.0, where resolve_address attempts
to use network-get to resolve the public endpoint of the
service using extra bindings (which are not relevant for this
charm).
Drop use of resolve_address and tidy templates; the default
127.0.0.1 address is fine for accessing the Nova Metadata
service from the Neutron Metadata agent proxy.
Change-Id: I03fc6d1c7c8ca832b02a7df5b1666c04aaecc589
Closes-Bug: 1580271
Check to see if a restart trigger has been sent by the principal
charm; if it has, write the trigger uuid into neutron.conf to
trigger a service restart.
Change-Id: I19649cb73dad94f4fe24412c0b8c37a28f30047d
Partial-Bug: 1571634
Add full support for DPDK; this includes a number of configuration
options to allow the number of cores and memory allocated per
NUMA node to be changed. By default, the first core and 1024MB of
RAM of each NUMA node will be configured for DPDK use.
When DPDK is enabled, OVS bridges are configured as datapath type
'netdev' rather than type 'system' to allow use of userspace
DPDK packet processing; security groups are also disabled, as
iptables-based rules cannot be applied against userspace sockets.
DPDK device binding is undertaken using /etc/dpdk/interfaces and
the dpdk init script provided as part of the DPDK package; device
resolution is performed via the data-port configuration option
using the <bridge>:<mac address> format - MAC addresses are used
to resolve underlying PCI device names for binding with DPDK.
It's assumed that hugepage memory configuration is either done as
part of system boot as kernel command line options (set via MAAS)
or using the hugepages configuration option on the nova-compute
charm.
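As an illustration of the device binding described above (bridge
name and MAC address are placeholders, and the option assumed here
to switch DPDK on is 'enable-dpdk' - check config.yaml for the
exact key):

  juju config neutron-openvswitch enable-dpdk=true
  juju config neutron-openvswitch data-port="br-phynet1:a0:36:9f:dd:37:a8"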
Change-Id: Ieb2ac522b07e495f1855e304d31eef59c316c0e4
Juju 2.0 provides support for network spaces, allowing
charm authors to support direct binding of relations and
extra-bindings onto underlying network spaces.
Resync charm-helpers to pick up support for new hookenv
tools and add a 'data' extra-binding to the charm metadata.
This allows the local endpoint IP for overlay tunnels to
be configured using network spaces.
Any existing configuration of os-data-network is preferred
over the new binding support if already set.
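For example, with Juju 2.0 the overlay endpoint can be placed on a
specific network space at deploy time ('overlay-space' is a
placeholder space name):

  juju deploy neutron-openvswitch --bind "data=overlay-space"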
Change-Id: I0e2e3f51106b6c6483f22ce4abd04bcb098b484e