Add support for the 'availability_zone' parameter: add a
'dhcp_agent.ini' template and implement the parameter to be consumed
via 'neutron-plugin' relation settings.
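As a rough illustration (not the charm's verbatim code), the parameter
might be read from the relation and fed into the dhcp_agent.ini template
context along these lines; the function name is assumed:

    # Sketch only: consume 'availability_zone' from the
    # 'neutron-plugin' relation using standard charm-helpers calls.
    from charmhelpers.core.hookenv import (
        relation_ids,
        related_units,
        relation_get,
    )


    def dhcp_agent_context():
        """Build the template context for dhcp_agent.ini."""
        ctxt = {}
        for rid in relation_ids('neutron-plugin'):
            for unit in related_units(rid):
                az = relation_get('availability_zone', rid=rid, unit=unit)
                if az:
                    ctxt['availability_zone'] = az
        return ctxt

The template would then render the value into dhcp_agent.ini when it is
present in the context.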
Change-Id: I015a6dfcf89800043bd7dbf02b07da07d8a7d728
Closes-Bug: 1595937
Add neutron-control interface to allow charms to send triggers to
restart neutron services managed by this charm
Change-Id: I0e44f7cab99db4fb9b5d2764859e16b30705e6fe
All contributions to this charm were made under Canonical
copyright; switch to the Apache-2.0 license as agreed so we
can move forward with official project status.
Change-Id: I7bd44dc15ad951bf2536e5ee10de01ec592b8970
Note that this change only impacts use of this charm when
Distributed Virtual Routing is enabled in a deployment.
Switch the generated configuration to use "new" style external
networks when ext-port is not set. In this case we configure:
external_network_bridge = (intentionally blank)
gateway_external_network_id = (blank)
The current template configures external networks by using the default
external_network_bridge=br-ex (implied when not set). This activates
legacy code which assumes that a single external network exists on
that bridge and the L3 Agent directly plugs itself in.
provider:network_type, provider:physical_network and
provider:segmentation_id are ignored. You cannot create multiple
networks and you cannot use segmented networks (e.g. VLAN).
By setting external_network_bridge = (intentionally blank), the L2
Agent handles the configuration instead; this allows us to create
multiple networks and to use more complex network configurations
such as VLAN. It is also possible to use the same physical connection
with different segmentation IDs for both internal and external
networks, as well as multiple external networks.
Legacy/existing configurations where ext-port is set generate the same
configuration as previously and should continue to work as before.
Migration from legacy to new style configuration is not supported.
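A minimal sketch of the selection logic described above, assuming a
charm-helpers style context function (names are illustrative, not the
charm's exact code):

    # Sketch: choose legacy vs "new" style external network
    # configuration based on whether ext-port is set.
    from charmhelpers.core.hookenv import config


    def l3_agent_context():
        ctxt = {}
        if config('ext-port'):
            # Legacy: single external network plugged directly into br-ex.
            ctxt['external_network_bridge'] = 'br-ex'
        else:
            # New style: leave both options blank so the L2 agent wires
            # up external networks, enabling multiple and segmented
            # (e.g. VLAN) external networks.
            ctxt['external_network_bridge'] = ''
            ctxt['gateway_external_network_id'] = ''
        return ctxt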
Change-Id: I3d06581850ccbe5ea77741c4a546e663b2957a91
Closes-Bug: 1536768
The shared secret context makes use of 'resolve_address' to
resolve the local_ip address of the unit; the resulting
value is not actually used in the metadata_agent.ini template
and breaks under Juju 2.0, where resolve_address attempts
to use network-get to resolve the public endpoint of the
service using extra bindings (which are not relevant for this
charm).
Drop use of resolve_address and tidy the templates; the default
127.0.0.1 address is fine for accessing the Nova Metadata
service from the Neutron Metadata agent proxy.
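For illustration, the simplified context might look something like
this (class shape, helper name and secret path are assumptions, not
the charm's exact code):

    # Sketch of a shared secret context with no address resolution.
    from charmhelpers.contrib.openstack.context import OSContextGenerator


    def get_shared_secret():
        # Hypothetical helper: read the locally stored metadata secret.
        with open('/etc/neutron/secret.txt') as f:
            return f.read().strip()


    class SharedSecretContext(OSContextGenerator):
        def __call__(self):
            # No resolve_address() call: under Juju 2.0 it invoked
            # network-get against extra bindings this charm does not
            # use, and metadata_agent.ini never consumed the result.
            return {'shared_secret': get_shared_secret()}

With nova_metadata_ip simply defaulting to 127.0.0.1 in the template,
no address resolution is needed at all.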
Change-Id: I03fc6d1c7c8ca832b02a7df5b1666c04aaecc589
Closes-Bug: 1580271
Check whether a restart trigger has been sent by the principal charm;
if it has, write the trigger UUID into neutron.conf to trigger a
service restart.
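A hedged sketch of that check (relation name as described above; the
helper shape and 'restart-trigger' key are assumptions):

    # Sketch: fetch a restart trigger UUID from the principal, if set.
    from charmhelpers.core.hookenv import (
        relation_ids,
        related_units,
        relation_get,
    )


    def restart_trigger():
        """Return the restart trigger UUID from the principal, if any."""
        for rid in relation_ids('neutron-plugin'):
            for unit in related_units(rid):
                trigger = relation_get('restart-trigger', rid=rid, unit=unit)
                if trigger:
                    return trigger
        return None

Rendering the returned UUID into neutron.conf (for example as a
comment) changes the file contents, so the charm's restart-on-change
handling restarts the managed neutron services.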
Change-Id: I19649cb73dad94f4fe24412c0b8c37a28f30047d
Partial-Bug: 1571634
Add full support for DPDK; this includes a number of configuration
options to allow the number of cores and memory allocated per
NUMA node to be changed. By default, the first core and 1024MB of
RAM of each NUMA node will be configured for DPDK use.
When DPDK is enabled, OVS bridges are configured as datapath type
'netdev' rather than type 'system' to allow use of userspace
DPDK packet processing; Security groups are also disabled, as
iptables-based rules cannot be applied against userspace sockets.
DPDK device binding is undertaken using /etc/dpdk/interfaces and
the dpdk init script provided as part of the DPDK package; device
resolution is determined from the data-port configuration option
using the <bridge>:<mac address> format - MAC addresses are used
to resolve the underlying PCI device names for binding with DPDK.
It's assumed that hugepage memory configuration is either done as
part of system boot as kernel command line options (set via MAAS)
or using the hugepages configuration option on the nova-compute
charm.
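As an illustration of the MAC-to-PCI resolution step, a sketch based
on the standard Linux sysfs layout (not the charm's exact
implementation):

    # Sketch: resolve a MAC address to its PCI device address via sysfs.
    import os

    SYS_NET = '/sys/class/net'


    def pci_address_for_mac(mac):
        """Return the PCI address of the NIC with the given MAC, or None."""
        for dev in os.listdir(SYS_NET):
            try:
                with open(os.path.join(SYS_NET, dev, 'address')) as f:
                    if f.read().strip().lower() != mac.lower():
                        continue
            except IOError:
                continue
            dev_link = os.path.join(SYS_NET, dev, 'device')
            if not os.path.islink(dev_link):
                continue  # virtual devices (e.g. lo) have no backing device
            # The symlink target ends in the PCI address, e.g. 0000:00:04.0
            return os.path.basename(os.readlink(dev_link))
        return None

The resolved address can then be written to /etc/dpdk/interfaces so the
dpdk init script binds the device at boot.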
Change-Id: Ieb2ac522b07e495f1855e304d31eef59c316c0e4
Juju 2.0 provides support for network spaces, allowing
charm authors to support direct binding of relations and
extra-bindings onto underlying network spaces.
Resync charm-helpers to pick up support for the new hookenv
tools and add a 'data' extra-binding to the charm metadata.
This allows the local endpoint IP for overlay tunnels to
be configured using network spaces.
Any existing os-data-network configuration is preferred over the new
binding support.
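A sketch of that preference order, assuming charm-helpers primitives of
the era (network_get_primary_address raises NotImplementedError on
Juju versions without network-get):

    # Sketch: prefer os-data-network, else the 'data' extra-binding.
    from charmhelpers.core.hookenv import (
        config,
        network_get_primary_address,
    )
    from charmhelpers.contrib.network.ip import get_address_in_network


    def local_tunnel_ip(fallback):
        os_data_network = config('os-data-network')
        if os_data_network:
            # Existing configuration wins over the new binding support.
            return get_address_in_network(os_data_network, fallback)
        try:
            return network_get_primary_address('data')
        except NotImplementedError:
            # Older Juju without network-get support.
            return fallback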
Change-Id: I0e2e3f51106b6c6483f22ce4abd04bcb098b484e