Two generic contexts to handle nova vendor metadata have
been implemented in charm-helpers. So, replace the existing
one here in order to simplify and unify the implementation
across all charms that handle vendor metadata.
Change-Id: I2a802c763f2f4403a6dfb17575aa742ca8072e96
Related-Bug: #1777714
To have VRRP watch the external network interface today, the option
ha_vrrp_health_check_interval [1] can be enabled: when the health check
detects a failure it re-triggers the VRRP state transition, which works
when the external physical interface fails because the health-check
ping will fail.
We tried to enable this option before [2], but then had to revert it
[3] due to instability issues [4] in previous releases of OpenStack.
That instability may have been caused by another keepalived issue
mentioned in the comment [5]; having now tested this option again, it
works.
This is how neutron supports monitoring the southbound network today,
so I would suggest we add this capability into the charm again.
[1] https://docs.openstack.org/ocata/networking-guide/deploy-ovs-ha-vrrp.html#keepalived-vrrp-health-check
[2] https://review.opendev.org/#/c/601533/
[3] https://review.opendev.org/#/c/603347/
[4] https://bugs.launchpad.net/neutron/+bug/1793102
[5] https://bugs.launchpad.net/neutron/+bug/1793102/comments/5
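As an illustration, enabling the health check renders something like
the following into l3_agent.ini (the interval value here is an
example, not a recommendation):

  [DEFAULT]
  ha_vrrp_health_check_interval = 30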
Change-Id: If2947e7640545cb9a48215afb9b2439fdc33c645
Closes-Bug: 1825966
The change adds an option to the charm to use the JUJU_AVAILABILITY_ZONE
environment variable, which Juju sets in the hook environment based on
the underlying provider's availability zone information for a given
machine.
This information is used to configure the availability_zone setting for
the Neutron DHCP and L3 agents specifically, because they support it,
and it also reaches other agents such as the metadata and LBaaS agents
(because both neutron.conf and the agent-specific configuration files
are loaded).
Additionally, a setting is added to allow changing the default
availability zone, since 'nova' is the default value coming from the
Neutron defaults for agents.
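For example, for a machine that the provider places in zone 'az1', the
rendered configuration would carry something along these lines (the
zone name is illustrative):

  [AGENT]
  availability_zone = az1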
Change-Id: I94303aa70ee3adc6ace0f9af1e7c4f5c0edbcdb5
Closes-Bug: #1796068
The change turns off the local nova metadata service and uses
endpoint data received from the quantum-network-service relation
to point the neutron metadata service at the nova metadata service
on the nova cloud controller for Queens+.
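Illustratively, the metadata agent configuration ends up pointing at
the nova cloud controller with settings along these lines (the host
placeholder and port value are examples; on Queens+ the option is
nova_metadata_host):

  [DEFAULT]
  nova_metadata_host = <nova-cc-address>
  nova_metadata_port = 8775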
Depends-On: I5ad15ba782cb87b6fdb3c0941a6482d201670bff
Change-Id: I7037a20feac73f3a3f1ed1b8b1b70d0fa534bc46
We actually need this upstream feature, but we found it has
another bug (LP: #1793102), so revert it first.
This reverts commit 7b60534ce8.
Change-Id: I8d8a755e250d4d80e269c853a9d3d97c3f364d40
The option ha_vrrp_health_check_interval [1] can re-trigger
the election process until a single master is re-elected when a
multiple-masters problem appears. This is an important feature that
enables the system to recover automatically, so we should enable it.
[1] https://docs.openstack.org/ocata/networking-guide/deploy-ovs-ha-vrrp.html#keepalived-vrrp-health-check
Change-Id: Iaf15ac77e249d1fe4a5101068761302e53385642
Closes-Bug: 1732154
Using vendor metadata helps alleviate the need to spin custom images
for things like package mirrors, timezones, or network proxies.
Adds a new config option, 'vendor-data', which takes a JSON-formatted
string to be used as static vendor metadata.
Adds a new config option, 'vendor-data-url', which takes a URL that
serves dynamic JSON-formatted vendor metadata.
Adds new NovaMetadataContext class which writes
/etc/nova/vendor_data.json and enables it via nova.conf.
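For illustration, with both options set, nova.conf would enable the
vendordata providers roughly as follows (the dynamic target name and
URL are examples):

  [api]
  vendordata_providers = StaticJSON,DynamicJSON
  vendordata_jsonfile_path = /etc/nova/vendor_data.json
  vendordata_dynamic_targets = dynamic@http://example.com/vendor_data.json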
Closes-Bug: 1777714
Change-Id: I1d70804e59d42b0651a462c81e01d9c95626f27d
Refactor codebase and unit tests to default to execution
under Python 3.
Drop install shim as Python 3 is always present >= trusty.
Drop legacy dhcp and network reassignment code from the charm as
a) this relies on a py3 neutronclient (not supported on older
releases) and b) this function was superseded by ha-legacy-mode
and then by neutron's built-in router and network HA functionality.
Use the charm-helpers provided get_host_ip as this supersedes the
in-charm version of this function.
Change-Id: I0b28bf0851d44e85b1e856cbd97b71099faa76ae
This patch adds support for reading the 'enable-qos' setting from the
neutron-plugin-api relation and adding 'qos' to the extension_drivers setting
if it is True. This is part of a wider set of changes to support QoS across
the neutron charms.
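As a purely illustrative sketch of the effect (the exact file and
section depend on the templates the charm renders), enabling it
results in a setting along the lines of:

  extension_drivers = qos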
The amulet tests were missing the neutron-api to neutron-gateway
relation; this has been added. A side-effect of this is that the
l2-population setting is now properly being set to True, so the tests
were updated to expect that.
A charmhelper sync was performed to pull in the QoS update to the
NeutronAPIContext.
Note: Amulet tests will fail until the corresponding neutron-api change
lands
Depends-On: I1beba9bebdb7766fd95d47bf13b6f4ad86e762b5
Change-Id: I6dc71a96b635600b7e528a9acdfd4dc0eded9259
Partial-Bug: #1705358
Adds a dns-servers config option for specifying the forwarding
dns servers to be used by the dnsmasq services on the neutron
dhcp agent. This enables services using internal dns to also
specify the forwarding dns servers in order to resolve hosts
outside of the neutron network space.
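Illustratively, the option renders into dhcp_agent.ini along these
lines (the addresses are examples):

  [DEFAULT]
  dnsmasq_dns_servers = 10.0.0.10,10.0.0.11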
Change-Id: I3cd608b1491a45f565d5147894b8285e638eeaa7
Implements: blueprint internal-dns
Closes-Bug: #1713721
These two options are set centrally in the neutron-api charm;
this patch allows the neutron-gateway charm to keep handling them:
1. rpc_response_timeout
Used by all neutron agents, so both the neutron-gateway charm and
the neutron-openvswitch charm get it via their relations and set it
in [DEFAULT] of neutron.conf.
2. report_interval
Used by all neutron agents, so both the neutron-gateway charm and
the neutron-openvswitch charm get it via their relations and set it
in [agent] of neutron.conf.
This patch also syncs charmhelpers to support setting them centrally.
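For illustration, the rendered neutron.conf then carries values such
as the following (the numbers are examples, not charm defaults):

  [DEFAULT]
  rpc_response_timeout = 60

  [agent]
  report_interval = 30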
Change-Id: Ib97418b1aaf55f508cae05f4d7809d79a92a7f6f
Partial-Bug: #1685788
Specify the dns_domain value in the dhcp_agent.ini configuration
file in order to indicate the DNS search domain that should be
advertised by the dnsmasq DHCP server.
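For example (the domain value is illustrative):

  [DEFAULT]
  dns_domain = example.org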
Change-Id: Ic8d30fb087cce8d82960f616460d832740a00ec9
Implements: blueprint internal-dns
Expose the 'enable_metadata_network' and 'enable_isolated_metadata'
configuration options. enable_isolated_metadata enables the metadata
service on networks with no router port.
Change-Id: If773109007a456385adebf295d044247417135db
Closes-Bug: 1514901
Switch the generated configuration to use "new" style external
networks when ext-port is not set. In this case we configure
external_network_bridge = (intentionally blank),
gateway_external_network_id = (blank) and update the README with
information on using this new style of configuration.
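Illustratively, the rendered l3_agent.ini then carries empty values
for both options:

  [DEFAULT]
  external_network_bridge =
  gateway_external_network_id =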
The current template configures external networks by using the default
external_network_bridge=br-ex (implied when not set). This activates
legacy code which assumes that a single external network exists on
that bridge and that the L3 Agent plugs itself in directly.
provider:network_type, provider:physical_network and
provider:segmentation_id are ignored: you cannot create multiple
networks and you cannot use segmented networks (e.g. VLAN).
By setting external_network_bridge = (intentionally blank) the L2
Agent handles the configuration instead; this allows us to create
multiple networks and also to use more complex network configurations
such as VLAN. It is also possible to use the same physical connection
with different segmentation IDs for both internal and external
networks, as well as multiple external networks.
Legacy/existing configurations where ext-port is set generate the same
configuration as previously and should continue to work as before. I
do not believe it is easy to migrate existing setups to the "new"
style configuration automatically, as changes to the neutron network
configuration may be required (specifically: provider:physical_network
will now be used when it was not before, and may not be correct) and
the physical port needs to be moved from br-ex to br-data, which the
charm does not currently handle and is likely to error on as it does
not attempt removal first. Further work may be possible in this area.
For information about this new style of configuration being preferred,
see discussions in LP#1491668, LP#1525059 and
http://docs.openstack.org/liberty/networking-guide/scenario-classic-ovs.html
Change-Id: I8d2bb8098e080969e0445293b1ed79714b2c964f
Related-Bug: #1491668
Related-Bug: #1525059
Closes-Bug: #1536768
Add the 'data' extra-binding to support binding of Open vSwitch
endpoints to a specific Juju network space.
Change-Id: Iff6567c2c9b353d729cc9b73d0523d6f72946d98
Includes dropping support for quantum and the nvp plugin (renamed
nsx long ago), and generally refactoring the unit tests around no
longer having to deal with neutron and quantum in the same codebase.
Drop support for database connections - these are no longer
required as all DB access is now via RPC to nova-conductor
or neutron-server.
Roll up configuration file templates for releases earlier than
icehouse, removing any that are no longer required.
Refactor basic_deployment a bit as it was using the shared-db
relation to retrieve the n-gateway units' private-address.
Change-Id: I22957c0e21c4dd49e5aa74795173b4fc8f043f55