When an HA router is created in "standby" mode, IPv6 forwarding is
disabled by default in its namespace.
But when the router transitions to "master" on a node, IPv6
forwarding should be enabled. This was fine for routers with a
configured gateway, but we somehow missed the case when the router
doesn't have a gateway configured.
Because of that missing IPv6 forwarding setting in such a case, IPv6
east-west traffic between 2 subnets was not working in the L3 HA case.
This patch fixes it by always configuring ipv6_forwarding on the
"all" interface in the router's namespace, even if it doesn't have a
gateway configured.
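A minimal sketch of the idea (the helper below is illustrative, not
the actual Neutron code): enable IPv6 forwarding on the "all"
interface inside the router's namespace, regardless of the gateway
configuration:

    import subprocess


    def enable_ipv6_forwarding(namespace, device='all'):
        # Illustrative only: the real agent uses its own ip_lib/sysctl
        # helpers; here we simply shell out to sysctl in the namespace.
        subprocess.check_call(
            ['ip', 'netns', 'exec', namespace, 'sysctl', '-w',
             'net.ipv6.conf.%s.forwarding=1' % device])

    # e.g. when the router transitions to master:
    # enable_ipv6_forwarding('qrouter-<router uuid>')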
Change-Id: I8b1b2b426f7a26a4b2407a83f9bf29dd6e9ba7b0
Closes-Bug: #1818224
This patch implements a devstack plugin for the network-segment-range
API. The network-segment-range API service is based on the
network-segment-range spec [1].
[1] https://specs.openstack.org/openstack/neutron-specs/specs/stein/network-segment-range-management.html
Co-authored-by: Allain Legacy <Allain.legacy@windriver.com>
Partially-implements: blueprint network-segment-range-management
Change-Id: I09116a4323763db12917e03f354cf0ef25289fd0
Since iptables-restore doesn't support --dport with protocol vrrp,
it errors out setting the security groups on the hypervisor.
Marking this a partial fix, since we need a change to prevent
adding those incompatible rules in the first place, but this
patch will stop the bleeding.
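A hedged sketch of the follow-up change mentioned above (the helper
name and protocol list are assumptions, not Neutron code): only emit
--dport matches for protocols that actually support ports, so
protocols such as vrrp never produce a rule that iptables-restore
rejects:

    # Protocols for which iptables understands port matching.
    PORT_PROTOCOLS = ('tcp', 'udp', 'udplite', 'sctp', 'dccp')


    def build_protocol_match(protocol, port_min=None, port_max=None):
        rule = ['-p', protocol]
        if port_min and protocol in PORT_PROTOCOLS:
            # vrrp (and other port-less protocols) skip this branch, so
            # no invalid '--dport' option is ever generated.
            rule += ['--dport', '%s:%s' % (port_min, port_max or port_min)]
        return rule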
Change-Id: If5e557a8e61c3aa364ba1e2c60be4cbe74c1ec8f
Partial-Bug: #1818385
This patch makes the necessary changes to the ML2 type drivers and the
plugin manager to support the network segment range extension when it
is loaded. When the network segment range extension is not loaded,
there is no impact on the current flow.
When the extension is loaded:
- a range managed from the configuration file [1]_, such as
  "VLAN IDs", "VXLAN VNI IDs", "GRE tunnel IDs" or "Geneve VNI IDs",
  is populated into the network segment range DB table as a "default"
  and "shared" entry to maintain backward compatibility;
- the "default" segment ranges are reloaded when the Neutron server
  starts/restarts;
- a set of "default" network segment ranges is created out of the
  ML2-config-file-defined ranges [1]_, and the segment allocation
  operations always retrieve the information from the DB so that the
  network segment ranges are fully administered via the API;
- when a tenant allocates a segment, it is first allocated from an
  available segment range assigned to the tenant, and then from a
  shared range if no tenant-specific allocation is possible (see the
  sketch below).
[1] /etc/neutron/plugins/ml2/ml2_conf.ini
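A minimal sketch of the allocation order described in the last item
above (the range structure and function name are assumptions for
illustration, not the actual ML2/DB code):

    def allocate_segment(segment_ranges, project_id):
        # Prefer ranges assigned to the tenant, then fall back to shared
        # ranges; each range is assumed to carry a list of still-available
        # segmentation ids.
        tenant_ranges = [r for r in segment_ranges
                         if r.get('project_id') == project_id]
        shared_ranges = [r for r in segment_ranges
                         if r.get('shared') and r not in tenant_ranges]
        for srange in tenant_ranges + shared_ranges:
            if srange['available']:
                return srange['available'].pop(0)
        return None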
Co-authored-by: Allain Legacy <Allain.legacy@windriver.com>
Partially-implements: blueprint network-segment-range-management
Change-Id: I522940fc4d054f5eec1110eb2c424e32e8ae6bad
Drive the choice of mechanism driver during binding as inferred from
the resource provider allocated by nova and as told to neutron via the
port's binding:profile.
As discussed in a neutron QoS IRC meeting some time ago,
this patch introduces a new assumption on bind_port() implementations.
That is, an implementation of bind_port() in any mech driver
supporting Guaranteed Minimum Bandwidth must not have non-idempotent
side effects, because the last binding level will be redone a second
time with a narrowed-down list of mechanism drivers. If the second
call does not give the same result as the first, all kinds of weird
things can happen.
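A hedged sketch of what that assumption means for a driver (the class
below is illustrative, not a real mechanism driver): bind_port()
should only compute and set the binding, with no other side effects,
so repeating the last binding level with a narrowed driver list is
safe:

    class SketchMechanismDriver(object):
        """Illustration of an idempotent bind_port() implementation."""

        def bind_port(self, context):
            for segment in context.segments_to_bind:
                if self._can_bind(segment):
                    # A pure "set" call: running it a second time with
                    # the same inputs yields the same binding.
                    context.set_binding(segment['id'], 'ovs',
                                        {'port_filter': True})
                    return

        def _can_bind(self, segment):
            return segment.get('network_type') in ('flat', 'vlan', 'vxlan')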
Change-Id: I2b7573ec6795170ce45a13d5d0ad7844fb85182d
Depends-On: https://review.openstack.org/574781
Depends-On: https://review.openstack.org/635160
Partial-Bug: #1578989
See-Also: https://review.openstack.org/502306 (nova spec)
See-Also: https://review.openstack.org/508149 (neutron spec)
Sometimes, when the OVSDB is too loaded (which can happen during the
functional tests), there is a delay between the end of an OVSDB post
transaction and the moment the register (new or updated) can be read.
Although this should not happen (considering the OVSDB is
transactional), tests should deal with this inconvenience and provide
a robust method to retrieve a value and check it at the same time.
This new method should provide a retry mechanism to read the value
again in case of discordance.
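A minimal sketch of such a retrieve-and-check helper (names are
illustrative; get_value is any callable performing the OVSDB read):

    import time


    def wait_for_ovsdb_value(get_value, expected, retries=5, interval=0.5):
        # Re-read the register until it matches the expected value, to
        # absorb the delay between the transaction end and readability.
        value = None
        for _ in range(retries):
            value = get_value()
            if value == expected:
                return value
            time.sleep(interval)
        raise AssertionError('OVSDB value %r never became %r'
                             % (value, expected))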
In order to solve the gate problem ASAP, another bug is fixed in this
patch: the QoS removal is skipped when the OVS agent is initialized
during functional tests.
When executing functional tests, several OVS QoS policies specific to
minimum bandwidth rules [1] are created. Because during the functional
tests execution several threads can create more than one minimum
bandwidth QoS policy (something that cannot happen in a production
environment), the OVS QoS driver must skip the execution of [2] to
avoid removing other QoS policies created in parallel by other tests.
This patch is marking as unstable "test_min_bw_qos_policy_rule_lifecycle"
and "test_bw_limit_qos_port_removed". Those tests will be investigated
once the CI gates are stable.
[1] Those QoS policies are created only to hold minimum bandwidth rules.
Those policies are marked with:
external_ids: {'_type'='minimum_bandwidth'}
[2] d6fba30781/neutron/plugins/ml2/drivers/openvswitch/agent/extension_drivers/qos_driver.py (L43)
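A hedged illustration of the skip described above (the row format and
flag are assumptions, not the actual driver code): on agent
initialization under functional tests, QoS registers marked as
minimum-bandwidth ones are left alone because they may belong to a
test running in parallel:

    def qos_rows_to_delete(qos_rows, skip_min_bw_cleanup):
        to_delete = []
        for row in qos_rows:
            ext_ids = row.get('external_ids', {})
            if (skip_min_bw_cleanup and
                    ext_ids.get('_type') == 'minimum_bandwidth'):
                continue  # possibly owned by another test, keep it
            to_delete.append(row)
        return to_delete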
Closes-Bug: #1818613
Closes-Bug: #1818859
Related-Bug: #1819125
Change-Id: Ia725cc1b36bc3630d2891f86f76b13c16f6cc37c
In the functional test environment, it seems the L3 agent cannot
handle 30+ routers in the test test_router_processing_pool_size;
it still hits timeouts in some processing procedures.
Router initialization/processing/deletion is not the purpose of this
test, so we just mock those operations.
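A minimal sketch of the mocking approach (the agent and method names
are stand-ins, not the exact test code):

    from unittest import mock


    class FakeL3Agent(object):
        def _process_added_router(self, router):
            pass  # the real method configures namespaces, iptables, etc.

        def _safe_router_removed(self, router_id):
            pass  # the real method tears the router down


    def test_pool_size_sketch():
        agent = FakeL3Agent()
        # Router initialize/process/delete is not what the test verifies,
        # so those heavy operations are replaced with mocks.
        with mock.patch.object(agent, '_process_added_router') as added, \
                mock.patch.object(agent, '_safe_router_removed') as removed:
            agent._process_added_router({'id': 'r1'})
            agent._safe_router_removed('r1')
            added.assert_called_once()
            removed.assert_called_once()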
Closes-Bug: #1816239
Change-Id: I85dc6fd9d98a6a13bbf35ee2e67ce6f69be48dde
Removing an active or a standby HA router from an agent that has a
valid DVR serviceable port (such as DHCP) does not remove the
HA interface associated with the router in the SNAT namespace.
When we try to add the HA router back to the agent, it then
adds more than one HA interface to the SNAT namespace, causing
more problems, and we sometimes also see multiple active routers.
This bug might have been introduced by this patch [1].
Fix the problem by adding the router namespace without HA
interfaces when the router is not HA-bound to the agent, and by
re-inserting the HA interfaces into the namespace when the HA router
is bound to the agent.
[1] https://review.openstack.org/#/c/522362/
Closes-Bug: #1816698
Change-Id: Ie625abcb73f8185bb2bee06dcd26a01d8af0b0d1
The version currently used is old and does not work on Bionic nodes.
But as Xenial kernels do not include the fix for local VXLAN tunnels
(bug/1684897), we still have to use a locally compiled version.
On Xenial nodes, the Queens UCA repository has openvswitch 2.9.0
On Bionic nodes, we have 2.9.2
So use the latest 2.9 release for fullstack testing
Change-Id: Ifb61daa1f14969a1d09379599081e96053488f9f
Closes-Bug: #1818632
This patch adds support for network segment range CRUD. Subsequent
patches will use this network segment range on segment allocation if
this extension is loaded.
Changes include:
- an API extension which exposes the segment range to be administered;
- standard attributes with tagging support for the new resource;
- a new service plugin "network_segment_range" for the feature
enabling/disabling;
- a new network segment range DB table model along with operation
logic;
- Oslo Versioned Objects for network segment range data model;
- policy-in-code support for network segment range.
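A hedged sketch of what a network segment range resource could look
like when created through the new API (field names follow the
blueprint spec; treat them as illustrative rather than authoritative):

    segment_range_request = {
        'network_segment_range': {
            'name': 'physnet0-vlan-range',
            'shared': False,
            'project_id': '<project uuid>',
            'network_type': 'vlan',
            'physical_network': 'physnet0',
            'minimum': 100,
            'maximum': 199,
        }
    }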
Co-authored-by: Allain Legacy <Allain.legacy@windriver.com>
Partially-implements: blueprint network-segment-range-management
Change-Id: I75814e50b2c9402fe6776229d469745d7a72290b
It may help debug some issues related to keepalived and/or
dnsmasq, which log to the journal only.
Change-Id: I42c311f9111e0a0d1a6ea3a7aeab0fef8d77c549
While the initial version of this patch removed neutron.db.api, a
different duplicate patch [1] landed first.
This patch cleans up the remaining references to neutron.db.api,
including those in the docs and comments.
[1] https://review.openstack.org/#/c/635978/
Change-Id: I5f911f4c6a1fc582a9c1006ec5e2880853ff2909
In [1], a new init parameter was introduced in the class
OVSAgentExtensionAPI. This change in the extension API can break
backwards compatibility with other projects (networking_sfc and
bagpipe are affected).
Because this parameter is needed only by the qos_driver extension when
calling OVSAgentExtensionAPI.request_phy_brs() (to retrieve the
physical bridges list), we can make this new parameter optional so as
not to break other stadium projects. When the OVS agent is initialized
(in-tree agent), the extension is called with all three needed
parameters.
[1] https://review.openstack.org/#/c/406841/22/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_agent_extension_api.py@43
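A minimal sketch of the backwards-compatible signature (attribute
handling and parameter names are illustrative; only
request_phy_brs() is taken from the text above):

    class OVSAgentExtensionAPI(object):

        def __init__(self, int_br, tun_br, phys_brs=None):
            # The new argument defaults to None so out-of-tree callers
            # that still pass only two arguments keep working; the
            # in-tree OVS agent passes all three.
            self.int_br = int_br
            self.tun_br = tun_br
            self.phys_brs = phys_brs or {}

        def request_phy_brs(self):
            # Only the qos_driver extension needs the physical bridges.
            return list(self.phys_brs.values())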
Change-Id: I31d1a31a935fdcdd12e13e1bc58f7c5f640ca092
Closes-Bug: #1818693
When the L3 agent is running in dvr_snat mode on a compute node,
as it is e.g. in some of the gate jobs, it may happen that the
same router is scheduled in standby mode on a compute node while
an instance connected to it runs on that same node.
In such a case the metadata proxy needs to be spawned in the router
namespace even if the router is in standby mode.
Change-Id: Id646ab2c184c7a1d5ac38286a0162dd37d72df6e
Closes-Bug: #1817956
Closes-Bug: #1606741
Unfortunately it still sometimes fails because the restart still
happened in the very short pause between agents.
I will need to figure out some other possible solution for that issue.
This reverts commit bdd3540554.
Change-Id: Iaf9d1be3255e941c5fe227943535ab7c6905253c
Today, if live migration has failed after an inactive
binding was created on the destination node but before
the activation of the created binding, the port's binding level
for the destination host is not cleared during nova's API call
to neutron to delete the port binding.
This causes future attempts to perform live migration
of the instance to the same host to fail.
This change removes the port binding level object during port binding
deletion.
Closes-Bug: #1815345
Change-Id: Idd55f7d24a2062c08ac8a0dc2243625632d962a5
In the netlink_lib functional tests module, conntrack entries are
listed and asserted against an expected list.
It may happen that some additional entries from other tests are also
in the list, and that causes failures of the netlink_lib tests.
So this patch changes the way those assertions are done. Now it
checks that each of the expected entries is in the entries list, and
in the case of the delete-entries tests, it also checks that none of
the deleted entries is actually in the list.
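A minimal sketch of the new assertion style (helper names are
illustrative, not the actual test code):

    def assert_entries_present(expected_entries, listed_entries):
        for entry in expected_entries:
            assert entry in listed_entries, (
                'missing conntrack entry: %r' % (entry,))


    def assert_entries_deleted(deleted_entries, listed_entries):
        for entry in deleted_entries:
            assert entry not in listed_entries, (
                'conntrack entry still present: %r' % (entry,))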
Change-Id: I30c18f141a8356b060902e6493ba0657b21619ad
Closes-Bug: #1817295