Neutron-ovs-agent can now enable IGMP snooping on the integration bridge
if the config option "igmp_snooping_enable" in the OVS section of the
config is set to True.
It will also set mcast-snooping-disable-flood-unregistered=true so that
flooding of multicast packets to all unregistered ports is disabled as
well.
Both changes are applied to the integration bridge.
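For example, in the agent configuration (an illustrative excerpt; the
file is typically openvswitch_agent.ini):

    [OVS]
    igmp_snooping_enable = True

With this enabled, the agent configures the integration bridge roughly
as the following command would (a sketch, assuming the default bridge
name br-int):

    ovs-vsctl set Bridge br-int mcast_snooping_enable=true \
        other_config:mcast-snooping-disable-flood-unregistered=true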
Change-Id: I12f4030a35d10d1715d3b4bfb3ed5efb9aa28f2b
Closes-Bug: #1840136
(cherry picked from commit 5b341150e2)
There is a race condition between nova-compute booting an instance and
the l3-agent processing the DVR (local) router on the compute node.
This issue can be seen when a large number of instances are booted on
the same host and belong to different DVR routers, so the l3-agent has
to process all of these DVR routers on that host concurrently.
Currently the router ResourceProcessingQueue has a green pool with
8 greenlets, so some of these routers can still be left waiting; even
worse, the router processing procedure includes time-consuming actions
such as installing ARP entries, iptables rules, route rules, etc.
So when the VM is up, it tries to fetch metadata via the local proxy
hosted by the DVR router, but the router is not ready yet on that host,
and those instances end up unable to set up some configuration in the
guest OS.
This patch derives the L3 router processing queue green pool size from
the number of routers. The pool size is limited to the range from
8 (the original value) to 32, because we do not want the L3 agent to
consume too many host resources processing routers on the compute node.
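A minimal sketch of the sizing logic (hypothetical function name; the
actual implementation in the L3 agent may differ):

    def calculate_pool_size(num_routers):
        # Scale the green pool with the number of hosted routers, but
        # keep it within the [8, 32] range so the agent does not consume
        # too many host resources on a compute node.
        return max(8, min(32, num_routers))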
Conflicts:
neutron/tests/functional/agent/l3/test_legacy_router.py
Related-Bug: #1813787
Change-Id: I62393864a103d666d5d9d379073f5fc23ac7d114
(cherry picked from commit 837c9283ab)
Patch [1] introduced a new mechanism which only brings interfaces UP
on the master node of an HA router. It works fine with keepalived 1.x
but is broken when keepalived 2.x is used (e.g. on CentOS 8), as in
this new version of keepalived all interfaces of VIPs and routes are
tracked by default, and if one of them is DOWN, keepalived goes into
FAULT state. Because of that, the router is never transitioned to
MASTER on any node.
This patch fixes it by adding the "no_track" option to all VIPs
and routes in keepalived's config file.
The "no_track" option is not added to the HA interface, so that one
is still tracked by keepalived.
[1] https://review.opendev.org/#/c/707406/
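For illustration, the generated keepalived configuration then contains
entries along these lines (addresses and device names are made up):

    virtual_ipaddress_excluded {
        10.0.0.1/24 dev qr-xxxxxxxx no_track
    }
    virtual_routes {
        0.0.0.0/0 via 172.24.4.1 dev qg-xxxxxxxx no_track
    }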
Closes-bug: #1874211
Change-Id: Ic16cf83fe1d1576d91047adb2d4f9e07d57185b6
(cherry picked from commit dc9084a8ec)
As described in the bug, when an HA router transitions from "master" to
"backup", the "keepalived" processes will set the virtual IP on all
other HA routers. Each HA router will then advertise it and "keepalived"
will decide, according to a trivial algorithm (higher interface IP),
which one should be "master". At this point, the other "keepalived"
processes running on the other servers will remove the HA router virtual
IP assigned an instant before.
To avoid transitioning some routers from "backup" to "master" and then
back to "backup" in a very short period, this patch delays the "backup"
to "master" transition, waiting for a possible new "backup" state. If,
during the waiting period before setting the HA state to "master" (set
to the HA VRRP advert time, 2 seconds by default), the L3 agent receives
a new "backup" HA state, the L3 agent does nothing.
Conflicts:
neutron/agent/l3/agent.py
Closes-Bug: #1837635
Change-Id: I70037da9cdd0f8448e0af8dd96b4e3f5de5728ad
(cherry picked from commit 3f022a193f)
(cherry picked from commit adac5d9b7a)
- This change updates _set_bridge_name to set
the bridge name field in the vif binding details.
- This change adds the integration_bridge name
to the agent configuration report.
Closes-Bug: #1788009
Closes-Bug: #1856152
(cherry picked from commit 995744c576)
Change-Id: I454efcb226745c585935d5bd1b3d378f69a55ca2
Increased timeouts for OVSDB connection:
- ovsdb_timeout = 30
This patch will mitigate the intermittent timeouts the CI is
experiencing while running the functional tests.
Change-Id: I97a1d170926bb8a69dc6f7bb78a785bdea80936a
Closes-Bug: #1815142
(cherry picked from commit 30e901242f)
In TestOVSAgent, there are two tests where the OVS agent is
configured and started twice per test. Before the second call,
the agent should be stopped first.
Depends-On: https://review.opendev.org/667216/
Change-Id: I30c2bd4ce3715cde60bc0cd3736bd9c75edc1df3
Closes-Bug: #1830895
(cherry picked from commit b77c79e5e8)
(cherry picked from commit ff66205081)
The ovs-agent scans and processes the ports during the first rpc_loop,
and a local port update notification is sent out. This causes these
ports to be processed again in the ovs-agent's next (second) rpc_loop.
This patch passes the restart flag (iteration number 0) down the local
port_update call trace. After this patch, the local port_update
notification is ignored in the first RPC loop.
Related-Bug: #1813703
Change-Id: Ic5bf718cfd056f805741892a91a8d45f7a6e0db3
(cherry picked from commit eaf3ff5786)
In the DVR scenario, if a port has a bound floating IP and a port
forwarding is then created for it, the port forwarding will not work
because the traffic is redirected by the DVR rules.
This patch restricts such API requests: when a user tries to create a
port forwarding for a port, check first whether the port already has a
bound floating IP.
This check is run for all types of routers, since neutron should not
let users waste a public IP address on a port which already has a
floating IP; that floating IP can handle all the protocol port numbers.
Conflicts:
neutron/services/portforwarding/pf_plugin.py
Closes-Bug: #1799137
Change-Id: I4ba4b023d79185f8d478d60ce16417d3501bf785
(cherry picked from commit b8d2ab8543)
When a new DVR serviceable port appears on a new node, we need to
update that node's l3 agent with all routers which have the port's
subnets, including connected routers.
We don't need to update all nodes hosting these routers: that is
costly, as all l3 agents then go back to the neutron server and request
router info for no good reason.
This was one of the main DVR-at-scale issues fixed in Mitaka.
Change-Id: I99d01d7bf29f236eff0f80d1ae8659f64ac55d39
Related-Bug: #1830456
(cherry picked from commit 52529bc949)
In functional tests for the L3 HA agent, e.g.
L3HATestFailover.test_ha_router_failover,
it may happen that the L3 agent has not yet changed the ipv6 accept_ra
knob and the test fails because it checks that only once, just after
the router state is changed.
This patch fixes that race by waiting up to 60 seconds for the
ipv6 accept_ra change.
Conflicts:
neutron/tests/functional/agent/l3/framework.py
Change-Id: I459ce4b791c27b1e3d977e0de9fbdb21a8a379f5
Closes-Bug: #1829889
(cherry picked from commit 62b2f2b1b1)
The new IP command introduced by Ie3fe825d65408fc969c478767b411fe0156e9fbc
requires only privsep initialization. This patch removes the privsep
error FailedToDropPrivileges raised when executed under
neutron-rootwrap.
Closes-Bug: #1823038
Change-Id: I6cde3c9dae7ffdccce49e88c3c79d1c379f291cf
(cherry picked from commit aacd11ab9f)
1. Give each HA failover case an independent vrrp_id.
2. Give each HA port an independent IP address, so that the
interface IPs for the router HA ports will be:
169.254.192.100 and 169.254.192.101
169.254.192.102 and 169.254.192.103
169.254.192.104 and 169.254.192.105
169.254.192.106 and 169.254.192.107
VIP of each case will be:
169.254.0.10/24
169.254.0.11/24
169.254.0.12/24
169.254.0.13/24
169.254.0.14/24
Conflicts:
neutron/tests/common/l3_test_common.py
Closes-Bug: #1819160
Change-Id: I1216d96af40449ec16a852cc1f6c4f15c85f4546
(cherry picked from commit c69a87405a)
(cherry picked from commit 2c5957f56d)
When two routers are created at the same time, we can't assume the
status of each one. Instead, the status of each router is first checked
and then compared to the other router's status.
Change-Id: If20a3a414986ea29fbfd50616761c14e5b249b2c
Closes-Bug: #1819160
(cherry picked from commit 8f35331c91)
The test bridge veth pair devices are not up, which prevents the VRRP
advertisement packets from reaching each HA port, so multiple master
routers come up. This patch simply sets the veth pair devices up.
Closes-Bug: #1819160
Change-Id: I0e0d0311d73bce83d3c7341e7a0167917818b1ff
(cherry picked from commit 8cc480bd01)
If a port has a port forwarding and the port is under a DVR router,
then binding a floating IP to this port will not be allowed.
Change-Id: Ia014e18264b43cf751a5bc0e82bc55d106582620
Closes-Bug: #1799138
(cherry picked from commit 433228dd78)
In the netlink_lib functional tests module, conntrack entries are
listed and asserted against an expected list.
It may happen that additional entries from other tests are also in the
list, and that causes failures of the netlink_lib tests.
So this patch changes the way those assertions are done. Now it checks
that each of the expected entries is in the listed entries and, for the
delete-entries tests, it also checks that none of the deleted entries
is still in the list.
Change-Id: I30c18f141a8356b060902e6493ba0657b21619ad
Closes-Bug: #1817295
(cherry picked from commit 798c6c731f)
Port creation can result in an IP address anywhere in the CIDR,
especially with a small subnet, and the subsequent "next N IPs" actions
can then fall outside the subnet IP range. This patch gives the
original test port a specific IP address to prevent this issue.
Closes-Bug: #1812404
Change-Id: I34cb99a518d4469c7d1ca9e2897671608b2b81ad
(cherry picked from commit 63ea9d7bcc)
The ovs-agent can be very time-consuming when handling a large number
of ports. At that point, the ovs-agent status report may have exceeded
the configured timeout value, and some flow-updating operations will
not be triggered. This results in flow loss during agent restart,
especially for the host-to-host vxlan tunnel flows.
This fix lets the ovs-agent explicitly indicate, in the first rpc loop,
that its status is "restarted". l2pop is then required to update the
fdb entries.
Conflicts:
neutron/plugins/ml2/rpc.py
Closes-Bug: #1813703
Closes-Bug: #1813714
Closes-Bug: #1813715
Closes-Bug: #1794991
Closes-Bug: #1799178
Change-Id: I8edc2deb509216add1fb21e1893f1c17dda80961
(cherry picked from commit a5244d6d44)
In the HA router functional tests, the
L3AgentTestFramework._router_lifecycle method asserted that the HA
router initially has no IPs configured in the router's namespace.
That could lead to test failures because sometimes the keepalived
process switched the router from standby to master before this
assertion was done, and the IPs were already configured.
There is almost no value in this assertion, as it runs just after the
router was created, so it is "normal" that no IP addresses are
configured yet.
Because of that, this patch removes the assertion.
Change-Id: Ib509a7226eb94483a0aaf2d930f329e419b8e135
Closes-Bug: #1816489
(cherry picked from commit e6351ab11e)
When an HA router is created in "standby" mode, ipv6 forwarding is
disabled by default in its namespace.
But when the router is transitioned to "master" on a node, ipv6
forwarding should be enabled. This was fine for routers with a
configured gateway, but we somehow missed the case when the router has
no gateway configured.
Because of the missing ipv6 forwarding setting in that case, IPv6
east-west traffic between 2 subnets was not working in the L3 HA case.
This patch fixes it by always configuring ipv6_forwarding on the "all"
interface in the router's namespace, even if the router has no gateway
configured.
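For reference, this corresponds to setting the following sysctl knob
inside the router's namespace (a sketch; the namespace name is made up):

    ip netns exec qrouter-<uuid> sysctl -w net.ipv6.conf.all.forwarding=1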
Conflicts:
neutron/tests/functional/agent/l3/framework.py
Change-Id: I8b1b2b426f7a26a4b2407a83f9bf29dd6e9ba7b0
Closes-Bug: #1818224
(cherry picked from commit b119247bea)
Sometimes, in the case of HA routers, it may happen that keepalived
sets the status of the router to MASTER before the
neutron-keepalived-state-change daemon has spawned "ip monitor" to
monitor changes of the IPs in the router's namespace.
In such a case the neutron-keepalived-state-change process never
notices that keepalived set the router to MASTER, the L3 agent is not
notified about it, and the router is not configured properly.
To avoid this race condition, neutron-keepalived-state-change will
now check whether the VIP address is already configured on the ha
interface before it spawns "ip monitor". If it is already configured
by keepalived, it will notify the L3 agent that the router is set to
MASTER.
Change-Id: Ie3fe825d65408fc969c478767b411fe0156e9fbc
Closes-Bug: #1818614
(cherry picked from commit 8fec1ffc83)
Removing an active or a standby HA router from an agent that has a
valid DVR serviceable port (such as DHCP) does not remove the HA
interface associated with the router in the SNAT namespace.
When we try to add the HA router back to the agent, more than one HA
interface is then added to the SNAT namespace, causing more problems,
and we sometimes also see multiple active routers.
This bug might have been introduced by patch [1].
Fix the problem by adding the router namespaces without HA interfaces
when there is no HA, and re-inserting the HA interfaces into the
namespace when the HA router is bound to the agent.
[1] https://review.openstack.org/#/c/522362/
Closes-Bug: #1816698
Change-Id: Ie625abcb73f8185bb2bee06dcd26a01d8af0b0d1
(cherry picked from commit d9e0bab6ac)
In some cases our db migration tests which run on MySQL fail with a
timeout, and this happens due to the slow VMs on which the job is
running.
Sometimes it may also happen that the timeout exception is raised in
the middle of some sqlalchemy operations and
sqlalchemy.InterfaceError is raised as the last one.
Details about this exception can be found in [1].
To avoid many rechecks for this reason, this patch introduces a new
decorator which is very similar to "unstable_test" but skips the test
only if one of the exceptions mentioned above is raised.
In all other cases it fails the test.
That should be a bit safer for us because we will not miss other
failures raised in those tests, and we will still avoid rechecks for
this "well-known" reason described in the related bug.
[1] http://sqlalche.me/e/rvf5
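A minimal sketch of such a decorator (hypothetical name; simplified
compared to the real helper):

    import functools

    import fixtures
    from sqlalchemy import exc as sqla_exc

    def skip_if_timeout(reason):
        """Skip the test only when it dies with a timeout, or with the
        sqlalchemy InterfaceError that a mid-operation timeout can
        surface as; any other exception still fails the test."""
        def decorator(f):
            @functools.wraps(f)
            def wrapper(self, *args, **kwargs):
                try:
                    return f(self, *args, **kwargs)
                except (fixtures.TimeoutException,
                        sqla_exc.InterfaceError):
                    self.skipTest(reason)
            return wrapper
        return decorator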
Conflicts:
neutron/tests/functional/db/test_migrations.py
Change-Id: Ie291fda7d23a696aaa1160d126a3cf72b08c522f
Related-Bug: #1687027
(cherry picked from commit c0fec67672)
An external network can have more than one subnet. Currently only the
first subnet is added to the FIP namespace routing table. Packets for
FIPs with addresses in other subnets can't pass through the external
port because there is no route for those FIP CIDRs.
This change adds routes for those CIDRs via the external port IP and
interface.
These routes don't collide with the existing ones, which were added to
provide a return path for packets with a destination IP matching a FIP.
E.g.:
$ ip netns exec fip-e1ec0f98-b593-4514-ae08-f1c5cf1c2788 ip route
(1) 169.254.106.114/31 dev fpr-3937f879-d proto kernel scope link \
src 169.254.106.115
(2) 192.168.20.250 via 169.254.106.114 dev fpr-3937f879-d
(3) 192.168.30.0/24 dev fg-bee060f1-dd proto kernel scope link \
src 192.168.30.129
(4) 192.168.20.0/24 via 192.168.30.129 dev fg-bee060f1-dd scope link
Route (2) is added when a FIP is assigned. This route permits ingress
packets going into the router namespace. This FIP belongs to the second
subnet of the external network (note that the external port CIDR is not
the same). Route (4), added by this patch, allows egress packets to exit
the FIP namespace through the external port. Route (2), because of its
prefix length (32), takes precedence over route (4).
Change-Id: I4d476b47e89fa5709dca2f66ffae72a27d88340a
Closes-Bug: #1805456
(cherry picked from commit 97c98a1c6d)
When the external gateway is plugged and we enable IPv6
forwarding on it, make sure the 'all' sysctl knob is also
enabled, else IPv6 packets will not be forwarded. This
seems to only affect HA routers that default to disabling
this 'all' knob on creation.
Also, when we are removing all the IPv6 addresses from an HA router
internal interface, set 'accept_ra' to zero so it doesn't accidentally
auto-configure an address. Set it back to one when adding them back.
Re-homed newly added _wait_until_ipv6_forwarding_has_state()
accordingly.
Conflicts:
neutron/tests/functional/agent/l3/test_ha_router.py
Closes-bug: #1787919
Change-Id: Ia1f311ee31d1479089685367a97bf13cf170b342
(cherry picked from commit b847cd02c5)
When a deployment has instance ports that are neutron trunk ports with
DPDK vhu in vhostuserclient mode, nova deletes the ovs port when the
instance reboots and recreates it when the host comes back from the
reboot. This quick transition can trigger a race condition that causes
the tbr trunk bridge to be deleted after the port has been recreated.
See the bug for more details.
This change mitigates the race condition by adding a check for active
service ports within the trunk port deletion function.
Change-Id: I70b9c26990e6902f8888449bfd7483c25e5bff46
Closes-Bug: #1807239
(cherry picked from commit bd2a1bc6c3)
It may happen that the L3 agent works in dvr_snat mode but handles
some router as a "normal" dvr router because snat for this router is
handled on another node.
In such a case we shouldn't try to get the floating IP cidrs from the
snat namespace, as it doesn't exist on the host.
Change-Id: Ib27dc223fcca56030ebb528625cc927fc60553e1
Related-Bug: #1717302
(cherry picked from commit 7d0e1ccd34)
With DVR routers, if a port is associated with a FloatingIP before it
is used by a VM, the FloatingIP is initially set up in the Network Node
SNAT Namespace, since the port is not yet bound to any host.
When the port is later attached to a VM, the port gets its host
binding; the FloatingIP setup should then be migrated to the Compute
host and the original FloatingIP in the Network Node SNAT Namespace
should be cleared.
But the original FloatingIP setup in the SNAT Namespace was not cleared
by the agent.
This patch addresses the issue.
Change-Id: I55a16bcc0020087aa1abe76f5bc85cd64ccdaecd
Closes-Bug: #1796491
(cherry picked from commit cd0cc47a6a)
When 2 dvr routers are connected to each other by a tenant network,
those routers always need to be deployed on the same compute nodes.
So this patch changes the dvr router scheduler so that it creates the
dvr router on each host on which there are vms or other dvr routers
connected to the same subnets.
Co-Authored-By: Swaminathan Vasudevan <SVasudevan@suse.com>
Closes-Bug: #1786272
Change-Id: I579c2522f8aed2b4388afacba34d9ffdc26708e3
(cherry picked from commit 5018d70241)
In the test_ha_router_namespace_has_ipv6_forwarding_disabled
functional test it may happen that the L3 agent has not yet changed
ipv6 forwarding and the test fails because it checks that only once,
just after the router state is changed to master.
This patch fixes that race by waiting up to 60 seconds for the
ipv6 forwarding change.
Change-Id: I85a602561ebe9b7ab135913af49a3f010b09f196
Closes-Bug: #1801930
(cherry picked from commit 916e774516)
A subnet that is free of port forwardings could not be removed from a
router when other subnets on the router had port forwardings. This
patch fixes it by checking the router interface's subnet and IP
address.
Co-Authored-By: LIU Yulong <i@liuyulong.me>
Closes-Bug: #1799140
Change-Id: Idace35126bb00139fa1f9f48be3aa3aab265b9d8
(cherry picked from commit f5d3a4159b)
Patch [1] increased the timeout for the test_walk_version functional
tests for the MySQL backend to 300 seconds to avoid failures due to
timeouts.
Unfortunately it looks like, on nodes from some cloud providers used in
the gate and with the number of migration scripts we have in Neutron,
those tests can sometimes take even around 400 seconds.
So let's increase this to 600 seconds to avoid such failures of the
functional tests job.
[1] https://review.openstack.org/#/c/610003/
Change-Id: I9d129f0e90a072ec980aadabb2c6b812c08e1618
Closes-Bug: #1687027
(cherry picked from commit c39afbd5fc)
Extra routes are not configured in router namespaces on the dvr_snat
node with a DVR-HA configuration.
This patch fixes the problem.
Change-Id: If620b23564479042aa6f58640bcd6705e5eb52cf
Closes-Bug: #1797037
(cherry picked from commit 81652cd939)
In Neutron we quite often hit the same issue as Manila; see [1] for
details.
It looks like the solution for this problem may be to increase the
timeout for the test_walk_version functional tests.
The higher timeout is applied to the tests for both the pgsql and mysql
backends, but it is mostly needed for mysql because 'pymysql' is much
slower on slow nodes than 'psycopg2'.
This patch also adds a new decorator to set an individual timeout for
tests.
[1] https://bugs.launchpad.net/manila/+bug/1501272
Change-Id: I5f344af6dc3e5a6ee5f52c250b6c719e1b43e02d
Closes-Bug: #1687027
(cherry picked from commit c2c37272bf)
For an L3 DVR HA router, the centralized floating IP nat rules are not
installed in the snat namespace on every HA node. So, install the rules
in all the router snat namespaces on every scheduled HA router host.
Conflicts:
neutron/tests/common/l3_test_common.py
neutron/tests/functional/agent/l3/test_dvr_router.py
Closes-Bug: #1793527
Change-Id: I08132510b3ed374a3f85146498f3624a103873d7
(cherry picked from commit ee7660f593)
This change is a follow-up to Ib6ced838a7ec6d5c459a8475318556001c31bdf,
reintroducing a single place for applying the NORMAL action to
egress traffic, which is necessary to fix a regression introduced
by Ib6ced838a7ec6d5c459a8475318556001c31bdf.
Change-Id: I60d299275effd9ef35c8007773d3c9fcabfa50fa
Partial-Bug: 1789878
Packets sent to table 91 are considered accepted by the egress
pipeline, and the NORMAL action is used by default in this table.
However, if we create a security group logging resource, log flows are
added to this table with a higher priority. Packets matching those log
flows are therefore sent to the CONTROLLER and never forwarded.
So this patch appends action=NORMAL to the log flows, so that the
packet is forwarded as well as sent to the CONTROLLER for logging.
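For illustration only (match fields and values are made up), a log flow
in table 91 would then look roughly like:

    table=91, priority=10, reg5=0x1 actions=CONTROLLER:65535,NORMAL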
Closes-Bug: #1787106
Change-Id: I6e95e2e646ec8a5507c7f140ab2c4a56be8404c3
(cherry picked from commit 7d2ac2d0af)
This patch fixes race conditions when updating/deleting several
neutron resources, such as a port forwarding conflicting with a
floatingip and a port forwarding conflicting with a port.
This approach also needs the revision function, so the port forwarding
model is fixed to be aware of relationship revision updates.
The port forwarding resource is associated with 2 resources:
one is the floatingip, the other is the neutron internal port.
So a floatingip update/delete may conflict with a port forwarding
creation. For the port, we simply lack the logic to process port
forwardings when updating the port's fixed_ip and deleting the port.
So the approach here is to add logic that lets the l3 plugin and the
port forwarding plugin know about each other when both sides may
process the same floatingip resource. Based on the existing
revision_number feature, if one side fails with a db stale data error,
the api layer will retry the whole operation for that resource, so
there must be a failure on one side in this case. This patch just adds
the association logic for the l3 plugin and the port forwarding plugin,
and also adds an event receiver for port update/delete.
The behavior for the resources associated with a port forwarding is
then:
* For the fip resource, this patch introduces the function
_check_floatingip_request.
During floatingip update/delete, this function processes the fip and
checks, via an rpc callback from the l3_plugin, whether the port
forwarding plugin is also creating a port forwarding with the same fip
at that moment. The successful side is the one that updates the fip_db
first; the other side fails after the db retry.
* For the port resource, during a port fixed_ip update or a port
delete, we delete the associated port forwarding resources to free
the fip:external_port socket.
Partially-Implements: blueprint port-forwarding
Change-Id: I637ebcb33b91d899a077bded5ca10097a830a847
Partial-Bug: #1491317
This patch contains the l3 agent extension and the agent-side code.
This patch introduces a new l3 agent extension named "port_forwarding"
to process the binding of the port forwarding resources, and to manage
its own floatingip configuration on the router interface as well as
the floatingip status.
Currently, we support all Neutron Router reference implementations.
This extension uses the periodic router sync task and the
PortForwarding OVO rpc.
* The main idea of this new extension is to use the generic router sync
rpc to maintain the host's port forwarding resources.
* A single port forwarding create/update/delete is processed one by one
in a smaller scope, to avoid frequently refreshing iptables with a
larger scope.
Partially-Implements: blueprint port-forwarding
Partial-Bug: #1491317
Change-Id: Ic56e67d428f6177099c285a9d1bccabc1e710f2b
This patch implements the plugin.
This patch introduces a new service plugin for port forwarding
resources, named 'pf_plugin', which supports create/update/delete
port forwarding operations towards a free Floating IP.
This patch includes the following work:
* Introduces the portforwarding extension and the base class of the
plugin
* Introduces the portforwarding plugin, supporting CRUD of port
forwarding resources
* Adds the policy for portforwarding
The race issue is fixed in:
https://review.openstack.org/#/c/574673/
The FIP port forwarding field extension is added in:
https://review.openstack.org/#/c/575326/
Partially-Implements: blueprint port-forwarding
Change-Id: Ibc446f8234bff80d5b16c988f900d3940245ba89
Partial-Bug: #1491317
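For reference, enabling the feature in a deployment looks roughly like
the following (an illustrative excerpt; exact file names and option
values depend on the release and installer):

    # neutron.conf
    [DEFAULT]
    service_plugins = router,port_forwarding

    # l3_agent.ini
    [agent]
    extensions = port_forwarding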