Back in Newton, patch [1] added to the agents the ability to report in
their heartbeat messages whether hybrid plug of the ports is required
or not.
Usage of the "firewall_driver" option by mechanism drivers (so on the
server's side) was kept just for backward compatibility.
But as we are now about 4 years past [1], it should be safe to do a
small cleanup: remove usage of this option in the neutron server and
stop confusing users about where this config option has to be set and
why.
[1] https://review.opendev.org/#/c/311814/
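For illustration, a mechanism driver can read the hybrid plug
requirement straight from the agent heartbeat instead of the
server-side option; a minimal sketch (key name shown for illustration):

    def hybrid_plug_required(agent):
        # The agent reports in its heartbeat whether hybrid plug is
        # needed; no server-side "firewall_driver" lookup required.
        return agent.get('configurations', {}).get('ovs_hybrid_plug',
                                                   False)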
Change-Id: I2ccc4c8784c64858acaa3c3431cf9a3d13e5e154
OVNL3RouterPlugin inherits from L3_NAT_dbonly_mixin, which inherits
from neutron.extensions.l3.RouterPluginBase
As the maintenance task expects OVNL3RouterPlugin to behave like
RouterPluginBase, add_router_interface() should have the signature:
add_router_interface(self, context, router_id, interface_info)
Note: With this change, the default behavior of OVNL3RouterPlugin's
_add_neutron_router_interface becomes idempotent: multiple calls to add
the same interface will not fail. Because of that, the unit test
test_router_add_interface_dup_port no longer makes sense and is being
removed.
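A minimal sketch of the expected method shape (the body is simplified):

    def add_router_interface(self, context, router_id, interface_info):
        # Signature matches neutron.extensions.l3.RouterPluginBase.
        # _add_neutron_router_interface() is idempotent: re-adding the
        # same interface is a no-op rather than an error.
        return self._add_neutron_router_interface(
            context, router_id, interface_info)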
Closes-Bug: #1876148
Change-Id: I8010113b4d8c66ecbccf3126f322a8836d92e7ba
Signed-off-by: Flavio Fernandes <flaviof@redhat.com>
The patch adds a short-lived connection in the pre-fork routine that
creates the neutron_pg_drop Port Group. Later, after the workers are
spawned, each worker also creates a short-lived connection and waits
for an event signalling that the Port Group has been created.
The short-lived IDLs limit their tables to only the relevant ones, so
they don't fetch the whole OVS DB into the local copy.
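A rough sketch, assuming the python-ovs IDL is used directly, of how
such a table-limited short-lived connection could be built (names and
wiring are simplified):

    import ovs.db.idl

    def make_short_lived_idl(remote, schema_path):
        # Register only the tables we care about so the local replica
        # does not mirror the whole OVN DB.
        helper = ovs.db.idl.SchemaHelper(location=schema_path)
        helper.register_table('Port_Group')
        return ovs.db.idl.Idl(remote, helper)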
Closes-bug: #1866068
Change-Id: I1f5af36b8c3d5650f890edfed3c33dc206869824
Signed-off-by: Jakub Libosvar <libosvar@redhat.com>
Now that we are python3 only, we should move to using the built-in
version of mock that supports all of our testing needs and
remove the dependency on the "mock" package.
This patch moves all references to "import mock" to
"from unittest import mock". It also cleans up some newline
inconsistencies.
Fixed an inconsistency in the OVSBridge.deferred() definition
as it needs to also have an *args argument.
Fixed an issue where an l3-agent test was mocking
functools.partial, causing a python3.8 failure.
Unit tests only; removing it from tests/base.py affects
functional tests, which need additional work.
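The change itself is mechanical, e.g.:

    # Before
    import mock

    # After
    from unittest import mock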
Change-Id: I40e8a8410840c3774c72ae1a8054574445d66ece
The delete_port() method from OVNClient has a potential problem of
leaving stale ports when RowNotFound is raised while deleting the port
from the OVN database. Since the exception is not granular enough,
the RowNotFound could be raised from other objects that are part of
the same transaction (such as ACLs, DNS entries, etc.), resulting in
the revision for the port being deleted even though the port is still
in the database.
Instead of silently passing on the RowNotFound exception, this patch
logs the error and re-raises it without deleting the revision.
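A simplified sketch of the new error handling (the callables stand in
for the real OVNClient internals):

    import logging

    from ovsdbapp.backend.ovs_idl import idlutils

    LOG = logging.getLogger(__name__)

    def delete_port(context, port_id, ovn_delete, delete_revision):
        try:
            ovn_delete(port_id)
        except idlutils.RowNotFound:
            LOG.error("Deleting port %s from OVN failed: row not found",
                      port_id)
            raise  # re-raise instead of passing; keep the port revision
        delete_revision(context, port_id)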
Change-Id: I25b93b7c080403fc38365b638e4e03298b447d0f
Partial-Bug: #1874733
Signed-off-by: Lucas Alvares Gomes <lucasagomes@gmail.com>
The field "in_use" is added to "subnet" DB definition. This DB
register column is a flag used to mark a register as in use
by other transaction. When a write DB transaction writes any
value on this field, the register is locked for any other
concurrent transaction. If two DB transactions try to set this
column at the same time, one of them will fail.
This DB lock is implemented in "subnet" and is used during the
subnet deletion and the port IP assignation, where all the port
network subnets are retrieved to provide an IP address on the subnet
CIDR.
As reported in the related bug, it was possible to assign an IP
to a port and, before the port creation command finished, delete the
subnet where the IP belonged. This patch introduces this subnet lock
during the IP assignation and at the beginning of the subnet deletion
process. At the end of both transactions, the DB engine checks if the
lock operation (write "in_use" column) is possible or the subnet
register was already requested by other DB transaction.
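Conceptually, the lock is a write on the new column; a sketch with
SQLAlchemy (not the exact Neutron code):

    import sqlalchemy as sa

    def lock_subnet_row(session, subnet_table, subnet_id):
        # subnet_table is an illustrative Table with the new "in_use"
        # column. Writing the column takes a row-level write lock, so a
        # concurrent subnet deletion and an IP allocation on that subnet
        # cannot both commit; rowcount 0 means the row is already gone.
        result = session.execute(
            sa.update(subnet_table).
            where(subnet_table.c.id == subnet_id).
            values(in_use=True))
        if result.rowcount != 1:
            raise RuntimeError('Subnet %s no longer exists' % subnet_id)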
Change-Id: I45a724917389814e83400f5854ada175dfce2b7b
Closes-Bug: #1865891
Prior to this patch, the OVN driver wasn't accounting for the VNIC
types VNIC_DIRECT_PHYSICAL and VNIC_MACVTAP. These types should work
the same way as the VNIC_DIRECT type from the OVN driver's perspective.
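In other words, the driver now treats these VNIC types alike (constants
from neutron-lib; the grouping below is illustrative):

    from neutron_lib.api.definitions import portbindings

    # VNIC types handled the same way by the OVN driver.
    OVN_DIRECT_LIKE_VNIC_TYPES = (
        portbindings.VNIC_DIRECT,
        portbindings.VNIC_DIRECT_PHYSICAL,
        portbindings.VNIC_MACVTAP,
    )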
Closes-Bug: #1874065
Change-Id: Idb596b5a80a3155bc9cdee1e082506701e730f00
Signed-off-by: Lucas Alvares Gomes <lucasagomes@gmail.com>
From the comments, this code existed to have API compatibility
between the native openflow and ovs-ofctl of_interface drivers,
but since the latter was removed, this code is no longer
necessary. Remove the tunnel bridge code now; the integration
bridge code needs further work.
Change-Id: I692789e35a4be8872ec72ffb10bc5488cab05f2b
The QoS OVN client extension is moved to the ML2 driver. This
extension is called from the OVN driver on the following events:
- create port
- update port
- delete port
- update network
The QoS OVN client extension now can accept several rules per policy
as documented in the SUPPORTED_RULES. The QoS OVN client extension
can write one OVN QoS rule per flow direction, and each OVN QoS
row can hold both a bandwidth limit rule and a DSCP marking rule.
The "update_policy" method is called from the OVN QoS driver, when
a QoS policy or its rules are updated.
The QoS OVN client extension updates the OVN QoS rows exclusively,
based on the related events.
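As an illustration, one OVN QoS row per direction can carry both rule
types (the structure below is a simplified sketch, not the driver's
exact data model):

    # One OVN QoS row per direction; each row may combine the bandwidth
    # limit and the DSCP marking of the Neutron policy.
    ovn_qos_rules = {
        'egress': {'rate': 10000, 'burst': 1000, 'dscp': 16},
        'ingress': {'rate': 5000, 'burst': None, 'dscp': None},
    }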
Closes-Bug: #1863852
Change-Id: I4833ed0c9a2741bdd007d4ebb3e8c1cb4c30d4c7
Only reschedule gateways/update segments when things have changed
that would require those actions.
Co-Authored-By: Terry Wilson <twilson@redhat.com>
Change-Id: I62f53dbd862c0f38af4a1434d453e97c18777eb4
Closes-bug: #1861510
Closes-bug: #1861509
We can now revert this patch, because the main cause has already been
fixed in core OVN [1]. With that fix, the ARP responder flows are not
installed in the LS pipeline when the LSP has port security disabled
and an 'unknown' address is set in its addresses column.
This makes MAC spoofing possible.
[1] https://patchwork.ozlabs.org/patch/1258152/
This reverts commit 03b87ad963.
Change-Id: Ie4c87d325b671348e133d62818d99af147d50ca2
Closes-Bug: #1864027
The "old" parameter passed to the handle_ha_chassis_group_changes()
method is a delta object and sometimes it does not contain the
"external_ids" column (because it hasn't changed).
The absence of that column was misleading the method into believing
that the "old" object was no longer a gateway chassis, and that
triggered some changes in the HA group. Changing the HA group resulted
in the SR-IOV (external, in OVN) ports flapping between the gateway
chassis.
This patch adds a check to verify that the "external_ids" column has
changed before acting on it; otherwise the update is ignored and the
method simply returns.
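Roughly (names other than "external_ids" are simplified):

    def handle_ha_chassis_group_changes(self, event, row, old):
        # "old" is a delta: it only carries the columns that changed.
        # If external_ids did not change, there is nothing to do here.
        if not hasattr(old, 'external_ids'):
            return
        ...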
Closes-Bug: #1869389
Change-Id: I3f7de633e5546dc78c3546b9c34ea81d0a0524d3
Signed-off-by: Lucas Alvares Gomes <lucasagomes@gmail.com>
The check _is_virtual_port_supported() prevented us from clearing the
addresses field while the port was an OVN LB VIP port.
The virtual port type should be set only when the port is an Octavia
Amphora VIP port.
Change-Id: Id6dd29650951855d13498a7206f6d1dde7db7864
Closes-Bug: #1863893
oslo-utils "isotime" is deprecated [1]. "datetime.datetime.isoformat"
should be used instead.
[1] 382370781b/oslo_utils/timeutils.py (L45-L49)
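For example:

    # Before (deprecated)
    from oslo_utils import timeutils
    timestamp = timeutils.isotime()

    # After
    import datetime
    timestamp = datetime.datetime.utcnow().isoformat()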
Change-Id: Iaaab299298c6528ea56e4b212674f492dfe517b7
The flow where a port is unbound from an agent running on a SmartNIC
was not implemented, causing representor ports to remain connected to
the integration bridge.
This change solves the issue in two cases:
* When deleting the instance (and so deleting the SmartNIC port),
  the unbound port is removed from the integration bridge.
* When resyncing SmartNIC ports on Neutron OVS agent restart, ports
  on the integration bridge that are not mapped to Neutron SmartNIC
  ports are removed.
Closes-Bug: #1855260
Change-Id: I7077577cca54329fbcb77fbde730389835ab6497
The OVN maintenance code was not always calling into
the OVNClient class methods with the correct number of
arguments, leading to exceptions.
After a deeper review, there were a number of places where this was
happening, so most methods were changed to take a 'context' argument,
since it is usually available in the caller.
Change-Id: I1bcb0ca68747e4c32523e41307dc132291c55f6d
Closes-bug: #1861502
The unit tests in test_ovs_tunnel.py were verifying that
port_exists() was calling bool(), which fails when using
unittest.mock. Since it doesn't really gain anything,
just remove the check for that exact call.
Trivialfix
Change-Id: Id7712330a24f51f0cfee8d7b3916c05d3501ee3f
This patch introduces a new mechanism to allow rerunning maintenance
tasks upon an OVN database schema change to avoid a service restart.
As an example, the "migrate_to_port_groups" maintenance task will run
again when the database schema is updated. In the case of a migration
from an OVN version without port group support to a version that
supports it, the OVN driver will perform the migration automatically
without the need for a service restart.
Closes-Bug: #1864641
Change-Id: I520a3de105b4c6924908e099a3b8d60c3280f499
Signed-off-by: Lucas Alvares Gomes <lucasagomes@gmail.com>
This patch is adding support for a new port type called "external" in
core OVN.
Prior to this work, when a VM had an SR-IOV port attached to it, OVN
itself wasn't able to reply to things such as DHCP request packets,
since the OVS port was skipped. Core OVN then introduced the concept of
"external" ports, which are ports deployed on a different node than the
one the VM is running on and which are able to reply to such requests
on behalf of the VM.
With this patch, when a port with the VNIC type "direct" and no
"switchdev" capability is created, the OVN driver will create a
logical port with the type "external" for it and add it to a default
HA Chassis Group. The port will then get bound to the "master" (highest
priority) chassis of that group.
Please note that, as a first step, this patch is creating only one HA
Chassis Group which *all* external ports will belong to. That means that
all external ports will be *scheduled onto the same node* (but it's
HA nevertheless). In the future we should enhance this behavior.
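A sketch of the port type decision (helper name and profile keys are
simplified):

    from neutron_lib.api.definitions import portbindings

    def is_external_port(port):
        # SR-IOV ("direct") port without the "switchdev" capability:
        # handled by OVN as an "external" logical port.
        profile = port.get(portbindings.PROFILE) or {}
        capabilities = profile.get('capabilities') or []
        return (port.get(portbindings.VNIC_TYPE) ==
                portbindings.VNIC_DIRECT and
                'switchdev' not in capabilities)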
Change-Id: Ic6c4bb6c584682169f3ebd73105a847b05dddc76
Closes-Bug: #1841154
Signed-off-by: Lucas Alvares Gomes <lucasagomes@gmail.com>
It was checking the call arguments incorrectly, which was
caught by unittest.mock but not by the mock library.
Trivialfix
Change-Id: I00b8cd364c869eabb6b1cfe1f7ed4eb8e5f22a87
From the comments, this code existed to have API compatibility
between the native openflow and ovs-ofctl of_interface drivers,
but since the latter was removed, this code is no longer
necessary.
Change-Id: Icf346e58904412a97e5e22155166821faed19fc2
This patch adds IGMP snooping support to the OVN driver. Multicast
support has been introduced in core OVN upstream.
Also, the patch always sets the "mcast_flood_unregistered" config in
the OVN Logical_Switch to the same value as the "mcast_snoop" config.
This is so that OVN matches the OVS behavior, which is to enable IGMP
flooding by default [0] (in OVN it is disabled by default).
[0] http://www.openvswitch.org/support/dist-docs/ovs-vsctl.8.txt (grep
for "mcast-snooping-disable")
Change-Id: I32f61ba3dd06d7eacf76a74c5c44e1286f90e584
Co-Authored-By: Daniel Alvarez <dalvarez@redhat.com>
Signed-off-by: Lucas Alvares Gomes <lucasagomes@gmail.com>
Multiple ports can be present in ports_re_added. Assume we have
port_one and port_two. The code loops through the ports: port_one is
iterated first, events['re_added'] is assigned port_one and
events['removed'] is assigned port_two. In the second iteration,
events['re_added'] is set to port_two instead of port_two being
appended to a list, so after the loop only port_two is left in
events['re_added'].
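Simplified illustration of the fix, reusing the names above:

    ports_re_added = {'port_one', 'port_two'}
    events = {'re_added': [], 'removed': []}
    for port in ports_re_added:
        # Append instead of assigning, so port_one is not overwritten
        # when port_two is processed.
        events['re_added'].append(port)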
Change-Id: If8edd29dd741f1688ffcac341fd58173539ba000
Closes-Bug: #1864630
When the segmentation ID of a network is updated, first the provider
network segment is validated and then reserved. If the service plugin
"network_segment_range" is enabled, the Neutron server retrieves the
network segment ranges with shared=True or those with the same
project_id as the network.
This patch adds the "project_id" information to the filters when
reserving the network provider segment. This change will allow
retrieving those private network segments belonging to the same
project.
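Schematically, the reservation filters gain the project scope (a
simplified sketch; the actual filter keys may differ):

    def build_segment_filters(segment, project_id):
        return {
            'network_type': segment['network_type'],
            'physical_network': segment.get('physical_network'),
            # New: also match private segment ranges of this project.
            'project_id': project_id,
        }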
Change-Id: I21bd60af000276779f56b3a6d45b4a6c1836bed1
Closes-Bug: #1863619
By default, if no metric is defined, the kernel interprets this as the
highest priority (metric 0).
The current implementation, using pyroute2, is a translation of the
CLI command "ip route". This command uses the netlink API to
communicate with the kernel. In IPv6, when the metric value is not
set, it is translated to 1024 by default [1].
[1] https://access.redhat.com/solutions/3659171
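For instance, with pyroute2 the metric can be passed explicitly through
"priority" (a sketch; the default constant is an assumption mirroring
the "ip -6 route" behavior):

    import socket

    from pyroute2 import IPRoute

    IP6_DEFAULT_METRIC = 1024

    def add_v6_route(ifindex, cidr, via, metric=None):
        # Pass the metric explicitly so the route priority is not left
        # to kernel/netlink defaults.
        with IPRoute() as ip:
            ip.route('add', dst=cidr, gateway=via, oif=ifindex,
                     priority=IP6_DEFAULT_METRIC if metric is None
                     else metric,
                     family=socket.AF_INET6)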
Change-Id: I0c5f9e320bbbf314a2d6a22c515bf903de84cdaf
Related-Bug: #1855759