Back in Newton, patch [1] added to the agents the possibility to report
in the heartbeat messages whether hybrid plugging of ports is required
or not. Usage of the "firewall_driver" option by mechanism drivers (so
on the server's side) was kept just for backward compatibility.
But as we are now about 4 years past [1], it should be safe to do a
small cleanup: remove usage of this option in the neutron server and
stop confusing users about where this config option has to be set and why.
[1] https://review.opendev.org/#/c/311814/
Change-Id: I2ccc4c8784c64858acaa3c3431cf9a3d13e5e154
In hacking 2.0 or later, local-check-factory was removed as it is not
compatible with flake8 3.x, and it is advised to use flake8's local
plugins [1]. neutron-lib provided a factory to register common hacking
rules, but it no longer works with hacking 2, so we need to explicitly
define the rules from neutron-lib as flake8 local check plugins [2].
This needs to be done in each neutron-related project, which is the
downside of the migration to hacking 2.x (I explored a way to continue
using the factory but failed to find a good one), but I believe it is
good to migrate to the newer libraries.
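For reference, the flake8 local plugin registration looks roughly like
this (it lives in tox.ini or setup.cfg; the rule code and check shown
here are illustrative, not the full list):

    # expose a neutron-lib hacking rule as a local flake8 check
    # (rule code and check are examples only)
    [flake8:local-plugins]
    extension =
        N521 = neutron_lib.hacking.checks:use_jsonutils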
* flake8ext decorator in neutron/hacking/checks.py is also replaced with
hacking.core.flake8ext to avoid copy-and-pasted code.
* neutron-lib dependency is updated as neutron-lib 2.3 added hacking 3 support.
* Python modules related to coding style checks (listed in blacklist.txt in
the openstack/requirements repo) are dropped from lower-constraints.txt
as they are not actually used in tests (other than pep8).
* HackingDocTestCase is now converted into normal test cases.
HackingDocTestCase depends on the internals of hacking and pycodestyle,
so it looks better to use the normal style of writing tests.
[1] https://docs.openstack.org/releasenotes/hacking/unreleased.html#relnotes-2-0-0
[2] https://flake8.pycqa.org/en/3.7.0/user/configuration.html#using-local-plugins
Change-Id: I92cf50a84bb587a0649a7cffee15cce4ce37d086
A new pep8 style library must have been released which
is causing some new errors, E741 among them. Clean-up
on aisle 8.
Change-Id: I153abada74e8c522fe9866a239a36dbb8365a29e
This new synthetic field is linked to a "QosPolicyFloatingIPBinding"
register. This binding register will bind a QoS policy and a
floating IP.
Now it is possible to provide this field in the create/update input
parameters. If provided, the "FloatingIP" OVO will create/delete the
"QosPolicyFloatingIPBinding" register.
The OVO takes this parameter from the DB object. When the DB object
is retrieved, the QoS policy binding register is retrieved too, due
to a backref link from the "QosFIPPolicyBinding" DB model to the
"FloatingIP" DB model.
Change-Id: Ideb042a71336b110bbe0f9e81ed8e0c21434fc42
Closes-Bug: #1877404
Related-Bug: #1877408
This commit adds the possibility to configure the fip port_forwarding
service plugin and L3 extension with the devstack plugin for OVN.
Since OVN uses API workers, this change also introduces the
callbacks necessary in pf_plugin, so events related to port
forwarding are sent using the neutron_lib callbacks registry.
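Publishing through the registry looks roughly like this (a sketch; the
resource string and payload contents are assumptions, not copied from
the patch):

    from neutron_lib.callbacks import events
    from neutron_lib.callbacks import registry

    # any worker (OVN runs several API workers) can subscribe to this
    # event independently instead of being called directly
    registry.publish(
        'port_forwarding', events.AFTER_CREATE, self,
        payload=events.DBEventPayload(context, states=(pf_obj,),
                                      resource_id=pf_obj.id))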
Related-Bug: #1877447
Change-Id: I8124fac13bf4d802d232e8b3976e6a2cebc72106
In dnsmasq 2.81 there is a regression (see [1] for details).
Prior versions of dnsmasq would select a host record where:
a) no address is present in the host record.
b) an address matching address family of the client request
is present in the host record.
dnsmasq 2.81 will also use a host record where only an address
not matching the address family of the client request is present.
The same issue was also backported to dnsmasq-2.79-11.el8.x86_64,
which ships e.g. in RHEL 8.2 and CentOS 8.
dnsmasq version 2.81 also adds support for using tags on host
records. When a dhcpv6 request is received, dnsmasq automatically
sets the tag 'dhcpv6'.
This change adds a runtime check testing for dnsmasq host entry
tag support, and adds 'tag:dhcpv6' to all IPv6 host records when
dnsmasq supports this.
Adding the tag makes dnsmasq prefer the tagged host for dhcpv6
requests, i.e. it is a workaround fix for the regression issue.
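Illustrative hosts-file entries (addresses and exact field order are
examples only, following the agent's hosts-file format):

    # IPv4 record, picked for DHCPv4 requests
    fa:16:3e:aa:bb:cc,host-a.openstacklocal.,192.168.0.10
    # IPv6 record; the tag makes dnsmasq prefer it for DHCPv6
    fa:16:3e:aa:bb:cc,tag:dhcpv6,host-a.openstacklocal.,[2001:db8::10]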
[1] http://lists.thekelleys.org.uk/pipermail/dnsmasq-discuss/2020q2/014051.html
Closes-Bug: #1876094
Change-Id: Ie654c84137914226bdc3e31e16219345c2efaac9
OVNL3RouterPlugin inherits from L3_NAT_dbonly_mixin, which inherits
from neutron.extensions.l3.RouterPluginBase.
As the maintenance task expects OVNL3RouterPlugin to behave as
RouterPluginBase, add_router_interface should have the signature:
add_router_interface(self, context, router_id, interface_info)
Note: With this change, the default behavior of OVNL3RouterPlugin's
_add_neutron_router_interface becomes idempotent: multiple calls to add
the same interface will not fail. Because of that, the unit test
test_router_add_interface_dup_port no longer makes sense and is being
removed.
Closes-Bug: #1876148
Change-Id: I8010113b4d8c66ecbccf3126f322a8836d92e7ba
Signed-off-by: Flavio Fernandes <flaviof@redhat.com>
Now that we are python3 only, we should move to using the built-in
version of mock that supports all of our testing needs and remove
the dependency on the "mock" package.
This completes removal of all references to "import mock",
changing to "from unittest import mock" in fullstack and
functional tests.
Added a hacking check to enforce it in future patches.
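Such a check could look roughly like this (a sketch in the style of
neutron/hacking/checks.py; the rule code and message are made up, and
the check must also be registered as a flake8 local plugin):

    import re

    from hacking import core

    _mock_imports = re.compile(
        r"^\s*(import mock($|\s)|from mock import)")

    @core.flake8ext
    def check_no_import_mock(logical_line):
        """Nxxx - use 'from unittest import mock' instead of mock."""
        if _mock_imports.match(logical_line):
            yield (0, "Nxxx: use 'from unittest import mock'")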
Change-Id: Ifcaf1c21bea0ec3c35278e49cecc90a101a82113
The patch adds a short-living connection in the pre-fork routine that
creates the neutron_pg_drop Port Group. Later, after workers are
spawned, each worker also creates a short-living connection and waits
for an event telling it that the Port Group was created.
The short-living IDLs limit their tables to only the relevant ones, so
they don't fetch the whole OVSDB contents into the local copy.
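With python-ovs, this table limiting looks roughly like the following
(a sketch; variable names are illustrative):

    from ovs.db import idl

    # register only the tables we care about, not the full schema
    helper = idl.SchemaHelper(location=schema_file)
    helper.register_table('Port_Group')
    short_lived_idl = idl.Idl(remote, helper)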
Closes-bug: #1866068
Change-Id: I1f5af36b8c3d5650f890edfed3c33dc206869824
Signed-off-by: Jakub Libosvar <libosvar@redhat.com>
Test was using the wrong string to check against failure;
instead, use the constants from neutron-lib.
Trivialfix
Change-Id: I5bd8aa44d9ccc47b299a0c71d0c2a190e28f0e48
Now that we are python3 only, we should move to using the built-in
version of mock that supports all of our testing needs and remove
the dependency on the "mock" package.
This patch moves all references to "import mock" to
"from unittest import mock". It also cleans up some newline
inconsistency.
Fixed an inconsistency in the OVSBridge.deferred() definition
as it needs to also have an *args argument.
Fixed an issue where an l3-agent test was mocking
functools.partial, causing a python3.8 failure.
This covers unit tests only; removing mock from tests/base.py affects
functional tests, which need additional work.
Change-Id: I40e8a8410840c3774c72ae1a8054574445d66ece
The delete_port() method from OVNClient has a potential problem of
leaving stale ports when RowNotFound is raised by the process that
deletes the port from the OVN database. Since the exception is not
granular enough, the RowNotFound could be raised by other objects that
are part of the same transaction (such as ACLs, DNS entries, etc.),
resulting in the revision for the port being deleted even though the
port is still in the database.
Instead of giving a pass on the RowNotFound exception, this patch
logs the error and re-raises it without deleting the revision.
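The resulting control flow, roughly (a sketch; the helper and logging
details are assumptions, not the patch verbatim):

    from ovsdbapp.backend.ovs_idl import idlutils

    try:
        self._delete_port(port_id)  # hypothetical transaction helper
    except idlutils.RowNotFound:
        # The missing row may be an ACL or DNS entry in the same
        # transaction rather than the port itself, so keep the
        # revision and let the caller/maintenance task retry.
        LOG.error("Failed to delete port %s from OVN NB DB", port_id)
        raise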
Change-Id: I25b93b7c080403fc38365b638e4e03298b447d0f
Partial-Bug: #1874733
Signed-off-by: Lucas Alvares Gomes <lucasagomes@gmail.com>
When the L3 agent gets a router update notification, it will try to
retrieve the router info from the neutron server. But at this time, if
the message queue is down/unreachable, it will get message queue
related exceptions and the resync action will be run. Sometimes a
RabbitMQ cluster is not so easy to recover, and a long MQ recovery
time means the router info sync RPC never succeeds until it hits the
max retry count. Then the bad thing happens: the L3 agent tries to
remove the router, which basically shuts down all the existing L3
traffic of this router.
This patch simply removes that final router removal action and lets
the router run as it is.
Closes-Bug: #1871850
Change-Id: I9062638366b45a7a930f31185cd6e23901a43957
The field "in_use" is added to "subnet" DB definition. This DB
register column is a flag used to mark a register as in use
by other transaction. When a write DB transaction writes any
value on this field, the register is locked for any other
concurrent transaction. If two DB transactions try to set this
column at the same time, one of them will fail.
This DB lock is implemented in "subnet" and is used during the
subnet deletion and the port IP assignation, where all the port
network subnets are retrieved to provide an IP address on the subnet
CIDR.
As reported in the related bug, it was possible to assign an IP
to a port and, before the port creation command finished, delete the
subnet where the IP belonged. This patch introduces this subnet lock
during the IP assignation and at the beginning of the subnet deletion
process. At the end of both transactions, the DB engine checks if the
lock operation (write "in_use" column) is possible or the subnet
register was already requested by other DB transaction.
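The "touch the row to lock it" pattern boils down to something like
this (a sketch; function and value details are assumptions):

    def _lock_subnet(context, subnet_id):
        # Writing any value to "in_use" makes the DB engine lock the
        # row; a concurrent transaction doing the same will fail on
        # commit and be retried by the usual DB retry machinery.
        context.session.query(models_v2.Subnet).filter_by(
            id=subnet_id).update({'in_use': True})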
Change-Id: I45a724917389814e83400f5854ada175dfce2b7b
Closes-Bug: #1865891
Patch [1] introduced a new mechanism which only brings UP interfaces
on the master node of an HA router. It works fine with keepalived 1.x,
but it is broken when keepalived 2.x is used (e.g. on CentOS 8), as
in this new version of keepalived all interfaces of VIPs and routes
are tracked by default, and if one of them is DOWN, keepalived goes
into FAULT state. Because of that, the router would never be
transitioned to MASTER on any node.
This patch fixes it by adding the "no_track" option to all VIPs
and routes in keepalived's config file.
This "no_track" option isn't added to the HA interface, so that one
is still tracked by keepalived.
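The generated keepalived.conf then contains fragments along these
lines (illustrative addresses and device names):

    virtual_ipaddress {
        169.254.0.1/24 dev ha-xxxx    # HA interface, still tracked
    }
    virtual_ipaddress_excluded {
        10.0.0.5/32 dev qg-xxxx no_track
    }
    virtual_routes {
        0.0.0.0/0 via 10.0.0.1 dev qg-xxxx no_track
    }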
[1] https://review.opendev.org/#/c/707406/
Closes-bug: #1874211
Change-Id: Ic16cf83fe1d1576d91047adb2d4f9e07d57185b6
Prior to this patch, the OVN driver wasn't accounting for the VNIC
types VNIC_DIRECT_PHYSICAL and VNIC_MACVTAP. These types should work
the same way as the VNIC_DIRECT type from the OVN driver's perspective.
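In practice that means extending the driver's supported list with the
constants from neutron-lib, roughly (the list name is hypothetical):

    from neutron_lib.api.definitions import portbindings

    OVN_SUPPORTED_VNIC_TYPES = [portbindings.VNIC_NORMAL,
                                portbindings.VNIC_DIRECT,
                                portbindings.VNIC_DIRECT_PHYSICAL,
                                portbindings.VNIC_MACVTAP]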
Closes-Bug: #1874065
Change-Id: Idb596b5a80a3155bc9cdee1e082506701e730f00
Signed-off-by: Lucas Alvares Gomes <lucasagomes@gmail.com>
The second parameter of fip_id_cidrs requires a CIDR, but currently
an address is passed. This causes the function _sync_and_remove_fip
to fail to remove the VIP as expected.
If the current router is an HA router, _sync_and_remove_fip will call
ri._remove_vip(fip_id_cidr[1]); that parameter is passed to
KeepalivedInstance.remove_vip_by_ip_address and compared with the
'vips' attribute, whose values are of type CIDR.
If the router is not HA, the subsequent processing uses
netaddr.IPNetwork and can be performed as expected.
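The fix therefore amounts to building the tuples with a CIDR, roughly
(a sketch; variable names are assumptions):

    from neutron.common import utils as common_utils

    fip_id_cidrs = [(fip['id'],
                     common_utils.ip_to_cidr(fip['floating_ip_address']))
                    for fip in fips]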
Closes-bug: #1873708
Change-Id: I2ae2ade29700a56dc340256389bf8b0efd697ba4
From the comments, this code existed to have API compatibility
between the native openflow and ovs-ofctl of_interface drivers,
but since the latter was removed, this code is no longer
necessary. Remove the tunnel bridge code now; the integration
bridge code needs further work.
Change-Id: I692789e35a4be8872ec72ffb10bc5488cab05f2b
Improve port retrieval in the method
"_validate_auto_address_subnet_delete". Instead of requesting each
port individually, a single DB query is executed to retrieve all
the ports with an IP allocation in a subnet.
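The single-query approach looks roughly like this (a sketch using the
models from neutron.db.models_v2; not the patch verbatim):

    ports = context.session.query(models_v2.Port).join(
        models_v2.IPAllocation).filter(
        models_v2.IPAllocation.subnet_id == subnet_id).all()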
Change-Id: I7875142ebecd17663e17847fb14997200d7ae5c8
Related-Bug: #1865138
The QoS OVN client extension is moved to the ML2 driver. This
extension is called from the OVN driver in the events of:
- create port
- update port
- delete port
- update network
The QoS OVN client extension now can accept several rules per policy,
as documented in SUPPORTED_RULES. The QoS OVN client extension
can write one OVN QoS rule per flow direction, and each OVN QoS rule
register can hold both a bandwidth limit rule and a DSCP marking rule.
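For illustration, a single row in the OVN NB "QoS" table can carry
both kinds of limits for one direction (values here are made up):

    # one QoS register per direction, combining both rule types
    qos_rule = {'direction': 'from-lport',
                'match': 'inport == "<port_id>"',
                'bandwidth': {'rate': 10000, 'burst': 8000},
                'action': {'dscp': 16}}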
The "update_policy" method is called from the OVN QoS driver, when
a QoS policy or its rules are updated.
The QoS OVN client extension updates the QoS OVN registers
exclusively, based on the related events.
Closes-Bug: #1863852
Change-Id: I4833ed0c9a2741bdd007d4ebb3e8c1cb4c30d4c7
When "network_segment_range" service extension is enabled, the default
(shared) network segment range could not exist. In this case, when
retrieving the segmentation IDs, the existance of this range should be
checked first.
Change-Id: Iaff891a48adc811ab114fb03b24ab3da9311eec3
Closes-Bug: #1870569
Only reschedule gateways/update segments when things have changed
that would require those actions.
Co-Authored-By: Terry Wilson <twilson@redhat.com>
Change-Id: I62f53dbd862c0f38af4a1434d453e97c18777eb4
Closes-bug: #1861510
Closes-bug: #1861509