Some changes require updating the existing entities in a way that is
clear and transparent to the user.
This patch adds a mechanism to create separate tasks that
can run periodically or just once in order to update or
modify existing entities that require changes after a new
patch or RFE.
As an example, a first task has been included for updating
existing OVN LB HM ports, changing their device_owner, and
adding their device_id.
Closes-Bug: 2038091
Change-Id: I0d4feb1e5c128d5a768d1b87deb2dcb3ab6d1ea1
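A minimal sketch of what such a one-shot task could look like, assuming a
futurist-style periodic worker as used elsewhere in OpenStack OVN code;
the helper names and the device_owner/device_id values are illustrative,
not the actual implementation:

    import threading

    from futurist import periodics


    class MaintenanceWorker(object):
        """Holds maintenance tasks that run periodically, or just once."""

        @periodics.periodic(spacing=600, run_immediately=True)
        def update_hm_ports(self):
            # Hypothetical one-shot task: fix device_owner/device_id on
            # the existing OVN LB HM ports, then stop rescheduling.
            for port in find_ovn_lb_hm_ports():   # hypothetical lookup
                update_port(port,                 # hypothetical helper
                            device_owner='ovn-lb-hm:distributed',
                            device_id=port['network_id'])
            raise periodics.NeverAgain()          # run once, never again


    worker = MaintenanceWorker()
    runner = periodics.PeriodicWorker([(worker.update_hm_ports, (), {})])
    threading.Thread(target=runner.start, daemon=True).start()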
To differentiate OVN LB HM (Load Balancer Health Monitor) ports
from Neutron ovn-metadata ports, a new constant will be used for
the 'device_owner' field in OVN LB HM ports.
This change ensures that these ports are not managed by some Neutron
tasks that assume only one port per network should have a 'device_owner'
value of 'network:distributed'.
Partially-Closes: 2038091
Depends-On: https://review.opendev.org/c/openstack/neutron/+/897345
Change-Id: I9a9a55d919fc215bf9a593a894e678c84e395e82
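As an illustration of the distinction (the constant names and the new
device_owner value are assumptions; the real ones live in neutron-lib and
ovn-octavia-provider):

    # Assumed constant values, for illustration only.
    DEVICE_OWNER_DISTRIBUTED = 'network:distributed'   # ovn-metadata ports
    DEVICE_OWNER_OVN_LB_HM = 'ovn-lb-hm:distributed'   # new HM port owner


    def is_ovn_lb_hm_port(port: dict) -> bool:
        # HM ports no longer share 'network:distributed' with metadata
        # ports, so Neutron tasks keying on that value skip them.
        return port.get('device_owner') == DEVICE_OWNER_OVN_LB_HM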
When an LB or member is created, the driver looks for the Logical Router
that is plugged into the Logical Switch. As there can be more than one
address on the port, we should iterate over them when comparing with the
gateway IP.
This patch modifies the code so it does not crash if more than one
address is found in the neutron:cidrs external_ids field.
Closes-Bug: 2036620
Change-Id: I17b2c2577a4d99455c30ca1e10632a7004d7c084
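A hedged sketch of the iteration, assuming the space-separated format of
neutron:cidrs (the function name is illustrative):

    import ipaddress

    def lrp_matches_gateway(lrp_external_ids, gateway_ip):
        # 'neutron:cidrs' can hold several space-separated CIDRs, e.g.
        # "10.0.0.1/26 2001:db8::1/64"; compare each address with the
        # gateway IP instead of assuming a single entry.
        cidrs = lrp_external_ids.get('neutron:cidrs', '').split()
        gw = ipaddress.ip_address(gateway_ip)
        return any(ipaddress.ip_interface(c).ip == gw for c in cidrs)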
Add file to the reno documentation build to show release notes for
stable/2023.2.
Use pbr instruction to increment the minor version number
automatically so that master versions are higher than the versions on
stable/2023.2.
Sem-Ver: feature
Change-Id: I5acf5babe7f81123d3b883ae91e75ae86e198d92
When an HM is attached to a pool and a backend member in that pool
is a fake member (e.g. due to a typo on creation), the member remains
in ONLINE status. This is because there is no LSP attached to that
member, so no Service_Monitor entries will take care of it.
This patch checks the member immediately after creation and updates
the whole LB status to reflect the fake member, helping the user to
quickly identify such members.
Closes-Bug: 2034522
Change-Id: I72b2d9c5f454f9b156414bf91ca7deb7f0e9d8b0
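A hedged sketch of that immediate check (helper names are hypothetical):

    def check_member_backend(ovn_nb, member, member_status):
        # A fake member has no LSP backing its address, so no
        # Service_Monitor row will ever probe it; surface that as ERROR.
        lsp = find_lsp_for_ip(ovn_nb, member['address'])  # hypothetical
        if lsp is None:
            member_status['operating_status'] = 'ERROR'
        return member_status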
The correct versions of both projects need to be enforced due to
incompatibilities with the previous version. Neutron Bobcat beta 3
should work with neutron-lib 3.8.0.
Change-Id: I1b7e35c92b01c15c9c236861f60d13bc5098330f
Since [1], OVN/OVS source deploy jobs running with
OVN_BRANCH=main fail to compile OVN, as it now
requires newer OVS commits from branch-3.2.
[1] https://github.com/ovn-org/ovn/commit/558da0cd
Change-Id: Ia546671f0d7be3e893eb2c7de67c82287bc53f52
When a LogicalSwitchPortUpdate event is triggered after removing a
FIP from an LB VIP, the event received includes the affected port,
but the related FIP is not passed to the handler method.
This patch includes the FIP in the info passed to the handler
method, simplifying the current handler logic and providing
future support for the new multi-VIP feature. It also adds a match
so that only events including external_ids updates are handled.
Closes-Bug: #2028161
Change-Id: Ibee3906e8e9575fba7811e989e3e111a026ce45b
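A hedged sketch of the event, assuming ovsdbapp's RowEvent match_fn hook;
the external_ids key used for the FIP and the handler name are
assumptions:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class LogicalSwitchPortUpdateEvent(row_event.RowEvent):
        def __init__(self, driver):
            super().__init__((self.ROW_UPDATE,), 'Logical_Switch_Port',
                             None)
            self.driver = driver

        def match_fn(self, event, row, old):
            # Only manage events that include external_ids updates.
            return hasattr(old, 'external_ids')

        def run(self, event, row, old):
            # Hand the FIP to the handler together with the port, so the
            # handler does not need to look it up again.
            fip = row.external_ids.get('neutron:port_fip')  # assumed key
            self.driver.vip_port_update_handler(row, fip)   # hypothetical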
Currently, when a FIP is attached to an LB VIP after an HM has
already been created, the LB_HC created for the new FIP does not
include the port in the vip field. This way, when a member is in
ERROR operating status, requests over the FIP are still distributed
to the ERROR'ed members.
This patch adds the port when the FIP is associated to the LB VIP.
Related-Bug: #1997418
Change-Id: Iefe5d67b5a8fc47972b14c4247c381d625efcc09
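An illustrative sketch of the vip field fix; the
Load_Balancer_Health_Check vip value must be 'IP:port' rather than the
bare FIP:

    def build_lb_hc_vip(fip, vip_port):
        # Before the fix the FIP entry was created as a bare IP, so the
        # health check never matched traffic sent to FIP:port.
        return '{}:{}'.format(fip, vip_port)

    assert build_lb_hc_vip('172.24.4.10', 80) == '172.24.4.10:80'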
When an HM is deleted, the Octavia API blocks the related pool by
setting its provisioning_status to PENDING_UPDATE, waiting
for the new status once the HM deletion finishes on the
provider. When multiple pools are attached to an LB, this
status was sent for the first pool obtained, leaving the
related pool in PENDING_UPDATE.
This patch ensures that the status update sent by the OVN
provider references the correct pool id.
Closes-Bug: 2024912
Change-Id: Ie5d01ce291409383558b3dd7c4d2fe91fd657255
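A hedged sketch of the corrected status payload (the dict shape follows
the Octavia provider status format; names are illustrative):

    def hm_delete_statuses(lb, hm):
        pool_id = hm['pool_id']   # the pool owning the deleted HM, not
                                  # the first pool found on the LB
        return {
            'loadbalancers': [{'id': lb['id'],
                               'provisioning_status': 'ACTIVE'}],
            'pools': [{'id': pool_id, 'provisioning_status': 'ACTIVE'}],
            'healthmonitors': [{'id': hm['id'],
                                'provisioning_status': 'DELETED'}],
        }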
This patch adds support for configuring the OVN load balancer
affinity_timeout option based on the pool's session persistence
timeout.
Change-Id: I07c8f3492e62576f66008e8ea1ef9846bed8c6fa
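A hedged sketch of the mapping (OVN's Load_Balancer supports an
options:affinity_timeout key; the default value shown is an assumption):

    def lb_options_from_pool(pool):
        # Map the pool's session persistence timeout (seconds) to the
        # OVN load balancer affinity_timeout option.
        persistence = pool.get('session_persistence') or {}
        timeout = persistence.get('persistence_timeout', 360)
        return {'affinity_timeout': str(timeout)}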
Traffic to members, if they have FIPs, gets centralized when they
are part of a load balancer. However, when the load balancer gets
deleted, the traffic should be distributed again (if DVR was
enabled). To achieve that, this patch also considers the cascade
deletion.
Closes-Bug: #2025637
Change-Id: Ie4b44c9f15fc9e33a68f9aacd766590b974c63fd
If a new member is added with admin_state_up set to False, it
should not participate in load balancing of requests over
the LB VIP. However, the member still receives requests, even
though the Octavia API applies the member's operating_status
correctly.
This patch fixes this issue by not adding the member to the vips
(at the OVN NB) so that requests over the LB VIP do not take that
member into account.
Closes-Bug: 2016862
Change-Id: Iec7f6b1da8548a29eb9cc0e2544e77e1a6c6fb1e
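A hedged sketch of the vips rendering (the OVN Load_Balancer vips column
maps 'VIP:port' to a comma-separated backend list):

    def build_vips_backends(members):
        # Administratively down members are simply left out, so OVN
        # never balances traffic to them.
        return ','.join('{}:{}'.format(m['address'], m['protocol_port'])
                        for m in members if m.get('admin_state_up', True))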
An inconsistency has been identified between the changes applied
to the OVN NB DB and the Octavia DB when a batch-update-members
request includes an unsupported option for any of the members to
be modified.
To prevent such inconsistencies, this patch rejects the entire
request if any of the proposed changes are identified as
unsupported. The user will be notified of the reason for the
rejection.
Closes-Bug: 2017216
Change-Id: I6e132ab5c23c9c53176612f74bb500e46c89024f
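A hedged sketch of the all-or-nothing validation, using octavia-lib's
UnsupportedOptionError; the supported-option set is illustrative:

    from octavia_lib.api.drivers import exceptions as driver_exceptions

    SUPPORTED_MEMBER_OPTIONS = {'address', 'protocol_port', 'weight',
                                'admin_state_up', 'subnet_id'}  # assumed

    def validate_batch_members(members):
        # Reject the whole request if any member carries an option the
        # provider cannot apply, before touching the OVN NB DB.
        for member in members:
            unsupported = set(member) - SUPPORTED_MEMBER_OPTIONS
            if unsupported:
                raise driver_exceptions.UnsupportedOptionError(
                    user_fault_string='Unsupported option(s): %s' %
                                      ', '.join(sorted(unsupported)))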
After a thorough review of the issue, it looks like the problem
does not originate from the base code of ovn-octavia-provider or
neutron. Other projects are also experiencing this problem,
indicating that it likely stems from a different source or set of
libraries [1].
To minimize the need for extensive rechecks on future patches, this
patch introduces a retry mechanism, utilizing tenacity, to the
affected methods.
Once the root cause of the problem '(sqlite3.InterfaceError) Cursor
needed to be reset because of commit/rollback and can no longer be
fetched from,' is identified and resolved, this patch should be
reverted.
[1] https://opensearch.logs.openstack.org/_dashboards/app/discover/?security_tenant=global#/?_g=(filters:!(),refreshInterval:(pause:!t,value:0),time:(from:now-30d,to:now))&_a=(columns:!(_source),filters:!(('$state':(store:appState),meta:(alias:!n,disabled:!f,index:'94869730-aea8-11ec-9e6a-83741af3fdcd',key:build_status,negate:!f,params:(query:FAILURE),type:phrase),query:(match_phrase:(build_status:FAILURE)))),index:'94869730-aea8-11ec-9e6a-83741af3fdcd',interval:auto,query:(language:kuery,query:'message:%22Cursor%20needed%20to%20be%20reset%20because%20of%20commit%2F%22'),sort:!())
Related-Bug: #2020195
Change-Id: Ia7a9b5230f9cf56de8278b736022240a780130d6
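A hedged sketch of the tenacity wrapper (the retried callable and the
retry parameters are illustrative):

    import sqlite3

    import tenacity

    @tenacity.retry(
        retry=tenacity.retry_if_exception_type(sqlite3.InterfaceError),
        wait=tenacity.wait_exponential(multiplier=0.5, max=5),
        stop=tenacity.stop_after_attempt(3),
        reraise=True)
    def fetch_row(session, row_id):
        # Placeholder for the flaky DB access hitting the cursor bug.
        return db_lookup(session, row_id)   # hypothetical call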
python-neutronclient has been deprecated and Octavia has already removed
it in the dependent change below. These are the respective changes on
the ovn-octavia-provider side, and they are in line with the changes in
Octavia itself:
- Replaced code that uses the deprecated `python-neutronclient` library
with code that uses `openstacksdk` and removed `python-neutronclient`
as a dependency.
- Marked certain configuration options that were related to Keystone
authentication as deprecated for removal. In future releases the
authentication options need to be added to the [neutron] section
of the configuration.
Note: After [1] some calls to neutron test_db_base_plugin_v2 added a
new param 'as_admin' that needs to be included in the calls from
ovn-provider functional tests. Squashed with patch [2] to solve a
cross dependency.
[1] https://review.opendev.org/c/openstack/neutron/+/879827
[2] https://review.opendev.org/c/openstack/ovn-octavia-provider/+/882715
Depends-On: https://review.opendev.org/c/openstack/octavia/+/866327
Change-Id: I985b24e4a6db962b1e73eeae69a8c96f4b0760ae
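A hedged sketch of the client swap (the cloud name and attribute values
are illustrative):

    import openstack

    # openstacksdk replaces the removed python-neutronclient; auth now
    # comes from the [neutron] config section / clouds.yaml.
    conn = openstack.connect(cloud='envvars')
    port = conn.network.get_port('PORT_ID')
    conn.network.update_port(port, description='managed by ovn-provider')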
Clarify that the SCTP HM type is not supported because OVN
health checks only support TCP and UDP-CONNECT.
Change-Id: Ice771ae36a521baad792c935fd2481602548a24d
With [1], OVN main branch compilation fails. Until the
main branch is fixed to work with OVS master, let's
pin OVS_BRANCH to a working commit.
[1] https://github.com/openvswitch/ovs/commit/07cf5810de
Related-Bug: #2015728
Change-Id: Icdd1affc944de6c1e00da9539e13a8d698cfc0e6
The LB ip_port_mapping was updated by simply adding and deleting every
member after any related operation on the LB HM; this operation was
done in two steps, a db_clear and a db_set.
This patch uses ovsdbapp-specific commands to add/delete backends
to/from the ip_port_mapping in a more appropriate way, avoiding
further operations on the OVN DBs unrelated to the member
added/deleted. It also takes care of the possibility that the same
backend_ip could be referenced by another member under a different HM.
ovsdbapp is bumped to 2.1.0 to be able to use those new
functionalities [1].
[1] f3c5da5402
Closes-Bug: 2007835
Change-Id: I5705c490bcd36e7e2edcc62954a3ffa0ff645519
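A hedged sketch using the new ovsdbapp commands; the method names follow
[1], but the exact signatures are an assumption:

    def update_member_mapping(ovn_nb, lb, member_ip, port_name, src_ip,
                              removing=False, still_referenced=False):
        if removing:
            # Keep the mapping if another member under a different HM
            # still points at the same backend_ip.
            if not still_referenced:
                ovn_nb.lb_del_ip_port_mapping(lb, member_ip).execute(
                    check_error=True)
        else:
            ovn_nb.lb_add_ip_port_mapping(
                lb, member_ip, port_name, src_ip).execute(check_error=True)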
With the latest version of bandit (1.7.5), a new lint rule has been
introduced that checks the inclusion of the timeout parameter for
every "requests" call [1].
So the B113 lint rule [2] needs to be skipped or the code adapted;
this patch adds the timeout parameter to the put/get requests.
[1] 5ff73ff8ff
[2] https://bandit.readthedocs.io/en/latest/plugins/b113_request_without_timeout.html
Closes-bug: #2011573
Change-Id: I341faedbf7e237eed176e0d3ed3586b8d2c2cbb8
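The adapted calls look like this (URL and timeout value are
illustrative):

    import requests

    # Every call now carries an explicit timeout (seconds), as B113
    # requires.
    resp = requests.get('https://example.test/api', timeout=10)
    resp = requests.put('https://example.test/api', json={'k': 'v'},
                        timeout=10)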
Add file to the reno documentation build to show release notes for
stable/2023.1.
Use pbr instruction to increment the minor version number
automatically so that master versions are higher than the versions on
stable/2023.1.
Sem-Ver: feature
Change-Id: I4b12eeeb72bdbc301540564005e476672bfd1012
The expected behavior when an HM is deleted is that any reference to it
in the LB's external_ids must be cleaned up or removed. Until this
patch, this reference was not removed when the pool associated with the
HM was deleted.
Closes-Bug: #2008695
Change-Id: Ieeef917d9e293af27e5feed14335f25fd9a6fb48
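A hedged sketch of the cleanup (the external_ids key name is an
assumption):

    def remove_hm_ref(ovn_nb, lb_uuid):
        # db_remove drops the key from the external_ids map column.
        ovn_nb.db_remove('Load_Balancer', lb_uuid, 'external_ids',
                         'octavia:healthmonitors').execute(check_error=True)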
At present, when a health monitor (HM) is created for a pool,
the members of that pool are automatically set to ONLINE
operating status, unless the HM identifies an ERROR during
health checks.
This patch addresses an issue where, after deleting an HM,
the members should be reset to NO_MONITOR operating status,
regardless of whether the HM had previously set them to ONLINE
or ERROR status.
Closes-Bug: #2007985
Change-Id: I02bcba61d0cbc9202a6e50b849f8d781fb825d49
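A hedged sketch of the reset (the status constant follows the Octavia
API; names are illustrative):

    def member_statuses_after_hm_delete(members):
        # Without an HM nothing monitors the members anymore, so they
        # all fall back to NO_MONITOR, whatever the HM last reported.
        return [{'id': m['id'], 'operating_status': 'NO_MONITOR'}
                for m in members]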
Currently, if a FIP gets associated with an LB with Health Monitors,
it is not included as a new OVN Load Balancer Health Check. This
means that if the VIP is used, traffic will not be redirected to
the dead members, but if the FIP is used there are no health checks
applied and traffic will reach dead members.
This patch adds the extra functionality so that an extra OVN
Load Balancer Health Check is created for the FIPs associated to
the Load Balancer.
Closes-Bug: #1997418
Change-Id: Idbf1fb15076518092ce5fdaa57500d29342f51be
For every backend IP in the load balancer for which a health
check is configured, a new row in the Service_Monitor table
is created, and based on that, ovn-controller
periodically sends out the service monitor packets.
In this patch we create a new port for this purpose,
instead of using the ovn_metadata_port, to configure the
backends in the ip_port_mappings field; this mapping is
the information that gets translated into Service_Monitor
entries (more details in [1]).
[1] 24cd3267c4/northd/ovn-northd.8.xml (L1431)
Closes-Bug: #2004238
Change-Id: I11c4d9671eee002b15080d055a18a4d3f4d7c540
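A hedged sketch of the data involved (names and the device_owner value
are assumptions; the ip_port_mappings format is
'member_ip=lsp_name:source_ip'):

    # Dedicated HM port created per subnet instead of reusing the
    # ovn_metadata_port (illustrative values).
    hm_port = {
        'name': 'ovn-lb-hm-SUBNET_ID',
        'network_id': 'NETWORK_ID',
        'device_owner': 'ovn-lb-hm:distributed',
        'fixed_ips': [{'subnet_id': 'SUBNET_ID'}],
    }

    # The mapping OVN northd translates into Service_Monitor rows:
    # backend IP -> "LSP name:health-check source IP".
    ip_port_mappings = {'10.0.0.5': 'ovn-lb-hm-SUBNET_ID:10.0.0.2'}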
In core OVN, LBs on switches with localnet ports (i.e., neutron
provider networks) don't work if traffic comes from localnet [1].
In order to force NAT to happen at the virtual router instead
of the LS level, when the VIP of the load balancer is associated
with a provider network we should avoid adding the LB to the
LS associated with the provider network.
[1] https://bugzilla.redhat.com/show_bug.cgi?id=2164652
Closes-Bug: #2003997
Change-Id: I009ddd2604d208bbf793e2d19d4195b77726f7b2
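A hedged sketch of the check (detecting the provider network via its
provider:physical_network attribute is an assumption of this sketch):

    def should_add_lb_to_ls(vip_network):
        # Skip the LS associated with the provider network so NAT
        # happens at the virtual router instead of the LS level.
        return vip_network.get('provider:physical_network') is None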
Following the recommendations provided in [1], this patch disables the
"skipsdist" flag. It also reformats the passenv values, since they
cannot contain whitespace, and adds allowlist_externals where
necessary, because it is now checked more strictly.
[1] https://github.com/tox-dev/tox/issues/2730
Change-Id: Iea8e355cd18c51a00bdbe5225239965cfc1704d7
When a new HM is created, its provisioning status is conditioned
by the status of the existing members in the pool. When any of
the members is in ERROR status (e.g. when a member is configured
with a non-existing address), the created HM ends up in ERROR status.
It makes more sense to warn about the member problem but let the
HM continue with its normal flow of operation over the remaining
members of the pool on which it is created.
This patch removes the break after finding a problematic member
(port not found) and just logs a warning about the issue,
continuing with the rest of the members.
Closes-Bug: #2000071
Change-Id: I5be9130eb63c03d273fc8dfcc93094204a3ed361
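A hedged sketch of the new flow (the lookup and configuration helpers
are hypothetical):

    import logging

    LOG = logging.getLogger(__name__)

    def configure_hm_members(ovn_nb, members):
        for member in members:
            port = find_member_port(member)   # hypothetical lookup
            if port is None:
                # Was: break + HM set to ERROR. Now: warn and carry on
                # with the remaining members.
                LOG.warning('Member %s has no port; skipping it for the '
                            'health monitor', member['id'])
                continue
            add_service_monitor(ovn_nb, member, port)  # hypothetical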
When an HM is created/deleted on a pool, the listener related to the
pool remains in PENDING_UPDATE status.
This patch returns the correct status to the Octavia API for the
listeners related to the pool, ensuring they can be modified and
are not considered immutable.
Closes-Bug: #1999813
Change-Id: I4f6e4a8acb7c7bb030aaadc6875894d6fc00d740
There was quite a mix-up between (Octavia) Health Monitors and
(OVN) Load Balancer Health Checks. This patch tries to make a
clearer distinction between Octavia HMs and OVN LB HCs.
This patch also adds a reference to the Octavia Health Monitor
IDs in the external_ids of the OVN NB Load_Balancer entries.
Related-Bug: #1997418
Change-Id: Ib8499d7c4ea102e183ead31f063a3c0a70af6e23
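An illustrative shape of the new back-reference (the key name is an
assumption):

    external_ids = {
        'octavia:healthmonitors': '["HM_UUID"]',  # Octavia HM id(s)
    }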
This patch ensures that if only one parameter is provided, the
rest are not modified (set to undefined).
Closes-Bug: #1997416
Change-Id: Ie47f19afdd041843fe47da739b09ee03a88c7b02
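A hedged sketch of the partial update (the option names match the OVN
Load_Balancer_Health_Check options column; the function is illustrative):

    def hm_update_options(request):
        # Only columns the caller actually provided are rewritten; the
        # rest keep their current values instead of being unset.
        updates = {}
        for key in ('interval', 'timeout', 'success_count',
                    'failure_count'):
            if request.get(key) is not None:
                updates[key] = str(request[key])
        return updates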
The Octavia API allows creating fully populated load balancers with
a single call, and the pools included in that call can include an
HM. ovn-octavia-provider was not implementing this option, and only
the loadbalancer/listener(s)/pool(s)/member(s) were created,
leaving the HM at the Octavia API in PENDING_CREATE.
This also generated leftovers when trying to delete the load
balancer with the --cascade option.
Closes-Bug: #1997094
Change-Id: Ic24a0c1622c0aac2a40542cadf91a1bc47de1de6
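A hedged sketch of the missing piece (the method names follow the
Octavia provider driver interface; the traversal is illustrative):

    def create_fully_populated_lb(driver, lb):
        driver.loadbalancer_create(lb)
        for listener in lb.listeners:
            driver.listener_create(listener)
        for pool in lb.pools:
            driver.pool_create(pool)
            for member in pool.members:
                driver.member_create(member)
            hm = getattr(pool, 'healthmonitor', None)
            if hm:
                driver.health_monitor_create(hm)  # previously skipped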
Coverage tests are flickering around the configured 92% threshold
(for unknown reasons).
This patch adds additional unit tests to the files that have been
flickering in the latest cover job runs, in order to keep the
results more stable.
Change-Id: I1cadb861ff5eb8cf6379d561e693a967fd5b90fc
If an ovn-lb is created (VIP and members) in an LS (neutron network)
that has 2 subnets (IPv4 + IPv6), and this LS is connected to an LR,
removing the LS from the LR leads to the removal of the ovn-lb from
the LS and, consequently, to its removal from the OVN SB DB, as it
is not associated with any datapath.
The problem lies in the _find_ls_for_lr function, which looks at all
the LR ports and gets the network name from them; therefore, even
though the port for the LS got deleted, there is still another port
from the other subnet pointing to the same network (LS), which is
what causes the ovn-lb to be deleted from that LS.
With this patch, the VIP IP version is considered so that the router
ports that belong to the other subnet are not taken into account,
and the ovn-lb is therefore not removed from the LS.
Closes-Bug: #1992363
Change-Id: I7b6dd9a31020d942d391726662e9b5ed9d76dc1f
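A hedged sketch of the fixed lookup (the external_ids keys follow the
Neutron OVN integration; names are illustrative):

    import ipaddress

    def find_ls_for_lr(router, vip_ip_version):
        ls_names = []
        for port in router.ports:
            cidrs = port.external_ids.get('neutron:cidrs', '').split()
            # Only ports with a CIDR of the VIP's IP version count, so
            # the other subnet's port no longer drags the LS back in.
            if any(ipaddress.ip_interface(c).version == vip_ip_version
                   for c in cidrs):
                ls_names.append(
                    port.external_ids.get('neutron:network_name'))
        return ls_names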