There may be multiple ports in ports_re_added. Assume we have port_one
and port_two. The code loops through the ports: port_one is iterated
first, so events['re_added'] is assigned port_one and events['removed']
is assigned port_two. In the second iteration, events['re_added'] is
set to port_two instead of port_two being appended to the list. So
after the loop, only port_two is left in events['re_added'].
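A minimal sketch of the fixed pattern (names illustrative; the real
agent code differs):

    def collect_re_added(ports_re_added, events):
        # Buggy pattern overwrote the key on every iteration:
        #     events['re_added'] = port
        # Fixed pattern accumulates every re-added port:
        for port in ports_re_added:
            events.setdefault('re_added', []).append(port)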
Conflicts:
neutron/tests/unit/plugins/ml2/drivers/openvswitch/agent/test_ovs_neutron_agent.py
Change-Id: If8edd29dd741f1688ffcac341fd58173539ba000
Closes-Bug: #1864630
(cherry picked from commit 5600163e9b)
(cherry picked from commit 22df469504)
When openvswitch is restarted, a full sync of all bridges will always
be triggered by neutron-ovs-agent, so there is no need to check in the
same rpc_loop iteration whether the bridges were recreated.
Conflicts:
neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py
Change-Id: I3cc1f1b7dc480d54a7cee369e4638f9fd597c759
Related-bug: #1864822
(cherry picked from commit 45482e300a)
(cherry picked from commit b8e7886d8b)
Commit 90212b12 changed the OVS agent so that adding the vital drop
flows on br-int (table 0, priority 2) for packets from physical bridges
was deferred until DVR initialization later on. But if br-int has no
flows from a previous run (e.g. after a host reboot), these packets
will hit the NORMAL flow in table 60. And if there is more than one
physical bridge, the physical interfaces from the different bridges are
then essentially connected at layer 2 and a network loop is possible in
the time before the flows are added by DVR. Also, the DVR code won't
add them until after RPC calls to the server, so a loop is more likely
if the server is not available.
This patch restores the addition of these flows to the point where the
physical bridges are first configured. It also updates a comment that
was no longer correct and updates the unit test.
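A hedged sketch of the restored behaviour (drop_port is assumed to be
the br-int helper; the exact call site differs):

    def block_phys_ports(int_br, int_ofports):
        # Install table 0, priority 2 drop flows for each patch port
        # coming from a physical bridge, so its packets never reach the
        # NORMAL flow in table 60 before DVR installs the real rules.
        for ofport in int_ofports:
            int_br.drop_port(in_port=ofport)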
Change-Id: I42c33fefaae6a7bee134779c840f35632823472e
Closes-Bug: #1887148
Related-Bug: #1869808
(cherry picked from commit c1a77ef8b7)
(cherry picked from commit 143fe8ff89)
(cherry picked from commit 6a861b8c8c28e5675ec2208057298b811ba2b649)
(cherry picked from commit 8181c5dbfe)
Conflicts:
neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py
When both vlan and vxlan networks exist in the env, and l2population
and arp_responder are enabled, updating a port's IP address on a vlan
network results in arp responder related flows being added into
br-tun. This causes multiple arp replies for one arp request, and VM
connections become abnormal.
Closes-Bug: #1824504
Change-Id: I1b6154b9433a9442d3e0118dedfa01c4a9b4740b
(cherry picked from commit 5301ecf41b)
When neutron-ovs-agent notices that any of the physical bridges was
"re-created", it should also ensure that stale OpenFlow rules (with
the old cookie id) are cleaned up.
This patch does exactly that.
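Conceptually (a sketch; cleanup_flows is assumed to remove the flows
whose cookie no longer matches the bridge's active cookie):

    def purge_stale_flows(recreated_bridges):
        # After re-initialization a bridge carries a fresh cookie; flows
        # still stamped with the old cookie are stale and must go.
        for bridge in recreated_bridges:
            bridge.cleanup_flows()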
Conflicts:
neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py
Change-Id: I7c7c8a4c371d6f4afdaab51ed50950e2b20db30f
Related-Bug: #1864822
(cherry picked from commit 63c45b3766)
Blocking traffic between br-int and br-physical is overkill and will
at least:
1. interrupt vlan traffic during startup, particularly so if dvr is
enabled;
2. if, say, rabbitmq is not stable, possibly affect the data plane so
that vlan networking never works.
Running openstack on k8s particularly amplifies the problem because a
pod can be killed quite easily by liveness probes.
Conflicts:
neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py
Change-Id: I51050c600ba7090fea71213687d94340bac0674a
Closes-Bug: #1869808
(cherry picked from commit 90212b12cd)
In some cases it may be useful to log the new vlan tag found on a port
when the port loses the old vlan tag which is expected to be there.
So this patch adds that value to the log message.
TrivialFix
Depends-On: https://review.opendev.org/735615
Change-Id: I231e624f460510decc6d2237040c8bef207e2e8e
(cherry picked from commit 3ac63422ea)
When a physical bridge is removed and created again, it is initialized
by neutron-ovs-agent.
But if the agent has distributed routing enabled, the dvr related
flows were not configured again, which led to connectivity issues for
DVR routers.
This patch fixes that by adding the configuration of dvr related flows
if distributed routing is enabled in the agent's configuration.
It also resets the list of phys_brs in dvr_agent. Without that,
different objects were used in the ovs agent and dvr_agent classes, so
e.g. two different cookie ids were set on the flows in a physical
bridge.
The same issue occurred when openvswitch was restarted and all bridges
were reconfigured.
Now in such cases a new cookie_id is correctly configured for all
flows.
Conflicts:
neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py
Change-Id: I710f00f0f542bcf7fa2fc60800797b90f9f77e14
Closes-Bug: #1864822
(cherry picked from commit 91f0bf3c85)
Neutron-ovs-agent can now enable IGMP snooping on the integration
bridge if the config option "igmp_snooping_enable" in the OVS section
of the config file is set to True.
It will also set mcast-snooping-disable-flood-unregistered=true so
that flooding of multicast packets to all unregistered ports is
disabled as well.
Both changes are applied on the integration bridge.
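Roughly what this amounts to on the bridge (a sketch using
ovs_lib-style calls; the exact calls are assumed, not copied from the
patch):

    def enable_igmp_snooping(int_br, br_name='br-int'):
        # Turn on multicast snooping on the integration bridge and stop
        # flooding unregistered multicast traffic to every port.
        int_br.set_db_attribute('Bridge', br_name,
                                'mcast_snooping_enable', True)
        int_br.set_db_attribute(
            'Bridge', br_name, 'other_config',
            {'mcast-snooping-disable-flood-unregistered': 'true'})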
Change-Id: I12f4030a35d10d1715d3b4bfb3ed5efb9aa28f2b
Closes-Bug: #1840136
(cherry picked from commit 5b341150e2)
In order to reduce the number of elements retrieved from the DB, this
patch, before processing the VLAN allocations per physical network,
deletes those registers belonging to any unconfigured physical network.
The VLAN registers per physical network are deleted using a bulk
delete operation, to speed up the process.
The missing VLAN registers per network are now created using a bulk
insert operation, available in the ORM. This bulk operation speeds up
the sync process.
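A rough sketch of the two bulk operations (SQLAlchemy-style, assuming
the VlanAllocation model; the real code differs):

    def sync_allocations(session, configured_physnets, missing_rows):
        # One bulk DELETE removes the allocations of physnets that are
        # no longer configured...
        session.query(VlanAllocation).filter(
            ~VlanAllocation.physical_network.in_(configured_physnets)
        ).delete(synchronize_session=False)
        # ...and one bulk INSERT creates all missing VLAN registers.
        session.execute(VlanAllocation.__table__.insert(), missing_rows)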
Conflicts:
neutron/plugins/ml2/drivers/type_vlan.py
Change-Id: I8568e2277e157754aaff87a059a40e34e6a43e2b
Partial-Bug: #1862178
(cherry picked from commit 016e7826f1)
(cherry picked from commit 651eb12bec)
(cherry picked from commit 4fff732b76)
Operators may want to see how long the port processing procedure
takes, since DEBUG logging is typically not enabled in production
environments.
Related-Bug: #1813703
Related-Bug: #1813707
Related-Bug: #1813706
Related-Bug: #1813709
Conflicts:
neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py
Change-Id: I43733546abf5421d0e3f4cd5a959d279e1b89d1e
(cherry picked from commit 8e73de8bc4)
Do not flood the packets to the bridge: since we have the bridge port
list, we can add a simple direct flow to the right port only.
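The idea, sketched with generic ovs_lib-style calls (table number and
names illustrative):

    def add_direct_output_flows(br, ports, table, priority=10):
        # One explicit flow per known (mac, ofport) pair instead of a
        # single flood action for unknown destinations.
        for mac, ofport in ports:
            br.add_flow(table=table, priority=priority,
                        dl_dst=mac, actions='output:%d' % ofport)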
Conflicts:
neutron/agent/linux/openvswitch_firewall/firewall.py
neutron/conf/plugins/ml2/drivers/ovs_conf.py
Closes-Bug: #1732067
Related-Bug: #1841622
Change-Id: I14fefe289a19b718b247bf0740ca9bc47f8903f4
(cherry picked from commit efa8dd0895)
Patch https://review.opendev.org/#/c/697655/ cannot be backported
because it includes an RPC version change. This patch is for the
stable branches.
Currently the ovs agent calls update_device_list with the
agent_restarted flag set only on the first loop iteration. The server
then knows to send the l2pop flooding entries for the network to the
agent. But when a compute node with many instances on many networks
reboots, it takes time to re-add all the active devices, and some may
be re-added after the first loop iteration. The server can then fail
to send the flooding entries, which means there will be no
flood-to-tun flow and broadcasts like dhcp will fail.
This patch fixes that by also setting the agent_restarted flag if
the agent has not received the flooding entries for a network.
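Schematically (flood_entries_received is a hypothetical predicate;
update_device_list and its agent_restarted flag come from the text
above):

    def report_devices(plugin_rpc, ctx, devices_up, devices_down,
                       agent_id, host, iter_num, flood_entries_received):
        # Keep asking for the l2pop flooding entries until they
        # actually arrive, not just on the first loop iteration.
        agent_restarted = iter_num == 0 or not flood_entries_received
        return plugin_rpc.update_device_list(
            ctx, devices_up, devices_down, agent_id, host,
            agent_restarted=agent_restarted)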
Change-Id: Iccc4fe4a785ee042fd76a663d0e76a27facd1809
Closes-Bug: #1853613
(cherry picked from commit bc0ab0fcd7)
(cherry picked from commit aee87e72b1)
The OVS agent processes the port events in a polling loop. It can
happen (and more frequently in a loaded OVS agent) that the "removed"
and "added" events occur in the same polling iteration. Because of
this, the same port is detected as both "removed" and "added".
When the virtual machine is restarted, the port event sequence is
"removed" and then "added". When both events are captured in the same
iteration, the port is already present in the bridge and the port is
discarded from the "removed" list.
Because the port was removed first and then added, the QoS policies no
longer apply (QoS and Queue registers, OF rules). If the QoS policy
does not change, the QoS agent driver will detect it and won't call
the QoS driver methods (based on the OVS agent QoS cache, storing port
and QoS rules). This leads to an unconfigured port.
This patch solves the issue by detecting this double event and
registering it as "removed_and_added". When the "added" port is
handled, the QoS deletion method is called first (if needed) to remove
the unneeded artifacts (OVS registers, OF rules) and remove the QoS
cache entry (port/QoS policy). Then the QoS policy is applied again on
the port.
NOTE: this is going to be quite difficult to test in a fullstack
test.
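A simplified sketch of the detection (structure illustrative):

    def detect_removed_and_added(port_info):
        # Ports reported as both removed and added within one polling
        # iteration; for these, stale QoS state must be deleted before
        # the policy is re-applied on the "added" pass.
        removed = port_info.get('removed', set())
        added = port_info.get('added', set())
        return removed & added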
Conflicts:
neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py
Change-Id: I51eef168fa8c18a3e4cee57c9ff86046ea9203fd
Closes-Bug: #1845161
(cherry picked from commit 50ffa5173d)
(cherry picked from commit 3eceb6d2ae)
(cherry picked from commit 6376391b45)
The OVS agent is a single-thread module executed in an os-ken
AppManager context. os-ken uses, by default (and no other
implementation is available today [1]), "eventlet" threads. Those
threads are scheduled manually by the code itself; the context switch
is done through yielding. The easiest way to do this is by executing:
eventlet.sleep()
If the assigned thread is not ready to take the GIL and does not yield
back to the executor, other threads will starve and eventually time
out.
This patch removes the "sleep" command during the DP retrieval. This
will keep the executor on the current thread and will prevent the
execution timeouts seen in the related bug.
[1] 1f751b2d7d/os_ken/lib/hub.py
Closes-Bug: #1861269
Change-Id: I19e1af1bda788ed970d30ab251e895f7daa11e39
(cherry picked from commit 740741864a)
- This change updates _set_bridge_name to set
the bridge name field in the vif binding details.
- This change adds the integration_bridge name
to the agent configuration report.
Closes-Bug: #1788009
Closes-Bug: #1856152
(cherry picked from commit 995744c576)
Change-Id: I454efcb226745c585935d5bd1b3d378f69a55ca2
- This change adds a max priority flow to drop
all traffic that is associated with the
DEAD VLAN 4095.
- This change is part of a partial mitigation of
bug 1734320. Without this change vlan 4095 traffic
will be dropped via a low priority flow after being
processed by part/all of the openflow pipeline.
By raising the priority and dropping in table 0
we drop invalid packets as soon as they enter
the pipeline.
Change-Id: I3482c7c4f00942828cc9396cd2f3d646c9e8c9d1
Partial-Bug: #1734320
(cherry picked from commit e3dc447b90)
The type of lvm.vlan is int while other_config.get('tag') is a string,
so they can never be equal. We should do a type conversion before
comparing to avoid unnecessary ovsdb operations and flow updates.
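The comparison in question, illustrated (a minimal sketch):

    def tag_matches(lvm_vlan, other_config):
        # OVSDB returns other_config['tag'] as a string (e.g. '100'),
        # while lvm.vlan is an int; cast before comparing to avoid a
        # false mismatch that triggers needless ovsdb/flow updates.
        tag = other_config.get('tag')
        return tag is not None and int(tag) == lvm_vlan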
Change-Id: Ib84da6296ddf3c95be9e9f370eb574bf92ceec15
Closes-Bug: #1843425
(cherry picked from commit 0550c0e1f6)
When a vlan network was created with segmentation_id=0 and without a
physical_network given, it passed the provider segment validation and
the first available segmentation_id was chosen for the network.
The problem was that in such a case all available segmentation ids got
allocated and no other vlan network could be created later.
This patch fixes the validation of segmentation_id when it is set to
the value 0.
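The likely failure mode, sketched (illustrative, not the exact plugin
code):

    def segmentation_id_given(segment):
        # segmentation_id=0 is falsy, so a truthiness check such as
        #     if not segment.get('segmentation_id'): ...
        # wrongly treats an explicit 0 as "not provided" and skips the
        # range validation; an explicit None check avoids that.
        return segment.get('segmentation_id') is not None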
Change-Id: Ic768deb84d544db832367f9a4b84a92729eee620
Closes-bug: #1840895
(cherry picked from commit f01f3ae5dd)
Neutron-ovs-agent configures the physical bridges so that they work
in fail_mode=secure. This means that only packets which match some
OpenFlow rule in the bridge can be processed.
This may cause a problem on hosts with only one physical NIC, where
the same bridge is used to provide control plane connectivity, like
the connection to rabbitmq, and data plane connectivity for VMs.
After e.g. a host reboot the bridge will still be in fail_mode=secure
but there will not be any OpenFlow rules on it, so there will be no
communication to rabbitmq.
With the current order of actions in the __init__ method of the
OVSNeutronAgent class, it first tries to establish the connection to
rabbitmq and only later configures the physical bridges with some
initial OpenFlow rules. In the case described above this will fail, as
there is no connectivity to rabbitmq through the physical bridge.
So this patch changes the order of actions in the __init__ method so
that it first sets up the physical bridges and then configures the rpc
connection.
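Schematically, the new ordering (method names as in the agent; the
surrounding __init__ body omitted):

    def init_connectivity(agent, bridge_mappings):
        # Restore the initial OpenFlow rules on the physical bridges
        # first, then open the RPC connection that, on single-NIC
        # hosts, rides on one of those bridges.
        agent.setup_physical_bridges(bridge_mappings)
        agent.setup_rpc()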
Conflicts:
neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py
Change-Id: I41c02b0164537c5b1c766feab8117cc88487bc77
Closes-Bug: #1840443
(cherry picked from commit d41bd58f31)
(cherry picked from commit 3a2842bdd8)
When a physical bridge is recreated on a host, the ovs agent tries to
reconfigure it.
If there is e.g. a timeout while getting the bridge's datapath_id, a
RuntimeError is raised, which crashed the whole agent.
This patch changes that so the agent does not crash in such a case but
instead tries to reconfigure everything again in the next rpc_loop
iteration.
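In effect (a sketch; the real change wraps the bridge setup path):

    import logging

    LOG = logging.getLogger(__name__)

    def try_reconfigure_bridge(bridge):
        # A transient failure (e.g. an ovsdb timeout while reading the
        # datapath_id) must not kill the agent; report failure so the
        # rpc_loop retries in its next iteration.
        try:
            bridge.get_datapath_id()
            return True
        except RuntimeError:
            LOG.warning("Reconfiguring bridge %s failed; will retry in "
                        "the next rpc_loop iteration.", bridge.br_name)
            return False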
Conflicts:
neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py
neutron/tests/unit/plugins/ml2/drivers/openvswitch/agent/test_ovs_neutron_agent.py
Change-Id: Ic9b17a420068c0c76748e2c24d97be1ed7c460c7
Closes-Bug: #1837380
(cherry picked from commit b63809715a)
The ovs-agent scans and processes the ports during the first
rpc_loop, and a local port update notification is sent out. This
causes those ports to be processed again in the ovs-agent's next
(second) rpc_loop.
This patch passes the restart flag (iteration num 0) down to the
local port_update call trace. After this patch, the local port_update
notification is ignored in the first RPC loop.
Related-Bug: #1813703
Change-Id: Ic5bf718cfd056f805741892a91a8d45f7a6e0db3
(cherry picked from commit eaf3ff5786)
Check the configured and actual number of VFs to prevent device
registration with 0 VFs.
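Roughly (a sketch; names illustrative):

    def vf_count_ok(configured_vfs, actual_vfs):
        # Refuse to register a device whose driver currently reports
        # 0 VFs, or fewer VFs than configured.
        return actual_vfs != 0 and actual_vfs == configured_vfs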
Closes-Bug: #1831622
Change-Id: Ie699d245f8ae2fc1d16b96432d2962788d9dba57
(cherry picked from commit c148c6df46)
This parameter applies to the OVSDB Controller table when the
native openflow driver is used. There are reports that increasing
it can reduce errors on busy systems. This patch also sets the
default value to 10s, which is more than the OVS default of 5s.
See the ovs-vswitchd.conf.db man page for the full description.
Conflicts:
neutron/tests/functional/agent/common/test_ovs_lib.py
Change-Id: If0d42919412dac75deb4d7f484c42cea630fbc59
Partial-Bug: #1817022
(cherry picked from commit 540d00f68e)
(cherry picked from commit 6e661ecd2d)
It may happen that a subnet is connected to a dvr router using an IP
address different from the subnet's gateway_ip.
So in br-tun, arp requests to the dvr router's port should be dropped,
instead of dropping arp requests to the subnet's gateway_ip (or mac in
the case of IPv6).
Conflicts:
neutron/tests/unit/plugins/ml2/drivers/openvswitch/agent/test_ovs_neutron_agent.py
Change-Id: Ida6b7ae53f3fc76f54e389c5f7131b5a66f533ce
Closes-bug: #1831575
(cherry picked from commit ae3aa28f5a)
In the OVS agent, when setting up the ancillary bridges, the parameter
external_id:bridge-id is retrieved. If this parameter is not defined
(e.g. manually created bridges), ovsdbapp writes an error to the logs.
This information is irrelevant and can cause confusion while
debugging.
Change-Id: Ic85db65f651eb67fcb56b937ebe5850ec1e8f29f
Closes-Bug: #1815912
(cherry picked from commit 769e971293)
In order to avoid an inaccurate agent_boot_time setting, this patch
suggests considering the agent as "started" only after the completion
of the initial sync with the server.
Change-Id: Icba05288889219e8a606c3809efd88b2c234bef3
Closes-Bug: #1799178
(cherry picked from commit 8f20963c5b)
On a single compute node, the security group rules can be enormous in
quantity. This patch adds a step-by-step processing method to deal
with a large number of security group rules. It also changes or adds
some LOG messages.
Related-Bug: #1813703
Related-Bug: #1813704
Related-Bug: #1813707
Conflicts:
neutron/common/constants.py
Change-Id: I57bf27ec75cf848271c5a28b22beee12b8bd5faa
(cherry picked from commit 6ac420df7e)
The ovs-agent can be very time-consuming when handling a large number
of ports. At that point, the ovs-agent status report may have exceeded
the set timeout value, and some flow updating operations will not be
triggered. This results in flow loss during agent restart, especially
for host-to-host vxlan tunnel flows.
This fix lets the ovs-agent explicitly indicate, in the first rpc
loop, that its status is restarted. l2pop is then required to update
the fdb entries.
Conflicts:
neutron/plugins/ml2/rpc.py
Closes-Bug: #1813703
Closes-Bug: #1813714
Closes-Bug: #1813715
Closes-Bug: #1794991
Closes-Bug: #1799178
Change-Id: I8edc2deb509216add1fb21e1893f1c17dda80961
(cherry picked from commit a5244d6d44)
The dump-flows action will retrieve a very large set of flow
information if there are an enormous number of ports or openflow
security group rules. For now we can hit some known exceptions during
such an action, for instance memory issues or timeout issues.
So after this patch, the cleanup of a bridge's stale flows is done one
table at a time. But note, this is only supported for the 'native'
OpenFlow interface driver.
Related-Bug: #1813703
Related-Bug: #1813712
Related-Bug: #1813709
Related-Bug: #1813708
Change-Id: Ie06d1bebe83ffeaf7130dcbb8ca21e5e59a220fb
(cherry picked from commit f898ffd71f)
The native OVS/ofctl controllers talk to the bridges using a
datapath-id, instead of the bridge name. The datapath ID is
auto-generated based on the MAC address of the bridge's NIC.
In the case where bridges are on VLAN interfaces, they would
have the same MACs, therefore the same datapath-id, causing
flows for one physical bridge to be programmed on the others.
The datapath-id is a 64-bit field, with the lower 48 bits being
the MAC. We set the upper 12 unused bits to identify each
unique physical bridge.
This could also be fixed manually using ovs-vsctl set, but
it might be beneficial to automate this in the code.
ovs-vsctl set bridge <mybr> other-config:datapath-id=<datapathid>
You can change this yourself using the above command.
You can view/verify the current datapath-id via
ovs-vsctl get Bridge br-vlan datapath-id
"00006ea5a4b38a4a"
(please note that other-config is needed in the set command, but not
in get)
Closes-Bug: #1697243
Co-Authored-By: Rodolfo Alonso Hernandez <ralonsoh@redhat.com>
Change-Id: I575ddf0a66e2cfe745af3874728809cf54e37745
(cherry picked from commit 379a9faf62)
If a policy rule is deleted while the l2-agent is stopped, the qos
rule that was applied to the port should be cleared after the
l2-agent starts.
Change-Id: Iaaff10dfa8ac6ab8c9dead3124e2bb3caa03a665
Closes-Bug: #1810025
(cherry picked from commit 58de79a58b)
This fixes a race condition leading to a lack of fdb entries on the
agent after an OVS restart, if the agent managed to handle all ports
before sending the state report with start_flag set to True.
Change-Id: I943f8d805630cdfbefff9cff1fb4bce89210618b
Closes-Bug: #1808136
(cherry picked from commit 3995abefb1)
When the ovs-vswitchd process is restarted, neutron-ovs-agent handles
it and reconfigures all ports and openflow rules in the bridges.
Unfortunately, when tunnel networks are used together with the L2pop
mechanism driver, this driver will not notice that the agent lost all
its openflow config and will not send all the fdb entries which
should be added on the host.
In such a case the L2pop mechanism driver should behave the same way
as when neutron-ovs-agent is restarted and send all fdb_entries to
the agent.
This patch "simulates" the agent start flag when an ovs restart is
handled, so neutron-server will send all fdb_entries to the agent and
the tunnels' openflow rules can be reconfigured properly.
Change-Id: I5f1471e20bbad90c4cdcbc6c06d3a4412db55b2a
Closes-bug: #1804842
(cherry picked from commit ae031d1886)
Sometimes, due to incorrect NIC driver behavior, VFs might be
missing from the 'ip link show' output. This may lead to a VM boot
failure as the agent will just skip such missing devices.
Make the agent do a resync in case a newly added device
'disappears' during processing, which should cause a MAC to
get assigned.
Co-authored-by: Oleg Bondarev <obondarev@mirantis.com>
Change-Id: I148b5a025fc388821fd1269d02908cc8ce1882fe
Closes-bug: #1784484
(cherry picked from commit eea5aaac4f)
This is a revision of I7b24a159962af7b58c096a1b2766e2169e9f8aed.
Br-int's flow tables are already uninstalled in setup_integration_br,
and setup_integration_br installs some default flows. If we still
uninstall the flow tables of br-int in setup_dvr_flows_on_integ_br,
these default flows will be missing.
Closes-Bug: #1775146
Change-Id: I71c1f9034dfc913b9e9ae17cc8f6bd084c9ee580
(cherry picked from commit 760870b6c2)
With high concurrency, more than one port may be activated on an
OVS agent at the same time (like a VM port plus a DVR port), so this
patch mitigates the condition by checking for one or two first active
ports.
Given that the condition also contains "or self.agent_restarted(context)",
which makes it True for the first 180 sec (by default) after an agent
restart, the downside of changing 1 to 2 should be negligible.
Please see the bug for more details on the issue.
Closes-Bug: #1789846
Change-Id: Ieab0186cbe05185d47bbf5a31141563cf923f66f
(cherry picked from commit b32db30874)
This change is a follow-up to Ib6ced838a7ec6d5c459a8475318556001c31bdf,
reintroducing a single place for applying the NORMAL action to
egress traffic, which is necessary to fix a regression introduced
by Ib6ced838a7ec6d5c459a8475318556001c31bdf.
Change-Id: I60d299275effd9ef35c8007773d3c9fcabfa50fa
Partial-Bug: 1789878
When an HA router's interface on a host is going DOWN but the router
is still available on this host, the L2 population mechanism driver
will now send to the other hosts the info to remove the fdb unicast
entries pointing to this port on the host.
It will not send FLOODING_ENTRY, because this port is still on the
host, just in standby mode, and it might be transformed to master in
the future.
This solves an issue with the migration of a router from Legacy to
HA. In such a case, the port which was originally attached to the
legacy router is transformed into an HA backup port before its status
changes to DOWN.
Now the unicast entries to this port and the backup node are removed
properly, so packets to the HA router will really be sent to the host
which is the master node for the router.
Closes-Bug: #1785582
Change-Id: Icc14e5f5d40fc6fbb49e0f7b18cc3b15ebec8508
(cherry picked from commit 6c300b1a9b)
On hosts with the dvr_snat agent mode, after restarting the OVS
agent, sometimes the SNAT port is processed first instead of the
distributed port.
The subnet_info is cached locally via get_subnet_for_dvr when either
of these ports is processed. However, it returns the MAC address of
the port used for the query as the gateway for the subnet. With the
SNAT port, this records the wrong MAC as the gateway, causing some
flows, such as the DVR flows on br-int for local src VMs, to have the
wrong MAC.
This patch fixes get_subnet_for_dvr by passing fixed_ips as None for
the csnat port, which causes the server-side handler to fill in the
subnet's actual gateway rather than using the port's MAC.
Change-Id: If045851819fd53c3b9a1506cc52bc1757e6d6851
Closes-Bug: #1783470
(cherry picked from commit c6de172e58)
The Neutron OVS agent logs can get flooded with KeyErrors as the
'_get_port_info' method skips the added/removed dict items if no
ports have been added/removed; consumers expect those keys to be
present, even if they are just empty sets.
This change ensures that those port info dict fields are always set.
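The gist of the fix (a minimal sketch):

    def get_port_info(registered_ports, cur_ports):
        # Always populate 'added' and 'removed', even as empty sets,
        # so callers can index them without risking a KeyError.
        return {'current': cur_ports,
                'added': cur_ports - registered_ports,
                'removed': registered_ports - cur_ports}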
Closes-Bug: #1783556
Change-Id: I9e5325aa2d8525231353ba451e8ea895be51b1ca
(cherry picked from commit da5b13df2b)
The CommonAgentLoop class contains logic to detect whether a tap
device was changed locally by comparing its timestamp with the
previous one.
Sometimes the timestamp value can be None, depending on the timing
(see bug/1781129), but the current _get_devices_locally_modified
logic cannot detect a local change from None to something else,
because it does not always compare when the previous timestamp value
was None.
In order not to miss updated devices, it is better not to use
dict.get() to determine whether the previous iteration recorded a
timestamp.
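Sketch of the safer comparison (illustrative):

    def devices_locally_modified(previous, current):
        # A previously stored None timestamp must still be compared,
        # so use a membership test rather than dict.get(), which
        # cannot distinguish "missing" from "recorded as None".
        return {dev for dev, ts in current.items()
                if dev in previous and previous[dev] != ts}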
Change-Id: Ib0361ad5c281f88558e8e048cfec588b9f9b1de4
Closes-Bug: #1781129
As part of the implementation of multiple port bindings [1], add binding
activation support to the linux bridge agent. This will enable linux
bridge agents to execute the complete sequence of steps outlined in
[1] during an instance migration:
1) Create inactive port bindings for destination host
2) Migrate the instance to the destination host and plug its VIFs
3) Activate the port bindings in the destination host
4) Delete the port bindings for the source host
[1] https://review.openstack.org/#/c/309416/
Change-Id: I2c937cc0a551e5ce0e8534c4dd4384ec2ca92da1
Partial-Bug: #1580880
As part of the implementation of multiple port bindings [1], add binding
activation support to the OVS agent. This will enable OVS agents to
execute the complete sequence of steps outlined in [1] during an
instance migration:
1) Create inactive port bindings for destination host
2) Migrate the instance to the destination host and plug its VIFs
3) Activate the port bindings in the destination host
4) Delete the port bindings for the source host
[1] https://review.openstack.org/#/c/309416/
Change-Id: Iabca39364ec95633b2a8891fc295b3ada5f4f5e0
Partial-Bug: #1580880
The externally consumed APIs from neutron.db.api were rehomed into
neutron-lib with https://review.openstack.org/#/c/557040/
This patch consumes the retry_db_errors function from lib by:
- Removing retry_db_errors from neutron.db.api
- Updating the imports for retry_db_errors to use it from lib
- Using the DB API retry fixture from lib in the UTs where applicable
- Removing the UTs for neutron.db.api as they are now covered in lib
NeutronLibImpact
Change-Id: I1feb842d3e0e92c945efb01ece29856335a398fe
To support multiple port bindings, the L2 agents have to have the
capability to handle binding deactivation notifications from the
Neutron server. This patch adds the necessary code to the OVS agent.
After receiving the notification, the agent un-plugs the corresponding
VIF from the integration bridge.
Change-Id: I78178de2039ccabc649558de4f6549a38de90418
Partial-Bug: #1580880
This commit adds a binding_deactivate method to the Linux bridge
agent to receive messages from the ML2 plugin when a binding is
deactivated for a port. After receiving that message, the agent
un-plugs the corresponding tap interface from the port's network
bridge.
To support this, a binding_deactivate method is also added to the
agents notifier. Finally, the activate method in the ML2 plugin is
updated to use the binding_deactivate method in the agents notifier.
Change-Id: I3f4e34766791c472a2c81842190094f697baa05c
Partial-Bug: #1580880
The remainder of the neutron.plugins.common.utils were rehomed into
neutron-lib with [1][2]. This patch consumes them by using the
functions from neutron-lib and removing the
neutron.plugins.common.utils module altogether, as it is now fully
rehomed.
NeutronLibImpact
[1] https://review.openstack.org/#/c/560950/
[2] https://review.openstack.org/#/c/554546/
Change-Id: Ic0f7b37861f078ce8c5ee92d97e977b8d2b468ad