neutron/neutron/plugins/ml2
Sławek Kapłoński 39b9197f6f Don't set administratively disabled ports as ACTIVE
There was a race condition when a port was updated to be
administratively disabled.
When neutron-server receives a port update call that sets
admin_state_up=False on a port, it sends a PORT_UPDATE
notification to the agents.

Both the L2 and DHCP agents then start processing the port.
The L2 agent asks neutron-server for device details, and while
handling this call the server sets the port's status to DOWN
because its admin_state_up is False.
The problem is that, sometimes just after that, the DHCP agent
sends a notification to neutron-server that provisioning for
this port is finished.
Since there is no other provisioning block in the DB (because
this is just a port update), neutron-server then sets the port's
status to ACTIVE.

This patch fixes the issue by allowing only administratively
enabled ports to transition to ACTIVE.

Change-Id: If506e0ff68fc49748f19618470c85901339a419b
Closes-Bug: #1757089
(cherry picked from commit 2a1319ab7a)
2018-06-21 07:40:49 +00:00
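
A minimal sketch of the guard described in the commit message above
(illustrative only, not the actual patch; the function name and the
port dict shape are assumptions):

    # Illustrative sketch only; the real change lives in plugin.py's
    # handling of provisioning-complete events.
    from neutron_lib import constants

    def status_after_provisioning_complete(port):
        """Status a port should get once its provisioning blocks are
        cleared: only administratively enabled ports become ACTIVE."""
        if port.get('admin_state_up'):
            return constants.PORT_STATUS_ACTIVE
        # An administratively disabled port stays DOWN even if the
        # DHCP agent reports that provisioning is finished.
        return constants.PORT_STATUS_DOWN
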
common Make l2/l3 operations retriable at plugin level 2016-09-12 07:45:38 +00:00
drivers Don't delete flows on ports which were on dead vlan during plug 2018-05-17 09:16:23 +00:00
extensions Fix port deletion when dns_integration is enabled 2017-11-06 11:12:33 +00:00
README Metaplugin removal 2015-07-23 19:05:05 +09:00
__init__.py Empty files should not contain copyright or license 2014-10-20 00:50:32 +00:00
config.py Change tunnel MTU calculation to support IPv6 2016-07-05 18:07:29 -04:00
db.py Add some bulk lookup methods to ML2 for RPC handling 2017-05-23 10:30:06 -07:00
driver_api.py Update docstring in validate_provider_segment 2017-01-20 02:40:51 +00:00
driver_context.py Allow offloading lookups in driver contexts 2017-05-16 14:36:12 -07:00
managers.py Merge "ML2: Lower log level of "Host filtering is disabled" message" 2017-02-03 09:18:27 +00:00
models.py Switch to 'subquery' for 1-M relationships 2017-02-14 15:27:07 +00:00
ovo_rpc.py Make ML2 OVO push notification asynchronous 2017-03-01 19:45:41 +00:00
plugin.py Don't set administratively disabled ports as ACTIVE 2018-06-21 07:40:49 +00:00
rpc.py ml2: fix update_device_up to send lm events with linux bridge 2018-04-16 14:12:10 +01:00

README

The Modular Layer 2 (ML2) plugin is a framework allowing OpenStack
Networking to simultaneously utilize the variety of layer 2 networking
technologies found in complex real-world data centers. It supports the
Open vSwitch, Linux bridge, and Hyper-V L2 agents, replacing and
deprecating the monolithic plugins previously associated with those
agents, and can also support hardware devices and SDN controllers. The
ML2 framework is intended to greatly simplify adding support for new
L2 networking technologies, requiring much less initial and ongoing
effort than would be required for an additional monolithic core
plugin. It is also intended to foster innovation through its
organization as optional driver modules.

The ML2 plugin supports all the non-vendor-specific neutron API
extensions, and works with the standard neutron DHCP agent. It
utilizes the service plugin interface to implement the L3 router
abstraction, allowing use of either the standard neutron L3 agent or
alternative L3 solutions. Additional service plugins can also be used
with the ML2 core plugin.
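
For example, ML2 is typically enabled as the core plugin alongside
the L3 router service plugin in neutron.conf (an illustrative
excerpt; plugin aliases may vary by deployment):

    [DEFAULT]
    core_plugin = ml2
    service_plugins = router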

Drivers within ML2 implement separately extensible sets of network
types and of mechanisms for accessing networks of those
types. Multiple mechanisms can be used simultaneously to access
different ports of the same virtual network. Mechanisms can utilize L2
agents via RPC and/or interact with external devices or
controllers. By utilizing the multiprovidernet extension, virtual
networks can be composed of multiple segments of the same or different
types. Type and mechanism drivers are loaded as Python entry points
using the stevedore library.
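
The driver sets are selected by entry point name in the [ml2]
section of the plugin configuration (an illustrative ml2_conf.ini
excerpt; the driver lists shown are only one possible combination):

    [ml2]
    type_drivers = flat,vlan,vxlan
    tenant_network_types = vxlan
    mechanism_drivers = openvswitch,linuxbridge,l2population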

Each available network type is managed by an ML2 type driver.  Type
drivers maintain any needed type-specific network state, and perform
provider network validation and tenant network allocation. As of the
Havana release, drivers for the local, flat, vlan, gre, and vxlan
network types are included.
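
Each type driver typically has its own configuration section for the
segment ranges it may allocate to tenant networks (illustrative
values only):

    [ml2_type_vlan]
    network_vlan_ranges = physnet1:1000:2999

    [ml2_type_vxlan]
    vni_ranges = 1001:2000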

Each available networking mechanism is managed by an ML2 mechanism
driver. All registered mechanism drivers are called twice when
networks, subnets, and ports are created, updated, or deleted. They
are first called as part of the DB transaction, where they can
maintain any needed driver-specific state. Once the transaction has
been committed, they are called again, at which point they can
interact with external devices and controllers. Mechanism drivers are
also called as part of the port binding process, to determine whether
the associated mechanism can provide connectivity for the network, and
if so, the network segment and VIF driver to be used. The Havana
release includes mechanism drivers for the Open vSwitch, Linux bridge,
and Hyper-V L2 agents, and for vendor switches/controllers/etc.
It also includes an L2 Population mechanism driver that
can help optimize tunneled virtual network traffic.
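
A schematic example of such a driver is shown below (a minimal
sketch only, assuming the MechanismDriver base class and constants
exposed by driver_api.py; the VIF type and the segment check are
arbitrary illustrations, not a supported driver):

    # Schematic example: method names follow the MechanismDriver
    # hooks described above; a real driver needs far more care.
    from neutron.plugins.ml2 import driver_api as api


    class SketchMechanismDriver(api.MechanismDriver):

        def initialize(self):
            # One-time setup; no DB transaction is in progress here.
            pass

        def create_port_precommit(self, context):
            # Runs inside the DB transaction: record driver-specific
            # state, but do not talk to external devices here.
            pass

        def create_port_postcommit(self, context):
            # Runs after the transaction commits: safe to call out
            # to an external controller or device.
            pass

        def bind_port(self, context):
            # Offer a binding for the first segment this driver can
            # actually provide connectivity for.
            for segment in context.segments_to_bind:
                if segment[api.NETWORK_TYPE] in ('flat', 'vlan'):
                    context.set_binding(segment[api.ID],
                                        'ovs',  # assumed VIF type
                                        {'port_filter': True})
                    return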

For additional information regarding the ML2 plugin and its collection
of type and mechanism drivers, see the OpenStack manuals and
http://wiki.openstack.org/wiki/Neutron/ML2.