neutron/neutron/plugins/ml2
Swaminathan Vasudevan 02d31ffb8a DVR: Inter Tenant Traffic between networks not possible with shared net
Inter-tenant traffic between two different networks that belong
to two different tenants is not possible when the networks are
connected through a shared network and are internally linked by
DVR routers.

This issue can be seen in a multinode environment where there
is network isolation.

The root cause is that the ports connecting the two routers have
two different IPs, and DVR does not expose router interfaces
outside a compute node, so the traffic is blocked by the OVS
tunnel-bridge rules.

This patch fixes the issue by not applying the DVR-specific
rules in the tunnel bridge to the shared-network ports that
connect the routers.

Closes-Bug: #1751396
Change-Id: I0717f29209f1354605d2f4128949ddbaefd99629
(cherry picked from commit d019790fe4)
2018-03-20 17:48:26 +00:00
common             Make l2/l3 operations retriable at plugin level                            2016-09-12 07:45:38 +00:00
drivers            DVR: Inter Tenant Traffic between networks not possible with shared net   2018-03-20 17:48:26 +00:00
extensions         use ml2 driver api from neutron-lib                                        2017-11-10 08:41:28 -07:00
README             Metaplugin removal                                                         2015-07-23 19:05:05 +09:00
__init__.py        Empty files should not contain copyright or license                       2014-10-20 00:50:32 +00:00
db.py              Revert "Integration of (Distributed) Port Binding OVO"                    2018-01-27 18:19:20 -06:00
driver_context.py  Revert "Integration of (Distributed) Port Binding OVO"                    2018-01-27 18:19:20 -06:00
managers.py        fix same mechanism driver called twice bug                                 2018-02-01 18:51:39 +08:00
models.py          Revert "Integration of (Distributed) Port Binding OVO"                    2018-01-27 18:19:20 -06:00
ovo_rpc.py         ovo_rpc: Avoid flooding logs with semantic violation warning               2017-07-10 16:25:56 +09:00
plugin.py          Add notification for floatingip update/delete                              2018-02-01 11:55:49 -08:00
rpc.py             use qos constants from neutron-lib                                         2017-10-26 19:57:19 +00:00

README

The Modular Layer 2 (ML2) plugin is a framework allowing OpenStack
Networking to simultaneously utilize the variety of layer 2 networking
technologies found in complex real-world data centers. It supports the
Open vSwitch, Linux bridge, and Hyper-V L2 agents, replacing and
deprecating the monolithic plugins previously associated with those
agents, and can also support hardware devices and SDN controllers. The
ML2 framework is intended to greatly simplify adding support for new
L2 networking technologies, requiring much less initial and ongoing
effort than would be required for an additional monolithic core
plugin. It is also intended to foster innovation through its
organization as optional driver modules.

The ML2 plugin supports all the non-vendor-specific neutron API
extensions, and works with the standard neutron DHCP agent. It
utilizes the service plugin interface to implement the L3 router
abstraction, allowing use of either the standard neutron L3 agent or
alternative L3 solutions. Additional service plugins can also be used
with the ML2 core plugin.
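
For example, a typical deployment selects ML2 as the core plugin and
adds the L3 router service plugin in neutron.conf (a minimal sketch;
option values vary by deployment):

    [DEFAULT]
    core_plugin = ml2
    service_plugins = router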

Drivers within ML2 implement separately extensible sets of network
types and of mechanisms for accessing networks of those
types. Multiple mechanisms can be used simultaneously to access
different ports of the same virtual network. Mechanisms can utilize L2
agents via RPC and/or interact with external devices or
controllers. By utilizing the multiprovidernet extension, virtual
networks can be composed of multiple segments of the same or different
types. Type and mechanism drivers are loaded as Python entrypoints
using the stevedore library.
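
As a rough illustration of the entrypoint mechanism, a single driver
can be loaded by namespace and alias with stevedore (a minimal
sketch; 'neutron.ml2.type_drivers' is the namespace ML2 registers in
its setup.cfg, and 'vxlan' is just one example alias):

    # Minimal sketch: load one ML2 type driver through its entrypoint.
    from stevedore import driver

    mgr = driver.DriverManager(
        namespace='neutron.ml2.type_drivers',  # entrypoint namespace
        name='vxlan',                          # entrypoint alias
        invoke_on_load=True,                   # instantiate the driver
    )
    print(mgr.driver.get_type())  # -> 'vxlan'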

Each available network type is managed by an ML2 type driver.  Type
drivers maintain any needed type-specific network state, and perform
provider network validation and tenant network allocation. As of the
Havana release, drivers for the local, flat, vlan, gre, and vxlan
network types are included.
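
A skeletal custom type driver might look like the sketch below
(assuming the abstract TypeDriver class exposed via
neutron_lib.plugins.ml2.api; the 'dummy' network type is
hypothetical and the method signatures should be treated as
assumptions, not a definitive reference):

    # Skeletal sketch of a custom type driver; 'dummy' is hypothetical.
    from neutron_lib.plugins.ml2 import api

    class DummyTypeDriver(api.TypeDriver):

        def get_type(self):
            return 'dummy'

        def initialize(self):
            # Set up any type-specific state (e.g. allocation tables).
            pass

        def is_partial_segment(self, segment):
            return False

        def validate_provider_segment(self, segment):
            # Reject provider attributes this type does not understand.
            pass

        def reserve_provider_segment(self, session, segment):
            return segment

        def allocate_tenant_segment(self, session):
            return {api.NETWORK_TYPE: 'dummy'}

        def release_segment(self, session, segment):
            pass

        def get_mtu(self, physical_network=None):
            return 0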

Each available networking mechanism is managed by an ML2 mechanism
driver. All registered mechanism drivers are called twice when
networks, subnets, and ports are created, updated, or deleted. They
are first called as part of the DB transaction, where they can
maintain any needed driver-specific state. Once the transaction has
been committed, they are called again, at which point they can
interact with external devices and controllers. Mechanism drivers are
also called as part of the port binding process, to determine whether
the associated mechanism can provide connectivity for the network, and
if so, the network segment and VIF driver to be used. The Havana
release includes mechanism drivers for the Open vSwitch, Linux bridge,
and Hyper-V L2 agents, as well as for various vendor switches and
controllers.
It also includes an L2 Population mechanism driver that
can help optimize tunneled virtual network traffic.
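
For illustration, a no-op mechanism driver following this two-phase
pattern could be sketched as follows (assuming the abstract
MechanismDriver class from neutron_lib.plugins.ml2.api; the driver
itself is hypothetical and binds with the OVS VIF type purely as an
example):

    # Hypothetical no-op mechanism driver showing the two-phase calls
    # and port binding; not an actual in-tree driver.
    from neutron_lib.api.definitions import portbindings
    from neutron_lib.plugins.ml2 import api

    class SketchMechanismDriver(api.MechanismDriver):

        def initialize(self):
            pass

        def create_network_precommit(self, context):
            # Runs inside the DB transaction: record driver state only;
            # never call out to external controllers here.
            pass

        def create_network_postcommit(self, context):
            # Runs after the commit: safe to talk to external devices.
            pass

        def bind_port(self, context):
            # Offer a binding on the first segment this driver handles.
            for segment in context.segments_to_bind:
                context.set_binding(segment[api.ID],
                                    portbindings.VIF_TYPE_OVS,
                                    {portbindings.CAP_PORT_FILTER: True})
                return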

For additional information regarding the ML2 plugin and its collection
of type and mechanism drivers, see the OpenStack manuals and
http://wiki.openstack.org/wiki/Neutron/ML2.