neutron/neutron/plugins/ml2
venkata anil 26d8702b9d l2pop fdb flows for HA router ports
This patch makes L3 HA failover not dependent on neutron components
during failover.

All HA agents (active and backup) call update_device_up/down after
wiring the ports, but the l2pop driver is invoked only for the active
agent, as the port binding in the DB reflects the active agent. l2pop
then creates unicast and multicast flows for the active agent.
On failover, flows to the new active agent have to be created. For
this to happen, the database, messaging server, neutron-server and
destination L3 agent must all be alive during failover. This creates
two issues:
1) When any of the above resources (e.g. neutron-server, ...) is dead,
   flows between the new master and the other agents won't be created
   and L3 HA failover does not work. In the same scenario, L3 HA
   failover works if l2pop is disabled.
2) Packet loss during failover is higher, as the above neutron
   resources have to interact multiple times before the l2 flows are
   created.

In this change, we allow the plugin to notify l2pop when
update_device_up/down is called by backup agents as well. l2pop then
creates flood flows to all HA agents (both active and backup). l2pop
won't create a unicast flow for this port; instead, the unicast flow
is created by the learning action of table 10 when keepalived sends a
GARP after assigning the IP address to the master router's qr-xx port.
As the flood flows are already created and the unicast flow is added
dynamically, L3 HA failover no longer depends on l2pop.
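
In rough terms, the RPC handler stops filtering out backup agents for
HA router ports before notifying l2pop. A minimal sketch of the idea
(the helper names here are illustrative, not the actual patch):

    def update_device_up(self, rpc_context, **kwargs):
        port = self._get_port(rpc_context, kwargs['device'])
        agent_host = kwargs['host']
        # Previously l2pop was reached only when the reporting agent
        # matched the bound (active) agent. HA router ports now pass
        # through for backup agents too, so flood entries exist on
        # every HA agent before a failover ever happens.
        if (port['binding:host_id'] == agent_host
                or self._is_ha_router_port(port)):
            self.notify_l2pop_port_up(port, agent_host)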

This solves two issues:
1) With L3 HA + l2pop, failover works even if any of the above agents
   or processes is dead.
2) Failover time is reduced, as we do not depend on neutron to create
   flows during failover.
We use the L3HARouterAgentPortBinding table to get all HA agents of a
router port, as sketched below. An HA router port on a backup agent is
also counted in l2pop's distributed_active_network_ports and
agent_network_active_port_count.
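
Collecting those agents is a simple query against the binding table.
A sketch, assuming the L3HARouterAgentPortBinding model from
neutron.db.l3_hamode_db and a plain DB session:

    from neutron.db.l3_hamode_db import L3HARouterAgentPortBinding

    def get_ha_agents_for_router(session, router_id):
        # One binding row exists per hosting L3 agent, so the result
        # covers the active agent and every backup agent.
        bindings = (session.query(L3HARouterAgentPortBinding).
                    filter_by(router_id=router_id))
        return [binding.l3_agent_id for binding in bindings]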

Closes-bug: #1522980
Closes-bug: #1602614
Change-Id: Ie1f5289390b3ff3f7f3ed7ffc8f6a8258ee8662e
2016-09-08 22:30:16 +00:00
common Use MultipleExceptions from neutron-lib 2016-08-27 22:36:09 -04:00
drivers l2pop fdb flows for HA router ports 2016-09-08 22:30:16 +00:00
extensions Avoid KeyError when accessing "dns_name" as it may not exist 2016-08-18 15:43:53 +01:00
README Metaplugin removal 2015-07-23 19:05:05 +09:00
__init__.py Empty files should not contain copyright or license 2014-10-20 00:50:32 +00:00
config.py Change tunnel MTU calculation to support IPv6 2016-07-05 18:07:29 -04:00
db.py Enable create and delete segments in ML2 2016-08-28 01:24:56 -04:00
driver_api.py Typo fix 2016-06-16 11:42:57 +03:00
driver_context.py Create segment_host mapping after new network 2016-06-30 18:08:16 -06:00
managers.py Enable create and delete segments in ML2 2016-08-28 01:24:56 -04:00
models.py Switch to neutron-lib for model_base 2016-08-31 11:12:18 -04:00
plugin.py Remove workaround for bug/1543094 2016-09-02 16:32:51 +00:00
rpc.py l2pop fdb flows for HA router ports 2016-09-08 22:30:16 +00:00

README

The Modular Layer 2 (ML2) plugin is a framework allowing OpenStack
Networking to simultaneously utilize the variety of layer 2 networking
technologies found in complex real-world data centers. It supports the
Open vSwitch, Linux bridge, and Hyper-V L2 agents, replacing and
deprecating the monolithic plugins previously associated with those
agents, and can also support hardware devices and SDN controllers. The
ML2 framework is intended to greatly simplify adding support for new
L2 networking technologies, requiring much less initial and ongoing
effort than would be required for an additional monolithic core
plugin. It is also intended to foster innovation through its
organization as optional driver modules.

The ML2 plugin supports all the non-vendor-specific neutron API
extensions, and works with the standard neutron DHCP agent. It
utilizes the service plugin interface to implement the L3 router
abstraction, allowing use of either the standard neutron L3 agent or
alternative L3 solutions. Additional service plugins can also be used
with the ML2 core plugin.

Drivers within ML2 implement separately extensible sets of network
types and of mechanisms for accessing networks of those
types. Multiple mechanisms can be used simultaneously to access
different ports of the same virtual network. Mechanisms can utilize L2
agents via RPC and/or interact with external devices or
controllers. By utilizing the multiprovidernet extension, virtual
networks can be composed of multiple segments of the same or different
types. Type and mechanism drivers are loaded as Python entry points
using the stevedore library.
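
For instance, loading the configured type drivers looks roughly like
this (a sketch; the 'neutron.ml2.type_drivers' entry point namespace
comes from neutron's setup.cfg, and the driver names from the
type_drivers option in ml2_conf.ini):

    from stevedore import named

    type_manager = named.NamedExtensionManager(
        'neutron.ml2.type_drivers',   # entry point namespace
        ['flat', 'vlan', 'vxlan'],    # names from the type_drivers option
        invoke_on_load=True)          # instantiate each driver class

    for ext in type_manager:
        print(ext.name, ext.obj)      # alias and driver instance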

Each available network type is managed by an ML2 type driver.  Type
drivers maintain any needed type-specific network state, and perform
provider network validation and tenant network allocation. As of the
Havana release, drivers for the local, flat, vlan, gre, and vxlan
network types are included.
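
A skeletal type driver might look like the following sketch; the
method names follow the abstract TypeDriver interface in
driver_api.py, while the 'noop' type itself is invented for
illustration:

    from neutron.plugins.ml2 import driver_api as api

    class NoopTypeDriver(api.TypeDriver):
        def get_type(self):
            return 'noop'

        def initialize(self):
            pass

        def is_partial_segment(self, segment):
            return False

        def validate_provider_segment(self, segment):
            pass  # raise InvalidInput for bad provider attributes

        def reserve_provider_segment(self, session, segment):
            return segment

        def allocate_tenant_segment(self, session):
            return None  # this type cannot be auto-allocated

        def release_segment(self, session, segment):
            pass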

Each available networking mechanism is managed by an ML2 mechanism
driver. All registered mechanism drivers are called twice when
networks, subnets, and ports are created, updated, or deleted. They
are first called as part of the DB transaction, where they can
maintain any needed driver-specific state. Once the transaction has
been committed, they are called again, at which point they can
interact with external devices and controllers. Mechanism drivers are
also called as part of the port binding process, to determine whether
the associated mechanism can provide connectivity for the network, and
if so, the network segment and VIF driver to be used. The Havana
release includes mechanism drivers for the Open vSwitch, Linux bridge,
and Hyper-V L2 agents, and for vendor switches/controllers/etc.
It also includes an L2 Population mechanism driver that
can help optimize tunneled virtual network traffic.
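
The two-phase pattern and port binding can be seen in a skeletal
mechanism driver. The method names and the PortContext calls below
follow driver_api.py; the driver itself is invented for illustration:

    from neutron.plugins.ml2 import driver_api as api

    class ExampleMechanismDriver(api.MechanismDriver):
        def initialize(self):
            pass

        def create_port_precommit(self, context):
            # Runs inside the DB transaction: keep it fast and
            # raise an exception to abort the operation.
            pass

        def create_port_postcommit(self, context):
            # Runs after the transaction commits: safe to call out
            # to external devices or controllers here.
            pass

        def bind_port(self, context):
            # Offer a binding on the first segment this driver can
            # handle, naming the VIF type the L2 agent understands.
            for segment in context.segments_to_bind:
                if segment[api.NETWORK_TYPE] == 'vlan':
                    context.set_binding(segment[api.ID],
                                        'ovs',  # portbindings VIF type
                                        {'port_filter': True})
                    return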

For additional information regarding the ML2 plugin and its collection
of type and mechanism drivers, see the OpenStack manuals and
http://wiki.openstack.org/wiki/Neutron/ML2.