SDN-based Virtual Router add-on for OpenStack Neutron
- Free software: Apache license
- Homepage: http://launchpad.net/dragonflow
- Source: http://git.openstack.org/cgit/openstack/dragonflow
- Bugs: http://bugs.launchpad.net/dragonflow
Documentation:
- Solution Overview Presentation
- Solution Overview Blog Post
- Deep-Dive Introduction 1 Blog Post
- Deep-Dive Introduction 2 Blog Post
- Kilo-Release Blog Post
Overview
Dragonflow is an implementation of a fully distributed virtual router for OpenStack Neutron, based on a Software-Defined Networking Controller (SDNC) design.
The main purpose of Dragonflow is to simplify virtual router management while improving performance and scale, eliminating the single point of failure, and removing the notorious network node bottleneck.
The method is based on separating the routing control plane from the data plane. This is accomplished by implementing the routing logic as distributed forwarding rules on the virtual switches. In OpenFlow, these rules are called flows; put simply, the virtual router is implemented using OpenFlow flows.
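To make the idea concrete, here is a minimal sketch (not Dragonflow's actual code; the table number, field names, and helper are hypothetical) of how a routing decision can be expressed as a single OpenFlow-style flow entry on the virtual switch:

```python
# Hypothetical sketch: an OpenFlow-style flow entry that routes traffic
# between two tenant subnets directly on the virtual switch, instead of
# sending it through a namespace-based router. Table numbers and field
# names are illustrative only, not Dragonflow's real pipeline.

def l3_flow(src_subnet, dst_subnet, dst_mac, out_port):
    """Build a flow entry that rewrites the destination MAC and forwards."""
    return {
        "table": 20,                    # hypothetical L3 forwarding table
        "priority": 100,
        "match": {
            "eth_type": 0x0800,         # IPv4
            "ipv4_src": src_subnet,
            "ipv4_dst": dst_subnet,
        },
        "actions": [
            ("set_field", "eth_dst", dst_mac),  # next-hop MAC rewrite
            ("dec_ttl",),                       # router semantics: TTL - 1
            ("output", out_port),               # deliver on the local port
        ],
    }

flow = l3_flow("10.0.1.0/24", "10.0.2.0/24", "fa:16:3e:00:00:02", 7)
```

Because the match/rewrite/forward work all happens in the switch's flow tables, no packet has to traverse a separate router namespace.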
In contrast to the standard DVR implementation, Dragonflow eliminates the use of namespaces. A diagram showing Dragonflow's components and overall architecture can be seen here:
Perhaps the most important part of the solution is the OpenFlow pipeline installed into the integration bridge at bootstrap. This pipeline controls all traffic in the OVS integration bridge (br-int) and works in the following manner:
1) Classify the traffic
2) Forward to the appropriate element:
   1. If it is ARP, forward to the ARP Responder table
   2. If routing is required (L3), forward to the L3 Forwarding table (which implements a virtual router)
   3. All L2 traffic and local subnet traffic is offloaded to the NORMAL pipeline handled by ML2
   4. North/South traffic is forwarded to the network node (SNAT)
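The classification step above can be sketched as a simple dispatch function. This is an illustrative model only; the table names and packet fields are hypothetical and do not correspond to Dragonflow's real flow tables:

```python
# Hypothetical model of the classification step: inspect a packet and
# decide which pipeline element should handle it. Names are illustrative.

ARP_RESPONDER = "arp_responder"   # answers ARP locally
L3_FORWARDING = "l3_forwarding"   # the distributed virtual router
NORMAL = "normal"                 # ML2's NORMAL pipeline (L2 / local subnet)
SNAT = "snat"                     # forwarded to the network node

def classify(pkt):
    """Return the pipeline table a packet should be resubmitted to."""
    if pkt["eth_type"] == 0x0806:                 # ARP
        return ARP_RESPONDER
    if pkt.get("dst_subnet") != pkt.get("src_subnet"):
        if pkt.get("external"):                   # North/South traffic
            return SNAT
        return L3_FORWARDING                      # East/West inter-subnet
    return NORMAL                                 # same-subnet L2 traffic
```

In the real pipeline this dispatch is expressed as OpenFlow match rules and resubmit actions rather than Python code, but the decision tree is the same.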
The following diagram shows the multi-table OpenFlow pipeline installed into the OVS integration bridge (br-int) in order to represent the virtual router using flows only:
A detailed blog post describing the solution can be found here.
How to Install
DevStack Single Node Configuration
DevStack Multi Node Configuration
Prerequisites
- Install DevStack with Neutron ML2 as the core plugin
- Install OVS 2.3.1 or newer
Features
- APIs for routing IPv4 East-West traffic
- Performance improvement for inter-subnet traffic by reducing the number of kernel layers traversed (namespaces and their TCP-stack overhead)
- Scalability improvement for inter-subnet traffic by offloading L3 East-West routing from the Network Node to all Compute Nodes
- Reliability improvement for inter-subnet traffic by removing the Network Node from the East-West path
- Simplified virtual routing management
- Support for all type drivers: GRE, VXLAN, and VLAN
- Support for centralized shared public network (SNAT) based on the legacy L3 implementation
- Support for centralized floating IP (DNAT) based on the legacy L3 implementation
- Support for HA: if the connection to the Controller is lost, Dragonflow falls back to the legacy L3 implementation until recovery, reusing all of the legacy L3 HA machinery. (Controller HA will be supported in the next release.)
- Support for centralized IPv6 based on the legacy L3 implementation
TODO
- Add support for North-South L3 IPv4 distribution (SNAT and DNAT)
- Add support for IPv6
- Support for a multi-controller solution
A full description can be found in the project Blueprints.