networking-l2gw

APIs and implementations to support L2 Gateways in Neutron.

L2 Gateways

This project proposes a Neutron API extension that can be used to express and manage L2 Gateway components. In the simplest terms, L2 Gateways are meant to bridge two or more networks together to make them look like a single L2 broadcast domain.

Initial implementation

There are a number of use cases that can be addressed by an L2 Gateway API. Most notably, in cloud computing environments a typical use case is bridging the virtual with the physical. Translated to Neutron and the OpenStack world, this means relying on L2 Gateway capabilities to extend Neutron logical (overlay) networks into physical (provider) networks that are outside the OpenStack realm. These networks can be, for instance, VLANs that may or may not be managed by OpenStack.

More information

For help using or hacking on L2GW, you can send an email to the OpenStack Discuss Mailing List <mailto:openstack-discuss@lists.openstack.org>; please use the [L2-Gateway] tag in the subject. Most folks involved hang out on the IRC channel #openstack-neutron.

Getting started

To get started you have to install the l2gw plugin software on the controller node where you are already running the Neutron server. Then you need a new node, which we call the l2gw node, where you do the actual bridging between a VXLAN tenant network and a physical network. The l2gw node could be a bare metal switch that supports the OVSDB hardware_vtep schema, or a server with OVS installed. In this example we are going to use a server.

In this example the l2gw node has an ens5 interface attached to a physical segment, and a management interface with IP 10.225.0.27.

# Bring up the physical interface and install the OVS VTEP emulator.
ip link set up dev ens5
apt-get update
apt-get install openvswitch-vtep
# Create the hardware_vtep and Open_vSwitch databases.
ovsdb-tool create /etc/openvswitch/vtep.db /usr/share/openvswitch/vtep.ovsschema
ovsdb-tool create /etc/openvswitch/vswitch.db /usr/share/openvswitch/vswitch.ovsschema
# Stop OVS services started by the installer.
systemctl is-active --quiet ovs-vswitchd && systemctl stop ovs-vswitchd
systemctl is-active --quiet ovsdb-server && systemctl stop ovsdb-server
mkdir -p /var/run/openvswitch/
# Start ovsdb-server listening on the management IP (port 6632) and serving
# both databases, then start ovs-vswitchd against the local socket.
ovsdb-server --pidfile --detach --log-file --remote ptcp:6632:10.225.0.27 \
             --remote punix:/var/run/openvswitch/db.sock --remote=db:hardware_vtep,Global,managers \
             /etc/openvswitch/vswitch.db /etc/openvswitch/vtep.db
ovs-vswitchd --log-file --detach --pidfile unix:/var/run/openvswitch/db.sock
# Create the bridge, register it as a VTEP physical switch, and attach ens5.
ovs-vsctl add-br myphyswitch
vtep-ctl add-ps myphyswitch
vtep-ctl set Physical_Switch myphyswitch tunnel_ips=10.225.0.27
ovs-vsctl add-port myphyswitch ens5
vtep-ctl add-port myphyswitch ens5
# Start the VTEP emulator for the switch.
/usr/share/openvswitch/scripts/ovs-vtep \
             --log-file=/var/log/openvswitch/ovs-vtep.log \
             --pidfile=/var/run/openvswitch/ovs-vtep.pid \
             --detach myphyswitch

At this point your l2gw node is running.
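
If you want to sanity-check the node, these optional commands (not part of the original walkthrough; ss is assumed to be available) should list the physical switch you just defined, show the OVS bridge, and confirm that ovsdb-server is listening on the management IP:

vtep-ctl list-ps
ovs-vsctl show
ss -tln | grep 6632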

For the configuration of the OpenStack control plane you have to check three files: neutron.conf, l2gw_plugin.ini, and l2gateway_agent.ini. Edit neutron.conf on the controller node and make sure that service_plugins includes the string networking_l2gw.services.l2gateway.plugin.L2GatewayPlugin.

You can add it with:

sudo sed -ri 's/^(service_plugins.*)/\1,networking_l2gw.services.l2gateway.plugin.L2GatewayPlugin/' \
   /etc/neutron/neutron.conf

Make sure neutron-server runs with --config-file=/etc/neutron/l2gw_plugin.ini. The defaults in the l2gw_plugin.ini file should be okay.
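
How you pass that extra config file depends on how neutron-server is launched in your deployment. Purely as an illustration, on a systemd-based install you might edit the neutron-server unit (the unit name and binary path below are assumptions; adjust to your distribution) and restart the service:

# Hypothetical ExecStart line for the neutron-server unit:
#   ExecStart=/usr/bin/neutron-server --config-file /etc/neutron/neutron.conf \
#       --config-file /etc/neutron/l2gw_plugin.ini
systemctl daemon-reload
systemctl restart neutron-server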

Now you are ready to create the database tables for the Neutron l2gw plugin using the command neutron-db-manage upgrade heads.
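
Depending on your deployment, you may need to point neutron-db-manage at the relevant configuration files explicitly; a possible invocation (the paths are assumptions, adjust as needed) is:

neutron-db-manage --config-file /etc/neutron/neutron.conf \
    --config-file /etc/neutron/l2gw_plugin.ini upgrade heads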

The file l2gateway_agent.ini is used to configure the neutron-l2gateway agent. The agent is the piece of software that will configure the l2gw node when you interact with the OpenStack API. Here it is important to point the agent at the switch's OVSDB server, for example ovsdb_hosts = 'ovsdb1:10.225.0.27:6632'.

The name ovsdb1 is just a label that will be used in the OpenStack database to identify this switch.
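
As a sketch, assuming the option lives in the [ovsdb] section as in the sample l2gateway_agent.ini shipped with the plugin, the relevant part of the file would look like:

[ovsdb]
# Format: <name>:<ovsdb-server-ip>:<port>
ovsdb_hosts = 'ovsdb1:10.225.0.27:6632'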

Now that both the l2gw node and the OpenStack control plane are configured, we can use the API to bridge a VXLAN tenant network to a physical interface of the l2gw node.

First, let's create an l2-gateway object in OpenStack. We need to give the interface names and the name of the bridge that we used before in the OVS commands.

l2-gateway-create --device name="myphyswitch",interface_names="ens5" openstackname
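
If you need the gateway's UUID for the next step, you can list the existing gateways (assuming the neutron CLI with the l2gw extension is installed, as for the command above):

l2-gateway-list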

Use the <GATEWAY-NAME/UUID> just created to feed the second command, which does the actual bridging between the VXLAN tenant network and the physical L2 network.

l2-gateway-connection-create <GATEWAY-NAME/UUID> <NETWORK-NAME/UUID>

Now let's see what happened. On the l2gw node you can run the commands:

ovs-vsctl show
vtep-ctl show

You should see that some VXLAN tunnels have been created: one to each compute node hosting an instance attached to the tenant network that you bridged. If there is also a router on this tenant network, you will find a VXLAN tunnel to the network node as well.
