APIs and implementations to support L2 Gateways in Neutron.
- Free software: Apache license
- Source: https://opendev.org/x/networking-l2gw
This project proposes a Neutron API extension that can be used to express and manage L2 Gateway components. In the simplest terms, L2 Gateways are meant to bridge two or more networks together to make them look like a single L2 broadcast domain.
There are a number of use cases that can be addressed by an L2 Gateway API. Most notably, in cloud computing environments, a typical use case is bridging the virtual with the physical. Translated to Neutron and the OpenStack world, this means relying on L2 Gateway capabilities to extend Neutron logical (overlay) networks into physical (provider) networks that are outside the OpenStack realm. These networks can be, for instance, VLANs that may or may not be managed by OpenStack.
For help using or hacking on L2GW, you can send an email to the OpenStack Discuss mailing list <mailto:email@example.com>; please use the [L2-Gateway] tag in the subject. Most folks involved hang out on the IRC channel #openstack-neutron.
To get started you have to install the l2gw plugin software on the controller node where you are already running the Neutron server. Then you need a new node, which we call the l2gw node, where you do the actual bridging between a VXLAN tenant network and a physical network. The l2gw node could be a bare metal switch that supports the OVSDB hardware_vtep schema, or a server with OVS installed. In this example we are going to use a server.
In this example the l2gw node has an ens5 interface attached to a physical segment, and a management interface with IP 10.225.0.27.
ip link set up dev ens5
apt-get update
apt-get install openvswitch-vtep
ovsdb-tool create /etc/openvswitch/vtep.db /usr/share/openvswitch/vtep.ovsschema
ovsdb-tool create /etc/openvswitch/vswitch.db /usr/share/openvswitch/vswitch.ovsschema
# Stop OVS services started by the installer.
systemctl is-active --quiet ovs-vswitchd && systemctl stop ovs-vswitchd
systemctl is-active --quiet ovsdb-server && systemctl stop ovsdb-server
mkdir -p /var/run/openvswitch/
ovsdb-server --pidfile --detach --log-file --remote ptcp:6632:10.225.0.27 \
    --remote punix:/var/run/openvswitch/db.sock --remote=db:hardware_vtep,Global,managers \
    /etc/openvswitch/vswitch.db /etc/openvswitch/vtep.db
ovs-vswitchd --log-file --detach --pidfile unix:/var/run/openvswitch/db.sock
ovs-vsctl add-br myphyswitch
vtep-ctl add-ps myphyswitch
vtep-ctl set Physical_Switch myphyswitch tunnel_ips=10.225.0.27
ovs-vsctl add-port myphyswitch ens5
vtep-ctl add-port myphyswitch ens5
/usr/share/openvswitch/scripts/ovs-vtep \
    --log-file=/var/log/openvswitch/ovs-vtep.log \
    --pidfile=/var/run/openvswitch/ovs-vtep.pid \
    --detach myphyswitch
At this point your l2gw node is running.
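As a quick sanity check (a minimal sketch; the address and port match the ptcp:6632:10.225.0.27 remote configured above), you can confirm that the OVSDB server is reachable and that the emulated physical switch has been registered:

# Confirm the OVSDB server is listening on the management IP and port used above.
ss -ltn | grep 6632
# List the physical switch entries in the hardware_vtep database.
vtep-ctl list-ps
vtep-ctl show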
For the configuration of the OpenStack control plane you have to check three files: neutron.conf, l2gw_plugin.ini, and l2gateway_agent.ini. Edit your neutron.conf on the controller node and make sure that the service_plugins option includes the string networking_l2gw.services.l2gateway.plugin.L2GatewayPlugin. You can add it with:
sudo sed -ri 's/^(service_plugins.*)/\1,networking_l2gw.services.l2gateway.plugin.L2GatewayPlugin/' \
    /etc/neutron/neutron.conf
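After the edit, the service_plugins entry in /etc/neutron/neutron.conf should end up looking roughly like the sketch below; the router plugin shown alongside it is only an example of what may already be enabled in your deployment:

[DEFAULT]
# Existing service plugins, with the L2 Gateway plugin appended at the end.
service_plugins = router,networking_l2gw.services.l2gateway.plugin.L2GatewayPlugin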
Make sure the neutron-server runs with --config-file=/etc/neutron/l2gw_plugin.ini. The defaults in the l2gw_plugin.ini file should be okay.
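How you pass this option depends on how neutron-server is started in your deployment. On a systemd-based installation, the relevant line might look roughly like the sketch below; the unit file path and the exact set of existing options are assumptions, so check your distribution's packaging:

# Hypothetical excerpt from a neutron-server systemd unit
# (e.g. /etc/systemd/system/neutron-server.service); the extra
# --config-file argument for the l2gw plugin is the only point here.
ExecStart=/usr/bin/neutron-server --config-file /etc/neutron/neutron.conf \
    --config-file /etc/neutron/l2gw_plugin.ini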
Now you are ready to create the database tables for the neutron l2gw plugin using the command:
neutron-db-manage upgrade heads
The file l2gateway_agent.ini is used to configure the neutron-l2gateway agent. The agent is the piece of software that will configure the l2gw node when you interact with the OpenStack API. The important setting here is the pointer to the switch's OVSDB server:
ovsdb_hosts = 'ovsdb1:10.225.0.27:6632'
ovsdb1 is just a name that will be used in the OpenStack database to identify this switch.
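For reference, a minimal l2gateway_agent.ini could look like the sketch below; the ovsdb_hosts value matches the l2gw node from this example, and you should double-check the section name against the sample file shipped with the package:

[DEFAULT]
# Standard oslo.log style options can go here.
debug = False

[ovsdb]
# Format: <name>:<management IP of the l2gw node>:<OVSDB port>. The name is only
# a label stored in the OpenStack database to identify this switch.
ovsdb_hosts = 'ovsdb1:10.225.0.27:6632'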
Now that both the l2gw node and the OpenStack control plane are configured, we can use the API service to bridge a VXLAN tenant network to a physical interface of the l2gw node.
First, let's create an l2-gateway object in OpenStack. We need to give the interface names and the name of the bridge that we used before in the OVS commands.
l2-gateway-create --device name="myphyswitch",interface_names="ens5" openstackname
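For context, this is a python-neutronclient subcommand, so the full invocation plus a quick check could look like the sketch below (openstackname is just the example gateway label used above):

# Create the gateway object and list it to confirm it was stored.
neutron l2-gateway-create --device name="myphyswitch",interface_names="ens5" openstackname
neutron l2-gateway-list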
Use the <GATEWAY-NAME/UUID> just created to feed the second command, where you do the actual bridging between the VXLAN tenant network and the physical L2 network.
l2-gateway-connection-create <GATEWAY-NAME/UUID> <NETWORK-NAME/UUID>
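As a concrete sketch, with tenant-net standing in for your own network name or UUID, the connection step could look like:

# Find the name or UUID of the VXLAN tenant network to bridge.
neutron net-list
# Bridge it through the gateway created above.
neutron l2-gateway-connection-create openstackname tenant-net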
Now let's see what happened. On the l2gw node you can run the commands:
ovs-vsctl show
vtep-ctl show
You should see that some VXLAN tunnels have been created: one to each compute node hosting an instance attached to the tenant network that you bridged. If there is also a router in this tenant network, you will find a VXLAN tunnel to the network node as well.
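If you want to inspect the tunnel endpoints directly, a minimal sketch using the hardware_vtep tables that vtep-ctl can dump is:

# Dump the remote tunnel endpoints (one Physical_Locator row per VTEP peer).
vtep-ctl list Physical_Locator
# List the logical switches created for the bridged Neutron networks.
vtep-ctl list-ls
# Inspect the remote MAC table that the control plane populates for the bridged network.
vtep-ctl list Ucast_Macs_Remote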