Update term "cross OpenStack L2 network"

1. What is the problem
Because the Tricircle can work with both multi-region deployments and
Nova cells v2, the networks it manages may stretch across multiple
Neutron servers that all live inside a single OpenStack cloud, so it's
more appropriate to replace the term "cross OpenStack L2 network"
with "cross Neutron L2 network".

2. What is the solution for the problem
1) Replace all occurrences of "cross OpenStack L2 network" with
   "cross Neutron L2 network"
2) Replace all occurrences of "cross multiple OpenStack instances"
   or "cross pods" with "cross multiple Neutron servers"; see the
   sketch below
3) Known issue: the configuration option "cross_pod_vxlan_mode"
   remains unchanged.
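
The replacements in 1) and 2) could be scripted. A minimal sketch,
assuming GNU grep/sed and a POSIX shell (the exact commands used for
this commit are not recorded):

    $ grep -rl 'cross OpenStack L2 network' doc/source spec |
    >     xargs -r sed -i 's/cross OpenStack L2 network/cross Neutron L2 network/g'

Hyphenated and capitalized variants ("cross-OpenStack", "Cross Pod",
"cross pod") need their own patterns, so several such passes are
required.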

3. What features need to be implemented in the Tricircle to realize
the solution
All documents under the doc/source and spec/ directories need to be
updated.
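
A check along these lines could confirm that no occurrence was missed
(a hypothetical command, not part of this commit; note that
"cross_pod_vxlan_mode" does not match because of its underscores):

    $ grep -rniE 'cross[- ](openstack|pod)' doc/source spec

Empty output would mean every occurrence has been renamed.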

Change-Id: I0a4cd9ad0e976fd201aff3e44831c0346d376774
Fangming Liu 2017-06-23 16:34:28 +08:00
parent 6211ef4b02
commit 17f104d057
12 changed files with 126 additions and 125 deletions


@@ -87,7 +87,7 @@ maintenance. The following items should be configured in Tricircle's api.conf.
 Tricircle XJob Settings
 =======================
-Tricircle XJob serves for receiving and processing cross OpenStack
+Tricircle XJob serves for receiving and processing cross Neutron
 functionality and other async jobs from Admin API or Tricircle Central
 Neutron Plugin. The following items should be configured in Tricircle's
 xjob.conf.
@@ -131,7 +131,7 @@ Tricircle Central Neutron Plugin and Tricircle Local Neutron Plugin.
 **Tricircle Central Neutron Plugin**
 The Tricircle Central Neutron Plugin serves for tenant level L2/L3 networking
-automation across multiple OpenStack instances. The following items should be
+automation across multiple Neutron servers. The following items should be
 configured in central Neutron's neutron.conf.
 .. _Central Neutron:


@@ -11,12 +11,12 @@ server, only one pod(one pod means one OpenStack instance) is running. Network
 is created with the default network type: local. Local type network will be only
 presented in one pod. If a local type network is already hosting virtual machines
 in one pod, you can not use it to boot virtual machine in another pod. That is
-to say, local type network doesn't support cross-pod l2 networking.
+to say, local type network doesn't support cross-Neutron l2 networking.
-With multi-pod installation of the Tricircle, you can try out cross-pod l2
-networking and cross-pod l3 networking features.
+With multi-pod installation of the Tricircle, you can try out cross-Neutron l2
+networking and cross-Neutron l3 networking features.
-To support cross-pod l2 networking, we have added both VLAN and VxLAN
+To support cross-Neutron l2 networking, we have added both VLAN and VxLAN
 network type to the Tricircle. When a VLAN type network created via the
 central Neutron server is used to boot virtual machines in different pods, local
 Neutron server in each pod will create a VLAN type network with the same VLAN
@@ -28,16 +28,16 @@ configured with the same VxLAN allocation pool, so local Neutron server in each
 pod can create a VxLAN type network with the same VxLAN ID as is allocated by
 the central Neutron server.
-Cross-pod l3 networking is supported in two ways in the Tricircle. If two
+Cross-Neutron l3 networking is supported in two ways in the Tricircle. If two
 networks connected to the router are of local type, we utilize a shared
-VLAN or VxLAN network to achieve cross-pod l3 networking. When a subnet is
+VLAN or VxLAN network to achieve cross-Neutron l3 networking. When a subnet is
 attached to a router via the central Neutron server, the Tricircle not only
 creates corresponding subnet and router in the pod, but also creates a "bridge"
 network. Both tenant network and "bridge" network are attached to the router.
 Each tenant will have one allocated VLAN or VxLAN ID, which is shared by the
-tenant's "bridge" networks across pods. The CIDRs of "bridge" networks for one
+tenant's "bridge" networks across Neutron servers. The CIDRs of "bridge" networks for one
 tenant are also the same, so the router interfaces in "bridge" networks across
-different pods can communicate with each other. By adding an extra route as
+different Neutron servers can communicate with each other. By adding an extra route as
 following::
 destination: CIDR of tenant network in another pod
@@ -51,7 +51,7 @@ attaches a subnet to a router via the central Neutron server and the job is
 finished asynchronously.
 If one of the network connected to the router is not local type, meaning that
-cross-pod l2 networking is supported in this network(like VLAN type), and
+cross-Neutron l2 networking is supported in this network(like VLAN type), and
 the l2 network can be stretched into current pod, packets sent to the virtual
 machine in this network will not pass through the "bridge" network. Instead,
 packets first go to router, then are directly forwarded to the target virtual
@@ -83,7 +83,7 @@ for installing DevStack in bare metal server and
 `All-In-One Single VM <http://docs.openstack.org/developer/devstack/guides/single-vm.html>`_
 for installing DevStack in virtual machine.
-If you want to experience cross OpenStack VxLAN network, please make sure
+If you want to experience cross Neutron VxLAN network, please make sure
 compute nodes are routable to each other on data plane, and enable L2
 population mechanism driver in OpenStack RegionOne and OpenStack RegionTwo.


@@ -141,7 +141,7 @@ Create net1 which will work as the L2 network across RegionOne and RegionTwo.
 .. code-block:: console
-If net1 is vlan based cross-OpenStack L2 network
+If net1 is vlan based cross-Neutron L2 network
 $ neutron --os-region-name=CentralRegion net-create --provider:network_type vlan --provider:physical_network bridge --availability-zone-hint az1 --availability-zone-hint az2 net1
 +---------------------------+--------------------------------------+
 | Field | Value |
@@ -162,7 +162,7 @@ Create net1 which will work as the L2 network across RegionOne and RegionTwo.
 | tenant_id | ce444c8be6da447bb412db7d30cd7023 |
 +---------------------------+--------------------------------------+
-If net1 is vxlan based cross-OpenStack L2 network
+If net1 is vxlan based cross-Neutron L2 network
 $ neutron --os-region-name=CentralRegion net-create --provider:network_type vxlan --availability-zone-hint az1 --availability-zone-hint az2 net1
 +---------------------------+--------------------------------------+
 | Field | Value |


@@ -226,7 +226,7 @@ Create net3 which will work as the L2 network across RegionOne and RegionTwo.
 .. code-block:: console
-If net3 is vlan based cross-OpenStack L2 network
+If net3 is vlan based cross-Neutron L2 network
 $ neutron --os-region-name=CentralRegion net-create --provider:network_type vlan --provider:physical_network bridge --availability-zone-hint az1 --availability-zone-hint az2 net3
 +---------------------------+--------------------------------------+
@@ -248,7 +248,7 @@ Create net3 which will work as the L2 network across RegionOne and RegionTwo.
 | tenant_id | 532890c765604609a8d2ef6fc8e5f6ef |
 +---------------------------+--------------------------------------+
-If net3 is vxlan based cross-OpenStack L2 network
+If net3 is vxlan based cross-Neutron L2 network
 $ neutron --os-region-name=CentralRegion net-create --provider:network_type vxlan --availability-zone-hint az1 --availability-zone-hint az2 net3
 +---------------------------+--------------------------------------+


@@ -34,7 +34,7 @@ needed.
 The logical topology to be composed in Tricircle is as follows. R3(1), R3(2)
 and bridge-net will be one logical router R3, and R3 is only for cross
-OpenStack east-west traffic. North-south traffic of net1, net2 will go
+Neutron east-west traffic. North-south traffic of net1, net2 will go
 through R1, north-south traffic of net3, net4 will go through R2.
 .. code-block:: console


@@ -104,7 +104,7 @@ instance in another region to this network. The local network could be VLAN
 or VxLAN or GRE network by default, it's up to your local Neutron's
 configuration.
-If you want to create a L2 network across multiple Neutron, then you
+If you want to create a L2 network across multiple Neutron servers, then you
 have to speficy --provider-network-type vlan in network creation
 command for vlan network type, or --provider-network-type vxlan for vxlan
 network type. Both vlan and vxlan network type could work as the bridge


@@ -37,20 +37,20 @@ Local Router
 neutron --os-region-name=CentralRegion router-create --availability-zone-hint RegionOne R1
-Cross OpenStack L2 Network
-- Cross OpenStack L2 Network is a network which can be stretched into more
-than one OpenStack cloud.
-- Also called cross Neutron L2 network, or cross pod L2 network.
+Cross Neutron L2 Network
+- Cross Neutron L2 Network is a network which can be stretched into more
+than one Neutron servers, these Neutron servers may work in one
+OpenStack cloud or multiple OpenStack clouds.
 - Network type could be VLAN, VxLAN, Flat.
 - During the network creation, if availability-zone-hint is not specified,
 or specified with availability zone name, or more than one region name,
 or more than one availability zone name, then the network will be created
-as cross OpenStack L2 network.
+as cross Neutron L2 network.
 - If the default network type to be created is not configured to "local" in
-central Neutron, then the network will be cross OpenStack L2 network if
+central Neutron, then the network will be cross Neutron L2 network if
 the network was created without specified provider network type and single
 region name in availability-zone-hint.
-- For example, cross OpenStack L2 network could be created as follows:
+- For example, cross Neutron L2 network could be created as follows:
 .. code-block:: console
@@ -60,8 +60,8 @@ Non-Local Router
 - Non-Local Router will be able to reside in more than one OpenStack cloud,
 and internally inter-connected with bridge network.
 - Bridge network used internally for non-local router is a special cross
-OpenStack L2 network.
-- Local networks or cross OpenStack L2 networks can be attached to local
+Neutron L2 network.
+- Local networks or cross Neutron L2 networks can be attached to local
 router or non-local routers if the network can be presented in the region
 where the router can reside.
 - During the router creation, if availability-zone-hint is not specified,
@@ -74,19 +74,19 @@ Non-Local Router
 neutron --os-region-name=CentralRegion router-create --availability-zone-hint RegionOne --availability-zone-hint RegionTwo R3
-It's also important to understand that cross OpenStack L2 network, local
+It's also important to understand that cross Neutron L2 network, local
 router and non-local router can be created for different north-south/east-west
 networking purpose.
 North-South and East-West Networking
 - Instances in different OpenStack clouds can be attached to a cross
-OpenStack L2 network directly, so that they can communicate with
+Neutron L2 network directly, so that they can communicate with
 each other no matter in which OpenStack cloud.
 - If L3 networking across OpenStack clouds is preferred, local network
 attached to non-local router can be created for instances to attach.
 - Local router can be set gateway with external networks to support
 north-south traffic handled locally.
-- Non-local router can work only for cross OpenStack east-west networking
+- Non-local router can work only for cross Neutron east-west networking
 purpose if no external network is set to the router.
 - Non-local router can serve as the centralized north-south traffic gateway
 if external network is attached to the router, and support east-west


@@ -1,6 +1,6 @@
-======================================
-Cross pod L2 networking in Tricircle
-======================================
+========================================
+Cross Neutron L2 networking in Tricircle
+========================================
 Background
 ==========
@@ -23,7 +23,7 @@ The Tricircle has the following components:
 Nova API-GW provides the functionality to trigger automatic networking creation
 when new VMs are being provisioned. Neutron Tricircle plug-in is the
-functionality to create cross OpenStack L2/L3 networking for new VMs. After the
+functionality to create cross Neutron L2/L3 networking for new VMs. After the
 binding of tenant-id and pod finished in the Tricircle, Cinder API-GW and Nova
 API-GW will pass the cinder api or nova api request to appropriate bottom
 OpenStack instance.
@@ -66,8 +66,8 @@ spread out into multiple bottom OpenStack instances in one AZ or multiple AZs.
 OpenStack may not be enough, then a new OpenStack instance has to be added
 to the cloud. But the tenant still wants to add new VMs into same network.
-* Cross OpenStack network service chaining. Service chaining is based on
-the port-pairs. Leveraging the cross pod L2 networking capability which
+* Cross Neutron network service chaining. Service chaining is based on
+the port-pairs. Leveraging the cross Neutron L2 networking capability which
 is provided by the Tricircle, the chaining could also be done by across sites.
 For example, vRouter1 in pod1, but vRouter2 in pod2, these two VMs could be
 chained.
@@ -80,7 +80,7 @@ spread out into multiple bottom OpenStack instances in one AZ or multiple AZs.
 beat among application components (directly or via replicated database
 services, or via private designed message format). When this kind of
 applications are distributedly deployed into multiple OpenStack instances,
-cross OpenStack L2 networking is needed to support heart beat
+cross Neutron L2 networking is needed to support heart beat
 or state replication.
 * When a tenant's VMs are provisioned in different OpenStack instances, there
@@ -88,18 +88,18 @@ spread out into multiple bottom OpenStack instances in one AZ or multiple AZs.
 visible to the tenant, and isolation is needed. If the traffic goes through
 N-S (North-South) via tenant level VPN, overhead is too much, and the
 orchestration for multiple site to site VPN connection is also complicated.
-Therefore cross OpenStack L2 networking to bridge the tenant's routers in
-different OpenStack instances can provide more light weight isolation.
+Therefore cross Neutron L2 networking to bridge the tenant's routers in
+different Neutron servers can provide more light weight isolation.
 * In hybrid cloud, there is cross L2 networking requirement between the
-private OpenStack and the public OpenStack. Cross pod L2 networking will
+private OpenStack and the public OpenStack. Cross Neutron L2 networking will
 help the VMs migration in this case and it's not necessary to change the
 IP/MAC/Security Group configuration during VM migration.
 The spec[5] is to explain how one AZ can support more than one pod, and how
 to schedule a proper pod during VM or Volume creation.
-And this spec is to deal with the cross OpenStack L2 networking automation in
+And this spec is to deal with the cross Neutron L2 networking automation in
 the Tricircle.
 The simplest way to spread out L2 networking to multiple OpenStack instances
@@ -107,13 +107,13 @@ is to use same VLAN. But there is a lot of limitations: (1) A number of VLAN
 segment is limited, (2) the VLAN network itself is not good to spread out
 multiple sites, although you can use some gateways to do the same thing.
-So flexible tenant level L2 networking across multiple OpenStack instances in
+So flexible tenant level L2 networking across multiple Neutron servers in
 one site or in multiple sites is needed.
 Proposed Change
 ===============
-Cross pod L2 networking can be divided into three categories,
+Cross Neutron L2 networking can be divided into three categories,
 ``VLAN``, ``Shared VxLAN`` and ``Mixed VLAN/VxLAN``.
 * VLAN
@@ -144,21 +144,21 @@ There is another network type called “Local Network”. For “Local Network
 the network will be only presented in one bottom OpenStack instance. And the
 network won't be presented in different bottom OpenStack instances. If a VM
 in another pod tries to attach to the “Local Network”, it should be failed.
-This use case is quite useful for the scenario in which cross pod L2
+This use case is quite useful for the scenario in which cross Neutron L2
 networking is not required, and one AZ will not include more than bottom
 OpenStack instance.
-Cross pod L2 networking will be able to be established dynamically during
+Cross Neutron L2 networking will be able to be established dynamically during
 tenant's VM is being provisioned.
 There is assumption here that only one type of L2 networking will work in one
 cloud deployment.
-A Cross Pod L2 Networking Creation
-------------------------------------
+A Cross Neutron L2 Networking Creation
+--------------------------------------
-A cross pod L2 networking creation will be able to be done with the az_hint
+A cross Neutron L2 networking creation will be able to be done with the az_hint
 attribute of the network. If az_hint includes one AZ or more AZs, the network
 will be presented only in this AZ or these AZs, if no AZ in az_hint, it means
 that the network can be extended to any bottom OpenStack.
@@ -171,12 +171,12 @@ so that the external network will be only created in one specified pod per AZ.
 is out of scope of this spec.*
 Pluggable L2 networking framework is proposed to deal with three types of
-L2 cross pod networking, and it should be compatible with the
+L2 cross Neutron networking, and it should be compatible with the
 ``Local Network``.
 1. Type Driver under Tricircle Plugin in Neutron API server
-* Type driver to distinguish different type of cross pod L2 networking. So
+* Type driver to distinguish different type of cross Neutron L2 networking. So
 the Tricircle plugin need to load type driver according to the configuration.
 The Tricircle can reuse the type driver of ML2 with update.
@@ -229,7 +229,7 @@ Tricircle plugin needs to support multi-segment network extension[4].
 For Shared VxLAN or Mixed VLAN/VxLAN L2 network type, L2GW driver will utilize the
 multi-segment network extension in Neutron API server to build the L2 network in the
 Tricircle. Each network in the bottom OpenStack instance will be a segment for the
-whole cross pod L2 networking in the Tricircle.
+whole cross Neutron L2 networking in the Tricircle.
 After the network in the bottom OpenStack instance was created successfully, Nova
 API-GW will call Neutron server API to update the network in the Tricircle with a
@@ -292,7 +292,7 @@ Implementation
 **Local Network Implementation**
-For Local Network, L2GW is not required. In this scenario, no cross pod L2/L3
+For Local Network, L2GW is not required. In this scenario, no cross Neutron L2/L3
 networking is required.
 A user creates network ``Net1`` with single AZ1 in az_hint, the Tricircle plugin
@@ -311,10 +311,10 @@ local_network type network and it is limited to present in ``POD1`` in AZ1 only.
 **VLAN Implementation**
-For VLAN, L2GW is not required. This is the most simplest cross pod
+For VLAN, L2GW is not required. This is the most simplest cross Neutron
 L2 networking for limited scenario. For example, with a small number of
 networks, all VLANs are extended through physical gateway to support cross
-site VLAN networking, or all pods under same core switch with same visible
+Neutron VLAN networking, or all Neutron servers under same core switch with same visible
 VLAN ranges that supported by the core switch are connected by the core
 switch.
@@ -359,7 +359,7 @@ send network creation massage to ``POD2``, network creation message includes
 get by ``POD2``.
 The Tricircle plugin detects that the network includes more than one segment
-network, calls L2GW driver to start async job for cross pod networking for
+network, calls L2GW driver to start async job for cross Neutron networking for
 ``Net1``. The L2GW driver will create L2GW1 in ``POD1`` and L2GW2 in ``POD2``. In
 ``POD1``, L2GW1 will connect the local ``Net1`` and create L2GW remote connection
 to L2GW2, then populate the information of MAC/IP which resides in L2GW1. In
@@ -376,8 +376,8 @@ resides in the same pod.
 **Mixed VLAN/VxLAN**
-To achieve cross pod L2 networking, L2GW will be used to connect L2 network in
-different pods, using L2GW should work for Shared VxLAN and Mixed VLAN/VxLAN
+To achieve cross Neutron L2 networking, L2GW will be used to connect L2 network
+in different Neutron servers, using L2GW should work for Shared VxLAN and Mixed VLAN/VxLAN
 scenario.
 When L2GW connected with local network in the same OpenStack instance, no
@@ -421,7 +421,7 @@ and queries the network information including segment and network type,
 updates this new segment to the ``Net1`` in Tricircle ``Neutron API Server``.
 The Tricircle plugin detects that the ``Net1`` includes more than one network
-segments, calls L2GW driver to start async job for cross pod networking for
+segments, calls L2GW driver to start async job for cross Neutron networking for
 ``Net1``. The L2GW driver will create L2GW1 in ``POD1`` and L2GW2 in ``POD2``. In
 ``POD1``, L2GW1 will connect the local ``Net1`` and create L2GW remote connection
 to L2GW2, then populate information of MAC/IP which resides in ``POD2`` in L2GW1.
@@ -439,7 +439,7 @@ not resides in the same pod.
 **L3 bridge network**
-Current implementation without cross pod L2 networking.
+Current implementation without cross Neutron L2 networking.
 * A special bridge network is created and connected to the routers in
 different bottom OpenStack instances. We configure the extra routes of the routers
@@ -460,13 +460,13 @@ Difference between L2 networking for tenant's VM and for L3 bridging network.
 top layer to avoid IP/mac collision if they are allocated separately in
 bottom pods.
-After cross pod L2 networking is introduced, the L3 bridge network should
+After cross Neutron L2 networking is introduced, the L3 bridge network should
 be updated too.
 L3 bridge network N-S (North-South):
-* For each tenant, one cross pod N-S bridge network should be created for router
-N-S inter-connection. Just replace the current VLAN N-S bridge network
+* For each tenant, one cross Neutron N-S bridge network should be created for
+router N-S inter-connection. Just replace the current VLAN N-S bridge network
 to corresponding Shared VxLAN or Mixed VLAN/VxLAN.
 L3 bridge network E-W (East-West):
@@ -478,11 +478,12 @@ L3 bridge network E-W (East-West):
 network, then no bridge network is needed.
 * For example, (Net1, Router1) in ``Pod1``, (Net2, Router1) in ``Pod2``, if
-``Net1`` is a cross pod L2 network, and can be expanded to Pod2, then will just
-expand ``Net1`` to Pod2. After the ``Net1`` expansion ( just like cross pod L2 networking
-to spread one network in multiple pods ), it'll look like (Net1, Router1)
-in ``Pod1``, (Net1, Net2, Router1) in ``Pod2``, In ``Pod2``, no VM in ``Net1``, only for
-E-W traffic. Now the E-W traffic will look like this:
+``Net1`` is a cross Neutron L2 network, and can be expanded to Pod2, then
+will just expand ``Net1`` to Pod2. After the ``Net1`` expansion ( just like
+cross Neutron L2 networking to spread one network in multiple Neutron servers ), it'll
+look like (Net1, Router1) in ``Pod1``, (Net1, Net2, Router1) in ``Pod2``, In
+``Pod2``, no VM in ``Net1``, only for E-W traffic. Now the E-W traffic will
+look like this:
 from Net2 to Net1:
@@ -497,12 +498,12 @@ to the L2GW implementation. With the inbound traffic through L2GW, the inbound
 traffic to the VM will not be impacted by the VM migration from one host to
 another.
-If ``Net2`` is a cross pod L2 network, and can be expanded to ``Pod1`` too, then will
-just expand ``Net2`` to ``Pod1``. After the ``Net2`` expansion(just like cross pod L2
-networking to spread one network in multiple pods ), it'll look like (Net2,
-Net1, Router1) in ``Pod1``, (Net1, Net2, Router1) in ``Pod2``, In ``Pod1``, no VM in
-Net2, only for E-W traffic. Now the E-W traffic will look like this:
-from ``Net1`` to ``Net2``:
+If ``Net2`` is a cross Neutron L2 network, and can be expanded to ``Pod1`` too,
+then will just expand ``Net2`` to ``Pod1``. After the ``Net2`` expansion(just
+like cross Neutron L2 networking to spread one network in multiple Neutron servers ), it'll
+look like (Net2, Net1, Router1) in ``Pod1``, (Net1, Net2, Router1) in ``Pod2``,
+In ``Pod1``, no VM in Net2, only for E-W traffic. Now the E-W traffic will look
+like this: from ``Net1`` to ``Net2``:
 Net1 in Pod1 -> Router1 in Pod1 -> Net2 in Pod1 -> L2GW in Pod1 ---> L2GW in
 Pod2 -> Net2 in Pod2.
@@ -513,7 +514,7 @@ to delete the network and create again.
 If the network can't be expanded, then E-W bridge network is needed. For
 example, Net1(AZ1, AZ2,AZ3), Router1; Net2(AZ4, AZ5, AZ6), Router1.
-Then a cross pod L2 bridge network has to be established:
+Then a cross Neutron L2 bridge network has to be established:
 Net1(AZ1, AZ2, AZ3), Router1 --> E-W bridge network ---> Router1,
 Net2(AZ4, AZ5, AZ6).


@@ -5,8 +5,8 @@ Layer-3 Networking and Combined Bridge Network
 Background
 ==========
-To achieve cross-OpenStack layer-3 networking, we utilize a bridge network to
-connect networks in each OpenStack cloud, as shown below:
+To achieve cross-Neutron layer-3 networking, we utilize a bridge network to
+connect networks in each Neutron server, as shown below:
 East-West networking::
@@ -265,7 +265,7 @@ control.
 Discussion
 ==========
-The implementation of DVR does bring some restrictions to our cross-OpenStack
+The implementation of DVR does bring some restrictions to our cross-Neutron
 layer-2 and layer-3 networking, resulting in the limitation of the above two
 proposals. In the first proposal, if the real external network is deployed with
 internal networks in the same OpenStack cloud, one extra router is needed in
@@ -275,7 +275,7 @@ other is legacy mode. The limitation of the second proposal is that the router
 is non-DVR mode, so east-west and north-south traffic are all go through the
 router namespace in the network node.
-Also, cross-OpenStack layer-2 networking can not work with DVR because of
+Also, cross-Neutron layer-2 networking can not work with DVR because of
 source MAC replacement. Considering the following topology::
 +----------------------------------------------+ +-------------------------------+
@@ -290,7 +290,7 @@ source MAC replacement. Considering the following topology::
 Fig 6
-net2 supports cross-OpenStack layer-2 networking, so instances in net2 can be
+net2 supports cross-Neutron layer-2 networking, so instances in net2 can be
 created in both OpenStack clouds. If the router net1 and net2 connected to is
 DVR mode, when Instance1 ping Instance2, the packets are routed locally and
 exchanged via a VxLAN tunnel. Source MAC replacement is correctly handled
@@ -298,7 +298,7 @@ inside OpenStack1. But when Instance1 tries to ping Instance3, OpenStack2 does
 not recognize the DVR MAC from OpenStack1, thus connection fails. Therefore,
 only local type network can be attached to a DVR mode router.
-Cross-OpenStack layer-2 networking and DVR may co-exist after we address the
+Cross-Neutron layer-2 networking and DVR may co-exist after we address the
 DVR MAC recognition problem(we will issue a discussion about this problem in
 the Neutron community) or introduce l2 gateway. Actually this bridge network
 approach is just one of the implementation, we are considering in the near
@@ -306,9 +306,9 @@ future to provide a mechanism to let SDN controller to plug in, which DVR and
 bridge network may be not needed.
 Having the above limitation, can our proposal support the major user scenarios?
-Considering whether the tenant network and router are local or across OpenStack
-clouds, we divide the user scenarios into four categories. For the scenario of
-cross-OpenStack router, we use the proposal shown in Fig 3 in our discussion.
+Considering whether the tenant network and router are local or across Neutron
+servers, we divide the user scenarios into four categories. For the scenario of
+cross-Neutron router, we use the proposal shown in Fig 3 in our discussion.
 Local Network and Local Router
 ------------------------------
@@ -338,7 +338,7 @@ Topology::
 Each OpenStack cloud has its own external network, instance in each local
 network accesses the external network via the local router. If east-west
-networking is not required, this scenario has no requirement on cross-OpenStack
+networking is not required, this scenario has no requirement on cross-Neutron
 layer-2 and layer-3 networking functionality. Both central Neutron server and
 local Neutron server can process network resource management request. While if
 east-west networking is needed, we have two choices to extend the above
@@ -378,8 +378,8 @@ only local network is attached to local router, so it can be either legacy or
 DVR mode. In the right topology, two local routers are connected by a shared
 VxLAN network, so they can only be legacy mode.
-Cross-OpenStack Network and Local Router
-----------------------------------------
+Cross-Neutron Network and Local Router
+--------------------------------------
 Topology::
@@ -446,8 +446,8 @@ set to "False". See Fig 10::
 Fig 10
-Local Network and Cross-OpenStack Router
-----------------------------------------
+Local Network and Cross-Neutron Router
+--------------------------------------
 Topology::
@@ -478,13 +478,13 @@ Topology::
 Fig 11
-Since the router is cross-OpenStack type, the Tricircle automatically creates
-bridge network to connect router instances inside the two OpenStack clouds and
+Since the router is cross-Neutron type, the Tricircle automatically creates
+bridge network to connect router instances inside the two Neutron servers and
 connect the router instance to the real external network. Networks attached to
 the router are local type, so the router can be either legacy or DVR mode.
-Cross-OpenStack Network and Cross-OpenStack Router
---------------------------------------------------
+Cross-Neutron Network and Cross-Neutron Router
+----------------------------------------------
 Topology::


@@ -6,14 +6,14 @@ Background
 ==========
 One of the key value we would like to achieve via the Tricircle project is to
-provide networking automation functionality across several OpenStack instances.
+provide networking automation functionality across several Neutron servers.
 Each OpenStack instance runs its own Nova and Neutron services but shares the
 same Keystone service or uses federated Keystone, which is a multi-region
 deployment mode. With networking automation, virtual machines or bare metals
 booted in different OpenStack instances can inter-communicate via layer2 or
 layer3 network.
-Considering the cross OpenStack layer2 network case, if Neutron service in each
+Considering the cross Neutron layer2 network case, if Neutron service in each
 OpenStack instance allocates ip address independently, the same ip address
 could be assigned to virtual machines in different OpenStack instances, thus ip
 address conflict could occur. One straightforward solution to this problem is
@@ -45,7 +45,7 @@ Local Plugin
 For connecting central and local Neutron servers, Neutron plugin is again a
 good place for us to build the bridge. We can write our own plugin, the
 Tricircle local Neutron plugin(abbr: "local plugin") to trigger the cross
-OpenStack networking automation in local Neutron server. During virtual machine
+Neutron networking automation in local Neutron server. During virtual machine
 booting, local Nova server will interact with local Neutron server to query
 network or create port, which will trigger local plugin to retrieve data from
 central Neutron server and create necessary network resources according to the
@@ -112,7 +112,7 @@ Neutron server with region name in the request body. In Keystone, we register
 services inside one OpenStack instance as one unique region, so we can use
 region name to identify one OpenStack instance. After receiving the request,
 central Neutron server is informed that one virtual machine port is correctly
-setup in one OpenStack instance, so it starts the cross OpenStack networking
+setup in one OpenStack instance, so it starts the cross Neutron networking
 automation process, like security group rule population, tunnel setup for
 layer2 communication and route setup for layer3 communication, which are done
 by making Neutron API call to each local Neutron server.
@@ -179,7 +179,7 @@ one dhcp agent is scheduled, it will use this port other than create a new one.
 Gateway Port Handle
 -------------------
-If cross OpenStack layer2 networking is enabled in one network, we need to
+If cross Neutron layer2 networking is enabled in one network, we need to
 allocate one gateway ip for that network in each OpenStack instance. The reason
 is that we want layer3 routing to be finished locally in each OpenStack
 instance. If all the OpenStack instances have the same gateway ip, packets sent


@@ -1,11 +1,11 @@
-=======================================
-Cross Pod VxLAN Networking in Tricircle
-=======================================
+===========================================
+Cross Neutron VxLAN Networking in Tricircle
+===========================================
 Background
 ==========
-Currently we only support VLAN as the cross-pod network type. For VLAN network
+Currently we only support VLAN as the cross-Neutron network type. For VLAN network
 type, central plugin in Tricircle picks a physical network and allocates a VLAN
 tag(or uses what users specify), then before the creation of local network,
 local plugin queries this provider network information and creates the network
@@ -17,16 +17,16 @@ physical devices.
 For more flexible deployment, VxLAN network type is a better choice. Compared
 to 12-bit VLAN ID, 24-bit VxLAN ID can support more numbers of bridge networks
-and cross-pod L2 networks. With MAC-in-UDP encapsulation of VxLAN network,
+and cross-Neutron L2 networks. With MAC-in-UDP encapsulation of VxLAN network,
 hosts in different pods only need to be IP routable to transport instance
 packets.
 Proposal
 ========
-There are some challenges to support cross-pod VxLAN network.
+There are some challenges to support cross-Neutron VxLAN network.
-1. How to keep VxLAN ID identical for the same VxLAN network across pods
+1. How to keep VxLAN ID identical for the same VxLAN network across Neutron servers
 2. How to synchronize tunnel endpoint information between pods
@@ -82,7 +82,7 @@ then sends the information to the target agent via RPC. Second, driver sends
 the tunnel endpoint information of the updated port to other agents where ports
 in the same network are located, also via RPC. L2 agents will build the tunnels
 based on the information they received. To trigger the above processes to build
-tunnels across pods, we further introduce shadow port.
+tunnels across Neutron servers, we further introduce shadow port.
 Let's say we have two instance ports, port1 is located in host1 in pod1 and
 port2 is located in host2 in pod2. To make L2 agent running in host1 build a
@@ -176,7 +176,7 @@ pushed to agents and agents use it to create necessary tunnels and flows.
 **How to support different back-ends besides ML2+OVS implementation**
-We consider two typical back-ends that can support cross-pod VxLAN networking,
+We consider two typical back-ends that can support cross-Neutron VxLAN networking,
 L2 gateway and SDN controller like ODL. For L2 gateway, we consider only
 supporting static tunnel endpoint information for L2 gateway at the first step.
 Shadow agent and shadow port process is almost the same with the ML2+OVS
@@ -184,8 +184,8 @@ implementation. The difference is that, for L2 gateway, the tunnel IP of the
 shadow agent is set to the tunnel endpoint of the L2 gateway. So after L2
 population, L2 agents will create tunnels to the tunnel endpoint of the L2
 gateway. For SDN controller, we assume that SDN controller has the ability to
-manage tunnel endpoint information across pods, so Tricircle only helps to
-allocate VxLAN ID and keep the VxLAN ID identical across pods for one network.
+manage tunnel endpoint information across Neutron servers, so Tricircle only helps to
+allocate VxLAN ID and keep the VxLAN ID identical across Neutron servers for one network.
 Shadow agent and shadow port process will not be used in this case. However, if
 different SDN controllers are used in different pods, it will be hard for each
 SDN controller to connect hosts managed by other SDN controllers since each SDN
@@ -224,7 +224,7 @@ Documentation Impact
 - Update configuration guide to introduce options for VxLAN network
 - Update networking guide to discuss new scenarios with VxLAN network
-- Add release note about cross-pod VxLAN networking support
+- Add release note about cross-Neutron VxLAN networking support
 References
 ==========


@@ -74,7 +74,7 @@ will be introduced, R3(1) and R3(2) will be inter-connected by bridge-net.
 Bridge-net could be VLAN or VxLAN cross Neutron L2 network, and it's the
 "external network" for both R3(1) and R3(2), please note here the bridge-net
 is not real external network, just the concept of Neutron network. R3(1) and
-R3(2) will only forward the east-west traffic across OpenStack for local
+R3(2) will only forward the east-west traffic across Neutron for local
 networks, so it's not necessary to work as DVR, centralized router is good
 enough.
@@ -154,9 +154,9 @@ East-west traffic between Instance1 and Instance2 work like follows::
 Instance1 <-> net1 <-> R3(1) <-> bridge-net <-> R3(2) <-> net2 <-> Instance2
-Two hops for cross OpenStack east-west traffic.
+Two hops for cross Neutron east-west traffic.
-The topology will be more complex if there are cross OpenStack L2 networks
+The topology will be more complex if there are cross Neutron L2 networks
 except local networks::
 +-----------------------+ +----------------------+
@@ -183,7 +183,7 @@ except local networks::
 | +-----------------+ | | +-----------------+ |
 +-----------------------+ +----------------------+
-Figure.3 Multi-NS and cross OpenStack L2 networks
+Figure.3 Multi-NS and cross Neutron L2 networks
 The logical topology in central Neutron for Figure.3 looks like as follows::
@@ -208,7 +208,7 @@ The logical topology in central Neutron for Figure.3 looks like as follows::
 +-+---+------------+---------+------------+-----+-+
 | R3 |
 +-------------------------------------------------+
-Figure.4 Logical topology in central Neutron with cross OpenStack L2 network
+Figure.4 Logical topology in central Neutron with cross Neutron L2 network
 East-west traffic inside one region will be processed locally through default
 gateway. For example, in RegionOne, R1 has router interfaces in net1, net3,
@@ -224,14 +224,14 @@ net5, net6, the east-west traffic between these networks will work as follows::
 There is nothing special for east-west traffic between local networks
 in different OpenStack regions.
-Net5 and net6 are cross OpenStack L2 networks, instances could be attached
+Net5 and net6 are cross Neutron L2 networks, instances could be attached
 to network from different regions, and instances are reachable in a remote
-region via the cross OpenStack L2 network itself. There is no need to add host
-route for cross OpenStack L2 network, for it's routable in the same region for
-other local networks or cross OpenStack L2 networks, default route is enough
+region via the cross Neutron L2 network itself. There is no need to add host
+route for cross Neutron L2 network, for it's routable in the same region for
+other local networks or cross Neutron L2 networks, default route is enough
 for east-west traffic.
-It's needed to address how one cross OpenStack L2 network will be
+It's needed to address how one cross Neutron L2 network will be
 attached different local router: different gateway IP address will be used.
 For example, in central Neutron, net5's default gateway IP is 192.168.0.1
 in R1, the user needs to create a gateway port explicitly for local router R2
@@ -261,12 +261,12 @@ For net5 in RegionTwo, host route should be added::
 Similar operation for net6 in RegionOne and RegionTwo.
-If R1 and R2 are centralized routers, cross OpenStack L2 network will
+If R1 and R2 are centralized routers, cross Neutron L2 network will
 work, but if R1 and R2 are DVRs, then DVR MAC issue mentioned in the
 spec "l3-networking-combined-bridge-net" should be fixed[2].
 In order to make the topology not too complex, this use case will not be
-supported: a cross OpenStack L2 network is not able to be stretched into
+supported: a cross Neutron L2 network is not able to be stretched into
 the region where there are local networks. This use case is not useful
 and will make the east-west traffic even more complex::
@@ -297,7 +297,7 @@ and will make the east-west traffic even more complex::
 | bridge-net | |
 +----------------------------+-------------------+
-Figure.5 Cross OpenStack L2 network not able to be stretched into some region
+Figure.5 Cross Neutron L2 network not able to be stretched into some region
 Implementation
@@ -319,7 +319,7 @@ Adding router interface to east-west gateway router::
 # go through this router
 # router is the default router gateway, it's the
 # single north-south external network mode
-if the network is cross OpenStack L2 network
+if the network is cross Neutron L2 network
 reserve gateway port in different region
 add router interface in each region using reserved gateway port IP
 make sure the gateway port IP is the default route
@@ -327,7 +327,7 @@ Adding router interface to east-west gateway router::
 add router interface using the default gateway port or the port
 specified in request
 else # not the default gateway IP in this subnet
-if the network is cross OpenStack L2 network
+if the network is cross Neutron L2 network
 reserve gateway port in different region
 add router interface in each region using reserved gateway port IP
 update host route in each connected local network in each region,
@@ -341,7 +341,7 @@ Adding router interface to east-west gateway router::
 Configure extra route to the router in each region for EW traffic
-Adding router interface to local router for cross OpenStack L2 network will
+Adding router interface to local router for cross Neutron L2 network will
 make the local router as the default gateway router in this region::
 # default north-south traffic will go through this router