Merge "Update doc and add release note for vxlan"

Jenkins 2017-03-30 07:12:24 +00:00 committed by Gerrit Code Review
commit 6731837607
11 changed files with 131 additions and 41 deletions

View File

@@ -27,6 +27,7 @@ ADMIN_PASSWORD=password
HOST_IP=10.250.201.24
Q_ML2_PLUGIN_VLAN_TYPE_OPTIONS=(network_vlan_ranges=bridge:2001:3000,extern:3001:4000)
Q_ML2_PLUGIN_VXLAN_TYPE_OPTIONS=(vni_ranges=1001:2000)
OVS_BRIDGE_MAPPINGS=bridge:br-vlan
# Specify Central Region name

View File

@@ -31,6 +31,7 @@ KEYSTONE_SERVICE_HOST=10.250.201.24
KEYSTONE_AUTH_HOST=10.250.201.24
Q_ML2_PLUGIN_VLAN_TYPE_OPTIONS=(network_vlan_ranges=bridge:2001:3000,extern:3001:4000)
Q_ML2_PLUGIN_VXLAN_TYPE_OPTIONS=(vni_ranges=1001:2000)
OVS_BRIDGE_MAPPINGS=bridge:br-vlan,extern:br-ext
# Specify Central Region name

View File

@@ -243,7 +243,6 @@ function start_central_neutron_server {
type_drivers+=,vlan
tenant_network_types+=,vlan
iniset $NEUTRON_CONF.$server_index tricircle network_vlan_ranges `echo $Q_ML2_PLUGIN_VLAN_TYPE_OPTIONS | awk -F= '{print $2}'`
iniset $NEUTRON_CONF.$server_index tricircle bridge_network_type vlan
fi
if [ "$Q_ML2_PLUGIN_VXLAN_TYPE_OPTIONS" != "" ]; then
type_drivers+=,vxlan
@@ -253,6 +252,7 @@ function start_central_neutron_server {
iniset $NEUTRON_CONF.$server_index tricircle type_drivers $type_drivers
iniset $NEUTRON_CONF.$server_index tricircle tenant_network_types $tenant_network_types
iniset $NEUTRON_CONF.$server_index tricircle enable_api_gateway False
# default value of bridge_network_type is vxlan
recreate_database $Q_DB_NAME$server_index
$NEUTRON_BIN_DIR/neutron-db-manage --config-file $NEUTRON_CONF.$server_index --config-file /$Q_PLUGIN_CONF_FILE upgrade head
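For illustration only (not part of this change): with the sample local.conf
values shown earlier, these iniset calls would leave the first central Neutron
server's configuration, $NEUTRON_CONF.0, looking roughly like the sketch
below. That type_drivers starts from "local" before the vlan and vxlan
branches append to it, and the vni_ranges line set by the vxlan branch, are
assumptions inferred from the surrounding code::

    [tricircle]
    type_drivers = local,vlan,vxlan
    tenant_network_types = local,vlan,vxlan
    network_vlan_ranges = bridge:2001:3000,extern:3001:4000
    vni_ranges = 1001:2000
    enable_api_gateway = False
    # bridge_network_type is left unset, so the vxlan default applies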

View File

@@ -148,16 +148,18 @@ configured in central Neutron's neutron.conf.
- (String) The core plugin the central Neutron server uses; it should be set to tricircle.network.central_plugin.TricirclePlugin
* - **[tricircle]**
-
* - ``bridge_network_type`` = ``vlan``
- (String) Type of l3 bridge network, this type should be enabled in tenant_network_types and is not local type, for example, vlan.
* - ``bridge_network_type`` = ``vxlan``
- (String) Type of the l3 bridge network; this type should be enabled in tenant_network_types and must not be the local type, for example, vlan or vxlan.
* - ``default_region_for_external_network`` = ``RegionOne``
- (String) Default region that the external network belongs to; it must exist, for example, RegionOne.
* - ``network_vlan_ranges`` = ``None``
- (String) List of <physical_network>:<vlan_min>:<vlan_max> or <physical_network> specifying physical_network names usable for VLAN provider and tenant networks, as well as ranges of VLAN tags on each available for allocation to tenant networks, for example,bridge:2001:3000.
* - ``tenant_network_types`` = ``local,vlan``
- (String) Ordered list of network_types to allocate as tenant networks. The default value "local" is useful for single pod connectivity. For example, local and vlan.
* - ``type_drivers`` = ``local,vlan``
- (String) List of network type driver entry points to be loaded from the tricircle.network.type_drivers namespace. For example, local and vlan.
- (String) List of <physical_network>:<vlan_min>:<vlan_max> or <physical_network> specifying physical_network names usable for VLAN provider and tenant networks, as well as ranges of VLAN tags on each available for allocation to tenant networks, for example, bridge:2001:3000.
* - ``tenant_network_types`` = ``local,vxlan``
- (String) Ordered list of network_types to allocate as tenant networks. The default value "local" is useful for single pod connectivity, for example, local, vlan and vxlan.
* - ``type_drivers`` = ``local,vxlan``
- (String) List of network type driver entry points to be loaded from the tricircle.network.type_drivers namespace, for example, local, vlan and vxlan.
* - ``vni_ranges`` = ``None``
- (String) Comma-separated list of <vni_min>:<vni_max> tuples enumerating ranges of VXLAN VNI IDs that are available for tenant network allocation, for example, 1001:2000.
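Assembled into central Neutron's neutron.conf, the example values documented
above would give a [tricircle] section like the following hypothetical
sketch::

    [tricircle]
    type_drivers = local,vxlan
    tenant_network_types = local,vxlan
    bridge_network_type = vxlan
    vni_ranges = 1001:2000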

View File

@@ -167,10 +167,12 @@ Installation with Central Neutron Server
[client] admin_tenant, "project name of admin account", demo
[client] admin_user_domain_name, "user domain name of admin account", Default
[client] admin_tenant_domain_name, "project domain name of admin account", Default
[tricircle] type_drivers, "list of network type driver entry points to be loaded", "local,vlan"
[tricircle] tenant_network_types, "ordered list of network_types to allocate as tenant networks", "local,vlan"
[tricircle] type_drivers, "list of network type driver entry points to be loaded", "local,vlan,vxlan"
[tricircle] tenant_network_types, "ordered list of network_types to allocate as tenant networks", "local,vlan,vxlan"
[tricircle] network_vlan_ranges, "physical_network names and usable VLAN tag ranges for VLAN provider networks", "bridge:2001:3000"
[tricircle] bridge_network_type, "l3 bridge network type which is enabled in tenant_network_types and is not local type", vlan
[tricircle] vni_ranges, "VxLAN VNI range", "1001:2000"
[tricircle] bridge_network_type, "l3 bridge network type, which must be enabled in tenant_network_types and must not be the local type", vxlan
[tricircle] default_region_for_external_network, "default region that the external network belongs to", RegionOne
[tricircle] enable_api_gateway, "whether the API gateway is enabled", False
.. note:: Change keystone_service_host to the address of Keystone service.
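Pulling the rows above together, the relevant parts of central Neutron's
neutron.conf would look roughly like the following illustrative sketch (the
values are the examples from the table, not mandatory settings)::

    [client]
    admin_tenant = demo
    admin_user_domain_name = Default
    admin_tenant_domain_name = Default

    [tricircle]
    type_drivers = local,vlan,vxlan
    tenant_network_types = local,vlan,vxlan
    network_vlan_ranges = bridge:2001:3000
    vni_ranges = 1001:2000
    bridge_network_type = vxlan
    default_region_for_external_network = RegionOne
    enable_api_gateway = False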

View File

@@ -16,26 +16,28 @@ to say, local type network doesn't support cross-pod l2 networking.
With multi-pod installation of the Tricircle, you can try out cross-pod l2
networking and cross-pod l3 networking features.
As the first step to support cross-pod l2 networking, we have added VLAN
To support cross-pod l2 networking, we have added both VLAN and VxLAN
network types to the Tricircle. When a VLAN type network created via the
central Neutron server is used to boot virtual machines in different pods, the local
Neutron server in each pod will create a VLAN type network with the same VLAN
ID and physical network as the central network, so each pod should be configured
with the same VLAN allocation pool and physical network. Then virtual machines
in different pods can communicate with each other in the same physical network
with the same VLAN tag.
with the same VLAN tag. Similarly, for the VxLAN network type, each pod should
be configured with the same VxLAN allocation pool, so the local Neutron server
in each pod can create a VxLAN type network with the same VxLAN ID as the one
allocated by the central Neutron server.
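As a quick illustration (hypothetical commands and output, assuming a VxLAN
network net1 has already been stretched across two pods and that VNI 1036 was
allocated), each local Neutron server should report the same segmentation ID
as the central one::

    $ neutron --os-region-name=RegionOne net-show net1 | grep segmentation_id
    | provider:segmentation_id  | 1036                                 |
    $ neutron --os-region-name=RegionTwo net-show net1 | grep segmentation_id
    | provider:segmentation_id  | 1036                                 |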
Cross-pod l3 networking is supported in two ways in the Tricircle. If two
networks connected to the router are of local type, we utilize a shared provider
VLAN network to achieve cross-pod l3 networking. Later we may also use VxLAN
network or multi-segment VLAN network. When a subnet is attached to a router via
the central Neutron server, the Tricircle not only creates corresponding subnet
and router in the pod, but also creates a VLAN type "bridge" network. Both
tenant network and "bridge" network are attached to the router. Each tenant will
have one allocated VLAN, which is shared by the tenant's "bridge" networks
across pods. The CIDRs of "bridge" networks for one tenant are also the same, so
the router interfaces in "bridge" networks across different pods can communicate
with each other via the provider VLAN network. By adding an extra route as
networks connected to the router are of local type, we utilize a shared
VLAN or VxLAN network to achieve cross-pod l3 networking. When a subnet is
attached to a router via the central Neutron server, the Tricircle not only
creates the corresponding subnet and router in the pod, but also creates a
"bridge" network. Both the tenant network and the "bridge" network are attached
to the router.
Each tenant will have one allocated VLAN or VxLAN ID, which is shared by the
tenant's "bridge" networks across pods. The CIDRs of "bridge" networks for one
tenant are also the same, so the router interfaces in "bridge" networks across
different pods can communicate with each other. By adding an extra route as
following::
destination: CIDR of tenant network in another pod
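The Tricircle maintains these routes itself; purely to illustrate the format,
a manual equivalent via the Neutron CLI, with a made-up router name, CIDR and
nexthop, would be::

    $ neutron router-update R1 --routes type=dict list=true \
          destination=10.0.2.0/24,nexthop=100.0.1.2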
@@ -67,9 +69,11 @@ Prerequisite
In this guide we take a two-node deployment as an example. One node to run the
Tricircle API, the central Neutron server and one pod, and the other node to run
another pod. Both nodes have two network interfaces, for management network and
provider VLAN network. For VLAN network, the physical network infrastructure
should support VLAN tagging. If you would like to try north-south networking,
another pod. For VLAN network, both nodes should have two network interfaces,
which are connected to the management network and provider VLAN network. The
physical network infrastructure should support VLAN tagging. For VxLAN network,
you can combine the management plane and data plane; in this case, only one
network interface is needed. If you would like to try north-south networking,
too, you should prepare one more network interface in the second node for the
external network. In this guide, the external network is also VLAN type, so the
local.conf sample is based on VLAN type external network setup. For the resource
@@ -111,6 +115,12 @@ RegionOne,
Q_ML2_PLUGIN_VLAN_TYPE_OPTIONS=(network_vlan_ranges=bridge:2001:3000,extern:3001:4000)
- if you would like to also configure vxlan network, you can set
Q_ML2_PLUGIN_VXLAN_TYPE_OPTIONS. The format is
(vni_ranges=<min vxlan>:<max vxlan>)::
Q_ML2_PLUGIN_VXLAN_TYPE_OPTIONS=(vni_ranges=1001:2000)
- the format of OVS_BRIDGE_MAPPINGS is <physical network name>:<ovs bridge name>,
you can change these names, but remember to adapt your change to the
commands shown in this guide. You do not need to specify the bridge mapping
@@ -118,6 +128,8 @@ RegionOne,
OVS_BRIDGE_MAPPINGS=bridge:br-vlan
this option can be omitted if only VxLAN networks are needed
- set TRICIRCLE_START_SERVICES to True to install the Tricircle service and
central Neutron in node1::
@@ -129,7 +141,8 @@ RegionOne,
sudo ovs-vsctl add-port br-vlan eth1
br-vlan is the OVS bridge name you configure on OVS_PHYSICAL_BRIDGE, and eth1 is
the device name of your VLAN network interface
the device name of your VLAN network interface. This step can be omitted if
only VxLAN networks are provided to tenants.
- 5 Run DevStack. In the DevStack folder, run ::
@@ -169,12 +182,22 @@ In pod2 in node2 for OpenStack RegionTwo,
Q_ML2_PLUGIN_VLAN_TYPE_OPTIONS=(network_vlan_ranges=bridge:2001:3000,extern:3001:4000)
- if you would like to also configure vxlan network, you can set
Q_ML2_PLUGIN_VXLAN_TYPE_OPTIONS. The format is
(vni_ranges=<min vxlan>:<max vxlan>)::
Q_ML2_PLUGIN_VXLAN_TYPE_OPTIONS=(vni_ranges=1001:2000)
- the format of OVS_BRIDGE_MAPPINGS is <physical network name>:<ovs bridge name>,
you can change these names, but remember to adapt your change to the commands
shown in this guide::
OVS_BRIDGE_MAPPINGS=bridge:br-vlan,extern:br-ext
if you only use the vlan network type for the external network, it can be configured like this::
OVS_BRIDGE_MAPPINGS=extern:br-ext
- set TRICIRCLE_START_SERVICES to False (it's True by default) so Tricircle
services and central Neutron will not be started in node2::
@@ -196,7 +219,8 @@ In pod2 in node2 for OpenStack RegionTwo,
br-vlan and br-ext are the OVS bridge names you configure on
OVS_PHYSICAL_BRIDGE, and eth1 and eth2 are the device names of your VLAN network
interfaces, for the "bridge" network and the external network.
interfaces, for the "bridge" network and the external network. Omit br-vlan
if you only use vxlan networks as tenant networks.
- 5 Run DevStack. In the DevStack folder, run ::

View File

@@ -141,6 +141,7 @@ Create net1 which will work as the L2 network across RegionOne and RegionTwo.
.. code-block:: console
If net1 is a vlan-based cross-OpenStack L2 network
$ neutron --os-region-name=CentralRegion net-create --provider:network_type vlan --provider:physical_network bridge --availability-zone-hint az1 --availability-zone-hint az2 net1
+---------------------------+--------------------------------------+
| Field | Value |
@@ -161,6 +162,27 @@ Create net1 which will work as the L2 network across RegionOne and RegionTwo.
| tenant_id | ce444c8be6da447bb412db7d30cd7023 |
+---------------------------+--------------------------------------+
If net1 is a vxlan-based cross-OpenStack L2 network
$ neutron --os-region-name=CentralRegion net-create --provider:network_type vxlan --availability-zone-hint az1 --availability-zone-hint az2 net1
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | True |
| availability_zone_hints | az1 |
| | az2 |
| id | 0093f32c-2ecd-4888-a8c2-a6a424bddfe8 |
| name | net1 |
| project_id | ce444c8be6da447bb412db7d30cd7023 |
| provider:network_type | vxlan |
| provider:physical_network | |
| provider:segmentation_id | 1036 |
| router:external | False |
| shared | False |
| status | ACTIVE |
| subnets | |
| tenant_id | ce444c8be6da447bb412db7d30cd7023 |
+---------------------------+--------------------------------------+
Create a subnet in net1.
.. code-block:: console

View File

@@ -202,6 +202,7 @@ Create net3 which will work as the L2 network across RegionOne and RegionTwo.
.. code-block:: console
If net3 is a vlan-based cross-OpenStack L2 network
$ neutron --os-region-name=CentralRegion net-create --provider:network_type vlan --provider:physical_network bridge --availability-zone-hint az1 --availability-zone-hint az2 net3
+---------------------------+--------------------------------------+
@@ -223,6 +224,27 @@ Create net3 which will work as the L2 network across RegionOne and RegionTwo.
| tenant_id | 532890c765604609a8d2ef6fc8e5f6ef |
+---------------------------+--------------------------------------+
If net3 is a vxlan-based cross-OpenStack L2 network
$ neutron --os-region-name=CentralRegion net-create --provider:network_type vxlan --availability-zone-hint az1 --availability-zone-hint az2 net3
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | True |
| availability_zone_hints | az1 |
| | az2 |
| id | 0f171049-0c15-4d1b-95cd-ede8dc554b44 |
| name | net3 |
| project_id | 532890c765604609a8d2ef6fc8e5f6ef |
| provider:network_type | vxlan |
| provider:physical_network | |
| provider:segmentation_id | 1031 |
| router:external | False |
| shared | False |
| status | ACTIVE |
| subnets | |
| tenant_id | 532890c765604609a8d2ef6fc8e5f6ef |
+---------------------------+--------------------------------------+
Create a subnet in net3.

View File

@@ -44,6 +44,11 @@ configure the local.conf like this::
TRICIRCLE_START_SERVICES=True
enable_plugin tricircle https://github.com/openstack/tricircle/
If you also want to configure vxlan networks, supposing the vxlan range for
tenant networks is 1001~2000, add the following configuration to the above
local.conf::
Q_ML2_PLUGIN_VXLAN_TYPE_OPTIONS=(vni_ranges=1001:2000)
In the node which will run local Neutron without Tricircle services, configure
the local.conf like this::
@@ -57,12 +62,17 @@ You may have noticed that the only difference is TRICIRCLE_START_SERVICES
is True or False. All examples given in this document will be based on these
settings.
If you also want to configure vxlan networks, supposing the vxlan range for
tenant networks is 1001~2000, add the following configuration to the above
local.conf::
Q_ML2_PLUGIN_VXLAN_TYPE_OPTIONS=(vni_ranges=1001:2000)
In both RegionOne and RegionTwo, the external network can be provisioned; the
settings will look like this in /etc/neutron/plugins/ml2/ml2_conf.ini::
network_vlan_ranges = bridge:101:150,extern:151:200
vni_ranges = 1:1000
vni_ranges = 1001:2000 (or the range that you configure)
bridge_mappings = bridge:br-vlan,extern:br-ext
@@ -72,10 +82,11 @@ Please be aware that the physical network name for tenant VLAN network is
In central Neutron's configuration file, the default settings look as
follows::
bridge_network_type = vlan
network_vlan_ranges = bridge:101:150
tenant_network_types = local,vlan
type_drivers = local,vlan
bridge_network_type = vxlan
network_vlan_ranges = bridge:101:150,extern:151:200
vni_ranges = 1001:2000
tenant_network_types = local,vlan,vxlan
type_drivers = local,vlan,vxlan
The default network type in central Neutron is the local network, i.e., one
network can only be present in one local Neutron. In which region the
@@ -87,9 +98,9 @@ configuration.
If you want to create an L2 network across multiple Neutron servers, then you
have to specify --provider-network-type vlan in the network creation
command for vlan network type. Currently only vlan network
type could work as the bridge network. VxLAN network to support L2 networking
across Neutron will be introduced later.
command for vlan network type, or --provider-network-type vxlan for vxlan
network type. Both vlan and vxlan network types can work as the bridge
network. The default bridge network type is vxlan.
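For example, a vxlan network usable across Neutron servers could be created
like this (a hypothetical command; az1 and az2 stand for your
availability-zone hints)::

    $ openstack --os-region-name=CentralRegion network create \
          --provider-network-type vxlan --availability-zone-hint az1 \
          --availability-zone-hint az2 net1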
You can create L2 networks for different purposes, and the supported network
types for different purposes are summarized as follows.
@@ -104,8 +115,8 @@ types for different purposes are summarized as follows.
* - Local L2 network for instances
- VLAN, VxLAN
* - Cross Neutron L2 network for instances
- VLAN
- VLAN, VxLAN
* - Bridge network for routers
- VLAN
- VLAN, VxLAN
* - External network
- VLAN

View File

@@ -0,0 +1,5 @@
---
features:
- |
Support the VxLAN network type for tenant networks and bridge networks to be
stretched into multiple OpenStack clouds.

View File

@@ -70,11 +70,11 @@ from tricircle.network import security_groups
tricircle_opts = [
cfg.ListOpt('type_drivers',
default=['local'],
default=['local', 'vxlan'],
help=_('List of network type driver entry points to be loaded '
'from the tricircle.network.type_drivers namespace.')),
cfg.ListOpt('tenant_network_types',
default=['local'],
default=['local', 'vxlan'],
help=_('Ordered list of network_types to allocate as tenant '
'networks. The default value "local" is useful for '
'single pod connectivity.')),
@@ -91,7 +91,7 @@ tricircle_opts = [
'enumerating ranges of VXLAN VNI IDs that are '
'available for tenant network allocation.')),
cfg.StrOpt('bridge_network_type',
default='',
default='vxlan',
help=_('Type of l3 bridge network, this type should be enabled '
'in tenant_network_types and is not local type.')),
cfg.StrOpt('default_region_for_external_network',
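As a minimal sketch (not part of this commit) of how these options behave once
registered under the [tricircle] group with oslo.config: a Python list default
is taken literally, element by element, which is why each network type should
be its own list element::

    from oslo_config import cfg

    # Re-declare a small subset of tricircle_opts so this sketch is
    # self-contained; names and defaults mirror the options above.
    opts = [
        cfg.ListOpt('type_drivers', default=['local', 'vxlan']),
        cfg.ListOpt('tenant_network_types', default=['local', 'vxlan']),
        cfg.StrOpt('bridge_network_type', default='vxlan'),
    ]

    conf = cfg.ConfigOpts()
    conf.register_opts(opts, group='tricircle')
    conf([])  # parse an empty argv so the defaults take effect

    print(conf.tricircle.bridge_network_type)  # vxlan
    print(conf.tricircle.type_drivers)         # ['local', 'vxlan']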