Retire the Tricircle project
Recently the TC has worked on determining the criteria for when an OpenStack project should be retired [1]. During the Victoria cycle there was no PTL nominee for the Tricircle project, which triggered the TC to review the project's health. In a TC meeting it was decided to retire the Tricircle project, and this was announced on the openstack-discuss mailing list [2].

This commit retires the repository as per that process. If anyone would like to maintain Tricircle again, please revert this commit and propose re-adding Tricircle to governance.

The community wishes to express its thanks and appreciation to all of those who have contributed to the Tricircle project over the years.

[1] https://governance.openstack.org/tc/reference/dropping-projects.html
[2] http://lists.openstack.org/pipermail/openstack-discuss/2020-April/014338.html

Change-Id: Ide73af8b7dbbadb49f5d6a6bd068d5d42f501cca
This commit is contained in:
parent 6fc9861ac7
commit 307592dc0e
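The commit message invites future maintainers to revert this commit. As a rough sketch of what that revert does (a throwaway scratch repository stands in for openstack/tricircle here, and the file name is illustrative; in practice the revert would be proposed through Gerrit rather than pushed directly):

```shell
# Simulate a retirement commit in a scratch repository, then revert it.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name dev
echo "tricircle plugin" > plugin.sh
git add plugin.sh
git commit -qm "Initial import"
# The retirement commit deletes the repository contents:
git rm -q plugin.sh
git commit -qm "Retire the Tricircle project"
# Reverting the retirement commit restores the deleted files:
git revert --no-edit HEAD >/dev/null
test -f plugin.sh && echo "plugin.sh restored"
```

The restored change would then be submitted for review together with a governance patch re-adding Tricircle, per the TC's dropping-projects process.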
74 .zuul.yaml
@@ -1,74 +0,0 @@
- job:
    name: tricircle-functional-python3
    parent: legacy-dsvm-base
    run: playbooks/tricircle-dsvm-functional/run.yaml
    post-run: playbooks/tricircle-dsvm-functional/post.yaml
    timeout: 7800
    required-projects:
      - openstack/devstack-gate
      - openstack/tricircle
      - openstack/neutron
      - openstack/networking-sfc
    vars:
      devstack_localrc:
        USE_PYTHON3: true

- job:
    name: tricircle-multiregion
    parent: legacy-dsvm-base-multinode
    run: playbooks/tricircle-dsvm-multiregion/run.yaml
    post-run: playbooks/tricircle-dsvm-multiregion/post.yaml
    timeout: 7800
    required-projects:
      - openstack/devstack-gate
      - openstack/networking-sfc
      - openstack/tricircle

- job:
    name: tricircle-tox-lower-constraints
    parent: openstack-tox-lower-constraints
    required-projects:
      - openstack/neutron
      - openstack/networking-sfc

- job:
    name: tricircle-tox-cover
    parent: openstack-tox-cover
    required-projects:
      - openstack/neutron
      - openstack/networking-sfc

- project:
    templates:
      - openstack-python3-victoria-jobs-neutron
      - openstack-python3-victoria-jobs
      - check-requirements
      - publish-openstack-docs-pti
      - release-notes-jobs-python3
    check:
      jobs:
        - tricircle-tox-cover
        - tricircle-tox-lower-constraints
        - openstack-tox-pep8:
            required-projects:
              - openstack/neutron
              - openstack/networking-sfc
        - openstack-tox-py36:
            required-projects:
              - openstack/neutron
              - openstack/networking-sfc
        - tricircle-functional-python3
        - tricircle-multiregion
    gate:
      jobs:
        - tricircle-tox-lower-constraints
        - openstack-tox-pep8:
            required-projects:
              - openstack/neutron
              - openstack/networking-sfc
        - openstack-tox-py36:
            required-projects:
              - openstack/neutron
              - openstack/networking-sfc
        - tricircle-multiregion
@@ -1,17 +0,0 @@
If you would like to contribute to the development of OpenStack, you should
follow the steps in this page:

   https://docs.openstack.org/infra/manual/developers.html

If you already knew how the OpenStack CI system works and your
OpenStack accounts is setup properly, you can start from the development
workflow section in that documentation to know how you should commit your
patch set for review via the Gerrit tool:

   https://docs.openstack.org/infra/manual/developers.html#development-workflow

Any pull requests submitted through GitHub will be ignored.

Any bug should be filed on Launchpad, not GitHub:

   https://bugs.launchpad.net/tricircle
@@ -1,6 +0,0 @@
================================
The Tricircle Style Commandments
================================

Please read the OpenStack Style Commandments
https://docs.openstack.org/hacking/latest/
201 LICENSE
@@ -1,201 +0,0 @@
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!) The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright [yyyy] [name of copyright owner]

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
63 README.rst
@@ -1,57 +1,10 @@
-========================
-Team and repository tags
-========================
+This project is no longer maintained.

-.. image:: https://governance.openstack.org/tc/badges/tricircle.svg
-    :target: https://governance.openstack.org/tc/reference/tags/index.html
+The contents of this repository are still available in the Git
+source code management system. To see the contents of this
+repository before it reached its end of life, please check out the
+previous commit with "git checkout HEAD^1".

-.. Change things from this point on
-
-=========
-Tricircle
-=========
-
-The purpose of the Tricircle project is to provide networking automation
-across Neutron servers in multi-region OpenStack clouds deployment.
-
-Each OpenStack cloud includes its own Nova, Cinder and Neutron, the Neutron
-servers in these OpenStack clouds are called local Neutron servers, all these
-local Neutron servers will be configured with the Tricircle Local Neutron
-Plugin. A separate Neutron server will be installed and run standalone as
-the coordinator of networking automation across local Neutron servers, this
-Neutron server will be configured with the Tricircle Central Neutron Plugin,
-and is called central Neutron server.
-
-Leverage the Tricircle Central Neutron Plugin and the Tricircle Local Neutron
-Plugin configured in these Neutron servers, the Tricircle can ensure the
-IP address pool, IP/MAC address allocation and network segment allocation
-being managed globally without conflict, and the Tricircle handles tenant
-oriented data link layer(Layer2) or network layer(Layer3) networking
-automation across local Neutron servers, resources like VMs, bare metal or
-containers of the tenant can communicate with each other via Layer2 or Layer3,
-no matter in which OpenStack cloud these resources are running on.
-
-Note: There are some our own definitions of Layer2/Layer3 networking
-across Neutron. To make sure what they are, please read our design
-documentation, especially "6.5 L2 Networking across Neutron". The wiki and
-design documentation are linked below.
-
-The Tricircle and multi-region OpenStack clouds will use shared
-KeyStone(with centralized or distributed deployment) or federated KeyStones.
-
-The Tricircle source code is distributed under the terms of the Apache
-License, Version 2.0. The full terms and conditions of this license are
-detailed in the LICENSE file.
-
-* Free software: Apache license
-* Design documentation: `Tricircle Design Blueprint <https://docs.google.com/document/d/1zcxwl8xMEpxVCqLTce2-dUOtB-ObmzJTbV1uSQ6qTsY/>`_
-* Wiki: https://wiki.openstack.org/wiki/tricircle
-* Installation guide: https://docs.openstack.org/tricircle/latest/install/index.html
-* Admin guide: https://docs.openstack.org/tricircle/latest/admin/index.html
-* Configuration guide: https://docs.openstack.org/tricircle/latest/configuration/index.html
-* Networking guide: https://docs.openstack.org/tricircle/latest/networking/index.html
-* Source: https://opendev.org/openstack/tricircle
-* Bugs: https://bugs.launchpad.net/tricircle
-* Blueprints: https://blueprints.launchpad.net/tricircle
-* Release notes: https://docs.openstack.org/releasenotes/tricircle
-* Contributing: https://docs.openstack.org/tricircle/latest/contributor/index.html
+For any further questions, please email
+openstack-discuss@lists.openstack.org or join #openstack-dev on
+Freenode.
@@ -1,9 +0,0 @@
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://127.0.0.1:5000
export OS_IDENTITY_API_VERSION=3
export OS_REGION_NAME=RegionOne
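Credential files like the one above are consumed by sourcing them into the current shell before running OpenStack client commands. A minimal sketch (recreating a subset of the file at a temporary path rather than assuming its original location):

```shell
# Write a subset of the openrc shown above to a temp file, source it, and
# show the variables an OpenStack client would pick up from the environment.
rc=$(mktemp)
cat > "$rc" <<'EOF'
export OS_USERNAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://127.0.0.1:5000
export OS_IDENTITY_API_VERSION=3
export OS_REGION_NAME=RegionOne
EOF
. "$rc"
echo "$OS_USERNAME@$OS_REGION_NAME"   # prints admin@RegionOne
```

After sourcing, a client command such as `openstack endpoint list` would authenticate against the Keystone at OS_AUTH_URL using these variables.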
@@ -1,39 +0,0 @@
# apache configuration template for tricircle-api

Listen %PUBLICPORT%
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-agent}i\" %D(us)" tricircle_combined

<Directory %TRICIRCLE_BIN%>
    Require all granted
</Directory>
<VirtualHost *:%PUBLICPORT%>
    WSGIDaemonProcess tricircle-api processes=%APIWORKERS% threads=1 user=%USER% display-name=%{GROUP} %VIRTUALENV%
    WSGIProcessGroup tricircle-api
    WSGIScriptAlias / %PUBLICWSGI%
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    <IfVersion >= 2.4>
        ErrorLogFormat "%M"
    </IfVersion>
    ErrorLog /var/log/%APACHE_NAME%/tricircle-api.log
    CustomLog /var/log/%APACHE_NAME%/tricircle_access.log tricircle_combined
    %SSLENGINE%
    %SSLCERTFILE%
    %SSLKEYFILE%
</VirtualHost>

%SSLLISTEN%<VirtualHost *:443>
%SSLLISTEN%    %SSLENGINE%
%SSLLISTEN%    %SSLCERTFILE%
%SSLLISTEN%    %SSLKEYFILE%
%SSLLISTEN%</VirtualHost>

Alias /tricircle %PUBLICWSGI%
<Location /tricircle>
    SetHandler wsgi-script
    Options +ExecCGI

    WSGIProcessGroup tricircle-api
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
</Location>
@@ -1,63 +0,0 @@
#
# Sample DevStack local.conf.
#
# This sample file is intended to be used for your typical Tricircle DevStack
# multi-node environment. This file has the configuration values for DevStack
# to result in Central Neutron service and Tricircle Admin API service
# registered in CentralRegion, and local Neutron service and remaining
# services(e. g. Nova, Cinder, etc.) will be placed in RegionOne, but Keystone
# will be registered in RegionOne and is shared by services in all the
# regions.
#
# This file works with local.conf.node_2.sample to help you build a two-node
# three-region Tricircle environment(Central Region, RegionOne and RegionTwo).
#
# Some options need to be changed to adapt to your environment, see README.rst
# for detail.
#

[[local|localrc]]

DATABASE_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=password
ADMIN_PASSWORD=password

HOST_IP=10.250.201.24

Q_ML2_PLUGIN_VLAN_TYPE_OPTIONS=(network_vlan_ranges=bridge:2001:3000,extern:3001:4000)
Q_ML2_PLUGIN_VXLAN_TYPE_OPTIONS=(vni_ranges=1001:2000)
Q_ML2_PLUGIN_FLAT_TYPE_OPTIONS=(flat_networks=bridge,extern)
OVS_BRIDGE_MAPPINGS=bridge:br-vlan
ML2_L3_PLUGIN=tricircle.network.local_l3_plugin.TricircleL3Plugin

# Specify Central Region name
# CENTRAL_REGION_NAME=CentralRegion

# Specify port for central Neutron server
# TRICIRCLE_NEUTRON_PORT=20001

# Set to True to integrate Tricircle with Nova cell v2(experiment)
# TRICIRCLE_DEPLOY_WITH_CELL=True

TRICIRCLE_START_SERVICES=True
enable_plugin tricircle https://github.com/openstack/tricircle/

# Configure Neutron LBaaS, which will be removed after tricircle plugin enabling
# enable_plugin neutron-lbaas https://github.com/openstack/neutron-lbaas.git
# enable_plugin octavia https://github.com/openstack/octavia.git
# ENABLED_SERVICES+=,q-lbaasv2
# ENABLED_SERVICES+=,octavia,o-cw,o-hk,o-hm,o-api

disable_service horizon

# Enable l2population for vxlan network
[[post-config|/$Q_PLUGIN_CONF_FILE]]

[ml2]
mechanism_drivers = openvswitch,linuxbridge,l2population

[agent]
tunnel_types=vxlan
l2_population=True
@@ -1,67 +0,0 @@
#
# Sample DevStack local.conf.
#
# This sample file is intended to be used for your typical Tricircle DevStack
# multi-node environment. As this file has configuration values for DevStack
# to result in RegionTwo running original Nova, Cinder and Neutron, and
# the local Neutron will be configured with Tricircle Local Neutron Plugin
# to work with central Neutron with Tricircle Central Neutron Plugin.
#
# This file works with local.conf.node_1.sample to help you build a two-node
# three-region environment(CentralRegion, RegionOne and RegionTwo). Keystone in
# RegionOne is shared by services in all the regions.
#
# Some options need to be changed to adapt to your environment, see README.rst
# for detail.
#

[[local|localrc]]

DATABASE_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=password
ADMIN_PASSWORD=password

HOST_IP=10.250.201.25
REGION_NAME=RegionTwo
KEYSTONE_REGION_NAME=RegionOne
SERVICE_HOST=$HOST_IP
KEYSTONE_SERVICE_HOST=10.250.201.24
KEYSTONE_AUTH_HOST=10.250.201.24

Q_ML2_PLUGIN_VLAN_TYPE_OPTIONS=(network_vlan_ranges=bridge:2001:3000,extern:3001:4000)
Q_ML2_PLUGIN_VXLAN_TYPE_OPTIONS=(vni_ranges=1001:2000)
Q_ML2_PLUGIN_FLAT_TYPE_OPTIONS=(flat_networks=bridge,extern)
OVS_BRIDGE_MAPPINGS=bridge:br-vlan,extern:br-ext
ML2_L3_PLUGIN=tricircle.network.local_l3_plugin.TricircleL3Plugin

# Specify Central Region name
# CENTRAL_REGION_NAME=CentralRegion

# Specify port for central Neutron server
# TRICIRCLE_NEUTRON_PORT=20001

# Set to True to integrate Tricircle with Nova cell v2(experiment)
# TRICIRCLE_DEPLOY_WITH_CELL=True

TRICIRCLE_START_SERVICES=False
enable_plugin tricircle https://github.com/openstack/tricircle/

# Configure Neutron LBaaS, which will be removed after tricircle plugin enabling
# enable_plugin neutron-lbaas https://github.com/openstack/neutron-lbaas.git
# enable_plugin octavia https://github.com/openstack/octavia.git
# ENABLED_SERVICES+=,q-lbaasv2
# ENABLED_SERVICES+=,octavia,o-cw,o-hk,o-hm,o-api

disable_service horizon

# Enable l2population for vxlan network
[[post-config|/$Q_PLUGIN_CONF_FILE]]

[ml2]
mechanism_drivers = openvswitch,linuxbridge,l2population

[agent]
tunnel_types=vxlan
l2_population=True
@@ -1,28 +0,0 @@
#
# Sample DevStack local.conf.
#
# This sample file is intended to be used for your typical Tricircle DevStack
# environment that's running all of OpenStack on a single host.
#
# No changes to this sample configuration are required for this to work.
#

[[local|localrc]]

DATABASE_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=password
ADMIN_PASSWORD=password

HOST_IP=127.0.0.1

# Specify Central Region name
# CENTRAL_REGION_NAME=CentralRegion

# Specify port for central Neutron server
# TRICIRCLE_NEUTRON_PORT=20001

enable_plugin tricircle https://github.com/openstack/tricircle/

# disable_service horizon
@@ -1,476 +0,0 @@
# Devstack extras script to install Tricircle

# Test if any tricircle services are enabled
# is_tricircle_enabled
function is_tricircle_enabled {
    [[ ,${ENABLED_SERVICES} =~ ,"t-api" ]] && return 0
    return 1
}
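The membership test in is_tricircle_enabled prepends a comma so that a service whose name merely ends in "t-api" cannot match. A portable sketch of the same idea, using POSIX `case` instead of bash's `[[ =~ ]]` and additionally anchoring a trailing comma (the service list here is hypothetical; DevStack supplies the real ENABLED_SERVICES):

```shell
# Comma-delimit the list on both sides so "t-api" only matches as a
# whole entry, never as a substring of another service name.
ENABLED_SERVICES="key,g-api,t-api,n-cpu"
case ",${ENABLED_SERVICES}," in
    *,t-api,*) echo "t-api enabled" ;;
    *)         echo "t-api disabled" ;;
esac
# prints "t-api enabled"
```

The original bash regex only anchors the leading comma, which is enough to reject names like "not-api"; the double-anchored form shown here is slightly stricter.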
|
||||
# create_tricircle_accounts() - Set up common required tricircle
|
||||
# service accounts in keystone
|
||||
# Project User Roles
|
||||
# -------------------------------------------------------------------------
|
||||
# $SERVICE_TENANT_NAME tricircle service
|
||||
|
||||
function create_tricircle_accounts {
|
||||
if [[ "$ENABLED_SERVICES" =~ "t-api" ]]; then
|
||||
create_service_user "tricircle" "admin"
|
||||
local tricircle_api=$(get_or_create_service "tricircle" \
|
||||
"tricircle" "Cross Neutron Networking Automation Service")
|
||||
|
||||
local tricircle_api_url="$SERVICE_PROTOCOL://$TRICIRCLE_API_HOST/tricircle/v1.0"
|
||||
if [[ "$TRICIRCLE_DEPLOY_WITH_WSGI" == "False" ]]; then
|
||||
tricircle_api_url="$SERVICE_PROTOCOL://$TRICIRCLE_API_HOST:$TRICIRCLE_API_PORT/v1.0/"
|
||||
fi
|
||||
|
||||
get_or_create_endpoint $tricircle_api \
|
||||
"$CENTRAL_REGION_NAME" \
|
||||
"$tricircle_api_url" \
|
||||
"$tricircle_api_url" \
|
||||
"$tricircle_api_url"
|
||||
fi
|
||||
}
|
||||
|
||||
# create_tricircle_cache_dir() - Set up cache dir for tricircle
|
||||
function create_tricircle_cache_dir {
|
||||
|
||||
# Delete existing dir
|
||||
sudo rm -rf $TRICIRCLE_AUTH_CACHE_DIR
|
||||
sudo mkdir -p $TRICIRCLE_AUTH_CACHE_DIR
|
||||
sudo chown `whoami` $TRICIRCLE_AUTH_CACHE_DIR
|
||||
}
|
||||
|
||||
# common config-file configuration for tricircle services
|
||||
function init_common_tricircle_conf {
|
||||
local conf_file=$1
|
||||
|
||||
touch $conf_file
|
||||
iniset $conf_file DEFAULT debug $ENABLE_DEBUG_LOG_LEVEL
|
||||
iniset $conf_file DEFAULT verbose True
|
||||
iniset $conf_file DEFAULT use_syslog $SYSLOG
|
||||
iniset $conf_file DEFAULT tricircle_db_connection `database_connection_url tricircle`
|
||||
|
||||
iniset $conf_file client auth_url http://$KEYSTONE_SERVICE_HOST/identity
|
||||
iniset $conf_file client identity_url http://$KEYSTONE_SERVICE_HOST/identity/v3
|
||||
iniset $conf_file client admin_username admin
|
||||
iniset $conf_file client admin_password $ADMIN_PASSWORD
|
||||
iniset $conf_file client admin_tenant demo
|
||||
iniset $conf_file client auto_refresh_endpoint True
|
||||
iniset $conf_file client top_region_name $CENTRAL_REGION_NAME
|
||||
|
||||
iniset $conf_file oslo_concurrency lock_path $TRICIRCLE_STATE_PATH/lock
|
||||
iniset_rpc_backend tricircle $conf_file
|
||||
}
|
||||
|
||||
function init_local_nova_conf {
|
||||
iniset $NOVA_CONF glance api_servers http://$KEYSTONE_SERVICE_HOST/image
|
||||
iniset $NOVA_CONF placement os_region_name $CENTRAL_REGION_NAME
|
||||
}
|
||||
|
||||
# common config-file configuration for local Neutron(s)
|
||||
function init_local_neutron_conf {
|
||||
|
||||
iniset $NEUTRON_CONF DEFAULT core_plugin tricircle.network.local_plugin.TricirclePlugin
|
||||
if [[ "$TRICIRCLE_DEPLOY_WITH_CELL" == "True" ]]; then
|
||||
iniset $NEUTRON_CONF nova region_name $CENTRAL_REGION_NAME
|
||||
fi
|
||||
|
||||
iniset $NEUTRON_CONF client auth_url http://$KEYSTONE_SERVICE_HOST/identity
|
||||
iniset $NEUTRON_CONF client identity_url http://$KEYSTONE_SERVICE_HOST/identity/v3
|
||||
iniset $NEUTRON_CONF client admin_username admin
|
||||
iniset $NEUTRON_CONF client admin_password $ADMIN_PASSWORD
|
||||
iniset $NEUTRON_CONF client admin_tenant demo
|
||||
iniset $NEUTRON_CONF client auto_refresh_endpoint True
|
||||
iniset $NEUTRON_CONF client top_pod_name $CENTRAL_REGION_NAME
|
||||
|
||||
iniset $NEUTRON_CONF tricircle real_core_plugin neutron.plugins.ml2.plugin.Ml2Plugin
|
||||
iniset $NEUTRON_CONF tricircle local_region_name $REGION_NAME
|
||||
iniset $NEUTRON_CONF tricircle central_neutron_url http://$KEYSTONE_SERVICE_HOST:$TRICIRCLE_NEUTRON_PORT
|
||||
}
|
# Set the environment variables for local Neutron(s)
function init_local_neutron_variables {
    export Q_USE_PROVIDERNET_FOR_PUBLIC=True

    Q_ML2_PLUGIN_VLAN_TYPE_OPTIONS=${Q_ML2_PLUGIN_VLAN_TYPE_OPTIONS:-}
    Q_ML2_PLUGIN_VXLAN_TYPE_OPTIONS=${Q_ML2_PLUGIN_VXLAN_TYPE_OPTIONS:-}
    # if VLAN options were not set in local.conf, use the default VLAN bridge
    # and VLAN options
    if [ "$Q_ML2_PLUGIN_VLAN_TYPE_OPTIONS" == "" ]; then
        export TRICIRCLE_ADD_DEFAULT_BRIDGES=True

        local vlan_option="bridge:$TRICIRCLE_DEFAULT_VLAN_RANGE"
        local ext_option="extern:$TRICIRCLE_DEFAULT_EXT_RANGE"
        local vlan_ranges=(network_vlan_ranges=$vlan_option,$ext_option)
        Q_ML2_PLUGIN_VLAN_TYPE_OPTIONS=$vlan_ranges
        Q_ML2_PLUGIN_VXLAN_TYPE_OPTIONS="vni_ranges=$TRICIRCLE_DEFAULT_VXLAN_RANGE"
        Q_ML2_PLUGIN_FLAT_TYPE_OPTIONS="flat_networks=$TRICIRCLE_DEFAULT_FLAT_NETWORKS"

        local vlan_mapping="bridge:$TRICIRCLE_DEFAULT_VLAN_BRIDGE"
        local ext_mapping="extern:$TRICIRCLE_DEFAULT_EXT_BRIDGE"
        OVS_BRIDGE_MAPPINGS=$vlan_mapping,$ext_mapping
    fi
    if [ "$TRICIRCLE_ENABLE_TRUNK" == "True" ]; then
        _neutron_service_plugin_class_add trunk
    fi
}
function add_default_bridges {
    if [ "$TRICIRCLE_ADD_DEFAULT_BRIDGES" == "True" ]; then
        _neutron_ovs_base_add_bridge $TRICIRCLE_DEFAULT_VLAN_BRIDGE
        _neutron_ovs_base_add_bridge $TRICIRCLE_DEFAULT_EXT_BRIDGE
    fi
}
function configure_tricircle_api {
    if is_service_enabled t-api ; then
        echo "Configuring Tricircle API"

        init_common_tricircle_conf $TRICIRCLE_API_CONF
        setup_colorized_logging $TRICIRCLE_API_CONF DEFAULT tenant_name

        if is_service_enabled keystone; then
            create_tricircle_cache_dir

            # Configure auth token middleware
            configure_auth_token_middleware $TRICIRCLE_API_CONF tricircle \
                $TRICIRCLE_AUTH_CACHE_DIR
        else
            iniset $TRICIRCLE_API_CONF DEFAULT auth_strategy noauth
        fi
    fi
}
# configure_tricircle_api_wsgi() - Set WSGI config files
function configure_tricircle_api_wsgi {
    local tricircle_api_apache_conf
    local venv_path=""
    local tricircle_bin_dir=""
    local tricircle_ssl_listen="#"

    tricircle_bin_dir=$(get_python_exec_prefix)
    tricircle_api_apache_conf=$(apache_site_config_for tricircle-api)

    if is_ssl_enabled_service "tricircle-api"; then
        tricircle_ssl_listen=""
        tricircle_ssl="SSLEngine On"
        tricircle_certfile="SSLCertificateFile $TRICIRCLE_SSL_CERT"
        tricircle_keyfile="SSLCertificateKeyFile $TRICIRCLE_SSL_KEY"
    fi

    # configure venv bin if VENV is used
    if [[ ${USE_VENV} = True ]]; then
        venv_path="python-path=${PROJECT_VENV["tricircle"]}/lib/$(python_version)/site-packages"
        tricircle_bin_dir=${PROJECT_VENV["tricircle"]}/bin
    fi

    sudo cp $TRICIRCLE_API_APACHE_TEMPLATE $tricircle_api_apache_conf
    sudo sed -e "
        s|%TRICIRCLE_BIN%|$tricircle_bin_dir|g;
        s|%PUBLICPORT%|$TRICIRCLE_API_PORT|g;
        s|%APACHE_NAME%|$APACHE_NAME|g;
        s|%PUBLICWSGI%|$tricircle_bin_dir/tricircle-api-wsgi|g;
        s|%SSLENGINE%|$tricircle_ssl|g;
        s|%SSLCERTFILE%|$tricircle_certfile|g;
        s|%SSLKEYFILE%|$tricircle_keyfile|g;
        s|%SSLLISTEN%|$tricircle_ssl_listen|g;
        s|%USER%|$STACK_USER|g;
        s|%VIRTUALENV%|$venv_path|g
        s|%APIWORKERS%|$API_WORKERS|g
    " -i $tricircle_api_apache_conf
}
# start_tricircle_api_wsgi() - Start the API processes ahead of other things
function start_tricircle_api_wsgi {
    enable_apache_site tricircle-api
    restart_apache_server
    tail_log tricircle-api /var/log/$APACHE_NAME/tricircle-api.log

    echo "Waiting for tricircle-api to start..."
    if ! wait_for_service $SERVICE_TIMEOUT $TRICIRCLE_API_PROTOCOL://$TRICIRCLE_API_HOST/tricircle; then
        die $LINENO "tricircle-api did not start"
    fi
}
# stop_tricircle_api_wsgi() - Disable the api service and stop it.
function stop_tricircle_api_wsgi {
    disable_apache_site tricircle-api
    restart_apache_server
}

# cleanup_tricircle_api_wsgi() - Remove residual data files, anything left over
# from previous runs that a clean run would need to clean up
function cleanup_tricircle_api_wsgi {
    sudo rm -f $(apache_site_config_for tricircle-api)
}
function configure_tricircle_xjob {
    if is_service_enabled t-job ; then
        echo "Configuring Tricircle xjob"

        init_common_tricircle_conf $TRICIRCLE_XJOB_CONF
        setup_colorized_logging $TRICIRCLE_XJOB_CONF DEFAULT
    fi
}
function start_central_nova_server {
    local local_region=$1
    local central_region=$2
    local central_neutron_port=$3

    echo "Configuring Nova API for Tricircle to work with cell V2"

    iniset $NOVA_CONF neutron region_name $central_region
    iniset $NOVA_CONF neutron url "$Q_PROTOCOL://$SERVICE_HOST:$central_neutron_port"

    # Here we create new endpoints for the central region instead of updating
    # the endpoints in the local region, because at the end of devstack the
    # script queries the nova API in the local region to check whether the
    # nova-compute service is running. If we moved the endpoint from the local
    # region to the central region, that check, and thus devstack, would fail.
    nova_url=$(openstack endpoint list --service compute --interface public --region $local_region -c URL -f value)
    get_or_create_endpoint "compute" "$central_region" "$nova_url"
    nova_legacy_url=$(openstack endpoint list --service compute_legacy --interface public --region $local_region -c URL -f value)
    get_or_create_endpoint "compute_legacy" "$central_region" "$nova_legacy_url"

    central_image_endpoint_id=$(openstack endpoint list --service image --interface public --region $central_region -c ID -f value)
    if [[ -z "$central_image_endpoint_id" ]]; then
        glance_url=$(openstack endpoint list --service image --interface public --region $local_region -c URL -f value)
        get_or_create_endpoint "image" "$central_region" "$glance_url"
    fi

    place_endpoint_id=$(openstack endpoint list --service placement --interface public --region $local_region -c ID -f value)
    openstack endpoint set --region $central_region $place_endpoint_id

    restart_service devstack@n-api
    restart_apache_server
}
function start_central_neutron_server {
    local server_index=0
    local region_name=$1
    local q_port=$2

    get_or_create_service "neutron" "network" "Neutron Service"
    get_or_create_endpoint "network" \
        "$region_name" \
        "$Q_PROTOCOL://$SERVICE_HOST:$q_port/" \
        "$Q_PROTOCOL://$SERVICE_HOST:$q_port/" \
        "$Q_PROTOCOL://$SERVICE_HOST:$q_port/"

    # reconfigure central neutron server to use our own central plugin
    echo "Configuring central Neutron plugin for Tricircle"

    cp $NEUTRON_CONF $NEUTRON_CONF.$server_index
    iniset $NEUTRON_CONF.$server_index database connection `database_connection_url $Q_DB_NAME$server_index`
    iniset $NEUTRON_CONF.$server_index DEFAULT bind_port $q_port
    iniset $NEUTRON_CONF.$server_index DEFAULT core_plugin "tricircle.network.central_plugin.TricirclePlugin"
    iniset $NEUTRON_CONF.$server_index DEFAULT service_plugins ""
    iniset $NEUTRON_CONF.$server_index DEFAULT tricircle_db_connection `database_connection_url tricircle`
    iniset $NEUTRON_CONF.$server_index DEFAULT notify_nova_on_port_data_changes False
    iniset $NEUTRON_CONF.$server_index DEFAULT notify_nova_on_port_status_changes False
    iniset $NEUTRON_CONF.$server_index client admin_username admin
    iniset $NEUTRON_CONF.$server_index client admin_password $ADMIN_PASSWORD
    iniset $NEUTRON_CONF.$server_index client admin_tenant demo
    iniset $NEUTRON_CONF.$server_index client auto_refresh_endpoint True
    iniset $NEUTRON_CONF.$server_index client top_region_name $CENTRAL_REGION_NAME

    local service_plugins=''
    if [ "$TRICIRCLE_ENABLE_TRUNK" == "True" ]; then
        service_plugins+=",tricircle.network.central_trunk_plugin.TricircleTrunkPlugin"
    fi
    if [ "$TRICIRCLE_ENABLE_SFC" == "True" ]; then
        service_plugins+=",networking_sfc.services.flowclassifier.plugin.FlowClassifierPlugin,tricircle.network.central_sfc_plugin.TricircleSfcPlugin"
        iniset $NEUTRON_CONF.$server_index sfc drivers tricircle_sfc
        iniset $NEUTRON_CONF.$server_index flowclassifier drivers tricircle_fc
    fi

    if [ "$TRICIRCLE_ENABLE_QOS" == "True" ]; then
        service_plugins+=",tricircle.network.central_qos_plugin.TricircleQosPlugin"
    fi

    if [ -n "$service_plugins" ]; then
        service_plugins=$(echo $service_plugins | sed 's/^,//')
        iniset $NEUTRON_CONF.$server_index DEFAULT service_plugins "$service_plugins"
    fi

    local type_drivers=''
    local tenant_network_types=''
    if [ "$Q_ML2_PLUGIN_VXLAN_TYPE_OPTIONS" != "" ]; then
        type_drivers+=,vxlan
        tenant_network_types+=,vxlan
        iniset $NEUTRON_CONF.$server_index tricircle vni_ranges `echo $Q_ML2_PLUGIN_VXLAN_TYPE_OPTIONS | awk -F= '{print $2}'`
    fi
    if [ "$Q_ML2_PLUGIN_VLAN_TYPE_OPTIONS" != "" ]; then
        type_drivers+=,vlan
        tenant_network_types+=,vlan
        iniset $NEUTRON_CONF.$server_index tricircle network_vlan_ranges `echo $Q_ML2_PLUGIN_VLAN_TYPE_OPTIONS | awk -F= '{print $2}'`
    fi
    if [ "$Q_ML2_PLUGIN_FLAT_TYPE_OPTIONS" != "" ]; then
        type_drivers+=,flat
        tenant_network_types+=,flat
        iniset $NEUTRON_CONF.$server_index tricircle flat_networks `echo $Q_ML2_PLUGIN_FLAT_TYPE_OPTIONS | awk -F= '{print $2}'`
    fi
    type_drivers+=,local
    tenant_network_types+=,local
    # remove the leading ","
    type_drivers=$(echo $type_drivers | sed 's/^,//')
    tenant_network_types=$(echo $tenant_network_types | sed 's/^,//')

    iniset $NEUTRON_CONF.$server_index tricircle type_drivers $type_drivers
    iniset $NEUTRON_CONF.$server_index tricircle tenant_network_types $tenant_network_types
    iniset $NEUTRON_CONF.$server_index tricircle enable_api_gateway False

    # reconfigure api-paste.ini in central neutron server
    local API_PASTE_INI=$NEUTRON_CONF_DIR/api-paste.ini
    sudo sed -e "
        /^keystone.*neutronapiapp/s/neutronapiapp/request_source &/;
        /app:neutronapiapp/i\[filter:request_source]\npaste.filter_factory = tricircle.common.request_source:RequestSource.factory\n
    " -i $API_PASTE_INI

    # default value of bridge_network_type is vxlan

    if [ "$TRICIRCLE_ENABLE_QOS" == "True" ]; then
        local p_exist=$(grep "^extension_drivers" /$Q_PLUGIN_CONF_FILE)
        if [[ $p_exist != "" ]]; then
            if ! [[ $(echo $p_exist | grep "qos") ]]; then
                sed -i "s/$p_exist/$p_exist,qos/g" /$Q_PLUGIN_CONF_FILE
            fi
        else
            sed -i "s/^\[ml2\]/\[ml2\]\nextension_drivers = qos/g" /$Q_PLUGIN_CONF_FILE
        fi
    fi

    recreate_database $Q_DB_NAME$server_index
    $NEUTRON_BIN_DIR/neutron-db-manage --config-file $NEUTRON_CONF.$server_index --config-file /$Q_PLUGIN_CONF_FILE upgrade head

    enable_service q-svc$server_index
    run_process q-svc$server_index "$NEUTRON_BIN_DIR/neutron-server --config-file $NEUTRON_CONF.$server_index --config-file /$Q_PLUGIN_CONF_FILE"
}
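
For orientation, the two sed expressions above prepend a `request_source` filter to the keystone pipeline and insert the filter's section just before `[app:neutronapiapp]`. The resulting api-paste.ini fragment would look roughly like this (a sketch only; the exact pipeline line and app factory come from the stock Neutron api-paste.ini and are assumptions here):

```ini
; Hypothetical post-sed fragment of Neutron's api-paste.ini
[composite:neutronapi_v2_0]
keystone = authtoken keystonecontext request_source neutronapiapp

[filter:request_source]
paste.filter_factory = tricircle.common.request_source:RequestSource.factory

[app:neutronapiapp]
paste.app_factory = neutron.api.v2.router:APIRouter.factory
```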
|
||||
|
||||
# install_tricircleclient() - Collect source and prepare
|
||||
function install_tricircleclient {
|
||||
if use_library_from_git "python-tricircleclient"; then
|
||||
git_clone_by_name "python-tricircleclient"
|
||||
setup_dev_lib "python-tricircleclient"
|
||||
else
|
||||
pip_install_gr tricircleclient
|
||||
fi
|
||||
}
|
# if the plugin is enabled to run, that means the Tricircle is enabled
# by default, so there is no need to check the variable Q_ENABLE_TRICIRCLE

if [[ "$1" == "stack" && "$2" == "pre-install" ]]; then
    echo_summary "Tricircle pre-install"

    # init_local_neutron_variables before installation
    init_local_neutron_variables

elif [[ "$1" == "stack" && "$2" == "install" ]]; then
    echo_summary "Installing Tricircle"
elif [[ "$1" == "stack" && "$2" == "post-config" ]]; then

    echo_summary "Configuring Tricircle"
    install_tricircleclient
    export NEUTRON_CREATE_INITIAL_NETWORKS=False
    sudo install -d -o $STACK_USER -m 755 $TRICIRCLE_CONF_DIR

    if [[ "$TRICIRCLE_START_SERVICES" == "True" ]]; then
        enable_service t-api t-job
        configure_tricircle_api
        configure_tricircle_xjob

        if [[ "$TRICIRCLE_DEPLOY_WITH_WSGI" == "True" ]]; then
            configure_tricircle_api_wsgi
        fi
    fi

    echo export PYTHONPATH=\$PYTHONPATH:$TRICIRCLE_DIR >> $RC_DIR/.localrc.auto

    setup_package $TRICIRCLE_DIR -e

    if [[ "$TRICIRCLE_START_SERVICES" == "True" ]]; then
        recreate_database tricircle
        tricircle-db-manage --config-file="$TRICIRCLE_API_CONF" db_sync

        if is_service_enabled q-svc ; then
            start_central_neutron_server $CENTRAL_REGION_NAME $TRICIRCLE_NEUTRON_PORT
        fi
    fi

    # update the local neutron.conf after the central Neutron has started
    init_local_neutron_conf

    if [[ "$TRICIRCLE_DEPLOY_WITH_CELL" == "True" ]]; then
        # update the local nova.conf
        init_local_nova_conf
    else
        iniset $NOVA_CONF glance region_name $REGION_NAME
    fi

    # add the default bridges br-vlan and br-ext if needed; ovs-vsctl
    # has just been installed before this stage
    add_default_bridges

elif [[ "$1" == "stack" && "$2" == "extra" ]]; then
    echo_summary "Initializing Tricircle Service"

    if [[ ${USE_VENV} = True ]]; then
        PROJECT_VENV["tricircle"]=${TRICIRCLE_DIR}.venv
        TRICIRCLE_BIN_DIR=${PROJECT_VENV["tricircle"]}/bin
    else
        TRICIRCLE_BIN_DIR=$(get_python_exec_prefix)
    fi

    if is_service_enabled t-api; then

        create_tricircle_accounts

        if [[ "$TRICIRCLE_DEPLOY_WITH_WSGI" == "True" ]]; then
            start_tricircle_api_wsgi
        else
            run_process t-api "$TRICIRCLE_BIN_DIR/tricircle-api --config-file $TRICIRCLE_API_CONF"
        fi

        if [[ "$TRICIRCLE_DEPLOY_WITH_CELL" == "True" && "$TRICIRCLE_START_SERVICES" == "True" ]]; then
            start_central_nova_server $REGION_NAME $CENTRAL_REGION_NAME $TRICIRCLE_NEUTRON_PORT
        fi
    fi

    if is_service_enabled t-job; then
        run_process t-job "$TRICIRCLE_BIN_DIR/tricircle-xjob --config-file $TRICIRCLE_XJOB_CONF"
    fi
fi

if [[ "$1" == "unstack" ]]; then

    if is_service_enabled t-api; then
        if [[ "$TRICIRCLE_DEPLOY_WITH_WSGI" == "True" ]]; then
            stop_tricircle_api_wsgi
            cleanup_tricircle_api_wsgi
        else
            stop_process t-api
        fi
    fi

    if is_service_enabled t-job; then
        stop_process t-job
    fi

    if is_service_enabled q-svc0; then
        stop_process q-svc0
    fi
fi
@@ -1,50 +0,0 @@
# Git information
TRICIRCLE_REPO=${TRICIRCLE_REPO:-https://opendev.org/openstack/tricircle/}
TRICIRCLE_DIR=$DEST/tricircle
TRICIRCLE_BRANCH=${TRICIRCLE_BRANCH:-master}

# common variables
CENTRAL_REGION_NAME=${CENTRAL_REGION_NAME:-CentralRegion}
TRICIRCLE_NEUTRON_PORT=${TRICIRCLE_NEUTRON_PORT:-20001}
TRICIRCLE_START_SERVICES=${TRICIRCLE_START_SERVICES:-True}
TRICIRCLE_DEPLOY_WITH_WSGI=${TRICIRCLE_DEPLOY_WITH_WSGI:-True}
TRICIRCLE_DEPLOY_WITH_CELL=${TRICIRCLE_DEPLOY_WITH_CELL:-False}

# extensions working with tricircle
TRICIRCLE_ENABLE_TRUNK=${TRICIRCLE_ENABLE_TRUNK:-False}
TRICIRCLE_ENABLE_SFC=${TRICIRCLE_ENABLE_SFC:-False}
TRICIRCLE_ENABLE_QOS=${TRICIRCLE_ENABLE_QOS:-False}

# these default settings are used for devstack based gate/check jobs
TRICIRCLE_DEFAULT_VLAN_BRIDGE=${TRICIRCLE_DEFAULT_VLAN_BRIDGE:-br-vlan}
TRICIRCLE_DEFAULT_VLAN_RANGE=${TRICIRCLE_DEFAULT_VLAN_RANGE:-101:150}
TRICIRCLE_DEFAULT_EXT_BRIDGE=${TRICIRCLE_DEFAULT_EXT_BRIDGE:-br-ext}
TRICIRCLE_DEFAULT_EXT_RANGE=${TRICIRCLE_DEFAULT_EXT_RANGE:-151:200}
TRICIRCLE_ADD_DEFAULT_BRIDGES=${TRICIRCLE_ADD_DEFAULT_BRIDGES:-False}
TRICIRCLE_DEFAULT_VXLAN_RANGE=${TRICIRCLE_DEFAULT_VXLAN_RANGE:-1001:2000}
TRICIRCLE_DEFAULT_FLAT_NETWORKS=${TRICIRCLE_DEFAULT_FLAT_NETWORKS:-bridge,extern}

TRICIRCLE_CONF_DIR=${TRICIRCLE_CONF_DIR:-/etc/tricircle}
TRICIRCLE_STATE_PATH=${TRICIRCLE_STATE_PATH:-/var/lib/tricircle}

# tricircle rest admin api
TRICIRCLE_API=$TRICIRCLE_DIR/tricircle/cmd/api.py
TRICIRCLE_API_CONF=$TRICIRCLE_CONF_DIR/api.conf
TRICIRCLE_API_APACHE_TEMPLATE=$TRICIRCLE_DIR/devstack/apache-tricircle-api.template

TRICIRCLE_API_LISTEN_ADDRESS=${TRICIRCLE_API_LISTEN_ADDRESS:-0.0.0.0}
TRICIRCLE_API_HOST=${TRICIRCLE_API_HOST:-$SERVICE_HOST}
TRICIRCLE_API_PORT=${TRICIRCLE_API_PORT:-19999}
TRICIRCLE_API_PROTOCOL=${TRICIRCLE_API_PROTOCOL:-$SERVICE_PROTOCOL}

# tricircle xjob
TRICIRCLE_XJOB_CONF=$TRICIRCLE_CONF_DIR/xjob.conf

TRICIRCLE_AUTH_CACHE_DIR=${TRICIRCLE_AUTH_CACHE_DIR:-/var/cache/tricircle}

export PYTHONPATH=$PYTHONPATH:$TRICIRCLE_DIR

# Set up default directories for client
GITREPO["python-tricircleclient"]=${TRICIRCLE_PYTHONCLIENT_REPO:-${GIT_BASE}/openstack/python-tricircleclient.git}
GITBRANCH["python-tricircleclient"]=${TRICIRCLE_PYTHONCLIENT_BRANCH:-master}
GITDIR["python-tricircleclient"]=$DEST/python-tricircleclient
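
Every assignment in the settings file above uses Bash's `${VAR:-default}` expansion, so a value exported in `local.conf` always wins over the shipped default. A minimal illustration of the pattern:

```shell
# ${VAR:-default}: keep the caller's value if set, otherwise fall back.
unset TRICIRCLE_NEUTRON_PORT
TRICIRCLE_NEUTRON_PORT=${TRICIRCLE_NEUTRON_PORT:-20001}
echo "unset  -> $TRICIRCLE_NEUTRON_PORT"   # default applies

TRICIRCLE_NEUTRON_PORT=30001
TRICIRCLE_NEUTRON_PORT=${TRICIRCLE_NEUTRON_PORT:-20001}
echo "preset -> $TRICIRCLE_NEUTRON_PORT"   # caller's value preserved
```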
@@ -1,135 +0,0 @@
#!/bin/bash
#
# Script name: verify_cross_pod_install.sh
# This script verifies the installation of Tricircle with cross-pod L3 networking.
# It verifies both east-west and north-south networks.
#
# There are some parameters you need to review before running this script:
#
# 1. Whether the post URL is 127.0.0.1 or something else.
# 2. This script creates two subnets, 10.0.1.0/24 and 10.0.2.0/24; change these
#    if needed.
# 3. This script creates the external subnet ext-net 10.50.11.0/26; change it
#    according to your own environment.
# 4. The floating ip is attached to the VM with ip 10.0.2.3 created by this
#    script; modify it according to your own environment.
#
# Change the parameters according to your own environment, then execute
# "verify_cross_pod_install.sh" on Node1.
#
# Author: Pengfei Shi <shipengfei92@gmail.com>
#

set -o xtrace

TEST_DIR=$(pwd)
echo "Test work directory is $TEST_DIR."

if [ ! -r admin-openrc.sh ]; then
    set -o xtrace
    echo "Your work directory doesn't have admin-openrc.sh,"
    echo "please check whether you are in tricircle/devstack/ before running this script."
    exit 1
fi

echo "Beginning the verify testing..."

echo "Import client environment variables:"
source $TEST_DIR/admin-openrc.sh

echo "******************************"
echo "* Verify Endpoint *"
echo "******************************"

echo "List openstack endpoint:"
openstack --debug endpoint list

token=$(openstack token issue | awk 'NR==5 {print $4}')

echo $token

openstack multiregion networking pod create --region-name RegionOne

openstack multiregion networking pod create --region-name Pod1 --availability-zone az1

openstack multiregion networking pod create --region-name Pod2 --availability-zone az2

echo "******************************"
echo "* Verify Nova *"
echo "******************************"

echo "Show nova aggregate:"
nova aggregate-list

neutron net-create --availability-zone-hint az1 net1

neutron net-create --availability-zone-hint az2 net2

echo "Create external network ext-net:"
neutron net-create --router:external --provider:network_type vlan --provider:physical_network extern --availability-zone-hint Pod2 ext-net

echo "Create test flavor:"
nova flavor-create test 1 1024 10 1

echo "******************************"
echo "* Verify Neutron *"
echo "******************************"

echo "Create external subnet with floating ips:"
neutron subnet-create --name ext-subnet --disable-dhcp ext-net 10.50.11.0/26 --allocation-pool start=10.50.11.30,end=10.50.11.50 --gateway 10.50.11.1

echo "Create router for subnets:"
neutron router-create router

echo "Set router external gateway:"
neutron router-gateway-set router ext-net

echo "Create net1 in Node1:"
neutron subnet-create net1 10.0.1.0/24

echo "Create net2 in Node2:"
neutron subnet-create net2 10.0.2.0/24

net1_id=$(neutron net-list |grep net1 | awk '{print $2}')
net2_id=$(neutron net-list |grep net2 | awk '{print $2}')
image_id=$(glance image-list |awk 'NR==4 {print $2}')

echo "Boot vm1 in az1:"
nova boot --flavor 1 --image $image_id --nic net-id=$net1_id --availability-zone az1 vm1
echo "Boot vm2 in az2:"
nova boot --flavor 1 --image $image_id --nic net-id=$net2_id --availability-zone az2 vm2

subnet1_id=$(neutron net-list |grep net1 |awk '{print $6}')
subnet2_id=$(neutron net-list |grep net2 |awk '{print $6}')

echo "Add interface of subnet1:"
neutron router-interface-add router $subnet1_id
echo "Add interface of subnet2:"
neutron router-interface-add router $subnet2_id

echo "******************************"
echo "* Verify VNC connection *"
echo "******************************"

echo "Get the VNC url of vm1:"
nova --os-region-name Pod1 get-vnc-console vm1 novnc
echo "Get the VNC url of vm2:"
nova --os-region-name Pod2 get-vnc-console vm2 novnc

echo "**************************************"
echo "* Verify External network *"
echo "**************************************"

echo "Create floating ip:"
neutron floatingip-create ext-net

echo "Show floating ips:"
neutron floatingip-list

echo "Show neutron ports:"
neutron port-list

floatingip_id=$(neutron floatingip-list | awk 'NR==4 {print $2}')
port_id=$(neutron port-list |grep 10.0.2.3 |awk '{print $2}')

echo "Associate floating ip:"
neutron floatingip-associate $floatingip_id $port_id
@@ -1,92 +0,0 @@
#!/bin/bash
#
# Script name: verify_top_install.sh
# This script verifies the installation of Tricircle in the top OpenStack.
#
# There are some parameters you need to review before running this script:
#
# 1. Whether the post URL is 127.0.0.1 or something else.
# 2. This script creates a network net1 with subnet 10.0.0.0/24; change these
#    if needed.
#
# Change the parameters according to your own environment, then execute
# "verify_top_install.sh" in the top OpenStack.
#
# Author: Pengfei Shi <shipengfei92@gmail.com>
#

set -o xtrace

TEST_DIR=$(pwd)
echo "Test work directory is $TEST_DIR."

if [ ! -r admin-openrc.sh ]; then
    set -o xtrace
    echo "Your work directory doesn't have admin-openrc.sh,"
    echo "please check whether you are in tricircle/devstack/ before running this script."
    exit 1
fi

echo "Beginning the verify testing..."

echo "Import client environment variables:"
source $TEST_DIR/admin-openrc.sh

echo "******************************"
echo "* Verify Endpoint *"
echo "******************************"

echo "List openstack endpoint:"

openstack --debug endpoint list

token=$(openstack token issue | awk 'NR==5 {print $4}')

echo $token

openstack multiregion networking pod create --region-name RegionOne

openstack multiregion networking pod create --region-name Pod1 --availability-zone az1

echo "******************************"
echo "* Verify Nova *"
echo "******************************"

echo "Show nova aggregate:"
nova --debug aggregate-list

echo "Create test flavor:"
nova --debug flavor-create test 1 1024 10 1

echo "******************************"
echo "* Verify Neutron *"
echo "******************************"

echo "Create net1:"
neutron --debug net-create net1

echo "Create subnet of net1:"
neutron --debug subnet-create net1 10.0.0.0/24

image_id=$(glance image-list |awk 'NR==4 {print $2}')
net_id=$(neutron net-list|grep net1 |awk '{print $2}')

echo "Boot vm1 in az1:"
nova --debug boot --flavor 1 --image $image_id --nic net-id=$net_id --availability-zone az1 vm1

echo "******************************"
echo "* Verify Cinder *"
echo "******************************"

echo "Create a volume in az1:"
cinder --debug create --availability-zone=az1 1

echo "Show volume list:"
cinder --debug list
volume_id=$(cinder list |grep lvmdriver-1 | awk '{print $2}')

echo "Show detailed volume info:"
cinder --debug show $volume_id

echo "Delete test volume:"
cinder --debug delete $volume_id
cinder --debug list
File diff suppressed because it is too large
@@ -1,25 +0,0 @@
================================
Command-Line Interface Reference
================================

Synopsis
========

Follow the OpenStack CLI format ::

    openstack [<global-options>] <command> [<command-arguments>]

The CLI for Tricircle can be executed as follows ::

    openstack multiregion networking <command> [<command-arguments>]

All commands issue requests to the Tricircle Admin API.


Management commands
===================

.. toctree::
   :maxdepth: 1

   tricircle-status
@@ -1,9 +0,0 @@
=====================
Tricircle Admin Guide
=====================

.. toctree::
   :maxdepth: 3

   api_v1
   cli
@@ -1,78 +0,0 @@
================
tricircle-status
================

Synopsis
========

::

    tricircle-status <category> <command> [<args>]

Description
===========

:program:`tricircle-status` is a tool that provides routines for checking the
status of a Tricircle deployment.

Options
=======

The standard pattern for executing a :program:`tricircle-status` command is::

    tricircle-status <category> <command> [<args>]

Run without arguments to see a list of available command categories::

    tricircle-status

Categories are:

* ``upgrade``

Detailed descriptions are below.

You can also run with a category argument such as ``upgrade`` to see a list of
all commands in that category::

    tricircle-status upgrade

These sections describe the available categories and arguments for
:program:`tricircle-status`.

Upgrade
~~~~~~~

.. _tricircle-status-checks:

``tricircle-status upgrade check``
  Performs a release-specific readiness check before restarting services with
  new code. This command expects to have complete configuration and access
  to databases and services.

  **Return Codes**

  .. list-table::
     :widths: 20 80
     :header-rows: 1

     * - Return code
       - Description
     * - 0
       - All upgrade readiness checks passed successfully and there is nothing
         to do.
     * - 1
       - At least one check encountered an issue and requires further
         investigation. This is considered a warning but the upgrade may be OK.
     * - 2
       - There was an upgrade status check failure that needs to be
         investigated. This should be considered something that stops an
         upgrade.
     * - 255
       - An unexpected error occurred.

  **History of Checks**

  **6.0.0 (Stein)**

  * Placeholder to be filled in with checks as they are added in Stein.
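
The return codes in the table above are what a deployment pipeline would branch on. A sketch of such a gate, with a stub function standing in for the real `tricircle-status` binary (the stub and its fixed return value are assumptions for illustration):

```shell
# Stub standing in for `tricircle-status upgrade check`;
# pretend one check raised a warning (exit code 1).
tricircle_status_stub() { return 1; }

tricircle_status_stub upgrade check
case $? in
    0) echo "all checks passed; safe to upgrade" ;;
    1) echo "warning: investigate, but the upgrade may be OK" ;;
    2) echo "failure: stop the upgrade and investigate" ;;
    *) echo "unexpected error" ;;
esac
```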
@@ -1,82 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os
import sys

sys.path.insert(0, os.path.abspath('../..'))
# -- General configuration ----------------------------------------------------

# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
    'sphinx.ext.autodoc',
    # 'sphinx.ext.intersphinx',
    'openstackdocstheme'
]

# openstackdocstheme options
repository_name = 'openstack/tricircle'
bug_project = 'tricircle'
bug_tag = ''

# autodoc generation is a bit aggressive and a nuisance when doing heavy
# text edit cycles.
# execute "export SPHINX_DEBUG=1" in your terminal to disable

# The suffix of source filenames.
source_suffix = '.rst'

# The master toctree document.
master_doc = 'index'

# General information about the project.
project = u'tricircle'
copyright = u'2015, OpenStack Foundation'

# If true, '()' will be appended to :func: etc. cross-reference text.
add_function_parentheses = True

# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
add_module_names = True

# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'

# -- Options for HTML output --------------------------------------------------

# The theme to use for HTML and HTML Help pages. Major themes that come with
# Sphinx are currently 'default' and 'sphinxdoc'.
# html_theme_path = ["."]
# html_theme = '_theme'
# html_static_path = ['static']
html_theme = 'openstackdocs'

html_last_updated_fmt = '%Y-%m-%d %H:%M'

# Output file base name for HTML help builder.
htmlhelp_basename = '%sdoc' % project

# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass
# [howto/manual]).
latex_documents = [
    ('index',
     '%s.tex' % project,
     u'%s Documentation' % project,
     u'OpenStack Foundation', 'manual'),
]

# Example configuration for intersphinx: refer to the Python standard library.
# intersphinx_mapping = {'http://docs.python.org/': None}
|
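The ``%``-interpolations in the conf.py above derive the output names from the ``project`` variable; a quick stand-alone check of what they expand to (plain Python, not run by Sphinx):

```python
# Reproduce the interpolations used in conf.py above; a sanity check of
# the resulting names, evaluated outside of Sphinx.
project = u'tricircle'

htmlhelp_basename = '%sdoc' % project
latex_target = '%s.tex' % project
latex_title = u'%s Documentation' % project

print(htmlhelp_basename)  # tricircledoc
print(latex_target)       # tricircle.tex
```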
@ -1,226 +0,0 @@
===================
Configuration Guide
===================

A brief introduction to configuring the Tricircle service. Only the
configuration items specific to Tricircle are described here. Logging,
messaging, database, keystonemiddleware and other configuration generated
from the OpenStack Oslo library is not covered, since those items are
common to Nova, Cinder and Neutron; please refer to the corresponding
documentation of those projects.
Common Options
==============
In the common configuration options, the [client] group needs to be
configured in the Admin API, XJob, Local Plugin and Central Plugin. The
``tricircle_db_connection`` option should be configured in the Admin API,
XJob and Central Plugin.

.. _Common:

.. list-table:: Description of common configuration options
   :header-rows: 1
   :class: config-ref-table

   * - Configuration option = Default value
     - Description
   * - **[DEFAULT]**
     -
   * - ``tricircle_db_connection`` = ``None``
     - (String) database connection string for Tricircle, for example, mysql+pymysql://root:password@127.0.0.1/tricircle?charset=utf8
   * - **[client]**
     -
   * - ``admin_password`` = ``None``
     - (String) password of the admin account, needed when auto_refresh_endpoint is set to True, for example, password.
   * - ``admin_tenant`` = ``None``
     - (String) tenant name of the admin account, needed when auto_refresh_endpoint is set to True, for example, demo.
   * - ``admin_tenant_domain_name`` = ``Default``
     - (String) tenant domain name of the admin account, needed when auto_refresh_endpoint is set to True.
   * - ``admin_user_domain_name`` = ``Default``
     - (String) user domain name of the admin account, needed when auto_refresh_endpoint is set to True.
   * - ``admin_username`` = ``None``
     - (String) username of the admin account, needed when auto_refresh_endpoint is set to True.
   * - ``auth_url`` = ``http://127.0.0.1/identity``
     - (String) keystone authorization url; basically the internal or public endpoint of keystone, depending on how the common.client module can reach keystone, for example, http://$service_host/identity
   * - ``identity_url`` = ``http://127.0.0.1/identity/v3``
     - [Deprecated] (String) keystone service url, for example, http://$service_host/identity/v3 (this option has not been used in code since the Pike release, so you can simply ignore it)
   * - ``auto_refresh_endpoint`` = ``True``
     - (Boolean) if set to True, the endpoint will be automatically refreshed on timeout when accessing it.
   * - ``bridge_cidr`` = ``100.0.0.0/9``
     - (String) cidr pool of the bridge network, for example, 100.0.0.0/9
   * - ``neutron_timeout`` = ``60``
     - (Integer) timeout for the neutron client in seconds.
   * - ``top_region_name`` = ``None``
     - (String) region name of the Central Neutron that the client needs to access, for example, CentralRegion.
   * - ``cross_pod_vxlan_mode`` = ``p2p``
     - (String) Cross-pod VxLAN networking support mode; possible choices are p2p, l2gw and noop
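The ``tricircle_db_connection`` example above follows the usual SQLAlchemy URL shape; a minimal sanity check of its parts with the standard library (illustrative only, not Tricircle code):

```python
from urllib.parse import parse_qs, urlparse

# Example value from the table above; not a real deployment credential.
conn = 'mysql+pymysql://root:password@127.0.0.1/tricircle?charset=utf8'
url = urlparse(conn)

assert url.scheme == 'mysql+pymysql'                # dialect+driver
assert url.username == 'root'
assert url.hostname == '127.0.0.1'
assert url.path.lstrip('/') == 'tricircle'          # database name
assert parse_qs(url.query)['charset'] == ['utf8']
```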
Tricircle Admin API Settings
============================

The Tricircle Admin API serves for managing the mapping between OpenStack
instances and availability zones, retrieving object uuid routing and exposing
APIs for maintenance. The following items should be configured in Tricircle's
api.conf.

.. _Tricircle-Admin_API:

.. list-table:: Description of Tricircle Admin API configuration options
   :header-rows: 1
   :class: config-ref-table

   * - Configuration option = Default value
     - Description
   * - **[DEFAULT]**
     -
   * - ``api_workers`` = ``1``
     - (Integer) The number of API worker processes
   * - ``auth_strategy`` = ``keystone``
     - (String) The type of authentication to use
   * - ``bind_host`` = ``0.0.0.0``
     - (String) The host IP to bind to
   * - ``bind_port`` = ``19999``
     - (Integer) The port to bind to
Tricircle XJob Settings
=======================

Tricircle XJob serves for receiving and processing cross-Neutron
functionality and other asynchronous jobs from the Admin API or the Tricircle
Central Neutron Plugin. The following items should be configured in
Tricircle's xjob.conf.

.. _Tricircle-Xjob:

.. list-table:: Description of Tricircle XJob configuration options
   :header-rows: 1
   :class: config-ref-table

   * - Configuration option = Default value
     - Description
   * - **[DEFAULT]**
     -
   * - ``periodic_enable`` = ``True``
     - (Boolean) Enable periodic tasks
   * - ``periodic_fuzzy_delay`` = ``60``
     - (Integer) Range of seconds to randomly delay when starting the periodic task scheduler, to reduce stampeding. (Disable by setting to 0)
   * - ``report_interval`` = ``10``
     - (Integer) Seconds between nodes reporting state to the datastore
   * - ``host`` = ``tricircle.xhost``
     - (String) The host name for the RPC server; each node should have a different host name.
   * - ``job_run_expire`` = ``180``
     - (Integer) A running job is considered expired after this time, in seconds
   * - ``workers`` = ``1``
     - (Integer) Number of workers
   * - ``worker_handle_timeout`` = ``1800``
     - (Integer) Timeout for one turn of worker processing, in seconds
   * - ``worker_sleep_time`` = ``60``
     - (Float) Seconds a worker sleeps after one run in a loop
   * - ``redo_time_span`` = ``172800``
     - (Integer) Time span in seconds; jobs created between (current timestamp - redo_time_span) and the current timestamp will be redone
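How ``redo_time_span`` bounds the jobs picked up for redoing can be sketched as follows (an illustration using plain ``datetime``, not the actual XJob code):

```python
import datetime

redo_time_span = 172800  # default value, in seconds (two days)

# Fixed "current" timestamp so the example is deterministic.
now = datetime.datetime(2020, 4, 1, 12, 0, 0)
earliest = now - datetime.timedelta(seconds=redo_time_span)


def should_redo(job_created_at):
    # Jobs created inside the [earliest, now] window are candidates.
    return earliest <= job_created_at <= now


assert earliest == datetime.datetime(2020, 3, 30, 12, 0, 0)
assert should_redo(datetime.datetime(2020, 3, 31))
assert not should_redo(datetime.datetime(2020, 3, 1))
```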
Networking Settings for Tricircle
=================================
To make the networking automation work, two plugins need to be configured:
the Tricircle Central Neutron Plugin and the Tricircle Local Neutron Plugin.

**Tricircle Central Neutron Plugin**

The Tricircle Central Neutron Plugin serves for tenant-level L2/L3 networking
automation across multiple Neutron servers. The following items should be
configured in the central Neutron's neutron.conf.

.. _Central Neutron:

.. list-table:: Description of Central Neutron configuration options
   :header-rows: 1
   :class: config-ref-table

   * - Configuration option = Default value
     - Description
   * - **[DEFAULT]**
     -
   * - ``core_plugin`` = ``None``
     - (String) core plugin the central Neutron server uses; should be set to tricircle.network.central_plugin.TricirclePlugin
   * - **[tricircle]**
     -
   * - ``bridge_network_type`` = ``vxlan``
     - (String) Type of the l3 bridge network; this type should be enabled in tenant_network_types and must not be the local type, for example, vlan or vxlan.
   * - ``default_region_for_external_network`` = ``RegionOne``
     - (String) Default region the external network belongs to; it must exist, for example, RegionOne.
   * - ``network_vlan_ranges`` = ``None``
     - (String) List of <physical_network>:<vlan_min>:<vlan_max> or <physical_network> entries specifying physical_network names usable for VLAN provider and tenant networks, as well as the ranges of VLAN tags on each that are available for allocation to tenant networks, for example, bridge:2001:3000.
   * - ``tenant_network_types`` = ``vxlan,local``
     - (String) Ordered list of network_types to allocate as tenant networks. The default value "local" is useful for single-pod connectivity, for example, local, vlan and vxlan.
   * - ``type_drivers`` = ``vxlan,local``
     - (String) List of network type driver entry points to be loaded from the tricircle.network.type_drivers namespace, for example, local, vlan and vxlan.
   * - ``vni_ranges`` = ``None``
     - (String) Comma-separated list of <vni_min>:<vni_max> tuples enumerating ranges of VXLAN VNI IDs that are available for tenant network allocation, for example, 1001:2000
   * - ``flat_networks`` = ``*``
     - (String) List of physical_network names with which flat networks can be created. Use the default '*' to allow flat networks with arbitrary physical_network names. Use an empty list to disable flat networks.
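A ``network_vlan_ranges`` value such as ``bridge:2001:3000`` decomposes as sketched below (a hypothetical parser written for illustration, not the one Neutron or Tricircle ships):

```python
def parse_vlan_ranges(value):
    """Parse '<net>:<vlan_min>:<vlan_max>' or bare '<net>' entries."""
    ranges = {}
    for entry in value.split(','):
        parts = entry.split(':')
        if len(parts) == 3:
            net, vlan_min, vlan_max = parts
            ranges.setdefault(net, []).append((int(vlan_min), int(vlan_max)))
        else:
            # Bare physical_network name: any VLAN tag may be used on it.
            ranges.setdefault(parts[0], [])
    return ranges


assert parse_vlan_ranges('bridge:2001:3000') == {'bridge': [(2001, 3000)]}
assert parse_vlan_ranges('ext') == {'ext': []}
```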
**Tricircle Local Neutron Plugin**

The Tricircle Local Neutron Plugin serves for triggering cross-Neutron
networking automation. It is a shim layer between the real core plugin and
the Neutron API server. The following items should be configured in the
local Neutron's neutron.conf.

.. list-table:: Description of Local Neutron configuration options
   :header-rows: 1
   :class: config-ref-table

   * - Configuration option = Default value
     - Description and Example
   * - **[DEFAULT]**
     -
   * - ``core_plugin`` = ``None``
     - (String) core plugin the local Neutron server uses; should be set to tricircle.network.local_plugin.TricirclePlugin
   * - **[tricircle]**
     -
   * - ``central_neutron_url`` = ``None``
     - (String) Central Neutron server url, for example, http://$service_host:9696
   * - ``real_core_plugin`` = ``None``
     - (String) The core plugin the Tricircle local plugin will invoke, for example, neutron.plugins.ml2.plugin.Ml2Plugin
**Tricircle Local Neutron L3 Plugin**

Across multiple OpenStack clouds, if the external network is located in the
first OpenStack cloud but the port to be associated with a floating ip is
located in the second OpenStack cloud, the network of this port may not be
attachable to the router in the first OpenStack cloud. To address this
scenario, Tricircle uses a bridge network to connect the routers in these
two OpenStack clouds when the network is not a cross-Neutron L2 network. To
make this work, the Tricircle Local Neutron L3 Plugin, or another L3 service
plugin, must be able to associate a floating ip with a port whose network is
not directly attached to the router. TricircleL3Plugin inherits from
Neutron's original L3RouterPlugin and overrides the original
"get_router_for_floatingip" implementation to allow associating a floating
ip with a port whose network is not directly attached to the router. If you
want to configure the local Neutron to use the original L3RouterPlugin, you
need to patch the "get_router_for_floatingip" function as has been done in
TricircleL3Plugin.

If only cross-Neutron L2 networking is needed in the deployment, it's not
necessary to configure the service plugins.

The following item should be configured in the local Neutron's neutron.conf:

.. list-table:: Description of Local Neutron configuration options
   :header-rows: 1
   :class: config-ref-table

   * - Configuration option = Default value
     - Description and Example
   * - **[DEFAULT]**
     -
   * - ``service_plugins`` = ``None``
     - (String) service plugins the local Neutron server uses; can be set to tricircle.network.local_l3_plugin.TricircleL3Plugin
@ -1,8 +0,0 @@
=============================
Tricircle Configuration Guide
=============================

.. toctree::
   :maxdepth: 3

   configuration
@ -1,4 +0,0 @@
============
Contributing
============
.. include:: ../../../CONTRIBUTING.rst
@ -1,8 +0,0 @@
============================
Tricircle Contribution Guide
============================

.. toctree::
   :maxdepth: 1

   contributing
@ -1,51 +0,0 @@
.. tricircle documentation master file, created by
   sphinx-quickstart on Wed Dec 2 17:00:36 2015.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.

=====================================
Welcome to Tricircle's documentation!
=====================================

Tricircle User Guide
====================
.. toctree::
   :maxdepth: 3

   user/index

Tricircle Contribution Guide
============================
.. toctree::
   :maxdepth: 2

   contributor/index

Tricircle Admin Guide
=====================
.. toctree::
   :maxdepth: 3

   admin/index

Tricircle Installation Guide
============================
.. toctree::
   :maxdepth: 3

   install/index

Tricircle Configuration Guide
=============================
.. toctree::
   :maxdepth: 3

   configuration/index

Tricircle Networking Guide
==========================
.. toctree::
   :maxdepth: 4

   networking/index
@ -1,8 +0,0 @@
============================
Tricircle Installation Guide
============================

.. toctree::
   :maxdepth: 3

   installation-guide
@ -1,152 +0,0 @@
=====================================
Work with Nova cell v2 (experimental)
=====================================

.. note:: Multi-cell support of Nova cell v2 is under development. DevStack
   doesn't currently support multi-cell deployment, so the steps discussed in
   this document may not be elegant. We will keep updating this document
   according to the progress of multi-cell development by the Nova team.

Setup
^^^^^

- 1 Follow the "Multi-pod Installation with DevStack" document to prepare your
  local.conf for both nodes, and set TRICIRCLE_DEPLOY_WITH_CELL to True for
  both nodes. Start DevStack in node1, then node2.

.. note:: After running DevStack in both nodes, a multi-cell environment will
   be prepared: there is one CentralRegion, where Nova API and central Neutron
   will be registered. Nova has two cells: node1 belongs to cell1, node2
   belongs to cell2, and each cell will be configured to use a dedicated local
   Neutron. For cell1, it's the RegionOne Neutron in node1; for cell2, it's
   the RegionTwo Neutron in node2 (you can set the region name in local.conf
   to make the name more friendly). End users can access the CentralRegion
   endpoint of Nova and Neutron to experience the integration of Nova cell v2
   and Tricircle.

- 2 Stop the following services in node2::

    systemctl stop devstack@n-sch.service
    systemctl stop devstack@n-super-cond.service
    systemctl stop devstack@n-api.service

  if the devstack@n-api-meta.service exists, stop it::

    systemctl stop devstack@n-api-meta.service

.. note:: Actually for cell v2, only one Nova API is required. We enable
   n-api in node2 because we need DevStack to help us create the necessary
   cell database. If n-api is disabled, neither the API database nor the cell
   database will be created.

- 3 In node2, run the following command::

    mysql -u$user -p$password -Dnova_cell1 -e 'select host, mapped from compute_nodes'

  you can see that this command returns one row showing that the host of
  node2 is already mapped::

    +-----------+--------+
    | host      | mapped |
    +-----------+--------+
    | zhiyuan-2 |      1 |
    +-----------+--------+

  This host is registered to the Nova API in node2, which we have already
  stopped. We need to update this row to set "mapped" to 0::

    mysql -u$user -p$password -Dnova_cell1 -e 'update compute_nodes set mapped = 0 where host = "zhiyuan-2"'

  then we can register this host again in step 4.

- 4 In node1, run the following commands to register the new cell::

    nova-manage cell_v2 create_cell --name cell2 \
      --transport-url rabbit://$rabbit_user:$rabbit_passwd@$node2_ip:5672/nova_cell1 \
      --database_connection mysql+pymysql://$db_user:$db_passwd@$node2_ip/nova_cell1?charset=utf8

    nova-manage cell_v2 discover_hosts

  then you can see the new cell and host are added in the database::

    mysql -u$user -p$password -Dnova_api -e 'select cell_id, host from host_mappings'

    +---------+-----------+
    | cell_id | host      |
    +---------+-----------+
    |       2 | zhiyuan-1 |
    |       3 | zhiyuan-2 |
    +---------+-----------+

    mysql -u$user -p$password -Dnova_api -e 'select id, name from cell_mappings'

    +----+-------+
    | id | name  |
    +----+-------+
    |  1 | cell0 |
    |  2 | cell1 |
    |  3 | cell2 |
    +----+-------+
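The two URLs passed to ``nova-manage cell_v2 create_cell`` in step 4 are assembled from deployment-specific shell variables; a quick sketch of that composition (all values below are placeholders, not real credentials):

```python
# Placeholders standing in for the shell variables used in step 4.
rabbit_user, rabbit_passwd = 'stackrabbit', 'secret'
db_user, db_passwd = 'root', 'secret'
node2_ip = '10.0.0.2'

transport_url = 'rabbit://%s:%s@%s:5672/nova_cell1' % (
    rabbit_user, rabbit_passwd, node2_ip)
database_connection = 'mysql+pymysql://%s:%s@%s/nova_cell1?charset=utf8' % (
    db_user, db_passwd, node2_ip)

assert transport_url == 'rabbit://stackrabbit:secret@10.0.0.2:5672/nova_cell1'
assert database_connection == (
    'mysql+pymysql://root:secret@10.0.0.2/nova_cell1?charset=utf8')
```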
- 5 In node1, run the following command::

    systemctl restart devstack@n-sch.service

- 6 In node1, check that the compute services in both hosts are registered::

    openstack --os-region-name CentralRegion compute service list

    +----+------------------+-----------+----------+---------+-------+----------------------------+
    | ID | Binary           | Host      | Zone     | Status  | State | Updated At                 |
    +----+------------------+-----------+----------+---------+-------+----------------------------+
    | 5  | nova-scheduler   | zhiyuan-1 | internal | enabled | up    | 2017-09-20T06:56:02.000000 |
    | 6  | nova-conductor   | zhiyuan-1 | internal | enabled | up    | 2017-09-20T06:56:09.000000 |
    | 8  | nova-consoleauth | zhiyuan-1 | internal | enabled | up    | 2017-09-20T06:56:01.000000 |
    | 1  | nova-conductor   | zhiyuan-1 | internal | enabled | up    | 2017-09-20T06:56:07.000000 |
    | 3  | nova-compute     | zhiyuan-1 | nova     | enabled | up    | 2017-09-20T06:56:10.000000 |
    | 1  | nova-conductor   | zhiyuan-2 | internal | enabled | up    | 2017-09-20T06:56:07.000000 |
    | 3  | nova-compute     | zhiyuan-2 | nova     | enabled | up    | 2017-09-20T06:56:09.000000 |
    +----+------------------+-----------+----------+---------+-------+----------------------------+

  zhiyuan-1 has two nova-conductor services, because one of them is a super
  conductor service.

- 7 Create two aggregates and put one host in each aggregate::

    nova --os-region-name CentralRegion aggregate-create ag1 az1
    nova --os-region-name CentralRegion aggregate-create ag2 az2
    nova --os-region-name CentralRegion aggregate-add-host ag1 zhiyuan-1
    nova --os-region-name CentralRegion aggregate-add-host ag2 zhiyuan-2

- 8 Create pods; the tricircle client is used::

    openstack --os-region-name CentralRegion multiregion networking pod create --region-name CentralRegion
    openstack --os-region-name CentralRegion multiregion networking pod create --region-name RegionOne --availability-zone az1
    openstack --os-region-name CentralRegion multiregion networking pod create --region-name RegionTwo --availability-zone az2

- 9 Create a network and boot virtual machines::

    net_id=$(openstack --os-region-name CentralRegion network create --provider-network-type vxlan net1 -c id -f value)
    openstack --os-region-name CentralRegion subnet create --subnet-range 10.0.1.0/24 --network net1 subnet1
    image_id=$(openstack --os-region-name CentralRegion image list -c ID -f value)

    openstack --os-region-name CentralRegion server create --flavor 1 --image $image_id --nic net-id=$net_id --availability-zone az1 vm1
    openstack --os-region-name CentralRegion server create --flavor 1 --image $image_id --nic net-id=$net_id --availability-zone az2 vm2

Troubleshooting
^^^^^^^^^^^^^^^

- 1 After you run "compute service list" in step 6, you only see services in
  node1, like::

    +----+------------------+-----------+----------+---------+-------+----------------------------+
    | ID | Binary           | Host      | Zone     | Status  | State | Updated At                 |
    +----+------------------+-----------+----------+---------+-------+----------------------------+
    | 5  | nova-scheduler   | zhiyuan-1 | internal | enabled | up    | 2017-09-20T06:55:52.000000 |
    | 6  | nova-conductor   | zhiyuan-1 | internal | enabled | up    | 2017-09-20T06:55:59.000000 |
    | 8  | nova-consoleauth | zhiyuan-1 | internal | enabled | up    | 2017-09-20T06:56:01.000000 |
    | 1  | nova-conductor   | zhiyuan-1 | internal | enabled | up    | 2017-09-20T06:55:57.000000 |
    | 3  | nova-compute     | zhiyuan-1 | nova     | enabled | up    | 2017-09-20T06:56:00.000000 |
    +----+------------------+-----------+----------+---------+-------+----------------------------+

  Though the new cell has been registered in the database, the running n-api
  process in node1 may not recognize it. We find that restarting n-api solves
  this problem.
@ -1,19 +0,0 @@
Installation Guide
------------------
The Tricircle can be deployed with DevStack for all-in-one single-pod and
multi-pod setups. You can build different Tricircle environments with
DevStack according to your needs. In the near future this installation guide
will also include a manual installation guide discussing how to install the
Tricircle step by step without DevStack, for users who install OpenStack
manually.

.. include:: ./single-pod-installation-devstack.rst
.. include:: ./multi-pod-installation-devstack.rst
.. include:: ./installation-manual.rst
.. include:: ./installation-cell.rst
.. include:: ./installation-lbaas.rst
.. include:: ./installation-lbaas_with_nova_cell_v2.rst
.. include:: ./installation-tricircle_work_with_container.rst
File diff suppressed because it is too large
File diff suppressed because it is too large
@ -1,296 +0,0 @@
===================
Manual Installation
===================

The Tricircle works with Neutron to provide networking automation
functionality across Neutron servers in a multi-region OpenStack deployment.
In this guide we discuss how to manually install the Tricircle with local and
central Neutron servers.

The local Neutron server, running with the Tricircle local plugin, is
responsible for triggering cross-Neutron networking automation. Every
OpenStack instance has one local Neutron service, registered in the same
region as other core services like Nova, Cinder, Glance, etc. The central
Neutron server, running with the Tricircle central plugin, is responsible for
unified resource allocation and cross-Neutron network building. Besides the
regions for each OpenStack instance, we also need one specific region for the
central Neutron service. Only the Tricircle administrator service needs to be
registered in this region along with the central Neutron service; other core
services are not mandatory.

Installation with Central Neutron Server
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

- 1 Install the Tricircle package::

    git clone https://github.com/openstack/tricircle.git
    cd tricircle
    pip install -e .

- 2 Register the Tricircle administrator API to Keystone::

    openstack user create tricircle --password password
    openstack role add --project service --user tricircle service
    openstack service create tricircle --name tricircle --description "Cross Neutron Networking Automation Service"
    service_id=$(openstack service show tricircle -f value -c id)
    service_host=162.3.124.201
    service_port=19999
    service_region=CentralRegion
    service_url=http://$service_host:$service_port/v1.0
    openstack endpoint create $service_id public $service_url --region $service_region
    openstack endpoint create $service_id admin $service_url --region $service_region
    openstack endpoint create $service_id internal $service_url --region $service_region

  Change password, service_host, service_port and service_region in the above
  commands to match your deployment. The OpenStack CLI tool will
  automatically find the endpoints to send registration requests to. If you
  would like to specify the region for endpoints, use::

    openstack --os-region-name <region_name> <command>
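The endpoint URL registered in step 2 is composed from the host and port variables; a stand-alone check of that composition (values taken from the example above):

```python
# Mirror the shell variables from step 2 in plain Python.
service_host = '162.3.124.201'
service_port = 19999
service_url = 'http://%s:%s/v1.0' % (service_host, service_port)

assert service_url == 'http://162.3.124.201:19999/v1.0'
```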
- 3 Generate the Tricircle configuration samples::

    cd tricircle
    oslo-config-generator --config-file=etc/api-cfg-gen.conf
    oslo-config-generator --config-file=etc/xjob-cfg-gen.conf

  The generated sample files are located in tricircle/etc

- 4 Configure the Tricircle administrator API::

    cd tricircle/etc
    cp api.conf.sample api.conf

  Edit etc/api.conf. For detailed configuration information, please refer to
  the configuration guide. Below, only the options that need to be changed
  are listed.

  .. csv-table::
     :header: "Option", "Description", "Example"

     [DEFAULT] tricircle_db_connection, "database connection string for tricircle", mysql+pymysql://root:password@ 127.0.0.1/tricircle?charset=utf8
     [DEFAULT] transport_url, "a URL representing the used messaging driver and its full configuration", rabbit://user:password@ 127.0.0.1:5672
     [keystone_authtoken] auth_type, "authentication method", password
     [keystone_authtoken] auth_url, "keystone authorization url", http://$keystone_service_host/identity
     [keystone_authtoken] username, "username of service account, needed for password authentication", tricircle
     [keystone_authtoken] password, "password of service account, needed for password authentication", password
     [keystone_authtoken] user_domain_name, "user domain name of service account, needed for password authentication", Default
     [keystone_authtoken] project_name, "project name of service account, needed for password authentication", service
     [keystone_authtoken] project_domain_name, "project domain name of service account, needed for password authentication", Default
     [keystone_authtoken] www_authenticate_uri, "complete public Identity API endpoint", http://$keystone_service_host/identity
     [keystone_authtoken] cafile, "A PEM encoded Certificate Authority to use when verifying HTTPs", /opt/stack/data/ca-bundle.pem
     [keystone_authtoken] signing_dir, "Directory used to cache files related to PKI tokens", /var/cache/tricircle
     [keystone_authtoken] memcached_servers, "Optionally specify a list of memcached server(s) to use for caching", $keystone_service_host:11211
     [client] auth_url, "keystone authorization url", http://$keystone_service_host/identity
     [client] identity_url, "keystone service url", http://$keystone_service_host/identity/v3
     [client] auto_refresh_endpoint, "if set to True, endpoint will be automatically refreshed if timeout accessing", True
     [client] top_region_name, "name of central region which client needs to access", CentralRegion
     [client] admin_username, "username of admin account", admin
     [client] admin_password, "password of admin account", password
     [client] admin_tenant, "project name of admin account", demo
     [client] admin_user_domain_name, "user domain name of admin account", Default
     [client] admin_tenant_domain_name, "project domain name of admin account", Default

  .. note:: The Tricircle utilizes the Oslo library to set up service,
     database, log and RPC; please refer to the configuration guide of the
     corresponding Oslo library if you need further configuration of these
     modules. Change keystone_service_host to the address of the Keystone
     service.

  .. note:: It's worth explaining the following options that can easily
     confuse users. **keystone_authtoken.auth_url** is the keystone endpoint
     url used by services to validate user tokens.
     **keystone_authtoken.www_authenticate_uri** will be put in the
     "WWW-Authenticate: Keystone uri=%s" header of 401 responses to tell
     users where they can get authentication. These two URLs can be the same,
     but sometimes people would like to use an internal URL for auth_url and
     a public URL for www_authenticate_uri. **client.auth_url** is used by
     the common.client module to construct a client to get authentication and
     access other services; it can be either the internal or public endpoint
     of keystone, depending on how the module can reach keystone.
     **client.identity_url** is no longer used in code since the Pike
     release, so you can simply ignore it; we will deprecate and remove this
     option later.
- 5 Create the Tricircle database(take mysql as an example)::
|
||||
|
||||
mysql -uroot -p -e "create database tricircle character set utf8;"
|
||||
cd tricircle
|
||||
tricircle-db-manage --config-file etc/api.conf db_sync
|
||||
|
||||
- 6 Start the Tricircle administrator API::
|
||||
|
||||
sudo mkdir /var/cache/tricircle
|
||||
sudo chown $(whoami) /var/cache/tricircle/
|
||||
cd tricircle
|
||||
tricircle-api --config-file etc/api.conf
|
||||
|
||||
- 7 Configure the Tricircle Xjob daemon::
|
||||
|
||||
cd tricircle/etc
|
||||
cp xjob.conf.sample xjob.conf
|
||||
|
||||
Edit etc/xjob.conf, for detail configuration information, please refer to the
|
||||
configuration guide. Below only options necessary to be changed are listed.
|
||||
|
||||
.. csv-table::
|
||||
:header: "Option", "Description", "Example"
|
||||
|
||||
[DEFAULT] tricircle_db_connection, "database connection string for tricircle", mysql+pymysql://root:password@ 127.0.0.1/tricircle?charset=utf8
|
||||
[DEFAULT] transport_url, "a URL representing the used messaging driver and its full configuration", rabbit://user:password@ 127.0.0.1:5672
|
||||
[client] auth_url, "keystone authorization url", http://$keystone_service_host/identity
|
||||
[client] identity_url, "keystone service url", http://$keystone_service_host/identity/v3
|
||||
[client] auto_refresh_endpoint, "if set to True, endpoint will be automatically refreshed if timeout accessing", True
|
||||
[client] top_region_name, "name of central region which client needs to access", CentralRegion
|
||||
[client] admin_username, "username of admin account", admin
|
||||
[client] admin_password, "password of admin account", password
|
||||
[client] admin_tenant, "project name of admin account", demo
|
||||
[client] admin_user_domain_name, "user domain name of admin account", Default
|
||||
[client] admin_tenant_domain_name, "project name of admin account", Default
|
||||
|
||||
  .. note:: The Tricircle uses the Oslo libraries to set up its service,
    database, logging and RPC. Please refer to the configuration guide of the
    corresponding Oslo library if you need further configuration of these
    modules. Change keystone_service_host to the address of the Keystone
    service.

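The options in the table above combine into a small INI file. As an illustrative sketch (the values below are placeholders taken from the examples, not a working deployment), a minimal etc/xjob.conf can be generated with Python's standard configparser:

```python
import configparser

# Placeholder values mirroring the table above; substitute your own
# database credentials and Keystone host.
options = {
    "DEFAULT": {
        "tricircle_db_connection":
            "mysql+pymysql://root:password@127.0.0.1/tricircle?charset=utf8",
        "transport_url": "rabbit://user:password@127.0.0.1:5672",
    },
    "client": {
        "auth_url": "http://127.0.0.1/identity",
        "identity_url": "http://127.0.0.1/identity/v3",
        "auto_refresh_endpoint": "True",
        "top_region_name": "CentralRegion",
        "admin_username": "admin",
        "admin_password": "password",
        "admin_tenant": "demo",
        "admin_user_domain_name": "Default",
        "admin_tenant_domain_name": "Default",
    },
}

config = configparser.ConfigParser()
config.read_dict(options)
with open("xjob.conf", "w") as f:
    config.write(f)
```

The real daemon parses this file with oslo.config rather than configparser; the sketch only shows the file shape the table describes.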
- 8 Start the Tricircle Xjob daemon::

    cd tricircle
    tricircle-xjob --config-file etc/xjob.conf

- 9 Setup the central Neutron server

  In this guide we assume readers are familiar with how to install a Neutron
  server, so we only briefly discuss the steps and the extra configuration
  needed by the central Neutron server. For detailed information about the
  configuration options in the "client" and "tricircle" groups, please refer
  to the configuration guide. The Neutron server can be installed alone, or
  you can install a full OpenStack instance and then remove or stop the other
  services.

  - install the Neutron package

  - configure the central Neutron server

    edit neutron.conf

  .. csv-table::
    :header: "Option", "Description", "Example"

    [database] connection, "database connection string for central Neutron server", mysql+pymysql://root:password@ 127.0.0.1/neutron?charset=utf8
    [DEFAULT] bind_port, "port the central Neutron server binds to", change to a value other than 9696 if you run the central and local Neutron servers on the same host
    [DEFAULT] core_plugin, "core plugin the central Neutron server uses", tricircle.network.central_plugin. TricirclePlugin
    [DEFAULT] service_plugins, "service plugins the central Neutron server uses", "(leave empty)"
    [DEFAULT] tricircle_db_connection, "database connection string for tricircle", mysql+pymysql://root:password@ 127.0.0.1/tricircle?charset=utf8
    [client] auth_url, "keystone authorization url", http://$keystone_service_host/identity
    [client] identity_url, "keystone service url", http://$keystone_service_host/identity/v3
    [client] auto_refresh_endpoint, "if set to True, the endpoint will be automatically refreshed when access times out", True
    [client] top_region_name, "name of the central region which the client needs to access", CentralRegion
    [client] admin_username, "username of admin account", admin
    [client] admin_password, "password of admin account", password
    [client] admin_tenant, "project name of admin account", demo
    [client] admin_user_domain_name, "user domain name of admin account", Default
    [client] admin_tenant_domain_name, "project domain name of admin account", Default
    [tricircle] type_drivers, "list of network type driver entry points to be loaded", "vxlan,vlan,flat,local"
    [tricircle] tenant_network_types, "ordered list of network_types to allocate as tenant networks", "vxlan,vlan,flat,local"
    [tricircle] network_vlan_ranges, "physical network names and usable VLAN tag ranges of the VLAN provider", "bridge:2001:3000"
    [tricircle] vni_ranges, "VxLAN VNI range", "1001:2000"
    [tricircle] flat_networks, "physical network names with which flat networks can be created", bridge
    [tricircle] bridge_network_type, "l3 bridge network type, which must be enabled in tenant_network_types and must not be the local type", vxlan
    [tricircle] default_region_for_external_network, "default region the external network belongs to", RegionOne
    [tricircle] enable_api_gateway, "whether the API gateway is enabled", False

  .. note:: Change keystone_service_host to the address of the Keystone
    service.

  - create the database for the central Neutron server

  - register the central Neutron server endpoint in Keystone; the central
    Neutron server should be registered in the same region as the Tricircle

  - start the central Neutron server

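One constraint in the [tricircle] options above is easy to get wrong: bridge_network_type must be one of the tenant_network_types and must not be the local type. A hedged sketch of that check with the example values (the helper is ours for illustration, not part of Tricircle):

```python
# Example values from the csv-table above.
tenant_network_types = ["vxlan", "vlan", "flat", "local"]
bridge_network_type = "vxlan"

def validate_bridge_type(bridge_type, tenant_types):
    """bridge_network_type must be a non-local tenant network type."""
    return bridge_type in tenant_types and bridge_type != "local"

print(validate_bridge_type(bridge_network_type, tenant_network_types))  # True
```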
Installation with Local Neutron Server
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

- 1 Install the Tricircle package::

    git clone https://github.com/openstack/tricircle.git
    cd tricircle
    pip install -e .

- 2 Setup the local Neutron server

  In this guide we assume readers have already installed a complete OpenStack
  instance running services like Nova, Cinder and Neutron, so we only discuss
  how to configure the Neutron server to work with the Tricircle. For detailed
  information about the configuration options in the "client" and "tricircle"
  groups, please refer to the configuration guide. After the change, just
  restart the Neutron server.

  edit neutron.conf.

  .. note::

    Pay attention to the service_plugins configuration item, and make sure
    the configured plugin supports associating a floating IP with a port
    whose network is not directly attached to the router. To support this,
    TricircleL3Plugin inherits from Neutron's original L3RouterPlugin and
    overrides the original "get_router_for_floatingip" implementation. If you
    need to configure the local Neutron server to use the original
    L3RouterPlugin, you will need to patch the "get_router_for_floatingip"
    function in the same way as has been done for TricircleL3Plugin.

    It is not necessary to configure the service plugins if cross-Neutron l2
    networking is the only need in the deployment.

  .. csv-table::
    :header: "Option", "Description", "Example"

    [DEFAULT] core_plugin, "core plugin the local Neutron server uses", tricircle.network.local_plugin. TricirclePlugin
    [DEFAULT] service_plugins, "service plugins the local Neutron server uses", tricircle.network.local_l3_plugin. TricircleL3Plugin
    [client] auth_url, "keystone authorization url", http://$keystone_service_host/identity
    [client] identity_url, "keystone service url", http://$keystone_service_host/identity/v3
    [client] auto_refresh_endpoint, "if set to True, the endpoint will be automatically refreshed when access times out", True
    [client] top_region_name, "name of the central region which the client needs to access", CentralRegion
    [client] admin_username, "username of admin account", admin
    [client] admin_password, "password of admin account", password
    [client] admin_tenant, "project name of admin account", demo
    [client] admin_user_domain_name, "user domain name of admin account", Default
    [client] admin_tenant_domain_name, "project domain name of admin account", Default
    [tricircle] real_core_plugin, "the core plugin the Tricircle local plugin invokes", neutron.plugins.ml2.plugin. Ml2Plugin
    [tricircle] central_neutron_url, "central Neutron server url", http://$neutron_service_host :9696

  .. note:: Change keystone_service_host to the address of the Keystone
    service, and neutron_service_host to the address of the central Neutron
    service.

  edit ml2_conf.ini

  .. list-table::
    :header-rows: 1

    * - Option
      - Description
      - Example
    * - [ml2] mechanism_drivers
      - add l2population if the vxlan network type is used
      - openvswitch,l2population
    * - [agent] l2_population
      - set to True if the vxlan network type is used
      - True
    * - [agent] tunnel_types
      - set to vxlan if the vxlan network type is used
      - vxlan
    * - [ml2_type_vlan] network_vlan_ranges
      - for a specific physical network, the vlan range should be the same as
        the tricircle.network_vlan_ranges option for the central Neutron
        server; configure this option if the vlan network type is used
      - bridge:2001:3000
    * - [ml2_type_vxlan] vni_ranges
      - should be the same as the tricircle.vni_ranges option for the central
        Neutron server; configure this option if the vxlan network type is
        used
      - 1001:2000
    * - [ml2_type_flat] flat_networks
      - should be part of the tricircle.network_vlan_ranges option for the
        central Neutron server; configure this option if the flat network
        type is used
      - bridge
    * - [ovs] bridge_mappings
      - map the physical network to an ovs bridge
      - bridge:br-bridge

  .. note:: In the tricircle.network_vlan_ranges option for the central
    Neutron server, all the available physical networks in all pods and their
    vlan ranges should be configured without duplication. It is possible that
    one local Neutron server does not contain some of the physical networks
    configured in tricircle.network_vlan_ranges; in this case, users need to
    specify availability zone hints when creating a network or booting
    instances in the correct pod, to ensure that the required physical
    network is available in the target pod.
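The "should be the same" constraints between the central [tricircle] options and each local ml2_conf.ini can be checked mechanically. A hedged sketch (the option values come from the tables above; the parsing helper is ours, not part of Neutron):

```python
def parse_vlan_ranges(value):
    """Parse 'physnet:min:max' entries, e.g. 'bridge:2001:3000'."""
    ranges = {}
    for entry in value.split(","):
        physnet, low, high = entry.strip().split(":")
        ranges[physnet] = (int(low), int(high))
    return ranges

# Example values: central tricircle.network_vlan_ranges vs. one pod's
# [ml2_type_vlan] network_vlan_ranges.
central_ranges = parse_vlan_ranges("bridge:2001:3000")
local_ranges = parse_vlan_ranges("bridge:2001:3000")

def consistent(central, local):
    """Every physnet a pod exposes must match the central range exactly."""
    return all(central.get(p) == r for p, r in local.items())

print(consistent(central_ranges, local_ranges))  # True
```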
========================================================
Installation Guide for Tricircle Working with Containers
========================================================

Introduction
^^^^^^^^^^^^

In the `Multi-pod Installation with DevStack <https://docs.openstack.org/tricircle/latest/install/installation-guide.html#multi-pod-installation-with-devstack>`_
guide, we discussed how to deploy the Tricircle in a multi-region scenario
with DevStack. However, the previous installation guides only covered managing
virtual machines with the Tricircle and Nova in a cross-region OpenStack cloud
environment, so multi-region container management was not supported by the
Tricircle. Meanwhile, OpenStack uses the Zun component to provide the
container management service, and the kuryr and kuryr-libnetwork components to
provide container networking. Given the Tricircle's
central-Neutron/local-Neutron design, the Tricircle working with Zun and Kuryr
provides a cross-region container management solution. This guide describes
how the Tricircle works with container management and how to deploy a
multi-region container environment.

Prerequisite
^^^^^^^^^^^^

In this guide, we need specific versions of the Zun and Kuryr project source
code. The source code of both projects must be the Train version or later. If
it is not, we need to manually change the source code of both projects. The
modifications are as follows:

- 1 Zun source code modification:
  For the Zun project, we need to modify the **neutron** function
  in the /zun/zun/common/clients.py file.
  (The '+' sign marks the added lines)

  .. code-block:: console

      def neutron(self):
          if self._neutron:
              return self._neutron

          session = self.keystone().session
          session.verify = self._get_client_option('neutron', 'ca_file') or True
          if self._get_client_option('neutron', 'insecure'):
              session.verify = False
          endpoint_type = self._get_client_option('neutron', 'endpoint_type')
    +     region_name = self._get_client_option('neutron', 'region_name')
          self._neutron = neutronclient.Client(session=session,
                                               endpoint_type=endpoint_type,
    +                                          region_name=region_name)

          return self._neutron

- 2 Kuryr source code modification:
  For the Kuryr project, we need to modify the **get_neutron_client** function
  in the /kuryr/kuryr/lib/utils.py file.
  (The '+' sign marks the added lines)

  .. code-block:: console

      def get_neutron_client(*args, **kwargs):
          conf_group = kuryr_config.neutron_group.name
          auth_plugin = get_auth_plugin(conf_group)
          session = get_keystone_session(conf_group, auth_plugin)
          endpoint_type = getattr(getattr(cfg.CONF, conf_group), 'endpoint_type')
    +     region_name = getattr(getattr(cfg.CONF, conf_group), 'region_name')

          return client.Client(session=session,
                               auth=auth_plugin,
                               endpoint_type=endpoint_type,
    +                          region_name=region_name)

Setup
^^^^^

In this guide we take a two-node deployment as an example: node1 runs as
RegionOne and the Central Region, and node2 runs as RegionTwo.

- 1 For node1 in RegionOne and node2 in RegionTwo, clone the code from the
  Zun repository and the Kuryr repository to /opt/stack/ . If the code does
  not meet the requirements described in the Prerequisite section, modify it
  with reference to the modification examples in the Prerequisite section.

- 2 Follow the `Multi-pod Installation with DevStack <https://docs.openstack.org/tricircle/latest/install/installation-guide.html#multi-pod-installation-with-devstack>`_
  document to prepare your local.conf for node1 in RegionOne and node2 in
  RegionTwo, and add the following lines before installation. Then start
  DevStack in node1 and node2.

  .. code-block:: console

    enable_plugin zun https://git.openstack.org/openstack/zun
    enable_plugin zun-tempest-plugin https://git.openstack.org/openstack/zun-tempest-plugin
    enable_plugin devstack-plugin-container https://git.openstack.org/openstack/devstack-plugin-container
    enable_plugin kuryr-libnetwork https://git.openstack.org/openstack/kuryr-libnetwork

    KURYR_CAPABILITY_SCOPE=local
    KURYR_PROCESS_EXTERNAL_CONNECTIVITY=False

- 3 After DevStack has successfully started and finished, we need to make
  some configuration changes to the Zun component and the Kuryr component in
  node1 and node2.

  - For Zun in node1, modify /etc/zun/zun.conf

    .. csv-table::
      :header: "Group", "Option", "Value"

      [neutron_client], region_name, RegionOne

  - Restart all the services of Zun in node1.

    .. code-block:: console

      $ sudo systemctl restart devstack@zun*

  - For Kuryr in node1, modify /etc/kuryr/kuryr.conf

    .. csv-table::
      :header: "Group", "Option", "Value"

      [neutron], region_name, RegionOne

  - Restart all the services of Kuryr in node1.

    .. code-block:: console

      $ sudo systemctl restart devstack@kur*

  - For Zun in node2, modify /etc/zun/zun.conf

    .. csv-table::
      :header: "Group", "Option", "Value"

      [neutron_client], region_name, RegionTwo

  - Restart all the services of Zun in node2.

    .. code-block:: console

      $ sudo systemctl restart devstack@zun*

  - For Kuryr in node2, modify /etc/kuryr/kuryr.conf

    .. csv-table::
      :header: "Group", "Option", "Value"

      [neutron], region_name, RegionTwo

  - Restart all the services of Kuryr in node2.

    .. code-block:: console

      $ sudo systemctl restart devstack@kur*

- 4 Then, we must create environment variables for the admin user and use the
  admin project.

  .. code-block:: console

    $ source openrc admin admin
    $ unset OS_REGION_NAME

- 5 Finally, use the tricircle client to create pods for the multi-region
  deployment.

  .. code-block:: console

    $ openstack --os-region-name CentralRegion multiregion networking pod create --region-name CentralRegion
    $ openstack --os-region-name CentralRegion multiregion networking pod create --region-name RegionOne --availability-zone az1
    $ openstack --os-region-name CentralRegion multiregion networking pod create --region-name RegionTwo --availability-zone az2

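The three pod create commands above register a simple region-to-availability-zone mapping. Conceptually (a hedged illustration; the field names are ours, not the actual Tricircle database schema), it can be pictured as:

```python
# Illustrative pod registry matching the commands above.
pods = [
    {"region_name": "CentralRegion", "az_name": None},
    {"region_name": "RegionOne", "az_name": "az1"},
    {"region_name": "RegionTwo", "az_name": "az2"},
]

def pods_for_az_hint(pods, az_hint):
    """Return the regions that can host resources for an az hint."""
    return [p["region_name"] for p in pods if p["az_name"] == az_hint]

print(pods_for_az_hint(pods, "az1"))  # ['RegionOne']
```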
How to play
^^^^^^^^^^^

- 1 Create a container glance image in RegionOne and RegionTwo.

  - Get the docker image from Docker Hub. Run these commands in node1 and
    node2.

    .. code-block:: console

      $ docker pull cirros
      $ docker save cirros -o /opt/stack/container_cirros

  - Use the glance client to create the container image.

    .. code-block:: console

      $ glance --os-region-name=RegionOne image-create --file /opt/stack/container_cirros --container-format=docker --disk-format=raw --name container_cirros --progress
      $ glance --os-region-name=RegionTwo image-create --file /opt/stack/container_cirros --container-format=docker --disk-format=raw --name container_cirros --progress

      $ openstack --os-region-name RegionOne image list

      +--------------------------------------+--------------------------+--------+
      | ID                                   | Name                     | Status |
      +--------------------------------------+--------------------------+--------+
      | 11186baf-4381-4e52-956c-22878b0642df | cirros-0.4.0-x86_64-disk | active |
      | 87864205-4352-4a2c-b9b1-ca95df52c93c | container_cirros         | active |
      +--------------------------------------+--------------------------+--------+

      $ openstack --os-region-name RegionTwo image list

      +--------------------------------------+--------------------------+--------+
      | ID                                   | Name                     | Status |
      +--------------------------------------+--------------------------+--------+
      | cd062c19-bb3a-4f60-b5ef-9688eb67b3da | container_cirros         | active |
      | cf4a2dc7-6d6e-4b7e-a772-44247246e1ff | cirros-0.4.0-x86_64-disk | active |
      +--------------------------------------+--------------------------+--------+

- 2 Create the container network in CentralRegion.

  - Create a network in CentralRegion.

    .. code-block:: console

      $ openstack --os-region-name CentralRegion network create container-net

      +---------------------------+----------------------------------------+
      | Field                     | Value                                  |
      +---------------------------+----------------------------------------+
      | admin_state_up            | UP                                     |
      | availability_zone_hints   |                                        |
      | availability_zones        | None                                   |
      | created_at                | None                                   |
      | description               | None                                   |
      | dns_domain                | None                                   |
      | id                        | 5e73dda5-902b-4322-b5b6-4121437fde26   |
      | ipv4_address_scope        | None                                   |
      | ipv6_address_scope        | None                                   |
      | is_default                | None                                   |
      | is_vlan_transparent       | None                                   |
      | location                  | cloud='', project.domain_id='default', project.domain_name=, project.id='2f314a39de10467bb62745bd96c5fe4d', project.name='admin', region_name='CentralRegion', zone= |
      | mtu                       | None                                   |
      | name                      | container-net                          |
      | port_security_enabled     | False                                  |
      | project_id                | 2f314a39de10467bb62745bd96c5fe4d       |
      | provider:network_type     | vxlan                                  |
      | provider:physical_network | None                                   |
      | provider:segmentation_id  | 1070                                   |
      | qos_policy_id             | None                                   |
      | revision_number           | None                                   |
      | router:external           | Internal                               |
      | segments                  | None                                   |
      | shared                    | False                                  |
      | status                    | ACTIVE                                 |
      | subnets                   |                                        |
      | tags                      |                                        |
      | updated_at                | None                                   |
      +---------------------------+----------------------------------------+

  - Create a subnet in container-net.

    .. code-block:: console

      $ openstack --os-region-name CentralRegion subnet create --subnet-range 10.0.60.0/24 --network container-net container-subnet

      +-------------------+----------------------------------------+
      | Field             | Value                                  |
      +-------------------+----------------------------------------+
      | allocation_pools  | 10.0.60.2-10.0.60.254                  |
      | cidr              | 10.0.60.0/24                           |
      | created_at        | 2019-12-10T07:13:21Z                   |
      | description       |                                        |
      | dns_nameservers   |                                        |
      | enable_dhcp       | True                                   |
      | gateway_ip        | 10.0.60.1                              |
      | host_routes       |                                        |
      | id                | b7a7adbd-afd3-4449-9cbc-fbce16c7a2e7   |
      | ip_version        | 4                                      |
      | ipv6_address_mode | None                                   |
      | ipv6_ra_mode      | None                                   |
      | location          | cloud='', project.domain_id='default', project.domain_name=, project.id='2f314a39de10467bb62745bd96c5fe4d', project.name='admin', region_name='CentralRegion', zone= |
      | name              | container-subnet                       |
      | network_id        | 5e73dda5-902b-4322-b5b6-4121437fde26   |
      | prefix_length     | None                                   |
      | project_id        | 2f314a39de10467bb62745bd96c5fe4d       |
      | revision_number   | 0                                      |
      | segment_id        | None                                   |
      | service_types     | None                                   |
      | subnetpool_id     | None                                   |
      | tags              |                                        |
      | updated_at        | 2019-12-10T07:13:21Z                   |
      +-------------------+----------------------------------------+

- 3 Create a container in RegionOne and RegionTwo.

  .. note:: We can give the container a specific command so that it keeps
    running, e.g. "sudo nc -l -p 5000" .

  .. code-block:: console

    $ openstack --os-region-name RegionOne appcontainer run --name container01 --net network=$container_net_id --image-driver glance $RegionOne_container_cirros_id sudo nc -l -p 5000

    +-------------------+-----------------------------------------+
    | Field             | Value                                   |
    +-------------------+-----------------------------------------+
    | tty               | False                                   |
    | addresses         | None                                    |
    | links             | [{u'href': u'http://192.168.1.81/v1/containers/ca67055c-635d-4603-9b0b-19c16eed7ef9', u'rel': u'self'}, {u'href': u'http://192.168.1.81/containers/ca67055c-635d-4603-9b0b-19c16eed7ef9', u'rel': u'bookmark'}] |
    | image             | 87864205-4352-4a2c-b9b1-ca95df52c93c    |
    | labels            | {}                                      |
    | disk              | 0                                       |
    | security_groups   | None                                    |
    | image_pull_policy | None                                    |
    | user_id           | 57df611fd8c7415dad6d2530bf962ecd        |
    | uuid              | ca67055c-635d-4603-9b0b-19c16eed7ef9    |
    | hostname          | None                                    |
    | auto_heal         | False                                   |
    | environment       | {}                                      |
    | memory            | 0                                       |
    | project_id        | 2f314a39de10467bb62745bd96c5fe4d        |
    | privileged        | False                                   |
    | status            | Creating                                |
    | workdir           | None                                    |
    | healthcheck       | None                                    |
    | auto_remove       | False                                   |
    | status_detail     | None                                    |
    | cpu_policy        | shared                                  |
    | host              | None                                    |
    | image_driver      | glance                                  |
    | task_state        | None                                    |
    | status_reason     | None                                    |
    | name              | container01                             |
    | restart_policy    | None                                    |
    | ports             | None                                    |
    | command           | [u'sudo', u'nc', u'-l', u'-p', u'5000'] |
    | runtime           | None                                    |
    | registry_id       | None                                    |
    | cpu               | 0.0                                     |
    | interactive       | False                                   |
    +-------------------+-----------------------------------------+

    $ openstack --os-region-name RegionOne appcontainer list

    +--------------------------------------+-------------+--------------------------------------+---------+------------+------------+-------+
    | uuid                                 | name        | image                                | status  | task_state | addresses  | ports |
    +--------------------------------------+-------------+--------------------------------------+---------+------------+------------+-------+
    | ca67055c-635d-4603-9b0b-19c16eed7ef9 | container01 | 87864205-4352-4a2c-b9b1-ca95df52c93c | Running | None       | 10.0.60.62 | []    |
    +--------------------------------------+-------------+--------------------------------------+---------+------------+------------+-------+

    $ openstack --os-region-name RegionTwo appcontainer run --name container02 --net network=$container_net_id --image-driver glance $RegionTwo_container_cirros_id sudo nc -l -p 5000

    +-------------------+-----------------------------------------+
    | Field             | Value                                   |
    +-------------------+-----------------------------------------+
    | tty               | False                                   |
    | addresses         | None                                    |
    | links             | [{u'href': u'http://192.168.1.82/v1/containers/c359e48c-7637-4d9f-8219-95a4577683c3', u'rel': u'self'}, {u'href': u'http://192.168.1.82/containers/c359e48c-7637-4d9f-8219-95a4577683c3', u'rel': u'bookmark'}] |
    | image             | cd062c19-bb3a-4f60-b5ef-9688eb67b3da    |
    | labels            | {}                                      |
    | disk              | 0                                       |
    | security_groups   | None                                    |
    | image_pull_policy | None                                    |
    | user_id           | 57df611fd8c7415dad6d2530bf962ecd        |
    | uuid              | c359e48c-7637-4d9f-8219-95a4577683c3    |
    | hostname          | None                                    |
    | auto_heal         | False                                   |
    | environment       | {}                                      |
    | memory            | 0                                       |
    | project_id        | 2f314a39de10467bb62745bd96c5fe4d        |
    | privileged        | False                                   |
    | status            | Creating                                |
    | workdir           | None                                    |
    | healthcheck       | None                                    |
    | auto_remove       | False                                   |
    | status_detail     | None                                    |
    | cpu_policy        | shared                                  |
    | host              | None                                    |
    | image_driver      | glance                                  |
    | task_state        | None                                    |
    | status_reason     | None                                    |
    | name              | container02                             |
    | restart_policy    | None                                    |
    | ports             | None                                    |
    | command           | [u'sudo', u'nc', u'-l', u'-p', u'5000'] |
    | runtime           | None                                    |
    | registry_id       | None                                    |
    | cpu               | 0.0                                     |
    | interactive       | False                                   |
    +-------------------+-----------------------------------------+

    $ openstack --os-region-name RegionTwo appcontainer list

    +--------------------------------------+-------------+--------------------------------------+---------+------------+-------------+-------+
    | uuid                                 | name        | image                                | status  | task_state | addresses   | ports |
    +--------------------------------------+-------------+--------------------------------------+---------+------------+-------------+-------+
    | c359e48c-7637-4d9f-8219-95a4577683c3 | container02 | cd062c19-bb3a-4f60-b5ef-9688eb67b3da | Running | None       | 10.0.60.134 | []    |
    +--------------------------------------+-------------+--------------------------------------+---------+------------+-------------+-------+

- 4 Execute the containers in RegionOne and RegionTwo.

  .. code-block:: console

    $ openstack --os-region-name RegionOne appcontainer exec --interactive container01 /bin/sh
    $ openstack --os-region-name RegionTwo appcontainer exec --interactive container02 /bin/sh

- 5 By now, we have successfully created a multi-region container scenario,
  so we can do something with cross-region containers, e.g. 1) ping a
  RegionTwo container from a RegionOne container, or 2) cross-region
  container load balancing.

====================================
Multi-pod Installation with DevStack
====================================

Introduction
^^^^^^^^^^^^

In the single pod installation guide, we discussed how to deploy the Tricircle
in one single pod with DevStack. Besides the Tricircle API and the central
Neutron server, only one pod (one pod means one OpenStack instance) is
running. The network is created with the default network type: local. A local
type network is only presented in one pod. If a local type network is already
hosting virtual machines in one pod, you can not use it to boot a virtual
machine in another pod. That is to say, the local network type does not
support cross-Neutron l2 networking.

With a multi-pod installation of the Tricircle, you can try out the
cross-Neutron l2 networking and cross-Neutron l3 networking features.

To support cross-Neutron l2 networking, we have added both the VLAN and VxLAN
network types to the Tricircle. When a VLAN type network created via the
central Neutron server is used to boot virtual machines in different pods,
the local Neutron server in each pod will create a VLAN type network with the
same VLAN ID and physical network as the central network, so each pod should
be configured with the same VLAN allocation pool and physical network. Then
virtual machines in different pods can communicate with each other in the
same physical network with the same VLAN tag. Similarly, for the VxLAN
network type, each pod should be configured with the same VxLAN allocation
pool, so the local Neutron server in each pod can create a VxLAN type network
with the same VxLAN ID as is allocated by the central Neutron server.

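The "same allocation pool in every pod" requirement means any segment ID allocated by the central Neutron server must fall inside each pod's configured range. A hedged sketch with made-up values (the range format matches the vni_ranges option used elsewhere in this guide):

```python
def parse_range(value):
    """Parse a 'low:high' range string such as the vni_ranges '1001:2000'."""
    low, high = value.split(":")
    return range(int(low), int(high) + 1)

# Every pod must be configured with the same VxLAN VNI pool so a centrally
# allocated VNI is usable everywhere (values are illustrative).
pod_vni_ranges = {
    "RegionOne": parse_range("1001:2000"),
    "RegionTwo": parse_range("1001:2000"),
}
central_vni = 1070  # hypothetical VNI allocated by central Neutron

print(all(central_vni in r for r in pod_vni_ranges.values()))  # True
```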
Cross-Neutron l3 networking is supported in two ways in the Tricircle. If the
two networks connected to the router are of the local type, we utilize a
shared VLAN or VxLAN network to achieve cross-Neutron l3 networking. When a
subnet is attached to a router via the central Neutron server, the Tricircle
not only creates the corresponding subnet and router in the pod, but also
creates a "bridge" network. Both the tenant network and the "bridge" network
are attached to the router. Each tenant will have one allocated VLAN or VxLAN
ID, which is shared by the tenant's "bridge" networks across Neutron servers.
The CIDRs of the "bridge" networks for one tenant are also the same, so the
router interfaces in the "bridge" networks across different Neutron servers
can communicate with each other, by adding an extra route as follows::

    destination: CIDR of tenant network in another pod
    nexthop: "bridge" network interface ip in another pod

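How such extra routes could be derived for each pod can be sketched as follows (a hedged illustration with made-up CIDRs and bridge IPs; this is not the actual Tricircle job code):

```python
# Hypothetical per-pod data: tenant network CIDR and the IP of the router's
# interface on the shared "bridge" network.
pods = {
    "pod1": {"tenant_cidr": "10.0.1.0/24", "bridge_ip": "100.0.0.1"},
    "pod2": {"tenant_cidr": "10.0.2.0/24", "bridge_ip": "100.0.0.2"},
}

def extra_routes(local_pod, pods):
    """Routes the local router needs: other pods' tenant CIDRs reached via
    their bridge interface IPs."""
    return [
        {"destination": info["tenant_cidr"], "nexthop": info["bridge_ip"]}
        for name, info in pods.items() if name != local_pod
    ]

print(extra_routes("pod1", pods))
```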
When a virtual machine sends a packet whose receiver is in another network
and in another pod, the packet first goes to the router, then is forwarded to
the router in another pod according to the extra route, and at last the
packet is sent to the target virtual machine. This route configuration job is
triggered when a user attaches a subnet to a router via the central Neutron
server, and the job is finished asynchronously.

If one of the networks connected to the router is not of the local type,
meaning that cross-Neutron l2 networking is supported in this network (like
the VLAN type), and the l2 network can be stretched into the current pod,
packets sent to the virtual machine in this network will not pass through the
"bridge" network. Instead, packets first go to the router, then are directly
forwarded to the target virtual machine via the l2 network. An l2 network's
presence scope is determined by the network's availability zone hint. If the
l2 network can not be stretched into the current pod, the packets will still
pass through the "bridge" network. For example, let's say we have two pods,
pod1 and pod2, and two availability zones, az1 and az2. Pod1 belongs to az1
and pod2 belongs to az2. If the availability zone hint of one VLAN type
network is set to az1, this network can not be stretched to pod2. So packets
sent from pod2 to virtual machines in this network still need to pass through
the "bridge" network.

Prerequisite
^^^^^^^^^^^^

In this guide we take a two-node deployment as an example: one node runs the
Tricircle API, the central Neutron server and one pod, and the other node runs
another pod. For VLAN networks, both nodes should have two network interfaces,
connected to the management network and the provider VLAN network, and the
physical network infrastructure should support VLAN tagging. For VxLAN
networks, you can combine the management plane and the data plane, in which
case only one network interface is needed. If you would also like to try
north-south networking, you should prepare one more network interface in the
second node for the external network. In this guide the external network is
also VLAN type, so the local.conf sample is based on a VLAN type external
network setup. For the resource requirements to set up each node, please
refer to
`All-In-One Single Machine <https://docs.openstack.org/devstack/latest/guides.html#all-in-one-single-machine>`_
for installing DevStack in a bare metal server and
`All-In-One Single VM <https://docs.openstack.org/devstack/latest/guides.html#all-in-one-single-vm>`_
for installing DevStack in a virtual machine.

If you want to experience cross-Neutron VxLAN networking, please make sure the
compute nodes are routable to each other on the data plane, and enable the L2
population mechanism driver in OpenStack RegionOne and OpenStack RegionTwo.

Setup
^^^^^

In pod1 in node1, for the Tricircle services, central Neutron and OpenStack
RegionOne,

- 1 Install DevStack. Please refer to the
  `DevStack document <https://docs.openstack.org/devstack/latest/>`_
  on how to install DevStack in a single VM or bare metal server.

- 2 In the DevStack folder, create a file local.conf, and copy the content of
  `local.conf node1 sample <https://github.com/openstack/tricircle/blob/master/devstack/local.conf.node_1.sample>`_
  to local.conf; change the password in the file if needed.

- 3 Change the following options according to your environment

  - change HOST_IP to your management interface IP::

      HOST_IP=10.250.201.24

  - the format of Q_ML2_PLUGIN_VLAN_TYPE_OPTIONS is
    (network_vlan_ranges=<physical network name>:<min vlan>:<max vlan>).
    You can change the physical network name, but remember to adapt your
    change to the commands shown in this guide; also, change the min VLAN and
    max VLAN to match the VLAN range your physical network supports. You need
    to additionally specify the physical network "extern" so that the central
    Neutron can create "extern" physical networks which are located in other
    pods::

      Q_ML2_PLUGIN_VLAN_TYPE_OPTIONS=(network_vlan_ranges=bridge:2001:3000,extern:3001:4000)

  - if you would also like to configure VxLAN networks, you can set
    Q_ML2_PLUGIN_VXLAN_TYPE_OPTIONS; its format is
    (vni_ranges=<min vxlan>:<max vxlan>)::

      Q_ML2_PLUGIN_VXLAN_TYPE_OPTIONS=(vni_ranges=1001:2000)

  - the format of OVS_BRIDGE_MAPPINGS is
    <physical network name>:<ovs bridge name>. You can change these names, but
    remember to adapt your change to the commands shown in this guide. You do
    not need to specify the bridge mapping for "extern", because this physical
    network is located in other pods::

      OVS_BRIDGE_MAPPINGS=bridge:br-vlan

    this option can be omitted if only VxLAN networks are needed

  - if you would also like to configure flat networks, you can set
    Q_ML2_PLUGIN_FLAT_TYPE_OPTIONS; its format is
    (flat_networks=phy_net1,phy_net2,...). Besides specifying a list of
    physical network names, you can also use '*' to allow flat networks with
    arbitrary physical network names, or use an empty list to disable flat
    networks. For simplicity, we use the same physical networks and bridge
    mappings for the VLAN and flat network configuration. Similar to VLAN
    networks, you need to additionally specify the physical network "extern"
    so that the central Neutron can create "extern" physical networks which
    are located in other pods::

      Q_ML2_PLUGIN_FLAT_TYPE_OPTIONS=(flat_networks=bridge,extern)

  - set TRICIRCLE_START_SERVICES to True to install the Tricircle services
    and central Neutron in node1::

      TRICIRCLE_START_SERVICES=True
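As a sanity check on the option formats above, the VLAN range string can be
split in plain shell before stacking. A minimal sketch; the value is the
sample from this guide, so adapt it to your environment:

```shell
# Sample value from this guide; adapt to your environment.
ranges="bridge:2001:3000,extern:3001:4000"

# Print each physical network together with its VLAN range.
for entry in $(echo "$ranges" | tr ',' ' '); do
  phys=$(echo "$entry" | cut -d: -f1)
  vmin=$(echo "$entry" | cut -d: -f2)
  vmax=$(echo "$entry" | cut -d: -f3)
  echo "$phys: VLANs $vmin-$vmax"
done
```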
- 4 Create an OVS bridge and attach the VLAN network interface to it ::

    sudo ovs-vsctl add-br br-vlan
    sudo ovs-vsctl add-port br-vlan eth1

  br-vlan is the OVS bridge name you configure on OVS_PHYSICAL_BRIDGE, and
  eth1 is the device name of your VLAN network interface. This step can be
  omitted if only VxLAN networks are provided to tenants.
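If you rerun the setup, the plain ``add-br``/``add-port`` calls above fail
when the bridge already exists; ovs-vsctl supports an idempotent variant via
``--may-exist``. A minimal sketch that only prints the commands for review
before you run them with sudo (the bridge and interface names are the samples
from this guide):

```shell
BRIDGE=br-vlan   # sample bridge name from this guide
IFACE=eth1       # sample VLAN interface name; adapt to your host

# --may-exist makes the calls safe to rerun; print them for review
# before executing with sudo.
cmds="ovs-vsctl --may-exist add-br $BRIDGE
ovs-vsctl --may-exist add-port $BRIDGE $IFACE"
echo "$cmds"
```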
- 5 Run DevStack. In the DevStack folder, run ::

    ./stack.sh

- 6 After DevStack successfully starts, begin to set up node2.

In pod2 in node2, for OpenStack RegionTwo,

- 1 Install DevStack. Please refer to the
  `DevStack document <https://docs.openstack.org/devstack/latest/>`_
  on how to install DevStack in a single VM or bare metal server.

- 2 In the DevStack folder, create a file local.conf, and copy the content of
  `local.conf node2 sample <https://github.com/openstack/tricircle/blob/master/devstack/local.conf.node_2.sample>`_
  to local.conf; change the password in the file if needed.

- 3 Change the following options according to your environment

  - change HOST_IP to your management interface IP::

      HOST_IP=10.250.201.25

  - change KEYSTONE_SERVICE_HOST to the management interface IP of node1::

      KEYSTONE_SERVICE_HOST=10.250.201.24

  - change KEYSTONE_AUTH_HOST to the management interface IP of node1::

      KEYSTONE_AUTH_HOST=10.250.201.24

  - the format of Q_ML2_PLUGIN_VLAN_TYPE_OPTIONS is
    (network_vlan_ranges=<physical network name>:<min vlan>:<max vlan>).
    You can change the physical network name, but remember to adapt your
    change to the commands shown in this guide; also, change the min VLAN and
    max VLAN to match the VLAN range your physical network supports::

      Q_ML2_PLUGIN_VLAN_TYPE_OPTIONS=(network_vlan_ranges=bridge:2001:3000,extern:3001:4000)

  - if you would also like to configure VxLAN networks, you can set
    Q_ML2_PLUGIN_VXLAN_TYPE_OPTIONS; its format is
    (vni_ranges=<min vxlan>:<max vxlan>)::

      Q_ML2_PLUGIN_VXLAN_TYPE_OPTIONS=(vni_ranges=1001:2000)

  - the format of OVS_BRIDGE_MAPPINGS is
    <physical network name>:<ovs bridge name>. You can change these names, but
    remember to adapt your change to the commands shown in this guide::

      OVS_BRIDGE_MAPPINGS=bridge:br-vlan,extern:br-ext

    if you only use VLAN networks for the external network, it can be
    configured like::

      OVS_BRIDGE_MAPPINGS=extern:br-ext

  - if you would also like to configure flat networks, you can set
    Q_ML2_PLUGIN_FLAT_TYPE_OPTIONS; its format is
    (flat_networks=phy_net1,phy_net2,...). Besides specifying a list of
    physical network names, you can also use '*' to allow flat networks with
    arbitrary physical network names, or use an empty list to disable flat
    networks. For simplicity, we use the same physical networks and bridge
    mappings for the VLAN and flat network configuration::

      Q_ML2_PLUGIN_FLAT_TYPE_OPTIONS=(flat_networks=bridge,extern)

  - set TRICIRCLE_START_SERVICES to False (it's True by default) so the
    Tricircle services and central Neutron will not be started in node2::

      TRICIRCLE_START_SERVICES=False
  In this guide, we define two physical networks in node2: one is "bridge"
  for the bridge network, the other is "extern" for the external network. If
  you do not want to try l3 north-south networking, you can simply remove the
  "extern" part. The external network type we use in this guide is VLAN; if
  you want to use another network type like flat, please refer to the
  `DevStack document <https://docs.openstack.org/devstack/latest/>`_.
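A quick way to eyeball a bridge-mapping string before stacking is to split it
in shell. A minimal sketch using the sample value above:

```shell
mappings="bridge:br-vlan,extern:br-ext"  # sample value from this guide

# Print each physical network with the OVS bridge it maps to.
for pair in $(echo "$mappings" | tr ',' ' '); do
  phys=$(echo "$pair" | cut -d: -f1)
  bridge=$(echo "$pair" | cut -d: -f2)
  echo "physical network '$phys' -> OVS bridge '$bridge'"
done
```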
- 4 Create OVS bridges and attach the VLAN network interfaces to them ::

    sudo ovs-vsctl add-br br-vlan
    sudo ovs-vsctl add-port br-vlan eth1
    sudo ovs-vsctl add-br br-ext
    sudo ovs-vsctl add-port br-ext eth2

  br-vlan and br-ext are the OVS bridge names you configure on
  OVS_PHYSICAL_BRIDGE, and eth1 and eth2 are the device names of your VLAN
  network interfaces, for the "bridge" network and the external network
  respectively. Omit br-vlan if you only use VxLAN networks as tenant
  networks.

- 5 Run DevStack. In the DevStack folder, run ::

    ./stack.sh

- 6 After DevStack successfully starts, the setup is finished.

.. note:: In the newest version of the code, we may fail to boot an instance
  in node2. The reason is that the Apache configuration file of the Nova
  placement API doesn't grant access to the placement API bin folder. You can
  use "screen -r" to check whether the placement API is working well. If the
  placement API is stuck, manually update the placement API configuration
  file "/etc/apache2/sites-enabled/placement-api.conf" in node2 to add the
  following section::

    <Directory /usr/local/bin>
        Require all granted
    </Directory>

  After the update, restart the Apache service first, and then the placement
  API.

  **This problem no longer exists after this patch:**

  https://github.com/openstack-dev/devstack/commit/6ed53156b6198e69d59d1cf3a3497e96f5b7a870

How to play
^^^^^^^^^^^

- 1 After DevStack successfully starts, we need to create environment
  variables for the user (the admin user as an example in this guide). In the
  DevStack folder ::

    source openrc admin demo

- 2 Unset the region name environment variable, so that commands can be
  issued to a specified region in the following steps as needed ::

    unset OS_REGION_NAME

- 3 Check if services have been correctly registered. Run ::

    openstack --os-region-name=RegionOne endpoint list

  you should get output like the following ::
    +----------------------------------+---------------+--------------+----------------+
    | ID                               | Region        | Service Name | Service Type   |
    +----------------------------------+---------------+--------------+----------------+
    | 4adaab1426d94959be46314b4bd277c2 | RegionOne     | glance       | image          |
    | 5314a11d168042ed85a1f32d40030b31 | RegionTwo     | nova_legacy  | compute_legacy |
    | ea43c53a8ab7493dacc4db079525c9b1 | RegionOne     | keystone     | identity       |
    | a1f263473edf4749853150178be1328d | RegionOne     | neutron      | network        |
    | ebea16ec07d94ed2b5356fb0a2a3223d | RegionTwo     | neutron      | network        |
    | 8d374672c09845f297755117ec868e11 | CentralRegion | tricircle    | Tricircle      |
    | e62e543bb9cf45f593641b2d00d72700 | RegionOne     | nova_legacy  | compute_legacy |
    | 540bdedfc449403b9befef3c2bfe3510 | RegionOne     | nova         | compute        |
    | d533429712954b29b9f37debb4f07605 | RegionTwo     | glance       | image          |
    | c8bdae9506cd443995ee3c89e811fb45 | CentralRegion | neutron      | network        |
    | 991d304dfcc14ccf8de4f00271fbfa22 | RegionTwo     | nova         | compute        |
    +----------------------------------+---------------+--------------+----------------+
"CentralRegion" is the region you set in local.conf via CENTRAL_REGION_NAME,
|
||||
whose default value is "CentralRegion", we use it as the region for the
|
||||
Tricircle API and central Neutron server. "RegionOne" and "RegionTwo" are the
|
||||
normal OpenStack regions which includes Nova, Neutron and Glance. Shared
|
||||
Keystone service is registered in "RegionOne".
|
||||
|
||||
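To pull just one region out of that listing, plain awk over the CLI output is
enough. A minimal sketch over a trimmed sample of the table above (only three
rows are reproduced):

```shell
# Trimmed sample of the endpoint listing shown above.
endpoints='| 4adaab1426d94959be46314b4bd277c2 | RegionOne     | glance    | image     |
| 8d374672c09845f297755117ec868e11 | CentralRegion | tricircle | Tricircle |
| c8bdae9506cd443995ee3c89e811fb45 | CentralRegion | neutron   | network   |'

# Keep only CentralRegion rows and print the service name column.
central="$(echo "$endpoints" | awk -F'|' '$3 ~ /CentralRegion/ {gsub(/ /, "", $4); print $4}')"
echo "$central"
```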
- 4 Create pod instances for the Tricircle to manage the mapping between
  availability zones and OpenStack instances ::

    openstack multiregion networking pod create --region-name CentralRegion

    openstack multiregion networking pod create --region-name RegionOne --availability-zone az1

    openstack multiregion networking pod create --region-name RegionTwo --availability-zone az2

  Pay attention to the "region_name" parameter we specify when creating a
  pod. The pod name should exactly match the region name registered in
  Keystone. In the above commands, we create pods named "CentralRegion",
  "RegionOne" and "RegionTwo".
- 5 Create the necessary resources in the central Neutron server ::

    neutron --os-region-name=CentralRegion net-create --availability-zone-hint RegionOne net1
    neutron --os-region-name=CentralRegion subnet-create net1 10.0.1.0/24
    neutron --os-region-name=CentralRegion net-create --availability-zone-hint RegionTwo net2
    neutron --os-region-name=CentralRegion subnet-create net2 10.0.2.0/24

  Please note that the net1 and net2 IDs will be used in a later step to boot
  VMs.
- 6 Get the image ID and flavor ID which will be used when booting VMs ::

    glance --os-region-name=RegionOne image-list
    nova --os-region-name=RegionOne flavor-list
    glance --os-region-name=RegionTwo image-list
    nova --os-region-name=RegionTwo flavor-list

- 7 Boot virtual machines ::

    nova --os-region-name=RegionOne boot --flavor 1 --image $image1_id --nic net-id=$net1_id vm1
    nova --os-region-name=RegionTwo boot --flavor 1 --image $image2_id --nic net-id=$net2_id vm2
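The $image1_id and $net1_id variables above are left for you to fill in; one
way to capture them is to parse the list output. A minimal sketch over a
trimmed sample row (the UUIDs here are illustrative, not from your cloud):

```shell
# Trimmed sample row in the style printed by "neutron net-list";
# the UUIDs are made up for illustration.
net_list='| 1897a446-bf6a-4bce-9374-6a3825ee5051 | net1 | 6a6c63b4-7f41-4a8f-9393-55cd79380e5a 10.0.1.0/24 |'

# Grab the ID column of the row whose name column is net1.
net1_id="$(echo "$net_list" | awk -F'|' '$3 ~ /net1/ {gsub(/ /, "", $2); print $2}')"
echo "$net1_id"
# nova --os-region-name=RegionOne boot --flavor 1 --image "$image1_id" --nic net-id="$net1_id" vm1
```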
||||
- 8 Verify the VMs are connected to the networks ::

    neutron --os-region-name=CentralRegion port-list
    neutron --os-region-name=RegionOne port-list
    nova --os-region-name=RegionOne list
    neutron --os-region-name=RegionTwo port-list
    nova --os-region-name=RegionTwo list

  The IP address of each VM can be found in both the local Neutron server and
  the central Neutron server. A port has the same UUID in the local Neutron
  server and the central Neutron server.

- 9 Create the external network and subnet ::

    neutron --os-region-name=CentralRegion net-create --router:external --provider:network_type vlan --provider:physical_network extern --availability-zone-hint RegionTwo ext-net
    neutron --os-region-name=CentralRegion subnet-create --name ext-subnet --disable-dhcp ext-net 163.3.124.0/24

  Pay attention that when creating an external network, we need to pass the
  "availability_zone_hints" parameter, which is the name of the pod that will
  host the external network.

  *Currently the external network needs to be created before attaching a
  subnet to the router, because the plugin needs to utilize the external
  network information to set up the bridge network when handling the
  interface adding operation. This limitation will be removed later.*

- 10 Create a router and attach subnets to it in the central Neutron server ::

    neutron --os-region-name=CentralRegion router-create router
    neutron --os-region-name=CentralRegion router-interface-add router $subnet1_id
    neutron --os-region-name=CentralRegion router-interface-add router $subnet2_id

- 11 Set the router external gateway in the central Neutron server ::

    neutron --os-region-name=CentralRegion router-gateway-set router ext-net

  Now a virtual machine in a subnet attached to the router should be able to
  ping machines in the external network. In our test, we use a hypervisor
  tool to directly start a virtual machine in the external network to check
  the network connectivity.

- 12 Launch the VNC console and test the connection ::

    nova --os-region-name=RegionOne get-vnc-console vm1 novnc
    nova --os-region-name=RegionTwo get-vnc-console vm2 novnc

  You should be able to ping vm1 from vm2 and vice versa.

- 13 Create a floating IP in the central Neutron server ::

    neutron --os-region-name=CentralRegion floatingip-create ext-net

- 14 Associate the floating IP ::

    neutron --os-region-name=CentralRegion floatingip-list
    neutron --os-region-name=CentralRegion port-list
    neutron --os-region-name=CentralRegion floatingip-associate $floatingip_id $port_id

  Now you should be able to access the virtual machine via the bound floating
  IP from the external network.
=====================================
Single pod installation with DevStack
=====================================

Now the Tricircle can be played with an all-in-one single pod DevStack. For
the resource requirements to set up a single pod DevStack, please refer
to `All-In-One Single Machine <https://docs.openstack.org/devstack/latest/guides.html#all-in-one-single-machine>`_ for
installing DevStack in a bare metal server
or `All-In-One Single VM <https://docs.openstack.org/devstack/latest/guides.html#all-in-one-single-vm>`_ for
installing DevStack in a virtual machine.

- 1 Install DevStack. Please refer to the `DevStack document
  <https://docs.openstack.org/devstack/latest/>`_
  on how to install DevStack in a single VM or bare metal server.

- 2 In the DevStack folder, create a file local.conf, and copy the content of
  https://github.com/openstack/tricircle/blob/master/devstack/local.conf.sample
  to local.conf; change the password in the file if needed.

- 3 Run DevStack. In the DevStack folder, run ::

    ./stack.sh

- 4 After DevStack successfully starts, we need to create environment
  variables for the user (the admin user as an example in this document). In
  the DevStack folder ::

    source openrc admin demo

- 5 Unset the region name environment variable, so that commands can be
  issued to a specified region in the following steps as needed ::

    unset OS_REGION_NAME

- 6 Check if services have been correctly registered. Run ::

    openstack --os-region-name=RegionOne endpoint list

  you should get output like the following ::
    +----------------------------------+---------------+--------------+----------------+
    | ID                               | Region        | Service Name | Service Type   |
    +----------------------------------+---------------+--------------+----------------+
    | 3944592550764e349d0e82dba19a8e64 | RegionOne     | cinder       | volume         |
    | 2ce48c73cca44e66a558ad69f1aa4436 | CentralRegion | tricircle    | Tricircle      |
    | d214b688923a4348b908525266db66ed | RegionOne     | nova_legacy  | compute_legacy |
    | c5dd60f23f2e4442865f601758a73982 | RegionOne     | keystone     | identity       |
    | a99d5742c76a4069bb8621e0303c6004 | RegionOne     | cinderv3     | volumev3       |
    | 8a3c711a24b2443a9a4420bcc302ed2c | RegionOne     | glance       | image          |
    | e136af00d64a4cdf8b6b367210476f49 | RegionOne     | nova         | compute        |
    | 4c3e5d52a90e493ab720213199ab22cd | RegionOne     | neutron      | network        |
    | 8a1312afb6944492b47c5a35f1e5caeb | RegionOne     | cinderv2     | volumev2       |
    | e0a5530abff749e1853a342b5747492e | CentralRegion | neutron      | network        |
    +----------------------------------+---------------+--------------+----------------+
"CentralRegion" is the region you set in local.conf via CENTRAL_REGION_NAME,
|
||||
whose default value is "CentralRegion", we use it as the region for the
|
||||
central Neutron server and Tricircle Admin API(ID is
|
||||
2ce48c73cca44e66a558ad69f1aa4436 in the above list).
|
||||
"RegionOne" is the normal OpenStack region which includes Nova, Cinder,
|
||||
Neutron.
|
||||
|
||||
- 7 Create pod instances for the Tricircle to manage the mapping between
|
||||
availability zone and OpenStack instances ::
|
||||
|
||||
openstack multiregion networking pod create --region-name CentralRegion
|
||||
|
||||
openstack multiregion networking pod create --region-name RegionOne --availability-zone az1
|
||||
|
||||
Pay attention to "region_name" parameter we specify when creating pod. Pod name
|
||||
should exactly match the region name registered in Keystone. In the above
|
||||
commands, we create pods named "CentralRegion" and "RegionOne".
|
||||
|
||||
- 8 Create the necessary resources in the central Neutron server ::

    neutron --os-region-name=CentralRegion net-create --availability-zone-hint RegionOne net1
    neutron --os-region-name=CentralRegion subnet-create net1 10.0.0.0/24

  Please note that the net1 ID will be used in a later step to boot a VM.

- 9 Get the image ID and flavor ID which will be used when booting a VM ::

    glance --os-region-name=RegionOne image-list
    nova --os-region-name=RegionOne flavor-list

- 10 Boot a virtual machine ::

    nova --os-region-name=RegionOne boot --flavor 1 --image $image_id --nic net-id=$net_id vm1

- 11 Verify the VM is connected to net1 ::

    neutron --os-region-name=CentralRegion port-list
    neutron --os-region-name=RegionOne port-list
    nova --os-region-name=RegionOne list

  The IP address of the VM can be found in both the local Neutron server and
  the central Neutron server. The port has the same UUID in the local Neutron
  server and the central Neutron server.
==========================
Tricircle Networking Guide
==========================

.. toctree::
   :maxdepth: 4

   networking-guide
===================================================
North South Networking via Direct Provider Networks
===================================================

The following figure illustrates one typical networking mode: instances have
two interfaces, one connected to net1 for heartbeat or data replication, the
other connected to phy_net1 or phy_net2 to provide service. There is a
different physical network in each region to support service redundancy in
case of a region level failure.

.. code-block:: console

    +-----------------+       +-----------------+
    |RegionOne        |       |RegionTwo        |
    |                 |       |                 |
    |    phy_net1     |       |    phy_net2     |
    | +--+---------+  |       | +--+---------+  |
    |    |            |       |    |            |
    |    |            |       |    |            |
    | +--+--------+   |       | +--+--------+   |
    | |           |   |       | |           |   |
    | | Instance1 |   |       | | Instance2 |   |
    | +------+----+   |       | +------+----+   |
    |        |        |       |        |        |
    |        |        |       |        |        |
    | net1   |        |       |        |        |
    | +------+-------------------------+---+   |
    |                 |       |                 |
    +-----------------+       +-----------------+

How to create this network topology
===================================

Create provider network phy_net1, which will be located in RegionOne.

.. code-block:: console

    $ neutron --os-region-name=CentralRegion net-create --provider:network_type vlan --provider:physical_network extern --availability-zone-hint RegionOne phy_net1
    +---------------------------+--------------------------------------+
    | Field                     | Value                                |
    +---------------------------+--------------------------------------+
    | admin_state_up            | True                                 |
    | availability_zone_hints   | RegionOne                            |
    | id                        | b7832cbb-d399-4d5d-bcfd-d1b804506a1a |
    | name                      | phy_net1                             |
    | project_id                | ce444c8be6da447bb412db7d30cd7023     |
    | provider:network_type     | vlan                                 |
    | provider:physical_network | extern                               |
    | provider:segmentation_id  | 170                                  |
    | router:external           | False                                |
    | shared                    | False                                |
    | status                    | ACTIVE                               |
    | subnets                   |                                      |
    | tenant_id                 | ce444c8be6da447bb412db7d30cd7023     |
    +---------------------------+--------------------------------------+

Create a subnet in phy_net1.

.. code-block:: console

    $ neutron --os-region-name=CentralRegion subnet-create phy_net1 202.96.1.0/24
    +-------------------+------------------------------------------------+
    | Field             | Value                                          |
    +-------------------+------------------------------------------------+
    | allocation_pools  | {"start": "202.96.1.2", "end": "202.96.1.254"} |
    | cidr              | 202.96.1.0/24                                  |
    | created_at        | 2017-01-11T08:43:48Z                           |
    | description       |                                                |
    | dns_nameservers   |                                                |
    | enable_dhcp       | True                                           |
    | gateway_ip        | 202.96.1.1                                     |
    | host_routes       |                                                |
    | id                | 4941c48e-5602-40fc-a117-e84833b85ed3           |
    | ip_version        | 4                                              |
    | ipv6_address_mode |                                                |
    | ipv6_ra_mode      |                                                |
    | name              |                                                |
    | network_id        | b7832cbb-d399-4d5d-bcfd-d1b804506a1a           |
    | project_id        | ce444c8be6da447bb412db7d30cd7023               |
    | revision_number   | 2                                              |
    | subnetpool_id     |                                                |
    | tenant_id         | ce444c8be6da447bb412db7d30cd7023               |
    | updated_at        | 2017-01-11T08:43:48Z                           |
    +-------------------+------------------------------------------------+

Create provider network phy_net2, which will be located in RegionTwo.

.. code-block:: console

    $ neutron --os-region-name=CentralRegion net-create --provider:network_type vlan --provider:physical_network extern --availability-zone-hint RegionTwo phy_net2
    +---------------------------+--------------------------------------+
    | Field                     | Value                                |
    +---------------------------+--------------------------------------+
    | admin_state_up            | True                                 |
    | availability_zone_hints   | RegionTwo                            |
    | id                        | 731293af-e68f-4677-b433-f46afd6431f3 |
    | name                      | phy_net2                             |
    | project_id                | ce444c8be6da447bb412db7d30cd7023     |
    | provider:network_type     | vlan                                 |
    | provider:physical_network | extern                               |
    | provider:segmentation_id  | 168                                  |
    | router:external           | False                                |
    | shared                    | False                                |
    | status                    | ACTIVE                               |
    | subnets                   |                                      |
    | tenant_id                 | ce444c8be6da447bb412db7d30cd7023     |
    +---------------------------+--------------------------------------+

Create a subnet in phy_net2.

.. code-block:: console

    $ neutron --os-region-name=CentralRegion subnet-create phy_net2 202.96.2.0/24
    +-------------------+------------------------------------------------+
    | Field             | Value                                          |
    +-------------------+------------------------------------------------+
    | allocation_pools  | {"start": "202.96.2.2", "end": "202.96.2.254"} |
    | cidr              | 202.96.2.0/24                                  |
    | created_at        | 2017-01-11T08:47:07Z                           |
    | description       |                                                |
    | dns_nameservers   |                                                |
    | enable_dhcp       | True                                           |
    | gateway_ip        | 202.96.2.1                                     |
    | host_routes       |                                                |
    | id                | f5fb4f11-4bc1-4911-bcca-b0eaccc6eaf9           |
    | ip_version        | 4                                              |
    | ipv6_address_mode |                                                |
    | ipv6_ra_mode      |                                                |
    | name              |                                                |
    | network_id        | 731293af-e68f-4677-b433-f46afd6431f3           |
    | project_id        | ce444c8be6da447bb412db7d30cd7023               |
    | revision_number   | 2                                              |
    | subnetpool_id     |                                                |
    | tenant_id         | ce444c8be6da447bb412db7d30cd7023               |
    | updated_at        | 2017-01-11T08:47:08Z                           |
    +-------------------+------------------------------------------------+

Create net1, which will work as the L2 network across RegionOne and RegionTwo.

.. code-block:: console

    If net1 is a VLAN based cross-Neutron L2 network
    $ neutron --os-region-name=CentralRegion net-create --provider:network_type vlan --provider:physical_network bridge --availability-zone-hint az1 --availability-zone-hint az2 net1
    +---------------------------+--------------------------------------+
    | Field                     | Value                                |
    +---------------------------+--------------------------------------+
    | admin_state_up            | True                                 |
    | availability_zone_hints   | az1                                  |
    |                           | az2                                  |
    | id                        | 1897a446-bf6a-4bce-9374-6a3825ee5051 |
    | name                      | net1                                 |
    | project_id                | ce444c8be6da447bb412db7d30cd7023     |
    | provider:network_type     | vlan                                 |
    | provider:physical_network | bridge                               |
    | provider:segmentation_id  | 132                                  |
    | router:external           | False                                |
    | shared                    | False                                |
    | status                    | ACTIVE                               |
    | subnets                   |                                      |
    | tenant_id                 | ce444c8be6da447bb412db7d30cd7023     |
    +---------------------------+--------------------------------------+

    If net1 is a VxLAN based cross-Neutron L2 network
    $ neutron --os-region-name=CentralRegion net-create --provider:network_type vxlan --availability-zone-hint az1 --availability-zone-hint az2 net1
    +---------------------------+--------------------------------------+
    | Field                     | Value                                |
    +---------------------------+--------------------------------------+
    | admin_state_up            | True                                 |
    | availability_zone_hints   | az1                                  |
    |                           | az2                                  |
    | id                        | 0093f32c-2ecd-4888-a8c2-a6a424bddfe8 |
    | name                      | net1                                 |
    | project_id                | ce444c8be6da447bb412db7d30cd7023     |
    | provider:network_type     | vxlan                                |
    | provider:physical_network |                                      |
    | provider:segmentation_id  | 1036                                 |
    | router:external           | False                                |
    | shared                    | False                                |
    | status                    | ACTIVE                               |
    | subnets                   |                                      |
    | tenant_id                 | ce444c8be6da447bb412db7d30cd7023     |
    +---------------------------+--------------------------------------+

Create a subnet in net1.

.. code-block:: console

    $ neutron --os-region-name=CentralRegion subnet-create net1 10.0.1.0/24
    +-------------------+--------------------------------------------+
    | Field             | Value                                      |
    +-------------------+--------------------------------------------+
    | allocation_pools  | {"start": "10.0.1.2", "end": "10.0.1.254"} |
    | cidr              | 10.0.1.0/24                                |
    | created_at        | 2017-01-11T08:49:53Z                       |
    | description       |                                            |
    | dns_nameservers   |                                            |
    | enable_dhcp       | True                                       |
    | gateway_ip        | 10.0.1.1                                   |
    | host_routes       |                                            |
    | id                | 6a6c63b4-7f41-4a8f-9393-55cd79380e5a       |
    | ip_version        | 4                                          |
    | ipv6_address_mode |                                            |
    | ipv6_ra_mode      |                                            |
    | name              |                                            |
    | network_id        | 1897a446-bf6a-4bce-9374-6a3825ee5051       |
    | project_id        | ce444c8be6da447bb412db7d30cd7023           |
    | revision_number   | 2                                          |
    | subnetpool_id     |                                            |
    | tenant_id         | ce444c8be6da447bb412db7d30cd7023           |
    | updated_at        | 2017-01-11T08:49:53Z                       |
    +-------------------+--------------------------------------------+

List the available images in RegionOne.

.. code-block:: console

    $ glance --os-region-name=RegionOne image-list
    +--------------------------------------+---------------------------------+
    | ID                                   | Name                            |
    +--------------------------------------+---------------------------------+
    | 924a5078-efe5-4abf-85e8-992b7e5f6ac3 | cirros-0.3.4-x86_64-uec         |
    | d3e8349d-d58d-4d17-b0ab-951c095fbbc4 | cirros-0.3.4-x86_64-uec-kernel  |
    | c4cd7482-a145-4f26-9f41-a9ac17b9492c | cirros-0.3.4-x86_64-uec-ramdisk |
    +--------------------------------------+---------------------------------+

List available flavors in RegionOne.

.. code-block:: console

   $ nova --os-region-name=RegionOne flavor-list
   +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
   | ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
   +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
   | 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
   | 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
   | 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
   | 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
   | 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
   | c1 | cirros256 | 256       | 0    | 0         |      | 1     | 1.0         | True      |
   | d1 | ds512M    | 512       | 5    | 0         |      | 1     | 1.0         | True      |
   | d2 | ds1G      | 1024      | 10   | 0         |      | 1     | 1.0         | True      |
   | d3 | ds2G      | 2048      | 10   | 0         |      | 2     | 1.0         | True      |
   | d4 | ds4G      | 4096      | 20   | 0         |      | 4     | 1.0         | True      |
   +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+

Boot instance1 in RegionOne, and connect this instance to net1 and phy_net1.

.. code-block:: console

   $ nova --os-region-name=RegionOne boot --flavor 1 --image 924a5078-efe5-4abf-85e8-992b7e5f6ac3 --nic net-id=1897a446-bf6a-4bce-9374-6a3825ee5051 --nic net-id=b7832cbb-d399-4d5d-bcfd-d1b804506a1a instance1
   +--------------------------------------+----------------------------------------------------------------+
   | Property                             | Value                                                          |
   +--------------------------------------+----------------------------------------------------------------+
   | OS-DCF:diskConfig                    | MANUAL                                                         |
   | OS-EXT-AZ:availability_zone          |                                                                |
   | OS-EXT-SRV-ATTR:host                 | -                                                              |
   | OS-EXT-SRV-ATTR:hostname             | instance1                                                      |
   | OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                              |
   | OS-EXT-SRV-ATTR:instance_name        |                                                                |
   | OS-EXT-SRV-ATTR:kernel_id            | d3e8349d-d58d-4d17-b0ab-951c095fbbc4                           |
   | OS-EXT-SRV-ATTR:launch_index         | 0                                                              |
   | OS-EXT-SRV-ATTR:ramdisk_id           | c4cd7482-a145-4f26-9f41-a9ac17b9492c                           |
   | OS-EXT-SRV-ATTR:reservation_id       | r-eeu5hjq7                                                     |
   | OS-EXT-SRV-ATTR:root_device_name     | -                                                              |
   | OS-EXT-SRV-ATTR:user_data            | -                                                              |
   | OS-EXT-STS:power_state               | 0                                                              |
   | OS-EXT-STS:task_state                | scheduling                                                     |
   | OS-EXT-STS:vm_state                  | building                                                       |
   | OS-SRV-USG:launched_at               | -                                                              |
   | OS-SRV-USG:terminated_at             | -                                                              |
   | accessIPv4                           |                                                                |
   | accessIPv6                           |                                                                |
   | adminPass                            | ZB3Ve3nPS66g                                                   |
   | config_drive                         |                                                                |
   | created                              | 2017-01-11T10:49:32Z                                           |
   | description                          | -                                                              |
   | flavor                               | m1.tiny (1)                                                    |
   | hostId                               |                                                                |
   | host_status                          |                                                                |
   | id                                   | 5fd0f616-1077-46df-bebd-b8b53d09663c                           |
   | image                                | cirros-0.3.4-x86_64-uec (924a5078-efe5-4abf-85e8-992b7e5f6ac3) |
   | key_name                             | -                                                              |
   | locked                               | False                                                          |
   | metadata                             | {}                                                             |
   | name                                 | instance1                                                      |
   | os-extended-volumes:volumes_attached | []                                                             |
   | progress                             | 0                                                              |
   | security_groups                      | default                                                        |
   | status                               | BUILD                                                          |
   | tags                                 | []                                                             |
   | tenant_id                            | ce444c8be6da447bb412db7d30cd7023                               |
   | updated                              | 2017-01-11T10:49:33Z                                           |
   | user_id                              | 66d7b31664a840939f7d3f2de5e717a9                               |
   +--------------------------------------+----------------------------------------------------------------+

List available images in RegionTwo.

.. code-block:: console

   $ glance --os-region-name=RegionTwo image-list
   +--------------------------------------+---------------------------------+
   | ID                                   | Name                            |
   +--------------------------------------+---------------------------------+
   | 1da4303c-96bf-4714-a4dc-cbd5709eda29 | cirros-0.3.4-x86_64-uec         |
   | fb35d578-a984-4807-8234-f0d0ca393e89 | cirros-0.3.4-x86_64-uec-kernel  |
   | a615d6df-be63-4d5a-9a05-5cf7e23a438a | cirros-0.3.4-x86_64-uec-ramdisk |
   +--------------------------------------+---------------------------------+

List available flavors in RegionTwo.

.. code-block:: console

   $ nova --os-region-name=RegionTwo flavor-list
   +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
   | ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
   +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
   | 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
   | 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
   | 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
   | 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
   | 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
   | c1 | cirros256 | 256       | 0    | 0         |      | 1     | 1.0         | True      |
   | d1 | ds512M    | 512       | 5    | 0         |      | 1     | 1.0         | True      |
   | d2 | ds1G      | 1024      | 10   | 0         |      | 1     | 1.0         | True      |
   | d3 | ds2G      | 2048      | 10   | 0         |      | 2     | 1.0         | True      |
   | d4 | ds4G      | 4096      | 20   | 0         |      | 4     | 1.0         | True      |
   +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+

Boot instance2 in RegionTwo, and connect this instance to net1 and phy_net2.

.. code-block:: console

   $ nova --os-region-name=RegionTwo boot --flavor 1 --image 1da4303c-96bf-4714-a4dc-cbd5709eda29 --nic net-id=1897a446-bf6a-4bce-9374-6a3825ee5051 --nic net-id=731293af-e68f-4677-b433-f46afd6431f3 instance2
   +--------------------------------------+----------------------------------------------------------------+
   | Property                             | Value                                                          |
   +--------------------------------------+----------------------------------------------------------------+
   | OS-DCF:diskConfig                    | MANUAL                                                         |
   | OS-EXT-AZ:availability_zone          |                                                                |
   | OS-EXT-SRV-ATTR:host                 | -                                                              |
   | OS-EXT-SRV-ATTR:hostname             | instance2                                                      |
   | OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                              |
   | OS-EXT-SRV-ATTR:instance_name        |                                                                |
   | OS-EXT-SRV-ATTR:kernel_id            | fb35d578-a984-4807-8234-f0d0ca393e89                           |
   | OS-EXT-SRV-ATTR:launch_index         | 0                                                              |
   | OS-EXT-SRV-ATTR:ramdisk_id           | a615d6df-be63-4d5a-9a05-5cf7e23a438a                           |
   | OS-EXT-SRV-ATTR:reservation_id       | r-m0duhg40                                                     |
   | OS-EXT-SRV-ATTR:root_device_name     | -                                                              |
   | OS-EXT-SRV-ATTR:user_data            | -                                                              |
   | OS-EXT-STS:power_state               | 0                                                              |
   | OS-EXT-STS:task_state                | scheduling                                                     |
   | OS-EXT-STS:vm_state                  | building                                                       |
   | OS-SRV-USG:launched_at               | -                                                              |
   | OS-SRV-USG:terminated_at             | -                                                              |
   | accessIPv4                           |                                                                |
   | accessIPv6                           |                                                                |
   | adminPass                            | M5FodqwcsTiJ                                                   |
   | config_drive                         |                                                                |
   | created                              | 2017-01-11T12:55:35Z                                           |
   | description                          | -                                                              |
   | flavor                               | m1.tiny (1)                                                    |
   | hostId                               |                                                                |
   | host_status                          |                                                                |
   | id                                   | 010a0a24-0453-4e73-ae8d-21c7275a9df5                           |
   | image                                | cirros-0.3.4-x86_64-uec (1da4303c-96bf-4714-a4dc-cbd5709eda29) |
   | key_name                             | -                                                              |
   | locked                               | False                                                          |
   | metadata                             | {}                                                             |
   | name                                 | instance2                                                      |
   | os-extended-volumes:volumes_attached | []                                                             |
   | progress                             | 0                                                              |
   | security_groups                      | default                                                        |
   | status                               | BUILD                                                          |
   | tags                                 | []                                                             |
   | tenant_id                            | ce444c8be6da447bb412db7d30cd7023                               |
   | updated                              | 2017-01-11T12:55:35Z                                           |
   | user_id                              | 66d7b31664a840939f7d3f2de5e717a9                               |
   +--------------------------------------+----------------------------------------------------------------+

Make sure instance1 is active in RegionOne.

.. code-block:: console

   $ nova --os-region-name=RegionOne list
   +--------------------------------------+-----------+--------+------------+-------------+-------------------------------------+
   | ID                                   | Name      | Status | Task State | Power State | Networks                            |
   +--------------------------------------+-----------+--------+------------+-------------+-------------------------------------+
   | 5fd0f616-1077-46df-bebd-b8b53d09663c | instance1 | ACTIVE | -          | Running     | net1=10.0.1.4; phy_net1=202.96.1.13 |
   +--------------------------------------+-----------+--------+------------+-------------+-------------------------------------+

Make sure instance2 is active in RegionTwo.

.. code-block:: console

   $ nova --os-region-name=RegionTwo list
   +--------------------------------------+-----------+--------+------------+-------------+------------------------------------+
   | ID                                   | Name      | Status | Task State | Power State | Networks                           |
   +--------------------------------------+-----------+--------+------------+-------------+------------------------------------+
   | 010a0a24-0453-4e73-ae8d-21c7275a9df5 | instance2 | ACTIVE | -          | Running     | phy_net2=202.96.2.5; net1=10.0.1.5 |
   +--------------------------------------+-----------+--------+------------+-------------+------------------------------------+

Now you can ping instance2's IP address 10.0.1.5 from instance1, or ping
instance1's IP address 10.0.1.4 from instance2.

Note: Not all images will bring up the second NIC automatically, so you may
need to ssh into instance1 or instance2, run "ifconfig -a" to check whether
all NICs are created, and bring up any that are down.
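
If the second NIC is down, a minimal sketch of bringing it up inside the
guest (assuming a CirrOS image, where the interface appears as eth1 and the
BusyBox udhcpc client is available) is:

.. code-block:: console

   $ sudo ip link set eth1 up
   $ sudo udhcpc -i eth1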
================
Local Networking
================

The following figure illustrates a networking mode with no cross-Neutron
networking requirement; only networking inside one region is needed.

.. code-block:: console

   +-----------------+       +-----------------+
   | RegionOne       |       | RegionTwo       |
   |                 |       |                 |
   |    ext-net1     |       |    ext-net2     |
   |  +-----+-----+  |       |  +-----+-----+  |
   |        |        |       |        |        |
   |        |        |       |        |        |
   |     +--+--+     |       |     +--+--+     |
   |     |     |     |       |     |     |     |
   |     | R1  |     |       |     | R2  |     |
   |     |     |     |       |     |     |     |
   |     +--+--+     |       |     +--+--+     |
   |        |        |       |        |        |
   |        |        |       |        |        |
   |    +---+-+-+    |       |    +---+-+-+    |
   |    net1  |      |       |    net2  |      |
   |          |      |       |          |      |
   |  +-------+---+  |       |  +-------+---+  |
   |  | instance1 |  |       |  | instance2 |  |
   |  +-----------+  |       |  +-----------+  |
   +-----------------+       +-----------------+

How to create this network topology
===================================

Create external network ext-net1, which will be located in RegionOne. You
need to specify the region name as the value of availability-zone-hint. If
availability-zone-hint is not provided, the external network will be created
in a default region.

.. code-block:: console

   $ neutron --os-region-name=CentralRegion net-create --provider:network_type vlan --provider:physical_network extern --router:external --availability-zone-hint RegionOne ext-net1
   +---------------------------+--------------------------------------+
   | Field                     | Value                                |
   +---------------------------+--------------------------------------+
   | admin_state_up            | True                                 |
   | availability_zone_hints   | RegionOne                            |
   | id                        | a3a23b20-b0c1-461a-bc00-3db04ce212ca |
   | name                      | ext-net1                             |
   | project_id                | c0e194dfadd44fc1983fd6dd7c8ed384     |
   | provider:network_type     | vlan                                 |
   | provider:physical_network | extern                               |
   | provider:segmentation_id  | 170                                  |
   | router:external           | True                                 |
   | shared                    | False                                |
   | status                    | ACTIVE                               |
   | subnets                   |                                      |
   | tenant_id                 | c0e194dfadd44fc1983fd6dd7c8ed384     |
   +---------------------------+--------------------------------------+

A flat type external network can be created as well.

.. code-block:: console

   $ neutron --os-region-name=CentralRegion net-create --provider:network_type flat --provider:physical_network extern --router:external --availability-zone-hint RegionOne ext-net1
   +---------------------------+--------------------------------------+
   | Field                     | Value                                |
   +---------------------------+--------------------------------------+
   | admin_state_up            | True                                 |
   | availability_zone_hints   | RegionOne                            |
   | id                        | df2c8e3a-3f25-4cba-a902-33289f3a8aee |
   | name                      | ext-net1                             |
   | project_id                | c0e194dfadd44fc1983fd6dd7c8ed384     |
   | provider:network_type     | flat                                 |
   | provider:physical_network | extern                               |
   | provider:segmentation_id  |                                      |
   | router:external           | True                                 |
   | shared                    | False                                |
   | status                    | ACTIVE                               |
   | subnets                   |                                      |
   | tenant_id                 | c0e194dfadd44fc1983fd6dd7c8ed384     |
   +---------------------------+--------------------------------------+

The external network is also created in the region specified in
availability-zone-hint.

.. code-block:: console

   $ neutron --os-region-name=CentralRegion net-list
   +--------------------------------------+----------+---------+
   | id                                   | name     | subnets |
   +--------------------------------------+----------+---------+
   | a3a23b20-b0c1-461a-bc00-3db04ce212ca | ext-net1 |         |
   +--------------------------------------+----------+---------+

   $ neutron --os-region-name=RegionOne net-list
   +--------------------------------------+--------------------------------------+---------+
   | id                                   | name                                 | subnets |
   +--------------------------------------+--------------------------------------+---------+
   | a3a23b20-b0c1-461a-bc00-3db04ce212ca | a3a23b20-b0c1-461a-bc00-3db04ce212ca |         |
   +--------------------------------------+--------------------------------------+---------+

Create a subnet in ext-net1.

.. code-block:: console

   $ neutron --os-region-name=CentralRegion subnet-create --name ext-subnet1 --disable-dhcp ext-net1 163.3.124.0/24
   +-------------------+--------------------------------------------------+
   | Field             | Value                                            |
   +-------------------+--------------------------------------------------+
   | allocation_pools  | {"start": "163.3.124.2", "end": "163.3.124.254"} |
   | cidr              | 163.3.124.0/24                                   |
   | created_at        | 2017-01-10T04:49:16Z                             |
   | description       |                                                  |
   | dns_nameservers   |                                                  |
   | enable_dhcp       | False                                            |
   | gateway_ip        | 163.3.124.1                                      |
   | host_routes       |                                                  |
   | id                | 055ec17a-5b64-4cff-878c-c898427aabe3             |
   | ip_version        | 4                                                |
   | ipv6_address_mode |                                                  |
   | ipv6_ra_mode      |                                                  |
   | name              | ext-subnet1                                      |
   | network_id        | a3a23b20-b0c1-461a-bc00-3db04ce212ca             |
   | project_id        | c0e194dfadd44fc1983fd6dd7c8ed384                 |
   | revision_number   | 2                                                |
   | subnetpool_id     |                                                  |
   | tenant_id         | c0e194dfadd44fc1983fd6dd7c8ed384                 |
   | updated_at        | 2017-01-10T04:49:16Z                             |
   +-------------------+--------------------------------------------------+

Create local router R1 in RegionOne.

.. code-block:: console

   $ neutron --os-region-name=CentralRegion router-create --availability-zone-hint RegionOne R1
   +-------------------------+--------------------------------------+
   | Field                   | Value                                |
   +-------------------------+--------------------------------------+
   | admin_state_up          | True                                 |
   | availability_zone_hints | RegionOne                            |
   | availability_zones      |                                      |
   | created_at              | 2017-01-10T04:50:06Z                 |
   | description             |                                      |
   | external_gateway_info   |                                      |
   | id                      | 7ce3282f-3864-4c55-84bf-fc5edc3293cb |
   | name                    | R1                                   |
   | project_id              | c0e194dfadd44fc1983fd6dd7c8ed384     |
   | revision_number         | 1                                    |
   | status                  | ACTIVE                               |
   | tenant_id               | c0e194dfadd44fc1983fd6dd7c8ed384     |
   | updated_at              | 2017-01-10T04:50:06Z                 |
   +-------------------------+--------------------------------------+

Set the router gateway to ext-net1 for R1.

.. code-block:: console

   $ neutron --os-region-name=CentralRegion router-gateway-set R1 ext-net1
   Set gateway for router R1

   $ neutron --os-region-name=CentralRegion router-show R1
   +-----------------------+------------------------------------------------------------------------------------------------------------+
   | Field                 | Value                                                                                                      |
   +-----------------------+------------------------------------------------------------------------------------------------------------+
   | admin_state_up        | True                                                                                                       |
   | created_at            | 2017-01-10T04:50:06Z                                                                                       |
   | description           |                                                                                                            |
   | external_gateway_info | {"network_id": "a3a23b20-b0c1-461a-bc00-3db04ce212ca", "external_fixed_ips": [{"subnet_id": "055ec17a-5b64 |
   |                       | -4cff-878c-c898427aabe3", "ip_address": "163.3.124.5"}]}                                                   |
   | id                    | 7ce3282f-3864-4c55-84bf-fc5edc3293cb                                                                       |
   | name                  | R1                                                                                                         |
   | project_id            | c0e194dfadd44fc1983fd6dd7c8ed384                                                                           |
   | revision_number       | 3                                                                                                          |
   | status                | ACTIVE                                                                                                     |
   | tenant_id             | c0e194dfadd44fc1983fd6dd7c8ed384                                                                           |
   | updated_at            | 2017-01-10T04:51:19Z                                                                                       |
   +-----------------------+------------------------------------------------------------------------------------------------------------+

Create local network net1 in RegionOne.

.. code-block:: console

   $ neutron --os-region-name=CentralRegion net-create --availability-zone-hint RegionOne net1
   +---------------------------+--------------------------------------+
   | Field                     | Value                                |
   +---------------------------+--------------------------------------+
   | admin_state_up            | True                                 |
   | availability_zone_hints   | RegionOne                            |
   | id                        | beaf59eb-c597-4b69-bd41-8bf9fee2dc6a |
   | name                      | net1                                 |
   | project_id                | c0e194dfadd44fc1983fd6dd7c8ed384     |
   | provider:network_type     | local                                |
   | provider:physical_network |                                      |
   | provider:segmentation_id  |                                      |
   | router:external           | False                                |
   | shared                    | False                                |
   | status                    | ACTIVE                               |
   | subnets                   |                                      |
   | tenant_id                 | c0e194dfadd44fc1983fd6dd7c8ed384     |
   +---------------------------+--------------------------------------+

Create a subnet in net1.

.. code-block:: console

   $ neutron --os-region-name=CentralRegion subnet-create net1 10.0.1.0/24
   +-------------------+--------------------------------------------+
   | Field             | Value                                      |
   +-------------------+--------------------------------------------+
   | allocation_pools  | {"start": "10.0.1.2", "end": "10.0.1.254"} |
   | cidr              | 10.0.1.0/24                                |
   | created_at        | 2017-01-10T04:54:29Z                       |
   | description       |                                            |
   | dns_nameservers   |                                            |
   | enable_dhcp       | True                                       |
   | gateway_ip        | 10.0.1.1                                   |
   | host_routes       |                                            |
   | id                | ab812ed5-1a4c-4b12-859c-6c9b3df21642       |
   | ip_version        | 4                                          |
   | ipv6_address_mode |                                            |
   | ipv6_ra_mode      |                                            |
   | name              |                                            |
   | network_id        | beaf59eb-c597-4b69-bd41-8bf9fee2dc6a       |
   | project_id        | c0e194dfadd44fc1983fd6dd7c8ed384           |
   | revision_number   | 2                                          |
   | subnetpool_id     |                                            |
   | tenant_id         | c0e194dfadd44fc1983fd6dd7c8ed384           |
   | updated_at        | 2017-01-10T04:54:29Z                       |
   +-------------------+--------------------------------------------+

Add this subnet to router R1.

.. code-block:: console

   $ neutron --os-region-name=CentralRegion router-interface-add R1 ab812ed5-1a4c-4b12-859c-6c9b3df21642
   Added interface 2b7eceaf-8333-49cd-a7fe-aa101d5c9598 to router R1.

List the available images in RegionOne.

.. code-block:: console

   $ glance --os-region-name=RegionOne image-list
   +--------------------------------------+---------------------------------+
   | ID                                   | Name                            |
   +--------------------------------------+---------------------------------+
   | 2f73b93e-8b8a-4e07-8732-87f968852d82 | cirros-0.3.4-x86_64-uec         |
   | 4040ca54-2ebc-4ccd-8a0d-4284f4713ef1 | cirros-0.3.4-x86_64-uec-kernel  |
   | 7e86341f-2d6e-4a2a-b01a-e334fa904cf0 | cirros-0.3.4-x86_64-uec-ramdisk |
   +--------------------------------------+---------------------------------+

List the available flavors in RegionOne.

.. code-block:: console

   $ nova --os-region-name=RegionOne flavor-list
   +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
   | ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
   +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
   | 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
   | 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
   | 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
   | 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
   | 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
   | c1 | cirros256 | 256       | 0    | 0         |      | 1     | 1.0         | True      |
   | d1 | ds512M    | 512       | 5    | 0         |      | 1     | 1.0         | True      |
   | d2 | ds1G      | 1024      | 10   | 0         |      | 1     | 1.0         | True      |
   | d3 | ds2G      | 2048      | 10   | 0         |      | 2     | 1.0         | True      |
   | d4 | ds4G      | 4096      | 20   | 0         |      | 4     | 1.0         | True      |
   +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+

Boot instance1 in RegionOne, and connect this instance to net1.

.. code-block:: console

   $ nova --os-region-name=RegionOne boot --flavor 1 --image 2f73b93e-8b8a-4e07-8732-87f968852d82 --nic net-id=beaf59eb-c597-4b69-bd41-8bf9fee2dc6a instance1
   +--------------------------------------+----------------------------------------------------------------+
   | Property                             | Value                                                          |
   +--------------------------------------+----------------------------------------------------------------+
   | OS-DCF:diskConfig                    | MANUAL                                                         |
   | OS-EXT-AZ:availability_zone          |                                                                |
   | OS-EXT-SRV-ATTR:host                 | -                                                              |
   | OS-EXT-SRV-ATTR:hostname             | instance1                                                      |
   | OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                              |
   | OS-EXT-SRV-ATTR:instance_name        |                                                                |
   | OS-EXT-SRV-ATTR:kernel_id            | 4040ca54-2ebc-4ccd-8a0d-4284f4713ef1                           |
   | OS-EXT-SRV-ATTR:launch_index         | 0                                                              |
   | OS-EXT-SRV-ATTR:ramdisk_id           | 7e86341f-2d6e-4a2a-b01a-e334fa904cf0                           |
   | OS-EXT-SRV-ATTR:reservation_id       | r-5t409rww                                                     |
   | OS-EXT-SRV-ATTR:root_device_name     | -                                                              |
   | OS-EXT-SRV-ATTR:user_data            | -                                                              |
   | OS-EXT-STS:power_state               | 0                                                              |
   | OS-EXT-STS:task_state                | scheduling                                                     |
   | OS-EXT-STS:vm_state                  | building                                                       |
   | OS-SRV-USG:launched_at               | -                                                              |
   | OS-SRV-USG:terminated_at             | -                                                              |
   | accessIPv4                           |                                                                |
   | accessIPv6                           |                                                                |
   | adminPass                            | 23DipTvrpCvn                                                   |
   | config_drive                         |                                                                |
   | created                              | 2017-01-10T04:59:25Z                                           |
   | description                          | -                                                              |
   | flavor                               | m1.tiny (1)                                                    |
   | hostId                               |                                                                |
   | host_status                          |                                                                |
   | id                                   | 301546be-b675-49eb-b6c2-c5c986235ecb                           |
   | image                                | cirros-0.3.4-x86_64-uec (2f73b93e-8b8a-4e07-8732-87f968852d82) |
   | key_name                             | -                                                              |
   | locked                               | False                                                          |
   | metadata                             | {}                                                             |
   | name                                 | instance1                                                      |
   | os-extended-volumes:volumes_attached | []                                                             |
   | progress                             | 0                                                              |
   | security_groups                      | default                                                        |
   | status                               | BUILD                                                          |
   | tags                                 | []                                                             |
   | tenant_id                            | c0e194dfadd44fc1983fd6dd7c8ed384                               |
   | updated                              | 2017-01-10T04:59:26Z                                           |
   | user_id                              | a7b7420bd76c48c2bb5cb97c16bb165d                               |
   +--------------------------------------+----------------------------------------------------------------+

Make sure instance1 is active in RegionOne.

.. code-block:: console

   $ nova --os-region-name=RegionOne list
   +--------------------------------------+-----------+--------+------------+-------------+---------------+
   | ID                                   | Name      | Status | Task State | Power State | Networks      |
   +--------------------------------------+-----------+--------+------------+-------------+---------------+
   | 301546be-b675-49eb-b6c2-c5c986235ecb | instance1 | ACTIVE | -          | Running     | net1=10.0.1.4 |
   +--------------------------------------+-----------+--------+------------+-------------+---------------+

Verify that the corresponding networking resources have been provisioned in
RegionOne.

.. code-block:: console

   $ neutron --os-region-name=RegionOne router-list
   +------------------------------------+------------------------------------+------------------------------------+-------------+-------+
   | id                                 | name                               | external_gateway_info              | distributed | ha    |
   +------------------------------------+------------------------------------+------------------------------------+-------------+-------+
   | d6cd0978-f3cc-4a0b-b45b-           | 7ce3282f-3864-4c55-84bf-           | {"network_id": "a3a23b20-b0c1      | False       | False |
   | a427ebc51382                       | fc5edc3293cb                       | -461a-bc00-3db04ce212ca",          |             |       |
   |                                    |                                    | "enable_snat": true,               |             |       |
   |                                    |                                    | "external_fixed_ips":              |             |       |
   |                                    |                                    | [{"subnet_id": "055ec17a-5b64      |             |       |
   |                                    |                                    | -4cff-878c-c898427aabe3",          |             |       |
   |                                    |                                    | "ip_address": "163.3.124.5"}]}     |             |       |
   +------------------------------------+------------------------------------+------------------------------------+-------------+-------+

Create a floating IP for instance1.

.. code-block:: console

   $ neutron --os-region-name=CentralRegion floatingip-create ext-net1
   +---------------------+--------------------------------------+
   | Field               | Value                                |
   +---------------------+--------------------------------------+
   | created_at          | 2017-01-10T05:17:48Z                 |
   | description         |                                      |
   | fixed_ip_address    |                                      |
   | floating_ip_address | 163.3.124.7                          |
   | floating_network_id | a3a23b20-b0c1-461a-bc00-3db04ce212ca |
   | id                  | 0c031c3f-93ba-49bf-9c98-03bf4b0c7b2b |
   | port_id             |                                      |
   | project_id          | c0e194dfadd44fc1983fd6dd7c8ed384     |
   | revision_number     | 1                                    |
   | router_id           |                                      |
   | status              | DOWN                                 |
   | tenant_id           | c0e194dfadd44fc1983fd6dd7c8ed384     |
   | updated_at          | 2017-01-10T05:17:48Z                 |
   +---------------------+--------------------------------------+

List the port in net1 for instance1.

.. code-block:: console

   $ neutron --os-region-name=CentralRegion port-list
   +------------------------------------+------------------------------------+-------------------+--------------------------------------+
   | id                                 | name                               | mac_address       | fixed_ips                            |
   +------------------------------------+------------------------------------+-------------------+--------------------------------------+
   | 0b55c3b3-ae5f-4d03-899b-           |                                    | fa:16:3e:b5:1d:95 | {"subnet_id": "ab812ed5-1a4c-4b12    |
   | f056d967942e                       |                                    |                   | -859c-6c9b3df21642", "ip_address":   |
   |                                    |                                    |                   | "10.0.1.4"}                          |
   | 2b7eceaf-8333-49cd-a7fe-           |                                    | fa:16:3e:59:b3:ef | {"subnet_id": "ab812ed5-1a4c-4b12    |
   | aa101d5c9598                       |                                    |                   | -859c-6c9b3df21642", "ip_address":   |
   |                                    |                                    |                   | "10.0.1.1"}                          |
   | 572ad59f-                          | dhcp_port_ab812ed5-1a4c-4b12-859c- | fa:16:3e:56:7f:2b | {"subnet_id": "ab812ed5-1a4c-4b12    |
   | 5a15-4662-9fb8-f92a49389b28        | 6c9b3df21642                       |                   | -859c-6c9b3df21642", "ip_address":   |
   |                                    |                                    |                   | "10.0.1.2"}                          |
   | bf398883-c435-4cb2-8693-017a790825 | interface_RegionOne_ab812ed5-1a4c- | fa:16:3e:15:ef:1f | {"subnet_id": "ab812ed5-1a4c-4b12    |
   | 9e                                 | 4b12-859c-6c9b3df21642             |                   | -859c-6c9b3df21642", "ip_address":   |
   |                                    |                                    |                   | "10.0.1.7"}                          |
   | 452b8ebf-                          |                                    | fa:16:3e:1f:59:b2 | {"subnet_id": "055ec17a-5b64-4cff-   |
   | c9c6-4990-9048-644a3a6fde1a        |                                    |                   | 878c-c898427aabe3", "ip_address":    |
   |                                    |                                    |                   | "163.3.124.5"}                       |
   | 8e77c6ab-2884-4779-91e2-c3a4975fdf |                                    | fa:16:3e:3c:88:7d | {"subnet_id": "055ec17a-5b64-4cff-   |
   | 50                                 |                                    |                   | 878c-c898427aabe3", "ip_address":    |
   |                                    |                                    |                   | "163.3.124.7"}                       |
   +------------------------------------+------------------------------------+-------------------+--------------------------------------+

Associate the floating IP to instance1's IP address in net1.

.. code-block:: console

   $ neutron --os-region-name=CentralRegion floatingip-associate 0c031c3f-93ba-49bf-9c98-03bf4b0c7b2b 0b55c3b3-ae5f-4d03-899b-f056d967942e
   Associated floating IP 0c031c3f-93ba-49bf-9c98-03bf4b0c7b2b

Verify the floating IP is associated in RegionOne too.

.. code-block:: console

   $ neutron --os-region-name=RegionOne floatingip-list
   +--------------------------------------+------------------+---------------------+--------------------------------------+
   | id                                   | fixed_ip_address | floating_ip_address | port_id                              |
   +--------------------------------------+------------------+---------------------+--------------------------------------+
   | b28baa80-d798-43e7-baff-e65873bd1ec2 | 10.0.1.4         | 163.3.124.7         | 0b55c3b3-ae5f-4d03-899b-f056d967942e |
   +--------------------------------------+------------------+---------------------+--------------------------------------+

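With the floating IP associated, north-south connectivity can be checked from
any host that can reach ext-net1; a minimal sketch (163.3.124.7 is the
floating IP allocated above, and the cirros login is the CirrOS image
default):

.. code-block:: console

   $ ping -c 3 163.3.124.7
   $ ssh cirros@163.3.124.7
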
You can create the topology in RegionTwo in the same way as in RegionOne.

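The RegionTwo topology mirrors RegionOne; a sketch of the corresponding
commands follows (ext-net2, R2 and net2 match the figure, while the CIDRs
163.3.125.0/24 and 10.0.2.0/24 and the <net2-subnet-id> placeholder are
assumptions for illustration):

.. code-block:: console

   $ neutron --os-region-name=CentralRegion net-create --provider:network_type vlan --provider:physical_network extern --router:external --availability-zone-hint RegionTwo ext-net2
   $ neutron --os-region-name=CentralRegion subnet-create --name ext-subnet2 --disable-dhcp ext-net2 163.3.125.0/24
   $ neutron --os-region-name=CentralRegion router-create --availability-zone-hint RegionTwo R2
   $ neutron --os-region-name=CentralRegion router-gateway-set R2 ext-net2
   $ neutron --os-region-name=CentralRegion net-create --availability-zone-hint RegionTwo net2
   $ neutron --os-region-name=CentralRegion subnet-create net2 10.0.2.0/24
   $ neutron --os-region-name=CentralRegion router-interface-add R2 <net2-subnet-id>
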
=====================================================
North South Networking via Multiple External Networks
=====================================================

The following figure illustrates a typical networking mode. Instances have
two interfaces: one is connected to net3 for heartbeat or data replication,
and the other is connected to net1 or net2 to provide service. Each region
has a different external network to support service redundancy in case of a
region-level failure.

.. code-block:: console

   +-----------------+       +-----------------+
   | RegionOne       |       | RegionTwo       |
   |                 |       |                 |
   |    ext_net1     |       |    ext_net2     |
   |  +-----+-----+  |       |  +-----+-----+  |
   |        |        |       |        |        |
   |        |        |       |        |        |
   |     +--+--+     |       |     +--+--+     |
   |     |     |     |       |     |     |     |
   |     | R1  |     |       |     | R2  |     |
   |     |     |     |       |     |     |     |
   |     +--+--+     |       |     +--+--+     |
   |        |        |       |        |        |
   |        |        |       |        |        |
   |    +---+-+-+    |       |    +---+-+-+    |
   |    net1  |      |       |    net2  |      |
   |          |      |       |          |      |
   |  +-------+---+  |       |  +-------+---+  |
   |  | Instance1 |  |       |  | Instance2 |  |
   |  +-----+-----+  |       |  +-----+-----+  |
   |        |        |       |        |        |
   |        |        | net3  |        |        |
   |    +---+--------+-------+--------+---+    |
   |                 |       |                 |
   +-----------------+       +-----------------+

How to create this network topology
===================================

Create external network ext-net1, which will be located in RegionOne.

.. code-block:: console

   $ neutron --os-region-name=CentralRegion net-create --provider:network_type vlan --provider:physical_network extern --router:external --availability-zone-hint RegionOne ext-net1
   +---------------------------+--------------------------------------+
   | Field                     | Value                                |
   +---------------------------+--------------------------------------+
   | admin_state_up            | True                                 |
   | availability_zone_hints   | RegionOne                            |
   | id                        | 9b3d04be-0c00-40ed-88ff-088da6fcd8bd |
   | name                      | ext-net1                             |
   | project_id                | 532890c765604609a8d2ef6fc8e5f6ef     |
   | provider:network_type     | vlan                                 |
   | provider:physical_network | extern                               |
   | provider:segmentation_id  | 170                                  |
   | router:external           | True                                 |
   | shared                    | False                                |
   | status                    | ACTIVE                               |
   | subnets                   |                                      |
   | tenant_id                 | 532890c765604609a8d2ef6fc8e5f6ef     |
   +---------------------------+--------------------------------------+

A flat type external network can be created as well.

.. code-block:: console

   $ neutron --os-region-name=CentralRegion net-create --provider:network_type flat --provider:physical_network extern --router:external --availability-zone-hint RegionOne ext-net1
   +---------------------------+--------------------------------------+
   | Field                     | Value                                |
   +---------------------------+--------------------------------------+
   | admin_state_up            | True                                 |
   | availability_zone_hints   | RegionOne                            |
   | id                        | 17d969a5-efe3-407f-9657-61658a4a5193 |
   | name                      | ext-net1                             |
   | project_id                | 532890c765604609a8d2ef6fc8e5f6ef     |
   | provider:network_type     | flat                                 |
   | provider:physical_network | extern                               |
   | provider:segmentation_id  |                                      |
   | router:external           | True                                 |
   | shared                    | False                                |
   | status                    | ACTIVE                               |
   | subnets                   |                                      |
   | tenant_id                 | 532890c765604609a8d2ef6fc8e5f6ef     |
   +---------------------------+--------------------------------------+

Create a subnet in ext-net1.

.. code-block:: console

   $ neutron --os-region-name=CentralRegion subnet-create --name ext-subnet1 --disable-dhcp ext-net1 163.3.124.0/24
   +-------------------+--------------------------------------------------+
   | Field             | Value                                            |
   +-------------------+--------------------------------------------------+
   | allocation_pools  | {"start": "163.3.124.2", "end": "163.3.124.254"} |
   | cidr              | 163.3.124.0/24                                   |
   | created_at        | 2017-01-12T07:03:45Z                             |
   | description       |                                                  |
   | dns_nameservers   |                                                  |
   | enable_dhcp       | False                                            |
   | gateway_ip        | 163.3.124.1                                      |
   | host_routes       |                                                  |
   | id                | a2eecc16-deb8-42a6-a41b-5058847ed20a             |
   | ip_version        | 4                                                |
   | ipv6_address_mode |                                                  |
   | ipv6_ra_mode      |                                                  |
   | name              | ext-subnet1                                      |
   | network_id        | 9b3d04be-0c00-40ed-88ff-088da6fcd8bd             |
   | project_id        | 532890c765604609a8d2ef6fc8e5f6ef                 |
   | revision_number   | 2                                                |
   | subnetpool_id     |                                                  |
   | tenant_id         | 532890c765604609a8d2ef6fc8e5f6ef                 |
   | updated_at        | 2017-01-12T07:03:45Z                             |
   +-------------------+--------------------------------------------------+

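The ``gateway_ip`` and ``allocation_pools`` values above are Neutron defaults: the gateway takes the first host address of the CIDR and the pool covers the rest of the usable range. For a /24 prefix the boundaries can be sanity-checked with a line of shell (a rough sketch that only handles /24 subnets):

```shell
cidr="163.3.124.0/24"
# Strip the trailing ".0/24" to get the first three octets.
base="${cidr%.*}"
# Neutron default for a /24: .1 is the gateway, .2-.254 is the pool.
echo "gateway=${base}.1 pool=${base}.2-${base}.254"
```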
Create router R1.

.. code-block:: console

   $ neutron --os-region-name=CentralRegion router-create --availability-zone-hint RegionOne R1
   +-------------------------+--------------------------------------+
   | Field                   | Value                                |
   +-------------------------+--------------------------------------+
   | admin_state_up          | True                                 |
   | availability_zone_hints | RegionOne                            |
   | availability_zones      |                                      |
   | created_at              | 2017-01-12T07:04:13Z                 |
   | description             |                                      |
   | external_gateway_info   |                                      |
   | id                      | 063de74b-d962-4fc2-96d9-87e2cb35c082 |
   | name                    | R1                                   |
   | project_id              | 532890c765604609a8d2ef6fc8e5f6ef     |
   | revision_number         | 1                                    |
   | status                  | ACTIVE                               |
   | tenant_id               | 532890c765604609a8d2ef6fc8e5f6ef     |
   | updated_at              | 2017-01-12T07:04:13Z                 |
   +-------------------------+--------------------------------------+

Set the router gateway to ext-net1 for R1.

.. code-block:: console

   $ neutron --os-region-name=CentralRegion router-gateway-set R1 ext-net1
   Set gateway for router R1

   $ neutron --os-region-name=CentralRegion router-show R1
   +-----------------------+------------------------------------------------------------------------------------------------------------+
   | Field                 | Value                                                                                                      |
   +-----------------------+------------------------------------------------------------------------------------------------------------+
   | admin_state_up        | True                                                                                                       |
   | created_at            | 2017-01-12T07:04:13Z                                                                                       |
   | description           |                                                                                                            |
   | external_gateway_info | {"network_id": "9b3d04be-0c00-40ed-88ff-088da6fcd8bd", "external_fixed_ips": [{"subnet_id":                |
   |                       | "a2eecc16-deb8-42a6-a41b-5058847ed20a", "ip_address": "163.3.124.5"}]}                                     |
   | id                    | 063de74b-d962-4fc2-96d9-87e2cb35c082                                                                       |
   | name                  | R1                                                                                                         |
   | project_id            | 532890c765604609a8d2ef6fc8e5f6ef                                                                           |
   | revision_number       | 3                                                                                                          |
   | status                | ACTIVE                                                                                                     |
   | tenant_id             | 532890c765604609a8d2ef6fc8e5f6ef                                                                           |
   | updated_at            | 2017-01-12T07:04:36Z                                                                                       |
   +-----------------------+------------------------------------------------------------------------------------------------------------+

Create the local network net1, which will reside in RegionOne, so use RegionOne
as the value of availability-zone-hint.

.. code-block:: console

   $ neutron --os-region-name=CentralRegion net-create --availability-zone-hint RegionOne net1
   +---------------------------+--------------------------------------+
   | Field                     | Value                                |
   +---------------------------+--------------------------------------+
   | admin_state_up            | True                                 |
   | availability_zone_hints   | RegionOne                            |
   | id                        | de4fda27-e4f7-4448-80f6-79ee5ea2478b |
   | name                      | net1                                 |
   | project_id                | 532890c765604609a8d2ef6fc8e5f6ef     |
   | provider:network_type     | local                                |
   | provider:physical_network |                                      |
   | provider:segmentation_id  |                                      |
   | router:external           | False                                |
   | shared                    | False                                |
   | status                    | ACTIVE                               |
   | subnets                   |                                      |
   | tenant_id                 | 532890c765604609a8d2ef6fc8e5f6ef     |
   +---------------------------+--------------------------------------+

Create a subnet in net1.

.. code-block:: console

   $ neutron --os-region-name=CentralRegion subnet-create net1 10.0.1.0/24
   +-------------------+--------------------------------------------+
   | Field             | Value                                      |
   +-------------------+--------------------------------------------+
   | allocation_pools  | {"start": "10.0.1.2", "end": "10.0.1.254"} |
   | cidr              | 10.0.1.0/24                                |
   | created_at        | 2017-01-12T07:05:57Z                       |
   | description       |                                            |
   | dns_nameservers   |                                            |
   | enable_dhcp       | True                                       |
   | gateway_ip        | 10.0.1.1                                   |
   | host_routes       |                                            |
   | id                | 2c8f446f-ba02-4140-a793-913033aa3580       |
   | ip_version        | 4                                          |
   | ipv6_address_mode |                                            |
   | ipv6_ra_mode      |                                            |
   | name              |                                            |
   | network_id        | de4fda27-e4f7-4448-80f6-79ee5ea2478b       |
   | project_id        | 532890c765604609a8d2ef6fc8e5f6ef           |
   | revision_number   | 2                                          |
   | subnetpool_id     |                                            |
   | tenant_id         | 532890c765604609a8d2ef6fc8e5f6ef           |
   | updated_at        | 2017-01-12T07:05:57Z                       |
   +-------------------+--------------------------------------------+

Add this subnet to router R1.

.. code-block:: console

   $ neutron --os-region-name=CentralRegion router-interface-add R1 2c8f446f-ba02-4140-a793-913033aa3580
   Added interface d48a8e87-61a0-494b-bc06-54f7a008ea78 to router R1.

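Steps like router-interface-add need IDs copied out of earlier output tables. If you script this workflow, a small awk filter (an illustrative helper, not part of Tricircle) can pull a field out of neutron-style table output; the sample row below stands in for real command output:

```shell
# Extract the value of the "id" field from a neutron-style output table row.
# The sample line is copied from the subnet-create output above.
table_row='| id                | 2c8f446f-ba02-4140-a793-913033aa3580       |'
subnet_id=$(echo "$table_row" | awk -F'|' '$2 ~ /^ *id *$/ {gsub(/ /, "", $3); print $3}')
echo "$subnet_id"
```

Depending on the client version, ``-f value -c id`` may also be available to print a single field directly, which avoids parsing tables at all.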
Create net3, which will work as the L2 network across RegionOne and RegionTwo.

.. code-block:: console

   If net3 is a VLAN based cross-Neutron L2 network
   $ neutron --os-region-name=CentralRegion net-create --provider:network_type vlan --provider:physical_network bridge --availability-zone-hint az1 --availability-zone-hint az2 net3

   +---------------------------+--------------------------------------+
   | Field                     | Value                                |
   +---------------------------+--------------------------------------+
   | admin_state_up            | True                                 |
   | availability_zone_hints   | az1                                  |
   |                           | az2                                  |
   | id                        | 68d04c60-469d-495d-bb23-0d36d56235bd |
   | name                      | net3                                 |
   | project_id                | 532890c765604609a8d2ef6fc8e5f6ef     |
   | provider:network_type     | vlan                                 |
   | provider:physical_network | bridge                               |
   | provider:segmentation_id  | 138                                  |
   | router:external           | False                                |
   | shared                    | False                                |
   | status                    | ACTIVE                               |
   | subnets                   |                                      |
   | tenant_id                 | 532890c765604609a8d2ef6fc8e5f6ef     |
   +---------------------------+--------------------------------------+

   If net3 is a VXLAN based cross-Neutron L2 network
   $ neutron --os-region-name=CentralRegion net-create --provider:network_type vxlan --availability-zone-hint az1 --availability-zone-hint az2 net3

   +---------------------------+--------------------------------------+
   | Field                     | Value                                |
   +---------------------------+--------------------------------------+
   | admin_state_up            | True                                 |
   | availability_zone_hints   | az1                                  |
   |                           | az2                                  |
   | id                        | 0f171049-0c15-4d1b-95cd-ede8dc554b44 |
   | name                      | net3                                 |
   | project_id                | 532890c765604609a8d2ef6fc8e5f6ef     |
   | provider:network_type     | vxlan                                |
   | provider:physical_network |                                      |
   | provider:segmentation_id  | 1031                                 |
   | router:external           | False                                |
   | shared                    | False                                |
   | status                    | ACTIVE                               |
   | subnets                   |                                      |
   | tenant_id                 | 532890c765604609a8d2ef6fc8e5f6ef     |
   +---------------------------+--------------------------------------+

Create a subnet in net3.

.. code-block:: console

   $ neutron --os-region-name=CentralRegion subnet-create net3 10.0.3.0/24
   +-------------------+--------------------------------------------+
   | Field             | Value                                      |
   +-------------------+--------------------------------------------+
   | allocation_pools  | {"start": "10.0.3.2", "end": "10.0.3.254"} |
   | cidr              | 10.0.3.0/24                                |
   | created_at        | 2017-01-12T07:07:42Z                       |
   | description       |                                            |
   | dns_nameservers   |                                            |
   | enable_dhcp       | True                                       |
   | gateway_ip        | 10.0.3.1                                   |
   | host_routes       |                                            |
   | id                | 5ab92c3c-b799-451c-b5d5-b72274fb0fcc       |
   | ip_version        | 4                                          |
   | ipv6_address_mode |                                            |
   | ipv6_ra_mode      |                                            |
   | name              |                                            |
   | network_id        | 68d04c60-469d-495d-bb23-0d36d56235bd       |
   | project_id        | 532890c765604609a8d2ef6fc8e5f6ef           |
   | revision_number   | 2                                          |
   | subnetpool_id     |                                            |
   | tenant_id         | 532890c765604609a8d2ef6fc8e5f6ef           |
   | updated_at        | 2017-01-12T07:07:42Z                       |
   +-------------------+--------------------------------------------+

List the available images in RegionOne.

.. code-block:: console

   $ glance --os-region-name=RegionOne image-list
   +--------------------------------------+---------------------------------+
   | ID                                   | Name                            |
   +--------------------------------------+---------------------------------+
   | 8747fd6a-72aa-4075-b936-a24bc48ed57b | cirros-0.3.4-x86_64-uec         |
   | 3a54e6fd-d215-437b-9d67-eac840c97f9c | cirros-0.3.4-x86_64-uec-kernel  |
   | 02b06834-2a9f-4dad-8d59-2a77963af8a5 | cirros-0.3.4-x86_64-uec-ramdisk |
   +--------------------------------------+---------------------------------+

List the available flavors in RegionOne.

.. code-block:: console

   $ nova --os-region-name=RegionOne flavor-list
   +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
   | ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
   +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
   | 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
   | 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
   | 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
   | 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
   | 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
   | c1 | cirros256 | 256       | 0    | 0         |      | 1     | 1.0         | True      |
   | d1 | ds512M    | 512       | 5    | 0         |      | 1     | 1.0         | True      |
   | d2 | ds1G      | 1024      | 10   | 0         |      | 1     | 1.0         | True      |
   | d3 | ds2G      | 2048      | 10   | 0         |      | 2     | 1.0         | True      |
   | d4 | ds4G      | 4096      | 20   | 0         |      | 4     | 1.0         | True      |
   +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+

Boot instance1 in RegionOne, and connect this instance to net1 and net3.

.. code-block:: console

   $ nova --os-region-name=RegionOne boot --flavor 1 --image 8747fd6a-72aa-4075-b936-a24bc48ed57b --nic net-id=68d04c60-469d-495d-bb23-0d36d56235bd --nic net-id=de4fda27-e4f7-4448-80f6-79ee5ea2478b instance1
   +--------------------------------------+----------------------------------------------------------------+
   | Property                             | Value                                                          |
   +--------------------------------------+----------------------------------------------------------------+
   | OS-DCF:diskConfig                    | MANUAL                                                         |
   | OS-EXT-AZ:availability_zone          |                                                                |
   | OS-EXT-SRV-ATTR:host                 | -                                                              |
   | OS-EXT-SRV-ATTR:hostname             | instance1                                                      |
   | OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                              |
   | OS-EXT-SRV-ATTR:instance_name        |                                                                |
   | OS-EXT-SRV-ATTR:kernel_id            | 3a54e6fd-d215-437b-9d67-eac840c97f9c                           |
   | OS-EXT-SRV-ATTR:launch_index         | 0                                                              |
   | OS-EXT-SRV-ATTR:ramdisk_id           | 02b06834-2a9f-4dad-8d59-2a77963af8a5                           |
   | OS-EXT-SRV-ATTR:reservation_id       | r-9cnhvave                                                     |
   | OS-EXT-SRV-ATTR:root_device_name     | -                                                              |
   | OS-EXT-SRV-ATTR:user_data            | -                                                              |
   | OS-EXT-STS:power_state               | 0                                                              |
   | OS-EXT-STS:task_state                | scheduling                                                     |
   | OS-EXT-STS:vm_state                  | building                                                       |
   | OS-SRV-USG:launched_at               | -                                                              |
   | OS-SRV-USG:terminated_at             | -                                                              |
   | accessIPv4                           |                                                                |
   | accessIPv6                           |                                                                |
   | adminPass                            | zDFR3x8pDDKi                                                   |
   | config_drive                         |                                                                |
   | created                              | 2017-01-12T07:09:53Z                                           |
   | description                          | -                                                              |
   | flavor                               | m1.tiny (1)                                                    |
   | hostId                               |                                                                |
   | host_status                          |                                                                |
   | id                                   | 3d53560e-4e04-43a0-b774-cfa3deecbca4                           |
   | image                                | cirros-0.3.4-x86_64-uec (8747fd6a-72aa-4075-b936-a24bc48ed57b) |
   | key_name                             | -                                                              |
   | locked                               | False                                                          |
   | metadata                             | {}                                                             |
   | name                                 | instance1                                                      |
   | os-extended-volumes:volumes_attached | []                                                             |
   | progress                             | 0                                                              |
   | security_groups                      | default                                                        |
   | status                               | BUILD                                                          |
   | tags                                 | []                                                             |
   | tenant_id                            | 532890c765604609a8d2ef6fc8e5f6ef                               |
   | updated                              | 2017-01-12T07:09:54Z                                           |
   | user_id                              | d2521e53aa8c4916b3a8e444f20cf1da                               |
   +--------------------------------------+----------------------------------------------------------------+

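The server is reported in BUILD state right after boot, so scripts normally poll until it reaches ACTIVE before proceeding. A minimal sketch of such a loop follows; the ``server_status`` stub is an assumption for illustration and stands in for a real ``nova show`` call:

```shell
# Stub for demonstration; in a real script this would run something like:
#   nova --os-region-name=RegionOne show instance1 | awk '$2 == "status" {print $4}'
server_status() { echo "ACTIVE"; }

# Poll until the server leaves the BUILD state.
until [ "$(server_status)" = "ACTIVE" ]; do
    sleep 5
done
echo "instance1 is ACTIVE"
```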
Make sure instance1 is active in RegionOne.

.. code-block:: console

   $ nova --os-region-name=RegionOne list
   +--------------------------------------+-----------+--------+------------+-------------+-------------------------------+
   | ID                                   | Name      | Status | Task State | Power State | Networks                      |
   +--------------------------------------+-----------+--------+------------+-------------+-------------------------------+
   | 3d53560e-4e04-43a0-b774-cfa3deecbca4 | instance1 | ACTIVE | -          | Running     | net3=10.0.3.7; net1=10.0.1.13 |
   +--------------------------------------+-----------+--------+------------+-------------+-------------------------------+

Create a floating IP for instance1.

.. code-block:: console

   $ neutron --os-region-name=CentralRegion floatingip-create ext-net1
   +---------------------+--------------------------------------+
   | Field               | Value                                |
   +---------------------+--------------------------------------+
   | created_at          | 2017-01-12T07:12:50Z                 |
   | description         |                                      |
   | fixed_ip_address    |                                      |
   | floating_ip_address | 163.3.124.6                          |
   | floating_network_id | 9b3d04be-0c00-40ed-88ff-088da6fcd8bd |
   | id                  | 645f9cd6-d8d4-427a-88fe-770240c96d09 |
   | port_id             |                                      |
   | project_id          | 532890c765604609a8d2ef6fc8e5f6ef     |
   | revision_number     | 1                                    |
   | router_id           |                                      |
   | status              | DOWN                                 |
   | tenant_id           | 532890c765604609a8d2ef6fc8e5f6ef     |
   | updated_at          | 2017-01-12T07:12:50Z                 |
   +---------------------+--------------------------------------+

List the port in net1 for instance1.

.. code-block:: console

   $ neutron --os-region-name=CentralRegion port-list
   +------------------------------------+------------------------------------+-------------------+--------------------------------------+
   | id                                 | name                               | mac_address       | fixed_ips                            |
   +------------------------------------+------------------------------------+-------------------+--------------------------------------+
   | 185b5185-0254-486c-9d8b-           |                                    | fa:16:3e:da:ae:99 | {"subnet_id": "2c8f446f-             |
   | 198af4b4d40e                       |                                    |                   | ba02-4140-a793-913033aa3580",        |
   |                                    |                                    |                   | "ip_address": "10.0.1.13"}           |
   | 248f9072-76d6-405a-                |                                    | fa:16:3e:dc:2f:b3 | {"subnet_id": "5ab92c3c-b799-451c-   |
   | 8eb5-f0d3475c542d                  |                                    |                   | b5d5-b72274fb0fcc", "ip_address":    |
   |                                    |                                    |                   | "10.0.3.7"}                          |
   | d48a8e87-61a0-494b-                |                                    | fa:16:3e:c6:8e:c5 | {"subnet_id": "2c8f446f-             |
   | bc06-54f7a008ea78                  |                                    |                   | ba02-4140-a793-913033aa3580",        |
   |                                    |                                    |                   | "ip_address": "10.0.1.1"}            |
   | ce3a1530-20f4-4760-a451-81e5f939aa | dhcp_port_2c8f446f-                | fa:16:3e:e6:32:0f | {"subnet_id": "2c8f446f-             |
   | fc                                 | ba02-4140-a793-913033aa3580        |                   | ba02-4140-a793-913033aa3580",        |
   |                                    |                                    |                   | "ip_address": "10.0.1.2"}            |
   | 7925a3cc-                          | interface_RegionOne_2c8f446f-      | fa:16:3e:c5:ad:6f | {"subnet_id": "2c8f446f-             |
   | 6c36-4bc3-a798-a6145fed442a        | ba02-4140-a793-913033aa3580        |                   | ba02-4140-a793-913033aa3580",        |
   |                                    |                                    |                   | "ip_address": "10.0.1.3"}            |
   | 077c63b6-0184-4bf7-b3aa-           | dhcp_port_5ab92c3c-b799-451c-      | fa:16:3e:d2:a3:53 | {"subnet_id": "5ab92c3c-b799-451c-   |
   | b071de6f39be                       | b5d5-b72274fb0fcc                  |                   | b5d5-b72274fb0fcc", "ip_address":    |
   |                                    |                                    |                   | "10.0.3.2"}                          |
   | c90be7bc-                          | interface_RegionOne_5ab92c3c-b799  | fa:16:3e:b6:e4:bc | {"subnet_id": "5ab92c3c-b799-451c-   |
   | 31ea-4015-a432-2bef62e343d1        | -451c-b5d5-b72274fb0fcc            |                   | b5d5-b72274fb0fcc", "ip_address":    |
   |                                    |                                    |                   | "10.0.3.9"}                          |
   | 3053fcb9-b6ad-4a9c-b89e-           | bridge_port_532890c765604609a8d2ef | fa:16:3e:fc:d0:fc | {"subnet_id": "53def0ac-59ef-        |
   | ffe6aff6523b                       | 6fc8e5f6ef_0c4faa42-5230-4adc-     |                   | 4c7b-b694-3375598954da",             |
   |                                    | bab5-10ee53ebf888                  |                   | "ip_address": "100.0.0.11"}          |
   | ce787983-a140-4c53-96d2-71f62e1545 |                                    | fa:16:3e:1a:62:7f | {"subnet_id": "a2eecc16-deb8-42a6    |
   | 3a                                 |                                    |                   | -a41b-5058847ed20a", "ip_address":   |
   |                                    |                                    |                   | "163.3.124.5"}                       |
   | 2d9fc640-1858-4c7e-b42c-           |                                    | fa:16:3e:00:7c:6e | {"subnet_id": "a2eecc16-deb8-42a6    |
   | d3ed3f338b8a                       |                                    |                   | -a41b-5058847ed20a", "ip_address":   |
   |                                    |                                    |                   | "163.3.124.6"}                       |
   +------------------------------------+------------------------------------+-------------------+--------------------------------------+

Associate the floating IP to instance1's IP in net1.

.. code-block:: console

   $ neutron --os-region-name=CentralRegion floatingip-associate 645f9cd6-d8d4-427a-88fe-770240c96d09 185b5185-0254-486c-9d8b-198af4b4d40e
   Associated floating IP 645f9cd6-d8d4-427a-88fe-770240c96d09

Verify the floating IP was associated.

.. code-block:: console

   $ neutron --os-region-name=CentralRegion floatingip-list
   +--------------------------------------+------------------+---------------------+--------------------------------------+
   | id                                   | fixed_ip_address | floating_ip_address | port_id                              |
   +--------------------------------------+------------------+---------------------+--------------------------------------+
   | 645f9cd6-d8d4-427a-88fe-770240c96d09 | 10.0.1.13        | 163.3.124.6         | 185b5185-0254-486c-9d8b-198af4b4d40e |
   +--------------------------------------+------------------+---------------------+--------------------------------------+

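In a script, the association can be asserted by checking that the floating IP row now carries both the fixed IP and the port ID. The sample row below is copied from the floatingip-list output above; the grep pattern is just one way to express the check:

```shell
# Row copied from the floatingip-list output; an associated floating IP
# has both fixed_ip_address and port_id filled in.
row='| 645f9cd6-d8d4-427a-88fe-770240c96d09 | 10.0.1.13        | 163.3.124.6         | 185b5185-0254-486c-9d8b-198af4b4d40e |'
if echo "$row" | grep -q '10\.0\.1\.13.*185b5185'; then
    echo "floating IP associated"
fi
```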
You can also check this in RegionOne.

.. code-block:: console

   $ neutron --os-region-name=RegionOne floatingip-list
   +--------------------------------------+------------------+---------------------+--------------------------------------+
   | id                                   | fixed_ip_address | floating_ip_address | port_id                              |
   +--------------------------------------+------------------+---------------------+--------------------------------------+
   | d59362fa-aea0-4e35-917e-8e586212c867 | 10.0.1.13        | 163.3.124.6         | 185b5185-0254-486c-9d8b-198af4b4d40e |
   +--------------------------------------+------------------+---------------------+--------------------------------------+

   $ neutron --os-region-name=RegionOne router-list
   +------------------------------------+------------------------------------+------------------------------------+-------------+-------+
   | id                                 | name                               | external_gateway_info              | distributed | ha    |
   +------------------------------------+------------------------------------+------------------------------------+-------------+-------+
   | 0c4faa42-5230-4adc-                | 063de74b-d962-4fc2-96d9-87e2cb35c0 | {"network_id": "6932cd71-3cd4-4560 | False       | False |
   | bab5-10ee53ebf888                  | 82                                 | -88f3-2a112fff0cea",               |             |       |
   |                                    |                                    | "enable_snat": false,              |             |       |
   |                                    |                                    | "external_fixed_ips":              |             |       |
   |                                    |                                    | [{"subnet_id": "53def0ac-59ef-     |             |       |
   |                                    |                                    | 4c7b-b694-3375598954da",           |             |       |
   |                                    |                                    | "ip_address": "100.0.0.11"}]}      |             |       |
   | f99dcc0c-d94a-                     | ns_router_063de74b-d962-4fc2-96d9- | {"network_id": "9b3d04be-0c00      | False       | False |
   | 4b41-9236-2c0169f3ab7d             | 87e2cb35c082                       | -40ed-88ff-088da6fcd8bd",          |             |       |
   |                                    |                                    | "enable_snat": true,               |             |       |
   |                                    |                                    | "external_fixed_ips":              |             |       |
   |                                    |                                    | [{"subnet_id": "a2eecc16-deb8-42a6 |             |       |
   |                                    |                                    | -a41b-5058847ed20a", "ip_address": |             |       |
   |                                    |                                    | "163.3.124.5"}]}                   |             |       |
   +------------------------------------+------------------------------------+------------------------------------+-------------+-------+

Create the network topology in RegionTwo.

Create external network ext-net2, which will be located in RegionTwo.

.. code-block:: console

   $ neutron --os-region-name=CentralRegion net-create --provider:network_type vlan --provider:physical_network extern --router:external --availability-zone-hint RegionTwo ext-net2
   +---------------------------+--------------------------------------+
   | Field                     | Value                                |
   +---------------------------+--------------------------------------+
   | admin_state_up            | True                                 |
   | availability_zone_hints   | RegionTwo                            |
   | id                        | ae806ecb-fa3e-4b3c-a582-caef3d8cd9b4 |
   | name                      | ext-net2                             |
   | project_id                | 532890c765604609a8d2ef6fc8e5f6ef     |
   | provider:network_type     | vlan                                 |
   | provider:physical_network | extern                               |
   | provider:segmentation_id  | 183                                  |
   | router:external           | True                                 |
   | shared                    | False                                |
   | status                    | ACTIVE                               |
   | subnets                   |                                      |
   | tenant_id                 | 532890c765604609a8d2ef6fc8e5f6ef     |
   +---------------------------+--------------------------------------+

You can also create a flat type external network instead.

.. code-block:: console

   $ neutron --os-region-name=CentralRegion net-create --provider:network_type flat --provider:physical_network extern --router:external --availability-zone-hint RegionTwo ext-net2
   +---------------------------+--------------------------------------+
   | Field                     | Value                                |
   +---------------------------+--------------------------------------+
   | admin_state_up            | True                                 |
   | availability_zone_hints   | RegionTwo                            |
   | id                        | 0b6d43d1-a837-4f91-930e-dfcc74ef483b |
   | name                      | ext-net2                             |
   | project_id                | 532890c765604609a8d2ef6fc8e5f6ef     |
   | provider:network_type     | flat                                 |
   | provider:physical_network | extern                               |
   | provider:segmentation_id  |                                      |
   | router:external           | True                                 |
   | shared                    | False                                |
   | status                    | ACTIVE                               |
   | subnets                   |                                      |
   | tenant_id                 | 532890c765604609a8d2ef6fc8e5f6ef     |
   +---------------------------+--------------------------------------+

Create a subnet in ext-net2.

.. code-block:: console

   $ neutron --os-region-name=CentralRegion subnet-create --name ext-subnet2 --disable-dhcp ext-net2 163.3.125.0/24
   +-------------------+--------------------------------------------------+
   | Field             | Value                                            |
   +-------------------+--------------------------------------------------+
   | allocation_pools  | {"start": "163.3.125.2", "end": "163.3.125.254"} |
   | cidr              | 163.3.125.0/24                                   |
   | created_at        | 2017-01-12T07:43:04Z                             |
   | description       |                                                  |
   | dns_nameservers   |                                                  |
   | enable_dhcp       | False                                            |
   | gateway_ip        | 163.3.125.1                                      |
   | host_routes       |                                                  |
   | id                | 9fb32423-95a8-4589-b69c-e2955234ae56             |
   | ip_version        | 4                                                |
   | ipv6_address_mode |                                                  |
   | ipv6_ra_mode      |                                                  |
   | name              | ext-subnet2                                      |
   | network_id        | ae806ecb-fa3e-4b3c-a582-caef3d8cd9b4             |
   | project_id        | 532890c765604609a8d2ef6fc8e5f6ef                 |
   | revision_number   | 2                                                |
   | subnetpool_id     |                                                  |
   | tenant_id         | 532890c765604609a8d2ef6fc8e5f6ef                 |
   | updated_at        | 2017-01-12T07:43:04Z                             |
   +-------------------+--------------------------------------------------+

Create router R2, which will work in RegionTwo.

.. code-block:: console

   $ neutron --os-region-name=CentralRegion router-create --availability-zone-hint RegionTwo R2
   +-------------------------+--------------------------------------+
   | Field                   | Value                                |
   +-------------------------+--------------------------------------+
   | admin_state_up          | True                                 |
   | availability_zone_hints | RegionTwo                            |
   | availability_zones      |                                      |
   | created_at              | 2017-01-12T07:19:23Z                 |
   | description             |                                      |
   | external_gateway_info   |                                      |
   | id                      | 8a8571db-e3ba-4b78-98ca-13d4dc1a4fb0 |
   | name                    | R2                                   |
   | project_id              | 532890c765604609a8d2ef6fc8e5f6ef     |
   | revision_number         | 1                                    |
   | status                  | ACTIVE                               |
   | tenant_id               | 532890c765604609a8d2ef6fc8e5f6ef     |
   | updated_at              | 2017-01-12T07:19:23Z                 |
   +-------------------------+--------------------------------------+

Set the router gateway to ext-net2 for R2.

.. code-block:: console

   $ neutron --os-region-name=CentralRegion router-gateway-set R2 ext-net2
   Set gateway for router R2

Check router R2.

.. code-block:: console

   $ neutron --os-region-name=CentralRegion router-show R2
   +-----------------------+------------------------------------------------------------------------------------------------------------+
   | Field                 | Value                                                                                                      |
   +-----------------------+------------------------------------------------------------------------------------------------------------+
   | admin_state_up        | True                                                                                                       |
   | created_at            | 2017-01-12T07:19:23Z                                                                                       |
   | description           |                                                                                                            |
   | external_gateway_info | {"network_id": "ae806ecb-fa3e-4b3c-a582-caef3d8cd9b4", "external_fixed_ips": [{"subnet_id":                |
   |                       | "9fb32423-95a8-4589-b69c-e2955234ae56", "ip_address": "163.3.125.3"}]}                                     |
   | id                    | 8a8571db-e3ba-4b78-98ca-13d4dc1a4fb0                                                                       |
   | name                  | R2                                                                                                         |
   | project_id            | 532890c765604609a8d2ef6fc8e5f6ef                                                                           |
   | revision_number       | 7                                                                                                          |
   | status                | ACTIVE                                                                                                     |
   | tenant_id             | 532890c765604609a8d2ef6fc8e5f6ef                                                                           |
   | updated_at            | 2017-01-12T07:44:00Z                                                                                       |
   +-----------------------+------------------------------------------------------------------------------------------------------------+

Create net2 in RegionTwo.

.. code-block:: console

   $ neutron --os-region-name=CentralRegion net-create --availability-zone-hint RegionTwo net2
   +---------------------------+--------------------------------------+
   | Field                     | Value                                |
   +---------------------------+--------------------------------------+
   | admin_state_up            | True                                 |
   | availability_zone_hints   | RegionTwo                            |
   | id                        | 71b06c5d-2eb8-4ef4-a978-c5c98874811b |
   | name                      | net2                                 |
   | project_id                | 532890c765604609a8d2ef6fc8e5f6ef     |
   | provider:network_type     | local                                |
   | provider:physical_network |                                      |
   | provider:segmentation_id  |                                      |
   | router:external           | False                                |
   | shared                    | False                                |
   | status                    | ACTIVE                               |
   | subnets                   |                                      |
   | tenant_id                 | 532890c765604609a8d2ef6fc8e5f6ef     |
   +---------------------------+--------------------------------------+

Create a subnet in net2.

.. code-block:: console

   $ neutron --os-region-name=CentralRegion subnet-create net2 10.0.2.0/24
   +-------------------+--------------------------------------------+
   | Field             | Value                                      |
   +-------------------+--------------------------------------------+
   | allocation_pools  | {"start": "10.0.2.2", "end": "10.0.2.254"} |
   | cidr              | 10.0.2.0/24                                |
   | created_at        | 2017-01-12T07:45:55Z                       |
   | description       |                                            |
   | dns_nameservers   |                                            |
   | enable_dhcp       | True                                       |
   | gateway_ip        | 10.0.2.1                                   |
   | host_routes       |                                            |
   | id                | 356947cf-88e2-408b-ab49-7c0e79110a25       |
   | ip_version        | 4                                          |
   | ipv6_address_mode |                                            |
   | ipv6_ra_mode      |                                            |
   | name              |                                            |
   | network_id        | 71b06c5d-2eb8-4ef4-a978-c5c98874811b       |
   | project_id        | 532890c765604609a8d2ef6fc8e5f6ef           |
   | revision_number   | 2                                          |
   | subnetpool_id     |                                            |
   | tenant_id         | 532890c765604609a8d2ef6fc8e5f6ef           |
   | updated_at        | 2017-01-12T07:45:55Z                       |
   +-------------------+--------------------------------------------+

Add a router interface for the subnet to R2.

.. code-block:: console

   $ neutron --os-region-name=CentralRegion router-interface-add R2 356947cf-88e2-408b-ab49-7c0e79110a25
   Added interface 805b16de-fbe9-4b54-b891-b39bc2f73a86 to router R2.

List the available images in RegionTwo.

.. code-block:: console

   $ glance --os-region-name=RegionTwo image-list
   +--------------------------------------+---------------------------------+
   | ID                                   | Name                            |
   +--------------------------------------+---------------------------------+
   | 6fbad28b-d5f1-4924-a330-f9d5a6cf6c62 | cirros-0.3.4-x86_64-uec         |
   | cc912d30-5cbe-406d-89f2-8c09a73012c4 | cirros-0.3.4-x86_64-uec-kernel  |
   | 8660610d-d362-4f20-8f99-4d64c7c21284 | cirros-0.3.4-x86_64-uec-ramdisk |
   +--------------------------------------+---------------------------------+

List the available flavors in RegionTwo.

.. code-block:: console

   $ nova --os-region-name=RegionTwo flavor-list
   +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
   | ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
   +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
   | 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
   | 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
   | 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
   | 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
   | 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
   | c1 | cirros256 | 256       | 0    | 0         |      | 1     | 1.0         | True      |
   | d1 | ds512M    | 512       | 5    | 0         |      | 1     | 1.0         | True      |
   | d2 | ds1G      | 1024      | 10   | 0         |      | 1     | 1.0         | True      |
   | d3 | ds2G      | 2048      | 10   | 0         |      | 2     | 1.0         | True      |
   | d4 | ds4G      | 4096      | 20   | 0         |      | 4     | 1.0         | True      |
   +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+

Boot instance2 in RegionTwo, and connect this instance to net2 and net3.

.. code-block:: console

   $ nova --os-region-name=RegionTwo boot --flavor 1 --image 6fbad28b-d5f1-4924-a330-f9d5a6cf6c62 --nic net-id=68d04c60-469d-495d-bb23-0d36d56235bd --nic net-id=71b06c5d-2eb8-4ef4-a978-c5c98874811b instance2
   +--------------------------------------+----------------------------------------------------------------+
   | Property                             | Value                                                          |
   +--------------------------------------+----------------------------------------------------------------+
   | OS-DCF:diskConfig                    | MANUAL                                                         |
   | OS-EXT-AZ:availability_zone          |                                                                |
   | OS-EXT-SRV-ATTR:host                 | -                                                              |
   | OS-EXT-SRV-ATTR:hostname             | instance2                                                      |
   | OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                              |
   | OS-EXT-SRV-ATTR:instance_name        |                                                                |
   | OS-EXT-SRV-ATTR:kernel_id            | cc912d30-5cbe-406d-89f2-8c09a73012c4                           |
   | OS-EXT-SRV-ATTR:launch_index         | 0                                                              |
   | OS-EXT-SRV-ATTR:ramdisk_id           | 8660610d-d362-4f20-8f99-4d64c7c21284                           |
   | OS-EXT-SRV-ATTR:reservation_id       | r-xylwc16h                                                     |
   | OS-EXT-SRV-ATTR:root_device_name     | -                                                              |
   | OS-EXT-SRV-ATTR:user_data            | -                                                              |
   | OS-EXT-STS:power_state               | 0                                                              |
   | OS-EXT-STS:task_state                | scheduling                                                     |
   | OS-EXT-STS:vm_state                  | building                                                       |
   | OS-SRV-USG:launched_at               | -                                                              |
   | OS-SRV-USG:terminated_at             | -                                                              |
   | accessIPv4                           |                                                                |
   | accessIPv6                           |                                                                |
   | adminPass                            | Lmanqrz9GN77                                                   |
   | config_drive                         |                                                                |
   | created                              | 2017-01-13T01:41:19Z                                           |
   | description                          | -                                                              |
   | flavor                               | m1.tiny (1)                                                    |
   | hostId                               |                                                                |
   | host_status                          |                                                                |
   | id                                   | dbcfef20-0794-4b5e-aa3f-d08dc6086eb6                           |
   | image                                | cirros-0.3.4-x86_64-uec (6fbad28b-d5f1-4924-a330-f9d5a6cf6c62) |
   | key_name                             | -                                                              |
   | locked                               | False                                                          |
   | metadata                             | {}                                                             |
   | name                                 | instance2                                                      |
   | os-extended-volumes:volumes_attached | []                                                             |
   | progress                             | 0                                                              |
   | security_groups                      | default                                                        |
   | status                               | BUILD                                                          |
   | tags                                 | []                                                             |
   | tenant_id                            | 532890c765604609a8d2ef6fc8e5f6ef                               |
   | updated                              | 2017-01-13T01:41:19Z                                           |
   | user_id                              | d2521e53aa8c4916b3a8e444f20cf1da                               |
   +--------------------------------------+----------------------------------------------------------------+

Check to see if instance2 is active.

.. code-block:: console

   $ nova --os-region-name=RegionTwo list
   +--------------------------------------+-----------+--------+------------+-------------+------------------------------+
   | ID                                   | Name      | Status | Task State | Power State | Networks                     |
   +--------------------------------------+-----------+--------+------------+-------------+------------------------------+
   | dbcfef20-0794-4b5e-aa3f-d08dc6086eb6 | instance2 | ACTIVE | -          | Running     | net3=10.0.3.4; net2=10.0.2.3 |
   +--------------------------------------+-----------+--------+------------+-------------+------------------------------+
Create a floating IP for instance2.

.. code-block:: console

   $ neutron --os-region-name=CentralRegion floatingip-create ext-net2
   +---------------------+--------------------------------------+
   | Field               | Value                                |
   +---------------------+--------------------------------------+
   | created_at          | 2017-01-13T01:45:10Z                 |
   | description         |                                      |
   | fixed_ip_address    |                                      |
   | floating_ip_address | 163.3.125.4                          |
   | floating_network_id | ae806ecb-fa3e-4b3c-a582-caef3d8cd9b4 |
   | id                  | e0dcbe62-0023-41a8-a099-a4c4b5285e03 |
   | port_id             |                                      |
   | project_id          | 532890c765604609a8d2ef6fc8e5f6ef     |
   | revision_number     | 1                                    |
   | router_id           |                                      |
   | status              | DOWN                                 |
   | tenant_id           | 532890c765604609a8d2ef6fc8e5f6ef     |
   | updated_at          | 2017-01-13T01:45:10Z                 |
   +---------------------+--------------------------------------+
List the ports of instance2.

.. code-block:: console

   $ neutron --os-region-name=CentralRegion port-list
   +--------------------------------------+----------------------------------------------------------------------------------+-------------------+----------------------------------------------------------------------------------+
   | id                                   | name                                                                             | mac_address       | fixed_ips                                                                        |
   +--------------------------------------+----------------------------------------------------------------------------------+-------------------+----------------------------------------------------------------------------------+
   | 185b5185-0254-486c-9d8b-198af4b4d40e |                                                                                  | fa:16:3e:da:ae:99 | {"subnet_id": "2c8f446f-ba02-4140-a793-913033aa3580", "ip_address": "10.0.1.13"} |
   | 248f9072-76d6-405a-8eb5-f0d3475c542d |                                                                                  | fa:16:3e:dc:2f:b3 | {"subnet_id": "5ab92c3c-b799-451c-b5d5-b72274fb0fcc", "ip_address": "10.0.3.7"}  |
   | 6b0fe2e0-a236-40db-bcbf-2f31f7124d83 |                                                                                  | fa:16:3e:73:21:6c | {"subnet_id": "356947cf-88e2-408b-ab49-7c0e79110a25", "ip_address": "10.0.2.3"}  |
   | ab6dd6f4-b48a-4a3e-9f43-90d0fccc181a |                                                                                  | fa:16:3e:67:03:73 | {"subnet_id": "5ab92c3c-b799-451c-b5d5-b72274fb0fcc", "ip_address": "10.0.3.4"}  |
   | 5c0e0e7a-0faf-44c4-a735-c8745faa9920 |                                                                                  | fa:16:3e:7b:11:c6 |                                                                                  |
   | d48a8e87-61a0-494b-bc06-54f7a008ea78 |                                                                                  | fa:16:3e:c6:8e:c5 | {"subnet_id": "2c8f446f-ba02-4140-a793-913033aa3580", "ip_address": "10.0.1.1"}  |
   | ce3a1530-20f4-4760-a451-81e5f939aafc | dhcp_port_2c8f446f-ba02-4140-a793-913033aa3580                                   | fa:16:3e:e6:32:0f | {"subnet_id": "2c8f446f-ba02-4140-a793-913033aa3580", "ip_address": "10.0.1.2"}  |
   | 7925a3cc-6c36-4bc3-a798-a6145fed442a | interface_RegionOne_2c8f446f-ba02-4140-a793-913033aa3580                         | fa:16:3e:c5:ad:6f | {"subnet_id": "2c8f446f-ba02-4140-a793-913033aa3580", "ip_address": "10.0.1.3"}  |
   | 805b16de-fbe9-4b54-b891-b39bc2f73a86 |                                                                                  | fa:16:3e:94:cd:82 | {"subnet_id": "356947cf-88e2-408b-ab49-7c0e79110a25", "ip_address": "10.0.2.1"}  |
   | 30243711-d113-42b7-b712-81ca0d74546d | dhcp_port_356947cf-88e2-408b-ab49-7c0e79110a25                                   | fa:16:3e:83:3d:c8 | {"subnet_id": "356947cf-88e2-408b-ab49-7c0e79110a25", "ip_address": "10.0.2.2"}  |
   | 27fab5a2-0710-4742-a731-331f6c2150fa | interface_RegionTwo_356947cf-88e2-408b-ab49-7c0e79110a25                         | fa:16:3e:39:0a:f5 | {"subnet_id": "356947cf-88e2-408b-ab49-7c0e79110a25", "ip_address": "10.0.2.6"}  |
   | a7d0bae1-51de-4b47-9f81-b012e511e4a7 | interface_RegionTwo_5ab92c3c-b799-451c-b5d5-b72274fb0fcc                         | fa:16:3e:d6:3f:ca | {"subnet_id": "5ab92c3c-b799-451c-b5d5-b72274fb0fcc", "ip_address": "10.0.3.11"} |
   | 077c63b6-0184-4bf7-b3aa-b071de6f39be | dhcp_port_5ab92c3c-b799-451c-b5d5-b72274fb0fcc                                   | fa:16:3e:d2:a3:53 | {"subnet_id": "5ab92c3c-b799-451c-b5d5-b72274fb0fcc", "ip_address": "10.0.3.2"}  |
   | c90be7bc-31ea-4015-a432-2bef62e343d1 | interface_RegionOne_5ab92c3c-b799-451c-b5d5-b72274fb0fcc                         | fa:16:3e:b6:e4:bc | {"subnet_id": "5ab92c3c-b799-451c-b5d5-b72274fb0fcc", "ip_address": "10.0.3.9"}  |
   | 3053fcb9-b6ad-4a9c-b89e-ffe6aff6523b | bridge_port_532890c765604609a8d2ef6fc8e5f6ef_0c4faa42-5230-4adc-bab5-10ee53ebf888 | fa:16:3e:fc:d0:fc | {"subnet_id": "53def0ac-59ef-4c7b-b694-3375598954da", "ip_address": "100.0.0.11"} |
   | 5a10c53f-1f8f-43c1-a61c-6cdbd052985e | bridge_port_532890c765604609a8d2ef6fc8e5f6ef_cf71a43d-6df1-491d-894d-bd2e6620acfc | fa:16:3e:dc:f7:4a | {"subnet_id": "53def0ac-59ef-4c7b-b694-3375598954da", "ip_address": "100.0.0.8"}  |
   | ce787983-a140-4c53-96d2-71f62e15453a |                                                                                  | fa:16:3e:1a:62:7f | {"subnet_id": "a2eecc16-deb8-42a6-a41b-5058847ed20a", "ip_address": "163.3.124.5"} |
   | 2d9fc640-1858-4c7e-b42c-d3ed3f338b8a |                                                                                  | fa:16:3e:00:7c:6e | {"subnet_id": "a2eecc16-deb8-42a6-a41b-5058847ed20a", "ip_address": "163.3.124.6"} |
   | bfd53cea-6135-4515-ae63-f34612533527 |                                                                                  | fa:16:3e:ae:81:6f | {"subnet_id": "9fb32423-95a8-4589-b69c-e2955234ae56", "ip_address": "163.3.125.3"} |
   | 12495d5b-5346-48d0-8ed2-daea6ad42a3a |                                                                                  | fa:16:3e:d4:83:cc | {"subnet_id": "9fb32423-95a8-4589-b69c-e2955234ae56", "ip_address": "163.3.125.4"} |
   +--------------------------------------+----------------------------------------------------------------------------------+-------------------+----------------------------------------------------------------------------------+
Associate the floating IP with instance2's IP address in net2.

.. code-block:: console

   $ neutron --os-region-name=CentralRegion floatingip-associate e0dcbe62-0023-41a8-a099-a4c4b5285e03 6b0fe2e0-a236-40db-bcbf-2f31f7124d83
   Associated floating IP e0dcbe62-0023-41a8-a099-a4c4b5285e03
Make sure the floating IP association works.

.. code-block:: console

   $ neutron --os-region-name=CentralRegion floatingip-list
   +--------------------------------------+------------------+---------------------+--------------------------------------+
   | id                                   | fixed_ip_address | floating_ip_address | port_id                              |
   +--------------------------------------+------------------+---------------------+--------------------------------------+
   | 645f9cd6-d8d4-427a-88fe-770240c96d09 | 10.0.1.13        | 163.3.124.6         | 185b5185-0254-486c-9d8b-198af4b4d40e |
   | e0dcbe62-0023-41a8-a099-a4c4b5285e03 | 10.0.2.3         | 163.3.125.4         | 6b0fe2e0-a236-40db-bcbf-2f31f7124d83 |
   +--------------------------------------+------------------+---------------------+--------------------------------------+
You can verify this in RegionTwo as well.

.. code-block:: console

   $ neutron --os-region-name=RegionTwo floatingip-list
   +--------------------------------------+------------------+---------------------+--------------------------------------+
   | id                                   | fixed_ip_address | floating_ip_address | port_id                              |
   +--------------------------------------+------------------+---------------------+--------------------------------------+
   | b8a6b83a-cc8f-4335-894c-ef71e7504ee1 | 10.0.2.3         | 163.3.125.4         | 6b0fe2e0-a236-40db-bcbf-2f31f7124d83 |
   +--------------------------------------+------------------+---------------------+--------------------------------------+
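A quick reachability check can complement the listings above. This is a sketch, not part of the original walkthrough: it assumes you run it from a host that can reach ext-net2, and it reuses the floating IP address from the example output.

.. code-block:: console

   $ ping -c 4 163.3.125.4

If the association works, the replies come from instance2's port in net2 via the floating IP.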
Instance1 can ping instance2 through the IP address in net3, and vice versa.

Note: not all images bring up the second NIC automatically, so you may need
to ssh into instance1 or instance2, run "ifconfig -a" to check whether all
NICs were created, and bring up any that are down.
File diff suppressed because it is too large

@@ -1,228 +0,0 @@
================================================================
How to use the new layer-3 networking model for multi-NS-with-EW
================================================================

The following figure illustrates the new layer-3 networking model for multi-NS-with-EW::

    ext-net1                 ext-net2
    +---+---+                +---+---+
        |                        |
    +---+---+                +---+---+
    |  R1   |                |  R2   |
    +---+---+                +---+---+
        |                        |
    +---+--------------------+---+
    |         bridge-net         |
    +-------------+--------------+
                  |
                  |
    +-------------+--------------+
    |             R3             |
    +---+--------------------+---+
        | net1          net2 |
    +---+-----+-+      +---+-+---+
        |                  |
    +---------+-+      +--+--------+
    | Instance1 |      | Instance2 |
    +-----------+      +-----------+

Figure 1 Logical topology in central Neutron
As shown in Fig. 1, each external network (i.e., ext-net1, ext-net2) connects
to a router (i.e., R1, R2). These routers take charge of routing north-south
(NS) traffic and connect with the logical (non-local) router through the
bridge network. This is the networking model from the spec [1]_: a routed
network is used to manage the external networks in central Neutron.

When we create a logical router (i.e., R3) in central Neutron, Tricircle
creates a local router in each region. We then attach the networks (i.e.,
net1, net2) to the central router (i.e., R3), and this router takes charge of
all traffic, both NS and east-west (EW).

For EW traffic from net1 to net2, R3 (in net1's region) forwards packets to
the interface of net2 in the R3 (in net2's region) router namespace. For NS
traffic, R3 forwards packets to the interface of an available local router
(i.e., R1 or R2) that is attached to the real external network.

More details can be found in the spec A New Layer-3 Networking
multi-NS-with-EW-enabled [1]_.
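The forwarding described above can be observed on the hosts where the local routers run. This is a sketch, not part of the original walkthrough; the router UUID shown is hypothetical, and the qrouter- namespace naming follows the usual Neutron L3 agent convention:

.. code-block:: console

   $ ip netns | grep qrouter
   qrouter-4c4c164d-2cfa-4d2b-ba81-3711f44a6962
   $ sudo ip netns exec qrouter-4c4c164d-2cfa-4d2b-ba81-3711f44a6962 ip route

The routes in each namespace show which interface EW and NS packets leave through.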
How to use segment for managing multiple networks in this network topology
==========================================================================

1. Enable the enable_l3_route_network option in
   tricircle/network/central_plugin.py.

.. code-block:: python

   cfg.BoolOpt('enable_l3_route_network',
               default=True,
               help=_('Whether to use the new l3 networking model. When it '
                      'is set to True, Tricircle will create a local router '
                      'automatically after creating an external network'))

2. Add the segment plugin to /etc/neutron/neutron.conf.0.

.. code-block:: ini

   service_plugins = tricircle.network.segment_plugin.TricircleSegmentPlugin
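After changing the configuration, the central Neutron server has to be restarted so that the segment plugin is loaded. In a devstack-based setup this is typically done via systemd (a sketch; the unit name q-svc is the usual devstack convention and may differ in your deployment):

.. code-block:: console

   $ sudo systemctl restart devstack@q-svc.service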
Now we start to create segments and subnetworks.

.. code-block:: console

   stack@stack-atom:~/devstack$ openstack multiregion networking pod list

   stack@stack-atom:~/devstack$ openstack multiregion networking pod create --region-name CentralRegion
   +-------------+--------------------------------------+
   | Field       | Value                                |
   +-------------+--------------------------------------+
   | az_name     |                                      |
   | dc_name     |                                      |
   | pod_az_name |                                      |
   | pod_id      | f2f5757d-350f-4278-91a4-3baca12ebccc |
   | region_name | CentralRegion                        |
   +-------------+--------------------------------------+
   stack@stack-atom:~/devstack$ openstack multiregion networking pod create --region-name RegionOne --availability-zone az1
   +-------------+--------------------------------------+
   | Field       | Value                                |
   +-------------+--------------------------------------+
   | az_name     | az1                                  |
   | dc_name     |                                      |
   | pod_az_name |                                      |
   | pod_id      | 7c34177a-a210-4edc-a5ca-b9615a7061b3 |
   | region_name | RegionOne                            |
   +-------------+--------------------------------------+
   stack@stack-atom:~/devstack$ openstack --os-region-name=CentralRegion network create --share --provider-physical-network extern --provider-network-type vlan --provider-segment 3005 multisegment
   +---------------------------+--------------------------------------+
   | Field                     | Value                                |
   +---------------------------+--------------------------------------+
   | admin_state_up            | UP                                   |
   | availability_zone_hints   |                                      |
   | availability_zones        | None                                 |
   | created_at                | None                                 |
   | description               | None                                 |
   | dns_domain                | None                                 |
   | id                        | e848d653-e777-4715-9596-bd0427d9fd27 |
   | ipv4_address_scope        | None                                 |
   | ipv6_address_scope        | None                                 |
   | is_default                | None                                 |
   | is_vlan_transparent       | None                                 |
   | location                  | None                                 |
   | mtu                       | None                                 |
   | name                      | multisegment                         |
   | port_security_enabled     | False                                |
   | project_id                | 1f31124fadd247f18098a20a6da207ec     |
   | provider:network_type     | vlan                                 |
   | provider:physical_network | extern                               |
   | provider:segmentation_id  | 3005                                 |
   | qos_policy_id             | None                                 |
   | revision_number           | None                                 |
   | router:external           | Internal                             |
   | segments                  | None                                 |
   | shared                    | True                                 |
   | status                    | ACTIVE                               |
   | subnets                   |                                      |
   | tags                      |                                      |
   | updated_at                | None                                 |
   +---------------------------+--------------------------------------+
   stack@stack-atom:~/devstack$ openstack --os-region-name=CentralRegion network segment create --physical-network extern --network-type vlan --segment 3005 --network multisegment newl3-RegionOne-sgmtnet01
   +------------------+--------------------------------------+
   | Field            | Value                                |
   +------------------+--------------------------------------+
   | description      |                                      |
   | id               | 802ccc73-1c99-455e-858a-1c19d77d1995 |
   | location         | None                                 |
   | name             | newl3-RegionOne-sgmtnet01            |
   | network_id       | e848d653-e777-4715-9596-bd0427d9fd27 |
   | network_type     | vlan                                 |
   | physical_network | extern                               |
   | segmentation_id  | 3005                                 |
   +------------------+--------------------------------------+
   stack@stack-atom:~/devstack$ openstack --os-region-name=CentralRegion network list
   +--------------------------------------+---------------------------+---------+
   | ID                                   | Name                      | Subnets |
   +--------------------------------------+---------------------------+---------+
   | 5596d53f-d6ed-4ac5-9722-ad7e3e82e187 | newl3-RegionOne-sgmtnet01 |         |
   | e848d653-e777-4715-9596-bd0427d9fd27 | multisegment              |         |
   +--------------------------------------+---------------------------+---------+
   stack@stack-atom:~/devstack$ openstack --os-region-name=RegionOne network list
   +--------------------------------------+---------------------------+---------+
   | ID                                   | Name                      | Subnets |
   +--------------------------------------+---------------------------+---------+
   | 2b9f4e56-57be-4624-87b9-ab745ec321c0 | newl3-RegionOne-sgmtnet01 |         |
   +--------------------------------------+---------------------------+---------+
   stack@stack-atom:~/devstack$ openstack --os-region-name=CentralRegion subnet create --network newl3-RegionOne-sgmtnet01 --subnet-range 10.0.0.0/24 newl3segment01-subnet-v4
   +-------------------+--------------------------------------+
   | Field             | Value                                |
   +-------------------+--------------------------------------+
   | allocation_pools  | 10.0.0.2-10.0.0.254                  |
   | cidr              | 10.0.0.0/24                          |
   | created_at        | 2018-11-28T09:22:39Z                 |
   | description       |                                      |
   | dns_nameservers   |                                      |
   | enable_dhcp       | True                                 |
   | gateway_ip        | 10.0.0.1                             |
   | host_routes       |                                      |
   | id                | f00f7eb0-a72a-4c25-8f71-46e3d872064a |
   | ip_version        | 4                                    |
   | ipv6_address_mode | None                                 |
   | ipv6_ra_mode      | None                                 |
   | location          | None                                 |
   | name              | newl3segment01-subnet-v4             |
   | network_id        | 5596d53f-d6ed-4ac5-9722-ad7e3e82e187 |
   | project_id        | 1f31124fadd247f18098a20a6da207ec     |
   | revision_number   | 0                                    |
   | segment_id        | None                                 |
   | service_types     | None                                 |
   | subnetpool_id     | None                                 |
   | tags              |                                      |
   | updated_at        | 2018-11-28T09:22:39Z                 |
   +-------------------+--------------------------------------+
   stack@stack-atom:~/devstack$ openstack --os-region-name=CentralRegion network list
   +--------------------------------------+---------------------------+--------------------------------------+
   | ID                                   | Name                      | Subnets                              |
   +--------------------------------------+---------------------------+--------------------------------------+
   | 5596d53f-d6ed-4ac5-9722-ad7e3e82e187 | newl3-RegionOne-sgmtnet01 | f00f7eb0-a72a-4c25-8f71-46e3d872064a |
   | e848d653-e777-4715-9596-bd0427d9fd27 | multisegment              |                                      |
   +--------------------------------------+---------------------------+--------------------------------------+
   stack@stack-atom:~/devstack$ openstack --os-region-name=CentralRegion subnet list
   +--------------------------------------+--------------------------+--------------------------------------+-------------+
   | ID                                   | Name                     | Network                              | Subnet      |
   +--------------------------------------+--------------------------+--------------------------------------+-------------+
   | f00f7eb0-a72a-4c25-8f71-46e3d872064a | newl3segment01-subnet-v4 | 5596d53f-d6ed-4ac5-9722-ad7e3e82e187 | 10.0.0.0/24 |
   +--------------------------------------+--------------------------+--------------------------------------+-------------+
   stack@stack-atom:~/devstack$ openstack --os-region-name=RegionOne network list
   +--------------------------------------+---------------------------+--------------------------------------+
   | ID                                   | Name                      | Subnets                              |
   +--------------------------------------+---------------------------+--------------------------------------+
   | 2b9f4e56-57be-4624-87b9-ab745ec321c0 | newl3-RegionOne-sgmtnet01 | f00f7eb0-a72a-4c25-8f71-46e3d872064a |
   +--------------------------------------+---------------------------+--------------------------------------+
   stack@stack-atom:~/devstack$ openstack --os-region-name=RegionOne subnet list
   +--------------------------------------+--------------------------------------+--------------------------------------+-------------+
   | ID                                   | Name                                 | Network                              | Subnet      |
   +--------------------------------------+--------------------------------------+--------------------------------------+-------------+
   | f00f7eb0-a72a-4c25-8f71-46e3d872064a | f00f7eb0-a72a-4c25-8f71-46e3d872064a | 2b9f4e56-57be-4624-87b9-ab745ec321c0 | 10.0.0.0/24 |
   +--------------------------------------+--------------------------------------+--------------------------------------+-------------+
This part shows how to delete segments and subnetworks.

.. code-block:: console

   stack@stack-atom:~/devstack$ openstack --os-region-name=CentralRegion subnet delete newl3segment01-subnet-v4
   stack@stack-atom:~/devstack$ openstack --os-region-name=CentralRegion subnet list

   stack@stack-atom:~/devstack$ openstack --os-region-name=RegionOne subnet list

   stack@stack-atom:~/devstack$ openstack --os-region-name=CentralRegion network delete newl3-RegionOne-sgmtnet01
   stack@stack-atom:~/devstack$ openstack --os-region-name=CentralRegion network list
   +--------------------------------------+--------------+---------+
   | ID                                   | Name         | Subnets |
   +--------------------------------------+--------------+---------+
   | e848d653-e777-4715-9596-bd0427d9fd27 | multisegment |         |
   +--------------------------------------+--------------+---------+
   stack@stack-atom:~/devstack$ openstack --os-region-name=RegionOne network list

   stack@stack-atom:~/devstack$
Reference
=========

.. [1] https://github.com/openstack/tricircle/blob/master/specs/stein/new-l3-networking-mulit-NS-with-EW.rst
@@ -1,664 +0,0 @@
==================================================
North South Networking via Single External Network
==================================================

The following figure illustrates one typical networking mode: north-south
traffic for the tenant is centralized through a single external network.
Only one virtual non-local router R1 is needed, even if the tenant's
networks are located in multiple OpenStack regions.

Only Neutron and the Tricircle Local Neutron Plugin need to be deployed in
RegionThree if you want to make the external network floating and applicable
to all of the tenant's networks.
.. code-block:: console

                       +-------------+
                       | RegionThree |
                       |             |
                       |  ext-net1   |
                       | +----+----+ |
                       |      |      |
                       |   +--+--+   |
                       |   | R1  |   |
                       |   +-+---+   |
                       |     |       |
                       +-----+-------+
   +-----------------+       |       +-----------------+
   | RegionOne       |       |       | RegionTwo       |
   |         bridge  |  net  |       |                 |
   |   ++------------+-------+-------+------------++   |
   |    |            |               |             |   |
   |  +-+---+        |               |        +----++  |
   |  | R1  |        |               |        | R1  |  |
   |  +--+--+        |               |        +--+--+  |
   |     | net1      |               |     net2  |     |
   |  +--+--+---+    |               |   +---+---+-+   |
   |     |           |               |       |         |
   |  +--+--------+  |               |  +----+------+  |
   |  | Instance1 |  |               |  | Instance2 |  |
   |  +-----------+  |               |  +-----------+  |
   +-----------------+               +-----------------+

Figure 1 North South Networking via Single External Network
.. note:: If the local network and the external network are located in the
   same region, attaching a router interface to the non-local router will
   lead to one additional logical router for east-west networking. For
   example, in the following figure, the external network ext-net1 is in
   RegionTwo, and if the local network net2 is attached to the router R1,
   then the additional logical router R1 for east-west networking is
   created.
.. code-block:: console

   +-----------------+  +-----------------+        +-----------------+  +-----------------+
   | RegionOne       |  | RegionTwo       |        | RegionOne       |  | RegionTwo       |
   |                 |  |                 |        |                 |  |                 |
   |                 |  |    ext-net1     |        |                 |  |    ext-net1     |
   |                 |  |  +-------+---+  |        |                 |  |  +-------+---+  |
   |      bridge net |  |      |          |        |      bridge net |  |      |          |
   |  -+-------+-----+--+-+    |          |        |  -+-------+-----+-+-+-+   |          |
   |   |       |     |  | | +--+--+       |  --->  |   |       |     | | | | +-+---+      |
   | +-+---+   |     |  | +-+ R1  |       |        | +-+---+   |     | | | +-+ R1  |      |
   | | R1  |   |     |  |   +-----+       |        | | R1  |   |     | | |   +-----+      |
   | +--+--+   |     |  |                 |        | +--+--+   |     | | |                |
   |    |      |     |  |                 |        |    |      |     | | |   +-----+      |
   |    |      |     |  |                 |        |    |      |     | | +---+ R1  |      |
   |    |      |     |  |                 |        |    |      |     | |     +--+--+      |
   |    |      |     |  |                 |        |    |      |     | |        |         |
   |    | net1 |     |  |                 |        |    | net1 |     | |        | net2    |
   |  +-+--+---+-+   |  |                 |        |  +-+--+---+-+   | |   +-+--+---+     |
   |    |            |  |                 |        |    |            | |     |            |
   | +--+--------+   |  |                 |        | +--+--------+   | | +---+-------+    |
   | | Instance1 |   |  |                 |        | | Instance1 |   | | | Instance2 |    |
   | +-----------+   |  |                 |        | +-----------+   | | +-----------+    |
   +-----------------+  +-----------------+        +-----------------+  +-----------------+

Figure 2 What happens if local network and external network are in the same region
How to create this network topology
===================================

The following commands create the Figure 1 topology. A different order is
also possible: for example, create the router and tenant network first, then
boot the instance, set the router gateway, and associate the floating IP as
the last step.
Create the external network ext-net1, which will be located in RegionThree.

.. code-block:: console

   $ neutron --os-region-name=CentralRegion net-create --provider:network_type vlan --provider:physical_network extern --router:external --availability-zone-hint RegionThree ext-net1
   +---------------------------+--------------------------------------+
   | Field                     | Value                                |
   +---------------------------+--------------------------------------+
   | admin_state_up            | True                                 |
   | availability_zone_hints   | RegionThree                          |
   | id                        | 494a1d2f-9a0f-4d0d-a5e9-f926fce912ac |
   | name                      | ext-net1                             |
   | project_id                | 640e791e767e49939d5c600fdb3f8431     |
   | provider:network_type     | vlan                                 |
   | provider:physical_network | extern                               |
   | provider:segmentation_id  | 170                                  |
   | router:external           | True                                 |
   | shared                    | False                                |
   | status                    | ACTIVE                               |
   | subnets                   |                                      |
   | tenant_id                 | 640e791e767e49939d5c600fdb3f8431     |
   +---------------------------+--------------------------------------+
A flat-type external network can also be created.

.. code-block:: console

   $ neutron --os-region-name=CentralRegion net-create --provider:network_type flat --provider:physical_network extern --router:external --availability-zone-hint RegionTwo ext-net1
   +---------------------------+--------------------------------------+
   | Field                     | Value                                |
   +---------------------------+--------------------------------------+
   | admin_state_up            | True                                 |
   | availability_zone_hints   | RegionTwo                            |
   | id                        | c151c1a2-ec8c-4975-bb85-9a8e143100b0 |
   | name                      | ext-net1                             |
   | project_id                | 640e791e767e49939d5c600fdb3f8431     |
   | provider:network_type     | flat                                 |
   | provider:physical_network | extern                               |
   | provider:segmentation_id  |                                      |
   | router:external           | True                                 |
   | shared                    | False                                |
   | status                    | ACTIVE                               |
   | subnets                   |                                      |
   | tenant_id                 | 640e791e767e49939d5c600fdb3f8431     |
   +---------------------------+--------------------------------------+
Create a subnet in ext-net1.

.. code-block:: console

   $ neutron --os-region-name=CentralRegion subnet-create --name ext-subnet1 --disable-dhcp ext-net1 163.3.124.0/24
   +-------------------+--------------------------------------------------+
   | Field             | Value                                            |
   +-------------------+--------------------------------------------------+
   | allocation_pools  | {"start": "163.3.124.2", "end": "163.3.124.254"} |
   | cidr              | 163.3.124.0/24                                   |
   | created_at        | 2017-01-14T02:11:48Z                             |
   | description       |                                                  |
   | dns_nameservers   |                                                  |
   | enable_dhcp       | False                                            |
   | gateway_ip        | 163.3.124.1                                      |
   | host_routes       |                                                  |
   | id                | 5485feab-f843-4ffe-abd5-6afe5319ad82             |
   | ip_version        | 4                                                |
   | ipv6_address_mode |                                                  |
   | ipv6_ra_mode      |                                                  |
   | name              | ext-subnet1                                      |
   | network_id        | 494a1d2f-9a0f-4d0d-a5e9-f926fce912ac             |
   | project_id        | 640e791e767e49939d5c600fdb3f8431                 |
   | revision_number   | 2                                                |
   | subnetpool_id     |                                                  |
   | tenant_id         | 640e791e767e49939d5c600fdb3f8431                 |
   | updated_at        | 2017-01-14T02:11:48Z                             |
   +-------------------+--------------------------------------------------+
Create router R1.

.. code-block:: console

   $ neutron --os-region-name=CentralRegion router-create R1
   +-----------------------+--------------------------------------+
   | Field                 | Value                                |
   +-----------------------+--------------------------------------+
   | admin_state_up        | True                                 |
   | created_at            | 2017-01-14T02:12:15Z                 |
   | description           |                                      |
   | external_gateway_info |                                      |
   | id                    | 4c4c164d-2cfa-4d2b-ba81-3711f44a6962 |
   | name                  | R1                                   |
   | project_id            | 640e791e767e49939d5c600fdb3f8431     |
   | revision_number       | 1                                    |
   | status                | ACTIVE                               |
   | tenant_id             | 640e791e767e49939d5c600fdb3f8431     |
   | updated_at            | 2017-01-14T02:12:15Z                 |
   +-----------------------+--------------------------------------+
Set the router gateway to ext-net1 for R1.

.. code-block:: console

   $ neutron --os-region-name=CentralRegion router-gateway-set R1 ext-net1
   Set gateway for router R1
Create the local network net1, which will reside in RegionOne, so use
RegionOne as the value of availability-zone-hint.

.. code-block:: console

   $ neutron --os-region-name=CentralRegion net-create --availability-zone-hint RegionOne net1
   +---------------------------+--------------------------------------+
   | Field                     | Value                                |
   +---------------------------+--------------------------------------+
   | admin_state_up            | True                                 |
   | availability_zone_hints   | RegionOne                            |
   | id                        | dde37c9b-7fe6-4ca9-be1a-0abb9ba1eddf |
   | name                      | net1                                 |
   | project_id                | 640e791e767e49939d5c600fdb3f8431     |
   | provider:network_type     | local                                |
   | provider:physical_network |                                      |
   | provider:segmentation_id  |                                      |
   | router:external           | False                                |
   | shared                    | False                                |
   | status                    | ACTIVE                               |
   | subnets                   |                                      |
   | tenant_id                 | 640e791e767e49939d5c600fdb3f8431     |
   +---------------------------+--------------------------------------+
Create a subnet in net1.

.. code-block:: console

   $ neutron --os-region-name=CentralRegion subnet-create net1 10.0.1.0/24
   +-------------------+--------------------------------------------+
   | Field             | Value                                      |
   +-------------------+--------------------------------------------+
   | allocation_pools  | {"start": "10.0.1.2", "end": "10.0.1.254"} |
   | cidr              | 10.0.1.0/24                                |
   | created_at        | 2017-01-14T02:14:09Z                       |
   | description       |                                            |
   | dns_nameservers   |                                            |
   | enable_dhcp       | True                                       |
   | gateway_ip        | 10.0.1.1                                   |
   | host_routes       |                                            |
   | id                | 409f3b9e-3b14-4147-9443-51930eb9a882       |
   | ip_version        | 4                                          |
   | ipv6_address_mode |                                            |
   | ipv6_ra_mode      |                                            |
   | name              |                                            |
   | network_id        | dde37c9b-7fe6-4ca9-be1a-0abb9ba1eddf       |
   | project_id        | 640e791e767e49939d5c600fdb3f8431           |
   | revision_number   | 2                                          |
   | subnetpool_id     |                                            |
   | tenant_id         | 640e791e767e49939d5c600fdb3f8431           |
   +-------------------+--------------------------------------------+

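As the output above shows, when no gateway or allocation pool is given,
Neutron defaults to the first usable address of the CIDR as the gateway and
the remaining addresses as the allocation pool. A minimal sketch of that
derivation (illustrative only, plain Python with the standard ``ipaddress``
module, not Neutron or Tricircle code):

```python
import ipaddress

def default_subnet_layout(cidr: str):
    """Illustrate Neutron's default gateway/allocation-pool choice for a subnet."""
    net = ipaddress.ip_network(cidr)
    hosts = list(net.hosts())      # usable addresses; network/broadcast excluded
    gateway = hosts[0]             # default gateway: first usable address
    pool = (hosts[1], hosts[-1])   # allocation pool: the rest of the range
    return str(gateway), {"start": str(pool[0]), "end": str(pool[1])}

gateway, pool = default_subnet_layout("10.0.1.0/24")
print(gateway, pool)  # 10.0.1.1 {'start': '10.0.1.2', 'end': '10.0.1.254'}
```

This matches the ``gateway_ip`` and ``allocation_pools`` values shown in the
subnet-create output above.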
Add this subnet to router R1.

.. code-block:: console

   $ neutron --os-region-name=CentralRegion router-interface-add R1 409f3b9e-3b14-4147-9443-51930eb9a882
   Added interface 92eaf94d-e345-489a-bc91-3d3645d27f8b to router R1.

List the available images in RegionOne.

.. code-block:: console

   $ glance --os-region-name=RegionOne image-list
   +--------------------------------------+---------------------------------+
   | ID                                   | Name                            |
   +--------------------------------------+---------------------------------+
   | 570b5674-4d7d-4c17-9e8a-1caed6194ff1 | cirros-0.3.4-x86_64-uec         |
   | 548cf82c-4353-407e-9aa2-3feac027c297 | cirros-0.3.4-x86_64-uec-kernel  |
   | 1d40fb9f-1669-4b4d-82b8-4c3b9cde0c03 | cirros-0.3.4-x86_64-uec-ramdisk |
   +--------------------------------------+---------------------------------+

List the available flavors in RegionOne.

.. code-block:: console

   $ nova --os-region-name=RegionOne flavor-list
   +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
   | ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
   +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
   | 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
   | 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
   | 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
   | 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
   | 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
   | c1 | cirros256 | 256       | 0    | 0         |      | 1     | 1.0         | True      |
   | d1 | ds512M    | 512       | 5    | 0         |      | 1     | 1.0         | True      |
   | d2 | ds1G      | 1024      | 10   | 0         |      | 1     | 1.0         | True      |
   | d3 | ds2G      | 2048      | 10   | 0         |      | 2     | 1.0         | True      |
   | d4 | ds4G      | 4096      | 20   | 0         |      | 4     | 1.0         | True      |
   +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+

Boot instance1 in RegionOne, and connect this instance to net1.

.. code-block:: console

   $ nova --os-region-name=RegionOne boot --flavor 1 --image 570b5674-4d7d-4c17-9e8a-1caed6194ff1 --nic net-id=dde37c9b-7fe6-4ca9-be1a-0abb9ba1eddf instance1
   +--------------------------------------+----------------------------------------------------------------+
   | Property                             | Value                                                          |
   +--------------------------------------+----------------------------------------------------------------+
   | OS-DCF:diskConfig                    | MANUAL                                                         |
   | OS-EXT-AZ:availability_zone          |                                                                |
   | OS-EXT-SRV-ATTR:host                 | -                                                              |
   | OS-EXT-SRV-ATTR:hostname             | instance1                                                      |
   | OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                              |
   | OS-EXT-SRV-ATTR:instance_name        |                                                                |
   | OS-EXT-SRV-ATTR:kernel_id            | 548cf82c-4353-407e-9aa2-3feac027c297                           |
   | OS-EXT-SRV-ATTR:launch_index         | 0                                                              |
   | OS-EXT-SRV-ATTR:ramdisk_id           | 1d40fb9f-1669-4b4d-82b8-4c3b9cde0c03                           |
   | OS-EXT-SRV-ATTR:reservation_id       | r-n0k0u15s                                                     |
   | OS-EXT-SRV-ATTR:root_device_name     | -                                                              |
   | OS-EXT-SRV-ATTR:user_data            | -                                                              |
   | OS-EXT-STS:power_state               | 0                                                              |
   | OS-EXT-STS:task_state                | scheduling                                                     |
   | OS-EXT-STS:vm_state                  | building                                                       |
   | OS-SRV-USG:launched_at               | -                                                              |
   | OS-SRV-USG:terminated_at             | -                                                              |
   | accessIPv4                           |                                                                |
   | accessIPv6                           |                                                                |
   | adminPass                            | N9A9iArByrdt                                                   |
   | config_drive                         |                                                                |
   | created                              | 2017-01-14T02:17:05Z                                           |
   | description                          | -                                                              |
   | flavor                               | m1.tiny (1)                                                    |
   | hostId                               |                                                                |
   | host_status                          |                                                                |
   | id                                   | e7206415-e497-4110-b644-a64272625cef                           |
   | image                                | cirros-0.3.4-x86_64-uec (570b5674-4d7d-4c17-9e8a-1caed6194ff1) |
   | key_name                             | -                                                              |
   | locked                               | False                                                          |
   | metadata                             | {}                                                             |
   | name                                 | instance1                                                      |
   | os-extended-volumes:volumes_attached | []                                                             |
   | progress                             | 0                                                              |
   | security_groups                      | default                                                        |
   | status                               | BUILD                                                          |
   | tags                                 | []                                                             |
   | tenant_id                            | 640e791e767e49939d5c600fdb3f8431                               |
   | updated                              | 2017-01-14T02:17:05Z                                           |
   | user_id                              | 8e84fae0a5b74464b3300a4576d090a4                               |
   +--------------------------------------+----------------------------------------------------------------+

Make sure instance1 is active in RegionOne.

.. code-block:: console

   $ nova --os-region-name=RegionOne list
   +--------------------------------------+-----------+--------+------------+-------------+---------------+
   | ID                                   | Name      | Status | Task State | Power State | Networks      |
   +--------------------------------------+-----------+--------+------------+-------------+---------------+
   | e7206415-e497-4110-b644-a64272625cef | instance1 | ACTIVE | -          | Running     | net1=10.0.1.5 |
   +--------------------------------------+-----------+--------+------------+-------------+---------------+

Create a floating IP for instance1.

.. code-block:: console

   $ neutron --os-region-name=CentralRegion floatingip-create ext-net1
   +---------------------+--------------------------------------+
   | Field               | Value                                |
   +---------------------+--------------------------------------+
   | created_at          | 2017-01-14T02:19:24Z                 |
   | description         |                                      |
   | fixed_ip_address    |                                      |
   | floating_ip_address | 163.3.124.7                          |
   | floating_network_id | 494a1d2f-9a0f-4d0d-a5e9-f926fce912ac |
   | id                  | 04c18e73-675b-4273-a73a-afaf1e4f9811 |
   | port_id             |                                      |
   | project_id          | 640e791e767e49939d5c600fdb3f8431     |
   | revision_number     | 1                                    |
   | router_id           |                                      |
   | status              | DOWN                                 |
   | tenant_id           | 640e791e767e49939d5c600fdb3f8431     |
   | updated_at          | 2017-01-14T02:19:24Z                 |
   +---------------------+--------------------------------------+

List the port in net1 for instance1.

.. code-block:: console

   $ neutron --os-region-name=CentralRegion port-list
   +------------------------------------+------------------------------------+-------------------+--------------------------------------+
   | id                                 | name                               | mac_address       | fixed_ips                            |
   +------------------------------------+------------------------------------+-------------------+--------------------------------------+
   | 37e9cfe5-d410-4625-963d-           |                                    | fa:16:3e:14:47:a8 | {"subnet_id": "409f3b9e-             |
   | b7ea4347d72e                       |                                    |                   | 3b14-4147-9443-51930eb9a882",        |
   |                                    |                                    |                   | "ip_address": "10.0.1.5"}            |
   | 92eaf94d-e345-489a-                |                                    | fa:16:3e:63:a9:08 | {"subnet_id": "409f3b9e-             |
   | bc91-3d3645d27f8b                  |                                    |                   | 3b14-4147-9443-51930eb9a882",        |
   |                                    |                                    |                   | "ip_address": "10.0.1.1"}            |
   | d3ca5e74-470e-4953-a280-309b5e8e11 | dhcp_port_409f3b9e-                | fa:16:3e:7e:72:98 | {"subnet_id": "409f3b9e-             |
   | 46                                 | 3b14-4147-9443-51930eb9a882        |                   | 3b14-4147-9443-51930eb9a882",        |
   |                                    |                                    |                   | "ip_address": "10.0.1.2"}            |
   | b4eef6a0-70e6-4a42-b0c5-f8f49cee25 | interface_RegionOne_409f3b9e-      | fa:16:3e:00:e1:5b | {"subnet_id": "409f3b9e-             |
   | c0                                 | 3b14-4147-9443-51930eb9a882        |                   | 3b14-4147-9443-51930eb9a882",        |
   |                                    |                                    |                   | "ip_address": "10.0.1.7"}            |
   | 65b52fe3-f765-4124-a97f-           | bridge_port_640e791e767e49939d5c60 | fa:16:3e:df:7b:97 | {"subnet_id": "d637f4e5-4b9a-4237    |
   | f73a76e820e6                       | 0fdb3f8431_daa08da0-c60e-          |                   | -b3bc-ccfba45a5c37", "ip_address":   |
   |                                    | 42c8-bc30-1ed887111ecb             |                   | "100.0.0.7"}                         |
   | e0755307-a498-473e-                |                                    | fa:16:3e:1c:70:b9 | {"subnet_id": "5485feab-f843-4ffe-   |
   | 99e5-30cbede36b8e                  |                                    |                   | abd5-6afe5319ad82", "ip_address":    |
   |                                    |                                    |                   | "163.3.124.7"}                       |
   | 2404eb83-f2f4-4a36-b377-dbc8befee1 |                                    | fa:16:3e:25:80:e6 | {"subnet_id": "5485feab-f843-4ffe-   |
   | 93                                 |                                    |                   | abd5-6afe5319ad82", "ip_address":    |
   |                                    |                                    |                   | "163.3.124.9"}                       |
   +------------------------------------+------------------------------------+-------------------+--------------------------------------+

Associate the floating IP to instance1's IP in net1.

.. code-block:: console

   $ neutron --os-region-name=CentralRegion floatingip-associate 04c18e73-675b-4273-a73a-afaf1e4f9811 37e9cfe5-d410-4625-963d-b7ea4347d72e
   Associated floating IP 04c18e73-675b-4273-a73a-afaf1e4f9811

Proceed with creating the network topology in RegionTwo.

Create net2 in RegionTwo.

.. code-block:: console

   $ neutron --os-region-name=CentralRegion net-create --availability-zone-hint RegionTwo net2
   +---------------------------+--------------------------------------+
   | Field                     | Value                                |
   +---------------------------+--------------------------------------+
   | admin_state_up            | True                                 |
   | availability_zone_hints   | RegionTwo                            |
   | id                        | cfe622f9-1851-4033-a4ba-6718659a147c |
   | name                      | net2                                 |
   | project_id                | 640e791e767e49939d5c600fdb3f8431     |
   | provider:network_type     | local                                |
   | provider:physical_network |                                      |
   | provider:segmentation_id  |                                      |
   | router:external           | False                                |
   | shared                    | False                                |
   | status                    | ACTIVE                               |
   | subnets                   |                                      |
   | tenant_id                 | 640e791e767e49939d5c600fdb3f8431     |
   +---------------------------+--------------------------------------+

Create a subnet in net2.

.. code-block:: console

   $ neutron --os-region-name=CentralRegion subnet-create net2 10.0.2.0/24
   +-------------------+--------------------------------------------+
   | Field             | Value                                      |
   +-------------------+--------------------------------------------+
   | allocation_pools  | {"start": "10.0.2.2", "end": "10.0.2.254"} |
   | cidr              | 10.0.2.0/24                                |
   | created_at        | 2017-01-14T02:36:03Z                       |
   | description       |                                            |
   | dns_nameservers   |                                            |
   | enable_dhcp       | True                                       |
   | gateway_ip        | 10.0.2.1                                   |
   | host_routes       |                                            |
   | id                | 4e3376f8-0bda-450d-b4fb-9eb77c4ef919       |
   | ip_version        | 4                                          |
   | ipv6_address_mode |                                            |
   | ipv6_ra_mode      |                                            |
   | name              |                                            |
   | network_id        | cfe622f9-1851-4033-a4ba-6718659a147c       |
   | project_id        | 640e791e767e49939d5c600fdb3f8431           |
   | revision_number   | 2                                          |
   | subnetpool_id     |                                            |
   | tenant_id         | 640e791e767e49939d5c600fdb3f8431           |
   | updated_at        | 2017-01-14T02:36:03Z                       |
   +-------------------+--------------------------------------------+

Add a router interface for this subnet to R1.

.. code-block:: console

   $ neutron --os-region-name=CentralRegion router-interface-add R1 4e3376f8-0bda-450d-b4fb-9eb77c4ef919
   Added interface d4b0e6d9-8bfb-4cd6-8824-92731c0226da to router R1.

List the available images in RegionTwo.

.. code-block:: console

   $ glance --os-region-name=RegionTwo image-list
   +--------------------------------------+---------------------------------+
   | ID                                   | Name                            |
   +--------------------------------------+---------------------------------+
   | 392aa24f-a1a8-4897-bced-70301e1c7e3b | cirros-0.3.4-x86_64-uec         |
   | 41ac5372-764a-4e31-8c3a-66cdc5a6529e | cirros-0.3.4-x86_64-uec-kernel  |
   | 55523513-719d-4949-b697-db98ab3e938e | cirros-0.3.4-x86_64-uec-ramdisk |
   +--------------------------------------+---------------------------------+

List the available flavors in RegionTwo.

.. code-block:: console

   $ nova --os-region-name=RegionTwo flavor-list
   +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
   | ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
   +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
   | 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
   | 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
   | 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
   | 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
   | 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
   | c1 | cirros256 | 256       | 0    | 0         |      | 1     | 1.0         | True      |
   | d1 | ds512M    | 512       | 5    | 0         |      | 1     | 1.0         | True      |
   | d2 | ds1G      | 1024      | 10   | 0         |      | 1     | 1.0         | True      |
   | d3 | ds2G      | 2048      | 10   | 0         |      | 2     | 1.0         | True      |
   | d4 | ds4G      | 4096      | 20   | 0         |      | 4     | 1.0         | True      |
   +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+

Boot instance2 in RegionTwo, and connect it to net2.

.. code-block:: console

   $ nova --os-region-name=RegionTwo boot --flavor 1 --image 392aa24f-a1a8-4897-bced-70301e1c7e3b --nic net-id=cfe622f9-1851-4033-a4ba-6718659a147c instance2
   +--------------------------------------+----------------------------------------------------------------+
   | Property                             | Value                                                          |
   +--------------------------------------+----------------------------------------------------------------+
   | OS-DCF:diskConfig                    | MANUAL                                                         |
   | OS-EXT-AZ:availability_zone          |                                                                |
   | OS-EXT-SRV-ATTR:host                 | -                                                              |
   | OS-EXT-SRV-ATTR:hostname             | instance2                                                      |
   | OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                              |
   | OS-EXT-SRV-ATTR:instance_name        |                                                                |
   | OS-EXT-SRV-ATTR:kernel_id            | 41ac5372-764a-4e31-8c3a-66cdc5a6529e                           |
   | OS-EXT-SRV-ATTR:launch_index         | 0                                                              |
   | OS-EXT-SRV-ATTR:ramdisk_id           | 55523513-719d-4949-b697-db98ab3e938e                           |
   | OS-EXT-SRV-ATTR:reservation_id       | r-3v42ltzp                                                     |
   | OS-EXT-SRV-ATTR:root_device_name     | -                                                              |
   | OS-EXT-SRV-ATTR:user_data            | -                                                              |
   | OS-EXT-STS:power_state               | 0                                                              |
   | OS-EXT-STS:task_state                | scheduling                                                     |
   | OS-EXT-STS:vm_state                  | building                                                       |
   | OS-SRV-USG:launched_at               | -                                                              |
   | OS-SRV-USG:terminated_at             | -                                                              |
   | accessIPv4                           |                                                                |
   | accessIPv6                           |                                                                |
   | adminPass                            | o62QufgY2JAF                                                   |
   | config_drive                         |                                                                |
   | created                              | 2017-01-14T02:39:42Z                                           |
   | description                          | -                                                              |
   | flavor                               | m1.tiny (1)                                                    |
   | hostId                               |                                                                |
   | host_status                          |                                                                |
   | id                                   | e489ab4e-957d-4537-9870-fff87406aac5                           |
   | image                                | cirros-0.3.4-x86_64-uec (392aa24f-a1a8-4897-bced-70301e1c7e3b) |
   | key_name                             | -                                                              |
   | locked                               | False                                                          |
   | metadata                             | {}                                                             |
   | name                                 | instance2                                                      |
   | os-extended-volumes:volumes_attached | []                                                             |
   | progress                             | 0                                                              |
   | security_groups                      | default                                                        |
   | status                               | BUILD                                                          |
   | tags                                 | []                                                             |
   | tenant_id                            | 640e791e767e49939d5c600fdb3f8431                               |
   | updated                              | 2017-01-14T02:39:42Z                                           |
   | user_id                              | 8e84fae0a5b74464b3300a4576d090a4                               |
   +--------------------------------------+----------------------------------------------------------------+

Check to see if instance2 is active.

.. code-block:: console

   $ nova --os-region-name=RegionTwo list
   +--------------------------------------+-----------+--------+------------+-------------+----------------+
   | ID                                   | Name      | Status | Task State | Power State | Networks       |
   +--------------------------------------+-----------+--------+------------+-------------+----------------+
   | e489ab4e-957d-4537-9870-fff87406aac5 | instance2 | ACTIVE | -          | Running     | net2=10.0.2.10 |
   +--------------------------------------+-----------+--------+------------+-------------+----------------+

You can now ping instance2 from instance1, and vice versa.

Create a floating IP for instance2.

.. code-block:: console

   $ neutron --os-region-name=CentralRegion floatingip-create ext-net1
   +---------------------+--------------------------------------+
   | Field               | Value                                |
   +---------------------+--------------------------------------+
   | created_at          | 2017-01-14T02:40:55Z                 |
   | description         |                                      |
   | fixed_ip_address    |                                      |
   | floating_ip_address | 163.3.124.13                         |
   | floating_network_id | 494a1d2f-9a0f-4d0d-a5e9-f926fce912ac |
   | id                  | f917dede-6e0d-4c5a-8d02-7d5774d094ba |
   | port_id             |                                      |
   | project_id          | 640e791e767e49939d5c600fdb3f8431     |
   | revision_number     | 1                                    |
   | router_id           |                                      |
   | status              | DOWN                                 |
   | tenant_id           | 640e791e767e49939d5c600fdb3f8431     |
   | updated_at          | 2017-01-14T02:40:55Z                 |
   +---------------------+--------------------------------------+

List the port of instance2.

.. code-block:: console

   $ neutron --os-region-name=CentralRegion port-list
   +------------------------------------+------------------------------------+-------------------+--------------------------------------+
   | id                                 | name                               | mac_address       | fixed_ips                            |
   +------------------------------------+------------------------------------+-------------------+--------------------------------------+
   | 37e9cfe5-d410-4625-963d-           |                                    | fa:16:3e:14:47:a8 | {"subnet_id": "409f3b9e-             |
   | b7ea4347d72e                       |                                    |                   | 3b14-4147-9443-51930eb9a882",        |
   |                                    |                                    |                   | "ip_address": "10.0.1.5"}            |
   | ed9bdc02-0f0d-4763-a993-e0972c6563 |                                    | fa:16:3e:c1:10:a3 | {"subnet_id": "4e3376f8-0bda-450d-   |
   | fa                                 |                                    |                   | b4fb-9eb77c4ef919", "ip_address":    |
   |                                    |                                    |                   | "10.0.2.10"}                         |
   | 92eaf94d-e345-489a-                |                                    | fa:16:3e:63:a9:08 | {"subnet_id": "409f3b9e-             |
   | bc91-3d3645d27f8b                  |                                    |                   | 3b14-4147-9443-51930eb9a882",        |
   |                                    |                                    |                   | "ip_address": "10.0.1.1"}            |
   | f98ceee7-777b-4cff-                | interface_RegionTwo_409f3b9e-      | fa:16:3e:aa:cf:e2 | {"subnet_id": "409f3b9e-             |
   | b5b9-c27b4277bb7f                  | 3b14-4147-9443-51930eb9a882        |                   | 3b14-4147-9443-51930eb9a882",        |
   |                                    |                                    |                   | "ip_address": "10.0.1.12"}           |
   | d3ca5e74-470e-4953-a280-309b5e8e11 | dhcp_port_409f3b9e-                | fa:16:3e:7e:72:98 | {"subnet_id": "409f3b9e-             |
   | 46                                 | 3b14-4147-9443-51930eb9a882        |                   | 3b14-4147-9443-51930eb9a882",        |
   |                                    |                                    |                   | "ip_address": "10.0.1.2"}            |
   | b4eef6a0-70e6-4a42-b0c5-f8f49cee25 | interface_RegionOne_409f3b9e-      | fa:16:3e:00:e1:5b | {"subnet_id": "409f3b9e-             |
   | c0                                 | 3b14-4147-9443-51930eb9a882        |                   | 3b14-4147-9443-51930eb9a882",        |
   |                                    |                                    |                   | "ip_address": "10.0.1.7"}            |
   | d4b0e6d9-8bfb-                     |                                    | fa:16:3e:f9:5f:4e | {"subnet_id": "4e3376f8-0bda-450d-   |
   | 4cd6-8824-92731c0226da             |                                    |                   | b4fb-9eb77c4ef919", "ip_address":    |
   |                                    |                                    |                   | "10.0.2.1"}                          |
   | e54f0a40-837f-                     | interface_RegionTwo_4e3376f8-0bda- | fa:16:3e:fa:84:da | {"subnet_id": "4e3376f8-0bda-450d-   |
   | 48e7-9397-55170300d06e             | 450d-b4fb-9eb77c4ef919             |                   | b4fb-9eb77c4ef919", "ip_address":    |
   |                                    |                                    |                   | "10.0.2.11"}                         |
   | d458644d-a401-4d98-bec3-9468fdd56d | dhcp_port_4e3376f8-0bda-450d-b4fb- | fa:16:3e:b2:a6:03 | {"subnet_id": "4e3376f8-0bda-450d-   |
   | 1c                                 | 9eb77c4ef919                       |                   | b4fb-9eb77c4ef919", "ip_address":    |
   |                                    |                                    |                   | "10.0.2.2"}                          |
   | 65b52fe3-f765-4124-a97f-           | bridge_port_640e791e767e49939d5c60 | fa:16:3e:df:7b:97 | {"subnet_id": "d637f4e5-4b9a-4237    |
   | f73a76e820e6                       | 0fdb3f8431_daa08da0-c60e-          |                   | -b3bc-ccfba45a5c37", "ip_address":   |
   |                                    | 42c8-bc30-1ed887111ecb             |                   | "100.0.0.7"}                         |
   | cee45aac-                          | bridge_port_640e791e767e49939d5c60 | fa:16:3e:d0:50:0d | {"subnet_id": "d637f4e5-4b9a-4237    |
   | fd07-4a2f-8008-02757875d1fe        | 0fdb3f8431_b072000e-3cd1-4a1a-     |                   | -b3bc-ccfba45a5c37", "ip_address":   |
   |                                    | aa60-9ffbca119b1a                  |                   | "100.0.0.8"}                         |
   | dd4707cc-fe2d-429c-8c2f-           |                                    | fa:16:3e:9e:85:62 | {"subnet_id": "5485feab-f843-4ffe-   |
   | 084b525e1789                       |                                    |                   | abd5-6afe5319ad82", "ip_address":    |
   |                                    |                                    |                   | "163.3.124.13"}                      |
   | e0755307-a498-473e-                |                                    | fa:16:3e:1c:70:b9 | {"subnet_id": "5485feab-f843-4ffe-   |
   | 99e5-30cbede36b8e                  |                                    |                   | abd5-6afe5319ad82", "ip_address":    |
   |                                    |                                    |                   | "163.3.124.7"}                       |
   | 2404eb83-f2f4-4a36-b377-dbc8befee1 |                                    | fa:16:3e:25:80:e6 | {"subnet_id": "5485feab-f843-4ffe-   |
   | 93                                 |                                    |                   | abd5-6afe5319ad82", "ip_address":    |
   |                                    |                                    |                   | "163.3.124.9"}                       |
   +------------------------------------+------------------------------------+-------------------+--------------------------------------+

Associate the floating IP to instance2's IP address in net2.

.. code-block:: console

   $ neutron --os-region-name=CentralRegion floatingip-associate f917dede-6e0d-4c5a-8d02-7d5774d094ba ed9bdc02-0f0d-4763-a993-e0972c6563fa
   Associated floating IP f917dede-6e0d-4c5a-8d02-7d5774d094ba

Make sure the floating IP association works.

.. code-block:: console

   $ neutron --os-region-name=CentralRegion floatingip-list
   +--------------------------------------+------------------+---------------------+--------------------------------------+
   | id                                   | fixed_ip_address | floating_ip_address | port_id                              |
   +--------------------------------------+------------------+---------------------+--------------------------------------+
   | 04c18e73-675b-4273-a73a-afaf1e4f9811 | 10.0.1.5         | 163.3.124.7         | 37e9cfe5-d410-4625-963d-b7ea4347d72e |
   | f917dede-6e0d-4c5a-8d02-7d5774d094ba | 10.0.2.10        | 163.3.124.13        | ed9bdc02-0f0d-4763-a993-e0972c6563fa |
   +--------------------------------------+------------------+---------------------+--------------------------------------+

   $ neutron --os-region-name=RegionThree floatingip-list
   +--------------------------------------+------------------+---------------------+--------------------------------------+
   | id                                   | fixed_ip_address | floating_ip_address | port_id                              |
   +--------------------------------------+------------------+---------------------+--------------------------------------+
   | 3a220f53-fdfe-44e3-847a-b00464135416 | 10.0.1.5         | 163.3.124.7         | 37e9cfe5-d410-4625-963d-b7ea4347d72e |
   | fe15192f-04cb-48c8-8a90-7a7c016f40ae | 10.0.2.10        | 163.3.124.13        | ed9bdc02-0f0d-4763-a993-e0972c6563fa |
   +--------------------------------------+------------------+---------------------+--------------------------------------+

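Note that the central Neutron and the local Neutron return different floating
IP ids but agree on the address mappings. The consistency check can be pictured
with a small illustrative helper (plain Python, hypothetical, not part of
Tricircle), using the values from the listings above:

```python
def same_associations(central, local):
    """Compare floating IP listings by (fixed_ip, floating_ip, port_id),
    ignoring the ids, which differ between central and local Neutron."""
    def key(fips):
        return {(f["fixed_ip_address"], f["floating_ip_address"], f["port_id"])
                for f in fips}
    return key(central) == key(local)

central = [
    {"id": "04c18e73-675b-4273-a73a-afaf1e4f9811", "fixed_ip_address": "10.0.1.5",
     "floating_ip_address": "163.3.124.7", "port_id": "37e9cfe5-d410-4625-963d-b7ea4347d72e"},
    {"id": "f917dede-6e0d-4c5a-8d02-7d5774d094ba", "fixed_ip_address": "10.0.2.10",
     "floating_ip_address": "163.3.124.13", "port_id": "ed9bdc02-0f0d-4763-a993-e0972c6563fa"},
]
local = [
    {"id": "3a220f53-fdfe-44e3-847a-b00464135416", "fixed_ip_address": "10.0.1.5",
     "floating_ip_address": "163.3.124.7", "port_id": "37e9cfe5-d410-4625-963d-b7ea4347d72e"},
    {"id": "fe15192f-04cb-48c8-8a90-7a7c016f40ae", "fixed_ip_address": "10.0.2.10",
     "floating_ip_address": "163.3.124.13", "port_id": "ed9bdc02-0f0d-4763-a993-e0972c6563fa"},
]
print(same_associations(central, local))  # True
```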
Networking Guide
================

Tricircle provides networking automation across Neutron servers in
multi-region OpenStack cloud deployments, and many cross-Neutron networking
modes are supported. This guide describes how to use the CLI to set up the
typical networking modes.

.. include:: ./networking-terms.rst
.. include:: ./networking-prerequisites.rst
.. include:: ./networking-scenarios.rst
.. include:: ./service-function-chaining-guide.rst
.. include:: ./vlan-aware-vms-guide.rst

=============
Prerequisites
=============

One CentralRegion in which central Neutron and the Tricircle services are
started, with central Neutron properly configured with the Tricircle Central
Neutron plugin, and at least two regions (RegionOne, RegionTwo) in which
local Neutron is properly configured with the Tricircle Local Neutron plugin.

RegionOne is mapped to az1, and RegionTwo is mapped to az2 by pod management
through the Tricircle Admin API.

You can use az1 or RegionOne as the value of availability-zone-hint when
creating a network. Although in this document each availability zone contains
only one region, an availability zone can include more than one region in
Tricircle pod management. If you specify az1 as the value, the network will
reside in az1; because az1 is mapped to RegionOne, the network is created
there, and if you add more regions into az1, the network can spread into
those regions too.

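The pod-to-availability-zone mapping amounts to a simple lookup. The sketch
below is illustrative only (plain Python, not the Tricircle Admin API), with
the az and region names taken from this document:

```python
# Hypothetical model of Tricircle pod management: each availability zone
# groups one or more regions (pods).
AZ_MAP = {
    "az1": ["RegionOne"],
    "az2": ["RegionTwo"],
}

def regions_for_hint(hint: str) -> list:
    """Resolve an availability-zone-hint to the regions a network can reside in.

    The hint may be an availability zone name or a region name directly."""
    if hint in AZ_MAP:
        return AZ_MAP[hint]
    # A bare region name pins the network to exactly that region.
    all_regions = [r for regions in AZ_MAP.values() for r in regions]
    return [hint] if hint in all_regions else []

print(regions_for_hint("az1"))        # ['RegionOne']
print(regions_for_hint("RegionTwo"))  # ['RegionTwo']
```

Adding another region to ``az1`` in this model would make a network hinted
with ``az1`` eligible to spread into that region as well.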
Please refer to the installation guide and the configuration guide for how to
set up a multi-region environment with the Tricircle service enabled.

If you set up the environment through DevStack, you can get the settings used
in this document as follows.

Suppose that each node has three interfaces: eth1 is used for the tenant VLAN
network and eth2 for the external VLAN network. If you want to verify the
data plane connectivity, please make sure the bridges "br-vlan" and "br-ext"
are connected to the corresponding interfaces. Use the following commands to
connect the bridges to the physical Ethernet interfaces; as shown below,
"br-vlan" is wired to eth1, and "br-ext" to eth2::

   sudo ovs-vsctl add-br br-vlan
   sudo ovs-vsctl add-port br-vlan eth1
   sudo ovs-vsctl add-br br-ext
   sudo ovs-vsctl add-port br-ext eth2

Suppose the VLAN range for the tenant network is 101~150 and for the external
network 151~200. In the node which will run central Neutron and the Tricircle
services, configure local.conf like this::

   Q_ML2_PLUGIN_VLAN_TYPE_OPTIONS=(network_vlan_ranges=bridge:101:150,extern:151:200)
   OVS_BRIDGE_MAPPINGS=bridge:br-vlan,extern:br-ext

   TRICIRCLE_START_SERVICES=True
   enable_plugin tricircle https://github.com/openstack/tricircle/

In the node which will run local Neutron without the Tricircle services,
configure local.conf like this::

   Q_ML2_PLUGIN_VLAN_TYPE_OPTIONS=(network_vlan_ranges=bridge:101:150,extern:151:200)
   OVS_BRIDGE_MAPPINGS=bridge:br-vlan,extern:br-ext

   TRICIRCLE_START_SERVICES=False
   enable_plugin tricircle https://github.com/openstack/tricircle/

You may have noticed that the only difference is whether
TRICIRCLE_START_SERVICES is True or False. All examples given in this
document are based on these settings.

If you also want to configure a VxLAN network, suppose the VxLAN range for
the tenant network is 1001~2000, and add the following configuration to the
above local.conf::

   Q_ML2_PLUGIN_VXLAN_TYPE_OPTIONS=(vni_ranges=1001:2000)

If you also want to configure a flat network, suppose you use the same
physical network as the VLAN network, and configure local.conf like this::

   Q_ML2_PLUGIN_FLAT_TYPE_OPTIONS=(flat_networks=bridge,extern)

In both RegionOne and RegionTwo an external network can be provisioned; the
settings will look like this in /etc/neutron/plugins/ml2/ml2_conf.ini::

   network_vlan_ranges = bridge:101:150,extern:151:200

   vni_ranges = 1001:2000 (or the range that you configure)

   flat_networks = bridge,extern

   bridge_mappings = bridge:br-vlan,extern:br-ext

Please be aware that the physical network name for the tenant VLAN network is
"bridge", and the physical network name for the external network is "extern".

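The ``network_vlan_ranges`` value maps a physical network name to an
inclusive VLAN ID range. A short illustrative parser for the format used
above (plain Python, not Neutron's own parsing code):

```python
def parse_network_vlan_ranges(value: str) -> dict:
    """Parse 'physnet:min:max,physnet:min:max' into physnet -> VLAN ID range."""
    ranges = {}
    for entry in value.split(","):
        physnet, lo, hi = entry.split(":")
        ranges[physnet] = range(int(lo), int(hi) + 1)  # upper bound is inclusive
    return ranges

ranges = parse_network_vlan_ranges("bridge:101:150,extern:151:200")
print(sorted(ranges))           # ['bridge', 'extern']
print(140 in ranges["bridge"])  # True
print(151 in ranges["bridge"])  # False: 151 belongs to the "extern" range
```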
In central Neutron's configuration file, the default settings look as
follows::

   bridge_network_type = vxlan
   network_vlan_ranges = bridge:101:150,extern:151:200
   vni_ranges = 1001:2000
   flat_networks = bridge,extern
   tenant_network_types = vxlan,vlan,flat,local
   type_drivers = vxlan,vlan,flat,local

If you want to create a local network, it is recommended that you specify
availability_zone_hint as the region name when creating the network, instead
of specifying the network type as "local". The "local" type has two
drawbacks. One is that you cannot control the exact type of the network in
local Neutron; it is up to your local Neutron's configuration. The other is
that the segment ID of the network is allocated by local Neutron, so it may
conflict with a segment ID that is allocated by central Neutron. Considering
these problems, we plan to deprecate the "local" type.

If you want to create an L2 network across multiple Neutron servers, then you
have to specify --provider-network-type vlan in the network creation command
for the VLAN network type, or --provider-network-type vxlan for the VxLAN
network type. Both the VLAN and VxLAN network types can work as the bridge
network. The default bridge network type is VxLAN.

If you want to create a flat network, which is usually used as the external
network type, then you have to specify --provider-network-type flat in the
network creation command.

You can create L2 networks for different purposes, and the supported network
types for each purpose are summarized as follows.

.. _supported_network_types:

.. list-table::
   :header-rows: 1

   * - Networking purpose
     - Supported
   * - Local L2 network for instances
     - FLAT, VLAN, VxLAN
   * - Cross Neutron L2 network for instances
     - FLAT, VLAN, VxLAN
   * - Bridge network for routers
     - FLAT, VLAN, VxLAN
   * - External network
     - FLAT, VLAN

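The table above can be expressed as a small lookup for validating a requested
provider network type against its purpose (an illustrative helper, not part
of Tricircle; the purpose keys are made up for this sketch):

```python
# Supported provider network types per networking purpose, as listed above.
SUPPORTED_TYPES = {
    "local_l2": {"flat", "vlan", "vxlan"},
    "cross_neutron_l2": {"flat", "vlan", "vxlan"},
    "bridge": {"flat", "vlan", "vxlan"},
    "external": {"flat", "vlan"},
}

def is_supported(purpose: str, network_type: str) -> bool:
    """Check whether a provider network type is valid for a given purpose."""
    return network_type.lower() in SUPPORTED_TYPES.get(purpose, set())

print(is_supported("bridge", "vxlan"))    # True: VxLAN is the default bridge type
print(is_supported("external", "vxlan"))  # False: external networks are FLAT/VLAN only
```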
===================
Networking Scenario
===================

.. toctree::
   :maxdepth: 4

   networking-guide-direct-provider-networks.rst
   networking-guide-multiple-external-networks.rst
   networking-guide-multiple-ns-with-ew-enabled.rst
   networking-guide-single-external-network.rst
   networking-guide-local-networking.rst
   networking-guide-newL3-using-segment.rst

================
Networking Terms
================

Four important networking terms are used in networking automation across
Neutron.

Local Network

- A Local Network is a network which can only reside in one OpenStack cloud.
- The network type can be VLAN, VxLAN, or Flat.
- If you specify a region name as the value of availability-zone-hint during
  network creation, the network will be created as a local network in that
  region.
- If the default network type in central Neutron is configured to "local",
  the network will be a local network no matter whether you specify
  availability-zone-hint or not, as long as no non-local provider network
  type is given explicitly.
- An external network should be created as a local network; that is, an
  external network explicitly resides in one specified region. Each region
  may provide multiple external networks, so there is no limitation on how
  many external networks can be created.
- For example, a local network can be created as follows:

  .. code-block:: console

     openstack --os-region-name=CentralRegion network create --availability-zone-hint=RegionOne net1

Local Router

- A Local Router is a logical router which can only reside in one OpenStack
  cloud.
- If you specify a region name as the value of availability-zone-hint during
  router creation, the router will be created as a local router in that
  region.
- For example, a local router can be created as follows:

  .. code-block:: console

     neutron --os-region-name=CentralRegion router-create --availability-zone-hint RegionOne R1

Cross Neutron L2 Network
|
||||
- Cross Neutron L2 Network is a network which can be stretched into more
|
||||
than one Neutron servers, these Neutron servers may work in one
|
||||
OpenStack cloud or multiple OpenStack clouds.
|
||||
- Network type could be VLAN, VxLAN, Flat.
|
||||
- During the network creation, if availability-zone-hint is not specified,
|
||||
or specified with availability zone name, or more than one region name,
|
||||
or more than one availability zone name, then the network will be created
|
||||
as cross Neutron L2 network.
|
||||
- If the default network type to be created is not configured to "local" in
|
||||
central Neutron, then the network will be cross Neutron L2 network if
|
||||
the network was created without specified provider network type and single
|
||||
region name in availability-zone-hint.
|
||||
- For example, cross Neutron L2 network could be created as follows:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
neutron --os-region-name=CentralRegion net-create --provider:network_type vxlan --availability-zone-hint RegionOne --availability-zone-hint RegionTwo net1
|
||||
|
||||
Non-Local Router
|
||||
- Non-Local Router will be able to reside in more than one OpenStack cloud,
|
||||
and internally inter-connected with bridge network.
|
||||
- Bridge network used internally for non-local router is a special cross
|
||||
Neutron L2 network.
|
||||
- Local networks or cross Neutron L2 networks can be attached to local
|
||||
router or non-local routers if the network can be presented in the region
|
||||
where the router can reside.
|
||||
- During the router creation, if availability-zone-hint is not specified,
|
||||
or specified with availability zone name, or more than one region name,
|
||||
or more than one availability zone name, then the router will be created
|
||||
as non-local router.
|
||||
- For example, non-local router could be created as follows:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
neutron --os-region-name=CentralRegion router-create --availability-zone-hint RegionOne --availability-zone-hint RegionTwo R3
|
||||
|
||||
It's also important to understand that cross Neutron L2 network, local
|
||||
router and non-local router can be created for different north-south/east-west
|
||||
networking purpose.
|
||||
|
||||
North-South and East-West Networking
|
||||
- Instances in different OpenStack clouds can be attached to a cross
|
||||
Neutron L2 network directly, so that they can communicate with
|
||||
each other no matter in which OpenStack cloud.
|
||||
- If L3 networking across OpenStack clouds is preferred, local network
|
||||
attached to non-local router can be created for instances to attach.
|
||||
- Local router can be set gateway with external networks to support
|
||||
north-south traffic handled locally.
|
||||
- Non-local router can work only for cross Neutron east-west networking
|
||||
purpose if no external network is set to the router.
|
||||
- Non-local router can serve as the centralized north-south traffic gateway
|
||||
if external network is attached to the router, and support east-west
|
||||
traffic at the same time.
|
@ -1,120 +0,0 @@

===============================
Service Function Chaining Guide
===============================

Service Function Chaining provides the ability to define an ordered list of
network services (e.g. firewalls, load balancers). These services are then
"stitched" together in the network to create a service chain.

Installation
^^^^^^^^^^^^

After installing the Tricircle, please refer to
https://docs.openstack.org/networking-sfc/latest/install/install.html
to install networking-sfc.

Configuration
^^^^^^^^^^^^^

- 1 Configure central Neutron server

  After installing the Tricircle and networking-sfc, enable the service plugins
  in the central Neutron server by adding them in ``neutron.conf.0``
  (typically found in ``/etc/neutron/``)::

      service_plugins=networking_sfc.services.flowclassifier.plugin.FlowClassifierPlugin,tricircle.network.central_sfc_plugin.TricircleSfcPlugin

  In the same configuration file, specify the driver to use in the plugins. ::

      [sfc]
      drivers = tricircle_sfc

      [flowclassifier]
      drivers = tricircle_fc

- 2 Configure local Neutron

  Please refer to https://docs.openstack.org/networking-sfc/latest/install/configuration.html
  to configure local networking-sfc.

How to play
^^^^^^^^^^^

- 1 Create pods via the Tricircle Admin API

- 2 Create the necessary resources in the central Neutron server ::

      neutron --os-region-name=CentralRegion net-create --provider:network_type vxlan net1
      neutron --os-region-name=CentralRegion subnet-create net1 10.0.0.0/24
      neutron --os-region-name=CentralRegion port-create net1 --name p1
      neutron --os-region-name=CentralRegion port-create net1 --name p2
      neutron --os-region-name=CentralRegion port-create net1 --name p3
      neutron --os-region-name=CentralRegion port-create net1 --name p4
      neutron --os-region-name=CentralRegion port-create net1 --name p5
      neutron --os-region-name=CentralRegion port-create net1 --name p6

  Please note that the network type must be vxlan.

- 3 Get the image ID and flavor ID which will be used when booting VMs. In the
  following step, VMs will be booted in RegionOne and RegionTwo. ::

      glance --os-region-name=RegionOne image-list
      nova --os-region-name=RegionOne flavor-list
      glance --os-region-name=RegionTwo image-list
      nova --os-region-name=RegionTwo flavor-list

- 4 Boot virtual machines ::

      openstack --os-region-name=RegionOne server create --flavor 1 --image $image1_id --nic port-id=$p1_id vm_src
      openstack --os-region-name=RegionOne server create --flavor 1 --image $image1_id --nic port-id=$p2_id --nic port-id=$p3_id vm_sfc1
      openstack --os-region-name=RegionTwo server create --flavor 1 --image $image2_id --nic port-id=$p4_id --nic port-id=$p5_id vm_sfc2
      openstack --os-region-name=RegionTwo server create --flavor 1 --image $image2_id --nic port-id=$p6_id vm_dst

- 5 Create port pairs in the central Neutron server ::

      neutron --os-region-name=CentralRegion port-pair-create --ingress p2 --egress p3 pp1
      neutron --os-region-name=CentralRegion port-pair-create --ingress p4 --egress p5 pp2

- 6 Create port pair groups in the central Neutron server ::

      neutron --os-region-name=CentralRegion port-pair-group-create --port-pair pp1 ppg1
      neutron --os-region-name=CentralRegion port-pair-group-create --port-pair pp2 ppg2

- 7 Create a flow classifier in the central Neutron server ::

      neutron --os-region-name=CentralRegion flow-classifier-create --source-ip-prefix 10.0.0.0/24 --logical-source-port p1 fc1

- 8 Create a port chain in the central Neutron server ::

      neutron --os-region-name=CentralRegion port-chain-create --flow-classifier fc1 --port-pair-group ppg1 --port-pair-group ppg2 pc1

- 9 Show the result in CentralRegion, RegionOne and RegionTwo ::

      neutron --os-region-name=CentralRegion port-chain-list
      neutron --os-region-name=RegionOne port-chain-list
      neutron --os-region-name=RegionTwo port-chain-list

  You will find the same port chain in each region.

- 10 Check whether the port chain is working

  In vm_dst, ping p1's IP address; it should fail.

  Enable the forwarding function of vm_sfc1 and vm_sfc2 ::

      sudo sh
      echo 1 > /proc/sys/net/ipv4/ip_forward

  Add the following route in vm_sfc1 and vm_sfc2 ::

      sudo ip route add $p6_ip_address dev eth1

  In vm_dst, ping p1's IP address again; it should succeed this time.

.. note:: Not all images will bring up the second NIC, so you can ssh into the vm
   and use "ifconfig -a" to check whether all NICs are up, then bring up all NICs
   if necessary. In CirrOS you can type the following command to bring up one NIC. ::

       sudo cirros-dhcpc up $nic_name
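To further confirm that traffic really traverses the service chain rather than
taking a direct path, you can capture packets on a service VM while vm_dst is
pinging. A sketch, assuming the ingress port of port pair pp1 shows up as eth0
inside vm_sfc1 (the guest interface name is an assumption and varies by image)::

    sudo tcpdump -n -i eth0 icmp

While the ping from vm_dst is running, ICMP packets appearing in this capture
indicate that the chain is steering the traffic through vm_sfc1.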
@ -1,79 +0,0 @@

====================
VLAN aware VMs Guide
====================

A VLAN aware VM is a VM that sends and receives VLAN tagged frames over its vNIC.
The main point of this is to overcome the limitations of the current
one-vNIC-per-network model. A VLAN (or other encapsulation) aware VM can
differentiate between the traffic of many networks by different encapsulation
types and IDs, instead of using many vNICs. This approach scales to a higher
number of networks and enables dynamic handling of network attachments (without
hotplugging vNICs).

Installation
^^^^^^^^^^^^

No additional installation is required. Please refer to the Tricircle
installation guide to install the Tricircle, then configure the Neutron server
to enable the trunk extension.

Configuration
^^^^^^^^^^^^^

- 1 Configure central Neutron server

  Edit neutron.conf, add the following configuration, then restart the central
  Neutron server

  .. csv-table::
     :header: "Option", "Description", "Example"

     [DEFAULT] service_plugins, "service plugin the central Neutron server uses", tricircle.network.central_trunk_plugin.TricircleTrunkPlugin

- 2 Configure local Neutron server

  Edit neutron.conf, add the following configuration, then restart the local
  Neutron server

  .. csv-table::
     :header: "Option", "Description", "Example"

     [DEFAULT] service_plugins, "service plugin the local Neutron server uses", trunk

How to play
^^^^^^^^^^^

- 1 Create pods via the Tricircle Admin API

- 2 Create the necessary resources in the central Neutron server ::

      neutron --os-region-name=CentralRegion net-create --provider:network_type vlan net1
      neutron --os-region-name=CentralRegion subnet-create net1 10.0.1.0/24
      neutron --os-region-name=CentralRegion port-create net1 --name p1
      neutron --os-region-name=CentralRegion net-create --provider:network_type vlan net2
      neutron --os-region-name=CentralRegion subnet-create net2 10.0.2.0/24
      neutron --os-region-name=CentralRegion port-create net2 --name p2

  Please note that the network type must be vlan. The ports p1 and p2 and net2's
  provider segmentation_id will be used in later steps to create the trunk and
  boot the vm.

- 3 Create a trunk in the central Neutron server ::

      openstack --os-region-name=CentralRegion network trunk create trunk1 --parent-port p1 --subport port=p2,segmentation-type=vlan,segmentation-id=$net2_segment_id

- 4 Get the image ID and flavor ID which will be used when booting the VM. In the
  following step the trunk is used by a VM in RegionOne; you can replace RegionOne
  with another region's name if you want to boot a VLAN aware VM in another
  region. ::

      glance --os-region-name=RegionOne image-list
      nova --os-region-name=RegionOne flavor-list

- 5 Boot virtual machines ::

      nova --os-region-name=RegionOne boot --flavor 1 --image $image1_id --nic port-id=$p1_id vm1

- 6 Show the result in CentralRegion and RegionOne ::

      openstack --os-region-name=CentralRegion network trunk show trunk1
      openstack --os-region-name=RegionOne network trunk show trunk1

  The result will be the same, except for the trunk id.
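Inside vm1 the subport's traffic arrives VLAN tagged on the parent vNIC, so a
VLAN subinterface matching the subport's segmentation ID has to be configured
in the guest before net2 is reachable. A sketch, assuming the parent vNIC
appears as eth0 in the guest and using an unused address in net2's subnet
(both the interface name and the address are assumptions)::

    sudo ip link add link eth0 name eth0.$net2_segment_id type vlan id $net2_segment_id
    sudo ip link set eth0.$net2_segment_id up
    sudo ip addr add 10.0.2.10/24 dev eth0.$net2_segment_id

With the subinterface up, vm1 can exchange traffic on both net1 (untagged, via
p1) and net2 (tagged, via the p2 subport) over the single vNIC.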
@ -1,9 +0,0 @@

====================
Tricircle User Guide
====================

.. toctree::
   :maxdepth: 3

   readme
   usage
@ -1,2 +0,0 @@

.. include:: ../../../README.rst
   :start-line: 10

@ -1,7 +0,0 @@

=====
Usage
=====

To use tricircle in a project::

    import tricircle
@ -1,16 +0,0 @@

[DEFAULT]
output_file = etc/api.conf.sample
wrap_width = 79
namespace = tricircle.api
namespace = tricircle.common
namespace = tricircle.db
namespace = oslo.log
namespace = oslo.messaging
namespace = oslo.policy
namespace = oslo.service.periodic_task
namespace = oslo.service.service
namespace = oslo.service.sslutils
namespace = oslo.db
namespace = oslo.middleware
namespace = oslo.concurrency
namespace = keystonemiddleware.auth_token

@ -1,3 +0,0 @@

[DEFAULT]
output_file = etc/tricircle-policy.yaml.sample
namespace = tricircle

@ -1,15 +0,0 @@

[DEFAULT]
output_file = etc/xjob.conf.sample
wrap_width = 79
namespace = tricircle.xjob
namespace = tricircle.common
namespace = oslo.log
namespace = oslo.messaging
namespace = oslo.policy
namespace = oslo.service.periodic_task
namespace = oslo.service.service
namespace = oslo.service.sslutils
namespace = oslo.db
namespace = oslo.middleware
namespace = oslo.concurrency
namespace = keystonemiddleware.auth_token
44
index.rst
@ -1,44 +0,0 @@

.. tricircle documentation master file, created by
   sphinx-quickstart on Wed Dec 2 17:00:36 2015.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.

=====================================
Welcome to Tricircle's documentation!
=====================================

User Documentation
==================
.. toctree::
   :maxdepth: 3

   user/index

Contributor Guide
=================
.. toctree::
   :maxdepth: 1

   contributor/index

Admin Guide
===========
.. toctree::
   :maxdepth: 3

   admin/index

Installation Guide
==================
.. toctree::
   :maxdepth: 3

   install/index

Configuration Guide
===================
.. toctree::
   :maxdepth: 3

   configuration/index
@ -1,154 +0,0 @@
|
|||
alabaster==0.7.10
|
||||
alembic==0.8.10
|
||||
amqp==2.1.1
|
||||
appdirs==1.3.0
|
||||
astroid==1.6.5
|
||||
Babel==2.3.4
|
||||
bandit==1.1.0
|
||||
bashate==0.5.1
|
||||
beautifulsoup4==4.6.0
|
||||
cachetools==2.0.0
|
||||
cliff==2.8.0
|
||||
cmd2==0.8.0
|
||||
contextlib2==0.4.0
|
||||
coverage==4.0
|
||||
ddt==1.0.1
|
||||
debtcollector==1.19.0
|
||||
decorator==3.4.0
|
||||
deprecation==1.0
|
||||
docutils==0.11
|
||||
dogpile.cache==0.6.2
|
||||
dulwich==0.15.0
|
||||
eventlet==0.18.2
|
||||
extras==1.0.0
|
||||
fasteners==0.7.0
|
||||
fixtures==3.0.0
|
||||
future==0.16.0
|
||||
futurist==1.2.0
|
||||
gitdb==0.6.4
|
||||
GitPython==1.0.1
|
||||
greenlet==0.4.10
|
||||
httplib2==0.9.1
|
||||
imagesize==0.7.1
|
||||
iso8601==0.1.11
|
||||
Jinja2==2.10
|
||||
jmespath==0.9.0
|
||||
jsonpatch==1.16
|
||||
jsonpointer==1.13
|
||||
jsonschema==2.6.0
|
||||
keystoneauth1==3.4.0;python_version<'3.3'
|
||||
keystoneauth1==3.14.0;python_version>'3.3'
|
||||
keystonemiddleware==4.17.0
|
||||
kombu==4.0.0
|
||||
linecache2==1.0.0
|
||||
logilab-common==1.4.1
|
||||
logutils==0.3.5
|
||||
Mako==0.4.0
|
||||
MarkupSafe==1.0
|
||||
mccabe==0.2.1
|
||||
mock==3.0.0
|
||||
monotonic==0.6;python_version<'3.3'
|
||||
mox3==0.20.0
|
||||
msgpack-python==0.4.0
|
||||
munch==2.1.0
|
||||
netaddr==0.7.18
|
||||
netifaces==0.10.4
|
||||
networking-sfc==8.0.0.0b1
|
||||
neutron-lib==1.25.0;python_version<'3.3'
|
||||
neutron-lib==1.29.1;python_version>'3.3'
|
||||
openstackdocstheme==1.30.0
|
||||
openstacksdk==0.31.2
|
||||
os-client-config==1.28.0
|
||||
os-service-types==1.7.0
|
||||
os-xenapi==0.3.1
|
||||
osc-lib==1.8.0
|
||||
oslo.cache==1.26.0
|
||||
oslo.concurrency==3.26.0
|
||||
oslo.config==5.2.0
|
||||
oslo.context==2.19.2
|
||||
oslo.db==4.37.0
|
||||
oslo.i18n==3.15.3
|
||||
oslo.log==3.36.0
|
||||
oslo.messaging==5.29.0
|
||||
oslo.middleware==3.31.0
|
||||
oslo.policy==1.30.0
|
||||
oslo.privsep==1.32.0
|
||||
oslo.reports==1.18.0
|
||||
oslo.rootwrap==5.8.0
|
||||
oslo.serialization==2.18.0
|
||||
oslo.service==1.24.0
|
||||
oslo.upgradecheck==0.1.1
|
||||
oslo.utils==3.33.0
|
||||
oslo.versionedobjects==1.35.1
|
||||
oslosphinx==4.7.0
|
||||
oslotest==3.2.0
|
||||
osprofiler==2.3.0
|
||||
os-testr==1.0.0
|
||||
ovs==2.8.0
|
||||
ovsdbapp==0.12.1
|
||||
Paste==2.0.2
|
||||
PasteDeploy==1.5.0
|
||||
pbr==4.0.0
|
||||
pecan==1.3.2
|
||||
pika-pool==0.1.3
|
||||
pika==0.10.0
|
||||
positional==1.2.1
|
||||
prettytable==0.7.2
|
||||
psutil==3.2.2
|
||||
pycadf==1.1.0
|
||||
pycodestyle==2.4.0
|
||||
pycparser==2.18
|
||||
Pygments==2.2.0
|
||||
pyinotify==0.9.6
|
||||
pylint==2.2.0
|
||||
PyMySQL==0.7.6
|
||||
pyparsing==2.1.0
|
||||
pyperclip==1.5.27
|
||||
pyroute2==0.5.3
|
||||
python-cinderclient==3.3.0
|
||||
python-dateutil==2.5.3
|
||||
python-designateclient==2.7.0
|
||||
python-editor==1.0.3
|
||||
python-glanceclient==2.8.0
|
||||
python-keystoneclient==3.8.0
|
||||
python-mimeparse==1.6.0
|
||||
python-neutronclient==6.7.0
|
||||
python-novaclient==9.1.0
|
||||
python-subunit==1.0.0
|
||||
pytz==2013.6
|
||||
PyYAML==3.12
|
||||
reno==2.5.0
|
||||
repoze.lru==0.7
|
||||
requests==2.14.2
|
||||
requests-mock==1.2.0
|
||||
requestsexceptions==1.2.0
|
||||
rfc3986==0.3.1
|
||||
Routes==2.3.1
|
||||
ryu==4.24
|
||||
simplejson==3.5.1
|
||||
six==1.10.0
|
||||
smmap==0.9.0
|
||||
snowballstemmer==1.2.1
|
||||
Sphinx==1.6.5
|
||||
sphinxcontrib-websupport==1.0.1
|
||||
sqlalchemy-migrate==0.11.0
|
||||
SQLAlchemy==1.2.0
|
||||
sqlparse==0.2.2
|
||||
statsd==3.2.1
|
||||
stestr==1.0.0
|
||||
stevedore==1.20.0
|
||||
Tempita==0.5.2
|
||||
tenacity==3.2.1
|
||||
testrepository==0.0.18
|
||||
testresources==2.0.0
|
||||
testscenarios==0.4
|
||||
testtools==2.2.0
|
||||
tinyrpc==0.6
|
||||
traceback2==1.4.0
|
||||
unittest2==1.1.0
|
||||
vine==1.1.4
|
||||
waitress==1.1.0
|
||||
weakrefmethod==1.0.2
|
||||
WebOb==1.8.2
|
||||
WebTest==2.0.27
|
||||
wrapt==1.7.0
|
|
@ -1,80 +0,0 @@

- hosts: primary
  tasks:

    - name: Copy files from {{ ansible_user_dir }}/workspace/ on node
      synchronize:
        src: '{{ ansible_user_dir }}/workspace/'
        dest: '{{ zuul.executor.log_root }}'
        mode: pull
        copy_links: true
        verify_host: true
        rsync_opts:
          - --include=**/*nose_results.html
          - --include=*/
          - --exclude=*
          - --prune-empty-dirs

    - name: Copy files from {{ ansible_user_dir }}/workspace/ on node
      synchronize:
        src: '{{ ansible_user_dir }}/workspace/'
        dest: '{{ zuul.executor.log_root }}'
        mode: pull
        copy_links: true
        verify_host: true
        rsync_opts:
          - --include=**/*testr_results.html.gz
          - --include=*/
          - --exclude=*
          - --prune-empty-dirs

    - name: Copy files from {{ ansible_user_dir }}/workspace/ on node
      synchronize:
        src: '{{ ansible_user_dir }}/workspace/'
        dest: '{{ zuul.executor.log_root }}'
        mode: pull
        copy_links: true
        verify_host: true
        rsync_opts:
          - --include=/.testrepository/tmp*
          - --include=*/
          - --exclude=*
          - --prune-empty-dirs

    - name: Copy files from {{ ansible_user_dir }}/workspace/ on node
      synchronize:
        src: '{{ ansible_user_dir }}/workspace/'
        dest: '{{ zuul.executor.log_root }}'
        mode: pull
        copy_links: true
        verify_host: true
        rsync_opts:
          - --include=**/*testrepository.subunit.gz
          - --include=*/
          - --exclude=*
          - --prune-empty-dirs

    - name: Copy files from {{ ansible_user_dir }}/workspace/ on node
      synchronize:
        src: '{{ ansible_user_dir }}/workspace/'
        dest: '{{ zuul.executor.log_root }}/tox'
        mode: pull
        copy_links: true
        verify_host: true
        rsync_opts:
          - --include=/.tox/*/log/*
          - --include=*/
          - --exclude=*
          - --prune-empty-dirs

    - name: Copy files from {{ ansible_user_dir }}/workspace/ on node
      synchronize:
        src: '{{ ansible_user_dir }}/workspace/'
        dest: '{{ zuul.executor.log_root }}'
        mode: pull
        copy_links: true
        verify_host: true
        rsync_opts:
          - --include=/logs/**
          - --include=*/
          - --exclude=*
          - --prune-empty-dirs
@ -1,72 +0,0 @@

- hosts: all
  name: Autoconverted job legacy-tricircle-dsvm-functional from old job gate-tricircle-dsvm-functional-ubuntu-xenial
  tasks:

    - name: Ensure legacy workspace directory
      file:
        path: '{{ ansible_user_dir }}/workspace'
        state: directory

    - shell:
        cmd: |
          set -e
          set -x
          cat > clonemap.yaml << EOF
          clonemap:
            - name: openstack/devstack-gate
              dest: devstack-gate
          EOF
          /usr/zuul-env/bin/zuul-cloner -m clonemap.yaml --cache-dir /opt/git \
              https://opendev.org \
              openstack/devstack-gate
        executable: /bin/bash
        chdir: '{{ ansible_user_dir }}/workspace'
      environment: '{{ zuul | zuul_legacy_vars }}'

    - shell:
        cmd: |
          set -e
          set -x
          cat << 'EOF' >>"/tmp/dg-local.conf"
          [[local|localrc]]
          enable_plugin tricircle https://opendev.org/openstack/tricircle

          EOF
        executable: /bin/bash
        chdir: '{{ ansible_user_dir }}/workspace'
      environment: '{{ zuul | zuul_legacy_vars }}'

    - shell:
        cmd: |
          set -e
          set -x
          export PYTHONUNBUFFERED=true
          export BRANCH_OVERRIDE=default
          export PROJECTS="openstack/tricircle openstack/neutron openstack/networking-sfc $PROJECTS"
          export LIBS_FROM_GIT="neutron,networking-sfc"
          export DEVSTACK_GATE_NEUTRON=1
          export DEVSTACK_GATE_TEMPEST=0
          export DEVSTACK_GATE_TEMPEST_ALL_PLUGINS=0
          export DEVSTACK_GATE_TEMPEST_REGEX="tricircle.tempestplugin"

          if [ "$BRANCH_OVERRIDE" != "default" ] ; then
              export OVERRIDE_ZUUL_BRANCH=$BRANCH_OVERRIDE
          fi

          function pre_test_hook {
              cd /opt/stack/new/tricircle/tricircle/tempestplugin/
              ./pre_test_hook.sh
          }
          export -f pre_test_hook

          function post_test_hook {
              cd /opt/stack/new/tricircle/tricircle/tempestplugin/
              ./post_test_hook.sh
          }
          export -f post_test_hook

          cp devstack-gate/devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh
          ./safe-devstack-vm-gate-wrap.sh
        executable: /bin/bash
        chdir: '{{ ansible_user_dir }}/workspace'
      environment: '{{ zuul | zuul_legacy_vars }}'
@ -1,80 +0,0 @@

- hosts: primary
  tasks:

    - name: Copy files from {{ ansible_user_dir }}/workspace/ on node
      synchronize:
        src: '{{ ansible_user_dir }}/workspace/'
        dest: '{{ zuul.executor.log_root }}'
        mode: pull
        copy_links: true
        verify_host: true
        rsync_opts:
          - --include=**/*nose_results.html
          - --include=*/
          - --exclude=*
          - --prune-empty-dirs

    - name: Copy files from {{ ansible_user_dir }}/workspace/ on node
      synchronize:
        src: '{{ ansible_user_dir }}/workspace/'
        dest: '{{ zuul.executor.log_root }}'
        mode: pull
        copy_links: true
        verify_host: true
        rsync_opts:
          - --include=**/*testr_results.html.gz
          - --include=*/
          - --exclude=*
          - --prune-empty-dirs

    - name: Copy files from {{ ansible_user_dir }}/workspace/ on node
      synchronize:
        src: '{{ ansible_user_dir }}/workspace/'
        dest: '{{ zuul.executor.log_root }}'
        mode: pull
        copy_links: true
        verify_host: true
        rsync_opts:
          - --include=/.testrepository/tmp*
          - --include=*/
          - --exclude=*
          - --prune-empty-dirs

    - name: Copy files from {{ ansible_user_dir }}/workspace/ on node
      synchronize:
        src: '{{ ansible_user_dir }}/workspace/'
        dest: '{{ zuul.executor.log_root }}'
        mode: pull
        copy_links: true
        verify_host: true
        rsync_opts:
          - --include=**/*testrepository.subunit.gz
          - --include=*/
          - --exclude=*
          - --prune-empty-dirs

    - name: Copy files from {{ ansible_user_dir }}/workspace/ on node
      synchronize:
        src: '{{ ansible_user_dir }}/workspace/'
        dest: '{{ zuul.executor.log_root }}/tox'
        mode: pull
        copy_links: true
        verify_host: true
        rsync_opts:
          - --include=/.tox/*/log/*
          - --include=*/
          - --exclude=*
          - --prune-empty-dirs

    - name: Copy files from {{ ansible_user_dir }}/workspace/ on node
      synchronize:
        src: '{{ ansible_user_dir }}/workspace/'
        dest: '{{ zuul.executor.log_root }}'
        mode: pull
        copy_links: true
        verify_host: true
        rsync_opts:
          - --include=/logs/**
          - --include=*/
          - --exclude=*
          - --prune-empty-dirs
@ -1,66 +0,0 @@

- hosts: primary
  name: Autoconverted job legacy-tricircle-dsvm-multiregion from old job gate-tricircle-dsvm-multiregion-ubuntu-xenial
  tasks:

    - name: Ensure legacy workspace directory
      file:
        path: '{{ ansible_user_dir }}/workspace'
        state: directory

    - shell:
        cmd: |
          set -e
          set -x
          cat > clonemap.yaml << EOF
          clonemap:
            - name: openstack/devstack-gate
              dest: devstack-gate
          EOF
          /usr/zuul-env/bin/zuul-cloner -m clonemap.yaml --cache-dir /opt/git \
              https://opendev.org \
              openstack/devstack-gate
        executable: /bin/bash
        chdir: '{{ ansible_user_dir }}/workspace'
      environment: '{{ zuul | zuul_legacy_vars }}'

    - shell:
        cmd: |
          set -e
          set -x
          export PYTHONUNBUFFERED=true
          export PROJECTS="openstack/tricircle $PROJECTS"
          export PROJECTS="openstack/networking-sfc $PROJECTS"
          export DEVSTACK_GATE_CONFIGDRIVE=0
          export DEVSTACK_GATE_NEUTRON=1
          export DEVSTACK_GATE_USE_PYTHON3=True
          export DEVSTACK_GATE_TEMPEST=0
          export DEVSTACK_GATE_TEMPEST_ALL_PLUGINS=0
          export DEVSTACK_GATE_TEMPEST_REGEX="tricircle.tempestplugin"

          # Keep localrc to be able to set some vars in pre_test_hook
          export KEEP_LOCALRC=1

          # Enable multinode mode, so that the subnode (the second node)
          # will be configured to run as the second region in pre_test_hook.sh
          export DEVSTACK_GATE_TOPOLOGY="multinode"

          export BRANCH_OVERRIDE=default
          if [ "$BRANCH_OVERRIDE" != "default" ] ; then
              export OVERRIDE_ZUUL_BRANCH=$BRANCH_OVERRIDE
          fi

          function gate_hook {
              bash -xe $BASE/new/tricircle/tricircle/tempestplugin/gate_hook.sh
          }
          export -f gate_hook

          function post_test_hook {
              bash -xe $BASE/new/tricircle/tricircle/tempestplugin/post_test_hook.sh
          }
          export -f post_test_hook

          cp devstack-gate/devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh
          ./safe-devstack-vm-gate-wrap.sh
        executable: /bin/bash
        chdir: '{{ ansible_user_dir }}/workspace'
      environment: '{{ zuul | zuul_legacy_vars }}'
@ -1,10 +0,0 @@
|
|||
---
|
||||
features:
|
||||
- |
|
||||
Support LBaaS in multi-region scenario. To enable adding instances as
|
||||
members with VIP, amphora routes the traffic sent from VIP to its
|
||||
gateway. However, in Tricircle, the gateway obtained from central neutron
|
||||
is not the real gateway in local neutron. As a result, only subnet
|
||||
without gateway is supported as member subnet. We will remove the
|
||||
limitation in the future, and LBaaS working together with Nova Cells V2
|
||||
multi-cells will also be supported in the future.
|
|
@ -1,5 +0,0 @@
|
|||
---
|
||||
features:
|
||||
- Provide central Neutron QoS plugin and implement QoS driver. Support QoS
|
||||
policy creation, update and delete, QoS policy binding with network or
|
||||
port.
|
|
@ -1,6 +0,0 @@
|
|||
---
|
||||
features:
|
||||
- |
|
||||
Support service function chaining creation and deletion based on networking-sfc,
|
||||
currently all the ports in the port chain need to be in the same network and the
|
||||
network type must be VxLAN.
|
|
@ -1,4 +0,0 @@
|
|||
---
|
||||
features:
|
||||
- |
|
||||
Support VLAN aware VMs
|
|
@ -1,12 +0,0 @@
|
|||
---
|
||||
features:
|
||||
- |
|
||||
Asynchronous job management API allows administrator
|
||||
to perform CRUD operations on a job. For jobs in job
|
||||
log, only list and show operations are allowed.
|
||||
|
||||
* Create a job
|
||||
* List jobs
|
||||
* Show job details
|
||||
* Delete a job
|
||||
* Redo a job
|
|
@ -1,4 +0,0 @@
|
|||
---
|
||||
features:
|
||||
- North-south bridge network and east-west bridge network are combined into
|
||||
one to bring better DVR and shared VxLAN network support.
|
|
@ -1,6 +0,0 @@
|
|||
---
|
||||
upgrade:
|
||||
- |
|
||||
Python 2.7 support has been dropped. Last release of Tricircle
|
||||
to support py2.7 is OpenStack Train. The minimum version of Python now
|
||||
supported by Tricircle is Python 3.6.
|
|
@ -1,4 +0,0 @@
|
|||
---
|
||||
features:
|
||||
- |
|
||||
Enable allowed-address-pairs in the central plugin.
|
|
@ -1,9 +0,0 @@
|
|||
---
|
||||
features:
|
||||
- |
|
||||
Router
|
||||
|
||||
* Support availability zone for router
|
||||
* Local router, which will reside only inside one region, can be
|
||||
attached with external network directly, no additional intermediate
|
||||
router is needed.
|
|
@ -1,3 +0,0 @@
|
|||
---
|
||||
features:
|
||||
- Support updating default security group using asynchronous methods.
|
|
@ -1,4 +0,0 @@
|
|||
---
|
||||
features:
|
||||
- |
|
||||
Support flat type of tenant network or external network
|
|
@ -1,82 +0,0 @@
---
prelude: >
    The Tricircle provides networking automation across
    Neutron in OpenStack multi-region deployments.
features:
  - |
    Network

    * List networks
    * Create network
    * Show network details
    * Delete network
  - |
    Subnet

    * List subnets
    * Create subnet
    * Show subnet details
    * Delete subnet
  - |
    Port

    * List ports
    * Create port
    * Show port details
    * Delete port
  - |
    Router

    * List routers
    * Create router
    * Show router details
    * Delete router
    * Add interface to router
    * Delete interface from router
    * List floating IPs
    * Create floating IP
    * Show floating IP details
    * Update floating IP
    * Delete floating IP
  - |
    Security Group

    * List security groups
    * Create security group
    * Show security group details
    * List security group rules
    * Create security group rule
    * Delete security group rule
  - |
    Note for Networking

    * Only Local Network and VLAN network are supported.
      A Local Network is present in only one region; it can be a VxLAN
      or VLAN network.
      VLAN is the only L2 network type that supports cross-Neutron
      L2 networking and the bridge network for L3 networking.
    * Pagination and sort are not supported at the same time for the list
      operation.
    * For security group rules, remote group is not supported yet. Use an
      IP prefix to create a security group rule.
    * One availability zone can include more than one region through
      Tricircle pod management.
    * An availability zone or region name can be specified as the
      availability zone hint during network creation, which means the
      network will be presented in the specified list of availability
      zones or regions. If no availability zone hint is specified and the
      network is not a Local Network, the network can be spread into all
      regions. A Local Network created without an availability zone hint
      will only be presented in the first region where a resource (VM,
      baremetal or container) is booted and plugged into this network.
    * One region name needs to be specified as the availability zone hint
      for external network creation, which means the external network will
      be located in the specified region.
issues:
  - refer to https://bugs.launchpad.net/tricircle
@ -1,5 +0,0 @@
---
features:
  - Support a network topology in which each OpenStack cloud provides an
    external network for tenants' north-south traffic while east-west
    networking of tenant networks among OpenStack clouds is also enabled
@ -1,21 +0,0 @@
---
features:
  - |
    Network

    * Update networks

    * qos-policy not supported
  - |
    Subnet

    * Update subnets
issues:
  - |
    Updating a network or subnet may not lead to the expected result if an
    instance is being booted at the same time. You can redo the update
    operation later to make it execute correctly.
@ -1,15 +0,0 @@
---
features:
  - |
    Port

    * Update port

    * name, description, admin_state_up, extra_dhcp_opts, device_owner,
      device_id, mac_address and security group attribute updates are
      supported
issues:
  - |
    Updating a port may not lead to the expected result if an instance is
    being booted at the same time. You can redo the update operation later
    to make it execute correctly.
@ -1,22 +0,0 @@
---
features:
  - |
    Resource routing APIs add operations on the resource routing
    table. They make it possible for a cloud administrator to create,
    show, delete and update resource routing entries for maintenance
    and emergency-fix needs. Updating or deleting entries generated by
    the Tricircle itself is not recommended, because without such a
    routing entry central Neutron may make a wrong judgement on whether
    the resource exists, and related requests cannot be forwarded to
    the proper local Neutron. So even though the update and delete
    operations are provided, they are better not used on those entries,
    to avoid causing unexpected problems.

    * List resource routings
    * Create resource routing
    * Show resource routing details
    * Delete resource routing
    * Update resource routing
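The routing table's role can be sketched with a toy model; this is purely illustrative (class and method names are assumptions, not the Tricircle implementation), but it shows why deleting a Tricircle-generated entry breaks request forwarding:

```python
class ResourceRouting:
    """Toy routing table mapping a top (central) resource id to the
    local Neutron region holding the real resource."""

    def __init__(self):
        self._entries = {}

    def create(self, top_id, region, bottom_id):
        self._entries[top_id] = (region, bottom_id)

    def lookup(self, top_id):
        # Without this entry, central Neutron cannot decide whether the
        # resource exists or which local Neutron should receive the
        # request -- the rationale for not deleting generated entries.
        return self._entries.get(top_id)

    def delete(self, top_id):
        self._entries.pop(top_id, None)
```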
@ -1,7 +0,0 @@
---
features:
  - |
    Support pagination for the asynchronous job list operation. Jobs in the
    job table are shown ahead of those in the job log table. If no page size
    is specified by the client, the maximum pagination limit from the
    configuration is used.
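The limit/marker behaviour described in this note can be sketched as follows; this is an illustrative model only (function and field names are assumptions, not the actual Tricircle code):

```python
def list_jobs(jobs, job_logs, limit, marker=None):
    """Paginate the combined job listing: entries from the job table are
    shown ahead of those in the job log table; `marker` is the id of the
    last entry on the previous page."""
    combined = list(jobs) + list(job_logs)
    start = 0
    if marker is not None:
        start = [j["id"] for j in combined].index(marker) + 1
    page = combined[start:start + limit]
    # A full page implies there may be more entries after it.
    next_marker = page[-1]["id"] if len(page) == limit else None
    return page, next_marker
```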
@ -1,6 +0,0 @@
---
features:
  - |
    Support pagination for the resource routing list operation. If no page
    size is specified by the client, the maximum pagination limit from the
    configuration is used.
@ -1,6 +0,0 @@
---
prelude: >
    The Tricircle Admin API now supports WSGI deployment. The endpoint of
    the Tricircle Admin API can be accessed in the form of
    http://host/tricircle, so there is no need to expose a special port,
    which reduces the risk of port management.
@ -1,13 +0,0 @@
---
prelude: >
    Added new tool ``tricircle-status upgrade check``.
features:
  - |
    A new framework for the ``tricircle-status upgrade check`` command has
    been added. This framework allows adding various checks which can be
    run before a Tricircle upgrade to ensure that the upgrade can be
    performed safely.
upgrade:
  - |
    Operators can now use the new CLI tool ``tricircle-status upgrade check``
    to check whether a Tricircle deployment can be safely upgraded from
    the N-1 to the N release.
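Such check frameworks (the real command is built on oslo.upgradecheck) boil down to running registered checks and reporting the worst result as the exit code. A minimal self-contained sketch, with illustrative names rather than the oslo API:

```python
import enum


class Code(enum.IntEnum):
    # Conventional exit codes: 0 success, 1 warning, 2 failure.
    SUCCESS = 0
    WARNING = 1
    FAILURE = 2


class UpgradeCommands:
    """Run registered pre-upgrade checks and report the worst result."""

    def __init__(self):
        self._checks = []

    def register(self, name, func):
        self._checks.append((name, func))

    def check(self):
        results = [(name, func()) for name, func in self._checks]
        # Overall exit code is the worst individual result.
        exit_code = max((code for _, code in results), default=Code.SUCCESS)
        return exit_code, results
```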
@ -1,5 +0,0 @@
---
features:
  - |
    Support the VxLAN network type for tenant networks and bridge networks
    stretched into multiple OpenStack clouds
@ -1,279 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Glance Release Notes documentation build configuration file, created by
# sphinx-quickstart on Tue Nov 3 17:40:50 2015.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.

# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
# sys.path.insert(0, os.path.abspath('.'))

# -- General configuration ------------------------------------------------

# If your documentation needs a minimal Sphinx version, state it here.
# needs_sphinx = '1.0'

# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
    'openstackdocstheme',
    'reno.sphinxext',
]

# openstackdocstheme options
repository_name = 'openstack/tricircle'
bug_project = 'tricircle'
bug_tag = ''

# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']

# The suffix of source filenames.
source_suffix = '.rst'

# The encoding of source files.
# source_encoding = 'utf-8-sig'

# The master toctree document.
master_doc = 'index'

# General information about the project.
project = u'The Tricircle Release Notes'
copyright = u'2016, OpenStack Foundation'

# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
# The full version, including alpha/beta/rc tags.
release = ''
# The short X.Y version.
version = ''

# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
# language = None

# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
# today = ''
# Else, today_fmt is used as the format for a strftime call.
# today_fmt = '%B %d, %Y'

# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = []

# The reST default role (used for this markup: `text`) to use for all
# documents.
# default_role = None

# If true, '()' will be appended to :func: etc. cross-reference text.
# add_function_parentheses = True

# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
# add_module_names = True

# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
# show_authors = False

# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'

# A list of ignored prefixes for module index sorting.
# modindex_common_prefix = []

# If true, keep warnings as "system message" paragraphs in the built documents.
# keep_warnings = False


# -- Options for HTML output ----------------------------------------------

# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'openstackdocs'

# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
# html_theme_options = {}

# Add any paths that contain custom themes here, relative to this directory.
# html_theme_path = []

# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
# html_title = None

# A shorter title for the navigation bar. Default is the same as html_title.
# html_short_title = None

# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
# html_logo = None

# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
# html_favicon = None

# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']

# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
# html_extra_path = []

# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
html_last_updated_fmt = '%Y-%m-%d %H:%M'

# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
# html_use_smartypants = True

# Custom sidebar templates, maps document names to template names.
# html_sidebars = {}

# Additional templates that should be rendered to pages, maps page names to
# template names.
# html_additional_pages = {}

# If false, no module index is generated.
# html_domain_indices = True

# If false, no index is generated.
# html_use_index = True

# If true, the index is split into individual pages for each letter.
# html_split_index = False

# If true, links to the reST sources are added to the pages.
# html_show_sourcelink = True

# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
# html_show_sphinx = True

# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
# html_show_copyright = True

# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
# html_use_opensearch = ''

# This is the file name suffix for HTML files (e.g. ".xhtml").
# html_file_suffix = None

# Output file base name for HTML help builder.
htmlhelp_basename = 'GlanceReleaseNotesdoc'


# -- Options for LaTeX output ---------------------------------------------

latex_elements = {
    # The paper size ('letterpaper' or 'a4paper').
    # 'papersize': 'letterpaper',

    # The font size ('10pt', '11pt' or '12pt').
    # 'pointsize': '10pt',

    # Additional stuff for the LaTeX preamble.
    # 'preamble': '',
}

# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
    ('index', 'GlanceReleaseNotes.tex', u'Glance Release Notes Documentation',
     u'Glance Developers', 'manual'),
]

# The name of an image file (relative to this directory) to place at the top of
# the title page.
# latex_logo = None

# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
# latex_use_parts = False

# If true, show page references after internal links.
# latex_show_pagerefs = False

# If true, show URL addresses after external links.
# latex_show_urls = False

# Documents to append as an appendix to all manuals.
# latex_appendices = []

# If false, no module index is generated.
# latex_domain_indices = True


# -- Options for manual page output ---------------------------------------

# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
    ('index', 'glancereleasenotes', u'Glance Release Notes Documentation',
     [u'Glance Developers'], 1)
]

# If true, show URL addresses after external links.
# man_show_urls = False


# -- Options for Texinfo output -------------------------------------------

# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
    ('index', 'GlanceReleaseNotes', u'Glance Release Notes Documentation',
     u'Glance Developers', 'GlanceReleaseNotes',
     'One line description of project.',
     'Miscellaneous'),
]

# Documents to append as an appendix to all manuals.
# texinfo_appendices = []

# If false, no module index is generated.
# texinfo_domain_indices = True

# How to display URL addresses: 'footnote', 'no', or 'inline'.
# texinfo_show_urls = 'footnote'

# If true, do not generate a @detailmenu in the "Top" node's menu.
# texinfo_no_detailmenu = False

# -- Options for Internationalization output ------------------------------
locale_dirs = ['locale/']
@ -1,28 +0,0 @@
..
    Licensed under the Apache License, Version 2.0 (the "License"); you may
    not use this file except in compliance with the License. You may obtain
    a copy of the License at

        http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software
    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
    License for the specific language governing permissions and limitations
    under the License.

========================
Tricircle Release Notes
========================

.. toctree::
   :maxdepth: 1

   unreleased
   ussuri
   train
   stein
   rocky
   queens
   pike
   ocata
@ -1,6 +0,0 @@
===================================
Ocata Series Release Notes
===================================

.. release-notes::
   :branch: origin/stable/ocata
@ -1,6 +0,0 @@
===================================
Pike Series Release Notes
===================================

.. release-notes::
   :branch: stable/pike
@ -1,6 +0,0 @@
===================================
Queens Series Release Notes
===================================

.. release-notes::
   :branch: stable/queens
@ -1,6 +0,0 @@
===================================
Rocky Series Release Notes
===================================

.. release-notes::
   :branch: stable/rocky
@ -1,6 +0,0 @@
===================================
Stein Series Release Notes
===================================

.. release-notes::
   :branch: stable/stein
@ -1,6 +0,0 @@
==========================
Train Series Release Notes
==========================

.. release-notes::
   :branch: stable/train
@ -1,5 +0,0 @@
==============================
Current Series Release Notes
==============================

.. release-notes::
@ -1,6 +0,0 @@
===========================
Ussuri Series Release Notes
===========================

.. release-notes::
   :branch: stable/ussuri
@ -1,55 +0,0 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
pbr!=2.1.0,>=4.0.0 # Apache-2.0
Babel!=2.4.0,>=2.3.4 # BSD

Paste>=2.0.2 # MIT
PasteDeploy>=1.5.0 # MIT
Routes>=2.3.1 # MIT
debtcollector>=1.19.0 # Apache-2.0
eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT
pecan!=1.0.2,!=1.0.3,!=1.0.4,!=1.2,>=1.3.2 # BSD
requests>=2.14.2 # Apache-2.0
Jinja2>=2.10 # BSD License (3 clause)
keystonemiddleware>=4.17.0 # Apache-2.0
netaddr>=0.7.18 # BSD
netifaces>=0.10.4 # MIT
SQLAlchemy!=1.1.5,!=1.1.6,!=1.1.7,!=1.1.8,>=1.2.0 # MIT
WebOb>=1.8.2 # MIT
python-cinderclient>=3.3.0 # Apache-2.0
python-glanceclient>=2.8.0 # Apache-2.0
python-keystoneclient>=3.8.0 # Apache-2.0
python-neutronclient>=6.7.0 # Apache-2.0
python-novaclient>=9.1.0 # Apache-2.0
alembic>=0.8.10 # MIT
six>=1.10.0 # MIT
stevedore>=1.20.0 # Apache-2.0
oslo.concurrency>=3.26.0 # Apache-2.0
oslo.config>=5.2.0 # Apache-2.0
oslo.context>=2.19.2 # Apache-2.0
oslo.db>=4.37.0 # Apache-2.0
oslo.i18n>=3.15.3 # Apache-2.0
oslo.log>=3.36.0 # Apache-2.0
oslo.messaging>=5.29.0 # Apache-2.0
oslo.middleware>=3.31.0 # Apache-2.0
oslo.policy>=1.30.0 # Apache-2.0
oslo.rootwrap>=5.8.0 # Apache-2.0
oslo.serialization!=2.19.1,>=2.18.0 # Apache-2.0
oslo.service!=1.28.1,>=1.24.0 # Apache-2.0
oslo.upgradecheck>=0.1.1 # Apache-2.0
oslo.utils>=3.33.0 # Apache-2.0
sqlalchemy-migrate>=0.11.0 # Apache-2.0

# These repos are installed from git in OpenStack CI if the job
# configures them as required-projects:
#keystoneauth1>=3.4.0;python_version<'3.3' # Apache-2.0
#neutron-lib>=1.29.1;python_version>'3.3' # Apache-2.0
neutron>=12.0.0 # Apache-2.0
networking-sfc>=8.0.0.0b1 # Apache-2.0

# The comment below indicates this project repo is current with neutron-lib
# and should receive neutron-lib consumption patches as they are released
# in neutron-lib. It also implies the project will stay current with TC
# and infra initiatives ensuring consumption patches can land.
# neutron-lib-current
74
setup.cfg
|
@ -1,74 +0,0 @@
[metadata]
name = tricircle
summary = The Tricircle is to provide networking automation across Neutron in multi-region OpenStack deployments.
description-file = README.rst
author = OpenStack
author-email = openstack-discuss@lists.openstack.org
home-page = https://docs.openstack.org/tricircle/latest/
classifier =
    Environment :: OpenStack
    Intended Audience :: Information Technology
    Intended Audience :: System Administrators
    License :: OSI Approved :: Apache Software License
    Operating System :: POSIX :: Linux
    Programming Language :: Python
    Programming Language :: Python :: 3
    Programming Language :: Python :: 3.6
    Programming Language :: Python :: 3.7

[files]
packages =
    tricircle

[build_sphinx]
source-dir = doc/source
build-dir = doc/build
all_files = 1
warning-is-error = 1

[upload_sphinx]
upload-dir = doc/build/html

[compile_catalog]
directory = tricircle/locale
domain = tricircle

[update_catalog]
domain = tricircle
output_dir = tricircle/locale
input_file = tricircle/locale/tricircle.pot

[extract_messages]
keywords = _ gettext ngettext l_ lazy_gettext
mapping_file = babel.cfg
output_file = tricircle/locale/tricircle.pot

[entry_points]
console_scripts =
    tricircle-api = tricircle.cmd.api:main
    tricircle-db-manage = tricircle.cmd.manage:main
    tricircle-status = tricircle.cmd.status:main
    tricircle-xjob = tricircle.cmd.xjob:main
wsgi_scripts =
    tricircle-api-wsgi = tricircle.api.wsgi:init_application
oslo.config.opts =
    tricircle.api = tricircle.api.opts:list_opts
    tricircle.common = tricircle.common.opts:list_opts
    tricircle.db = tricircle.db.opts:list_opts
    tricircle.network = tricircle.network.opts:list_opts
    tricircle.xjob = tricircle.xjob.opts:list_opts
oslo.policy.policies =
    tricircle = tricircle.common.policy:list_policies
tricircle.network.type_drivers =
    local = tricircle.network.drivers.type_local:LocalTypeDriver
    vlan = tricircle.network.drivers.type_vlan:VLANTypeDriver
    vxlan = tricircle.network.drivers.type_vxlan:VxLANTypeDriver
    flat = tricircle.network.drivers.type_flat:FlatTypeDriver
tricircle.network.extension_drivers =
    qos = neutron.plugins.ml2.extensions.qos:QosExtensionDriver
networking_sfc.flowclassifier.drivers =
    tricircle_fc = tricircle.network.central_fc_driver:TricircleFcDriver
networking_sfc.sfc.drivers =
    tricircle_sfc = tricircle.network.central_sfc_driver:TricircleSfcDriver
networking_trunk.trunk.drivers =
    tricircle_tk = tricircle.network.central_trunk_driver:TricircleTrunkDriver
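Each `name = module:attr` line under the entry-point groups above maps a name to a Python callable; real loading goes through pbr/setuptools entry points, but the split itself is mechanical. A minimal sketch of parsing such a spec string:

```python
def parse_entry_point(spec):
    """Split an entry-point spec like 'tricircle-api = tricircle.cmd.api:main'
    into (name, module, attribute)."""
    name, _, target = (part.strip() for part in spec.partition("="))
    module, _, attr = (part.strip() for part in target.partition(":"))
    return name, module, attr
```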
29
setup.py
|
@ -1,29 +0,0 @@
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT
import setuptools

# In python < 2.7.4, a lazy loading of package `pbr` will break
# setuptools if some other modules registered functions in `atexit`.
# solution from: http://bugs.python.org/issue15881#msg170215
try:
    import multiprocessing  # noqa
except ImportError:
    pass

setuptools.setup(
    setup_requires=['pbr>=2.0.0'],
    pbr=True)
@ -1,564 +0,0 @@
========================================
Cross Neutron L2 networking in Tricircle
========================================

Background
==========

The Tricircle provides a unified OpenStack API gateway and networking
automation functionality. These main functionalities allow cloud operators
to manage multiple OpenStack instances, running in one site or multiple
sites, as a single OpenStack cloud.

Each bottom OpenStack instance which is managed by the Tricircle is also
called a pod.

The Tricircle has the following components:

* Nova API-GW
* Cinder API-GW
* Neutron API Server with Neutron Tricircle plugin
* Admin API
* XJob
* DB

Nova API-GW provides the functionality to trigger automatic networking
creation when new VMs are being provisioned. The Neutron Tricircle plugin
provides the functionality to create cross-Neutron L2/L3 networking for new
VMs. After the binding of tenant-id and pod is finished in the Tricircle,
Cinder API-GW and Nova API-GW will pass the Cinder API or Nova API request
to the appropriate bottom OpenStack instance.

Please refer to the Tricircle design blueprint[1], especially from
'7. Stateless Architecture Proposal', for a detailed description of each
component.


Problem Description
===================
When a user wants to create a network through the Neutron API Server, the
user can specify 'availability_zone_hints' (AZ or az will be used as
shorthand for availability zone) during network creation[5]. In the
Tricircle, 'az_hints' means which AZs the network should be spread into;
its meaning in the Tricircle is a little different from the 'az_hints'
meaning in Neutron[5]. If no 'az_hints' is specified during network
creation, the created network can be spread into any AZ. If a list of
'az_hints' is given during network creation, the network should be able to
be spread into the AZs suggested by that list.

When a user creates a VM or Volume, there is also one parameter called
availability zone. The AZ parameter is used for Volume and VM co-location,
so that the Volume and VM will be created in the same bottom OpenStack
instance.

When a VM is being attached to a network, the Tricircle will check whether
the VM's AZ is inside the network's AZ scope. If the VM is not in the
network's AZ scope, the VM creation will be rejected.
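The AZ-scope check described above amounts to a simple membership test. A minimal sketch under that reading of the spec (the function name is illustrative, not the actual plugin code):

```python
def vm_allowed_in_network(vm_az, network_az_hints):
    """Return True if a VM in availability zone `vm_az` may attach to a
    network with the given az_hints list; an empty hint list means the
    network may spread into any AZ."""
    if not network_az_hints:
        return True
    return vm_az in network_az_hints
```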
|
||||
Currently, the Tricircle only supports one pod in one AZ. And only supports a
|
||||
network associated with one AZ. That means currently a tenant's network will
|
||||
be presented only in one bottom OpenStack instance, that also means all VMs
|
||||
connected to the network will be located at one bottom OpenStack instance.
|
||||
If there are more than one pod in one AZ, refer to the dynamic pod binding[6].
|
||||
|
||||
There are lots of use cases where a tenant needs a network being able to be
|
||||
spread out into multiple bottom OpenStack instances in one AZ or multiple AZs.
|
||||
|
||||
* Capacity expansion: tenants add VMs more and more, the capacity of one
|
||||
OpenStack may not be enough, then a new OpenStack instance has to be added
|
||||
to the cloud. But the tenant still wants to add new VMs into same network.
|
||||
|
||||
* Cross Neutron network service chaining. Service chaining is based on
|
||||
the port-pairs. Leveraging the cross Neutron L2 networking capability which
|
||||
is provided by the Tricircle, the chaining could also be done by across sites.
|
||||
For example, vRouter1 in pod1, but vRouter2 in pod2, these two VMs could be
|
||||
chained.
|
||||
|
||||
* Applications are often required to run in different availability zones to
|
||||
achieve high availability. Application needs to be designed as
|
||||
Active-Standby/Active-Active/N-Way to achieve high availability, and some
|
||||
components inside one application are designed to work as distributed
|
||||
cluster, this design typically leads to state replication or heart
|
||||
beat among application components (directly or via replicated database
|
||||
services, or via private designed message format). When this kind of
|
||||
applications are distributedly deployed into multiple OpenStack instances,
|
||||
cross Neutron L2 networking is needed to support heart beat
|
||||
or state replication.
|
||||
|
||||
* When a tenant's VMs are provisioned in different OpenStack instances, there
|
||||
is E-W (East-West) traffic for these VMs, the E-W traffic should be only
|
||||
visible to the tenant, and isolation is needed. If the traffic goes through
|
||||
N-S (North-South) via tenant level VPN, overhead is too much, and the
|
||||
orchestration for multiple site to site VPN connection is also complicated.
|
||||
Therefore cross Neutron L2 networking to bridge the tenant's routers in
|
||||
different Neutron servers can provide more light weight isolation.
|
||||
|
||||
* In hybrid cloud, there is cross L2 networking requirement between the
|
||||
private OpenStack and the public OpenStack. Cross Neutron L2 networking will
|
||||
help the VMs migration in this case and it's not necessary to change the
|
||||
IP/MAC/Security Group configuration during VM migration.
|
||||
|
||||
The spec[5] is to explain how one AZ can support more than one pod, and how
|
||||
to schedule a proper pod during VM or Volume creation.
|
||||
|
||||
And this spec is to deal with the cross Neutron L2 networking automation in
|
||||
the Tricircle.
|
||||
|
||||
The simplest way to spread out L2 networking to multiple OpenStack instances
|
||||
is to use same VLAN. But there is a lot of limitations: (1) A number of VLAN
|
||||
segment is limited, (2) the VLAN network itself is not good to spread out
|
||||
multiple sites, although you can use some gateways to do the same thing.
|
||||
|
||||
So flexible tenant level L2 networking across multiple Neutron servers in
|
||||
one site or in multiple sites is needed.
|
||||
|
||||
Proposed Change
===============

Cross-Neutron L2 networking can be divided into three categories:
``VLAN``, ``Shared VxLAN`` and ``Mixed VLAN/VxLAN``.

* VLAN

  The network in each bottom OpenStack instance is of VLAN type and has the
  same VLAN ID. If we want VLAN L2 networking to work in the multi-site
  scenario, i.e., multiple OpenStack instances in multiple sites, a physical
  gateway needs to be manually configured to extend one VLAN network to the
  other sites.

  *Manual setup of the physical gateway is out of the scope of this spec.*

* Shared VxLAN

  The network in each bottom OpenStack instance is of VxLAN type and has the
  same VxLAN ID.

  L2GW[2][3] is leveraged to implement this type of L2 networking.

* Mixed VLAN/VxLAN

  The networks in the bottom OpenStack instances may have different types
  and/or different segment IDs.

  L2GW[2][3] is leveraged to implement this type of L2 networking.

There is another network type called "Local Network". A "Local Network" is
only present in one bottom OpenStack instance, and will not be presented in
other bottom OpenStack instances. If a VM in another pod tries to attach to
a "Local Network", the attachment should fail. This use case is quite useful
for the scenario in which cross-Neutron L2 networking is not required, and
one AZ will not include more than one bottom OpenStack instance.

Cross-Neutron L2 networking can be established dynamically while a tenant's
VM is being provisioned.

The assumption here is that only one type of L2 networking is used in one
cloud deployment.


A Cross Neutron L2 Networking Creation
--------------------------------------

A cross-Neutron L2 network is created with the az_hint attribute of the
network. If az_hint includes one or more AZs, the network will be presented
only in those AZs; if az_hint contains no AZ, the network can be extended to
any bottom OpenStack instance.

There is a special use case for external network creation. For an external
network, you need to specify the pod_id rather than an AZ in the az_hint,
so that the external network is created only in the one specified pod per AZ.

*Support of external networks in multiple OpenStack instances in one AZ
is out of scope of this spec.*

A pluggable L2 networking framework is proposed to deal with the three types
of cross-Neutron L2 networking, and it should be compatible with the
``Local Network``.

1. Type Driver under Tricircle Plugin in Neutron API server

* A type driver distinguishes the different types of cross-Neutron L2
  networking, so the Tricircle plugin needs to load the type driver according
  to the configuration. The Tricircle can reuse the ML2 type drivers with
  some updates.

* The type driver allocates a VLAN segment ID for VLAN L2 networking.

* The type driver allocates a VxLAN segment ID for shared VxLAN L2
  networking.

* The type driver for mixed VLAN/VxLAN allocates a VxLAN segment ID for the
  network connecting the L2GWs[2][3].

* The type driver for Local Network only updates the ``network_type`` of the
  network in the Tricircle Neutron DB.

When a network creation request is received by the Neutron API server in the
Tricircle, the type driver is invoked based on the configured network type.

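The dispatch described above can be sketched as follows. This is a minimal
illustration, not the actual Tricircle code: the driver classes, the
``DRIVERS`` registry, and the fixed segment IDs are assumptions.

```python
# Hypothetical sketch of type-driver dispatch in a Tricircle-style plugin.
# Driver classes, the registry, and the segment values are illustrative.

class VlanTypeDriver:
    """Allocates a VLAN segment ID (fixed value here for illustration)."""
    def reserve_segment(self):
        return {'network_type': 'vlan', 'segmentation_id': 2001}

class SharedVxlanTypeDriver:
    """Allocates a VxLAN segment ID shared by all bottom instances."""
    def reserve_segment(self):
        return {'network_type': 'vxlan', 'segmentation_id': 10001}

class LocalTypeDriver:
    """Local networks carry no shared segment; only the type is recorded."""
    def reserve_segment(self):
        return {'network_type': 'local_network', 'segmentation_id': None}

DRIVERS = {
    'vlan': VlanTypeDriver(),
    'shared_vxlan': SharedVxlanTypeDriver(),
    'local_network': LocalTypeDriver(),
}

def create_network(tenant_network_type, network):
    """Invoke the configured type driver on a network-create request."""
    driver = DRIVERS[tenant_network_type]
    network = dict(network)
    network.update(driver.reserve_segment())
    return network
```

In this sketch the configured ``tenant_network_type`` selects the driver,
mirroring the "load type driver according to the configuration" step above.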
2. Nova API-GW to trigger the bottom networking automation

Nova API-GW is aware that a new VM is being provisioned when a boot-VM API
request is received, therefore Nova API-GW is responsible for network
creation in the bottom OpenStack instances.

Nova API-GW needs to get the network type from the Neutron API server in the
Tricircle, and handle the networking automation based on the network type:

* VLAN

  Nova API-GW creates the network in the bottom OpenStack instance in which
  the VM will run, using the VLAN segment ID, network name and type retrieved
  from the Neutron API server in the Tricircle.

* Shared VxLAN

  Nova API-GW creates the network in the bottom OpenStack instance in which
  the VM will run, using the VxLAN segment ID, network name and type
  retrieved from the Tricircle Neutron API server. After the network in the
  bottom OpenStack instance is created successfully, Nova API-GW needs to
  register this bottom network as one of the segments of the network in the
  Tricircle.

* Mixed VLAN/VxLAN

  Nova API-GW creates the network in each bottom OpenStack instance in which
  a VM will run, with the respective VLAN or VxLAN segment ID, network name
  and type retrieved from the Tricircle Neutron API server. After the network
  in a bottom OpenStack instance is created successfully, Nova API-GW needs
  to update the network in the Tricircle with the segmentation information of
  the bottom networks.

3. L2GW driver under Tricircle Plugin in Neutron API server

The Tricircle plugin needs to support the multi-segment network extension[4].

For the Shared VxLAN and Mixed VLAN/VxLAN L2 network types, the L2GW driver
utilizes the multi-segment network extension in the Neutron API server to
build the L2 network in the Tricircle. Each network in a bottom OpenStack
instance becomes one segment of the whole cross-Neutron L2 network in the
Tricircle.

After the network in a bottom OpenStack instance is created successfully,
Nova API-GW calls the Neutron server API to update the network in the
Tricircle with a new segment for the network in that bottom OpenStack
instance.

If the network in a bottom OpenStack instance is removed successfully, Nova
API-GW calls the Neutron server API to remove the corresponding segment from
the network in the Tricircle.

When the L2GW driver under the Tricircle plugin in the Neutron API server
receives the segment update request, the L2GW driver starts an asynchronous
job to orchestrate the L2GW API for L2 networking automation[2][3].


Data model impact
-----------------

In the database, we are considering setting physical_network in the top
OpenStack instance to ``bottom_physical_network#bottom_pod_id`` to
distinguish the segmentation information of the different bottom OpenStack
instances.

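A minimal sketch of how such a composite value could be encoded and split
back out. The helper names are illustrative, not part of Tricircle; only the
``#`` separator follows the format described above.

```python
# Illustrative helpers for the proposed composite physical_network value.
# The '#' separator follows the format described in the data model impact.

def encode_physical_network(bottom_physical_network: str,
                            bottom_pod_id: str) -> str:
    """Combine a bottom physical network and pod id into one top value."""
    return '%s#%s' % (bottom_physical_network, bottom_pod_id)

def decode_physical_network(value: str) -> tuple:
    """Split the top-level value back into (physical_network, pod_id)."""
    bottom_physical_network, _, bottom_pod_id = value.partition('#')
    return bottom_physical_network, bottom_pod_id
```

For example, ``encode_physical_network('physnet1', 'pod-1')`` would yield
``physnet1#pod-1``, which decodes back to its two parts.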
REST API impact
---------------

None

Security impact
---------------

None

Notifications impact
--------------------

None

Other end user impact
---------------------

None

Performance Impact
------------------

None

Other deployer impact
---------------------

None

Developer impact
----------------

None


Implementation
==============

**Local Network Implementation**

For a Local Network, no L2GW is required. In this scenario, no cross-Neutron
L2/L3 networking is required.

A user creates network ``Net1`` with a single AZ1 in az_hint. The Tricircle
plugin checks the configuration; if ``tenant_network_type`` equals
``local_network``, it invokes the Local Network type driver. The Local
Network driver under the Tricircle plugin updates ``network_type`` in the
database.

For example, a user creates VM1 in AZ1, which has only one pod ``POD1``, and
connects it to network ``Net1``. ``Nova API-GW`` sends the network creation
request to ``POD1`` and the VM is booted in AZ1 (there should be only one
pod in AZ1).

If a user wants to create VM2 in AZ2, or in ``POD2`` in AZ1, and connect it
to network ``Net1`` in the Tricircle, the request will fail, because
``Net1`` is a local_network type network and is limited to ``POD1`` in AZ1
only.

**VLAN Implementation**

For VLAN, no L2GW is required. This is the simplest form of cross-Neutron L2
networking, for limited scenarios. For example, with a small number of
networks, all VLANs are extended through a physical gateway to support
cross-Neutron VLAN networking; or all Neutron servers are connected by the
same core switch, with the same visible VLAN ranges supported by that core
switch.

When a user creates a network called ``Net1``, the Tricircle plugin checks
the configuration. If ``tenant_network_type`` equals ``vlan``, the Tricircle
invokes the VLAN type driver. The VLAN driver creates the ``segment``,
assigns ``network_type`` as VLAN, and updates ``segment``, ``network_type``
and ``physical_network`` in the DB.

A user creates VM1 in AZ1, and connects it to network Net1. If VM1 will be
booted in ``POD1``, ``Nova API-GW`` needs to get the network information and
send a network creation message to ``POD1``. The network creation message
includes ``network_type``, ``segment`` and ``physical_network``.

Then the user creates VM2 in AZ2, and connects it to network Net1. If the VM
will be booted in ``POD2``, ``Nova API-GW`` needs to get the network
information and send a network creation message to ``POD2``. The network
creation message includes ``network_type``, ``segment`` and
``physical_network``.

**Shared VxLAN Implementation**

A user creates network ``Net1``. The Tricircle plugin checks the
configuration; if ``tenant_network_type`` equals ``shared_vxlan``, it
invokes the shared VxLAN driver. The shared VxLAN driver allocates the
``segment``, assigns ``network_type`` as VxLAN, and updates the network with
``segment`` and ``network_type`` in the DB.

A user creates VM1 in AZ1, and connects it to network ``Net1``. If VM1 will
be booted in ``POD1``, ``Nova API-GW`` needs to get the network information
and send a network creation message to ``POD1``. The network creation
message includes ``network_type`` and ``segment``.

``Nova API-GW`` should update ``Net1`` in the Tricircle with the segment
information obtained from ``POD1``.

Then the user creates VM2 in AZ2, and connects it to network ``Net1``. If
VM2 will be booted in ``POD2``, ``Nova API-GW`` needs to get the network
information and send a network creation message to ``POD2``. The network
creation message includes ``network_type`` and ``segment``.

``Nova API-GW`` should update ``Net1`` in the Tricircle with the segment
information obtained from ``POD2``.

The Tricircle plugin detects that the network includes more than one segment
and calls the L2GW driver to start an asynchronous job for cross-Neutron
networking for ``Net1``. The L2GW driver creates L2GW1 in ``POD1`` and L2GW2
in ``POD2``. In ``POD1``, L2GW1 connects the local ``Net1`` and creates an
L2GW remote connection to L2GW2, then populates into L2GW1 the MAC/IP
information which resides in ``POD2``. In ``POD2``, L2GW2 connects the local
``Net1`` and creates an L2GW remote connection to L2GW1, then populates into
L2GW2 the remote MAC/IP information which resides in ``POD1``.

The L2GW driver in the Tricircle also detects port creation/deletion API
requests. If a port (MAC/IP) is created or deleted in ``POD1`` or ``POD2``,
the driver needs to refresh the MAC/IP information of the peer L2GW.

Whether to populate port (MAC/IP) information should be configurable
according to the L2GW capability, and MAC/IP information should be populated
only for ports that do not reside in the same pod.

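The rule above, populating only the MAC/IP pairs of ports that reside in
other pods, can be sketched as follows. The port representation is a
simplified assumption for illustration.

```python
# Sketch: select which port (MAC/IP) entries should be populated into the
# L2GW of a given pod. Only remote ports are populated, per the rule above.
# The port dictionaries are an assumed, simplified representation.

def remote_entries_for_pod(ports, local_pod):
    """Return (mac, ip) pairs of ports that do NOT reside in local_pod."""
    return [(p['mac'], p['ip']) for p in ports if p['pod'] != local_pod]

ports = [
    {'pod': 'POD1', 'mac': 'fa:16:3e:00:00:01', 'ip': '10.0.0.3'},
    {'pod': 'POD2', 'mac': 'fa:16:3e:00:00:02', 'ip': '10.0.0.4'},
]

# L2GW1 in POD1 only needs the entry that resides in POD2.
l2gw1_entries = remote_entries_for_pod(ports, 'POD1')
```

The same helper would be invoked again after each port create/delete event
to refresh the peer L2GW, matching the refresh step described above.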
**Mixed VLAN/VxLAN**

To achieve cross-Neutron L2 networking, L2GW is used to connect the L2
networks in different Neutron servers; using L2GW should work for both the
Shared VxLAN and the Mixed VLAN/VxLAN scenarios.

When an L2GW is connected to a local network in the same OpenStack instance,
no matter whether it is VLAN, VxLAN or GRE, the L2GW should be able to
connect to it, and because L2GW is an extension of Neutron, the network UUID
should be enough for the L2GW to connect to the local network.

When an admin user creates a network in the Tricircle, he/she specifies the
network type as one of the types discussed above. In the phase of creating
the network in the Tricircle, only one record is saved in the database; no
network is created in the bottom OpenStack instances.

After the network in the bottom is created successfully, the network
information such as segment ID, network name and network type needs to be
retrieved, and the bottom network registered as one of the segments of the
network in the Tricircle.

In the Tricircle, a network can be created by a tenant or an admin. A tenant
has no way to specify the network type and segment ID, so the default
network type is used instead. When the user uses the network to boot a VM,
``Nova API-GW`` checks the network type. For a Mixed VLAN/VxLAN network,
``Nova API-GW`` first creates the network in the bottom OpenStack instance
without specifying a network type and segment ID, then updates the top
network with the bottom network segmentation information returned by the
bottom OpenStack instance.

A user creates network ``Net1``. The plugin checks the configuration; if
``tenant_network_type`` equals ``mixed_vlan_vxlan``, it invokes the mixed
VLAN and VxLAN driver. The driver needs to do nothing at this point, since
the segment is allocated in the bottom.

A user creates VM1 in AZ1, and connects it to the network ``Net1``. The VM
is booted in bottom ``POD1``, and ``Nova API-GW`` creates the network in
``POD1``, queries the detailed network segmentation information (using the
admin role), gets the network type and segment ID, then updates ``Net1`` in
the Tricircle ``Neutron API Server`` with this new segment.

Then the user creates another VM2 with AZ info AZ2, so the VM is booted in
bottom ``POD2``, which is located in AZ2. ``Nova API-GW`` likewise creates a
network in ``POD2``, queries the network information including segment and
network type, and updates ``Net1`` in the Tricircle ``Neutron API Server``
with this new segment.

The Tricircle plugin detects that ``Net1`` includes more than one network
segment and calls the L2GW driver to start an asynchronous job for
cross-Neutron networking for ``Net1``. The L2GW driver creates L2GW1 in
``POD1`` and L2GW2 in ``POD2``. In ``POD1``, L2GW1 connects the local
``Net1`` and creates an L2GW remote connection to L2GW2, then populates into
L2GW1 the MAC/IP information which resides in ``POD2``. In ``POD2``, L2GW2
connects the local ``Net1`` and creates an L2GW remote connection to L2GW1,
then populates into L2GW2 the remote MAC/IP information which resides in
``POD1``.

The L2GW driver in the Tricircle also detects port creation/deletion API
calls. If a port (MAC/IP) is created or deleted in ``POD1``, the L2GW2
MAC/IP information needs to be refreshed; if a port (MAC/IP) is created or
deleted in ``POD2``, the L2GW1 MAC/IP information needs to be refreshed.

Whether to populate MAC/IP information should be configurable according to
the L2GW capability, and MAC/IP information should be populated only for
ports that do not reside in the same pod.

**L3 bridge network**

Current implementation without cross-Neutron L2 networking:

* A special bridge network is created and connected to the routers in the
  different bottom OpenStack instances. We configure the extra routes of the
  routers to route the packets from one OpenStack instance to another. In
  the current implementation, we create this special bridge network in each
  bottom OpenStack instance with the same ``VLAN ID``, so we have an L2
  network to connect the routers.

Differences between L2 networking for a tenant's VMs and for the L3 bridge
network:

* The creation of the bridge network is triggered when a router interface is
  attached or a router external gateway is added.

* The L2 network for VMs is triggered by ``Nova API-GW``: when a VM is to be
  created in one pod and no network is present there, the network is created
  before the VM is booted, since a network or port parameter is required to
  boot a VM. The IP/MAC for the VM is allocated in the ``Tricircle`` top
  layer, to avoid IP/MAC collisions that could occur if they were allocated
  separately in the bottom pods.

After cross-Neutron L2 networking is introduced, the L3 bridge network
should be updated too.

L3 bridge network N-S (North-South):

* For each tenant, one cross-Neutron N-S bridge network should be created
  for router N-S inter-connection. Just replace the current VLAN N-S bridge
  network with the corresponding Shared VxLAN or Mixed VLAN/VxLAN network.

L3 bridge network E-W (East-West):

* When a router interface is attached, for VLAN the current process to
  establish the E-W bridge network is kept. For Shared VxLAN and Mixed
  VLAN/VxLAN, if an L2 network can be expanded to the current pod, it is
  simply expanded to that pod; all E-W traffic then goes out from the local
  L2 network, and no bridge network is needed.

* For example, with (Net1, Router1) in ``Pod1`` and (Net2, Router1) in
  ``Pod2``: if ``Net1`` is a cross-Neutron L2 network and can be expanded to
  ``Pod2``, then ``Net1`` is simply expanded to ``Pod2``. After the ``Net1``
  expansion (just like cross-Neutron L2 networking spreading one network
  across multiple Neutron servers), it will look like (Net1, Router1) in
  ``Pod1`` and (Net1, Net2, Router1) in ``Pod2``. In ``Pod2`` there is no VM
  in ``Net1``; it is used only for E-W traffic. Now the E-W traffic will
  look like this:

  from Net2 to Net1:

  Net2 in Pod2 -> Router1 in Pod2 -> Net1 in Pod2 -> L2GW in Pod2 ---> L2GW
  in Pod1 -> Net1 in Pod1.

Note: The traffic from ``Net1`` in ``Pod2`` to ``Net1`` in ``Pod1`` can
bypass the L2GW in ``Pod2``; that is, outbound traffic can bypass the local
L2GW if the remote VTEP of the L2GW is known to the local compute node and
the packet from the local compute node with VxLAN encapsulation can be
routed to the remote L2GW directly. This is up to the L2GW implementation.
With inbound traffic going through the L2GW, the inbound traffic to the VM
will not be impacted by VM migration from one host to another.

If ``Net2`` is also a cross-Neutron L2 network and can be expanded to
``Pod1``, then ``Net2`` is simply expanded to ``Pod1``. After the ``Net2``
expansion (just like cross-Neutron L2 networking spreading one network
across multiple Neutron servers), it will look like (Net2, Net1, Router1) in
``Pod1`` and (Net1, Net2, Router1) in ``Pod2``. In ``Pod1`` there is no VM
in Net2; it is used only for E-W traffic. Now the E-W traffic from ``Net1``
to ``Net2`` will look like this:

Net1 in Pod1 -> Router1 in Pod1 -> Net2 in Pod1 -> L2GW in Pod1 ---> L2GW in
Pod2 -> Net2 in Pod2.

To limit the complexity, a network's az_hint can only be specified at
creation time, and no update is allowed. If the az_hint needs to be updated,
you have to delete the network and create it again.

If the network can't be expanded, then an E-W bridge network is needed. For
example, with Net1(AZ1, AZ2, AZ3), Router1 and Net2(AZ4, AZ5, AZ6), Router1,
a cross-Neutron L2 bridge network has to be established:

Net1(AZ1, AZ2, AZ3), Router1 --> E-W bridge network ---> Router1,
Net2(AZ4, AZ5, AZ6).

Assignee(s)
-----------

Primary assignee:


Other contributors:


Work Items
----------

Dependencies
============

None


Testing
=======

None


Documentation Impact
====================

None


References
==========

[1] https://docs.google.com/document/d/18kZZ1snMOCD9IQvUKI5NVDzSASpw-QKj7l2zNqMEd3g/

[2] https://review.opendev.org/#/c/270786/

[3] https://github.com/openstack/networking-l2gw/blob/master/specs/kilo/l2-gateway-api.rst

[4] https://docs.openstack.org/api-ref/network/v2/index.html#networks-multi-provider-ext

[5] https://docs.openstack.org/mitaka/networking-guide/config-az.html

[6] https://review.opendev.org/#/c/306224/

@@ -1,236 +0,0 @@

=================================
Dynamic Pod Binding in Tricircle
=================================

Background
==========

Most public cloud infrastructure is built with Availability Zones (AZs).
Each AZ consists of one or more discrete data centers, each with high
bandwidth and low latency network connections, and separate power and
facilities. These AZs offer cloud tenants the ability to operate production
applications and databases; deployed into multiple AZs, these are more
highly available, fault tolerant and scalable than in a single data center.

In production clouds, each AZ is built from modularized OpenStack
deployments, and each OpenStack instance is one pod. Moreover, one AZ can
include multiple pods. The pods are classified into different categories.
For example, servers in one pod may be only for general purposes, while
other pods may be built for heavy-load CAD modeling with GPUs. So the pods
in one AZ can be divided into different groups: different pod groups for
different purposes, with different VM cost and performance.

The concept of a "pod" was created for the Tricircle to facilitate managing
OpenStack instances among AZs, and is therefore transparent to cloud
tenants. The Tricircle maintains and manages a pod binding table which
records the mapping relationship between a cloud tenant and pods. When the
cloud tenant creates a VM or a volume, the Tricircle tries to assign a pod
based on the pod binding table.

Motivation
==========

In the resource allocation scenario, a tenant may create a VM in one pod and
a new volume in another pod. If the tenant attempts to attach the volume to
the VM, the operation will fail. In other words, the volume should be in the
same pod as the VM; otherwise the volume and VM would not be able to
complete the attachment. Hence, the Tricircle needs to enforce the pod
binding so as to guarantee that VM and volume are created in one pod.

In the capacity expansion scenario, when resources in one pod are exhausted,
a new pod of the same type should be added to the AZ. New resources of this
type should then be provisioned in the newly added pod, which requires
dynamic change of the pod binding. The pod binding can be changed
dynamically by the Tricircle, or by an admin through the admin API for
maintenance purposes. For example, during a maintenance (upgrade, repair)
window, all new provisioning requests should be forwarded to the running
pod, not to the one under maintenance.

Solution: dynamic pod binding
=============================

Capacity expansion inside one pod is quite a headache: you have to estimate,
calculate, monitor, simulate, test, and do online gray expansion of the
controller nodes and network nodes whenever you add new machines to the pod.
This becomes a big challenge as more and more resources are added to one
pod, and eventually you will reach the limits of a single OpenStack
instance. If a pod's resources are exhausted or it reaches the limit for
provisioning new resources, the Tricircle needs to bind the tenant to a new
pod instead of expanding the current pod without limit. The Tricircle needs
to select a proper pod and keep the binding for a duration; during this
duration, VMs and volumes for one tenant will be created in the same pod.

For example, suppose we have two groups of pods, and each group has 3 pods,
i.e.,

GroupA(Pod1, Pod2, Pod3) for general purpose VMs,

GroupB(Pod4, Pod5, Pod6) for CAD modeling.

Tenant1 is bound to Pod1 and Pod4 during the first phase, for several
months. In the first phase, we can just add a weight to each pod, for
example Pod1 with weight 1 and Pod2 with weight 2. This could be done by
adding one new field to the pod table, or with no field at all, just
ordering the pods by the order in which they were created in the Tricircle.
In this case, we use the pod creation time as the weight.

If the tenant wants to allocate a VM/volume for a general VM, Pod1 should be
selected. This can be implemented with flavor or volume type metadata. For
general VMs/volumes, there is no special tag in the flavor or volume type
metadata.

If the tenant wants to allocate a VM/volume for a CAD modeling VM, Pod4
should be selected. For CAD modeling VMs/volumes, a special tag
"resource: CAD Modeling" in the flavor or volume type metadata determines
the binding.

When it is detected that there are no more resources in Pod1 or Pod4, the
Tricircle queries the pod table, based on the resource_affinity_tag, for
available pods which provision the specific type of resources. The field
resource_affinity is a key-value pair. A pod is selected when there is a
matching key-value pair in the flavor extra-spec or volume extra-spec. A
tenant is bound to one pod in each group of pods with the same
resource_affinity_tag. In this case, the Tricircle obtains Pod2 and Pod3 for
general purpose, as well as Pod5 and Pod6 for CAD purposes. The Tricircle
then needs to change the binding; for example, tenant1 needs to be bound to
Pod2 and Pod5.

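The rebinding step above, matching a request's flavor/volume-type extra-spec
against each pod's resource_affinity key-value pair, could look roughly like
this. The pod records and the extra-spec layout are simplified assumptions
for illustration.

```python
# Sketch: pick candidate pods whose resource_affinity key-value pair matches
# the flavor/volume-type extra-spec of the request. The pod records and the
# extra-spec format are simplified assumptions.

PODS = [
    {'name': 'Pod2', 'resource_affinity': {}},
    {'name': 'Pod3', 'resource_affinity': {}},
    {'name': 'Pod5', 'resource_affinity': {'resource': 'CAD Modeling'}},
    {'name': 'Pod6', 'resource_affinity': {'resource': 'CAD Modeling'}},
]

def candidate_pods(extra_spec):
    """Pods whose resource_affinity matches the request's extra-spec.

    An empty affinity tag marks a general-purpose pod, matched by requests
    carrying no special tag.
    """
    return [p['name'] for p in PODS if p['resource_affinity'] == extra_spec]
```

With these assumed records, a request tagged "resource: CAD Modeling" would
yield the CAD pods, and an untagged request the general-purpose pods.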
Implementation
==============

Measurement
-----------

To get information about the resource utilization of pods, the Tricircle
needs to conduct some measurements on the pods. The statistics task should
be done in the bottom pods.

For resource usage, cells currently provide an interface to retrieve usage
per cell [1]. OpenStack provides details of the capacity of a cell,
including disk and RAM, via the API for showing cell capacities [1].

If OpenStack is not running in cells mode, we can ask Nova to provide an
interface to show the usage detail per AZ. Moreover, an API for usage query
at the host level is provided for admins [3], through which we can obtain
details of a host, including CPU, memory, disk, and so on.

Cinder also provides an interface to retrieve the backend pool usage,
including updated time, total capacity, free capacity and so on [2].

The Tricircle needs a task that collects the usage from the bottom pods on a
daily basis, to evaluate whether the threshold has been reached. A threshold
or headroom can be configured for each pod so that its resources are not
exhausted to 100%.

On top there should be no heavy processing, so aggregating the summary
information from the bottom can be done in the Tricircle. After collecting
the details, the Tricircle can judge whether a pod has reached its limit.

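The threshold check described above could be sketched as follows. The usage
field names and the default threshold value are assumptions, not defined by
this spec.

```python
# Sketch: decide whether a pod has reached its configured usage threshold,
# based on usage collected from the bottom pod. Field names and the default
# threshold are illustrative assumptions.

def pod_reached_limit(usage, threshold=0.8):
    """True when used/total capacity meets or exceeds the threshold.

    A null usage means the usage has not been initialized yet, which is
    treated as "plenty of room left", per the data model discussion.
    """
    if usage is None:
        return False
    used = usage['total_capacity'] - usage['free_capacity']
    return used / usage['total_capacity'] >= threshold
```

The daily collection task would feed each pod's aggregated usage into such a
check, and pods over the threshold would be excluded from new bindings.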
Tricircle
---------

The Tricircle needs a framework to support different binding policies
(filters).

Each pod is one OpenStack instance, including controller nodes and compute
nodes. E.g.,

::

                          +-> controller(s) - pod1 <--> compute nodes <---+
                          |                                               |
    The tricircle         +-> controller(s) - pod2 <--> compute nodes <---+ resource migration, if necessary
    (resource controller) ....                                            |
                          +-> controller(s) - pod{N} <--> compute nodes <-+

The Tricircle selects a pod to decide to which controller the requests
should be forwarded; the controllers in the selected pod then do their own
scheduling.

The simplest binding filter is as follows. Line up all available pods in a
list and always select the first one. When all the resources in the first
pod have been allocated, remove it from the list. This is quite like how a
production cloud is built: at first, only a few pods are in the list, and
then more and more pods are added when there are not enough resources in the
current cloud. For example,

List1 for the general pool: Pod1 <- Pod2 <- Pod3

List2 for the CAD modeling pool: Pod4 <- Pod5 <- Pod6

If Pod1's resources are exhausted, Pod1 is removed from List1. List1
becomes: Pod2 <- Pod3.

If Pod4's resources are exhausted, Pod4 is removed from List2. List2
becomes: Pod5 <- Pod6.

If the tenant wants to allocate resources for a general VM, the Tricircle
selects Pod2. If the tenant wants to allocate resources for a CAD modeling
VM, the Tricircle selects Pod5.

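A minimal sketch of this first-available binding filter, covering only the
list handling described above; the function names are illustrative.

```python
# Sketch of the simplest binding filter: always pick the head of the pod
# list, and drop a pod from the list once its resources are exhausted.

def select_pod(pool):
    """Always bind to the first pod still in the list."""
    if not pool:
        raise LookupError('no pod with free resources in this pool')
    return pool[0]

def mark_exhausted(pool, pod):
    """Remove an exhausted pod so the next one becomes the head."""
    pool.remove(pod)
```

For example, starting from ``['Pod1', 'Pod2', 'Pod3']``, selection yields
Pod1 until it is marked exhausted, after which Pod2 becomes the binding.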
Filtering
---------

For the pod selection strategy, we need a series of filters. Before dynamic
pod binding is implemented, the binding criteria are hard coded to select
the first pod in the AZ. Hence, we need to design a series of filter
algorithms. Firstly, we plan to design an ALLPodsFilter which does no
filtering and passes all the available pods. Secondly, we plan to design an
AvailabilityZoneFilter which passes the pods matching the specified
availability zone. Thirdly, we plan to design a ResourceAffinityFilter which
passes the pods matching the specified resource type. Based on the
resource_affinity_tag, the Tricircle can be aware of which type of resource
the tenant wants to provision. In the future, we can add more filters, which
requires adding more information to the pod table.

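The filter series could be wired together as a simple chain, each filter
narrowing the candidate pods. This is a sketch: the classes follow the names
proposed above, but their interfaces and the pod/spec layout are
assumptions.

```python
# Sketch of a filter chain in the style proposed above. Each filter takes
# the candidate pods plus the request spec and returns the pods that pass.
# Interfaces and data layout are illustrative assumptions.

class ALLPodsFilter:
    def filter(self, pods, spec):
        return pods  # no filtering; pass everything

class AvailabilityZoneFilter:
    def filter(self, pods, spec):
        # When no AZ is specified, every pod matches itself and passes.
        return [p for p in pods if p['az'] == spec.get('az', p['az'])]

class ResourceAffinityFilter:
    def filter(self, pods, spec):
        wanted = spec.get('resource_affinity', {})
        return [p for p in pods if p['resource_affinity'] == wanted]

def run_filters(filters, pods, spec):
    """Apply each filter in order, narrowing the candidate list."""
    for f in filters:
        pods = f.filter(pods, spec)
    return pods
```

New filters would simply be appended to the list passed to ``run_filters``,
which matches the "add more filters in the future" direction above.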
Weighting
---------

After filtering, the Tricircle obtains the available pods for a tenant. The
Tricircle then needs to select the most suitable pod for the tenant. Hence,
we need to define a weight function to calculate the corresponding weight of
each pod. Based on the weights, the Tricircle selects the pod with the
maximum weight value. When calculating the weight of a pod, we need to
design a series of weighers. We first take the pod creation time into
consideration when designing the weight function. The second weigher is the
idle capacity, to select the pod with the most idle capacity. Other metrics
will be added in the future, e.g., cost.

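The two weighers mentioned, pod creation time and idle capacity, could be
combined as follows. The multipliers, field names and normalization are
illustrative assumptions, not part of the proposal.

```python
# Sketch: compute a weight per pod from creation order and idle capacity,
# then pick the pod with the maximum weight. The multipliers are assumed.

def weigh_pods(pods, time_multiplier=1.0, capacity_multiplier=1.0):
    """Return pods sorted by descending combined weight.

    'created_order' is smaller for older pods, so negating it makes older
    pods weigh more; 'idle_capacity' favors the emptiest pod.
    """
    def weight(pod):
        return (time_multiplier * -pod['created_order'] +
                capacity_multiplier * pod['idle_capacity'])
    return sorted(pods, key=weight, reverse=True)

pods = [
    {'name': 'Pod2', 'created_order': 2, 'idle_capacity': 0.9},
    {'name': 'Pod3', 'created_order': 3, 'idle_capacity': 0.5},
]
best = weigh_pods(pods)[0]['name']
```

Further weighers (e.g., cost) would just contribute additional terms to the
weight function, each with its own multiplier.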
Data Model Impact
|
||||
==================
|
||||
|
||||
Firstly, we need to add a column “resource_affinity_tag” to the pod table,
|
||||
which is used to store the key-value pair, to match flavor extra-spec and
|
||||
volume extra-spec.
|
||||
|
||||
Secondly, in the pod binding table, we need to add fields of start binding
|
||||
time and end binding time, so the history of the binding relationship could
|
||||
be stored.
|
||||
|
||||
Thirdly, we need a table to store the usage of each pod for Cinder/Nova.
|
||||
We plan to use JSON object to store the usage information. Hence, even if
|
||||
the usage structure is changed, we don't need to update the table. And if
|
||||
the usage value is null, that means the usage has not been initialized yet.
|
||||
As just mentioned above, the usage could be refreshed in daily basis. If it's
|
||||
not initialized yet, it means there is still lots of resources available,
|
||||
which could be scheduled just like this pod has not reach usage threshold.
|
||||
|
||||
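The null-means-uninitialized semantics of the JSON usage column can be
sketched as below; the field names ("vcpus", "vcpus_used") and the threshold
policy are illustrative assumptions, not the actual Tricircle schema.

```python
import json

# Sketch of how a scheduler might interpret the JSON usage column.
# Field names and the 80% threshold are assumptions for illustration.

def is_under_threshold(usage_json, threshold=0.8):
    if usage_json is None:
        # Usage not initialized yet: treat the pod as having free capacity.
        return True
    usage = json.loads(usage_json)
    return usage["vcpus_used"] / usage["vcpus"] < threshold


print(is_under_threshold(None))                                # True
print(is_under_threshold('{"vcpus": 100, "vcpus_used": 90}'))  # False
```

Because the column holds opaque JSON, adding new usage fields later requires
no schema migration, which is the point made in the paragraph above.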
Dependencies
============

None

Testing
=======

None

Documentation Impact
====================

None

References
==========

[1] https://docs.openstack.org/api-ref/compute/#capacities

[2] https://docs.openstack.org/api-ref/block-storage/v2/index.html#volumes-volumes

[3] https://docs.openstack.org/api-ref/compute/#show-server-details

=======================================
Enhance Reliability of Asynchronous Job
=======================================

Background
==========

Currently we use the cast method in our RPC client to trigger asynchronous
jobs in the XJob daemon. After one of the worker threads receives the RPC
message from the message broker, it registers the job in the database and
starts to run the handle function. The registration guarantees that an
asynchronous job will not be lost after the job fails, and the failed job can
be redone. The detailed discussion of the asynchronous job process in the
XJob daemon is covered in our design document [1]_.

Though asynchronous jobs are correctly saved after worker threads get the RPC
message, we still risk losing jobs. With the cast method, it's only
guaranteed that the message is received by the message broker, but there's no
guarantee that the message is received by the message consumer, i.e., the RPC
server thread running in the XJob daemon. According to the RabbitMQ
documentation, undelivered messages will be lost if the RabbitMQ server stops
[2]_. Message persistence or publisher confirms [3]_ can be used to increase
reliability, but they sacrifice performance. On the other hand, we cannot
assume that message brokers other than RabbitMQ provide similar persistence
or confirmation functionality. Therefore, the Tricircle itself should handle
the asynchronous job reliability problem as far as possible. Since we already
have a framework to register, run and redo asynchronous jobs in the XJob
daemon, we propose a cheaper way to improve reliability.

Proposal
========

One straightforward way to make sure that the RPC server has received the RPC
message is to use the call method. The RPC client is blocked until the RPC
server replies to the message when it uses the call method to send the RPC
request. So if something goes wrong before the reply, the RPC client is aware
of it. Of course we cannot make the RPC client wait too long, thus RPC
handlers on the RPC server side need to be simple and quick to run. Thanks to
the asynchronous job framework we already have, migrating from the cast
method to the call method is easy.

Here is the flow of the current process::

   +--------+         +--------+          +---------+       +---------------+       +----------+
   |        |         |        |          |         |       |               |       |          |
   |  API   |         |  RPC   |          | Message |       |  RPC Server   |       | Database |
   | Server |         | client |          | Broker  |       | Handle Worker |       |          |
   |        |         |        |          |         |       |               |       |          |
   +---+----+         +---+----+          +----+----+       +-------+-------+       +----+-----+
       |                  |                    |                    |                    |
       |   call RPC API   |                    |                    |                    |
       +------------------>                    |                    |                    |
       |                  |  send cast message |                    |                    |
       |                  +-------------------->                    |                    |
       |   call return    |                    |  dispatch message  |                    |
       <------------------+                    +-------------------->                    |
       |                  |                    |                    |    register job    |
       |                  |                    |                    +-------------------->
       |                  |                    |                    |                    |
       |                  |                    |                    |    obtain lock     |
       |                  |                    |                    +-------------------->
       |                  |                    |                    |                    |
       |                  |                    |                    |  run job           |
       |                  |                    |                    +----+               |
       |                  |                    |                    |    |               |
       |                  |                    |                    |    |               |
       |                  |                    |                    <----+               |
       |                  |                    |                    |                    |
       |                  |                    |                    |                    |
       +                  +                    +                    +                    +

We can just leave the **register job** phase in the RPC handler and put the
**obtain lock** and **run job** phases in a separate thread, so the RPC
handler is simple enough to be invoked with the call method. Here is the
proposed flow::

   +--------+         +--------+          +---------+       +---------------+       +----------+       +-------------+     +-------+
   |        |         |        |          |         |       |               |       |          |       |             |     |       |
   |  API   |         |  RPC   |          | Message |       |  RPC Server   |       | Database |       | RPC Server  |     |  Job  |
   | Server |         | client |          | Broker  |       | Handle Worker |       |          |       | Loop Worker |     | Queue |
   |        |         |        |          |         |       |               |       |          |       |             |     |       |
   +---+----+         +---+----+          +----+----+       +-------+-------+       +----+-----+       +------+------+     +---+---+
       |                  |                    |                    |                    |                    |                |
       |   call RPC API   |                    |                    |                    |                    |                |
       +------------------>                    |                    |                    |                    |                |
       |                  |  send call message |                    |                    |                    |                |
       |                  +-------------------->                    |                    |                    |                |
       |                  |                    |  dispatch message  |                    |                    |                |
       |                  |                    +-------------------->                    |                    |                |
       |                  |                    |                    |    register job    |                    |                |
       |                  |                    |                    +-------------------->                    |                |
       |                  |                    |                    |                    |                    |                |
       |                  |                    |                    |    job enqueue     |                    |                |
       |                  |                    |                    +--------------------------------------------------------->
       |                  |                    |                    |                    |                    |                |
       |                  |                    |   reply message    |                    |                    |  job dequeue   |
       |                  |                    <--------------------+                    |                    +---------------->
       |                  | send reply message |                    |                    |    obtain lock     |                |
       |                  <--------------------+                    |                    <--------------------+                |
       |   call return    |                    |                    |                    |                    |                |
       <------------------+                    |                    |                    |                    |  run job       |
       |                  |                    |                    |                    |                    +----+           |
       |                  |                    |                    |                    |                    |    |           |
       |                  |                    |                    |                    |                    |    |           |
       |                  |                    |                    |                    |                    <----+           |
       |                  |                    |                    |                    |                    |                |
       +                  +                    +                    +                    +                    +                +

In the above graph, **Loop Worker** is a newly introduced thread to do the
actual work. **Job Queue** is an eventlet queue [4]_ used to coordinate the
**Handle Worker**, which produces job entries, and the **Loop Worker**, which
consumes job entries. While accessing an empty queue, the **Loop Worker** is
blocked until some job entries are put into the queue. The **Loop Worker**
retrieves job entries from the job queue and then starts to run them. Similar
to the original flow, since multiple workers may get the same type of job for
the same resource at the same time, workers need to obtain the lock before
they can run the job. One problem occurs whenever the XJob daemon stops
before it finishes all the jobs in the job queue: all unfinished jobs are
lost. To solve this, we make changes to the original periodic task that is
used to redo failed jobs, and let it also handle the jobs which have been
registered for a certain time but haven't been started. So both failed jobs
and "orphan" new jobs can be picked up and redone.
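The producer/consumer coordination between the Handle Worker and the Loop
Worker can be sketched as below. The real code uses an eventlet queue and
green threads; the stdlib queue and thread used here expose the same blocking
put/get semantics, and the job fields and sentinel are assumptions for the
example.

```python
import queue
import threading

# Sketch of the Handle Worker / Loop Worker coordination.
# eventlet.queue.Queue offers the same blocking put/get interface.

job_queue = queue.Queue()
done = []


def loop_worker():
    while True:
        job = job_queue.get()   # blocks while the queue is empty
        if job is None:         # sentinel used only to stop this sketch
            break
        # In XJob this is where the worker would obtain the job lock
        # before running the handle function.
        done.append(job["type"])


worker = threading.Thread(target=loop_worker)
worker.start()

# The Handle Worker side: register the job in the database (omitted here),
# then enqueue it for the Loop Worker.
job_queue.put({"type": "configure_route", "resource": "router-1"})
job_queue.put(None)
worker.join()
print(done)  # ['configure_route']
```

The blocking `get` is what lets the Loop Worker sleep while the queue is
empty, matching the behavior described above.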

You can see that the **Handle Worker** doesn't do much work; it just consumes
RPC messages, registers jobs, then puts job items in the job queue. So one
extreme solution here is to register new jobs on the API server side and
start worker threads to retrieve jobs from the database and run them. In this
way, we can remove all the RPC processes and use the database to coordinate.
The drawback of this solution is that we don't dispatch jobs. All the workers
query jobs from the database, so there is a high probability that some of the
workers obtain the same job and thus a race occurs. In the first solution,
the message broker helps us to dispatch messages, and so to dispatch jobs.

Considering that job dispatch is important, we can make some changes to the
second solution and move to a third one, that is, to also register new jobs
on the API server side, but still use the cast method to trigger asynchronous
jobs in the XJob daemon. Since job registration is done on the API server
side, we are not afraid that the jobs will be lost if cast messages are lost.
If the API server side fails to register the job, it returns a failure
response; if the registration succeeds, the job will eventually be done by
the XJob daemon. By using RPC, we dispatch jobs with the help of message
brokers. One thing which makes the cast method better than the call method is
that retrieving RPC messages and running job handles are done in the same
thread, so if one XJob daemon is busy handling jobs, RPC messages will not be
dispatched to it. However, when using the call method, RPC messages are
retrieved by one thread (the **Handle Worker**) and job handles are run by
another thread (the **Loop Worker**), so the XJob daemon may accumulate many
jobs in the queue while it's busy handling jobs. This solution shares one
problem with the call method solution: if cast messages are lost, the new
jobs are registered in the database but no XJob daemon is aware of them. It
is solved the same way, using the periodic task to pick up these "orphan"
jobs. Here is the flow::

   +--------+         +--------+          +---------+       +---------------+       +----------+
   |        |         |        |          |         |       |               |       |          |
   |  API   |         |  RPC   |          | Message |       |  RPC Server   |       | Database |
   | Server |         | client |          | Broker  |       | Handle Worker |       |          |
   |        |         |        |          |         |       |               |       |          |
   +---+----+         +---+----+          +----+----+       +-------+-------+       +----+-----+
       |                  |                    |                    |                    |
       |   call RPC API   |                    |                    |                    |
       +------------------>                    |                    |                    |
       |                  |    register job    |                    |                    |
       |                  +-------------------------------------------------------------->
       |                  |                    |                    |                    |
       |                  |  [if succeed to    |                    |                    |
       |                  |   register job]    |                    |                    |
       |                  |  send cast message |                    |                    |
       |                  +-------------------->                    |                    |
       |   call return    |                    |  dispatch message  |                    |
       <------------------+                    +-------------------->                    |
       |                  |                    |                    |    obtain lock     |
       |                  |                    |                    +-------------------->
       |                  |                    |                    |                    |
       |                  |                    |                    |  run job           |
       |                  |                    |                    +----+               |
       |                  |                    |                    |    |               |
       |                  |                    |                    |    |               |
       |                  |                    |                    <----+               |
       |                  |                    |                    |                    |
       |                  |                    |                    |                    |
       +                  +                    +                    +                    +
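The periodic task that redoes failed jobs and picks up "orphan" jobs (jobs
registered more than a threshold ago but never started) can be sketched as
below; the job fields, states and threshold are assumptions, not Tricircle's
actual schema.

```python
# Sketch of the periodic "redo" task described above.
# Field names and states are illustrative assumptions.

ORPHAN_THRESHOLD = 60  # seconds


def jobs_to_redo(jobs, now):
    redo = []
    for job in jobs:
        if job["status"] == "failed":
            redo.append(job)
        elif (job["status"] == "new"
                and now - job["registered_at"] > ORPHAN_THRESHOLD):
            # Registered long ago but never started: the cast message (or
            # the queued entry) was probably lost, so take the job over.
            redo.append(job)
    return redo


now = 1000
jobs = [
    {"id": 1, "status": "failed", "registered_at": 900},
    {"id": 2, "status": "new", "registered_at": 995},   # recent, leave it
    {"id": 3, "status": "new", "registered_at": 100},   # orphan
    {"id": 4, "status": "success", "registered_at": 100},
]
print([j["id"] for j in jobs_to_redo(jobs, now)])  # [1, 3]
```

The same task serves both the call-method and the cast-method solutions,
which is why the "downtime" in the comparison below is bounded by the
threshold rather than unbounded.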

Discussion
==========

In this section we discuss the pros and cons of the above three solutions.

.. list-table:: **Solution Comparison**
   :header-rows: 1

   * - Solution
     - Pros
     - Cons
   * - API server uses call
     - no RPC message lost
     - downtime of unfinished jobs in the job queue when the XJob daemon
       stops, job dispatch not based on XJob daemon workload
   * - API server registers jobs + no RPC
     - no requirement on RPC (message broker), no downtime
     - no job dispatch, conflicts cost time
   * - API server registers jobs + uses cast
     - job dispatch based on XJob daemon workload
     - downtime of jobs lost due to lost cast messages

Downtime means that after a job is dispatched to a worker, other workers need
to wait for a certain time to determine that the job has expired before
taking it over.

Conclusion
==========

We decided to implement the third solution (API server registers jobs + uses
cast) since it improves asynchronous job reliability and at the same time has
better workload dispatch.

Data Model Impact
=================

None

Dependencies
============

None

Documentation Impact
====================

None

References
==========

.. [1] https://docs.google.com/document/d/1zcxwl8xMEpxVCqLTce2-dUOtB-ObmzJTbV1uSQ6qTsY
.. [2] https://www.rabbitmq.com/tutorials/tutorial-two-python.html
.. [3] https://www.rabbitmq.com/confirms.html
.. [4] http://eventlet.net/doc/modules/queue.html

==============================================
Layer-3 Networking and Combined Bridge Network
==============================================

Background
==========

To achieve cross-Neutron layer-3 networking, we utilize a bridge network to
connect networks in each Neutron server, as shown below:

East-West networking::

   +-----------------------+                +-----------------------+
   | OpenStack1            |                | OpenStack2            |
   |                       |                |                       |
   | +------+  +---------+ | +------------+ | +---------+  +------+ |
   | | net1 |  |      ip1| | | bridge net | | |ip2      |  | net2 | |
   | |      +--+    R    +---+            +---+    R    +--+      | |
   | |      |  |         | | |            | | |         |  |      | |
   | +------+  +---------+ | +------------+ | +---------+  +------+ |
   +-----------------------+                +-----------------------+

   Fig 1

North-South networking::

   +---------------------+                  +-------------------------------+
   | OpenStack1          |                  | OpenStack2                    |
   |                     |                  |                               |
   | +------+  +-------+ | +--------------+ | +-------+  +----------------+ |
   | | net1 |  |    ip1| | | bridge net   | | |ip2    |  |  external net  | |
   | |      +--+  R1   +---+              +---+  R2   +--+                | |
   | |      |  |       | | | 100.0.1.0/24 | | |       |  | 163.3.124.0/24 | |
   | +------+  +-------+ | +--------------+ | +-------+  +----------------+ |
   +---------------------+                  +-------------------------------+

   Fig 2

To support east-west networking, we configure extra routes in the routers in
each OpenStack cloud::

   In OpenStack1, destination: net2, nexthop: ip2
   In OpenStack2, destination: net1, nexthop: ip1

To support north-south networking, we set the bridge network as the external
network in OpenStack1 and as an internal network in OpenStack2. For an
instance in net1 to access the external network, the packets are SNATed
twice, first SNATed to ip1, then SNATed to ip2. For floating IP binding, an
IP in net1 is first bound to an IP (like 100.0.1.5) in the bridge network
(the bridge network is attached to R1 as an external network), then the IP
(100.0.1.5) in the bridge network is bound to an IP (like 163.3.124.8) in the
real external network (the bridge network is attached to R2 as an internal
network).
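The extra-route configuration above could be built programmatically. The
helper below is a hedged sketch: the function name and data layout are
illustrative, though Neutron's extra-route update does expect a list of
``{"destination", "nexthop"}`` dicts, and the commented openstacksdk call is
how such a payload would typically be applied.

```python
# Sketch of building the extra-route payload for east-west traffic.
# Helper name and inputs are assumptions for illustration.

def build_extra_routes(remote_nets):
    """remote_nets maps a remote network CIDR to the bridge-net nexthop IP
    of the router in the remote OpenStack cloud."""
    return [{"destination": cidr, "nexthop": ip}
            for cidr, ip in sorted(remote_nets.items())]


# Routes configured in OpenStack1 (net2 lives behind ip2 on the bridge net):
routes = build_extra_routes({"10.0.2.0/24": "100.0.1.2"})
print(routes)  # [{'destination': '10.0.2.0/24', 'nexthop': '100.0.1.2'}]

# The payload would then be applied with something like (not run here):
#   conn.network.update_router(router, routes=routes)   # openstacksdk
```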

Problems
========

The idea of introducing a bridge network is good, but there are some problems
in the current usage of the bridge network.

Redundant Bridge Network
------------------------

We use two bridge networks to achieve layer-3 networking for each tenant. If
VLAN is used as the bridge network type, then, limited by the range of VLAN
tags, only 2048 pairs of bridge networks can be created. The number of
tenants supported is far from enough.

Redundant SNAT
--------------

In the current implementation, packets are SNATed twice for outbound traffic
and DNATed twice for inbound traffic. The drawback is that packets of
outbound traffic consume extra operations. Also, we need to maintain an extra
floating IP pool for inbound traffic.

DVR support
-----------

The bridge network is attached to the router as an internal network for
east-west networking, and also for north-south networking when the real
external network and the router are not located in the same OpenStack cloud.
This is fine when the bridge network is VLAN type, since packets directly go
out of the host and are exchanged by switches. But if we would like to
support VxLAN as the bridge network type later, attaching the bridge network
as an internal network in the DVR scenario will cause trouble. The way DVR
connects internal networks is that packets are routed locally in each host,
and if the destination is not in the local host, the packets are sent to the
destination host via a VxLAN tunnel. The problem is that if the bridge
network is attached as an internal network, the router interfaces will exist
in all the hosts where the router namespaces are created, so we need to
maintain lots of VTEPs and VxLAN tunnels for the bridge network in the
Tricircle. Ports in the bridge network are located in different OpenStack
clouds, so a local Neutron server is not aware of ports in other OpenStack
clouds and will not set up the VxLAN tunnels for us.

Proposal
========

To address the above problems, we propose to combine the bridge networks for
east-west and north-south networking. The bridge network is always attached
to routers as an external network. In the DVR scenario, different from router
interfaces, the router gateway will only exist in the SNAT namespace in a
specific host, which reduces the number of VTEPs and VxLAN tunnels the
Tricircle needs to handle. By setting the "enable_snat" option to "False"
when attaching the router gateway, packets will not be SNATed when going
through the router gateway, so packets are only SNATed and DNATed once, in
the real external gateway. However, since one router can only be attached to
one external network, in the OpenStack cloud where the real external network
is located, we need to add one more router to connect the bridge network with
the real external network. The network topology is shown below::

   +-------------------------+                 +-------------------------+
   |OpenStack1               |                 |OpenStack2               |
   | +------+   +--------+   |  +------------+  |  +--------+   +------+ |
   | |      |   |     IP1|   |  |            |  |  |IP2     |   |      | |
   | | net1 +---+   R1   XXXXXXXX bridge net XXXXXXXX  R2   +---+ net2 | |
   | |      |   |        |   |  |            |  |  |        |   |      | |
   | +------+   +--------+   |  +---X----+---+  |  +--------+   +------+ |
   |                         |      X    |      |                        |
   +-------------------------+      X    |      +------------------------+
                                    X    |
                                    X    |
   +--------------------------------X----|-----------------------------------+
   |OpenStack3                      X    |                                    |
   |                                X    |                                    |
   | +------+    +--------+         X    |    +--------+   +--------------+   |
   | |      |    |     IP3|         X    |    |IP4     |   |              |   |
   | | net3 +----+   R3   XXXXXXXXXXX    +----+   R4   XXXXX external net |   |
   | |      |    |        |                   |        |   |              |   |
   | +------+    +--------+                   +--------+   +--------------+   |
   |                                                                          |
   +--------------------------------------------------------------------------+

   router interface: -----
   router gateway: XXXXX
   IPn: router gateway ip or router interface ip

   Fig 3

Extra routes and gateway IPs are configured to build the connection::

   routes of R1: net2 via IP2
                 net3 via IP3
   external gateway ip of R1: IP4
   (IP2 and IP3 are from bridge net, so routes will only be created in
   the SNAT namespace)

   routes of R2: net1 via IP1
                 net3 via IP3
   external gateway ip of R2: IP4
   (IP1 and IP3 are from bridge net, so routes will only be created in
   the SNAT namespace)

   routes of R3: net1 via IP1
                 net2 via IP2
   external gateway ip of R3: IP4
   (IP1 and IP2 are from bridge net, so routes will only be created in
   the SNAT namespace)

   routes of R4: net1 via IP1
                 net2 via IP2
                 net3 via IP3
   external gateway ip of R4: real-external-gateway-ip
   (R4 is deployed in non-DVR mode)

An alternative solution, which removes the extra router, is that for the
router located in the same OpenStack cloud as the real external network, we
attach the bridge network as an internal network, so the real external
network can be attached to the same router. Here is the topology::

   +-------------------------+                 +-------------------------+
   |OpenStack1               |                 |OpenStack2               |
   | +------+   +--------+   |  +------------+  |  +--------+   +------+ |
   | |      |   |     IP1|   |  |            |  |  |IP2     |   |      | |
   | | net1 +---+   R1   XXXXXXXX bridge net XXXXXXXX  R2   +---+ net2 | |
   | |      |   |        |   |  |            |  |  |        |   |      | |
   | +------+   +--------+   |  +-----+------+  |  +--------+   +------+ |
   |                         |        |         |                        |
   +-------------------------+        |         +------------------------+
                                      |
                                      |
   +----------------------------------|---------------------------+
   |OpenStack3                        |                           |
   |                                  |                           |
   | +------+                     +---+----+    +--------------+  |
   | |      |                     |    IP3 |    |              |  |
   | | net3 +---------------------+   R3   XXXXXX external net |  |
   | |      |                     |        |    |              |  |
   | +------+                     +--------+    +--------------+  |
   |                                                              |
   +--------------------------------------------------------------+

   router interface: -----
   router gateway: XXXXX
   IPn: router gateway ip or router interface ip

   Fig 4

The limitation of this solution is that R3 needs to be set to non-DVR mode.
As discussed above, for a network attached to a DVR mode router, the router
interfaces of this network will be created in all the hosts where the router
namespaces are created. Since these interfaces all have the same IP and MAC,
packets sent between instances (which could be virtual machines, containers
or bare metal) can't be directly wrapped in VxLAN packets, otherwise packets
sent from different hosts would have the same MAC. Neutron solves this
problem by introducing DVR MACs, which are allocated by the Neutron server
and assigned to each host hosting a DVR mode router. Before the packets are
wrapped in VxLAN packets, their source MAC is replaced by the DVR MAC of the
host. If R3 is in DVR mode, the source MAC of packets sent from net3 to the
bridge network will be changed, but after the packets reach R1 or R2, R1 and
R2 don't recognize the DVR MAC, so the packets are dropped.
Likewise, extra routes and gateway IPs are configured to build the
connection::

   routes of R1: net2 via IP2
                 net3 via IP3
   external gateway ip of R1: IP3
   (IP2 and IP3 are from bridge net, so routes will only be created in
   the SNAT namespace)

   routes of R2: net1 via IP1
                 net3 via IP3
   external gateway ip of R2: IP3
   (IP1 and IP3 are from bridge net, so routes will only be created in
   the SNAT namespace)

   routes of R3: net1 via IP1
                 net2 via IP2
   external gateway ip of R3: real-external-gateway-ip
   (non-DVR mode, so routes will all be created in the router namespace)

The real external network can be deployed in one dedicated OpenStack cloud.
In that case, there is no need to run services like Nova and Cinder in that
cloud, since instances and volumes will not be provisioned in it. Only the
Neutron service is required. Then the above two topologies transform into the
same one::

   +-------------------------+                 +-------------------------+
   |OpenStack1               |                 |OpenStack2               |
   | +------+   +--------+   |  +------------+  |  +--------+   +------+ |
   | |      |   |     IP1|   |  |            |  |  |IP2     |   |      | |
   | | net1 +---+   R1   XXXXXXXX bridge net XXXXXXXX  R2   +---+ net2 | |
   | |      |   |        |   |  |            |  |  |        |   |      | |
   | +------+   +--------+   |  +-----+------+  |  +--------+   +------+ |
   |                         |        |         |                        |
   +-------------------------+        |         +------------------------+
                                      |
                                      |
                           +----------|-----------------------------------+
                           |OpenStack3|                                   |
                           |          |                                   |
                           |          |   +--------+    +--------------+  |
                           |          |   |IP3     |    |              |  |
                           |          +---+   R3   XXXXXX external net |  |
                           |              |        |    |              |  |
                           |              +--------+    +--------------+  |
                           |                                              |
                           +----------------------------------------------+

   Fig 5

The motivation for putting the real external network in a dedicated OpenStack
cloud is to simplify the management of the real external network, and also to
separate the real external network from the internal networking area, for
better security control.

Discussion
==========

The implementation of DVR does bring some restrictions to our cross-Neutron
layer-2 and layer-3 networking, resulting in the limitations of the above two
proposals. In the first proposal, if the real external network is deployed
together with internal networks in the same OpenStack cloud, one extra router
is needed in that cloud. Also, since one of the routers is in DVR mode and
the other is in legacy mode, we need to deploy at least two L3 agents, one in
dvr-snat mode and the other in legacy mode. The limitation of the second
proposal is that the router is in non-DVR mode, so both east-west and
north-south traffic go through the router namespace in the network node.

Also, cross-Neutron layer-2 networking cannot work with DVR because of source
MAC replacement. Consider the following topology::

   +----------------------------------------------+     +-------------------------------+
   |OpenStack1                                    |     |OpenStack2                     |
   | +-----------+   +--------+   +-----------+   |     |  +--------+   +------------+  |
   | |           |   |        |   |           |   |     |  |        |   |            |  |
   | | net1      +---+   R1   +---+ net2      |   |     |  |   R2   +---+ net2       |  |
   | | Instance1 |   |        |   | Instance2 |   |     |  |        |   | Instance3  |  |
   | +-----------+   +--------+   +-----------+   |     |  +--------+   +------------+  |
   |                                              |     |                               |
   +----------------------------------------------+     +-------------------------------+

   Fig 6

net2 supports cross-Neutron layer-2 networking, so instances in net2 can be
created in both OpenStack clouds. If the router net1 and net2 are connected
to is in DVR mode, when Instance1 pings Instance2, the packets are routed
locally and exchanged via a VxLAN tunnel. Source MAC replacement is correctly
handled inside OpenStack1. But when Instance1 tries to ping Instance3,
OpenStack2 does not recognize the DVR MAC from OpenStack1, thus the
connection fails. Therefore, only local type networks can be attached to a
DVR mode router.

Cross-Neutron layer-2 networking and DVR may co-exist after we address the
DVR MAC recognition problem (we will raise a discussion about this problem in
the Neutron community) or introduce an l2 gateway. Actually, this bridge
network approach is just one of the possible implementations; in the near
future we are considering providing a mechanism to let an SDN controller plug
in, in which case DVR and the bridge network may not be needed.

Given the above limitations, can our proposal support the major user
scenarios? Considering whether the tenant network and router are local or
across Neutron servers, we divide the user scenarios into four categories.
For the scenarios with a cross-Neutron router, we use the proposal shown in
Fig 3 in our discussion.

Local Network and Local Router
------------------------------

Topology::

   +-----------------+       +-----------------+
   |OpenStack1       |       |OpenStack2       |
   |                 |       |                 |
   |     ext net1    |       |     ext net2    |
   |  +-----+-----+  |       |  +-----+-----+  |
   |        |        |       |        |        |
   |        |        |       |        |        |
   |     +--+--+     |       |     +--+--+     |
   |     |     |     |       |     |     |     |
   |     | R1  |     |       |     | R2  |     |
   |     |     |     |       |     |     |     |
   |     +--+--+     |       |     +--+--+     |
   |        |        |       |        |        |
   |        |        |       |        |        |
   |    +---+---+    |       |    +---+---+    |
   |      net1       |       |      net2       |
   |                 |       |                 |
   +-----------------+       +-----------------+

   Fig 7

Each OpenStack cloud has its own external network, and instances in each
local network access the external network via the local router. If east-west
networking is not required, this scenario has no requirement on cross-Neutron
layer-2 and layer-3 networking functionality. Both the central Neutron server
and the local Neutron servers can process network resource management
requests. If east-west networking is needed, we have two choices to extend
the above topology::

   +-----------------+       +-----------------+
   |OpenStack1       |       |OpenStack2       |
   |                 |       |                 |
   |     ext net1    |       |     ext net2    |
   |  +-----+-----+  |       |  +-----+-----+  |
   |        |        |       |        |        |
   |     +--+--+     |       |     +--+--+     |
   |     |     |     |       |     |     |     |
   |     | R1  |     |       |     | R2  |     |
   |     |     |     |       |     |     |     |
   |     +--+--+     |       |     +--+--+     |
   |        |        |       |        |        |
   |    +---+-+-+    |       |    +---+-+-+    |
   |   net1   |      |       |   net2   |      |
   | +--------+--+   |       | +--------+--+   |
   | | Instance1 |   |       | | Instance2 |   |
   | +----+------+   |       | +----+------+   |
   |      |          |       |      |          |
   |      |          | net3  |      |          |
   |   +--+-------------------------+--+       |
   |                 |       |                 |
   +-----------------+       +-----------------+

   Fig 8.1

   +-----------------+       +-----------------+
   |OpenStack1       |       |OpenStack2       |
   |                 |       |                 |
   |     ext net1    |       |     ext net2    |
   |  +-----+-----+  |       |  +-----+-----+  |
   |        |        |       |        |        |
   |     +--+--+     |       |     +--+--+     |
   |     |     |     |       |     |     |     |
   |     | R1  +--+  |       |  +--+ R2  |     |
   |     |     |  |  |       |  |  |     |     |
   |     +--+--+  |  |       |  |  +--+--+     |
   |        |     |  |       |  |     |        |
   |    +---+---+ |  |       |  | +---+---+    |
   |     net1     |  |       |  |   net2       |
   |              |  |       |  |              |
   |              | net3     |  |              |
   |              +-------------+              |
   |                 |       |                 |
   +-----------------+       +-----------------+

   Fig 8.2
In the topology of Fig 8.1, the two instances are connected by a shared VxLAN
network, and only local networks are attached to the local routers, so the
routers can be either legacy or DVR mode. In the topology of Fig 8.2, the two
local routers are connected by a shared VxLAN network, so they can only be
legacy mode.

Cross-Neutron Network and Local Router
--------------------------------------

Topology::

   +-----------------+       +-----------------+
   |OpenStack1       |       |OpenStack2       |
   |                 |       |                 |
   |     ext net1    |       |     ext net2    |
   |  +-----+-----+  |       |  +-----+-----+  |
   |        |        |       |        |        |
   |     +--+--+     |       |     +--+--+     |
   |     |     |     |       |     |     |     |
   |     | R1  |     |       |     | R2  |     |
   |     |     |     |       |     |     |     |
   |     +--+--+     |       |     +--+--+     |
   |        |        |       |        |        |
   |  net1  |        |       |        |        |
   |   +--+-+-----------------------+-+--+     |
   |      |          |       |      |          |
   |      |          |       |      |          |
   | +----+------+   |       | +----+------+   |
   | | Instance1 |   |       | | Instance2 |   |
   | +-----------+   |       | +-----------+   |
   |                 |       |                 |
   +-----------------+       +-----------------+

   Fig 9

From the Neutron API point of view, attaching a network to different routers
that each have their own external gateway is allowed, but packets can only
get out via one of the external networks because there is only one gateway IP
in one subnet. In the Tricircle, however, we allocate one gateway IP for the
network in each OpenStack cloud, so instances can access a specific external
network via a specific gateway according to which OpenStack cloud they are
located in.

We can see this topology as a simplification of the topology shown in
Fig 8.1, in that it doesn't require an extra network interface for the
instances. And if no other networks are attached to R1 and R2 except net1,
R1 and R2 can be DVR mode.

In the NFV scenario, usually the instance itself acts as a router, so there's
no need to create a Neutron router; we directly attach the instance to the
provider network and access the real external network via the provider
network. In that case, when creating the Neutron network, the
"router:external" label should be set to "False". See Fig 10::
|

  +-----------------+   +-----------------+
  |OpenStack1       |   |OpenStack2       |
  |                 |   |                 |
  | provider net1   |   | provider net2   |
  | +--+---------+  |   | +--+---------+  |
  |    |            |   |    |            |
  |    |            |   |    |            |
  | +--+--------+   |   | +--+--------+   |
  | | VNF       |   |   | | VNF       |   |
  | | Instance1 |   |   | | Instance2 |   |
  | +------+----+   |   | +------+----+   |
  |        |        |   |        |        |
  |        |        |   |        |        |
  | net1   |        |   |        |        |
  | +------+---------------------+---+    |
  |                 |   |                 |
  +-----------------+   +-----------------+

Fig 10
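As a concrete sketch of the provider-network approach in Fig 10, such a
network could be created without the external flag via the OpenStack CLI.
The network name, VLAN segment and physical network label below are
assumptions for illustration, not values from this spec:

```shell
# Create a provider network that is NOT marked as external
# (router:external defaults to False; --internal makes it explicit),
# so the VNF instances attach to it directly.
openstack network create provider-net1 \
    --provider-network-type vlan \
    --provider-physical-network extern \
    --provider-segment 101 \
    --internal
```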

Local Network and Cross-Neutron Router
--------------------------------------

Topology::

  +-----------------+   +-----------------+
  |OpenStack1       |   |OpenStack2       |
  |                 |   |                 |
  |                 |   | ext net         |
  |                 |   | +-------+---+   |
  | bridge net      |   |         |       |
  | +-----+---------------+-+     |       |
  |       |         |   | | |  +--+--+    |
  |       |         |   | | |  |     |    |
  |    +--+--+      |   | | +--+  R  |    |
  |    |     |      |   | |    |     |    |
  |    |  R  |      |   | |    +-----+    |
  |    |     |      |   | |               |
  |    +--+--+      |   | |    +-----+    |
  |       |         |   | |    |     |    |
  |       |         |   | +----+  R  |    |
  |   +---+---+     |   |      |     |    |
  |   net1          |   |      +--+--+    |
  |                 |   |         |       |
  |                 |   |         |       |
  |                 |   |     +---+---+   |
  |                 |   |     net2        |
  |                 |   |                 |
  +-----------------+   +-----------------+

Fig 11
Since the router is cross-Neutron type, the Tricircle automatically creates a
bridge network to connect the router instances in the two Neutron servers,
and connects the router instance to the real external network. Networks
attached to the router are local type, so the router can be either legacy or
DVR mode.
Cross-Neutron Network and Cross-Neutron Router
----------------------------------------------

Topology::

                                              *
    +-----------------+   +-----------------+ * +-----------------+   +-----------------+
    |OpenStack1       |   |OpenStack2       | * |OpenStack1       |   |OpenStack2       |
    |                 |   |                 | * |                 |   |                 |
    |                 |   | ext net         | * |                 |   | ext net         |
    |                 |   | +-------+---+   | * |                 |   | +-------+---+   |
    | bridge net      |   |         |       | * | bridge net      |   |         |       |
    | +-----+---------------+-+     |       | * | +-----+---------------+-+     |       |
    |       |         |   | | |  +--+--+    | * |       |         |   | | |  +--+--+    |
    |       |         |   | | |  |     |    | * |       |         |   | | |  |     |    |
    |       |         |   | | +--+  R  |    | * |       |         |   | | +--+  R  |    |
    |       |         |   | |    |     |    | * |       |         |   | |    |     |    |
    |    +--+--+      |   | |    +-----+    | * |    +--+--+      |   | |    +-----+    |
    |    |     |      |   | |               | * |    |     |      |   | |               |
    |    |  R  |      |   | |    +-----+    | * | +--+  R  |      |   | |    +-----+    |
    |    |     |      |   | |    |     |    | * | |  |     |      |   | |    |     |    |
    |    +--+--+      |   | +----+  R  |    | * | |  +--+--+      |   | +----+  R  +--+ |
    |       |         |   |      |     |    | * | |     |         |   |      |     |  | |
    |       |         |   |      +--+--+    | * | |     |         |   |      +--+--+  | |
    |       |         |   |         |       | * | |     |         |   |         |     | |
    |       |         |   |         |       | * | |     |         |   |         |     | |
    | +-----+-----------------------+---+   | * | | +---+-----------------------+---+ | |
    | net1            |   |                 | * | | net1          |   |               | |
    |                 |   |                 | * | |               |   |               | |
    +-----------------+   +-----------------+ * | |               |   |               | |
                                              * | +-----------------------------------+ |
                 Fig 12.1                     * | net2            |   |                 |
                                              * |                 |   |                 |
                                              * +-----------------+   +-----------------+
                                              *
                                                               Fig 12.2
In Fig 12.1, the router can only be legacy mode since net1, which is attached
to the router, is shared VxLAN type. Actually, in this case the bridge
network is not needed for east-west networking. In Fig 12.2, both net1 and
net2 are shared VxLAN type and are attached to the router (again, this router
can only be legacy mode), so packets between net1 and net2 are routed in the
router of the local OpenStack cloud and then sent to the target. Extra routes
will be cleared so that no packets go through the bridge network. This is how
the Tricircle currently supports VLAN networks.
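The clearing of extra routes mentioned above can be illustrated with a small
sketch: drop any extra route whose nexthop sits on the bridge network, so
east-west traffic stays in the local router. This is a hypothetical
illustration (names and CIDRs are assumptions), not the actual Tricircle
implementation:

```python
import ipaddress


def clear_bridge_routes(extra_routes, bridge_cidr):
    """Remove extra routes whose nexthop lies on the bridge network."""
    bridge = ipaddress.ip_network(bridge_cidr)
    return [r for r in extra_routes
            if ipaddress.ip_address(r["nexthop"]) not in bridge]


routes = [
    {"destination": "10.0.2.0/24", "nexthop": "100.0.1.2"},   # via bridge net
    {"destination": "10.0.3.0/24", "nexthop": "10.0.1.254"},  # local nexthop
]
print(clear_bridge_routes(routes, "100.0.1.0/24"))
# [{'destination': '10.0.3.0/24', 'nexthop': '10.0.1.254'}]
```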

Recommended Layer-3 Networking Mode
-----------------------------------

Let's summarize the above discussion. Assuming that DVR mode is a must, the
recommended layer-3 topology for each scenario is listed below.

+----------------------------+---------------------+------------------+
| north-south networking via | isolated east-west  | Fig 7            |
| multiple external networks | networking          |                  |
|                            +---------------------+------------------+
|                            | connected east-west | Fig 8.1 or Fig 9 |
|                            | networking          |                  |
+----------------------------+---------------------+------------------+
| north-south networking via                       | Fig 11           |
| single external network                          |                  |
+--------------------------------------------------+------------------+
| north-south networking via                       | Fig 10           |
| direct provider network                          |                  |
+--------------------------------------------------+------------------+
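For quick reference, the recommendations in the table above can be encoded
as a lookup. This is a purely illustrative helper (the key names are
assumptions), not part of the Tricircle code base:

```python
# Recommended topology figure per scenario, assuming DVR mode is a must.
# Keys: (north-south networking style, east-west networking style or None).
RECOMMENDED_TOPOLOGY = {
    ("multiple external networks", "isolated east-west"): "Fig 7",
    ("multiple external networks", "connected east-west"): "Fig 8.1 or Fig 9",
    ("single external network", None): "Fig 11",
    ("direct provider network", None): "Fig 10",
}

print(RECOMMENDED_TOPOLOGY[("single external network", None)])  # Fig 11
```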

Data Model Impact
=================

None

Dependencies
============

None

Documentation Impact
====================

The guide for multi-node DevStack installation needs to be updated to
introduce the new bridge network solution.