1. What is the problem?
Currently there is no document describing how to install and configure LBaaS.
2. What is the solution to the problem?
Add an installation and configuration guide for LBaaS.
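As a rough illustration of what such a guide covers, a DevStack
local.conf can enable LBaaS along these lines (the plugin URL and
service name follow the usual neutron-lbaas conventions and are
assumptions here, not excerpts from the guide):

  # Illustrative local.conf fragment: enable the neutron-lbaas
  # DevStack plugin and the LBaaS v2 service
  enable_plugin neutron-lbaas https://github.com/openstack/neutron-lbaas
  enable_service q-lbaasv2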
3. What features need to be implemented to the Tricircle to realize
the solution?
None
Change-Id: I73b48b88c341ac154e9714dfb855e283981d97e7
1. What is the problem
We try to figure out a way to integrate Tricircle with Nova cell v2.
2. What is the solution for the problem
The basic idea is to start a local Neutron server for each cell. Nova-
compute in each cell is configured to talk to the local Neutron server
in the same cell. All local Neutron servers are configured to talk to
one and the same Nova-API.
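A minimal sketch of the per-cell Neutron configuration, assuming the
standard [nova] section of neutron.conf (the region name follows the
setup described below):

  # neutron.conf in each cell: point Neutron's Nova notifier at the
  # single Nova-API in CentralRegion
  [nova]
  region_name = CentralRegion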
Currently DevStack doesn't support multi-cell deployment, so we try to
deploy it with our own plugin. In node1, Nova services are started as
before, but we change the region of Nova-API, Glance-API and placement-API
from RegionOne to CentralRegion. The local Neutron server is also
configured to talk to the Nova-API in CentralRegion. In node2, Nova-API
is enabled at first, since we need DevStack to help us create the related
database. After DevStack finishes in node2, we manually stop Nova-API and
Nova-scheduler in node2 (see the sketch below).
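A sketch of that manual step, assuming a systemd-based DevStack whose
unit names follow the devstack@<service> convention:

  # On node2, after stack.sh completes:
  sudo systemctl stop devstack@n-api.service   # Nova-API
  sudo systemctl stop devstack@n-sch.service   # Nova-scheduler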
A document discussing the detailed setup and troubleshooting steps
is added.
3. What features need to be implemented to the Tricircle to
realize the solution
Tricircle can work with Nova cell v2.
Change-Id: I6ba8e1022d83f40df36464abfdd7b4844673b24d
1. What is the problem?
Multi-region test has been added to our check/gate jobs, but the
test just installs Tricircle via DevStack and doesn't provision
any resources like network/subnet/router/server, so Tricircle
functionality is not tested.
2. What is the solution to the problem?
Add a script in the test to create a basic network topology via
central Neutron and check if local resources are correctly created.
In the topology, two tenant networks are connected by a router, and an
external network is attached to the router. We boot one server in
each tenant network and associate a floating IP with one of the servers
(a sketch of the commands involved follows).
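The script is roughly equivalent to the following commands (region
names match the multi-region job; network names, CIDRs, image, flavor
and the pre-existing external network ext-net are illustrative):

  # Two tenant networks with one subnet each, created via central Neutron
  openstack --os-region-name CentralRegion network create net1
  openstack --os-region-name CentralRegion subnet create subnet1 \
      --network net1 --subnet-range 10.0.1.0/24
  openstack --os-region-name CentralRegion network create net2
  openstack --os-region-name CentralRegion subnet create subnet2 \
      --network net2 --subnet-range 10.0.2.0/24
  # Connect the networks with a router and attach the external network
  openstack --os-region-name CentralRegion router create router1
  openstack --os-region-name CentralRegion router add subnet router1 subnet1
  openstack --os-region-name CentralRegion router add subnet router1 subnet2
  openstack --os-region-name CentralRegion router set router1 \
      --external-gateway ext-net
  # One server per tenant network, booted via the local Nova servers
  openstack --os-region-name RegionOne server create vm1 \
      --image cirros --flavor m1.tiny --network net1
  openstack --os-region-name RegionTwo server create vm2 \
      --image cirros --flavor m1.tiny --network net2
  # Create a floating IP and associate it with vm1's port (IDs elided)
  openstack --os-region-name CentralRegion floating ip create ext-net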
This patch also fixes problems introduced by the following Neutron
changes:
(1) Eliminate lookup of "resource extend" funcs by name
(92372b982f)
(2) Defer service_plugins configuration
(a8204752e3)
We could put these fixes in a standalone patch, but let's first
include them here so they are exercised by this smoke test.
3. What features need to be implemented to the Tricircle
to realize the solution?
Tricircle functionality can be tested.
Change-Id: Ib364a96fe4c3b9b635e5fac979c7c1cba2aaefc9
1. What is the problem?
As discussed in the spec[1], we lack support for a deployment
scenario in which each OpenStack cloud provides an external network
for north-south traffic while east-west networking of tenant
networks between OpenStack clouds is also enabled.
2. What is the solution to the problem?
Implement a new layer-3 networking model discussed in the spec[1].
3. What features need to be implemented to the Tricircle
to realize the solution?
XManager is modified to properly configure router interfaces, router
extra routes and subnet host routes for the new model (illustrative
equivalents are shown below).
[1] https://github.com/openstack/tricircle/blob/master/specs/pike/
l3-networking-multi-NS-with-EW-enabled.rst
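For reference, the routes XManager configures correspond to what the
following commands would set by hand (router, subnet, destinations
and gateways are made-up examples, not values from the spec):

  # Extra route on a local router
  openstack --os-region-name RegionOne router set router1 \
      --route destination=10.0.2.0/24,gateway=100.0.1.2
  # Host route on a subnet
  openstack --os-region-name CentralRegion subnet set subnet1 \
      --host-route destination=10.0.3.0/24,gateway=10.0.1.1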
Change-Id: I34ad7dbf01be68f4544b2170b2cfe90097c4edf5
1. What is the problem
Flat network type is commonly used as the external network type, but
currently users cannot create a flat external network via the Tricircle.
2. What is the solution for the problem
Support flat network type.
3. What features need to be implemented to the Tricircle to
realize the solution
(1) A new type driver for flat networks is added (see the example
after this list).
(2) A release note is added.
(3) Related documents are updated.
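With the new type driver, creating a flat external network via
central Neutron might look like this (the physical network name and
availability zone hint are assumptions that depend on the deployment):

  openstack --os-region-name CentralRegion network create ext-net \
      --external \
      --provider-network-type flat \
      --provider-physical-network extern \
      --availability-zone-hint RegionOne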
Change-Id: I148e1102510dda96a9fcd8a4b76de09cd802833c
1. What is the problem?
The spec and implementation of VxLAN network support have been
submitted, but the related documents have not been updated and a
release note is missing.
2. What is the solution to the problem?
Update related documents and add a release note.
3. What features need to be implemented to the Tricircle
to realize the solution?
N/A
Change-Id: I392022226b06e75f7813befc78927cb5779e0a45
1. What is the problem?
Using the tempest plugin in conjunction with
NEUTRON_CREATE_INITIAL_NETWORKS set to False was causing failures
during the execution of the stack.sh script. This issue was solved
in I62e74d350d6533fa842d64c15b01b1a3d42c71c2 but it hasn't been
reflected in the devstack samples.
2. What is the solution to the problem?
Let users enable or disable tempest usage in their devstack
environments (see the sketch below).
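A minimal sketch of the toggle in local.conf (DevStack enables
tempest by default, so disabling is the interesting case):

  # local.conf: skip tempest installation entirely
  disable_service tempest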
3. What features need to be implemented to the Tricircle to
realize the solution?
No new features
Change-Id: Iad563a660ea58faa57984ee7829ee45e5811c900
1. What is the problem?
The DevStack plugin only supports installation of the first region;
for the second region installation with the Tricircle local plugin,
the tricircle package needs to be installed manually (please refer
to multi-pod-installation-devstack.rst in doc/source).
2. What is the solution to the problem?
If we want to support the multi-region gate/check test job
(https://blueprints.launchpad.net/tricircle/+spec/multi-region-job-for-gate-and-check-test),
the second region in the gate/check job can only be installed
through the DevStack plugin and local.conf, so we have to improve
the DevStack plugin to support the second region installation.
The Tricircle Admin API and XJob shouldn't be started in the second
region, and there is no need to generate the database schema for the
second region either; only the plugin needs to be installed and
configured in local Neutron. So the TRICIRCLE_START_SERVICES
variable is introduced in the DevStack local.conf: it needs to be
enabled in the first region and disabled in the second one (see the
sketch below).
At the same time, the Q_ENABLE_TRICIRCLE check and its configuration
are removed: if the Tricircle DevStack plugin is enabled, the plugin
itself will run by default.
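A minimal local.conf sketch under these assumptions (the plugin URL
and the True/False values are illustrative; only the variable name
comes from this change):

  # First region local.conf: run all Tricircle services
  enable_plugin tricircle https://github.com/openstack/tricircle/
  TRICIRCLE_START_SERVICES=True

  # Second region local.conf: install and configure the plugin only
  enable_plugin tricircle https://github.com/openstack/tricircle/
  TRICIRCLE_START_SERVICES=False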
3. What features need to be implemented to the Tricircle
to realize the solution?
No new features.
Change-Id: Ib66a22f9e4889d131e5e481e9dec98efca5ed6fe
Signed-off-by: joehuang <joehuang@huawei.com>
1. What is the problem?
The current implementation of bridge networks has some problems
when supporting DVR and shared VxLAN network. One blueprint has
been registered[1] and the specification document has also been
submitted[2].
2. What is the solution to the problem?
The logic of bridge network operations will be changed; the major
changes are:
(1) Only one bridge network will be created for one project.
(2) The bridge network is attached to the local router as an external
network.
(3) One local router dedicated to north-south networking will be
created in the pod hosting the real external network. The bridge
network is attached to this special router as an internal network.
(4) If the instance port is not located in the pod hosting the real
external network, then after floating IP association, one "copy" port
will be created in the pod hosting the real external network. Without
this port, the local Neutron server will complain that it cannot find
the internal port to be associated.
3. What features need to be implemented to the Tricircle
to realize the solution?
Bring part of the DVR support to cross-pod layer-3 networking
[1] https://blueprints.launchpad.net/tricircle/+spec/combine-bridge-net
[2] https://review.openstack.org/#/c/396564/
Change-Id: I53d1736ab30d3bc508279532609285975988b5f4
1. What is the problem?
The Tricircle is now dedicated to networking automation across Neutron.
Some tables used by the API gateway should be removed, like the
aggregation table, the pod binding table, etc. They should not reside
in the Tricircle any more. Other tables that carry old meanings but are
still in use should be renamed for better understanding. See the
blueprint[1] for further explanation.
2. What is the solution to the problem?
The data models, tables and APIs for aggregation, pod binding, etc.
should be removed. After the pod binding table is removed, the az_hint
used for external network creation is hard to match, so special
handling needs to be implemented. Other tables would have vague
meanings after this split, but they still take effect in the
Tricircle, so they should be renamed for better understanding. What's
more, pod_name in the pod table is renamed to region_name, which
aligns better with its availability zone.
1) Tables to be removed:
* aggregates
* aggregate_metadata
* instance_types
* instance_type_projects
* instance_type_extra_specs
* key_pairs
* pod_binding
2) Tables to be renamed:
* cascaded_pod_service_configuration (new name: cached_endpoints)
* cascaded_pods (new name: pods)
* cascaded_pods_resource_routing (new name: resource_routings)
* job (new name: async_jobs)
3. What features need to be implemented to the Tricircle to realize
the solution?
After the pod binding table is removed, the az_hint used for external
network creation is hard to match. New features will be implemented to solve
this problem.
[1] https://blueprints.launchpad.net/tricircle/+spec/clean-legacy-tables
Change-Id: I025b4fb48c70abf424bd458fac0dc888e5fa19fd
1. What is the problem
The necessary changes for the local plugin and central plugin to boot
a virtual machine have been submitted in patch [1]. As the next step,
we need to add l3 functionality to the local and central plugins.
2. What is the solution to the problem
Several changes are made in the local plugin:
(1) Before creating a local subnet, the local plugin sends a request
to the central plugin to create a "reserved gateway port", and uses
the IP address of this port as the gateway IP of the local subnet.
(2) When the local plugin receives a network or subnet creation
request, if the request contains a "name" parameter and the name is
a UUID, the local plugin uses the name as the ID of the local network
or subnet.
3. What features need to be implemented to the Tricircle
to realize the solution
With this patch, users can use a router to connect virtual machines
in different networks, with the virtual machines booted directly via
the local Nova servers.
[1] https://review.openstack.org/375281
Change-Id: I12094f30804c0bad2f74e0ff510ac26bd217cfd4
A bug in DevStack[1] which affects multi-region deployment has been
fixed. Update the README to adapt to this DevStack change.
[1] https://bugs.launchpad.net/devstack/+bug/1540802
Change-Id: I19876c359910741e5fe5babdd209b06f126b0d4f
Implement l3 north-south networking functionality. In our current
design, the external network is hosted in one of the bottom pods;
VMs hosted in other bottom pods are connected to this external
network via a bridge network, which uses the same physical network
as the east-west networking but a different VLAN (see the
illustration below).
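The separation can be pictured with provider-network attributes; the
Tricircle creates such bridge networks internally, so the command
below is only an illustration (name, physical network and segment ID
are made up):

  # East-west and north-south traffic share the physical network
  # "bridge" but land on different VLAN segments, e.g.:
  openstack network create ns-bridge-net \
      --provider-network-type vlan \
      --provider-physical-network bridge \
      --provider-segment 2002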
Change-Id: I953322737aa97b2d1ebd9a15dc479d7aba753678
Implement cross-pod l3 networking functionality. In this second
patch, we implement an extra job to configure extra routes. The
README is updated to introduce this networking solution and how to
test it with DevStack.
Change-Id: I3dafd9ef15c211a941e85b690be1992416d3f3eb