1. What is the problem?
A QoS rule cannot be updated after virtual machine creation
and QoS policy binding.
2. What is the solution to the problem?
In central_plugin.py, add maintenance of the Neutron database
table ml2_port_bindings so that records can be added and deleted.
Change-Id: I23462862bbdb33e96a58b67ed4cd6f3abf95076f
Signed-off-by: zhang xiaohan <zhangxiaohan@szzt.com.cn>
Co-Authored-By: tangzhuo <ztang@hnu.edu.cn>
1. What is the problem?
For a non-local network, if a VM is deleted in one region and its port is
then used to create a VM in another region, the port in the original region
is not released.
2. What is the solution to the problem?
Release the port by setting it to DOWN when a VM is deleted in one region,
and change the shadow port's 'name', 'device_id' and 'device_owner' values
if a shadow port exists. In port update, compare the port before and after
the update to decide whether central Neutron needs to be notified (a sketch
of this comparison follows this commit message).
3. What features need to be implemented in the Tricircle to realize the solution?
Because the port is cached in neutron-openvswitch-agent, the port is not
deleted when a VM is deleted in one region.
Signed-off-by: zhang xiaohan <zhangxiaohan@szzt.com.cn>
Co-Authored-By: tangzhuo <ztang@hnu.edu.cn>
Change-Id: If6819bf67ea2d206cc735039d8c55733dae54ee9
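A minimal sketch of the port-update comparison described above, assuming
hypothetical names (NOTIFY_FIELDS, needs_central_notification); the actual
local plugin logic may track a different field set.

    # Hypothetical illustration: decide whether a local port update has to
    # be propagated to central Neutron by comparing the port before and
    # after the update. The field list is an assumption.
    NOTIFY_FIELDS = ('name', 'device_id', 'device_owner', 'status')


    def needs_central_notification(original_port, updated_port):
        """Return True when a field central Neutron cares about changed."""
        return any(original_port.get(f) != updated_port.get(f)
                   for f in NOTIFY_FIELDS)


    # Usage inside a local plugin's update_port (pseudo-context):
    #   original = core_plugin.get_port(context, port_id)
    #   updated = core_plugin.update_port(context, port_id, body)
    #   if needs_central_notification(original, updated):
    #       notify_central(context, updated)   # assumed notifier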
1. What is the problem?
A security group (sg) may map to several local sgs in local Neutron.
During the deletion of an sg, local Neutron may recreate the sg if it
receives a get request for the sg.
2. What is the solution to the problem?
The solution to resource deleting is covered in
specs/queens/resource deleting.rst
3. What features need to be implemented in the Tricircle to realize the solution?
No new features.
Change-Id: I37feacf93b1bde1459bfd8513c7860fe77c113a7
1. What is the problem?
When local Neutron gets a resource that is being deleted, we
should return 404 Not Found. Previously the _delete tag was used
to handle this, but that does not look like a good way.
2. What is the solution to the problem?
Handle it in the local plugin and raise the network-not-found
exception (see the sketch after this commit message).
3. What features need to be implemented in the Tricircle to realize the solution?
No new features.
Change-Id: Iceacaa83e9b31d05a2435586e8bbb3b433d90a4b
Signed-off-by: song baisen <songbaisen@szzt.com.cn>
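A minimal sketch of the approach, assuming a hypothetical
_is_resource_deleting() lookup; only the NetworkNotFound exception comes
from neutron_lib.

    from neutron_lib import exceptions


    class DeletingAwareGetterMixin(object):
        """Illustration only: _is_resource_deleting() stands in for
        whatever record marks a resource as 'being deleted'."""

        def _is_resource_deleting(self, context, resource_id):
            raise NotImplementedError   # assumed lookup, left abstract here

        def get_network(self, context, network_id, fields=None):
            # Pretend the network is already gone while its deletion is in
            # progress, so local Neutron will not recreate it.
            if self._is_resource_deleting(context, network_id):
                raise exceptions.NetworkNotFound(net_id=network_id)
            return super(DeletingAwareGetterMixin, self).get_network(
                context, network_id, fields)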
1. What is the problem?
During the deletion of a network which is mapped to several
local networks, if the local neutron server receives a network-get
request, it may trigger conflicting operations. For example,
we delete local networks before deleting the central network. If a
"get_network" request comes to a local neutron server after the
local network is completely deleted in that region, while the central
network is still there (it takes some time to delete all local networks),
Tricircle will also query central neutron and the deleted local network
will be recreated.
2. What is the solution to the problem?
The solution to resource deleting is covered in
specs/queens/resource deleting.rst
3. What features need to be implemented in the Tricircle to realize the solution?
No new features.
Signed-off-by: song baisen <songbaisen@szzt.com.cn>
Co-Authored-By: CR_hui <CR_hui@126.com>, zhiyuan_cai <luckyvega.g@gmail.com>
Change-Id: I7b3c1efb88c693c3babccdfc865fa560922ede28
Neutron-lib contains the latest callbacks and this patch switches the
imports over to use them (see the import sketch after this commit message).
For details on the corresponding Neutron change please see
Iba5ad0beff3a37f81a6b789a0a42f9606cc7a197
Change-Id: I5a856e69896426aad2cc518fb04763ea6968fbfb
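For illustration, the change is essentially an import switch; the
neutron_lib module paths below are the real ones, the handler is only a
placeholder.

    # Before: callbacks were imported from neutron itself, e.g.
    #   from neutron.callbacks import events, registry, resources
    # After: the same symbols come from neutron-lib.
    from neutron_lib.callbacks import events
    from neutron_lib.callbacks import registry
    from neutron_lib.callbacks import resources


    def port_update_handler(resource, event, trigger, **kwargs):
        """Placeholder callback, just to show the subscription API."""
        pass


    registry.subscribe(port_update_handler, resources.PORT,
                       events.AFTER_UPDATE)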
1. What is the problem
Unit tests fail due to database operation changes in neutron.
2. What is the solution for the problem
Modify unit tests to adapt the change.
3. What features need to be implemented to the Tricircle to
realize the solution
N/A
Change-Id: Iab3d1160bc4ef32ff19f68333ff49f2c3b3e0293
1. What is the problem?
When receiving a request, we don't know whether the request
comes from 'Central Neutron' or 'Local Neutron'.
2. What is the solution to the problem?
In order to determine the source of requests, we add a
filter that reads the source information from the request headers
and then tags it into the context. When deploying the WSGI app, we
check the User-Agent in the request's headers and tag the request
accordingly (a sketch of such a filter follows this commit message).
3. What the features need to be implemented to the Tricircle
to realize the solution?
No
Change-Id: I990fa46e887cc0822b8e6d74d199d92e39df0bd6
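A minimal sketch of such a filter, assuming a made-up User-Agent marker and
environ key; the real Tricircle constants and context tagging differ.

    # Assumed marker that local Neutron would put into its User-Agent header.
    LOCAL_NEUTRON_AGENT = 'tricircle-local-neutron'


    class RequestSourceFilter(object):
        """WSGI filter recording whether a request came from local Neutron."""

        def __init__(self, application):
            self.application = application

        def __call__(self, environ, start_response):
            user_agent = environ.get('HTTP_USER_AGENT', '')
            # Tag the request so later code (e.g. the context builder) can
            # check where it came from.
            environ['tricircle.request_source'] = (
                'local_neutron' if LOCAL_NEUTRON_AGENT in user_agent
                else 'other')
            return self.application(environ, start_response)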
1. What is the problem?
Tricircle doesn't support the QoS service now, so we should add QoS
service support.
2. What is the solution to the problem?
We implement the Tricircle QoS service by inheriting the Neutron QoS plugin
(a sketch follows this commit message). For automated QoS deployment in
local Neutron, we also implement QoS XJob jobs.
Change-Id: Ifbf453b57f7e18919318e1dae2ca2849e149a29b
Signed-off-by: xiaohan zhang <zhangxiaohan@szzt.com.cn>
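A rough sketch of the inheritance approach; the QoSPlugin import path
matches upstream Neutron, while the XJob trigger is only an assumed
placeholder.

    from neutron.services.qos import qos_plugin


    class TricircleQoSPluginSketch(qos_plugin.QoSPlugin):
        """Central QoS plugin reusing Neutron's QoS logic, then (in the
        real implementation) scheduling an XJob so local Neutron gets the
        same policy."""

        def create_policy(self, context, policy):
            created = super(TricircleQoSPluginSketch, self).create_policy(
                context, policy)
            # Assumed hook: dispatch an asynchronous job to sync the policy
            # to local regions, e.g.
            #   self.xjob_handler.sync_qos_policy(context, created)
            return created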
1. What is the problem?
We have some outdated todo notes that may confuse developers.
2. What is the solution to the problem?
The following todo notes are removed:
(1) solve ip address conflict issue
Now the external gateway IP and floating IPs are allocated from central
Neutron so conflicts will not happen. This issue existed before because
we updated the local router before updating the central router, so the
external gateway IP was not controlled by central Neutron.
(2) decide whether a router is distributed or not from the pod table
We already support DVR and this information is passed from the API, not
read from the DB
3. What features need to be implemented to the Tricircle to realize
the solution?
N/A
Change-Id: I9306e7f6e20715b6df8bb759be6dbf8a0b28cd2e
Closes-Bug: #1733750
Commit I81748aa0e48b1275df3e1ea41b1d36a117d0097d added the l3 extension
API definition to neutron-lib, commit
I2324a3a02789c798248cab41c278a2d9981d24be rehomed the l3 exceptions and
Ifd79eb1a92853e49bd4ef028e7a7bd89811c6957 shims the l3 exceptions in
neutron.
This patch consumes the l3 API definition from neutron-lib
in prep for If2e66e06b83e15ee2851ea2bc3b64ad366e675dd
Change-Id: Ia63996c641de43472445f6e8ebdb259f89e2b10c
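An import-level illustration of the switch; the neutron_lib definition
module is real, the printed constants are just examples of what it
provides.

    # The l3 API definition now lives in neutron-lib:
    from neutron_lib.api.definitions import l3 as l3_apidef

    # Extension alias and resource names are read from the definition
    # instead of neutron's own l3 extension module.
    print(l3_apidef.ALIAS)      # 'router'
    print(l3_apidef.ROUTERS)    # 'routers'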
The external network extension's API definition was rehomed into
neutron-lib with I9933b91d1e82db3891b3b72f06e94316e56a4f15. This patch
consumes it, switching over to neutron-lib's modules in prep for
I696b52265b9528082cd2524f05febe2338376488
Change-Id: Id1dfe8cf0ec50f1206dd997c9303756683ab5e00
Commit I1d4ded9959c05c65b04b118b1c31b8e6db652e67 rehomed the
availability zone extension's API definition into neutron-lib and
I761381de0d6e26a0380386700e7921b824991669 will consume it in neutron.
This patch switches the relevant code over to use neutron-lib rather
than neutron.
Change-Id: I9e42623d3b3be4e9332ecd28fd63be583c0801ef
1. What is the problem
The OpenStack client reads the "tags" field for the network creation
command, but create_network in the central plugin doesn't return this field.
2. What is the solution for the problem
Fill the "tags" field in the returned body (one possible way is sketched
after this commit message)
3. What features need to be implemented to the Tricircle to
realize the solution
N/A
Change-Id: I2537bcc9e61dc95a6cfb9e2853c8a2ea32310eb2
Closes-Bug: #1715103
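One possible way to fill the field, sketched under the assumption that the
created network dict simply lacks the key; the actual patch may compute the
tags differently.

    def fill_tags(network_dict):
        """The OpenStack client expects a 'tags' key in the create_network
        response; make sure it is present even when no tags are attached."""
        network_dict.setdefault('tags', [])
        return network_dict

    # Usage at the end of create_network (illustrative):
    #   return fill_tags(created_network)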
1. What is the problem
The segment ID of a local type network is allocated in the local Neutron
server, so it's possible that the segment IDs of a bridge network and a
local network conflict, which results in failure when creating the bridge
network.
2. What is the solution for the problem
Since network AZ is implemented, we can deprecate the "local" network
type and only create a local network by specifying the AZ as a region name.
Before such deprecation, we make the local type the last network type
candidate to avoid users creating a local type network by mistake.
3. What features need to be implemented to the Tricircle to
realize the solution
N/A
Change-Id: I55a1b6a93bd43e28c05530161e23de26a8bb8f60
Partial-Bug: #1692415
1. What is the problem
Resource attributes are moved from neutron.api.v2.attributes to
neutron_lib.callbacks.resources.
2. What is the solution for the problem
Use the resource definitions in the neutron_lib.callbacks.resources
module instead.
3. What features need to be implemented to the Tricircle to
realize the solution
N/A
Change-Id: I44dd334e6fea19286fee9821183af8b6458ecb48
1. What is the problem
In the patch[1] for solving bug[2], we introduced an error: when
a fip is created with a port ID and the bottom port is not there, the
fip body is not returned. Our smoke test doesn't catch this error
because it only affects the response; the fip is correctly created
in the database. But the log does show an error message.
2. What is the solution for the problem
Always return fip body. Unit test is modified to use the returned
body to ensure it's not None.
3. What features need to be implemented to the Tricircle to
realize the solution
N/A
[1] https://review.openstack.org/#/c/465870
[2] https://launchpad.net/bugs/1691918
Change-Id: I71fcc8fab9fb2a50ab058d67980f6e1e1e4e6d90
1. What is the problem
After booting a VM, we associate a FIP to it via central Neutron.
However, local FIP is not created as expected. Also, there is no related
asynchronous job registered.
The reason may be that we don't trigger the asynchronous job in the
"create_floatingip" method, and since the VM is already there, there's no
later chance to trigger the job.
2. What is the solution for the problem
If "port_id" is specified in the body, trigger the asynchronous job
(the control flow is sketched after this commit message).
3. What features need to be implemented to the Tricircle to
realize the solution
Support creating FIP with "port_id" specified.
Change-Id: Iceb15f68acc23f8dcb8767ac0371947334a1e9db
Closes-Bug: #1691918
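A hedged sketch of the control flow only; the helper names are
placeholders, not the actual Tricircle plugin or XJob API.

    class FloatingIpJobMixin(object):
        """Sketch: _create_fip_db and _trigger_router_setup_job stand in
        for the real plugin call and the real XJob dispatch."""

        def create_floatingip(self, context, floatingip):
            fip = self._create_fip_db(context, floatingip)
            if fip.get('port_id'):
                # A floating IP created with port_id already set will never
                # see a later association call, so the asynchronous router
                # setup job has to be triggered right here.
                self._trigger_router_setup_job(context, fip)
            return fip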
1. What is the problem
When setting an external gateway on a router, if an error occurs during
local router creation/update, an exception is raised, but at this point
the central router has already been updated with the external gateway
information. So when we try to unset the external gateway, the central
Neutron server first tries to remove the external gateway from the local
router; if the local router doesn't exist at this time, we have no way to
unset the external gateway since removing the external gateway from the
local router fails.
2. What is the solution for the problem
Do not try to remove external gateway for a nonexistent local router.
3. What features need to be implemented to the Tricircle to
realize the solution
N/A
Change-Id: Id7261640405d4d24c8523f6518ec19edc031024b
Closes-Bug: #1693138
1. What is the problem?
Provider attributes have already been moved to neutron-lib,
but we still have some hard-coded strings: 'provider:network_type',
'provider:physical_network' and 'provider:segmentation_id'.
2. What is the solution to the problem?
We can use the constants in neutron_lib instead (see the example after
this commit message)
3. What features need to be implemented to the Tricircle to realize
the solution?
None
Closes-Bug: #1691634
Change-Id: I4da0064d2ac78224698e62b8bd42935c6f8f25c9
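An example of reading the provider attributes through the neutron_lib
constants; the constants below are the real ones, the helper is
illustrative.

    from neutron_lib.api.definitions import provider_net

    # provider_net.NETWORK_TYPE      == 'provider:network_type'
    # provider_net.PHYSICAL_NETWORK  == 'provider:physical_network'
    # provider_net.SEGMENTATION_ID   == 'provider:segmentation_id'


    def get_segment_info(network):
        """Read provider attributes from a network dict via the constants."""
        return (network.get(provider_net.NETWORK_TYPE),
                network.get(provider_net.PHYSICAL_NETWORK),
                network.get(provider_net.SEGMENTATION_ID))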
1. What is the problem
Tricircle does not support VLAN aware VMs
2. What is the solution to the problem
Add VLAN aware VMs support
3. What the features need to be implemented to the Tricircle
Add VLAN aware VMs support
Implements: blueprint vlan-aware-vm-support
Change-Id: Ieed7ead2be3b152228fd11c454f2db1b640820c2
1. What is the problem
When XJob receives a job message from a service, it registers
the job in the database and handles it asynchronously. Tricircle
needs to provide APIs for the admin to query job status and re-trigger
failed jobs if something happens unexpectedly. The detailed work
for the XJob Admin APIs is covered in the document[1].
2. What is the solution for the problem
We implement XJob management APIs; they are listed as follows:
*(1) create a job
*(2) list single job info
*(3) list all jobs
*(4) list jobs with filters
*(5) list all jobs' schemas
*(6) delete a job
*(7) redo a job
3. What the features need to be implemented to the Tricircle to
realize the solution
Implement above job operations.
[1] https://review.openstack.org/#/c/438304/
Change-Id: Ibd90e539c9360a0ad7a01eeef185c0dbbee9bb4e
1. What is the problem?
Multi-region test has been added to our check/gate jobs, but the
test just installs Tricircle via DevStack and doesn't provision
any resources like network/subnet/router/server, so Tricircle
functionality is not tested.
2. What is the solution to the problem?
Add a script in the test to create a basic network topology via
central Neutron and check if local resources are correctly created.
In the topology, two tenant networks are connected by a router, and an
external network is attached to the router. We boot one server in
each tenant network and associate a floating IP with one of the servers.
This patch also fixes a problem brought by the following Neutron changes:
(1) Eliminate lookup of "resource extend" funcs by name
92372b982f
(2) Defer service_plugins configuration
a8204752e3
We could put these changes in a standalone patch, but let's first put them
here so they are covered by this smoke test.
3. What features need to be implemented to the Tricircle
to realize the solution?
Tricircle functionality can be tested.
Change-Id: Ib364a96fe4c3b9b635e5fac979c7c1cba2aaefc9
1. What is the problem?
As discussed in the spec[1], we lack support of one deployment
scenario that each OpenStack cloud provides external network
for north-south traffic and at the same time east-west networking
of tenant networks between OpenStack clouds are also enabled.
2. What is the solution to the problem?
Implement a new layer-3 networking model discussed in the spec[1].
3. What features need to be implemented to the Tricircle
to realize the solution?
Xmanager is modified to properly configure router interface, router
extra routes and subnet host routes for the new model.
[1] https://github.com/openstack/tricircle/blob/master/specs/pike/
l3-networking-multi-NS-with-EW-enabled.rst
Change-Id: I34ad7dbf01be68f4544b2170b2cfe90097c4edf5
1. What is the problem?
Multiple pods may be configured with physical networks that have
the same name, so theoretically external networks in different
pods can have the same physical network name. But currently we
cannot create such external networks, because central Neutron
saves the used physical network in the database, so when creating
the second external network, a FlatNetworkInUse exception will be
raised.
2. What is the solution to the problem?
Catch the FlatNetworkInUse exception and leave the validation of the
physical network to local Neutron (a sketch follows this commit message).
3. What features need to be implemented to the Tricircle
to realize the solution?
Now users can create flat external networks with the same physical
network name in different pods.
Change-Id: Icf7877bc6cef8757b82552f9c3871336442a07a6
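A sketch of the idea; the reserve callable is a placeholder for whatever
central Neutron uses to reserve the provider segment, and only the
exception class is taken from neutron_lib.

    from neutron_lib import exceptions


    def reserve_segment_tolerating_flat_reuse(reserve_segment, context,
                                              segment):
        """reserve_segment is a placeholder for the real reservation call."""
        try:
            return reserve_segment(context, segment)
        except exceptions.FlatNetworkInUse:
            # The same flat physical network name may legitimately be used
            # by external networks in different pods; leave the real
            # validation to local Neutron in each region.
            return segment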
1. What is the problem
As reported in the bug page, creating an external network will
fail with the recently updated Neutron code.
2. What is the solution for the problem
After debugging, I find that the bug is caused by a recent
Neutron patch[1] that uses the new enginefacade for network and
subnet operations. We need to update our central plugin to
adapt to the change.
3. What features need to be implemented to the Tricircle to
realize the solution
No new features
[1] b8d4f81b8e
Closes-Bug: #1682315
Change-Id: Ia4e652c74d5ee32c10a907730a1eea5464a5b328
1. What is the problem
Flat network type is commonly used as the external network type, but
currently users can not create a flat external network via Tricircle.
2. What is the solution for the problem
Support flat network type.
3. What features need to be implemented to the Tricircle to
realize the solution
(1) A new type driver for flat network is added.
(2) Release note is added
(3) Related documents are updated
Change-Id: I148e1102510dda96a9fcd8a4b76de09cd802833c
1. What is the problem?
VLAN network has some restrictions that VxLAN network doesn't have.
For more flexible networking deployment, we consider supporting
cross-pod VxLAN network.
We are going to use shadow agent/port mechanism to synchronize VTEP
information and make cross-pod VxLAN networking available, as discussed
in the specification document[1].
In the previous implementation, we use a loop to create shadow ports
one by one. When a large number of shadow ports need to be
created, the large number of API requests will affect performance.
2. What is the solution to the problem?
Use the bulk creation API to create shadow ports (sketched after this
commit message).
3. What features need to be implemented to the Tricircle
to realize the solution?
This is the fifth patch for cross-pod VxLAN networking support, which
introduces the following changes:
(1) Use bulk API to create shadow ports
(2) Do not create resource routing entries for shadow ports
[1] https://review.openstack.org/#/c/429155/
Change-Id: I8b2dc98d84385433727e55584c80e1054fce406f
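A sketch of replacing the per-port loop with a single bulk request; the
client method name is an assumption standing in for whatever bulk call the
local Neutron client exposes.

    def build_bulk_shadow_port_body(shadow_port_specs):
        """Collect all shadow port definitions into one bulk request body
        instead of issuing one create request per port."""
        return {'ports': list(shadow_port_specs)}

    # Previously (one API call per shadow port):
    #   for spec in shadow_port_specs:
    #       client.create_port(ctx, {'port': spec})
    # Now (a single bulk API call; method name assumed):
    #   client.create_ports(ctx, build_bulk_shadow_port_body(shadow_port_specs))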
1. What is the problem?
The spec and implementation of VxLAN network support have been
submitted, but we lack updates to the related documents and a
release note.
2. What is the solution to the problem?
Update related documents and add a release note.
3. What the features need to be implemented to the Tricircle
to realize the solution?
N/A
Change-Id: I392022226b06e75f7813befc78927cb5779e0a45
1. What is the problem?
VLAN network has some restrictions that VxLAN network doesn't have.
For more flexible networking deployment, we consider supporting
cross-pod VxLAN network.
We are going to use shadow agent/port mechanism to synchronize VTEP
information and make cross-pod VxLAN networking available, as discussed
in the specification document[1].
With the previous parts[2, 3, 4], VxLAN network already works for
tenant network, but bridge network still lacks VxLAN network support.
2. What is the solution to the problem?
We need to build VxLAN tunnels for bridge ports, so bridge port
creation should also trigger shadow agent and shadow port setup.
3. What the features need to be implemented to the Tricircle
to realize the solution?
This is the fourth patch for cross-pod VxLAN networking support, which
introduces the following changes:
(1) Make bridge network gateway port creation also trigger shadow
agent and shadow port setup, so we can use VxLAN type bridge network
(2) Delete shadow bridge ports when clearing bridge network/subnet
[1] https://review.openstack.org/#/c/429155/
[2] https://review.openstack.org/#/c/425128/
[3] https://review.openstack.org/#/c/425129/
[4] https://review.openstack.org/#/c/425130/
Change-Id: I3f3054c9300566ddbdd5b6d523f547485462447c
1. What is the problem?
Currently, allowed-address-pairs is not supported in the Tricircle.
2. What is the solution to the problem?
Enable allowed address pairs in central Neutron (the typical extension
wiring is sketched after this commit message).
3. What features need to be implemented in the Tricircle
to realize the solution?
No new features.
Change-Id: I5c4f1bf1b146d5fdf49c14d43f7226d81770e667
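The usual way a Neutron plugin advertises an extension is through
supported_extension_aliases; a minimal illustration with the surrounding
plugin heavily simplified.

    class CentralPluginSketch(object):
        # Simplified: a Neutron plugin lists the extensions it implements
        # here; adding the alias lets allowed_address_pairs requests reach
        # the plugin. The rest of the alias list is illustrative.
        supported_extension_aliases = ['router', 'allowed-address-pairs']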
1. What is the problem?
VLAN network has some restrictions that VxLAN network doesn't have.
For more flexible networking deployment, we consider supporting
cross-pod VxLAN network.
We are going to use shadow agent/port mechanism to synchronize VTEP
information and make cross-pod VxLAN networking available, as discussed
in the specification document[1].
With the previous parts[2, 3], we can create instances in the same
VxLAN network but in different pods. With the help of shadow ports,
tunnels are correctly created so instances can communicate with each
other. But we have a problem during the association of a floating IP
with an instance port because the "setup_bottom_router" job will also
create a shadow port for floating IP association.
2. What is the solution to the problem?
Let "setup_bottom_router" and "setup_shadow_ports" jobs call the
same method to create shadow ports and handle conflict in that method.
3. What the features need to be implemented to the Tricircle
to realize the solution?
This is the third patch for cross-pod VxLAN networking support, which
introduces the following changes:
(1) Both creating a floating IP and booting an instance in a VxLAN network
will create a shadow port, so we leave shadow port deletion to the
central plugin, which deletes the shadow port when deleting the instance port
With this patch, binding a floating IP to a port on a cross-pod
VxLAN network is supported.
[1] https://review.openstack.org/#/c/429155/
[2] https://review.openstack.org/#/c/425128/
[3] https://review.openstack.org/#/c/425129/
Change-Id: I7ca3e124232baf265ec5a8ed3df0aca1303a2ff7
1. What is the problem
The central plugin is missing the get_router_availability_zones
function.
2. What is the solution to the problem
Add the get_router_availability_zones function to the central plugin
(an illustration of the expected hook follows this commit message).
In the previous test this function was added by mistake to its parent
class RouterAvailabilityZoneMixin (which is Neutron code), so I didn't
notice this problem.
3. What the features need to be implemented to the Tricircle
No new features
Change-Id: Ia2323478d319ead69ff4bbbdb46684b4f18340ad
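A heavily hedged illustration of the hook Neutron's
RouterAvailabilityZoneMixin expects from the plugin; the actual Tricircle
implementation may derive the AZ list differently.

    class RouterAZHookSketch(object):
        """Illustration only: the mixin calls
        get_router_availability_zones() with the router; returning the
        availability_zone_hints is one plausible behaviour, not
        necessarily the real one."""

        def get_router_availability_zones(self, router):
            # Treat the router as a plain mapping here; in Neutron it may
            # be a DB object with attribute access instead.
            return list(router.get('availability_zone_hints') or [])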
1. What is the problem?
VLAN network has some restrictions that VxLAN network doesn't have.
For more flexible networking deployment, we consider supporting
cross-pod VxLAN network.
We are going to use shadow agent/port mechanism to synchronize VTEP
information and make cross-pod VxLAN networking available, as discussed
in the specification document[1].
In part1[2], we have added the necessary logic to retrieve agent info
from local Neutron and save it in the shadow agent table in central
Neutron. Now we need to utilize this info to create shadow agent and
shadow port.
2. What is the solution to the problem?
An asynchronous job triggered when an instance port is updated to active
is added. It calculates the needed shadow ports and then creates them in
the target pod.
3. What the features need to be implemented to the Tricircle
to realize the solution?
This is the second patch for cross-pod VxLAN networking support, which
introduces the following changes:
(1) A new asynchronous job setup_shadow_ports is added. Each asynchronous
job only handles the shadow port setup in one given pod for one given
network. If shadow ports in other pods also need to be updated, the job
registers one new job for each such pod.
[1] https://review.openstack.org/#/c/429155/
[2] https://review.openstack.org/#/c/425128/
Change-Id: I9481016b54feb57aacd03688de882b8912a78018
1. What is the problem?
VLAN network has some restrictions that VxLAN network doesn't have.
For more flexible networking deployment, we consider supporting
cross-pod VxLAN network.
2. What is the solution to the problem?
We are going to use shadow agent/port mechanism to synchronize VTEP
information and make cross-pod VxLAN networking available, as discussed
in the specification document[1].
3. What the features need to be implemented to the Tricircle
to realize the solution?
This is the first patch for cross-pod VxLAN networking support, which
introduces the following changes:
(1) A new type driver for VxLAN network is added
(2) While processing the update request from Nova, the local plugin
populates agent info in the update body and sends the update request to
central Neutron
(3) Central Neutron extracts the agent info from the request body and
registers a shadow agent in the Tricircle database
(4) While processing a create request, if agent info is set in
binding:profile in the create body, the local plugin creates or updates
the shadow agent before invoking the real core plugin
(5) While processing an update request, if "force_up" is set in
binding:profile in the update body, the local plugin updates the port
status to active to trigger l2 population
[1] https://review.openstack.org/#/c/429155/
Change-Id: I2e2a651887320e1345f6904393422c5a9a3d0827
1. What is the problem
1) Tricircle does not enable the router's AZ
2) Tricircle's network topology is too complex for a local
router (a router residing in only one region)
2. What is the solution to the problem
1) Enable the router's AZ
2) Remove the ns-router and bridge network when a local router is used
3. What the features need to be implemented to the Tricircle
1) Enable the router's AZ
2) Attach the external network to the local router directly; no additional
intermediate router is needed.
Implements: blueprint enable-router-az-simplify-net-top
Change-Id: I410a81c9d0e56db8163e611211b8dbd4c5772767
1. What is the problem
When booting a VM, if the VM's network is not located in the specified
region, the VM can still be created successfully.
2. What is the solution to the problem
Convert the network's az_hints to region names in the central plugin and
judge in the local plugin whether the network is located in the current
region.
3. What the features need to be implemented to the Tricircle
No new features
Change-Id: Ie8c28b7956e1451fb51745864385a5ddefc9cbea
This patch refactors tricircle to use provider net
from neutron-lib. For more details see [1].
Note that this project also uses a private API from
neutron's provider net extension. There is work to
rehome that into neutron-lib as well [2].
NeutronLibImpact
[1] https://review.openstack.org/421562/
[2] https://review.openstack.org/421961/
Change-Id: I078198e0cebb33f40ef6c2f7849f42e5b5f47d2b
This patch refactors tricircle to use portbindings
from neutron-lib. For more details see [1].
NeutronLibImpact
[1] https://review.openstack.org/422210/
Change-Id: I48cdf64e06c69749f82c77823ad7710b0fce4fad
1. What is the problem?
When instance booting and router interface adding run at
the same time, sometimes the asynchronous router setup job is not
triggered, so the bottom router is not created. A script that can
reproduce this problem is attached to the bug report page[1].
2. What is the solution to the problem?
As discussed in the bug report page[1], this is a timing-related
bug. To fix this bug, we just need to modify the router
interface adding process to add the top interface before getting the
network resource routing entries.
3. What the features need to be implemented to the Tricircle
to realize the solution?
No new feature.
[1] https://bugs.launchpad.net/tricircle/+bug/1647924
Change-Id: Ia79e375ed95fc88f5f670e350fe5c7bbef5627f2
Closes-Bug: #1647924
1. What is the problem
Tricircle does not support subnet update operation now.
2. What is the solution to the problem
Implement related functions
3. What the features need to be implemented to the Tricircle
No new features
Change-Id: I840caef4176f879ceedae98f6d2c22964ebf32d6
1. What is the problem
Tricircle does not support port update operation now.
2. What is the solution to the problem
Implement related functions
3. What the features need to be implemented to the Tricircle
Add support for updating the name, description, admin_state_up,
extra_dhcp_opts, device_owner, device_id, mac_address and security group
attributes, where updating the name only takes effect in the central pod.
Change-Id: Id0f1175f77f66721eaf739413edf81bfc9231957
1. What is the problem
The concept of shared_vlan is to create a network spanning multiple
OpenStack clouds with the same VLAN segment, but when creating a network
that resides in only one region with the VLAN network type, we still have
to specify the network type as shared_vlan; otherwise there is no way to
create a VLAN network in local Neutron.
2. What is the solution to the problem
Just use vlan as the network type, and use
availability_zone_hints to limit where the network will reside.
3. What the features need to be implemented to the Tricircle
No new features
Change-Id: Ib3f110e2281eff2997752debda319da282c3e3ad
1. What is the problem?
Based on the patch to combine bridge networks[1], we still need some
changes to bring DVR support to the Tricircle.
2. What is the solution to the problem?
Here lists some major changes:
(1) Extend central plugin to add DVR support
(2) The "distributed" parameter xmanager sets to create local router
is no longer hard coded as False.
(3) Device owner filters to query ports now includes DVR port.
(4) Two ports are created when attaching a netwok to a distributed
router, one is of type "router_interface_distributed" and the
other is of type "router_centralized_snat". We assign the ip of
the pre-created interface port to the first one and assign the
ip of the central subnet gateway to the second one.
3. What the features need to be implemented to the Tricircle
to realize the solution?
Bring part of the DVR support to cross-pod layer-3 networking, now
users can create a distributed router.
[1] https://review.openstack.org/#/c/407956/
Change-Id: I0a9724e758bfa226520f536dd6055ca0c870fd89
1. What is the problem
An error occurred when creating an internal network specifying the az-hint
parameter with a region name: AvailabilityZone could not be found.
2. What is the solution to the problem
The region name is stored in the pod; add it to the known AZ list.
3. What the features need to be implemented to the Tricircle
No new features
Change-Id: Ie820946bbb0a43f51be683871c34ada71714902a
1.What is the problem?
Tricircle doesn't support python3 yet, but python2 support will
be stopped in 2020. OpenStack community has put the support of
python3 as the community wide goal in Pike release, Tricircle
needs to be ready for this.
2.What is the solution to the problem?
Port the code to be compatible with both python2 and python3, for
python3, only the python3.5 version will be supported.
After this patch is merged, a new gate/check job for python3.5
should be enabled too.
3.What the features need to be implemented to the Tricircle
to realize the solution?
No new features.
Change-Id: I18cb59cadb7a1c06f6cd729c4bda2c8e95d41e1e
Signed-off-by: joehuang <joehuang@huawei.com>
1. What is the problem?
The current implementation of bridge networks has some problems
when supporting DVR and shared VxLAN network. One blueprint has
been registered[1] and the specification document has also been
submitted[2].
2. What is the solution to the problem?
The logic of bridge network operations will be changed, here lists
some major changes:
(1) Only one bridge network will be created for one project
(2) Bridge network is attached to local router as external network
(3) One local router dedicated for north-south networking will be
created in the pod hosting real external network. Bridge network
is attached to this special router as internal network
(4) If the instance port is not located in the pod hosting the real
external network, after floating IP association, one "copy" port
will be created in the pod hosting the real external network. Without
this port, the local Neutron server will complain that it cannot find
the internal port to be associated.
3.What the features need to be implemented to the Tricircle
to realize the solution?
Bring part of the DVR support to cross-pod layer-3 networking
[1] https://blueprints.launchpad.net/tricircle/+spec/combine-bridge-net
[2] https://review.openstack.org/#/c/396564/
Change-Id: I53d1736ab30d3bc508279532609285975988b5f4
1. What is the problem?
Tricircle is now dedicated to networking automation across Neutron. Some
tables used by the API gateway should be removed, like the aggregation
table, pod binding table, etc. They should not reside in the Tricircle any
more. Other tables that carry old meanings but are still in use should be
renamed for better understanding. We can see the blueprint[1] for further
explanation.
2. What is the solution to the problem?
The data models, tables and APIs about aggregation, pod binding, etc. should
be removed. After the pod binding table is removed, the az_hint used for
external network creation is hard to match, so special handling needs to be
implemented. Other tables will have a vague meaning after this splitting,
but they still take effect in the Tricircle, so they should be renamed for
better understanding. What's more, pod_name in the pod table is renamed
to region_name, which aligns better with its availability zone.
1) Tables to be removed:
*aggregates
*aggregate_metadata
*instance_types
*instance_type_projects
*instance_type_extra_specs
*key_pairs
*pod_binding
2) Tables to be renamed:
*cascaded_pod_service_configuration (new name: cached_endpoints)
*cascaded_pods (new name: pods)
*cascaded_pods_resource_routing (new name: resource_routings)
*job (new name: async_jobs)
3. What the features need to be implemented to the Tricircle to realize
the solution?
After the pod binding table is removed, the az_hint used for external
network creation is hard to match. New features will be implemented to solve
this problem.
[1] https://blueprints.launchpad.net/tricircle/+spec/clean-legacy-tables
Change-Id: I025b4fb48c70abf424bd458fac0dc888e5fa19fd
1. What is the problem
Tricircle does not support network update operation now.
2. What is the solution to the problem
Implement related functions
3. What the features need to be implemented to the Tricircle
Add support for updating the name, description, shared and admin_state_up
attributes, where updating the name only takes effect in the central pod.
Change-Id: Ifd0bceeb70ee5408d3357cfc990c9423a6e94237