1. What is the problem
The changes needed for the local and central plugins to boot
a virtual machine have been submitted in patch [1]. As the
next step, we need to add l3 functionality to the local and
central plugins.
2. What is the solution to the problem
Several changes are made in the local plugin.
(1) Before creating a local subnet, the local plugin sends a request
to the central plugin to create a "reserved gateway port", and uses
the ip address of this port as the gateway ip of the local subnet.
(2) When the local plugin receives a network or subnet creation
request, if the request contains the "name" parameter and the name
is a UUID, the local plugin uses the name as the id of the local
network or subnet (see the sketch after this list).
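A minimal sketch of the name-as-UUID convention in (2), assuming a
hypothetical helper inside the local plugin (not the actual Tricircle
code):

    import uuid

    def get_local_id_from_name(resource_body):
        # If the central plugin put a UUID in the "name" field, reuse
        # it as the id of the local network or subnet so top and
        # bottom resources share the same id. (Hypothetical helper.)
        name = resource_body.get('name')
        if not name:
            return None
        try:
            return str(uuid.UUID(name))
        except ValueError:
            return None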
3. What features need to be implemented in the Tricircle
to realize the solution
With this patch, users can use a router to connect virtual machines
that are booted directly via the local Nova server and reside in
different networks.
[1] https://review.openstack.org/375281
Change-Id: I12094f30804c0bad2f74e0ff510ac26bd217cfd4
1. What is the problem
As discussed in the feature specification[1], we are going to
move the networking automation process from the Nova-APIGW to
the Neutron server.
2. What is the solution to the problem
Implement a new Neutron core plugin which runs in the local Neutron
server to finish the networking automation process. Also, the
original Neutron core plugin is renamed to central plugin and
needs some changes to work with the local plugin.
3. What features need to be implemented in the Tricircle
to realize the solution
With this patch, users can boot a virtual machine directly via
the local Nova server. But security group support is not covered.
DevStack script and local.conf sample are also updated.
[1] https://github.com/openstack/tricircle/blob/master/specs/ocata/local-neutron-plugin.rst
Change-Id: I6a3dc5e9af395e3035a7d218264a08b6313a248d
1. What is the problem
After running the tempest tests test_networks and test_ports, we
found some bugs in network and port queries:
(1) Querying port with fields specified is not supported
(2) Querying port with ip address filter is not supported
(3) Querying network without id in the returned fields causes error
2. What is the solution to the problem
(1) Remove unnecessary fields before returning the result
(2) If an ip address filter is specified, join the IPAllocation
table to filter ports (see the query sketch after this list)
(3) If id is not in the specified field list, do not extend the
result with segmentation information, because the network id is
required to retrieve segmentation information
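Item (2) can be pictured roughly as the following SQLAlchemy sketch;
the model import path and session handling are assumptions for
illustration, not the exact Tricircle code:

    from neutron.db import models_v2

    def query_ports_by_ip(session, ip_address):
        # Join the IPAllocation table so ports can be filtered by a
        # fixed ip address, which the plain port query cannot do.
        return (session.query(models_v2.Port)
                .join(models_v2.IPAllocation,
                      models_v2.IPAllocation.port_id == models_v2.Port.id)
                .filter(models_v2.IPAllocation.ip_address == ip_address)
                .all())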
3. What features need to be implemented in the Tricircle
to realize the solution
The tempest test test_networks now passes; to pass test_ports, we
still need to support security group update and binding profile
update.
Change-Id: Iec498fc72cd0dba74c908823f2d429537d52e0e2
1. What is the problem
To connect bottom routers in different pods, we need to create
bridge networks and subnets. These networks and subnets are not
deleted after bottom routers are deleted.
2. What is the solution to the problem
Clean these bridge networks and subnets during top router deletion.
3. What features need to be implemented in the Tricircle
to realize the solution
Bridge networks and subnets can now be cleaned up.
Change-Id: I1f2feb7cba3dda14350b3e25a3c33563379eb580
1. What is the problem
Before creating a bottom subnet, we need to create some ports to
allocate ip addresses for the bottom dhcp port and bottom gateway
port. These pre-created ports are not deleted after the bottom
resources are deleted.
2. What is the solution to the problem
Clean these pre-created ports during top subnet deletion.
3. What features need to be implemented in the Tricircle
to realize the solution
Pre-created ports can now be cleaned up.
Change-Id: I73c0ef87e4104f1db9926a5972c5f36be94d724a
1. What is the problem
Users cannot correctly delete floating ips via the Tricircle plugin.
2. What is the solution to the problem
Override the delete_floatingip method in the Tricircle plugin. We
first update the floating ip to disassociate it from its fixed ip,
then invoke the delete_floatingip method in the base class to delete
the database record.
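A hedged sketch of that override; the parent mixin and payload shape
follow standard Neutron conventions, and the details are illustrative
rather than the exact Tricircle code:

    from neutron.db import l3_db

    class TricirclePlugin(l3_db.L3_NAT_dbonly_mixin):
        def delete_floatingip(self, context, _id):
            # First disassociate the floating ip from its fixed ip so
            # bottom resources are cleaned up, then let the base class
            # delete the top database record.
            self.update_floatingip(
                context, _id, {'floatingip': {'port_id': None}})
            super(TricirclePlugin, self).delete_floatingip(context, _id)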
3. What features need to be implemented in the Tricircle
to realize the solution
Deletion of floating ip is now supported.
Change-Id: Ia8bf6e7499321c1be085533e8695eea6ac8ef26d
1. What is the problem
The shared vlan type driver has been merged, so we can run two VMs
in the same network but across two pods. However, if we attach a
network to a router, the Tricircle plugin still checks whether the
network is bound to one AZ, so a network cannot cross different pods
if we are going to attach it to a router.
2. What is the solution to the problem
The reason we require a network to be bound to an AZ is that when
the network is attached to a router, we know where to create the
bottom network resources. To support l3 networking in a shared vlan
network, we need to remove this restriction.
In the previous patches[1, 2], we have already moved bottom router
setup to an asynchronous job, so we just remove the AZ restriction
and make the Tricircle plugin and the Nova_apigw use the job.
Floating ip association and disassociation are also moved to the
bottom router setup job.
3. What features need to be implemented in the Tricircle
to realize the solution
Now a network can be attached to a router without specifying an AZ,
so l3 networking can work with cross-pod networks.
[1] https://review.openstack.org/#/c/343568/
[2] https://review.openstack.org/#/c/345863/
Change-Id: I9aaf908a5de55575d63533f1574a0a6edb3c66b8
1. What is the problem:
If the availability zone (az in short) does not exist in the server
creation request or the external network, a 400 bad request is
returned in the current handling; this is inconsistent with Nova and
Neutron. In Nova and Neutron, az is an optional parameter, i.e.,
even if the az is not specified in the request, a scheduled az should
be used for the creation instead of returning 400 bad request.
2. What needs to be fixed:
To fix the issue, server creation should allow the az parameter to be
absent from the request, and select one default bottom pod to continue
the server creation.
3. What is the purpose of this patch set:
To make the az handling behavior consistent with that in Nova and
Neutron.
Change-Id: I9281914bad482573c6540bf2a14e51e7ca4d744c
Signed-off-by: joehuang <joehuang@huawei.com>
1. What is the problem
Our current mechanism to handle dhcp port creation is that we first
create the top dhcp port, then use the allocated ip to create the
bottom dhcp port. If we get an "IpAddressInUse" error, meaning that
the bottom dhcp agent has already created a dhcp port with the same
ip, we just reuse that port; otherwise, after we successfully create
the bottom dhcp port, we remove the other dhcp ports with different
ips, so the dhcp agent can be notified and use the bottom dhcp port
we created.
However, we find that this mechanism doesn't work in the following
situation: the dhcp agent may create the bottom dhcp port after we
check whether other dhcp ports exist, so we don't remove the dhcp
port created by the dhcp agent, and thus we end up with two bottom
dhcp ports, one created by the dhcp agent and one created by us.
2. What is the solution to the problem
Create the bottom subnet with dhcp disabled, so no dhcp agent will
be scheduled, then create the bottom dhcp port and update the bottom
subnet to enable dhcp. Finding that there is already a reserved dhcp
port, the dhcp agent will not create another dhcp port and will
directly use the existing one.
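A rough illustration of that ordering; the client wrapper and helper
name are assumptions, not the actual Tricircle helper code:

    def create_bottom_subnet_with_reserved_dhcp_port(client, subnet_body,
                                                     dhcp_port_body):
        # 1. Create the bottom subnet with dhcp disabled so no dhcp
        #    agent is scheduled and no competing dhcp port appears.
        subnet_body['subnet']['enable_dhcp'] = False
        subnet = client.create_subnet(subnet_body)['subnet']
        # 2. Create the reserved dhcp port with the ip allocated on top.
        client.create_port(dhcp_port_body)
        # 3. Enable dhcp; the dhcp agent finds the reserved port and
        #    reuses it instead of creating another one.
        client.update_subnet(subnet['id'],
                             {'subnet': {'enable_dhcp': True}})
        return subnet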
3. What features need to be implemented in the Tricircle
to realize the solution
No new feature is introduced.
Change-Id: I8bb8622b34b709edef230d1f1c985e3fabd5adb0
1. What is the problem
Currently dhcp port handling is only done in the Nova_apigw. Please
refer to our design document[1] to see why the dhcp port needs special
processing, which is discussed in section 7.4.10. Since we are going
to support one top network spreading into different AZs (availability
zones), routers will be created in different AZs. For a shared VLAN
type network, bottom networks in different AZs are actually in the
same broadcast domain, so router gateway IPs should be different
in different AZs, otherwise we would have two interfaces with the
same IP in the same broadcast domain. Thus, we need to allocate one
IP for each bottom router, and the Tricircle plugin needs to handle
dhcp port creation.
2. What is the solution to the problem
Restructure the code to move dhcp port handling from the Nova_apigw
to the helper module, so both the Nova_apigw and the Tricircle
plugin can use it.
3. What features need to be implemented in the Tricircle
to realize the solution
No new feature is introduced.
[1] https://docs.google.com/document/d/18kZZ1snMOCD9IQvUKI5NVDzSASpw-QKj7l2zNqMEd3g
Change-Id: I2525a7a18761ef4aa8c6e743cb46ed238a313731
1. What is the problem
Currently router operations are all done in the Tricircle plugin
in a synchronous way. Now we are going to support one top network
spreading into different availability zones, so not only the
Tricircle plugin but also the Nova_apigw needs to handle router
operations. Also, having one top network spreading into several
availability zones means that we may need to operate routers in
several bottom pods at one time, so it's better to handle the
operations in an asynchronous way.
2. What is the solution to the problem
Restructure the code to move router operations to a new helper
module and add a new type of job to set up bottom routers, so both
the Tricircle plugin and the Nova_apigw can operate bottom routers
via xjob.
3. What features need to be implemented in the Tricircle
to realize the solution
A new helper module is added, with most of the code moved from
the Tricircle plugin. Also, a new type of job is added.
Change-Id: Ie5a89628a65c4d7cbcb2acd56bafe682580da2c6
1. What is the problem
The Tricircle Neutron plugin calls the Neutron type manager's
function _add_network_segment, but Neutron has recently removed
the "mtu" parameter; this makes the check and gate tests in the
Tricircle fail because the Tricircle still passes the "mtu"
parameter.
All python27 check and gate tests fail due to this change,
and no new patch can be merged. The issue should be fixed ASAP.
2. What needs to be fixed:
Do not pass "mtu" to the function _add_network_segment.
3. What is the purpose of this patch set:
To get rid of the check and gate test failure.
Change-Id: Id51cc9840e4bf2dd8e01b504502266e962c1e0c9
1. What is the problem
Network type framework has been merged but only the local network
type is supported currently, so cross-pod l2 networking is still
not available.
2. What is the solution to the problem
As the first step, we support the simplest shared vlan network
type. VMs in different pods are hosted in the networks of their own
pods with the same vlan ID and are connected via physical switches.
3. What features need to be implemented in the Tricircle
to realize the solution
(1) A shared vlan type driver is added.
(2) During the process of VM creation, if the Nova_apigw finds that
the required network is of the shared vlan type, it uses all the
segment information of the network to form a network creation request
and sends it to the Neutron server in the bottom pod with an admin
token (see the request sketch at the end of this message).
(3) The creation of the bridge network for cross-pod l3 networking
directly uses the shared vlan type driver and no longer requires
extra code to allocate segments.
To make the shared vlan network type fully functional, it is
necessary to add gateway port creation in each pod. This patch set
does not cover that functionality because of its complexity.
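For item (2), the bottom network creation request built from the top
network's segment information looks roughly like this; the body shape
and segment keys are illustrative, not the exact Tricircle code:

    def build_bottom_network_body(top_network, segment):
        # Reuse the segment recorded for the top network so the bottom
        # network is created on the same physical network and vlan id;
        # the request is then sent to the bottom Neutron server with
        # an admin token.
        return {
            'network': {
                'name': top_network['id'],
                'provider:network_type': 'vlan',
                'provider:physical_network': segment['physical_network'],
                'provider:segmentation_id': segment['segmentation_id'],
            }
        }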
Change-Id: I8dd75d51fb74340c03d44e007b217f70d1a12d66
1. What is the problem
The Tricircle Neutron plugin calls the Neutron type manager's
function _add_network_segment. The first parameter of this
function used to be "session", but Neutron has recently changed
the parameter to "context"; this makes the check and gate tests
in the Tricircle fail because the Tricircle still passes "session"
as the first parameter.
All python27 check and gate tests fail due to this change,
and no new patch can be merged. The issue should be fixed ASAP.
2. What needs to be fixed:
Pass "context" instead of "session" to the function
_add_network_segment.
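In other words, the call site changes roughly as follows; the wrapper
function and argument names here are illustrative:

    def add_segment(type_manager, context, network_id, segment):
        # The first argument is now the request context instead of a
        # database session; the remaining arguments are unchanged.
        # (Sketch only; the surrounding plugin code is omitted.)
        type_manager._add_network_segment(context, network_id, segment)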
3. What is the purpose of this patch set:
To get rid of the check and gate test failure.
Change-Id: I78c64fe3cdd939cbdecf35fec0cff6cb44746cb0
Signed-off-by: Chaoyi Huang <joehuang@huawei.com>
1. What is the problem
In the current implementation of the Tricircle plugin for neutron,
network type is not supported, so users cannot create networks
with a network type specified. In the specification of the cross-pod
l2 networking feature[1], we decided to support several network
types like local, shared VLAN, shared VxLAN, etc.; the first step
is to make the Tricircle plugin aware of network types.
2. What is the solution to the problem
Handle network type in the Tricircle plugin for neutron.
3. What features need to be implemented in the Tricircle
to realize the solution
In this patch, we add a framework to load type drivers which
process different network types. The framework is based on the
neutron ML2 implementation: we inherit the ML2 type manager and
create a new Tricircle type manager. Also, the Tricircle plugin
is modified to extract the network type parameter from the request
and insert network type information into the response.
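A minimal sketch of that inheritance, assuming the ML2 import path of
that era; driver loading and registration are omitted:

    from neutron.plugins.ml2 import managers

    class TricircleTypeManager(managers.TypeManager):
        """Sketch of a type manager built on the ML2 TypeManager.

        It reuses the ML2 machinery for segment handling and provider
        attribute validation, and only swaps in the Tricircle type
        drivers (loading logic omitted in this sketch).
        """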
[1] https://github.com/openstack/tricircle/blob/master/specs/cross-pod-l2-networking.rst
Change-Id: Ida9b88df6113db46e637a7841ce5c1adaf651eba
According to the current design of cross pod L3 networking,
user needs to specify a pod when creating an external network
and the external network will be located in this pod. For VMs
located in other pods to access the external network, we need
a bridge network to connect these pods.
We assign the bridge network a CIDR allocated from a CIDR
pool. In the pod hosting the VM, say Pod_vm, a bridge external
network is created with the CIDR, so we can allocate a floating
ip from the CIDR and bind it to the VM port. In the pod hosting
the real external network (say "real" here to distinguish it from
the bridge external network), say Pod_extnet, a bridge internal
network is created with the CIDR, so we can create a port with
the same ip as the floating ip in Pod_vm, and bind it to the real
floating ip in Pod_extnet. With the bridge network, via two-step
DNAT, the VM can be accessed from the real external network.
For example, let's say we have an internal network with CIDR
10.0.1.0/24 and an external network with CIDR 162.3.124.0/24,
the CIDR of bridge network is 100.0.1.0/24, when binding a VM
ip 10.0.1.4 to a floating ip 162.3.124.5, the VM ip is first
bound to 100.0.1.4, which is allocated from 100.0.1.0/24, then
100.0.1.4 is bound to 162.3.124.5.
In the case that the VM and the external network are in the same
pod, the bridge network is not needed.
So the plugin needs to distinguish these two cases when handling
floating ip disassociation. If the VM and the external network are
in the same pod, the plugin only disassociates the binding; if they
are in different pods, the plugin also needs to release the ip
allocated from the bridge network.
Change-Id: Ibae353ec81aceda53016b6ea8aba1872d6d514be
When creating an external network in the top pod, an az hint is
passed to specify in which pod the bottom external network is
located. So the plugin can get the bottom router ID from the
resource routing table with the top router ID and bottom pod ID.
The plugin first updates the router in the top pod to remove the
gateway, then sends a "remove_gateway" request to the target bottom
pod to update the bottom router.
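A hedged sketch of that flow; the routing-table lookup and bottom
client helpers are illustrative names, not the actual Tricircle
methods:

    def remove_router_gateway(self, context, top_router_id):
        # 1. Clear the gateway on the top router.
        self.update_router(context, top_router_id,
                           {'router': {'external_gateway_info': {}}})
        # 2. Find the bottom router via the resource routing table and
        #    send "remove_gateway" to the target bottom pod.
        pod, bottom_router_id = self._get_bottom_router(
            context, top_router_id)
        self._get_client(pod).remove_gateway_router(bottom_router_id)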
Change-Id: I69e411188e758016ea789a91298ccd243bdc31cd
Change the following to pass the tempest list_server_filters test:
(1) Change response body of Nova API gateway image controller
(2) Add network controller
(3) Add server list filters support
(4) Change response body of server list
Change-Id: I96013e2a3c871640b530611e36fa0a219c5e0516
A recent Neutron commit moves the NetworkSegment table out of the
ML2 code tree. This patch adapts to this change.
Change-Id: I8d63cfd3cebca97c614ec547fbf58b0bfff4dda7
The task the Neutron plugin needs to finish for security group
functionality is simple. When a rule is created or deleted
in a security group, it just updates the corresponding
bottom security group. Also, creating or deleting a rule
with "remote_group_id" and changing rules in the default
security group will be rejected by the plugin.
Currently this task runs in a synchronous way. Later it can
be implemented in an asynchronous way for better response time.
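A hedged sketch of the rule-creation path inside the plugin class; the
bottom-pod sync helper is an illustrative name, not the actual
Tricircle method:

    from neutron.db import securitygroups_db
    from neutron_lib import exceptions

    class TricirclePlugin(securitygroups_db.SecurityGroupDbMixin):
        def create_security_group_rule(self, context, security_group_rule):
            rule = security_group_rule['security_group_rule']
            # Rules referencing a remote group cannot be mapped cleanly
            # to bottom pods, so they are rejected.
            if rule.get('remote_group_id'):
                raise exceptions.InvalidInput(
                    error_message='remote_group_id is not supported')
            new_rule = super(TricirclePlugin, self).create_security_group_rule(
                context, security_group_rule)
            # Synchronously mirror the rule to the corresponding bottom
            # security group in each pod (illustrative helper name).
            self._update_bottom_security_group(context, new_rule)
            return new_rule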
Change-Id: Ibbf46c2e91382986c02324d86bc22887e93267eb
Invoke _make_port_dict for the top port query to transform the
query objects into dictionaries so port information can be displayed
correctly in the neutron client.
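Roughly, as a method sketch inside the plugin; the query helper shown
is the standard Neutron db helper, but the exact wiring here is
illustrative:

    def get_ports(self, context, filters=None, fields=None):
        # Convert each query object into a plain dictionary so the
        # neutron client can render the port list correctly.
        query = self._get_ports_query(context, filters=filters)
        return [self._make_port_dict(port, fields) for port in query]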
Change-Id: I8b025be7db36aaef0ae5e0465ceacd5da7c91742
Closes-Bug: #1551616
Implement l3 north-south networking functionality. In our current
design, the external network is hosted in one of the bottom pods;
VMs hosted in other bottom pods are connected to this external
network via a bridge network, using the same physical network as
the east-west networking, but a different vlan.
Change-Id: I953322737aa97b2d1ebd9a15dc479d7aba753678
Implement cross pod l3 networking functionality. In this second
patch, we implement an extra job to configure extra routes. README
is updated to introduce this networking solution and how to test
it with DevStack.
Change-Id: I3dafd9ef15c211a941e85b690be1992416d3f3eb
The stateless design was developed in the experiment branch. The
experiment shows advantages in removing the status synchronization
and uuid mapping compared to the stateful design, and also greatly
reduces the coupling with OpenStack services like Nova and Cinder.
The overhead query latency for resources is also acceptable. It's
time to move the stateless design to the master branch.
BP: https://blueprints.launchpad.net/tricircle/+spec/implement-stateless
Change-Id: I51bbb60dc07da5b2e79f25e02209aa2eb72711ac
Signed-off-by: Chaoyi Huang <joehuang@huawei.com>