The purpose of this patch set is to provide the end users with
the following functionalities:
1. Extend the size of the specified volume.
2. Set attachment metadata of the specified volume.
3. Update the state of the specified volume.
4. Set image metadata of the specified volume.
5. Unset image metadata of the specified volume.
6. Show image metadata of the specified volume.
Change-Id: Ie9e4ca15a412c89a3c44f1b8526e6597eddf762c
1. What is the problem:
Neutron removed the _allocate_specific_ip function
(https://review.openstack.org/#/c/303638/), but the stub function
_allocate_specific_ip still exists in PluginTest, which causes
the Tricircle unit tests to fail.
2. What needs to be fixed:
In tricircle/tests/unit/network/test_plugin.py
remove the stub function _allocate_specific_ip.
3. What is the purpose of this patch set:
To make the Tricircle adapt to the change in Neutron so that the unit
tests of the Tricircle can be executed successfully.
Change-Id: I8d0962dec7fd2a5421b93b1b26d416846e32e80c
The purpose of this patch set is to provide the end users with
the following functionalities:
1. Create volume metadata for a given volume.
2. Get the list of metadata for a given volume.
3. Update a given volume's metadata.
4. Delete an existing metadata of a given volume.
Change-Id: Id10eb6b9b68fcb2c496d1e8fa4fecec3c8d613d8
1. What is the problem
In the current controller implementation in nova_apigw and
cinder_apigw, pecan.abort is used to raise an exception when an
error occurs. The problem with using pecan.abort is that the
response body doesn't have the same format as the error
response body in the nova api and cinder api. Thus the python
client may not correctly extract the error message, and tempest
tests may fail.
2. What is the solution to the problem
Replace pecan.abort with correct response body.
3. What features need to be implemented in the Tricircle
to realize the solution
In this patch, we remove pecan.abort calls in controllers
of nova and cinder resources and directly return the error
response body with the correct format. Controllers for the Tricircle
api still keep pecan.abort calls since we don't have special
requirements on the format of their error response body.
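The shape of such an error body can be sketched as follows; the helper names (error_body, format_error) are hypothetical, only the nested {"badRequest": {"message": ..., "code": ...}} layout mirrors what nova and cinder clients expect.

```python
# Hypothetical sketch: build an error body in the same shape nova/cinder
# clients expect, instead of calling pecan.abort. Helper names are
# illustrative, not the actual Tricircle code.

def error_body(status, title, message):
    # e.g. {"badRequest": {"message": "...", "code": 400}}
    return {title: {'message': message, 'code': status}}

def format_error(status, message):
    # Map common HTTP status codes to the key nova/cinder use in bodies.
    titles = {400: 'badRequest', 404: 'itemNotFound', 409: 'conflict'}
    return error_body(status, titles.get(status, 'error'), message)
```

A controller would then return, say, format_error(404, 'Volume not found') together with a 404 status, and the python client can extract the message field as usual.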
Change-Id: I0e6fe9ddfce3f001fee0be2160d24c9c628d0a88
1. What is the problem
In the current implementation of the Tricircle plugin for neutron,
network type is not supported so users cannot create networks
with a network type specified. In the specification of the cross-pod
l2 networking feature[1], we decided to support several network
types like local, shared VLAN, shared VxLAN, etc. The first step
is to make the Tricircle plugin aware of network types.
2. What is the solution to the problem
Handle network type in the Tricircle plugin for neutron.
3. What features need to be implemented in the Tricircle
to realize the solution
In this patch, we add a framework to load type drivers that
process different network types. The framework is based on the
neutron ML2 implementation: we inherit the ML2 type manager and
create a new Tricircle type manager. The Tricircle plugin is also
modified to extract the network type parameter from the request and
insert network type information into the response.
[1] https://github.com/openstack/tricircle/blob/master/specs/cross-pod-l2-networking.rst
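The driver-loading idea can be sketched roughly as follows; this is not the actual ML2 or Tricircle API, just an illustration with hypothetical class and method names.

```python
# Illustrative sketch of a type-manager-style registry, loosely modeled
# on the ML2 type manager idea; all names here are hypothetical.

class LocalTypeDriver:
    network_type = 'local'

    def validate(self, segment):
        # A real driver would validate the whole segment dict.
        return segment.get('network_type') == self.network_type

class TypeManager:
    def __init__(self, drivers):
        # Map each network type name to the driver that handles it.
        self.drivers = {d.network_type: d for d in drivers}

    def extract_network_type(self, request_body):
        # Pull the provider network type out of a create-network request,
        # defaulting to 'local' when the user specifies none.
        return request_body.get('provider:network_type', 'local')

    def get_driver(self, network_type):
        return self.drivers[network_type]

manager = TypeManager([LocalTypeDriver()])
net_type = manager.extract_network_type({'name': 'net1'})
driver = manager.get_driver(net_type)
```

Adding support for a new network type then amounts to registering another driver with the manager, which is the point of basing the framework on ML2.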
Change-Id: Ida9b88df6113db46e637a7841ce5c1adaf651eba
The purpose of this patch set is to provide the end users with
the following functionalities:
1. Create volume type resources; only the admin is able to create volume types.
2. Show details for a volume type by ID.
3. List volume types.
4. Update a volume type; only the admin is able to update volume types.
5. Delete a volume type; only the admin is able to delete volume types.
Change-Id: I6e3188018eff6db155ec02dbd2af6aefc0363df9
1. What is the problem?
In the Tricircle, cross pod L2 networking automation is
established after the VM is plugged in. The simplest way to
stretch one L2 network across multiple OpenStack instances is to
use the same VLAN network, but this has many limitations: the
number of VLAN segments is limited, and the VLAN network itself does
not spread well across multiple sites, although you can use some
gateways to do so. Moreover, there are many tenants in the cloud, and
new tenants can be added to the cloud dynamically; a fixed physical
network configuration for dynamic tenant networking is hard to manage.
2. What is the solution to the problem?
To deal with the above problem, flexible tenant-level L2 networking
automation across multiple OpenStack instances in one site or in
multiple sites is needed in the Tricircle.
3. What features need to be implemented in the Tricircle to
realize the solution?
Implement networking automation that supports more than one
bottom pod in an AZ, or multiple AZs, for different use cases
in the Tricircle.
Blueprint: https://blueprints.launchpad.net/tricircle/+spec/cross-site-connectivity
Change-Id: I616048c13d03f48aa16d9ff48572b0d5a49d6fb4
1. What is the problem:
If the availability zone (az in short) does not exist in the volume
creation request, a 400 bad request is returned by the current handling;
this is incorrect and blocks the following integration test cases:
ostestr --regex tempest.api.volume.test_volumes_list
ostestr --regex tempest.api.volume.test_volumes_get
Refer to https://review.openstack.org/#/c/329300/
2. What needs to be fixed:
To fix the issue, volume creation should allow the az parameter to be
omitted from the request, and select a default bottom pod to continue
the volume creation.
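The selection logic can be sketched like this; the pod list, field names and choose_pod helper are hypothetical stand-ins, not the actual Tricircle code.

```python
# Hypothetical sketch: when the request carries no availability zone,
# fall back to a default bottom pod instead of rejecting with 400.

def choose_pod(pods, az_name=None):
    if az_name:
        candidates = [p for p in pods if p['az_name'] == az_name]
        if not candidates:
            return None  # an unknown az is the only truly invalid case
    else:
        candidates = pods  # no az given: any bottom pod is acceptable
    return candidates[0] if candidates else None

pods = [{'pod_name': 'Pod1', 'az_name': 'az1'},
        {'pod_name': 'Pod2', 'az_name': 'az2'}]
```

With this behavior, a request without az lands on a default pod and the tempest volume list/get tests can proceed.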
3. What is the purpose of this patch set:
To make the integration test the gate test of each patch set after
the Tricircle integration test Jenkins job is configured in
https://review.openstack.org/#/c/329740/
Change-Id: I8ee1cd5741605fbb105e2f24258459516aa7c5c0
1. What is the problem
In test_plugin module, we define a stub class FakePlugin to
bypass real database access for unit tests. In some previous
test cases, a FakePlugin object is directly created and then
we test its functions, but after some changes in Neutron, we
also need to register the class path of FakePlugin in a
configuration option called core_plugin, otherwise some of the
test cases will fail with a "core_plugin not configured" error.
2. What is the solution to the problem
Register core_plugin for each test case.
3. What the features need to be implemented to the Tricircle
to realize the solution
Register core_plugin in the setUp function so that every test case
can use this option.
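The setUp pattern looks roughly like this; CONF below is a plain-dict stand-in for oslo.config's cfg.CONF, and the plugin path is only illustrative.

```python
import unittest

# Stand-in for oslo.config's CONF object; in the real tests the option
# would be set on cfg.CONF instead of a plain dict.
CONF = {}

FAKE_PLUGIN_PATH = 'tricircle.tests.unit.network.test_plugin.FakePlugin'

class PluginTestBase(unittest.TestCase):
    def setUp(self):
        super(PluginTestBase, self).setUp()
        # Register the stub plugin path for every test case so Neutron
        # code that reads core_plugin does not fail with
        # "core_plugin not configured".
        CONF['core_plugin'] = FAKE_PLUGIN_PATH

class TestSomething(PluginTestBase):
    def test_core_plugin_registered(self):
        self.assertEqual(FAKE_PLUGIN_PATH, CONF['core_plugin'])
```

Putting the registration in a shared setUp keeps each test case from repeating it.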
Change-Id: Ibc2d81585ec8e67c3f56f864c4cbc5d2f0a0efa9
1. What is the problem?
In production clouds, each availability zone (AZ) is built by modularized
OpenStack instances. Each OpenStack instance acts as a pod. One AZ
consists of multiple pods. Among the pods within an AZ, they are
classified into different categories for different proposes, for instance,
general propose, CAD modeling and so on. Each tenant is bound to one pod,
where it creates various types of resources. However such a binding
relationship should be dynamic instead of static. For instance when
some resources in the pod are exhausted, tenant needs to be bound to a
new pod in same AZ.
2. What is the solution to the problem?
To deal with the above problem, the Tricircle dynamically binds tenants
to a pod which has available resources. We call this feature dynamic
pod binding.
3. What features need to be implemented in the Tricircle to realize
the solution?
To realize dynamic pod binding, the following features need to be
implemented in the Tricircle.
1) Collect the usage of each pod daily to evaluate whether the threshold
is reached or not.
2) Filter and weigh all the available pods for cloud tenants, to bind
a tenant to a proper pod.
3) Manage and maintain all the active and historical binding
relationships.
This spec explains in detail how the Tricircle binds pods to tenants
dynamically.
Blueprint: https://blueprints.launchpad.net/tricircle/+spec/dynamic-pod-binding
Change-Id: Ib429a59d3d216e578f9c451d84c1fe9a333cf050
1. What is the purpose of this patch set?
The Tricircle project doesn't provide integration test yet.
To achieve integration test, the Tricircle needs to provide
hook scripts for the Jenkins job to execute the test. The Tricircle
integration test Jenkins job will be configured in
https://github.com/openstack-infra/project-config/tree/master/jenkins/jobs
A Jenkins job patch will be submitted to project-config once
this patch is merged. Without the hook scripts provided in this
patch, the Jenkins job configured in project-config is not able
to be executed successfully.
After the automatic Jenkins job is established, all new patches
submitted to the Tricircle should pass the integration test, i.e. the
test is also the gate test for a new patch.
For example, for the cinder volume type patch, currently no integration
test is provided, so we don't know whether the patch works or not.
As new features are added to the Tricircle, integration test cases from
the Tempest suites for OpenStack Nova/Cinder/Neutron can be added to
post_test_hook.sh, to verify whether the features added to the Tricircle
work correctly.
2. What is the benefit of this patch set for the Tricircle project?
Reuse OpenStack Nova/Cinder/Neutron tempest test cases, and provide
integration gate test for new patches.
3. What platform did you test it on?
After the Jenkins job patch is merged, the test will be executed in
the OpenStack CI pipeline as one of the gate tests for a new
patch. The CI pipeline will be triggered by the gerrit review process,
and the test will be executed in the devstack VM (booted by the CI
pipeline).
http://docs.openstack.org/infra/system-config/jenkins.html
Change-Id: I5333681fdcaefb01498d9d8c6751d5d174bf57c2
Signed-off-by: Chaoyi Huang <joehuang@huawei.com>
1. What is the problem:
Neutron has just added one more parameter to the _generate_ip function,
but this parameter is missing from the stub function fake_generate_ip
in the Tricircle plugin tests, which causes the Tricircle unit tests
to fail.
2. What needs to be fixed:
In tricircle/tests/unit/network/test_plugin.py
update "def fake_generate_ip(context, subnets):" to
"def fake_generate_ip(context, subnets, prefer_next=False):"
3. What is the purpose of this patch set:
To make the Tricircle adapt to the change in Neutron so that the unit
tests of the Tricircle can be executed successfully.
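The updated stub would look like this; the body below is purely illustrative (a real stub just needs to accept, and may ignore, the new prefer_next argument so the signature matches Neutron's _generate_ip).

```python
import ipaddress

def fake_generate_ip(context, subnets, prefer_next=False):
    # Accept (and ignore) prefer_next so the stub matches Neutron's
    # new _generate_ip signature; the return value here is illustrative,
    # simply handing out the first host address of the first subnet.
    subnet = subnets[0]
    network = ipaddress.ip_network(subnet['cidr'])
    first_host = next(network.hosts())
    return {'ip_address': str(first_host), 'subnet_id': subnet['id']}

result = fake_generate_ip(None, [{'id': 'subnet-1', 'cidr': '10.0.1.0/24'}])
```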
Change-Id: Ie715e3956f3ec7d3e3e4aedcd8b5e1c12c4df7ea
Signed-off-by: Chaoyi Huang <joehuang@huawei.com>
1. What is the purpose of this patch set?
The Tricircle project doesn't provide integration test yet.
To achieve integration test, the Tricircle needs to provide
Tempest plugin, so that the integration test job could
be configured in CI pipeline.
Tempest is a set of integration tests to be run against a live
OpenStack cluster:
http://docs.openstack.org/developer/tempest/overview.html
Tempest has an external test plugin interface which enables a project
to integrate an external test suite as part of a tempest run.
http://docs.openstack.org/developer/tempest/plugin.html
This patch set creates the Tricircle tempest plugin with a sample
test case. The main purpose of the patch set is to enable tempest
to discover the Tricircle tempest plugin and its test case.
2. What is the benefit of this patch set for the Tricircle project?
It makes the Tricircle tempest plugin discoverable by Tempest,
and supports the later integration test job.
3. What platform did you test it on?
It will work on any Linux distribution where python venv or docker
containers are available.
The procedure to verify the discovery process is as follows:
Install the Tricircle after the patch is merged (the Tricircle tempest
plugin and the sample tempest test case will be installed together
since they are in the same repository):
1. git clone https://github.com/openstack/tricircle.git
2. sudo pip install -e tricircle/
Install the Tempest in another folder in order to avoid the python
import error:
1. git clone https://github.com/openstack/tempest.git
2. sudo pip install -e tempest/
Then run testr in the tempest folder to check whether the Tricircle
test case has been discovered:
1. cd tempest/
2. testr list-tests | grep Tricircle
The Tricircle devstack plugin is also updated to install the Tricircle
package, in order to simplify the tempest discovery process.
Change-Id: I977c23f5e55e3ee062190fa9d5e6472e5d5acb33
Signed-off-by: Chaoyi Huang <joehuang@huawei.com>
According to the current design of cross pod L3 networking,
user needs to specify a pod when creating an external network
and the external network will be located in this pod. For VMs
located in other pods to access the external network, we need
a bridge network to connect these pods.
We assign the bridge network a CIDR allocated from a CIDR
pool. In the pod hosting the VM, say Pod_vm, a bridge external
network is created with the CIDR, so we can allocate a floating
ip from the CIDR and bind it to the VM port. In the pod hosting
the real external network (say "real" here to distinguish it from
the bridge external network), say Pod_extnet, a bridge internal
network is created with the CIDR, so we can create a port with
the same ip as floating ip in Pod_vm, and bind it to the real
floating ip in Pod_extnet. With the bridge network, via two-step
DNAT, the VM can be accessed from the real external network.
For example, let's say we have an internal network with CIDR
10.0.1.0/24 and an external network with CIDR 162.3.124.0/24,
the CIDR of bridge network is 100.0.1.0/24, when binding a VM
ip 10.0.1.4 to a floating ip 162.3.124.5, the VM ip is first
bound to 100.0.1.4, which is allocated from 100.0.1.0/24, then
100.0.1.4 is bound to 162.3.124.5.
In the case that the VM and external network are in the same pod,
a bridge network is not needed.
So the plugin needs to distinguish these two cases when handling
floating ip disassociation. If the VM and external network are in
the same pod, the plugin only disassociates the binding; if they
are in different pods, the plugin also needs to release the ip
allocated from the bridge network.
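The two-step mapping in the example above can be sketched with the stdlib ipaddress module; allocate_bridge_ip is a hypothetical helper, and keeping the host offset of the internal address is only for readability of the example (the real allocation need not do this).

```python
import ipaddress

# Illustrative sketch of the two-step binding described above.

def allocate_bridge_ip(vm_ip, vm_cidr, bridge_cidr):
    # Allocate an address from the bridge CIDR; for readability this
    # sketch reuses the VM address's host offset within its subnet.
    vm_net = ipaddress.ip_network(vm_cidr)
    bridge_net = ipaddress.ip_network(bridge_cidr)
    offset = int(ipaddress.ip_address(vm_ip)) - int(vm_net.network_address)
    return str(bridge_net.network_address + offset)

# Step 1 (in Pod_vm): VM ip 10.0.1.4 is bound to a bridge network address.
bridge_ip = allocate_bridge_ip('10.0.1.4', '10.0.1.0/24', '100.0.1.0/24')
# Step 2 (in Pod_extnet): the bridge address is bound to the real
# floating ip 162.3.124.5, completing the two-step DNAT chain.
dnat_chain = {'10.0.1.4': bridge_ip, bridge_ip: '162.3.124.5'}
```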
Change-Id: Ibae353ec81aceda53016b6ea8aba1872d6d514be
In review I94f6f5f853078feeccaea0c50e5690101f95e318 tricircle is asking
to be added to projects.txt. Doing so will mean that when
global-requirements.txt is altered in a way that directly impacts
tricircle an automated (by proposal-bot) review similar to this one will
be generated.
This review manually performs this sync so that the first bot generated
change will be smaller and easier to review.
This introduces several significant changes (eventlet, ovo, Babel) so
please test carefully.
Change-Id: Ib2b7ed2671a5ad8d2a6262e07e0bbc5f921abf33
The permissions of the following files are set to "-rwxrwxr-x".
It's not necessary for python scripts to be executable.
xjob/
__init__.py
xservice.py
xmanager.py
common/
baserpc.py
topics.py
serializer.py
rpc.py
xrpcapi.py
api/
root.py
__init__.py
nova_apigw/
__init__.py
root.py
cinder_apigw/
root.py
__init__.py
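The fix amounts to clearing the executable bit on these sources; a sketch with an illustrative temporary path (the real change just adjusts the file modes in the repository):

```shell
# Create a sample .py file with the problematic -rwxrwxr-x mode, then
# clear the executable bits, leaving -rw-rw-r--. Paths are illustrative.
tmpdir=$(mktemp -d)
touch "$tmpdir/root.py"
chmod 775 "$tmpdir/root.py"            # the problematic mode
find "$tmpdir" -name '*.py' -exec chmod a-x {} +
```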
Change-Id: Id397bf29574493bf1977ff40d0647d050fac2f5a
Signed-off-by: Chaoyi Huang <joehuang@huawei.com>
Closes-Bug: 1581271
Now the oslo i18n _() returns a Message object which doesn't support
addition, so put the whole text inside _() to avoid errors.
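The pattern looks like this; the underscore function below is a plain stand-in for oslo i18n's _, since the point is only where the text boundaries sit, and the message text is illustrative.

```python
# Stand-in for oslo.i18n's _() -- the real one returns a Message object
# that does not support '+', so the full text must live inside _().
_ = lambda msg: msg

pod_name = 'Pod1'

# Broken with oslo i18n: _('Pod ') + pod_name would raise because
# Message + str is unsupported. Instead, keep one full format string
# inside _() and substitute values with %:
error_msg = _('Pod %s not found') % pod_name
```

This also keeps the whole sentence in one translatable unit, which is what translators need anyway.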
Change-Id: I32ec2359200224e6e3e550bcca1e3fdf44b3d3c3
Use PortOpt and IpOpt as other OpenStack projects do,
since PortOpt and IpOpt from oslo perform checks to
ensure those configuration options are in the correct form.
Change-Id: I914c930ce8427f6888ab120aeac6942a89f50824
Since the neutron code base was changed because of a bug fix [1], the
network unit tests for the Tricircle have to be updated because of
dependencies.
For more detail:
neutron/db/ipam_backend_mixin.py
[1]https://review.openstack.org/#/c/288774/
Change-Id: I1142bdd75c1dd5d240d1d256d9687c2fe54612c8
This patch implements catching and logging of all abnormal exceptions.
Related tests are also written to check these exceptions.
Applied https://review.openstack.org/#/c/308940/ to solve the network
test failure problem.
Change-Id: I18c39644924cd188d651037550151375f971d9d1
Closes-Bug: 1536929
When creating an external network in the top pod, an az hint is passed
to specify in which pod the bottom external network is located. So the
plugin can get the bottom router ID with the top router ID and bottom
pod ID from the resource routing table.
The plugin first updates the router in the top pod to remove the
gateway, then sends a "remove_gateway" request to the target bottom pod
to update the bottom router.
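The lookup-then-dispatch flow can be sketched as follows; the routing table contents, IDs and helper names are hypothetical, and the top-pod gateway update is omitted.

```python
# Hypothetical sketch: resolve the bottom router via the resource
# routing table, then ask the bottom pod to remove the gateway.

routing_table = [
    # (top_id, bottom_id, pod_id, resource_type) -- illustrative entries
    ('top-router-1', 'bottom-router-9', 'pod-az1', 'router'),
]

def get_bottom_router_id(top_router_id, pod_id):
    for top_id, bottom_id, entry_pod, res_type in routing_table:
        if (top_id, entry_pod, res_type) == (top_router_id, pod_id, 'router'):
            return bottom_id
    return None

sent_requests = []

def remove_gateway(top_router_id, pod_id):
    # The top router's gateway is cleared first (omitted here), then the
    # "remove_gateway" request goes to the bottom pod's router.
    bottom_id = get_bottom_router_id(top_router_id, pod_id)
    sent_requests.append((pod_id, 'remove_gateway', bottom_id))

remove_gateway('top-router-1', 'pod-az1')
```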
Change-Id: I69e411188e758016ea789a91298ccd243bdc31cd
The controller retrieves the routing entry according to the volume
ID and sends the update request to the correct bottom pod. If the
controller gets a 404 error when updating the volume in the
bottom pod, meaning that the bottom volume no longer exists, the
controller will remove the routing entry.
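The cleanup-on-404 behavior can be sketched like this; the client call, exception class and routing store are simple stand-ins, not the actual Tricircle code.

```python
# Hypothetical sketch: on a 404 from the bottom pod, drop the stale
# routing entry so later requests don't route to a vanished volume.

class NotFound(Exception):
    status = 404

routing = {'volume-1': 'pod-1'}  # volume id -> bottom pod, illustrative

def bottom_update_volume(pod_id, volume_id, body):
    # Stand-in for the bottom cinder call; here it always reports that
    # the bottom volume no longer exists.
    raise NotFound()

def update_volume(volume_id, body):
    pod_id = routing.get(volume_id)
    try:
        bottom_update_volume(pod_id, volume_id, body)
    except NotFound:
        # The bottom volume is gone, so the routing entry is stale:
        # remove it instead of leaving a dangling mapping.
        del routing[volume_id]
        return None

update_volume('volume-1', {'status': 'available'})
```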
Change-Id: I119b0ec31c869179c7c921245c9e06963aa6f866
Currently volume list supports most of the filters since it
directly passes filter parameters to bottom pods. But the
availability zone filter cannot be implemented like that
because we don't require the top pod and bottom pods to have
availability zones with the same name. Since each bottom pod
belongs to an availability zone defined in the top pod, instead
of passing the availability zone filter to bottom pods, we
directly filter bottom pods by the availability zone filter,
then send the list request to these filtered bottom pods.
With the change in this patch, the tempest test test_volumes_list
can be passed.
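The idea can be sketched as follows; the pod records, field names and pods_for_list helper are hypothetical, only the split between the top-level availability_zone filter and the pass-through filters reflects the description above.

```python
# Hypothetical sketch: consume the availability_zone filter at the top
# level by selecting matching bottom pods, and forward only the
# remaining filters to those pods.

pods = [{'pod_name': 'Pod1', 'az_name': 'az1'},
        {'pod_name': 'Pod2', 'az_name': 'az2'}]

def pods_for_list(filters):
    # Remove the az filter so it is NOT forwarded to bottom pods,
    # whose az names may differ from the top pod's.
    az = filters.pop('availability_zone', None)
    if az is None:
        return pods, filters       # no az filter: query every pod
    return [p for p in pods if p['az_name'] == az], filters

targets, remaining = pods_for_list({'availability_zone': 'az2',
                                    'status': 'available'})
```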
Change-Id: I8e448a1262ca370ff3bfa9948fee636240a7fb78
Change the following to pass tempest list_server_filters test:
(1) Change response body of Nova API gateway image controller
(2) Add network controller
(3) Add server list filters support
(4) Change response body of server list
Change-Id: I96013e2a3c871640b530611e36fa0a219c5e0516
A bug in DevStack[1] which affects multi-region deployment has been
fixed. Update the README to adapt to this DevStack change.
[1] https://bugs.launchpad.net/devstack/+bug/1540802
Change-Id: I19876c359910741e5fe5babdd209b06f126b0d4f
Change the cinder delete response to use pecan.response instead of
"{}" to avoid a ValueError in the cinder tempest tests.
Change-Id: I5b37e7f2e23fdc0822d38e56b840e64aa87d066f
Closes-Bug: #1570306