1. What is the problem?
In nova-apigw, some server operations on volume attachments
are not implemented:
* List volume attachments for an instance;
* Attach a volume to an instance;
* Show a detail of a volume attachment;
* Update a volume attachment;
* Detach a volume from an instance.
2. What is the solution to the problem?
This patch adds three features:
* We can get the volume identified by the attachment ID.
* We can get a list of all the attached volumes.
* Detach a volume identified by the attachment ID from
the given server ID.
3. What the features need to be implemented to the Tricircle
to realize the solution?
The above three features.
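The three features above map onto the Nova os-volume_attachments endpoints. A minimal sketch of the resource paths involved; the helper function is illustrative, not the actual nova-apigw code, but the URL shapes follow the Nova API:

```python
# Hypothetical helper showing the endpoints behind the three features:
# list attachments, show one attachment, and detach (DELETE) one.
def attachment_url(server_id, attachment_id=None):
    """Build the resource path for a server's volume attachments."""
    base = '/servers/%s/os-volume_attachments' % server_id
    if attachment_id is None:
        return base                        # GET: list all attachments
    return '%s/%s' % (base, attachment_id)  # GET: show, DELETE: detach

print(attachment_url('srv-1'))
print(attachment_url('srv-1', 'att-2'))
```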
Change-Id: I76ad72ca5d54fca7c2c463b5c41f2a4151f11eb8
Following OpenStack Style Guidelines[1]: http://docs.openstack.org/developer/hacking/#unit-tests-and-assertraises
[H203] Unit test assertions tend to give better messages for more
specific assertions. As a result, assertIsNone(...) is preferred
over assertEqual(None, ...) and assertIs(..,None)
Change-Id: I367cc785503dc4c9baa7f532610222f95d52eb50
1. What is the problem?
In tricircle, when a volume is stuck in the 'attaching' or 'detaching'
state, a pod cannot detach it via the nova-apigw 'detach volume' API,
so the volume cannot be reused by another pod. In addition, it may
cause the Cinder database and the backend storage to fall out of sync.
There is a use case in nova[1]. It may also occur in a tricircle pod.
We don't need to worry about safety, because the Cinder api
'os-force-detach' is intended for safe cleanup. More use cases and
information about 'force detach' are in this document[2].
2. What is the solution to the problem?
We use Cinder api "os-force-detach" to detach it.
3. What the features need to be implemented to the Tricircle
to realize the solution?
Implement the force detach server action.
[1] https://blueprints.launchpad.net/nova/+spec/add-force-detach-to-nova
[2] https://github.com/openstack/cinder-specs/blob/master/specs/liberty/implement-force-detach-for-safe-cleanup.rst
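A sketch of the "os-force-detach" volume action body used by this solution; the helper name is hypothetical, but the body shape follows the Cinder volume-actions API:

```python
# Illustrative builder for the JSON body POSTed to
# /volumes/{volume_id}/action to force-detach a stuck volume.
def force_detach_body(attachment_id=None):
    """Build the 'os-force-detach' action body; attachment_id is
    optional in the Cinder API."""
    body = {'os-force-detach': {}}
    if attachment_id is not None:
        body['os-force-detach']['attachment_id'] = attachment_id
    return body

print(force_detach_body())
print(force_detach_body('att-1'))
```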
Change-Id: If51ca17fc395dbb466b66514dec6c3fb66cd7519
1. What is the problem?
In the Tricircle, each tenant is bound to multiple pods,
where it creates various types of resources. However, such a binding
relationship should be dynamic instead of static. For instance, when
some resources in a pod are exhausted, the tenant needs to be bound
to a new pod in the same AZ.
2. What is the solution to the problem?
To deal with the above problem, the Tricircle dynamically binds tenants
to pods which have available resources. We call this feature dynamic pod
binding, which is explained in https://review.openstack.org/#/c/306224/
in detail. In this patch, we only try to bind a tenant to a pod
dynamically when she tries to create a VM.
3. What the features need to be implemented to the Tricircle to realize
the solution?
When a tenant creates a VM, the Tricircle first selects all available
pods for her. Then, by filtering and weighing the pods, the Tricircle
selects the most suitable pod for the tenant. Next, the Tricircle
queries the database for the current binding relationship of the
tenant. If the tenant is not bound to any pod, we create a new binding
relationship, which binds the tenant to the selected pod. If the tenant
is already bound to a pod, and that pod is not the one selected by the
Tricircle, we update the current binding relationship, which binds the
tenant to the new pod. If the tenant is already bound to a pod, and
that pod is exactly the one selected by the Tricircle, the Tricircle
does nothing.
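The three-way decision above can be sketched as a small pure function; the function name and the returned action strings are illustrative, not the real Tricircle API:

```python
# Hypothetical sketch of the binding decision: given the tenant's
# current pod (None if unbound) and the pod chosen by filtering and
# weighing, return which database operation to perform.
def decide_binding(current_pod, selected_pod):
    if current_pod is None:
        return 'create'   # no binding yet: create a new one
    if current_pod != selected_pod:
        return 'update'   # bound elsewhere: rebind to the new pod
    return 'noop'         # already bound to the selected pod

print(decide_binding(None, 'pod1'))    # create
print(decide_binding('pod1', 'pod2'))  # update
print(decide_binding('pod1', 'pod1'))  # noop
```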
Change-Id: I3972e6799f78da6ec35be556487f79b1234731b8
1. What is the problem
To connect bottom routers in different pods, we need to create
bridge networks and subnets. These networks and subnets are not
deleted after bottom routers are deleted.
2. What is the solution to the problem
Clean these bridge networks and subnets during top router deletion.
3. What the features need to be implemented to the Tricircle
to realize the solution
Bridge networks and subnets can now be cleaned up.
Change-Id: I1f2feb7cba3dda14350b3e25a3c33563379eb580
1. What is the problem
The current Nova_APIGW does not support the following server actions:
reboot: Reboots a server.
resize: Resizes a server.
confirmResize: Confirms a pending resize action for a server.
revertResize: Cancels and reverts a pending resize action for
a server.
os-resetState: Resets the state of a server.
2. What is the solution to the problem
Implement the above server actions
3. What the features need to be implemented to the Tricircle
to realize the solution
Add the above server actions
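All of these actions are POSTed to the same /servers/{id}/action endpoint with a one-key JSON body. A minimal sketch; the helper name is hypothetical, but the body shapes follow the Nova API:

```python
# Illustrative builder for Nova server action bodies: actions with
# parameters carry a dict, parameterless actions use None as the value.
def build_action_body(action, **params):
    """Build the JSON body POSTed to /servers/{id}/action."""
    return {action: params if params else None}

print(build_action_body('reboot', type='SOFT'))  # {'reboot': {'type': 'SOFT'}}
print(build_action_body('confirmResize'))        # {'confirmResize': None}
```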
Change-Id: Ia3d0de1a42320cb1ee55b25210b227cb34a829a9
1. What is the problem
Before creating a bottom subnet, we need to create some ports to
allocate IP addresses for the bottom dhcp port and bottom gateway port.
These pre-created ports are not deleted after bottom resources
are deleted.
2. What is the solution to the problem
Clean these pre-created ports during top subnet deletion.
3. What the features need to be implemented to the Tricircle
to realize the solution
Pre-created ports can now be cleaned up.
Change-Id: I73c0ef87e4104f1db9926a5972c5f36be94d724a
Tricircle already uses PBR:
setuptools.setup(
setup_requires=['pbr>=1.8'],
pbr=True)
This patch removes `MANIFEST.in` file as pbr generates a
sensible manifest from git files and some standard files
and it removes the need for an explicit `MANIFEST.in` file.
Change-Id: I722651c1c3082ec59c0408a12db7b14c404d7ce7
Closes-Bug: #1608980
1. What is the problem?
After updating to version 1.2, pecan, the WSGI framework we use, has
a behaviour that if the handler function returns None, the setting of
the response status code will not take effect. In the deletion handler
functions of the pod and pod-binding controllers, we set the response
status code but return None, thus the status code is not changed and
our tests fail.
2. What is the solution to the problem?
Return an empty dict in the deletion handle functions.
3. What the features need to be implemented to the Tricircle to realize
the solution?
No new features.
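The behaviour can be illustrated with a self-contained toy dispatcher (this is a simulation of the effect described above, not pecan itself): the status a handler sets is only honoured when it returns a non-None body, so returning an empty dict keeps the deletion status effective.

```python
def dispatch(handler):
    """Toy dispatcher mimicking the pecan 1.2 behaviour described in
    the commit: a None return discards the handler's status override."""
    requested = {}
    body = handler(requested)
    status = 200                      # framework default
    if body is not None:              # None return: override ignored
        status = requested.get('status', 200)
    return status, body

def delete_bad(resp):
    resp['status'] = 204
    return None      # status setting is lost

def delete_good(resp):
    resp['status'] = 204
    return {}        # empty dict keeps the status setting

print(dispatch(delete_bad))   # (200, None)
print(dispatch(delete_good))  # (204, {})
```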
Change-Id: I4bc44d2bc774fbe0cd9f820361d35077c83f3973
1. What is the problem
The current Nova_APIGW does not support following server actions:
os-start: Start server
os-stop: Stop server
lock: Locks a server
unlock: Unlocks a locked server
pause: Pause server
unpause: Unpause server
resume: Resumes a suspended server and changes its status to ACTIVE
suspend: Suspend a server
shelve: Shelves a server
unshelve: Unshelves a server
shelveOffload: Shelf-offloads, or removes, a shelved server
migrate: Migrates a server to a host. The scheduler chooses the host.
forceDelete: Force-deletes a server before deferred cleanup
trigger_crash_dump: Trigger a crash dump in a server
2. What is the solution to the problem
Implement the above server actions
3. What the features need to be implemented to the Tricircle
to realize the solution
Add the above server actions
Change-Id: I6a65938f1797380a896efe8f6aaed6a1903c82ca
1. What is the problem
User cannot correctly delete floating ip via the Tricircle plugin.
2. What is the solution to the problem
Overwrite delete_floatingip method in the Tricircle plugin. We
first update the floating ip to disassociate it with fixed ip,
then invoke delete_floatingip method in the base class to delete
the database record.
3. What the features need to be implemented to the Tricircle
to realize the solution
Deletion of floating ip is now supported.
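The override described above can be sketched as follows; the base class and its database are stubbed here, so this is a stand-in for the real Neutron plugin code, not the actual implementation:

```python
# Stub base plugin standing in for Neutron's floating ip mixin.
class BasePlugin(object):
    def __init__(self):
        self.db = {'fip-1': {'port_id': 'port-9'}}

    def update_floatingip(self, fip_id, body):
        self.db[fip_id].update(body['floatingip'])

    def delete_floatingip(self, fip_id):
        del self.db[fip_id]

class TricirclePlugin(BasePlugin):
    def delete_floatingip(self, fip_id):
        # First disassociate the floating ip from its fixed ip ...
        self.update_floatingip(fip_id, {'floatingip': {'port_id': None}})
        # ... then let the base class remove the database record.
        super(TricirclePlugin, self).delete_floatingip(fip_id)

plugin = TricirclePlugin()
plugin.delete_floatingip('fip-1')
print(plugin.db)  # {}
```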
Change-Id: Ia8bf6e7499321c1be085533e8695eea6ac8ef26d
This directory does not exist in debtcollector, so
we should drop it to improve search efficiency.
Change-Id: Ia72a3cdcf3f5c02e491a535d212b44ee1eb3249c
1. What is the problem
After enabling neutron in tempest, DevStack by default creates
initial network, including external network. However, currently
creating external network requires specifying AZ, which is not
supported yet in neutron client, so DevStack fails to create
external network, and thus DevStack fails to run.
2. What is the solution to the problem
Disable initial network creation in DevStack configuration
3. What the features need to be implemented to the Tricircle
to realize the solution
No new features
Change-Id: I620a77157e7a73e4e4c8b18820eb4a70ccf0d497
Python 3.3 support will be dropped by the
Infra team from Mitaka; CI will no longer test it,
so projects should drop it as well.
Change-Id: I3bf594f925341a74ac06aa35df3430c0f77600ec
1. What is the problem
The community requires the above services to be enabled in the
devstack plugin.sh to keep consistency with other projects.
Andreas Jaeger's opinion:
Enabling the plugin should be enough; if those are needed,
they should be part of the plugin. Please refer to:
https://review.openstack.org/#/c/367220/
2. What is the solution to the problem
Enable the above services in devstack plugin.sh
3. What the features need to be implemented to the Tricircle
to realize the solution
Enable the above services in devstack plugin.sh
Change-Id: I96a60ff59776d429917f365aa50715cf6c38d39b
1. What is the problem:
Currently the Admin-API manages pod and pod-binding; Admin-API
access is hard coded, and only the admin role is allowed. OpenStack
usually uses policy.json based authorization to control
API requests. The policy feature is missing in the Tricircle.
2. What needs to be fixed:
Remove the hard coded Admin-API request authorization and use policy
instead. For Nova API-GW and Cinder API-GW, API access control should
be done at the bottom OpenStack as far as possible if the API request
will be forwarded to the bottom OpenStack directly for further
processing. Only for those APIs which interact solely with the
database, for example flavor and volume type, does processing
terminate at the Tricircle layer, so their policy control should be
done in Nova API-GW or Cinder API-GW. No work needs to be done in the
Tricircle Neutron plugin, for the Neutron API server is there and will
be responsible for policy control.
3. What is the purpose of this patch set:
In this patch, default policy options and rules, and policy control
in the Admin-API, are added. Using the default options and values to
generate the policy.json will be implemented in the next patch. No
policy.json is mandatory after this patch is merged;
if no policy.json is configured or provided, the policy control
will use the default rules automatically.
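The fallback behaviour can be illustrated with a toy enforcer (this is not oslo.policy; the rule name and the rule syntax shown are hypothetical placeholders): when no policy.json is loaded, enforcement falls back to the in-code defaults.

```python
# Hypothetical in-code default: only the admin role may call the
# Admin-API, mirroring the previous hard-coded check.
DEFAULT_RULES = {'admin_api': 'role:admin'}

def enforce(rule, creds, policy_file_rules=None):
    """Check `rule` against rules loaded from policy.json, falling
    back to the in-code defaults when no file was provided."""
    rules = policy_file_rules if policy_file_rules else DEFAULT_RULES
    check = rules.get(rule, '')
    if check.startswith('role:'):
        return check.split(':', 1)[1] in creds.get('roles', [])
    return False

print(enforce('admin_api', {'roles': ['admin']}))   # True
print(enforce('admin_api', {'roles': ['member']}))  # False
```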
Change-Id: Ifb6137b20f56e9f9a70d339fd357ee480fa3ce2e
Signed-off-by: joehuang <joehuang@huawei.com>
1. What is the problem
Neutron now reads the project_id field from the Context object for
privilege validation, but in our unit tests, the FakeNeutronContext
object which simulates the Context object doesn't have project_id,
thus the unit tests fail to pass.
Also, Neutron now calls the expunge method on the Session object,
which the simulation object FakeSession doesn't have.
2. What is the solution to the problem
Add a project_id field to the FakeNeutronContext object, and make sure
the project_id fields set in the FakeNeutronContext object and the
resource object are the same.
Use __getattr__ to return a dummy method if the method hasn't been
defined in the FakeSession object.
3. What the features need to be implemented to the Tricircle
to realize the solution
No new feature introduced.
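The __getattr__ trick for FakeSession can be sketched as follows (a simplified stand-in, not the real test fixture): any method not explicitly defined, such as expunge, resolves to a do-nothing callable, so new Session calls added by Neutron don't break the fake.

```python
class FakeSession(object):
    def query(self, model):
        return []  # an explicitly simulated method

    def __getattr__(self, name):
        # Invoked only for attributes not found normally; return a
        # dummy method that accepts anything and does nothing.
        def dummy(*args, **kwargs):
            pass
        return dummy

session = FakeSession()
session.expunge(object())        # no AttributeError, silently ignored
print(session.query('network'))  # []
```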
Change-Id: I58fb446dc7837065a60aafd5c6c6262840b60e21
1. What is the problem?
There are some problems with the original link:
* Sometimes this link cannot be opened.
* Some operation details are unclear and easy to misread.
2. What is the solution to the problem?
We have written a new document in the Wiki[1].
3. What the features need to be implemented to the Tricircle to
realize the solution?
None.
[1] https://wiki.openstack.org/wiki/Play_tricircle_with_virtualbox
Change-Id: I7253fe0cae82fe816a0fd4d175de27d12535399f
1. What is the problem?
The Tricircle Admin API documentation has been implemented, so the
relevant link to it in the document README.rst should be updated as
well.
2. What is the solution to the problem?
Remove the original null link at the bottom of the document README.rst and
change the part "Documentation" to "Tricircle Admin API documentation", then
provide the relative link after it.
3. What the features need to be implemented to the Tricircle to realize the solution?
None.
Change-Id: I8dae02ddef4a3019abe91ec1ca283280b64c7c61
1. What is the problem
The pod and pod binding functions have been implemented, but the
documentation is missing.
2. What is the solution for the problem
Pod and pod binding each have four main public functions. For pod,
they are: 1) get the pod list; 2) show a single pod in detail;
3) create a pod; 4) delete a pod. Pod binding has four similar
public functions. The file /tricircle/doc/source/api_v1.rst
includes the request and response explanation for each function.
3. What the features need to be implemented to the Tricircle to
realize the solution
None.
Change-Id: Ic530326bb00556b5b9e8d4e4699eaa35ebf01e61
1. What is the problem
The current Nova_APIGW does not support the microversion function; the
service controller uses a fixed API version number to initialize
novaclient.
2. What is the solution to the problem
When the service controller receives an API request, it will get the
microversion number from the request headers, and use this to
initialize novaclient.
For how to get the microversion number, please refer to:
https://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/api-microversions.html
The supported microversion range is 2.1 to the latest version
3. What the features need to be implemented to the Tricircle
to realize the solution
Nova_APIGW microversion support added
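Parsing the requested microversion can be sketched as below; the header name follows the Nova API microversions spec, but the helper function and the example "latest" value are illustrative, not the actual Tricircle code:

```python
# Hedged sketch: extract the (major, minor) microversion from request
# headers, defaulting to 2.1 and resolving the literal 'latest'.
def get_microversion(headers, latest=(2, 38)):  # latest is a placeholder
    raw = headers.get('X-OpenStack-Nova-API-Version', '2.1')
    if raw == 'latest':
        return latest
    major, minor = (int(x) for x in raw.split('.'))
    return (major, minor)

print(get_microversion({}))                                          # (2, 1)
print(get_microversion({'X-OpenStack-Nova-API-Version': '2.12'}))    # (2, 12)
print(get_microversion({'X-OpenStack-Nova-API-Version': 'latest'}))  # (2, 38)
```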
Change-Id: Idf44c91100e5cb8ad0355164c9be991aa54a652b
1. What is the problem
The shared vlan type driver has been merged, so we can run two VMs in
the same network but across two pods. However, if we attach a network
to a router, the tricircle plugin still checks whether the network is
bound to one AZ, so one network cannot cross different pods if we
are going to attach it to a router.
2. What is the solution to the problem
The reason we require a network to be bound to an AZ is that when the
network is attached to a router, we know where to create the
bottom network resources. To support l3 networking in a shared vlan
network, we need to remove this restriction.
In the previous patches[1, 2], we have already moved bottom router
setup to an asynchronous job, so we just remove the AZ restriction
and make the tricircle plugin and the nova_apigw use the job.
Floating ip association and disassociation are also moved to the
bottom router setup job.
3. What the features need to be implemented to the Tricircle
to realize the solution
Now network can be attached to a router without specifying AZ, so
l3 networking can work with across-pod network.
[1] https://review.openstack.org/#/c/343568/
[2] https://review.openstack.org/#/c/345863/
Change-Id: I9aaf908a5de55575d63533f1574a0a6edb3c66b8
1. What is the problem
To simulate Neutron database operations in unit tests, FakeQuery and
FakeSession are defined to replace the original Query and Session
objects. After the service subnets feature was merged into Neutron[1],
during ip allocation Neutron calls Query.add_entity to query subnets
by service type. However, the FakeQuery object doesn't have an
add_entity method, thus the unit tests fail to pass.
2. What is the solution to the problem
Mock _allocate_ips_for_port to bypass the service subnet query, which
is not necessary in the unit tests.
3. What the features need to be implemented to the Tricircle
to realize the solution
No new feature introduced.
[1] https://review.openstack.org/#/c/350613/
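The bypass can be illustrated with unittest.mock; the class and method names here mirror the idea but are simplified stand-ins, not the real Neutron internals:

```python
from unittest import mock

class IpamBackend(object):
    def _allocate_ips_for_port(self, port):
        # In real Neutron this path hits Query.add_entity, which the
        # FakeQuery test double doesn't implement.
        raise NotImplementedError('needs a real database session')

def create_port(backend, port):
    backend._allocate_ips_for_port(port)
    return port

backend = IpamBackend()
# Patching the method bypasses the service subnet query entirely,
# which is not needed in the unit tests.
with mock.patch.object(IpamBackend, '_allocate_ips_for_port') as m:
    result = create_port(backend, {'id': 'port-1'})

print(result)    # {'id': 'port-1'}
print(m.called)  # True
```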
Change-Id: Ieec7f5ab7b63cba0b1c9569e87dd894dd34b07c2
1. What is the problem
After recent updates, Neutron uses a pluggable IPAM driver backend by
default, while the unit tests for network functionality in the
Tricircle assume the non-pluggable IPAM driver backend. So this
default backend change causes most of the unit tests to fail.
2. What is the solution to the problem
In the setup of the unit tests, set the "ipam_driver" option to an
empty string to use the non-pluggable IPAM driver backend.
3. What the features need to be implemented to the Tricircle
to realize the solution
No new feature introduced.
Change-Id: Ifa85d4e209ac0fa04c21db316eee6ebc41981dab
Neutron is refactoring a lot of its code. For example, common entities
are being moved to neutron-lib. When code is refactored it is marked
with debtcollector, which issues a deprecation warning. In the next
release the old entity will be removed. Consuming projects (like
tricircle) must update to comply with the new entity before the old
one is removed.
Tricircle maintainers must monitor deprecation warnings and act on them
before the next release.
Many deprecations are triggered early (on imports, for example)
before the warnings are enabled by the WarningsFixture in the
base test class.
To make sure all DeprecationWarning messages are emitted we enable
them via the PYTHONWARNINGS environment variable.
Change-Id: Iade909b35a55cebb7dfe13a688f451ad91989b94
1. What is the problem:
If the availability zone (az in short) does not exist in the server
creation request or external network, a 400 bad request is given in
the current handling; this is inconsistent with Nova and Neutron. In
Nova and Neutron, az is an optional parameter, i.e., even if the az is
not specified in the request, a scheduled az should be used for the
creation, rather than returning 400 bad request.
2. What needs to be fixed:
To fix the issue, server creation should allow no az parameter in
the request, and select one default bottom pod to continue the server
creation.
3. What is the purpose of this patch set:
To make the az handling behavior consistent with that in Nova and
Neutron.
Change-Id: I9281914bad482573c6540bf2a14e51e7ca4d744c
Signed-off-by: joehuang <joehuang@huawei.com>
1. What is the problem:
If 'networks' is not carried in the request to the Nova API-GW,
then in the current implementation, server_body['networks'] will
not be initialized before it is used in the "nics = [ {'port-id':
_port['port']} for _port in server_body['networks']]" clause;
an exception then happens and an error is returned to the API caller.
In fact, it should work and no exception should happen.
2. What needs to be fixed:
Moving the server_body['networks'] initialization outside
the "if 'networks' in kw['server']" clause ensures
server_body['networks'] is initialized before it is used
later, and eliminates the exception raised in the "nics = ...
for _port in server_body['networks']]" clause.
3. What is the purpose of this patch set:
Eliminate the unusual exception raised during server creation
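The fix can be sketched as below; kw and server_body mirror the names in the commit message, but this is a simplified stand-in for the real Nova API-GW code:

```python
def build_nics(kw):
    server_body = {}
    # Initialize 'networks' unconditionally, BEFORE the guarded clause,
    # so the list comprehension below never raises KeyError.
    server_body['networks'] = []
    if 'networks' in kw['server']:
        for net in kw['server']['networks']:
            # Port creation is faked here for illustration.
            server_body['networks'].append({'port': 'port-for-%s' % net})
    nics = [{'port-id': _port['port']} for _port in server_body['networks']]
    return nics

print(build_nics({'server': {}}))                     # [] (no exception)
print(build_nics({'server': {'networks': ['net1']}})) # [{'port-id': 'port-for-net1'}]
```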
Change-Id: I50a6ac8e6bba78233e063bcfc24c0a1d4663e9f6
Signed-off-by: joehuang <joehuang@huawei.com>
1. What is the problem:
The Python requests lib is used to forward the http request from the
Cinder API-GW to the bottom Cinder. The latest version of the requests
lib adds a stricter header validity check: only bytes or string is
allowed as a header item's value; other values, even "None", lead to
an exception. But the Cinder client passes some header items with a
"None" value in the api request to the Cinder API-GW, which leads to
an exception in the Cinder API-GW when forwarding the request to the
bottom Cinder via the requests lib. This makes the integration test
fail and blocks any patch from being merged.
2. What needs to be fixed:
Remove (clean) the invalid header items in the request to the bottom
Cinder, in which the python requests lib is used, so that the
integration test can pass and patches can be merged.
3. What is the purpose of this patch set:
Fix the integration test failure caused by the strict validity check
in http header processing.
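The cleanup can be sketched as a small filter; the helper name is illustrative, not the actual Cinder API-GW code, but the constraint matches what requests enforces (header values must be str or bytes):

```python
def clean_headers(headers):
    """Drop header items whose value is not str or bytes (e.g. None),
    so the dict can be passed safely to the requests lib."""
    return {k: v for k, v in headers.items()
            if isinstance(v, (str, bytes))}

headers = {'X-Auth-Token': 'abc123',
           'Content-Type': 'application/json',
           'X-Trans-Id': None}
print(clean_headers(headers))
# {'X-Auth-Token': 'abc123', 'Content-Type': 'application/json'}
```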
Change-Id: Iff6d2dd77571180ef9a0cad2171c479be63bd880
Signed-off-by: joehuang <joehuang@huawei.com>
1. What is the problem
Our current mechanism to handle dhcp port creation is that we first
create the top dhcp port, then use the allocated ip to create the
bottom dhcp port. If we get an "IpAddressInUse" error, meaning that
the bottom dhcp agent has already created a dhcp port with the same
ip, we just reuse that port; otherwise, after we successfully create
the bottom dhcp port, we remove other dhcp ports with different ips,
so the dhcp agent is notified and uses the bottom dhcp port we created.
However, we find that this mechanism doesn't work in the following
situation. Dhcp agent may create the bottom dhcp port after we do
the check about whether other dhcp ports exist, so we don't remove
the dhcp port created by dhcp agent, thus we end up with two bottom
dhcp ports, one is created by dhcp agent, one is created by us.
2. What is the solution to the problem
Create bottom subnet with dhcp disabled, so dhcp agent will not be
scheduled, then create bottom dhcp port and update bottom subnet
to enable dhcp. Finding that there is already a reserved dhcp port,
dhcp agent will not create another dhcp port and directly use the
existing one.
3. What the features need to be implemented to the Tricircle
to realize the solution
No new feature introduced.
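The order of operations in the solution can be sketched as follows; the FakeClient stands in for the bottom Neutron client, so this is an illustration of the sequence, not the real Tricircle code:

```python
class FakeClient(object):
    """Records calls in order, standing in for the bottom Neutron client."""
    def __init__(self):
        self.calls = []

    def create_subnet(self, body):
        self.calls.append(('create_subnet', body['enable_dhcp']))

    def create_port(self, body):
        self.calls.append(('create_port', body['device_owner']))

    def update_subnet(self, body):
        self.calls.append(('update_subnet', body['enable_dhcp']))

def setup_bottom_subnet(client):
    # 1) create the subnet with dhcp disabled, so no dhcp agent is
    #    scheduled yet and cannot race us for the dhcp port
    client.create_subnet({'enable_dhcp': False})
    # 2) create the reserved bottom dhcp port ourselves
    client.create_port({'device_owner': 'network:dhcp'})
    # 3) enable dhcp; the agent finds the reserved port and reuses it
    client.update_subnet({'enable_dhcp': True})

client = FakeClient()
setup_bottom_subnet(client)
print(client.calls)
```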
Change-Id: I8bb8622b34b709edef230d1f1c985e3fabd5adb0