Move stateless design from experiment to master branch

The stateless design was developed in the experiment branch. Compared to
the stateful design, the experiment shows the advantage of removing the
status synchronization and UUID mapping, and it fully removes the coupling
with OpenStack services like Nova and Cinder. The query latency overhead
for resources is also acceptable. It's time to move the stateless design
to the master branch.

BP: https://blueprints.launchpad.net/tricircle/+spec/implement-stateless

Change-Id: I51bbb60dc07da5b2e79f25e02209aa2eb72711ac
Signed-off-by: Chaoyi Huang <joehuang@huawei.com>
Chaoyi Huang 2016-01-14 12:06:20 +08:00
parent 585bb7ea63
commit 81b45f2c1d
119 changed files with 10071 additions and 4626 deletions

README.md

@@ -1,11 +1,111 @@
 # Tricircle

-(Attention Please, Stateless Design Proposal is being worked on the ["experiment"](https://github.com/openstack/tricircle/tree/experiment) branch).
-
-(The origningal PoC source code, please switch to ["poc"](https://github.com/openstack/tricircle/tree/poc) tag, or ["stable/fortest"](https://github.com/openstack/tricircle/tree/stable/fortest) branch)
-
-Tricircle is a openstack project that aims to deal with OpenStack deployment across multiple sites. It provides users a single management view by having only one OpenStack instance on behalf of all the involved ones. It essentially serves as a communication bus between the central OpenStack instance and the other OpenStack instances that are called upon.
+(For the original PoC source code, please switch to the
+["poc"](https://github.com/openstack/tricircle/tree/poc) tag, or the
+["stable/fortest"](https://github.com/openstack/tricircle/tree/stable/fortest)
+branch)
+
+Tricircle is an OpenStack project that aims to deal with multiple OpenStack
+deployments across multiple data centers. It provides users a single
+management view by having only one Tricircle instance on behalf of all the
+involved OpenStack instances.
+
+Tricircle presents one big region to the end user in Keystone. Each
+OpenStack instance, which is called a pod, is a sub-region of Tricircle in
+Keystone, and is not visible to the end user directly.
+
+Tricircle acts as an OpenStack API gateway: it accepts all OpenStack API
+calls, forwards them to the corresponding OpenStack instance (pod), and
+deals with cross-pod networking automatically.
+
+The end user can see availability zones (AZ in short) and use an AZ to
+provision VMs, Volumes, and even Networks through Tricircle.
+
+Similar to AWS, one AZ can include many pods, and a tenant's resources will
+be bound to specific pods automatically.

 ## Project Resources

-- Project status, bugs, and blueprints are tracked on [Launchpad](https://launchpad.net/tricircle)
-- Additional resources are linked from the project [Wiki](https://wiki.openstack.org/wiki/Tricircle) page
+License: Apache 2.0
+
+- Design documentation: [Tricircle Design Blueprint](https://docs.google.com/document/d/18kZZ1snMOCD9IQvUKI5NVDzSASpw-QKj7l2zNqMEd3g/)
+- Wiki: https://wiki.openstack.org/wiki/tricircle
+- Documentation: http://docs.openstack.org/developer/tricircle
+- Source: https://github.com/openstack/tricircle
+- Bugs: http://bugs.launchpad.net/tricircle
+- Blueprints: https://launchpad.net/tricircle
## Play with DevStack
The stateless design can now be played with via DevStack.
- 1 Git clone DevStack.
- 2 Git clone Tricircle, or just download devstack/local.conf.sample
- 3 Copy devstack/local.conf.sample to DevStack folder and rename it to
local.conf, change password in the file if needed.
- 4 Run DevStack.
- 5 After DevStack successfully starts, check whether the services have been
correctly registered. Run "openstack endpoint list" and you should get output
similar to the following:
```
+----------------------------------+-----------+--------------+----------------+
| ID | Region | Service Name | Service Type |
+----------------------------------+-----------+--------------+----------------+
| 230059e8533e4d389e034fd68257034b | RegionOne | glance | image |
| 25180a0a08cb41f69de52a7773452b28 | RegionOne | nova | compute |
| bd1ed1d6f0cc42398688a77bcc3bda91 | Pod1 | neutron | network |
| 673736f54ec147b79e97c395afe832f9 | RegionOne | ec2 | ec2 |
| fd7f188e2ba04ebd856d582828cdc50c | RegionOne | neutron | network |
| ffb56fd8b24a4a27bf6a707a7f78157f | RegionOne | keystone | identity |
| 88da40693bfa43b9b02e1478b1fa0bc6 | Pod1 | nova | compute |
| f35d64c2ddc44c16a4f9dfcd76e23d9f | RegionOne | nova_legacy | compute_legacy |
| 8759b2941fe7469e9651de3f6a123998 | RegionOne | tricircle | Cascading |
+----------------------------------+-----------+--------------+----------------+
```
"RegionOne" is the region you set in local.conf via REGION_NAME, whose default
value is "RegionOne", we use it as the region for top OpenStack(Tricircle);
"Pod1" is the region set via "POD_REGION_NAME", new configuration option
introduced by Tricircle, we use it as the bottom OpenStack.
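For instance, the relevant fragment of local.conf might look like this (a
minimal sketch; the variable names come from devstack/settings and the values
shown are the defaults):
```
# region name of the top OpenStack (Tricircle); DevStack's default
REGION_NAME=RegionOne
# region name of the bottom OpenStack, a new option introduced by Tricircle
POD_REGION_NAME=Pod1
```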
- 6 Create pod instances for Tricircle and bottom OpenStack
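These API calls need a Keystone token. One way to fetch it, assuming
python-openstackclient is installed and admin credentials are loaded in the
shell (a sketch, not the only way):
```
token=$(openstack token issue -f value -c id)
```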
```
curl -X POST http://127.0.0.1:19999/v1.0/pods -H "Content-Type: application/json" \
-H "X-Auth-Token: $token" -d '{"pod": {"pod_name": "RegionOne"}}'
curl -X POST http://127.0.0.1:19999/v1.0/pods -H "Content-Type: application/json" \
-H "X-Auth-Token: $token" -d '{"pod": {"pod_name": "Pod1", "az_name": "az1"}}'
```
Pay attention to the "pod_name" parameter we specify when creating a pod. The
pod name should exactly match the region name registered in Keystone since it
is used by Tricircle to route API requests. In the above commands, we create
pods named "RegionOne" and "Pod1" for the top OpenStack (Tricircle) and the
bottom OpenStack. The Tricircle API service will automatically create an
aggregate when a user creates a bottom pod, so the command
"nova aggregate-list" will show the following result:
```
+----+----------+-------------------+
| Id | Name | Availability Zone |
+----+----------+-------------------+
| 1 | ag_Pod1 | az1 |
+----+----------+-------------------+
```
- 7 Create necessary resources to boot a virtual machine.
```
nova flavor-create test 1 1024 10 1
neutron net-create net1
neutron subnet-create net1 10.0.0.0/24
glance image-list
```
Note that flavor mapping has not been implemented yet, so the created flavor
is just a database record; the flavor with the same id in the bottom OpenStack
will actually be used.
- 8 Boot a virtual machine.
```
nova boot --flavor 1 --image $image_id --nic net-id=$net_id --availability-zone az1 vm1
```
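If you would rather not copy the ids by hand, $image_id and $net_id can be
captured from the resources created in step 7, for example (a sketch; it
assumes the image and network clients accept cliff-style -f/-c options and
that the image DevStack uploads is first in the list):
```
image_id=$(openstack image list -f value -c ID | head -n 1)
net_id=$(neutron net-show net1 -f value -c id)
```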
- 9 Create, list, show and delete volume.
```
cinder --debug create --availability-zone=az1 1
cinder --debug list
cinder --debug show $volume_id
cinder --debug delete $volume_id
cinder --debug list
```

cmd/api.py

@@ -21,14 +21,13 @@ import sys

 from oslo_config import cfg
 from oslo_log import log as logging
-from werkzeug import serving
+from oslo_service import wsgi

 from tricircle.api import app
 from tricircle.common import config
 from tricircle.common.i18n import _LI
 from tricircle.common.i18n import _LW
+from tricircle.common import restapp

 CONF = cfg.CONF
@@ -36,8 +35,7 @@ LOG = logging.getLogger(__name__)

 def main():
-    config.init(sys.argv[1:])
-    config.setup_logging()
+    config.init(app.common_opts, sys.argv[1:])
     application = app.setup_app()

     host = CONF.bind_host
@@ -48,16 +46,17 @@ def main():
         LOG.warning(_LW("Wrong worker number, worker = %(workers)s"), workers)
         workers = 1

-    LOG.info(_LI("Server on http://%(host)s:%(port)s with %(workers)s"),
+    LOG.info(_LI("Admin API on http://%(host)s:%(port)s with %(workers)s"),
             {'host': host, 'port': port, 'workers': workers})
-    serving.run_simple(host, port,
-                       application,
-                       processes=workers)
+    service = wsgi.Server(CONF, 'Tricircle Admin_API', application, host, port)
+    restapp.serve(service, CONF, workers)

     LOG.info(_LI("Configuration:"))
     CONF.log_opt_values(LOG, std_logging.INFO)
+    restapp.wait()


 if __name__ == '__main__':
     main()
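The hunk above swaps the werkzeug development server for oslo.service's WSGI
server. restapp is a new tricircle helper module whose source is not shown in
this commit view; a minimal sketch of what such a serve/wait pair usually
looks like on top of oslo_service (names and structure assumed, following the
common Ironic-derived pattern, not the actual tricircle.common.restapp code):
```python
# Hypothetical sketch of a restapp-style helper; assumptions noted above.
from oslo_service import service

_launcher = None


def serve(api_service, conf, workers=1):
    """Launch the WSGI service with the given number of workers."""
    global _launcher
    if _launcher:
        raise RuntimeError('serve() can only be called once')
    # service.launch() returns a launcher managing the worker processes
    _launcher = service.launch(conf, api_service, workers=workers)


def wait():
    """Block until all launched workers exit."""
    _launcher.wait()
```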

cmd/cinder_apigw.py (new file)

@@ -0,0 +1,63 @@
# Copyright 2015 Huawei Technologies Co., Ltd.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# Much of this module is based on the work of the Ironic team
# see http://git.openstack.org/cgit/openstack/ironic/tree/ironic/cmd/api.py
import logging as std_logging
import sys
from oslo_config import cfg
from oslo_log import log as logging
from oslo_service import wsgi
from tricircle.common import config
from tricircle.common.i18n import _LI
from tricircle.common.i18n import _LW
from tricircle.common import restapp
from tricircle.cinder_apigw import app
CONF = cfg.CONF
LOG = logging.getLogger(__name__)
def main():
config.init(app.common_opts, sys.argv[1:])
application = app.setup_app()
host = CONF.bind_host
port = CONF.bind_port
workers = CONF.api_workers
if workers < 1:
LOG.warning(_LW("Wrong worker number, worker = %(workers)s"), workers)
workers = 1
LOG.info(_LI("Cinder_APIGW on http://%(host)s:%(port)s with %(workers)s"),
{'host': host, 'port': port, 'workers': workers})
service = wsgi.Server(CONF, 'Tricircle Cinder_APIGW',
application, host, port)
restapp.serve(service, CONF, workers)
LOG.info(_LI("Configuration:"))
CONF.log_opt_values(LOG, std_logging.INFO)
restapp.wait()
if __name__ == '__main__':
main()

cmd/dispatcher.py (deleted)

@@ -1,85 +0,0 @@
# Copyright 2015 Huawei Technologies Co., Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import eventlet
if __name__ == "__main__":
eventlet.monkey_patch()
import sys
import traceback
from oslo_config import cfg
from oslo_log import log as logging
from tricircle.common.i18n import _LE
from tricircle.common.nova_lib import conductor_rpcapi
from tricircle.common.nova_lib import db_api as nova_db_api
from tricircle.common.nova_lib import exception as nova_exception
from tricircle.common.nova_lib import objects as nova_objects
from tricircle.common.nova_lib import objects_base
from tricircle.common.nova_lib import quota
from tricircle.common.nova_lib import rpc as nova_rpc
from tricircle.dispatcher import service
def block_db_access():
class NoDB(object):
def __getattr__(self, attr):
return self
def __call__(self, *args, **kwargs):
stacktrace = "".join(traceback.format_stack())
LOG = logging.getLogger('nova.compute')
LOG.error(_LE('No db access allowed in nova-compute: %s'),
stacktrace)
raise nova_exception.DBNotAllowed('nova-compute')
nova_db_api.IMPL = NoDB()
def set_up_nova_object_indirection():
conductor = conductor_rpcapi.ConductorAPI()
conductor.client.target.exchange = "nova"
objects_base.NovaObject.indirection_api = conductor
def process_command_line_arguments():
logging.register_options(cfg.CONF)
logging.set_defaults()
cfg.CONF(sys.argv[1:])
logging.setup(cfg.CONF, "dispatcher", version='0.1')
def _set_up_nova_objects():
nova_rpc.init(cfg.CONF)
block_db_access()
set_up_nova_object_indirection()
nova_objects.register_all()
def _disable_quotas():
QUOTAS = quota.QUOTAS
QUOTAS._driver_cls = quota.NoopQuotaDriver()
if __name__ == "__main__":
_set_up_nova_objects()
_disable_quotas()
process_command_line_arguments()
server = service.setup_server()
server.start()
server.wait()

cmd/manage.py

@@ -19,7 +19,7 @@ import sys

 from oslo_config import cfg

 from tricircle.db import core
-import tricircle.db.migration_helpers as migration_helpers
+from tricircle.db import migration_helpers


 def main(argv=None, config_files=None):
@@ -28,7 +28,7 @@ def main(argv=None, config_files=None):
             project='tricircle',
             default_config_files=config_files)
     migration_helpers.find_migrate_repo()
-    migration_helpers.sync_repo(1)
+    migration_helpers.sync_repo(2)


 if __name__ == '__main__':
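The bump from sync_repo(1) to sync_repo(2) migrates a fresh database up to
schema revision 2. The DevStack plugin later in this commit invokes the
script with the API config file:
```
python "$TRICIRCLE_DIR/cmd/manage.py" "$TRICIRCLE_API_CONF"
```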

cmd/nova_apigw.py (new file)

@@ -0,0 +1,68 @@
# Copyright 2015 Huawei Technologies Co., Ltd.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# Much of this module is based on the work of the Ironic team
# see http://git.openstack.org/cgit/openstack/ironic/tree/ironic/cmd/api.py
import eventlet
if __name__ == "__main__":
eventlet.monkey_patch()
import logging as std_logging
import sys
from oslo_config import cfg
from oslo_log import log as logging
from oslo_service import wsgi
from tricircle.common import config
from tricircle.common.i18n import _LI
from tricircle.common.i18n import _LW
from tricircle.common import restapp
from tricircle.nova_apigw import app
CONF = cfg.CONF
LOG = logging.getLogger(__name__)
def main():
config.init(app.common_opts, sys.argv[1:])
application = app.setup_app()
host = CONF.bind_host
port = CONF.bind_port
workers = CONF.api_workers
if workers < 1:
LOG.warning(_LW("Wrong worker number, worker = %(workers)s"), workers)
workers = 1
LOG.info(_LI("Nova_APIGW on http://%(host)s:%(port)s with %(workers)s"),
{'host': host, 'port': port, 'workers': workers})
service = wsgi.Server(CONF, 'Tricircle Nova_APIGW',
application, host, port)
restapp.serve(service, CONF, workers)
LOG.info(_LI("Configuration:"))
CONF.log_opt_values(LOG, std_logging.INFO)
restapp.wait()
if __name__ == '__main__':
main()

cmd/proxy.py (deleted)

@@ -1,85 +0,0 @@
# Copyright 2015 Huawei Technologies Co., Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import eventlet
if __name__ == "__main__":
eventlet.monkey_patch()
import sys
import traceback
from oslo_config import cfg
from oslo_log import log as logging
from tricircle.common.i18n import _LE
from tricircle.common.nova_lib import conductor_rpcapi
from tricircle.common.nova_lib import db_api as nova_db_api
from tricircle.common.nova_lib import exception as nova_exception
from tricircle.common.nova_lib import objects as nova_objects
from tricircle.common.nova_lib import objects_base
from tricircle.common.nova_lib import quota
from tricircle.common.nova_lib import rpc as nova_rpc
from tricircle.proxy import service
def block_db_access():
class NoDB(object):
def __getattr__(self, attr):
return self
def __call__(self, *args, **kwargs):
stacktrace = "".join(traceback.format_stack())
LOG = logging.getLogger('nova.compute')
LOG.error(_LE('No db access allowed in nova-compute: %s'),
stacktrace)
raise nova_exception.DBNotAllowed('nova-compute')
nova_db_api.IMPL = NoDB()
def set_up_nova_object_indirection():
conductor = conductor_rpcapi.ConductorAPI()
conductor.client.target.exchange = "nova"
objects_base.NovaObject.indirection_api = conductor
def process_command_line_arguments():
logging.register_options(cfg.CONF)
logging.set_defaults()
cfg.CONF(sys.argv[1:])
logging.setup(cfg.CONF, "proxy", version='0.1')
def _set_up_nova_objects():
nova_rpc.init(cfg.CONF)
block_db_access()
set_up_nova_object_indirection()
nova_objects.register_all()
def _disable_quotas():
QUOTAS = quota.QUOTAS
QUOTAS._driver_cls = quota.NoopQuotaDriver()
if __name__ == "__main__":
_set_up_nova_objects()
_disable_quotas()
process_command_line_arguments()
server = service.setup_server()
server.start()
server.wait()

cmd/xjob.py (new file)

@@ -0,0 +1,61 @@
# Copyright 2015 Huawei Technologies Co., Ltd.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# Much of this module is based on the work of the Ironic team
# see http://git.openstack.org/cgit/openstack/ironic/tree/ironic/cmd/api.py
import eventlet
if __name__ == "__main__":
eventlet.monkey_patch()
import logging as std_logging
import sys
from oslo_config import cfg
from oslo_log import log as logging
from tricircle.common import config
from tricircle.common.i18n import _LI
from tricircle.common.i18n import _LW
from tricircle.xjob import xservice
CONF = cfg.CONF
LOG = logging.getLogger(__name__)
def main():
config.init(xservice.common_opts, sys.argv[1:])
host = CONF.host
workers = CONF.workers
if workers < 1:
LOG.warning(_LW("Wrong worker number, worker = %(workers)s"), workers)
workers = 1
LOG.info(_LI("XJob Server on http://%(host)s with %(workers)s"),
{'host': host, 'workers': workers})
xservice.serve(xservice.create_service(), workers)
LOG.info(_LI("Configuration:"))
CONF.log_opt_values(LOG, std_logging.INFO)
xservice.wait()
if __name__ == '__main__':
main()
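Unlike the API gateways above, xjob serves no WSGI application of its own; in
the DevStack plugin later in this commit it is launched as:
```
run_process t-job "python $TRICIRCLE_XJOB --config-file $TRICIRCLE_XJOB_CONF"
```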

devstack/local.conf.sample

@@ -27,25 +27,32 @@ Q_FLOATING_ALLOCATION_POOL=start=10.100.100.160,end=10.100.100.192
 PUBLIC_NETWORK_GATEWAY=10.100.100.3
+TENANT_VLAN_RANGE=2001:3000
+PHYSICAL_NETWORK=bridge

 Q_ENABLE_TRICIRCLE=True
-enable_plugin tricircle https://git.openstack.org/openstack/tricircle master
+enable_plugin tricircle https://github.com/openstack/tricircle/

 # Tricircle Services
 enable_service t-api
-enable_service t-prx
-enable_service t-dis
+enable_service t-ngw
+enable_service t-cgw
+enable_service t-job

 # Use Neutron instead of nova-network
 disable_service n-net
-disable_service n-cpu
-disable_service n-sch
 enable_service q-svc
-disable_service q-dhcp
+enable_service q-svc1
+enable_service q-dhcp
+enable_service q-agt
+disable_service n-obj
+disable_service n-cauth
+disable_service n-novnc
 disable_service q-l3
-disable_service q-agt
-disable_service c-api
-disable_service c-vol
+enable_service c-api
+enable_service c-vol
+enable_service c-sch
 disable_service c-bak
-disable_service c-sch
-disable_service cinder
+disable_service tempest
+disable_service horizon

devstack/plugin.sh

@@ -3,7 +3,7 @@
 # Test if any tricircle services are enabled
 # is_tricircle_enabled
 function is_tricircle_enabled {
-    [[ ,${ENABLED_SERVICES} =~ ,"t-" ]] && return 0
+    [[ ,${ENABLED_SERVICES} =~ ,"t-api" ]] && return 0
     return 1
 }

@@ -18,9 +18,9 @@ function create_tricircle_accounts {
     create_service_user "tricircle"

     if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
-        local tricircle_dispatcher=$(get_or_create_service "tricircle" \
+        local tricircle_api=$(get_or_create_service "tricircle" \
             "Cascading" "OpenStack Cascading Service")
-        get_or_create_endpoint $tricircle_dispatcher \
+        get_or_create_endpoint $tricircle_api \
             "$REGION_NAME" \
             "$SERVICE_PROTOCOL://$TRICIRCLE_API_HOST:$TRICIRCLE_API_PORT/v1.0" \
             "$SERVICE_PROTOCOL://$TRICIRCLE_API_HOST:$TRICIRCLE_API_PORT/v1.0" \
@@ -29,6 +29,79 @@ function create_tricircle_accounts {
     fi
 }
# create_nova_apigw_accounts() - Set up common required nova_apigw
# service accounts in keystone; nova_apigw works as the nova api service
# Project               User            Roles
# -----------------------------------------------------------------
# $SERVICE_TENANT_NAME  nova_apigw      service
function create_nova_apigw_accounts {
if [[ "$ENABLED_SERVICES" =~ "t-ngw" ]]; then
create_service_user "nova_apigw"
if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
local tricircle_nova_apigw=$(get_or_create_service "nova" \
"compute" "Nova Compute Service")
remove_old_endpoint_conf $tricircle_nova_apigw
get_or_create_endpoint $tricircle_nova_apigw \
"$REGION_NAME" \
"$SERVICE_PROTOCOL://$TRICIRCLE_NOVA_APIGW_HOST:$TRICIRCLE_NOVA_APIGW_PORT/v2.1/"'$(tenant_id)s' \
"$SERVICE_PROTOCOL://$TRICIRCLE_NOVA_APIGW_HOST:$TRICIRCLE_NOVA_APIGW_PORT/v2.1/"'$(tenant_id)s' \
"$SERVICE_PROTOCOL://$TRICIRCLE_NOVA_APIGW_HOST:$TRICIRCLE_NOVA_APIGW_PORT/v2.1/"'$(tenant_id)s'
fi
fi
}
# create_cinder_apigw_accounts() - Set up common required cinder_apigw
# service accounts in keystone; cinder_apigw works as the cinder api service
# Project               User            Roles
# ---------------------------------------------------------------------
# $SERVICE_TENANT_NAME  cinder_apigw    service
function create_cinder_apigw_accounts {
if [[ "$ENABLED_SERVICES" =~ "t-cgw" ]]; then
create_service_user "cinder_apigw"
if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
local tricircle_cinder_apigw=$(get_or_create_service "cinder" \
"volumev2" "Cinder Volume Service")
remove_old_endpoint_conf $tricircle_cinder_apigw
get_or_create_endpoint $tricircle_cinder_apigw \
"$REGION_NAME" \
"$SERVICE_PROTOCOL://$TRICIRCLE_CINDER_APIGW_HOST:$TRICIRCLE_CINDER_APIGW_PORT/v2/"'$(tenant_id)s' \
"$SERVICE_PROTOCOL://$TRICIRCLE_CINDER_APIGW_HOST:$TRICIRCLE_CINDER_APIGW_PORT/v2/"'$(tenant_id)s' \
"$SERVICE_PROTOCOL://$TRICIRCLE_CINDER_APIGW_HOST:$TRICIRCLE_CINDER_APIGW_PORT/v2/"'$(tenant_id)s'
fi
fi
}
# remove the old endpoints of a service registered in keystone
function remove_old_endpoint_conf {
local service=$1
local endpoint_id
interface_list="public admin internal"
for interface in $interface_list; do
endpoint_id=$(openstack endpoint list \
--service "$service" \
--interface "$interface" \
--region "$REGION_NAME" \
-c ID -f value)
if [[ -n "$endpoint_id" ]]; then
# Delete endpoint
openstack endpoint delete "$endpoint_id"
fi
done
}
 # create_tricircle_cache_dir() - Set up cache dir for tricircle
 function create_tricircle_cache_dir {
@@ -36,68 +109,33 @@ function create_tricircle_cache_dir {
     sudo rm -rf $TRICIRCLE_AUTH_CACHE_DIR
     sudo mkdir -p $TRICIRCLE_AUTH_CACHE_DIR
     sudo chown `whoami` $TRICIRCLE_AUTH_CACHE_DIR
 }

+# common config-file configuration for tricircle services
+function init_common_tricircle_conf {
+    local conf_file=$1
+
+    touch $conf_file
+    iniset $conf_file DEFAULT debug $ENABLE_DEBUG_LOG_LEVEL
+    iniset $conf_file DEFAULT verbose True
+    iniset $conf_file DEFAULT use_syslog $SYSLOG
+    iniset $conf_file DEFAULT tricircle_db_connection `database_connection_url tricircle`
+
+    iniset $conf_file client admin_username admin
+    iniset $conf_file client admin_password $ADMIN_PASSWORD
+    iniset $conf_file client admin_tenant demo
+    iniset $conf_file client auto_refresh_endpoint True
+    iniset $conf_file client top_pod_name $REGION_NAME
+
+    iniset $conf_file oslo_concurrency lock_path $TRICIRCLE_STATE_PATH/lock
+}

-function configure_tricircle_dispatcher {
-    if is_service_enabled q-svc ; then
-        echo "Configuring Neutron plugin for Tricircle"
-        Q_PLUGIN_CLASS="tricircle.networking.plugin.TricirclePlugin"
-
-        #NEUTRON_CONF=/etc/neutron/neutron.conf
-        iniset $NEUTRON_CONF DEFAULT core_plugin "$Q_PLUGIN_CLASS"
-        iniset $NEUTRON_CONF DEFAULT service_plugins ""
-    fi
-
-    if is_service_enabled t-dis ; then
-        echo "Configuring Tricircle Dispatcher"
-        sudo install -d -o $STACK_USER -m 755 $TRICIRCLE_CONF_DIR
-        cp -p $TRICIRCLE_DIR/etc/dispatcher.conf $TRICIRCLE_DISPATCHER_CONF
-        TRICIRCLE_POLICY_FILE=$TRICIRCLE_CONF_DIR/policy.json
-        cp $TRICIRCLE_DIR/etc/policy.json $TRICIRCLE_POLICY_FILE
-        iniset $TRICIRCLE_DISPATCHER_CONF DEFAULT debug $ENABLE_DEBUG_LOG_LEVEL
-        iniset $TRICIRCLE_DISPATCHER_CONF DEFAULT verbose True
-        setup_colorized_logging $TRICIRCLE_DISPATCHER_CONF DEFAULT tenant
-        iniset $TRICIRCLE_DISPATCHER_CONF DEFAULT bind_host $TRICIRCLE_DISPATCHER_LISTEN_ADDRESS
-        iniset $TRICIRCLE_DISPATCHER_CONF DEFAULT use_syslog $SYSLOG
-        iniset_rpc_backend tricircle $TRICIRCLE_DISPATCHER_CONF
-
-        iniset $TRICIRCLE_DISPATCHER_CONF database connection `database_connection_url tricircle`
-
-        iniset $TRICIRCLE_DISPATCHER_CONF client admin_username admin
-        iniset $TRICIRCLE_DISPATCHER_CONF client admin_password $ADMIN_PASSWORD
-        iniset $TRICIRCLE_DISPATCHER_CONF client admin_tenant demo
-        iniset $TRICIRCLE_DISPATCHER_CONF client auto_refresh_endpoint True
-        iniset $TRICIRCLE_DISPATCHER_CONF client top_site_name $OS_REGION_NAME
-    fi
-}
-
-function configure_tricircle_proxy {
-    if is_service_enabled t-prx ; then
-        echo "Configuring Tricircle Proxy"
-        cp -p $NOVA_CONF $TRICIRCLE_CONF_DIR
-        mv $TRICIRCLE_CONF_DIR/nova.conf $TRICIRCLE_PROXY_CONF
-    fi
-}

 function configure_tricircle_api {
     if is_service_enabled t-api ; then
         echo "Configuring Tricircle API"

-        cp -p $TRICIRCLE_DIR/etc/api.conf $TRICIRCLE_API_CONF
-        iniset $TRICIRCLE_API_CONF DEFAULT debug $ENABLE_DEBUG_LOG_LEVEL
-        iniset $TRICIRCLE_API_CONF DEFAULT verbose True
-        iniset $TRICIRCLE_API_CONF DEFAULT use_syslog $SYSLOG
-        iniset $TRICIRCLE_API_CONF database connection `database_connection_url tricircle`
-        iniset $TRICIRCLE_API_CONF client admin_username admin
-        iniset $TRICIRCLE_API_CONF client admin_password $ADMIN_PASSWORD
-        iniset $TRICIRCLE_API_CONF client admin_tenant demo
-        iniset $TRICIRCLE_API_CONF client auto_refresh_endpoint True
-        iniset $TRICIRCLE_API_CONF client top_site_name $OS_REGION_NAME
+        init_common_tricircle_conf $TRICIRCLE_API_CONF

         setup_colorized_logging $TRICIRCLE_API_CONF DEFAULT tenant_name
@@ -116,59 +154,208 @@ function configure_tricircle_api {
     fi
 }
function configure_tricircle_nova_apigw {
if is_service_enabled t-ngw ; then
echo "Configuring Tricircle Nova APIGW"
init_common_tricircle_conf $TRICIRCLE_NOVA_APIGW_CONF
iniset $NEUTRON_CONF client admin_username admin
iniset $NEUTRON_CONF client admin_password $ADMIN_PASSWORD
iniset $NEUTRON_CONF client admin_tenant demo
iniset $NEUTRON_CONF client auto_refresh_endpoint True
iniset $NEUTRON_CONF client top_pod_name $REGION_NAME
setup_colorized_logging $TRICIRCLE_NOVA_APIGW_CONF DEFAULT tenant_name
if is_service_enabled keystone; then
create_tricircle_cache_dir
# Configure auth token middleware
configure_auth_token_middleware $TRICIRCLE_NOVA_APIGW_CONF tricircle \
$TRICIRCLE_AUTH_CACHE_DIR
else
iniset $TRICIRCLE_NOVA_APIGW_CONF DEFAULT auth_strategy noauth
fi
fi
}
function configure_tricircle_cinder_apigw {
if is_service_enabled t-cgw ; then
echo "Configuring Tricircle Cinder APIGW"
init_common_tricircle_conf $TRICIRCLE_CINDER_APIGW_CONF
setup_colorized_logging $TRICIRCLE_CINDER_APIGW_CONF DEFAULT tenant_name
if is_service_enabled keystone; then
create_tricircle_cache_dir
# Configure auth token middleware
configure_auth_token_middleware $TRICIRCLE_CINDER_APIGW_CONF tricircle \
$TRICIRCLE_AUTH_CACHE_DIR
else
iniset $TRICIRCLE_CINDER_APIGW_CONF DEFAULT auth_strategy noauth
fi
fi
}
function configure_tricircle_xjob {
if is_service_enabled t-job ; then
echo "Configuring Tricircle xjob"
init_common_tricircle_conf $TRICIRCLE_XJOB_CONF
setup_colorized_logging $TRICIRCLE_XJOB_CONF DEFAULT
fi
}
function start_new_neutron_server {
local server_index=$1
local region_name=$2
local q_port=$3
get_or_create_service "neutron" "network" "Neutron Service"
get_or_create_endpoint "network" \
"$region_name" \
"$Q_PROTOCOL://$SERVICE_HOST:$q_port/" \
"$Q_PROTOCOL://$SERVICE_HOST:$q_port/" \
"$Q_PROTOCOL://$SERVICE_HOST:$q_port/"
cp $NEUTRON_CONF $NEUTRON_CONF.$server_index
iniset $NEUTRON_CONF.$server_index database connection `database_connection_url $Q_DB_NAME$server_index`
iniset $NEUTRON_CONF.$server_index nova region_name $region_name
iniset $NEUTRON_CONF.$server_index DEFAULT bind_port $q_port
recreate_database $Q_DB_NAME$server_index
$NEUTRON_BIN_DIR/neutron-db-manage --config-file $NEUTRON_CONF.$server_index --config-file /$Q_PLUGIN_CONF_FILE upgrade head
run_process q-svc$server_index "$NEUTRON_BIN_DIR/neutron-server --config-file $NEUTRON_CONF.$server_index --config-file /$Q_PLUGIN_CONF_FILE"
}
if [[ "$Q_ENABLE_TRICIRCLE" == "True" ]]; then if [[ "$Q_ENABLE_TRICIRCLE" == "True" ]]; then
if [[ "$1" == "stack" && "$2" == "pre-install" ]]; then if [[ "$1" == "stack" && "$2" == "pre-install" ]]; then
echo summary "Tricircle pre-install" echo summary "Tricircle pre-install"
elif [[ "$1" == "stack" && "$2" == "install" ]]; then elif [[ "$1" == "stack" && "$2" == "install" ]]; then
echo_summary "Installing Tricircle" echo_summary "Installing Tricircle"
git_clone $TRICIRCLE_REPO $TRICIRCLE_DIR $TRICIRCLE_BRANCH
elif [[ "$1" == "stack" && "$2" == "post-config" ]]; then elif [[ "$1" == "stack" && "$2" == "post-config" ]]; then
echo_summary "Configuring Tricircle" echo_summary "Configuring Tricircle"
configure_tricircle_dispatcher sudo install -d -o $STACK_USER -m 755 $TRICIRCLE_CONF_DIR
configure_tricircle_proxy
configure_tricircle_api configure_tricircle_api
configure_tricircle_nova_apigw
configure_tricircle_cinder_apigw
configure_tricircle_xjob
echo export PYTHONPATH=\$PYTHONPATH:$TRICIRCLE_DIR >> $RC_DIR/.localrc.auto echo export PYTHONPATH=\$PYTHONPATH:$TRICIRCLE_DIR >> $RC_DIR/.localrc.auto
recreate_database tricircle recreate_database tricircle
python "$TRICIRCLE_DIR/cmd/manage.py" "$TRICIRCLE_DISPATCHER_CONF" python "$TRICIRCLE_DIR/cmd/manage.py" "$TRICIRCLE_API_CONF"
if is_service_enabled q-svc ; then
start_new_neutron_server 1 $POD_REGION_NAME $TRICIRCLE_NEUTRON_PORT
# reconfigure neutron server to use our own plugin
echo "Configuring Neutron plugin for Tricircle"
Q_PLUGIN_CLASS="tricircle.network.plugin.TricirclePlugin"
iniset $NEUTRON_CONF DEFAULT core_plugin "$Q_PLUGIN_CLASS"
iniset $NEUTRON_CONF DEFAULT service_plugins ""
iniset $NEUTRON_CONF DEFAULT tricircle_db_connection `database_connection_url tricircle`
iniset $NEUTRON_CONF client admin_username admin
iniset $NEUTRON_CONF client admin_password $ADMIN_PASSWORD
iniset $NEUTRON_CONF client admin_tenant demo
iniset $NEUTRON_CONF client auto_refresh_endpoint True
iniset $NEUTRON_CONF client top_pod_name $REGION_NAME
iniset $NEUTRON_CONF tricircle bridge_segmentation_id `echo $TENANT_VLAN_RANGE | awk -F: '{print $2}'`
iniset $NEUTRON_CONF tricircle bridge_physical_network $PHYSICAL_NETWORK
fi
elif [[ "$1" == "stack" && "$2" == "extra" ]]; then elif [[ "$1" == "stack" && "$2" == "extra" ]]; then
echo_summary "Initializing Tricircle Service" echo_summary "Initializing Tricircle Service"
if is_service_enabled t-dis; then
run_process t-dis "python $TRICIRCLE_DISPATCHER --config-file $TRICIRCLE_DISPATCHER_CONF"
fi
if is_service_enabled t-prx; then
run_process t-prx "python $TRICIRCLE_PROXY --config-file $TRICIRCLE_PROXY_CONF"
fi
if is_service_enabled t-api; then if is_service_enabled t-api; then
create_tricircle_accounts create_tricircle_accounts
run_process t-api "python $TRICIRCLE_API --config-file $TRICIRCLE_API_CONF" run_process t-api "python $TRICIRCLE_API --config-file $TRICIRCLE_API_CONF"
fi fi
if is_service_enabled t-ngw; then
create_nova_apigw_accounts
run_process t-ngw "python $TRICIRCLE_NOVA_APIGW --config-file $TRICIRCLE_NOVA_APIGW_CONF"
# Nova services are running, but we need to re-configure them to
# move them to bottom region
iniset $NOVA_CONF neutron region_name $POD_REGION_NAME
iniset $NOVA_CONF neutron url "$Q_PROTOCOL://$SERVICE_HOST:$TRICIRCLE_NEUTRON_PORT"
get_or_create_endpoint "compute" \
"$POD_REGION_NAME" \
"$NOVA_SERVICE_PROTOCOL://$NOVA_SERVICE_HOST:$NOVA_SERVICE_PORT/v2.1/"'$(tenant_id)s' \
"$NOVA_SERVICE_PROTOCOL://$NOVA_SERVICE_HOST:$NOVA_SERVICE_PORT/v2.1/"'$(tenant_id)s' \
"$NOVA_SERVICE_PROTOCOL://$NOVA_SERVICE_HOST:$NOVA_SERVICE_PORT/v2.1/"'$(tenant_id)s'
stop_process n-api
stop_process n-cpu
# remove previous failure flag file since we are going to restart service
rm -f "$SERVICE_DIR/$SCREEN_NAME"/n-api.failure
rm -f "$SERVICE_DIR/$SCREEN_NAME"/n-cpu.failure
sleep 20
run_process n-api "$NOVA_BIN_DIR/nova-api"
run_process n-cpu "$NOVA_BIN_DIR/nova-compute --config-file $NOVA_CONF" $LIBVIRT_GROUP
fi
if is_service_enabled t-cgw; then
create_cinder_apigw_accounts
run_process t-cgw "python $TRICIRCLE_CINDER_APIGW --config-file $TRICIRCLE_CINDER_APIGW_CONF"
get_or_create_endpoint "volumev2" \
"$POD_REGION_NAME" \
"$CINDER_SERVICE_PROTOCOL://$CINDER_SERVICE_HOST:$CINDER_SERVICE_PORT/v2/"'$(tenant_id)s' \
"$CINDER_SERVICE_PROTOCOL://$CINDER_SERVICE_HOST:$CINDER_SERVICE_PORT/v2/"'$(tenant_id)s' \
"$CINDER_SERVICE_PROTOCOL://$CINDER_SERVICE_HOST:$CINDER_SERVICE_PORT/v2/"'$(tenant_id)s'
fi
if is_service_enabled t-job; then
run_process t-job "python $TRICIRCLE_XJOB --config-file $TRICIRCLE_XJOB_CONF"
fi
fi fi
if [[ "$1" == "unstack" ]]; then if [[ "$1" == "unstack" ]]; then
if is_service_enabled t-dis; then
stop_process t-dis
fi
if is_service_enabled t-prx; then
stop_process t-prx
fi
if is_service_enabled t-api; then if is_service_enabled t-api; then
stop_process t-api stop_process t-api
fi fi
if is_service_enabled t-ngw; then
stop_process t-ngw
fi
if is_service_enabled t-cgw; then
stop_process t-cgw
fi
if is_service_enabled t-job; then
stop_process t-job
fi
if is_service_enabled q-svc1; then
stop_process q-svc1
fi
fi fi
fi fi

devstack/settings

@@ -4,18 +4,12 @@ TRICIRCLE_DIR=$DEST/tricircle
 TRICIRCLE_BRANCH=${TRICIRCLE_BRANCH:-master}

 # common variables
+POD_REGION_NAME=${POD_REGION_NAME:-Pod1}
+TRICIRCLE_NEUTRON_PORT=${TRICIRCLE_NEUTRON_PORT:-20001}
 TRICIRCLE_CONF_DIR=${TRICIRCLE_CONF_DIR:-/etc/tricircle}
+TRICIRCLE_STATE_PATH=${TRICIRCLE_STATE_PATH:-/var/lib/tricircle}

-# tricircle dispatcher
-TRICIRCLE_DISPATCHER=$TRICIRCLE_DIR/cmd/dispatcher.py
-TRICIRCLE_DISPATCHER_CONF=$TRICIRCLE_CONF_DIR/dispatcher.conf
-TRICIRCLE_DISPATCHER_LISTEN_ADDRESS=${TRICIRCLE_DISPATCHER_LISTEN_ADDRESS:-0.0.0.0}
-
-# tricircle proxy
-TRICIRCLE_PROXY=$TRICIRCLE_DIR/cmd/proxy.py
-TRICIRCLE_PROXY_CONF=$TRICIRCLE_CONF_DIR/proxy.conf
-
-# tricircle rest api
+# tricircle rest admin api
 TRICIRCLE_API=$TRICIRCLE_DIR/cmd/api.py
 TRICIRCLE_API_CONF=$TRICIRCLE_CONF_DIR/api.conf
@@ -24,6 +18,28 @@ TRICIRCLE_API_HOST=${TRICIRCLE_API_HOST:-$SERVICE_HOST}
 TRICIRCLE_API_PORT=${TRICIRCLE_API_PORT:-19999}
 TRICIRCLE_API_PROTOCOL=${TRICIRCLE_API_PROTOCOL:-$SERVICE_PROTOCOL}

+# tricircle nova_apigw
+TRICIRCLE_NOVA_APIGW=$TRICIRCLE_DIR/cmd/nova_apigw.py
+TRICIRCLE_NOVA_APIGW_CONF=$TRICIRCLE_CONF_DIR/nova_apigw.conf
+TRICIRCLE_NOVA_APIGW_LISTEN_ADDRESS=${TRICIRCLE_NOVA_APIGW_LISTEN_ADDRESS:-0.0.0.0}
+TRICIRCLE_NOVA_APIGW_HOST=${TRICIRCLE_NOVA_APIGW_HOST:-$SERVICE_HOST}
+TRICIRCLE_NOVA_APIGW_PORT=${TRICIRCLE_NOVA_APIGW_PORT:-19998}
+TRICIRCLE_NOVA_APIGW_PROTOCOL=${TRICIRCLE_NOVA_APIGW_PROTOCOL:-$SERVICE_PROTOCOL}
+
+# tricircle cinder_apigw
+TRICIRCLE_CINDER_APIGW=$TRICIRCLE_DIR/cmd/cinder_apigw.py
+TRICIRCLE_CINDER_APIGW_CONF=$TRICIRCLE_CONF_DIR/cinder_apigw.conf
+TRICIRCLE_CINDER_APIGW_LISTEN_ADDRESS=${TRICIRCLE_CINDER_APIGW_LISTEN_ADDRESS:-0.0.0.0}
+TRICIRCLE_CINDER_APIGW_HOST=${TRICIRCLE_CINDER_APIGW_HOST:-$SERVICE_HOST}
+TRICIRCLE_CINDER_APIGW_PORT=${TRICIRCLE_CINDER_APIGW_PORT:-19997}
+TRICIRCLE_CINDER_APIGW_PROTOCOL=${TRICIRCLE_CINDER_APIGW_PROTOCOL:-$SERVICE_PROTOCOL}
+
+# tricircle xjob
+TRICIRCLE_XJOB=$TRICIRCLE_DIR/cmd/xjob.py
+TRICIRCLE_XJOB_CONF=$TRICIRCLE_CONF_DIR/xjob.conf
+
 TRICIRCLE_AUTH_CACHE_DIR=${TRICIRCLE_AUTH_CACHE_DIR:-/var/cache/tricircle}

 export PYTHONPATH=$PYTHONPATH:$TRICIRCLE_DIR


@@ -1,7 +1,7 @@
 ================
 Tricircle API v1
 ================
-This API describes the ways of interacting with Tricircle(Cascade) service via
+This API describes the ways of interacting with Tricircle service via
 HTTP protocol using Representational State Transfer(ReST).

 Application Root [/]
@@ -13,106 +13,146 @@ API v1 Root [/v1/]
 ==================
 All API v1 URLs are relative to API v1 root.

-Site [/sites/{site_id}]
-=======================
+Pod [/pods/{pod_id}]
+====================

-A site represents a region in Keystone. When operating a site, Tricircle
-decides the correct endpoints to send request based on the region of the site.
-Considering the 2-layers architecture of Tricircle, we also have 2 kinds of
-sites: top site and bottom site. A site has the following attributes:
+A pod represents a region in Keystone. When operating a pod, Tricircle
+decides the correct endpoints to send requests based on the region of the pod.
+Considering the 2-layer architecture of Tricircle, we also have 2 kinds of
+pods: top pod and bottom pod. A pod has the following attributes:

-- site_id
-- site_name
-- az_id
+- pod_id
+- pod_name
+- pod_az_name
+- dc_name
+- az_name

-**site_id** is automatically generated when creating a site. **site_name** is
-specified by user but **MUST** match the region name registered in Keystone.
-When creating a bottom site, Tricircle automatically creates a host aggregate
-and assigns the new availability zone id to **az_id**. Top site doesn't need a
-host aggregate so **az_id** is left empty.
+**pod_id** is automatically generated when creating a pod.
+
+**pod_name** is specified by the user but **MUST** match the region name
+registered in Keystone. When creating a bottom pod, Tricircle automatically
+creates a host aggregate and assigns the new availability zone id to
+**az_name**. When **az_name** is empty, this pod is the top region and no
+host aggregate will be generated. If **az_name** is not empty, this pod
+belongs to that availability zone. Multiple pods with the same **az_name**
+means that these pods are under the same availability zone.
+
+**pod_az_name** is the az name in the bottom pod. It can be empty; if empty,
+no az parameter will be added to the request to the bottom pod. If
+**pod_az_name** is different from **az_name**, the az parameter will be
+replaced with **pod_az_name** when the request is forwarded to the
+corresponding bottom pod.
+
+**dc_name** is the name of the data center where the pod is located.

 URL Parameters
 --------------

-- site_id: Site id
+- pod_id: Pod id

 Models
 ------

 ::

     {
-        "site_id": "302e02a6-523c-4a92-a8d1-4939b31a788c",
-        "site_name": "Site1",
-        "az_id": "az_Site1"
+        "pod_id": "302e02a6-523c-4a92-a8d1-4939b31a788c",
+        "pod_name": "pod1",
+        "pod_az_name": "az1",
+        "dc_name": "data center 1",
+        "az_name": "az1"
     }

-Retrieve Site List [GET]
-------------------------
+Retrieve Pod List [GET]
+-----------------------

-- URL: /sites
+- URL: /pods
 - Status: 200
-- Returns: List of Sites
+- Returns: List of Pods

 Response

 ::

     {
-        "sites": [
+        "pods": [
             {
-                "site_id": "f91ca3a5-d5c6-45d6-be4c-763f5a2c4aa3",
-                "site_name": "RegionOne",
-                "az_id": ""
+                "pod_id": "f91ca3a5-d5c6-45d6-be4c-763f5a2c4aa3",
+                "pod_name": "RegionOne"
             },
             {
-                "site_id": "302e02a6-523c-4a92-a8d1-4939b31a788c",
-                "site_name": "Site1",
-                "az_id": "az_Site1"
+                "pod_id": "302e02a6-523c-4a92-a8d1-4939b31a788c",
+                "pod_name": "pod1",
+                "pod_az_name": "az1",
+                "dc_name": "data center 1",
+                "az_name": "az1"
             }
         ]
     }

-Retrieve a Single Site [GET]
-----------------------------
+Retrieve a Single Pod [GET]
+---------------------------

-- URL: /sites/site_id
+- URL: /pods/pod_id
 - Status: 200
-- Returns: Site
+- Returns: Pod

 Response

 ::

     {
-        "site": {
-            "site_id": "302e02a6-523c-4a92-a8d1-4939b31a788c",
-            "site_name": "Site1",
-            "az_id": "az_Site1"
+        "pod": {
+            "pod_id": "302e02a6-523c-4a92-a8d1-4939b31a788c",
+            "pod_name": "pod1",
+            "pod_az_name": "az1",
+            "dc_name": "data center 1",
+            "az_name": "az1"
         }
     }

-Create a Site [POST]
---------------------
+Create a Pod [POST]
+-------------------

-- URL: /sites
+- URL: /pods
 - Status: 201
-- Returns: Created Site
+- Returns: Created Pod

 Request (application/json)

-.. csv-table::
-    :header: "Parameter", "Type", "Description"
-
-    name, string, name of the Site
-    top, bool, "indicate whether it's a top Site, optional, default false"
-
 ::

+    # for the pod that represents the region where the Tricircle is running
     {
-        "name": "RegionOne"
-        "top": true
+        "pod": {
+            "pod_name": "RegionOne"
+        }
+    }
+
+    # for the bottom pod which is managed by Tricircle
+    {
+        "pod": {
+            "pod_name": "pod1",
+            "pod_az_name": "az1",
+            "dc_name": "data center 1",
+            "az_name": "az1"
+        }
     }

 Response

 ::

+    # for the pod that represents the region where the Tricircle is running
     {
-        "site": {
-            "site_id": "f91ca3a5-d5c6-45d6-be4c-763f5a2c4aa3",
-            "site_name": "RegionOne",
-            "az_id": ""
+        "pod": {
+            "pod_id": "302e02a6-523c-4a92-a8d1-4939b31a788c",
+            "pod_name": "RegionOne",
+            "pod_az_name": "",
+            "dc_name": "",
+            "az_name": ""
+        }
+    }
+
+    # for the bottom pod which is managed by Tricircle
+    {
+        "pod": {
+            "pod_id": "302e02a6-523c-4a92-a8d1-4939b31a788c",
+            "pod_name": "pod1",
+            "pod_az_name": "az1",
+            "dc_name": "data center 1",
+            "az_name": "az1"
         }
     }
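Putting the spec together, a pod list query against the DevStack deployment
from the README (API port 19999 as configured in devstack/settings; $token
obtained as shown there) would look like:

::

    curl -X GET http://127.0.0.1:19999/v1.0/pods -H "X-Auth-Token: $token"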

etc/api-cfg-gen.conf (new file)

@@ -0,0 +1,16 @@
[DEFAULT]
output_file = etc/api.conf.sample
wrap_width = 79
namespace = tricircle.api
namespace = tricircle.common
namespace = tricircle.db
namespace = oslo.log
namespace = oslo.messaging
namespace = oslo.policy
namespace = oslo.service.periodic_task
namespace = oslo.service.service
namespace = oslo.service.sslutils
namespace = oslo.db
namespace = oslo.middleware
namespace = oslo.concurrency
namespace = keystonemiddleware.auth_token
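This file drives the standard oslo-config-generator tool; a sample
configuration file can then be produced with:
```
oslo-config-generator --config-file etc/api-cfg-gen.conf
```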

etc/api.conf (deleted)

@@ -1,412 +0,0 @@
[DEFAULT]
# Print more verbose output (set logging level to INFO instead of default WARNING level).
# verbose = True
# Print debugging output (set logging level to DEBUG instead of default WARNING level).
# debug = False
# Where to store Tricircle state files. This directory must be writable by the
# user executing the agent.
# state_path = /var/lib/tricircle
# log_format = %(asctime)s %(levelname)8s [%(name)s] %(message)s
# log_date_format = %Y-%m-%d %H:%M:%S
# use_syslog -> syslog
# log_file and log_dir -> log_dir/log_file
# (not log_file) and log_dir -> log_dir/{binary_name}.log
# use_stderr -> stderr
# (not user_stderr) and (not log_file) -> stdout
# publish_errors -> notification system
# use_syslog = False
# syslog_log_facility = LOG_USER
# use_stderr = True
# log_file =
# log_dir =
# publish_errors = False
# Address to bind the API server to
# bind_host = 127.0.0.1
# Port the bind the API server to
# bind_port = 19999
# Paste configuration file
# api_paste_config = api-paste.ini
# (StrOpt) Hostname to be used by the tricircle server, agents and services
# running on this machine. All the agents and services running on this machine
# must use the same host value.
# The default value is hostname of the machine.
#
# host =
# admin_tenant_name = %SERVICE_TENANT_NAME%
# admin_user = %SERVICE_USER%
# admin_password = %SERVICE_PASSWORD%
# Enable or disable bulk create/update/delete operations
# allow_bulk = True
# Enable or disable pagination
# allow_pagination = False
# Enable or disable sorting
# allow_sorting = False
# Default maximum number of items returned in a single response,
# value == infinite and value < 0 means no max limit, and value must
# be greater than 0. If the number of items requested is greater than
# pagination_max_limit, server will just return pagination_max_limit
# of number of items.
# pagination_max_limit = -1
# =========== WSGI parameters related to the API server ==============
# Number of separate worker processes to spawn. The default, 0, runs the
# worker thread in the current process. Greater than 0 launches that number of
# child processes as workers. The parent process manages them.
# api_workers = 3
# Number of separate RPC worker processes to spawn. The default, 0, runs the
# worker thread in the current process. Greater than 0 launches that number of
# child processes as RPC workers. The parent process manages them.
# This feature is experimental until issues are addressed and testing has been
# enabled for various plugins for compatibility.
# rpc_workers = 0
# Timeout for client connections socket operations. If an
# incoming connection is idle for this number of seconds it
# will be closed. A value of '0' means wait forever. (integer
# value)
# client_socket_timeout = 900
# wsgi keepalive option. Determines if connections are allowed to be held open
# by clients after a request is fulfilled. A value of False will ensure that
# the socket connection will be explicitly closed once a response has been
# sent to the client.
# wsgi_keep_alive = True
# Sets the value of TCP_KEEPIDLE in seconds to use for each server socket when
# starting API server. Not supported on OS X.
# tcp_keepidle = 600
# Number of seconds to keep retrying to listen
# retry_until_window = 30
# Number of backlog requests to configure the socket with.
# backlog = 4096
# Max header line to accommodate large tokens
# max_header_line = 16384
# Enable SSL on the API server
# use_ssl = False
# Certificate file to use when starting API server securely
# ssl_cert_file = /path/to/certfile
# Private key file to use when starting API server securely
# ssl_key_file = /path/to/keyfile
# CA certificate file to use when starting API server securely to
# verify connecting clients. This is an optional parameter only required if
# API clients need to authenticate to the API server using SSL certificates
# signed by a trusted CA
# ssl_ca_file = /path/to/cafile
# ======== end of WSGI parameters related to the API server ==========
# The strategy to be used for auth.
# Supported values are 'keystone'(default), 'noauth'.
# auth_strategy = keystone
[filter:authtoken]
# paste.filter_factory = keystonemiddleware.auth_token:filter_factory
[keystone_authtoken]
# auth_uri = http://162.3.111.227:35357/v3
# identity_uri = http://162.3.111.227:35357
# admin_tenant_name = service
# admin_user = tricircle
# admin_password = 1234
# auth_version = 3
[database]
# This line MUST be changed to actually run the plugin.
# Example:
# connection = mysql://root:pass@127.0.0.1:3306/neutron
# Replace 127.0.0.1 above with the IP address of the database used by the
# main neutron server. (Leave it as is if the database runs on this host.)
# connection = sqlite://
# NOTE: In deployment the [database] section and its connection attribute may
# be set in the corresponding core plugin '.ini' file. However, it is suggested
# to put the [database] section and its connection attribute in this
# configuration file.
# Database engine for which script will be generated when using offline
# migration
# engine =
# The SQLAlchemy connection string used to connect to the slave database
# slave_connection =
# Database reconnection retry times - in event connectivity is lost
# set to -1 implies an infinite retry count
# max_retries = 10
# Database reconnection interval in seconds - if the initial connection to the
# database fails
# retry_interval = 10
# Minimum number of SQL connections to keep open in a pool
# min_pool_size = 1
# Maximum number of SQL connections to keep open in a pool
# max_pool_size = 10
# Timeout in seconds before idle sql connections are reaped
# idle_timeout = 3600
# If set, use this value for max_overflow with sqlalchemy
# max_overflow = 20
# Verbosity of SQL debugging information. 0=None, 100=Everything
# connection_debug = 0
# Add python stack traces to SQL as comment strings
# connection_trace = False
# If set, use this value for pool_timeout with sqlalchemy
# pool_timeout = 10
[client]
# Keystone authentication URL
# auth_url = http://127.0.0.1:5000/v3
# Keystone service URL
# identity_url = http://127.0.0.1:35357/v3
# If set to True, endpoint will be automatically refreshed if timeout
# accessing endpoint.
# auto_refresh_endpoint = False
# Name of top site which client needs to access
# top_site_name =
# Username of admin account for synchronizing endpoint with Keystone
# admin_username =
# Password of admin account for synchronizing endpoint with Keystone
# admin_password =
# Tenant name of admin account for synchronizing endpoint with Keystone
# admin_tenant =
# User domain name of admin account for synchronizing endpoint with Keystone
# admin_user_domain_name = default
# Tenant domain name of admin account for synchronizing endpoint with Keystone
# admin_tenant_domain_name = default
# Timeout for glance client in seconds
# glance_timeout = 60
# Timeout for neutron client in seconds
# neutron_timeout = 60
# Timeout for nova client in seconds
# nova_timeout = 60
[oslo_concurrency]
# Directory to use for lock files. For security, the specified directory should
# only be writable by the user running the processes that need locking.
# Defaults to environment variable OSLO_LOCK_PATH. If external locks are used,
# a lock path must be set.
lock_path = $state_path/lock
# Enables or disables inter-process locks.
# disable_process_locking = False
[oslo_policy]
# The JSON file that defines policies.
# policy_file = policy.json
# Default rule. Enforced when a requested rule is not found.
# policy_default_rule = default
# Directories where policy configuration files are stored.
# They can be relative to any directory in the search path defined by the
# config_dir option, or absolute paths. The file defined by policy_file
# must exist for these directories to be searched. Missing or empty
# directories are ignored.
# policy_dirs = policy.d
[oslo_messaging_amqp]
#
# From oslo.messaging
#
# Address prefix used when sending to a specific server (string value)
# server_request_prefix = exclusive
# Address prefix used when broadcasting to all servers (string value)
# broadcast_prefix = broadcast
# Address prefix when sending to any server in group (string value)
# group_request_prefix = unicast
# Name for the AMQP container (string value)
# container_name =
# Timeout for inactive connections (in seconds) (integer value)
# idle_timeout = 0
# Debug: dump AMQP frames to stdout (boolean value)
# trace = false
# CA certificate PEM file for verifing server certificate (string value)
# ssl_ca_file =
# Identifying certificate PEM file to present to clients (string value)
# ssl_cert_file =
# Private key PEM file used to sign cert_file certificate (string value)
# ssl_key_file =
# Password for decrypting ssl_key_file (if encrypted) (string value)
# ssl_key_password =
# Accept clients using either SSL or plain TCP (boolean value)
# allow_insecure_clients = false
[oslo_messaging_qpid]
#
# From oslo.messaging
#
# Use durable queues in AMQP. (boolean value)
# amqp_durable_queues = false
# Auto-delete queues in AMQP. (boolean value)
# amqp_auto_delete = false
# Size of RPC connection pool. (integer value)
# rpc_conn_pool_size = 30
# Qpid broker hostname. (string value)
# qpid_hostname = localhost
# Qpid broker port. (integer value)
# qpid_port = 5672
# Qpid HA cluster host:port pairs. (list value)
# qpid_hosts = $qpid_hostname:$qpid_port
# Username for Qpid connection. (string value)
# qpid_username =
# Password for Qpid connection. (string value)
# qpid_password =
# Space separated list of SASL mechanisms to use for auth. (string value)
# qpid_sasl_mechanisms =
# Seconds between connection keepalive heartbeats. (integer value)
# qpid_heartbeat = 60
# Transport to use, either 'tcp' or 'ssl'. (string value)
# qpid_protocol = tcp
# Whether to disable the Nagle algorithm. (boolean value)
# qpid_tcp_nodelay = true
# The number of prefetched messages held by receiver. (integer value)
# qpid_receiver_capacity = 1
# The qpid topology version to use. Version 1 is what was originally used by
# impl_qpid. Version 2 includes some backwards-incompatible changes that allow
# broker federation to work. Users should update to version 2 when they are
# able to take everything down, as it requires a clean break. (integer value)
# qpid_topology_version = 1
[oslo_messaging_rabbit]
#
# From oslo.messaging
#
# Use durable queues in AMQP. (boolean value)
# amqp_durable_queues = false
# Auto-delete queues in AMQP. (boolean value)
# amqp_auto_delete = false
# Size of RPC connection pool. (integer value)
# rpc_conn_pool_size = 30
# SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and
# SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some
# distributions. (string value)
# kombu_ssl_version =
# SSL key file (valid only if SSL enabled). (string value)
# kombu_ssl_keyfile =
# SSL cert file (valid only if SSL enabled). (string value)
# kombu_ssl_certfile =
# SSL certification authority file (valid only if SSL enabled). (string value)
# kombu_ssl_ca_certs =
# How long to wait before reconnecting in response to an AMQP consumer cancel
# notification. (floating point value)
# kombu_reconnect_delay = 1.0
# The RabbitMQ broker address where a single node is used. (string value)
# rabbit_host = localhost
# The RabbitMQ broker port where a single node is used. (integer value)
# rabbit_port = 5672
# RabbitMQ HA cluster host:port pairs. (list value)
# rabbit_hosts = $rabbit_host:$rabbit_port
# Connect over SSL for RabbitMQ. (boolean value)
# rabbit_use_ssl = false
# The RabbitMQ userid. (string value)
# rabbit_userid = guest
# The RabbitMQ password. (string value)
# rabbit_password = guest
# The RabbitMQ login method. (string value)
# rabbit_login_method = AMQPLAIN
# The RabbitMQ virtual host. (string value)
# rabbit_virtual_host = /
# How frequently to retry connecting with RabbitMQ. (integer value)
# rabbit_retry_interval = 1
# How long to backoff for between retries when connecting to RabbitMQ. (integer
# value)
# rabbit_retry_backoff = 2
# Maximum number of RabbitMQ connection retries. Default is 0 (infinite retry
# count). (integer value)
# rabbit_max_retries = 0
# Use HA queues in RabbitMQ (x-ha-policy: all). If you change this option, you
# must wipe the RabbitMQ database. (boolean value)
# rabbit_ha_queues = false
# Deprecated, use rpc_backend=kombu+memory or rpc_backend=fake (boolean value)
# fake_rabbit = false

etc/cinder_apigw-cfg-gen.conf (new file)

@@ -0,0 +1,16 @@
[DEFAULT]
output_file = etc/cinder_apigw.conf.sample
wrap_width = 79
namespace = tricircle.cinder_apigw
namespace = tricircle.common
namespace = tricircle.db
namespace = oslo.log
namespace = oslo.messaging
namespace = oslo.policy
namespace = oslo.service.periodic_task
namespace = oslo.service.service
namespace = oslo.service.sslutils
namespace = oslo.db
namespace = oslo.middleware
namespace = oslo.concurrency
namespace = keystonemiddleware.auth_token


@@ -1,521 +0,0 @@
[DEFAULT]
# Print more verbose output (set logging level to INFO instead of default WARNING level).
# verbose = True
# Print debugging output (set logging level to DEBUG instead of default WARNING level).
# debug = True
# log_format = %(asctime)s %(levelname)8s [%(name)s] %(message)s
# log_date_format = %Y-%m-%d %H:%M:%S
# use_syslog -> syslog
# log_file and log_dir -> log_dir/log_file
# (not log_file) and log_dir -> log_dir/{binary_name}.log
# use_stderr -> stderr
# (not use_stderr) and (not log_file) -> stdout
# publish_errors -> notification system
# use_syslog = False
# syslog_log_facility = LOG_USER
# use_stderr = True
# log_file =
# log_dir =
# publish_errors = False
# Address to bind the API server to
# bind_host = 0.0.0.0
# Port to bind the API server to
# bind_port = 9696
#
# Options defined in oslo.messaging
#
# Use durable queues in amqp. (boolean value)
# Deprecated group/name - [DEFAULT]/rabbit_durable_queues
# amqp_durable_queues=false
# Auto-delete queues in amqp. (boolean value)
# amqp_auto_delete=false
# Size of RPC connection pool. (integer value)
# rpc_conn_pool_size=30
# Qpid broker hostname. (string value)
# qpid_hostname=localhost
# Qpid broker port. (integer value)
# qpid_port=5672
# Qpid HA cluster host:port pairs. (list value)
# qpid_hosts=$qpid_hostname:$qpid_port
# Username for Qpid connection. (string value)
# qpid_username=
# Password for Qpid connection. (string value)
# qpid_password=
# Space separated list of SASL mechanisms to use for auth.
# (string value)
# qpid_sasl_mechanisms=
# Seconds between connection keepalive heartbeats. (integer
# value)
# qpid_heartbeat=60
# Transport to use, either 'tcp' or 'ssl'. (string value)
# qpid_protocol=tcp
# Whether to disable the Nagle algorithm. (boolean value)
# qpid_tcp_nodelay=true
# The qpid topology version to use. Version 1 is what was
# originally used by impl_qpid. Version 2 includes some
# backwards-incompatible changes that allow broker federation
# to work. Users should update to version 2 when they are
# able to take everything down, as it requires a clean break.
# (integer value)
# qpid_topology_version=1
# SSL version to use (valid only if SSL enabled). valid values
# are TLSv1, SSLv23 and SSLv3. SSLv2 may be available on some
# distributions. (string value)
# kombu_ssl_version=
# SSL key file (valid only if SSL enabled). (string value)
# kombu_ssl_keyfile=
# SSL cert file (valid only if SSL enabled). (string value)
# kombu_ssl_certfile=
# SSL certification authority file (valid only if SSL
# enabled). (string value)
# kombu_ssl_ca_certs=
# How long to wait before reconnecting in response to an AMQP
# consumer cancel notification. (floating point value)
# kombu_reconnect_delay=1.0
# The RabbitMQ broker address where a single node is used.
# (string value)
# rabbit_host=localhost
# The RabbitMQ broker port where a single node is used.
# (integer value)
# rabbit_port=5672
# RabbitMQ HA cluster host:port pairs. (list value)
# rabbit_hosts=$rabbit_host:$rabbit_port
# Connect over SSL for RabbitMQ. (boolean value)
# rabbit_use_ssl=false
# The RabbitMQ userid. (string value)
# rabbit_userid=guest
# The RabbitMQ password. (string value)
# rabbit_password=guest
# the RabbitMQ login method (string value)
# rabbit_login_method=AMQPLAIN
# The RabbitMQ virtual host. (string value)
# rabbit_virtual_host=/
# How frequently to retry connecting with RabbitMQ. (integer
# value)
# rabbit_retry_interval=1
# How long to backoff for between retries when connecting to
# RabbitMQ. (integer value)
# rabbit_retry_backoff=2
# Maximum number of RabbitMQ connection retries. Default is 0
# (infinite retry count). (integer value)
# rabbit_max_retries=0
# Use HA queues in RabbitMQ (x-ha-policy: all). If you change
# this option, you must wipe the RabbitMQ database. (boolean
# value)
# rabbit_ha_queues=false
# If passed, use a fake RabbitMQ provider. (boolean value)
# fake_rabbit=false
# ZeroMQ bind address. Should be a wildcard (*), an ethernet
# interface, or IP. The "host" option should point or resolve
# to this address. (string value)
# rpc_zmq_bind_address=*
# MatchMaker driver. (string value)
# rpc_zmq_matchmaker=oslo.messaging._drivers.matchmaker.MatchMakerLocalhost
# ZeroMQ receiver listening port. (integer value)
# rpc_zmq_port=9501
# Number of ZeroMQ contexts, defaults to 1. (integer value)
# rpc_zmq_contexts=1
# Maximum number of ingress messages to locally buffer per
# topic. Default is unlimited. (integer value)
# rpc_zmq_topic_backlog=
# Directory for holding IPC sockets. (string value)
# rpc_zmq_ipc_dir=/var/run/openstack
# Name of this node. Must be a valid hostname, FQDN, or IP
# address. Must match "host" option, if running Nova. (string
# value)
# rpc_zmq_host=oslo
# Seconds to wait before a cast expires (TTL). Only supported
# by impl_zmq. (integer value)
# rpc_cast_timeout=30
# Heartbeat frequency. (integer value)
# matchmaker_heartbeat_freq=300
# Heartbeat time-to-live. (integer value)
# matchmaker_heartbeat_ttl=600
# Size of RPC greenthread pool. (integer value)
# rpc_thread_pool_size=64
# Driver or drivers to handle sending notifications. (multi
# valued)
# notification_driver=
# AMQP topic used for OpenStack notifications. (list value)
# Deprecated group/name - [rpc_notifier2]/topics
# notification_topics=notifications
# Seconds to wait for a response from a call. (integer value)
# rpc_response_timeout=60
# A URL representing the messaging driver to use and its full
# configuration. If not set, we fall back to the rpc_backend
# option and driver specific configuration. (string value)
# transport_url=
# The messaging driver to use, defaults to rabbit. Other
# drivers include qpid and zmq. (string value)
# rpc_backend=rabbit
# The default exchange under which topics are scoped. May be
# overridden by an exchange name specified in the
# transport_url option. (string value)
# control_exchange=openstack
[database]
# This line MUST be changed to actually run the plugin.
# Example:
# connection = mysql+pymysql://root:pass@127.0.0.1:3306/neutron
# Replace 127.0.0.1 above with the IP address of the database used by the
# main neutron server. (Leave it as is if the database runs on this host.)
# connection = sqlite://
# NOTE: In deployment the [database] section and its connection attribute may
# be set in the corresponding core plugin '.ini' file. However, it is suggested
# to put the [database] section and its connection attribute in this
# configuration file.
# Database engine for which script will be generated when using offline
# migration
# engine =
# The SQLAlchemy connection string used to connect to the slave database
# slave_connection =
# Database reconnection retry times - in event connectivity is lost
# set to -1 implies an infinite retry count
# max_retries = 10
# Database reconnection interval in seconds - if the initial connection to the
# database fails
# retry_interval = 10
# Minimum number of SQL connections to keep open in a pool
# min_pool_size = 1
# Maximum number of SQL connections to keep open in a pool
# max_pool_size = 10
# Timeout in seconds before idle sql connections are reaped
# idle_timeout = 3600
# If set, use this value for max_overflow with sqlalchemy
# max_overflow = 20
# Verbosity of SQL debugging information. 0=None, 100=Everything
# connection_debug = 0
# Add python stack traces to SQL as comment strings
# connection_trace = False
# If set, use this value for pool_timeout with sqlalchemy
# pool_timeout = 10
[client]
# Keystone authentication URL
# auth_url = http://127.0.0.1:5000/v3
# Keystone service URL
# identity_url = http://127.0.0.1:35357/v3
# If set to True, endpoint will be automatically refreshed if timeout
# accessing endpoint.
# auto_refresh_endpoint = False
# Name of top site which client needs to access
# top_site_name =
# Username of admin account for synchronizing endpoint with Keystone
# admin_username =
# Password of admin account for synchronizing endpoint with Keystone
# admin_password =
# Tenant name of admin account for synchronizing endpoint with Keystone
# admin_tenant =
# User domain name of admin account for synchronizing endpoint with Keystone
# admin_user_domain_name = default
# Tenant domain name of admin account for synchronizing endpoint with Keystone
# admin_tenant_domain_name = default
# Timeout for glance client in seconds
# glance_timeout = 60
# Timeout for neutron client in seconds
# neutron_timeout = 60
# Timeout for nova client in seconds
# nova_timeout = 60
[oslo_concurrency]
# Directory to use for lock files. For security, the specified directory should
# only be writable by the user running the processes that need locking.
# Defaults to environment variable OSLO_LOCK_PATH. If external locks are used,
# a lock path must be set.
lock_path = $state_path/lock
# Enables or disables inter-process locks.
# disable_process_locking = False
[oslo_messaging_amqp]
#
# From oslo.messaging
#
# Address prefix used when sending to a specific server (string value)
# Deprecated group/name - [amqp1]/server_request_prefix
# server_request_prefix = exclusive
# Address prefix used when broadcasting to all servers (string value)
# Deprecated group/name - [amqp1]/broadcast_prefix
# broadcast_prefix = broadcast
# Address prefix when sending to any server in group (string value)
# Deprecated group/name - [amqp1]/group_request_prefix
# group_request_prefix = unicast
# Name for the AMQP container (string value)
# Deprecated group/name - [amqp1]/container_name
# container_name =
# Timeout for inactive connections (in seconds) (integer value)
# Deprecated group/name - [amqp1]/idle_timeout
# idle_timeout = 0
# Debug: dump AMQP frames to stdout (boolean value)
# Deprecated group/name - [amqp1]/trace
# trace = false
# CA certificate PEM file for verifying server certificate (string value)
# Deprecated group/name - [amqp1]/ssl_ca_file
# ssl_ca_file =
# Identifying certificate PEM file to present to clients (string value)
# Deprecated group/name - [amqp1]/ssl_cert_file
# ssl_cert_file =
# Private key PEM file used to sign cert_file certificate (string value)
# Deprecated group/name - [amqp1]/ssl_key_file
# ssl_key_file =
# Password for decrypting ssl_key_file (if encrypted) (string value)
# Deprecated group/name - [amqp1]/ssl_key_password
# ssl_key_password =
# Accept clients using either SSL or plain TCP (boolean value)
# Deprecated group/name - [amqp1]/allow_insecure_clients
# allow_insecure_clients = false
[oslo_messaging_qpid]
#
# From oslo.messaging
#
# Use durable queues in AMQP. (boolean value)
# Deprecated group/name - [DEFAULT]/rabbit_durable_queues
# amqp_durable_queues = false
# Auto-delete queues in AMQP. (boolean value)
# Deprecated group/name - [DEFAULT]/amqp_auto_delete
# amqp_auto_delete = false
# Size of RPC connection pool. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_conn_pool_size
# rpc_conn_pool_size = 30
# Qpid broker hostname. (string value)
# Deprecated group/name - [DEFAULT]/qpid_hostname
# qpid_hostname = localhost
# Qpid broker port. (integer value)
# Deprecated group/name - [DEFAULT]/qpid_port
# qpid_port = 5672
# Qpid HA cluster host:port pairs. (list value)
# Deprecated group/name - [DEFAULT]/qpid_hosts
# qpid_hosts = $qpid_hostname:$qpid_port
# Username for Qpid connection. (string value)
# Deprecated group/name - [DEFAULT]/qpid_username
# qpid_username =
# Password for Qpid connection. (string value)
# Deprecated group/name - [DEFAULT]/qpid_password
# qpid_password =
# Space separated list of SASL mechanisms to use for auth. (string value)
# Deprecated group/name - [DEFAULT]/qpid_sasl_mechanisms
# qpid_sasl_mechanisms =
# Seconds between connection keepalive heartbeats. (integer value)
# Deprecated group/name - [DEFAULT]/qpid_heartbeat
# qpid_heartbeat = 60
# Transport to use, either 'tcp' or 'ssl'. (string value)
# Deprecated group/name - [DEFAULT]/qpid_protocol
# qpid_protocol = tcp
# Whether to disable the Nagle algorithm. (boolean value)
# Deprecated group/name - [DEFAULT]/qpid_tcp_nodelay
# qpid_tcp_nodelay = true
# The number of prefetched messages held by receiver. (integer value)
# Deprecated group/name - [DEFAULT]/qpid_receiver_capacity
# qpid_receiver_capacity = 1
# The qpid topology version to use. Version 1 is what was originally used by
# impl_qpid. Version 2 includes some backwards-incompatible changes that allow
# broker federation to work. Users should update to version 2 when they are
# able to take everything down, as it requires a clean break. (integer value)
# Deprecated group/name - [DEFAULT]/qpid_topology_version
# qpid_topology_version = 1
[oslo_messaging_rabbit]
#
# From oslo.messaging
#
# Use durable queues in AMQP. (boolean value)
# Deprecated group/name - [DEFAULT]/rabbit_durable_queues
# amqp_durable_queues = false
# Auto-delete queues in AMQP. (boolean value)
# Deprecated group/name - [DEFAULT]/amqp_auto_delete
# amqp_auto_delete = false
# Size of RPC connection pool. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_conn_pool_size
# rpc_conn_pool_size = 30
# SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and
# SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some
# distributions. (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_version
# kombu_ssl_version =
# SSL key file (valid only if SSL enabled). (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_keyfile
# kombu_ssl_keyfile =
# SSL cert file (valid only if SSL enabled). (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_certfile
# kombu_ssl_certfile =
# SSL certification authority file (valid only if SSL enabled). (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_ca_certs
# kombu_ssl_ca_certs =
# How long to wait before reconnecting in response to an AMQP consumer cancel
# notification. (floating point value)
# Deprecated group/name - [DEFAULT]/kombu_reconnect_delay
# kombu_reconnect_delay = 1.0
# The RabbitMQ broker address where a single node is used. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_host
# rabbit_host = localhost
# The RabbitMQ broker port where a single node is used. (integer value)
# Deprecated group/name - [DEFAULT]/rabbit_port
# rabbit_port = 5672
# RabbitMQ HA cluster host:port pairs. (list value)
# Deprecated group/name - [DEFAULT]/rabbit_hosts
# rabbit_hosts = $rabbit_host:$rabbit_port
# Connect over SSL for RabbitMQ. (boolean value)
# Deprecated group/name - [DEFAULT]/rabbit_use_ssl
# rabbit_use_ssl = false
# The RabbitMQ userid. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_userid
# rabbit_userid = guest
# The RabbitMQ password. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_password
# rabbit_password = guest
# The RabbitMQ login method. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_login_method
# rabbit_login_method = AMQPLAIN
# The RabbitMQ virtual host. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_virtual_host
# rabbit_virtual_host = /
# How frequently to retry connecting with RabbitMQ. (integer value)
# rabbit_retry_interval = 1
# How long to backoff for between retries when connecting to RabbitMQ. (integer
# value)
# Deprecated group/name - [DEFAULT]/rabbit_retry_backoff
# rabbit_retry_backoff = 2
# Maximum number of RabbitMQ connection retries. Default is 0 (infinite retry
# count). (integer value)
# Deprecated group/name - [DEFAULT]/rabbit_max_retries
# rabbit_max_retries = 0
# Use HA queues in RabbitMQ (x-ha-policy: all). If you change this option, you
# must wipe the RabbitMQ database. (boolean value)
# Deprecated group/name - [DEFAULT]/rabbit_ha_queues
# rabbit_ha_queues = false
# Deprecated, use rpc_backend=kombu+memory or rpc_backend=fake (boolean value)
# Deprecated group/name - [DEFAULT]/fake_rabbit
# fake_rabbit = false

etc/nova_apigw-cfg-gen.conf Normal file

@@ -0,0 +1,16 @@
[DEFAULT]
output_file = etc/nova_apigw.conf.sample
wrap_width = 79
namespace = tricircle.nova_apigw
namespace = tricircle.common
namespace = tricircle.db
namespace = oslo.log
namespace = oslo.messaging
namespace = oslo.policy
namespace = oslo.service.periodic_task
namespace = oslo.service.service
namespace = oslo.service.sslutils
namespace = oslo.db
namespace = oslo.middleware
namespace = oslo.concurrency
namespace = keystonemiddleware.auth_token

@@ -1,485 +0,0 @@
{
"context_is_admin": "role:admin",
"admin_or_owner": "is_admin:True or project_id:%(project_id)s",
"default": "rule:admin_or_owner",
"cells_scheduler_filter:TargetCellFilter": "is_admin:True",
"compute:create": "",
"compute:create:attach_network": "",
"compute:create:attach_volume": "",
"compute:create:forced_host": "is_admin:True",
"compute:get": "",
"compute:get_all": "",
"compute:get_all_tenants": "is_admin:True",
"compute:update": "",
"compute:get_instance_metadata": "",
"compute:get_all_instance_metadata": "",
"compute:get_all_instance_system_metadata": "",
"compute:update_instance_metadata": "",
"compute:delete_instance_metadata": "",
"compute:get_instance_faults": "",
"compute:get_diagnostics": "",
"compute:get_instance_diagnostics": "",
"compute:start": "rule:admin_or_owner",
"compute:stop": "rule:admin_or_owner",
"compute:get_lock": "",
"compute:lock": "",
"compute:unlock": "",
"compute:unlock_override": "rule:admin_api",
"compute:get_vnc_console": "",
"compute:get_spice_console": "",
"compute:get_rdp_console": "",
"compute:get_serial_console": "",
"compute:get_mks_console": "",
"compute:get_console_output": "",
"compute:reset_network": "",
"compute:inject_network_info": "",
"compute:add_fixed_ip": "",
"compute:remove_fixed_ip": "",
"compute:attach_volume": "",
"compute:detach_volume": "",
"compute:swap_volume": "",
"compute:attach_interface": "",
"compute:detach_interface": "",
"compute:set_admin_password": "",
"compute:rescue": "",
"compute:unrescue": "",
"compute:suspend": "",
"compute:resume": "",
"compute:pause": "",
"compute:unpause": "",
"compute:shelve": "",
"compute:shelve_offload": "",
"compute:unshelve": "",
"compute:snapshot": "",
"compute:snapshot_volume_backed": "",
"compute:backup": "",
"compute:resize": "",
"compute:confirm_resize": "",
"compute:revert_resize": "",
"compute:rebuild": "",
"compute:reboot": "",
"compute:security_groups:add_to_instance": "",
"compute:security_groups:remove_from_instance": "",
"compute:delete": "",
"compute:soft_delete": "",
"compute:force_delete": "",
"compute:restore": "",
"compute:volume_snapshot_create": "",
"compute:volume_snapshot_delete": "",
"admin_api": "is_admin:True",
"compute_extension:accounts": "rule:admin_api",
"compute_extension:admin_actions": "rule:admin_api",
"compute_extension:admin_actions:pause": "rule:admin_or_owner",
"compute_extension:admin_actions:unpause": "rule:admin_or_owner",
"compute_extension:admin_actions:suspend": "rule:admin_or_owner",
"compute_extension:admin_actions:resume": "rule:admin_or_owner",
"compute_extension:admin_actions:lock": "rule:admin_or_owner",
"compute_extension:admin_actions:unlock": "rule:admin_or_owner",
"compute_extension:admin_actions:resetNetwork": "rule:admin_api",
"compute_extension:admin_actions:injectNetworkInfo": "rule:admin_api",
"compute_extension:admin_actions:createBackup": "rule:admin_or_owner",
"compute_extension:admin_actions:migrateLive": "rule:admin_api",
"compute_extension:admin_actions:resetState": "rule:admin_api",
"compute_extension:admin_actions:migrate": "rule:admin_api",
"compute_extension:aggregates": "rule:admin_api",
"compute_extension:agents": "rule:admin_api",
"compute_extension:attach_interfaces": "",
"compute_extension:baremetal_nodes": "rule:admin_api",
"compute_extension:cells": "rule:admin_api",
"compute_extension:cells:create": "rule:admin_api",
"compute_extension:cells:delete": "rule:admin_api",
"compute_extension:cells:update": "rule:admin_api",
"compute_extension:cells:sync_instances": "rule:admin_api",
"compute_extension:certificates": "",
"compute_extension:cloudpipe": "rule:admin_api",
"compute_extension:cloudpipe_update": "rule:admin_api",
"compute_extension:config_drive": "",
"compute_extension:console_output": "",
"compute_extension:consoles": "",
"compute_extension:createserverext": "",
"compute_extension:deferred_delete": "",
"compute_extension:disk_config": "",
"compute_extension:evacuate": "rule:admin_api",
"compute_extension:extended_server_attributes": "rule:admin_api",
"compute_extension:extended_status": "",
"compute_extension:extended_availability_zone": "",
"compute_extension:extended_ips": "",
"compute_extension:extended_ips_mac": "",
"compute_extension:extended_vif_net": "",
"compute_extension:extended_volumes": "",
"compute_extension:fixed_ips": "rule:admin_api",
"compute_extension:flavor_access": "",
"compute_extension:flavor_access:addTenantAccess": "rule:admin_api",
"compute_extension:flavor_access:removeTenantAccess": "rule:admin_api",
"compute_extension:flavor_disabled": "",
"compute_extension:flavor_rxtx": "",
"compute_extension:flavor_swap": "",
"compute_extension:flavorextradata": "",
"compute_extension:flavorextraspecs:index": "",
"compute_extension:flavorextraspecs:show": "",
"compute_extension:flavorextraspecs:create": "rule:admin_api",
"compute_extension:flavorextraspecs:update": "rule:admin_api",
"compute_extension:flavorextraspecs:delete": "rule:admin_api",
"compute_extension:flavormanage": "rule:admin_api",
"compute_extension:floating_ip_dns": "",
"compute_extension:floating_ip_pools": "",
"compute_extension:floating_ips": "",
"compute_extension:floating_ips_bulk": "rule:admin_api",
"compute_extension:fping": "",
"compute_extension:fping:all_tenants": "rule:admin_api",
"compute_extension:hide_server_addresses": "is_admin:False",
"compute_extension:hosts": "rule:admin_api",
"compute_extension:hypervisors": "rule:admin_api",
"compute_extension:image_size": "",
"compute_extension:instance_actions": "",
"compute_extension:instance_actions:events": "rule:admin_api",
"compute_extension:instance_usage_audit_log": "rule:admin_api",
"compute_extension:keypairs": "",
"compute_extension:keypairs:index": "",
"compute_extension:keypairs:show": "",
"compute_extension:keypairs:create": "",
"compute_extension:keypairs:delete": "",
"compute_extension:multinic": "",
"compute_extension:networks": "rule:admin_api",
"compute_extension:networks:view": "",
"compute_extension:networks_associate": "rule:admin_api",
"compute_extension:os-tenant-networks": "",
"compute_extension:quotas:show": "",
"compute_extension:quotas:update": "rule:admin_api",
"compute_extension:quotas:delete": "rule:admin_api",
"compute_extension:quota_classes": "",
"compute_extension:rescue": "",
"compute_extension:security_group_default_rules": "rule:admin_api",
"compute_extension:security_groups": "",
"compute_extension:server_diagnostics": "rule:admin_api",
"compute_extension:server_groups": "",
"compute_extension:server_password": "",
"compute_extension:server_usage": "",
"compute_extension:services": "rule:admin_api",
"compute_extension:shelve": "",
"compute_extension:shelveOffload": "rule:admin_api",
"compute_extension:simple_tenant_usage:show": "rule:admin_or_owner",
"compute_extension:simple_tenant_usage:list": "rule:admin_api",
"compute_extension:unshelve": "",
"compute_extension:users": "rule:admin_api",
"compute_extension:virtual_interfaces": "",
"compute_extension:virtual_storage_arrays": "",
"compute_extension:volumes": "",
"compute_extension:volume_attachments:index": "",
"compute_extension:volume_attachments:show": "",
"compute_extension:volume_attachments:create": "",
"compute_extension:volume_attachments:update": "",
"compute_extension:volume_attachments:delete": "",
"compute_extension:volumetypes": "",
"compute_extension:availability_zone:list": "",
"compute_extension:availability_zone:detail": "rule:admin_api",
"compute_extension:used_limits_for_admin": "rule:admin_api",
"compute_extension:migrations:index": "rule:admin_api",
"compute_extension:os-assisted-volume-snapshots:create": "rule:admin_api",
"compute_extension:os-assisted-volume-snapshots:delete": "rule:admin_api",
"compute_extension:console_auth_tokens": "rule:admin_api",
"compute_extension:os-server-external-events:create": "rule:admin_api",
"network:get_all": "",
"network:get": "",
"network:create": "",
"network:delete": "",
"network:associate": "",
"network:disassociate": "",
"network:get_vifs_by_instance": "",
"network:allocate_for_instance": "",
"network:deallocate_for_instance": "",
"network:validate_networks": "",
"network:get_instance_uuids_by_ip_filter": "",
"network:get_instance_id_by_floating_address": "",
"network:setup_networks_on_host": "",
"network:get_backdoor_port": "",
"network:get_floating_ip": "",
"network:get_floating_ip_pools": "",
"network:get_floating_ip_by_address": "",
"network:get_floating_ips_by_project": "",
"network:get_floating_ips_by_fixed_address": "",
"network:allocate_floating_ip": "",
"network:associate_floating_ip": "",
"network:disassociate_floating_ip": "",
"network:release_floating_ip": "",
"network:migrate_instance_start": "",
"network:migrate_instance_finish": "",
"network:get_fixed_ip": "",
"network:get_fixed_ip_by_address": "",
"network:add_fixed_ip_to_instance": "",
"network:remove_fixed_ip_from_instance": "",
"network:add_network_to_project": "",
"network:get_instance_nw_info": "",
"network:get_dns_domains": "",
"network:add_dns_entry": "",
"network:modify_dns_entry": "",
"network:delete_dns_entry": "",
"network:get_dns_entries_by_address": "",
"network:get_dns_entries_by_name": "",
"network:create_private_dns_domain": "",
"network:create_public_dns_domain": "",
"network:delete_dns_domain": "",
"network:attach_external_network": "rule:admin_api",
"network:get_vif_by_mac_address": "",
"os_compute_api:servers:detail:get_all_tenants": "is_admin:True",
"os_compute_api:servers:index:get_all_tenants": "is_admin:True",
"os_compute_api:servers:confirm_resize": "",
"os_compute_api:servers:create": "",
"os_compute_api:servers:create:attach_network": "",
"os_compute_api:servers:create:attach_volume": "",
"os_compute_api:servers:create:forced_host": "rule:admin_api",
"os_compute_api:servers:delete": "",
"os_compute_api:servers:update": "",
"os_compute_api:servers:detail": "",
"os_compute_api:servers:index": "",
"os_compute_api:servers:reboot": "",
"os_compute_api:servers:rebuild": "",
"os_compute_api:servers:resize": "",
"os_compute_api:servers:revert_resize": "",
"os_compute_api:servers:show": "",
"os_compute_api:servers:create_image": "",
"os_compute_api:servers:create_image:allow_volume_backed": "",
"os_compute_api:servers:start": "rule:admin_or_owner",
"os_compute_api:servers:stop": "rule:admin_or_owner",
"os_compute_api:os-access-ips:discoverable": "",
"os_compute_api:os-access-ips": "",
"os_compute_api:os-admin-actions": "rule:admin_api",
"os_compute_api:os-admin-actions:discoverable": "",
"os_compute_api:os-admin-actions:reset_network": "rule:admin_api",
"os_compute_api:os-admin-actions:inject_network_info": "rule:admin_api",
"os_compute_api:os-admin-actions:reset_state": "rule:admin_api",
"os_compute_api:os-admin-password": "",
"os_compute_api:os-admin-password:discoverable": "",
"os_compute_api:os-aggregates:discoverable": "",
"os_compute_api:os-aggregates:index": "rule:admin_api",
"os_compute_api:os-aggregates:create": "rule:admin_api",
"os_compute_api:os-aggregates:show": "rule:admin_api",
"os_compute_api:os-aggregates:update": "rule:admin_api",
"os_compute_api:os-aggregates:delete": "rule:admin_api",
"os_compute_api:os-aggregates:add_host": "rule:admin_api",
"os_compute_api:os-aggregates:remove_host": "rule:admin_api",
"os_compute_api:os-aggregates:set_metadata": "rule:admin_api",
"os_compute_api:os-agents": "rule:admin_api",
"os_compute_api:os-agents:discoverable": "",
"os_compute_api:os-attach-interfaces": "",
"os_compute_api:os-attach-interfaces:discoverable": "",
"os_compute_api:os-baremetal-nodes": "rule:admin_api",
"os_compute_api:os-baremetal-nodes:discoverable": "",
"os_compute_api:os-block-device-mapping-v1:discoverable": "",
"os_compute_api:os-cells": "rule:admin_api",
"os_compute_api:os-cells:create": "rule:admin_api",
"os_compute_api:os-cells:delete": "rule:admin_api",
"os_compute_api:os-cells:update": "rule:admin_api",
"os_compute_api:os-cells:sync_instances": "rule:admin_api",
"os_compute_api:os-cells:discoverable": "",
"os_compute_api:os-certificates:create": "",
"os_compute_api:os-certificates:show": "",
"os_compute_api:os-certificates:discoverable": "",
"os_compute_api:os-cloudpipe": "rule:admin_api",
"os_compute_api:os-cloudpipe:discoverable": "",
"os_compute_api:os-config-drive": "",
"os_compute_api:os-consoles:discoverable": "",
"os_compute_api:os-consoles:create": "",
"os_compute_api:os-consoles:delete": "",
"os_compute_api:os-consoles:index": "",
"os_compute_api:os-consoles:show": "",
"os_compute_api:os-console-output:discoverable": "",
"os_compute_api:os-console-output": "",
"os_compute_api:os-remote-consoles": "",
"os_compute_api:os-remote-consoles:discoverable": "",
"os_compute_api:os-create-backup:discoverable": "",
"os_compute_api:os-create-backup": "rule:admin_or_owner",
"os_compute_api:os-deferred-delete": "",
"os_compute_api:os-deferred-delete:discoverable": "",
"os_compute_api:os-disk-config": "",
"os_compute_api:os-disk-config:discoverable": "",
"os_compute_api:os-evacuate": "rule:admin_api",
"os_compute_api:os-evacuate:discoverable": "",
"os_compute_api:os-extended-server-attributes": "rule:admin_api",
"os_compute_api:os-extended-server-attributes:discoverable": "",
"os_compute_api:os-extended-status": "",
"os_compute_api:os-extended-status:discoverable": "",
"os_compute_api:os-extended-availability-zone": "",
"os_compute_api:os-extended-availability-zone:discoverable": "",
"os_compute_api:extensions": "",
"os_compute_api:extension_info:discoverable": "",
"os_compute_api:os-extended-volumes": "",
"os_compute_api:os-extended-volumes:discoverable": "",
"os_compute_api:os-fixed-ips": "rule:admin_api",
"os_compute_api:os-fixed-ips:discoverable": "",
"os_compute_api:os-flavor-access": "",
"os_compute_api:os-flavor-access:discoverable": "",
"os_compute_api:os-flavor-access:remove_tenant_access": "rule:admin_api",
"os_compute_api:os-flavor-access:add_tenant_access": "rule:admin_api",
"os_compute_api:os-flavor-rxtx": "",
"os_compute_api:os-flavor-rxtx:discoverable": "",
"os_compute_api:flavors:discoverable": "",
"os_compute_api:os-flavor-extra-specs:discoverable": "",
"os_compute_api:os-flavor-extra-specs:index": "",
"os_compute_api:os-flavor-extra-specs:show": "",
"os_compute_api:os-flavor-extra-specs:create": "rule:admin_api",
"os_compute_api:os-flavor-extra-specs:update": "rule:admin_api",
"os_compute_api:os-flavor-extra-specs:delete": "rule:admin_api",
"os_compute_api:os-flavor-manage:discoverable": "",
"os_compute_api:os-flavor-manage": "rule:admin_api",
"os_compute_api:os-floating-ip-dns": "",
"os_compute_api:os-floating-ip-dns:discoverable": "",
"os_compute_api:os-floating-ip-dns:domain:update": "rule:admin_api",
"os_compute_api:os-floating-ip-dns:domain:delete": "rule:admin_api",
"os_compute_api:os-floating-ip-pools": "",
"os_compute_api:os-floating-ip-pools:discoverable": "",
"os_compute_api:os-floating-ips": "",
"os_compute_api:os-floating-ips:discoverable": "",
"os_compute_api:os-floating-ips-bulk": "rule:admin_api",
"os_compute_api:os-floating-ips-bulk:discoverable": "",
"os_compute_api:os-fping": "",
"os_compute_api:os-fping:discoverable": "",
"os_compute_api:os-fping:all_tenants": "rule:admin_api",
"os_compute_api:os-hide-server-addresses": "is_admin:False",
"os_compute_api:os-hide-server-addresses:discoverable": "",
"os_compute_api:os-hosts": "rule:admin_api",
"os_compute_api:os-hosts:discoverable": "",
"os_compute_api:os-hypervisors": "rule:admin_api",
"os_compute_api:os-hypervisors:discoverable": "",
"os_compute_api:images:discoverable": "",
"os_compute_api:image-size": "",
"os_compute_api:image-size:discoverable": "",
"os_compute_api:os-instance-actions": "",
"os_compute_api:os-instance-actions:discoverable": "",
"os_compute_api:os-instance-actions:events": "rule:admin_api",
"os_compute_api:os-instance-usage-audit-log": "rule:admin_api",
"os_compute_api:os-instance-usage-audit-log:discoverable": "",
"os_compute_api:ips:discoverable": "",
"os_compute_api:ips:index": "rule:admin_or_owner",
"os_compute_api:ips:show": "rule:admin_or_owner",
"os_compute_api:os-keypairs:discoverable": "",
"os_compute_api:os-keypairs": "",
"os_compute_api:os-keypairs:index": "rule:admin_api or user_id:%(user_id)s",
"os_compute_api:os-keypairs:show": "rule:admin_api or user_id:%(user_id)s",
"os_compute_api:os-keypairs:create": "rule:admin_api or user_id:%(user_id)s",
"os_compute_api:os-keypairs:delete": "rule:admin_api or user_id:%(user_id)s",
"os_compute_api:limits:discoverable": "",
"os_compute_api:limits": "",
"os_compute_api:os-lock-server:discoverable": "",
"os_compute_api:os-lock-server:lock": "rule:admin_or_owner",
"os_compute_api:os-lock-server:unlock": "rule:admin_or_owner",
"os_compute_api:os-lock-server:unlock:unlock_override": "rule:admin_api",
"os_compute_api:os-migrate-server:discoverable": "",
"os_compute_api:os-migrate-server:migrate": "rule:admin_api",
"os_compute_api:os-migrate-server:migrate_live": "rule:admin_api",
"os_compute_api:os-multinic": "",
"os_compute_api:os-multinic:discoverable": "",
"os_compute_api:os-networks": "rule:admin_api",
"os_compute_api:os-networks:view": "",
"os_compute_api:os-networks:discoverable": "",
"os_compute_api:os-networks-associate": "rule:admin_api",
"os_compute_api:os-networks-associate:discoverable": "",
"os_compute_api:os-pause-server:discoverable": "",
"os_compute_api:os-pause-server:pause": "rule:admin_or_owner",
"os_compute_api:os-pause-server:unpause": "rule:admin_or_owner",
"os_compute_api:os-pci:pci_servers": "",
"os_compute_api:os-pci:discoverable": "",
"os_compute_api:os-pci:index": "rule:admin_api",
"os_compute_api:os-pci:detail": "rule:admin_api",
"os_compute_api:os-pci:show": "rule:admin_api",
"os_compute_api:os-personality:discoverable": "",
"os_compute_api:os-preserve-ephemeral-rebuild:discoverable": "",
"os_compute_api:os-quota-sets:discoverable": "",
"os_compute_api:os-quota-sets:show": "rule:admin_or_owner",
"os_compute_api:os-quota-sets:defaults": "",
"os_compute_api:os-quota-sets:update": "rule:admin_api",
"os_compute_api:os-quota-sets:delete": "rule:admin_api",
"os_compute_api:os-quota-sets:detail": "rule:admin_api",
"os_compute_api:os-quota-class-sets:update": "rule:admin_api",
"os_compute_api:os-quota-class-sets:show": "is_admin:True or quota_class:%(quota_class)s",
"os_compute_api:os-quota-class-sets:discoverable": "",
"os_compute_api:os-rescue": "",
"os_compute_api:os-rescue:discoverable": "",
"os_compute_api:os-scheduler-hints:discoverable": "",
"os_compute_api:os-security-group-default-rules:discoverable": "",
"os_compute_api:os-security-group-default-rules": "rule:admin_api",
"os_compute_api:os-security-groups": "",
"os_compute_api:os-security-groups:discoverable": "",
"os_compute_api:os-server-diagnostics": "rule:admin_api",
"os_compute_api:os-server-diagnostics:discoverable": "",
"os_compute_api:os-server-password": "",
"os_compute_api:os-server-password:discoverable": "",
"os_compute_api:os-server-usage": "",
"os_compute_api:os-server-usage:discoverable": "",
"os_compute_api:os-server-groups": "",
"os_compute_api:os-server-groups:discoverable": "",
"os_compute_api:os-services": "rule:admin_api",
"os_compute_api:os-services:discoverable": "",
"os_compute_api:server-metadata:discoverable": "",
"os_compute_api:server-metadata:index": "rule:admin_or_owner",
"os_compute_api:server-metadata:show": "rule:admin_or_owner",
"os_compute_api:server-metadata:delete": "rule:admin_or_owner",
"os_compute_api:server-metadata:create": "rule:admin_or_owner",
"os_compute_api:server-metadata:update": "rule:admin_or_owner",
"os_compute_api:server-metadata:update_all": "rule:admin_or_owner",
"os_compute_api:servers:discoverable": "",
"os_compute_api:os-shelve:shelve": "",
"os_compute_api:os-shelve:shelve:discoverable": "",
"os_compute_api:os-shelve:shelve_offload": "rule:admin_api",
"os_compute_api:os-simple-tenant-usage:discoverable": "",
"os_compute_api:os-simple-tenant-usage:show": "rule:admin_or_owner",
"os_compute_api:os-simple-tenant-usage:list": "rule:admin_api",
"os_compute_api:os-suspend-server:discoverable": "",
"os_compute_api:os-suspend-server:suspend": "rule:admin_or_owner",
"os_compute_api:os-suspend-server:resume": "rule:admin_or_owner",
"os_compute_api:os-tenant-networks": "rule:admin_or_owner",
"os_compute_api:os-tenant-networks:discoverable": "",
"os_compute_api:os-shelve:unshelve": "",
"os_compute_api:os-user-data:discoverable": "",
"os_compute_api:os-virtual-interfaces": "",
"os_compute_api:os-virtual-interfaces:discoverable": "",
"os_compute_api:os-volumes": "",
"os_compute_api:os-volumes:discoverable": "",
"os_compute_api:os-volumes-attachments:index": "",
"os_compute_api:os-volumes-attachments:show": "",
"os_compute_api:os-volumes-attachments:create": "",
"os_compute_api:os-volumes-attachments:update": "",
"os_compute_api:os-volumes-attachments:delete": "",
"os_compute_api:os-volumes-attachments:discoverable": "",
"os_compute_api:os-availability-zone:list": "",
"os_compute_api:os-availability-zone:discoverable": "",
"os_compute_api:os-availability-zone:detail": "rule:admin_api",
"os_compute_api:os-used-limits": "rule:admin_api",
"os_compute_api:os-used-limits:discoverable": "",
"os_compute_api:os-migrations:index": "rule:admin_api",
"os_compute_api:os-migrations:discoverable": "",
"os_compute_api:os-assisted-volume-snapshots:create": "rule:admin_api",
"os_compute_api:os-assisted-volume-snapshots:delete": "rule:admin_api",
"os_compute_api:os-assisted-volume-snapshots:discoverable": "",
"os_compute_api:os-console-auth-tokens": "rule:admin_api",
"os_compute_api:os-server-external-events:create": "rule:admin_api"
}

etc/xjob-cfg-gen.conf Normal file

@@ -0,0 +1,15 @@
[DEFAULT]
output_file = etc/xjob.conf.sample
wrap_width = 79
namespace = tricircle.xjob
namespace = tricircle.common
namespace = oslo.log
namespace = oslo.messaging
namespace = oslo.policy
namespace = oslo.service.periodic_task
namespace = oslo.service.service
namespace = oslo.service.sslutils
namespace = oslo.db
namespace = oslo.middleware
namespace = oslo.concurrency
namespace = keystonemiddleware.auth_token

requirements.txt

@@ -13,7 +13,7 @@ eventlet>=0.17.4
 pecan>=1.0.0
 greenlet>=0.3.2
 httplib2>=0.7.5
-requests>=2.8.1
+requests!=2.9.0,>=2.8.1
 Jinja2>=2.8 # BSD License (3 clause)
 keystonemiddleware>=4.0.0
 netaddr!=0.7.16,>=0.7.12
@@ -30,7 +30,7 @@ alembic>=0.8.0
 six>=1.9.0
 stevedore>=1.5.0 # Apache-2.0
 oslo.concurrency>=2.3.0 # Apache-2.0
-oslo.config>=2.7.0 # Apache-2.0
+oslo.config>=3.2.0 # Apache-2.0
 oslo.context>=0.2.0 # Apache-2.0
 oslo.db>=4.1.0 # Apache-2.0
 oslo.i18n>=1.5.0 # Apache-2.0
@@ -41,6 +41,6 @@ oslo.policy>=0.5.0 # Apache-2.0
 oslo.rootwrap>=2.0.0 # Apache-2.0
 oslo.serialization>=1.10.0 # Apache-2.0
 oslo.service>=1.0.0 # Apache-2.0
-oslo.utils!=3.1.0,>=2.8.0 # Apache-2.0
+oslo.utils>=3.2.0 # Apache-2.0
 oslo.versionedobjects>=0.13.0
 sqlalchemy-migrate>=0.9.6

setup.cfg

@@ -46,3 +46,12 @@ mapping_file = babel.cfg
 output_file = tricircle/locale/tricircle.pot

 [entry_points]
+oslo.config.opts =
+    tricircle.api = tricircle.api.opts:list_opts
+    tricircle.common = tricircle.common.opts:list_opts
+    tricircle.db = tricircle.db.opts:list_opts
+    tricircle.nova_apigw = tricircle.nova_apigw.opts:list_opts
+    tricircle.cinder_apigw = tricircle.cinder_apigw.opts:list_opts
+    tricircle.xjob = tricircle.xjob.opts:list_opts

test-requirements.txt

@@ -18,6 +18,6 @@ testscenarios>=0.4
 WebTest>=2.0
 oslotest>=1.10.0 # Apache-2.0
 os-testr>=0.4.1
-tempest-lib>=0.11.0
+tempest-lib>=0.13.0
 ddt>=1.0.1
 pylint==1.4.5 # GNU GPL v2

tox.ini

@@ -6,19 +6,15 @@ skipsdist = True
 [testenv]
 sitepackages = True
 usedevelop = True
-install_command =
-    pip install -U --force-reinstall {opts} {packages}
+install_command = pip install -U --force-reinstall {opts} {packages}
 setenv =
     VIRTUAL_ENV={envdir}
 deps =
-    -egit+https://git.openstack.org/openstack/neutron@master#egg=neutron
     -r{toxinidir}/test-requirements.txt
+    -egit+https://git.openstack.org/openstack/neutron@master#egg=neutron
 commands = python setup.py testr --slowest --testr-args='{posargs}'
 whitelist_externals = rm

-[testenv:common-constraints]
-install_command = {toxinidir}/tools/tox_install.sh constrained -c{env:UPPER_CONTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt} {opts} {packages}
-
 [testenv:pep8]
 commands = flake8
@@ -28,6 +24,12 @@ commands = {posargs}
 [testenv:cover]
 commands = python setup.py testr --coverage --testr-args='{posargs}'

+[testenv:genconfig]
+commands = oslo-config-generator --config-file=etc/api-cfg-gen.conf
+           oslo-config-generator --config-file=etc/nova_apigw-cfg-gen.conf
+           oslo-config-generator --config-file=etc/cinder_apigw-cfg-gen.conf
+           oslo-config-generator --config-file=etc/xjob-cfg-gen.conf
+
 [testenv:docs]
 commands = python setup.py build_sphinx
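
With the sample-generation commands gathered in their own tox environment, all four sample configuration files can be regenerated in one standard tox invocation:

tox -e genconfig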

tricircle/api/__init__.py Executable file → Normal file

tricircle/api/app.py Executable file → Normal file

@@ -13,12 +13,36 @@
 # License for the specific language governing permissions and limitations
 # under the License.

-from keystonemiddleware import auth_token
-from oslo_config import cfg
-from oslo_middleware import request_id
 import pecan

-from tricircle.common import exceptions as t_exc
+from oslo_config import cfg
+
+from tricircle.common.i18n import _
+from tricircle.common import restapp
+
+
+common_opts = [
+    cfg.StrOpt('bind_host', default='0.0.0.0',
+               help=_("The host IP to bind to")),
+    cfg.IntOpt('bind_port', default=19999,
+               help=_("The port to bind to")),
+    cfg.IntOpt('api_workers', default=1,
+               help=_("number of api workers")),
+    cfg.StrOpt('api_extensions_path', default="",
+               help=_("The path for API extensions")),
+    cfg.StrOpt('auth_strategy', default='keystone',
+               help=_("The type of authentication to use")),
+    cfg.BoolOpt('allow_bulk', default=True,
+                help=_("Allow the usage of the bulk API")),
+    cfg.BoolOpt('allow_pagination', default=False,
+                help=_("Allow the usage of the pagination")),
+    cfg.BoolOpt('allow_sorting', default=False,
+                help=_("Allow the usage of the sorting")),
+    cfg.StrOpt('pagination_max_limit', default="-1",
+               help=_("The maximum number of items returned in a single "
+                      "response, value was 'infinite' or negative integer "
+                      "means no limit")),
+]


 def setup_app(*args, **kwargs):
@@ -43,26 +67,10 @@ def setup_app(*args, **kwargs):
     app = pecan.make_app(
         pecan_config.app.root,
         debug=False,
-        wrap_app=_wrap_app,
+        wrap_app=restapp.auth_app,
         force_canonical=False,
         hooks=[],
         guess_content_type_from_ext=True
     )

     return app
-
-
-def _wrap_app(app):
-    app = request_id.RequestId(app)
-    if cfg.CONF.auth_strategy == 'noauth':
-        pass
-    elif cfg.CONF.auth_strategy == 'keystone':
-        # NOTE(zhiyuan) pkg_resources will try to load tricircle to get module
-        # version, passing "project" as empty string to bypass it
-        app = auth_token.AuthProtocol(app, {'project': ''})
-    else:
-        raise t_exc.InvalidConfigurationOption(
-            opt_name='auth_strategy', opt_value=cfg.CONF.auth_strategy)
-    return app
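
The middleware chain removed here moves into the shared tricircle.common.restapp module, which the API services now pass as wrap_app. That module is not shown in this hunk; judging from the deleted _wrap_app above, its auth_app helper presumably looks roughly like this sketch:

from keystonemiddleware import auth_token
from oslo_config import cfg
from oslo_middleware import request_id

from tricircle.common import exceptions as t_exc


def auth_app(app):
    # every request gets an X-Openstack-Request-Id
    app = request_id.RequestId(app)
    if cfg.CONF.auth_strategy == 'noauth':
        pass
    elif cfg.CONF.auth_strategy == 'keystone':
        # pass "project" as an empty string so pkg_resources does not try
        # to resolve the tricircle package version (as noted in the
        # removed code above)
        app = auth_token.AuthProtocol(app, {'project': ''})
    else:
        raise t_exc.InvalidConfigurationOption(
            opt_name='auth_strategy', opt_value=cfg.CONF.auth_strategy)
    return app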

tricircle/api/controllers/pod.py Normal file

@@ -0,0 +1,301 @@
# Copyright (c) 2015 Huawei Tech. Co., Ltd.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import pecan
from pecan import expose
from pecan import Response
from pecan import rest

import oslo_db.exception as db_exc
from oslo_log import log as logging
from oslo_utils import uuidutils

from tricircle.common import az_ag
import tricircle.common.context as t_context
import tricircle.common.exceptions as t_exc
from tricircle.common.i18n import _
from tricircle.common.i18n import _LE
from tricircle.common import utils
from tricircle.db import api as db_api
from tricircle.db import core
from tricircle.db import models

LOG = logging.getLogger(__name__)


class PodsController(rest.RestController):

    def __init__(self):
        pass

    @expose(generic=True, template='json')
    def post(self, **kw):
        context = t_context.extract_context_from_environ()

        if not t_context.is_admin_context(context):
            pecan.abort(400, _('Admin role required to create pods'))
            return

        if 'pod' not in kw:
            pecan.abort(400, _('Request body pod not found'))
            return

        pod = kw['pod']

        # if az_name is null, and there is already one in db
        pod_name = pod.get('pod_name', '').strip()
        pod_az_name = pod.get('pod_az_name', '').strip()
        dc_name = pod.get('dc_name', '').strip()
        az_name = pod.get('az_name', '').strip()
        _uuid = uuidutils.generate_uuid()

        if az_name == '' and pod_name == '':
            return Response(_('Valid pod_name is required for top region'),
                            422)

        if az_name != '' and pod_name == '':
            return Response(_('Valid pod_name is required for pod'), 422)

        if pod.get('az_name') is None:
            if self._get_top_region(context) != '':
                return Response(_('Top region already exists'), 409)

        # if az_name is not null, then the pod region name should not
        # be the same as that of the top region
        if az_name != '':
            if self._get_top_region(context) == pod_name and pod_name != '':
                return Response(
                    _('Pod region name duplicated with the top region name'),
                    409)

        # to create the top region, set pod_az_name to a null value
        if az_name == '':
            pod_az_name = ''

        try:
            with context.session.begin():
                # if not top region,
                # then add corresponding ag and az for the pod
                if az_name != '':
                    ag_name = utils.get_ag_name(pod_name)
                    aggregate = az_ag.create_ag_az(context,
                                                   ag_name=ag_name,
                                                   az_name=az_name)
                    if aggregate is None:
                        return Response(_('Ag creation failure'), 400)

                new_pod = core.create_resource(
                    context, models.Pod,
                    {'pod_id': _uuid,
                     'pod_name': pod_name,
                     'pod_az_name': pod_az_name,
                     'dc_name': dc_name,
                     'az_name': az_name})
        except db_exc.DBDuplicateEntry as e1:
            LOG.error(_LE('Record already exists: %(exception)s'),
                      {'exception': e1})
            return Response(_('Record already exists'), 409)
        except Exception as e2:
            LOG.error(_LE('Fail to create pod: %(exception)s'),
                      {'exception': e2})
            return Response(_('Fail to create pod'), 500)

        return {'pod': new_pod}

    @expose(generic=True, template='json')
    def get_one(self, _id):
        context = t_context.extract_context_from_environ()

        if not t_context.is_admin_context(context):
            pecan.abort(400, _('Admin role required to show pods'))
            return

        try:
            return {'pod': db_api.get_pod(context, _id)}
        except t_exc.ResourceNotFound:
            pecan.abort(404, _('Pod not found'))
            return

    @expose(generic=True, template='json')
    def get_all(self):
        context = t_context.extract_context_from_environ()

        if not t_context.is_admin_context(context):
            pecan.abort(400, _('Admin role required to list pods'))
            return

        try:
            return {'pods': db_api.list_pods(context)}
        except Exception as e:
            LOG.error(_LE('Fail to list pod: %(exception)s'),
                      {'exception': e})
            pecan.abort(500, _('Fail to list pod'))
            return

    @expose(generic=True, template='json')
    def delete(self, _id):
        context = t_context.extract_context_from_environ()

        if not t_context.is_admin_context(context):
            pecan.abort(400, _('Admin role required to delete pods'))
            return

        try:
            with context.session.begin():
                pod = core.get_resource(context, models.Pod, _id)
                if pod is not None:
                    ag_name = utils.get_ag_name(pod['pod_name'])
                    ag = az_ag.get_ag_by_name(context, ag_name)
                    if ag is not None:
                        az_ag.delete_ag(context, ag['id'])
                core.delete_resource(context, models.Pod, _id)
                pecan.response.status = 200
        except t_exc.ResourceNotFound:
            return Response(_('Pod not found'), 404)
        except Exception as e:
            LOG.error(_LE('Fail to delete pod: %(exception)s'),
                      {'exception': e})
            return Response(_('Fail to delete pod'), 500)

    def _get_top_region(self, ctx):
        top_region_name = ''
        try:
            with ctx.session.begin():
                pods = core.query_resource(ctx,
                                           models.Pod, [], [])
                for pod in pods:
                    if pod['az_name'] == '' and pod['pod_name'] != '':
                        return pod['pod_name']
        except Exception:
            return top_region_name
        return top_region_name


class BindingsController(rest.RestController):

    def __init__(self):
        pass

    @expose(generic=True, template='json')
    def post(self, **kw):
        context = t_context.extract_context_from_environ()

        if not t_context.is_admin_context(context):
            pecan.abort(400, _('Admin role required to create bindings'))
            return

        if 'pod_binding' not in kw:
            pecan.abort(400, _('Request body not found'))
            return

        pod_b = kw['pod_binding']
        tenant_id = pod_b.get('tenant_id', '').strip()
        pod_id = pod_b.get('pod_id', '').strip()
        _uuid = uuidutils.generate_uuid()

        if tenant_id == '' or pod_id == '':
            return Response(
                _('Tenant_id and pod_id can not be empty'),
                422)

        # the pod_id should already exist in the pod table
        try:
            with context.session.begin():
                pod = core.get_resource(context, models.Pod,
                                        pod_id)
                if pod.get('az_name') == '':
                    return Response(_('Top region can not be bound'), 422)
        except t_exc.ResourceNotFound:
            return Response(_('pod_id not found in pod'), 422)
        except Exception as e:
            LOG.error(_LE('Fail to create pod binding: %(exception)s'),
                      {'exception': e})
            pecan.abort(500, _('Fail to create pod binding'))
            return

        try:
            with context.session.begin():
                pod_binding = core.create_resource(context, models.PodBinding,
                                                   {'id': _uuid,
                                                    'tenant_id': tenant_id,
                                                    'pod_id': pod_id})
        except db_exc.DBDuplicateEntry:
            return Response(_('Pod binding already exists'), 409)
        except db_exc.DBConstraintError:
            return Response(_('pod_id not exists in pod'), 422)
        except db_exc.DBReferenceError:
            return Response(_('DB reference not exists in pod'), 422)
        except Exception as e:
            LOG.error(_LE('Fail to create pod binding: %(exception)s'),
                      {'exception': e})
            pecan.abort(500, _('Fail to create pod binding'))
            return

        return {'pod_binding': pod_binding}

    @expose(generic=True, template='json')
    def get_one(self, _id):
        context = t_context.extract_context_from_environ()

        if not t_context.is_admin_context(context):
            pecan.abort(400, _('Admin role required to show bindings'))
            return

        try:
            with context.session.begin():
                pod_binding = core.get_resource(context,
                                                models.PodBinding,
                                                _id)
                return {'pod_binding': pod_binding}
        except t_exc.ResourceNotFound:
            pecan.abort(404, _('Tenant pod binding not found'))
            return

    @expose(generic=True, template='json')
    def get_all(self):
        context = t_context.extract_context_from_environ()

        if not t_context.is_admin_context(context):
            pecan.abort(400, _('Admin role required to list bindings'))
            return

        try:
            with context.session.begin():
                pod_bindings = core.query_resource(context,
                                                   models.PodBinding,
                                                   [], [])
        except Exception:
            pecan.abort(500, _('Fail to list tenant pod bindings'))
            return

        return {'pod_bindings': pod_bindings}

    @expose(generic=True, template='json')
    def delete(self, _id):
        context = t_context.extract_context_from_environ()

        if not t_context.is_admin_context(context):
            pecan.abort(400, _('Admin role required to delete bindings'))
            return

        try:
            with context.session.begin():
                core.delete_resource(context, models.PodBinding, _id)
                pecan.response.status = 200
        except t_exc.ResourceNotFound:
            pecan.abort(404, _('Pod binding not found'))
            return
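
For illustration, once the API service is running, these controllers could be exercised roughly as follows (a client-side sketch; the token and tenant UUID are placeholders, port 19999 is the bind_port default from tricircle/api/app.py above, and requests is already listed in requirements.txt):

import requests

api = 'http://127.0.0.1:19999/v1.0'
headers = {'X-Auth-Token': '<admin-token>'}  # admin role is enforced above

# register a pod that belongs to availability zone az1
r = requests.post(api + '/pods', headers=headers,
                  json={'pod': {'pod_name': 'Pod1', 'pod_az_name': 'az1',
                                'dc_name': 'dc1', 'az_name': 'az1'}})
pod_id = r.json()['pod']['pod_id']

# bind a tenant to that pod
requests.post(api + '/bindings', headers=headers,
              json={'pod_binding': {'tenant_id': '<tenant-uuid>',
                                    'pod_id': pod_id}})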

tricircle/api/controllers/root.py

@@ -13,19 +13,14 @@
 # License for the specific language governing permissions and limitations
 # under the License.

-import uuid
-
 import oslo_log.log as logging

 import pecan
 from pecan import request
-from pecan import rest

-from tricircle.common import cascading_site_api
+from tricircle.api.controllers import pod
 import tricircle.common.context as t_context
-from tricircle.common import utils
-from tricircle.db import client
-from tricircle.db import exception
-from tricircle.db import models

 LOG = logging.getLogger(__name__)
@@ -49,7 +44,7 @@ class RootController(object):
         if version == 'v1.0':
             return V1Controller(), remainder

-    @pecan.expose('json')
+    @pecan.expose(generic=True, template='json')
     def index(self):
         return {
             "versions": [
@@ -67,19 +62,28 @@
             ]
         }

+    @index.when(method='POST')
+    @index.when(method='PUT')
+    @index.when(method='DELETE')
+    @index.when(method='HEAD')
+    @index.when(method='PATCH')
+    def not_supported(self):
+        pecan.abort(405)


 class V1Controller(object):

     def __init__(self):
         self.sub_controllers = {
-            "sites": SitesController()
+            "pods": pod.PodsController(),
+            "bindings": pod.BindingsController()
         }

         for name, ctrl in self.sub_controllers.items():
             setattr(self, name, ctrl)

-    @pecan.expose('json')
+    @pecan.expose(generic=True, template='json')
     def index(self):
         return {
             "version": "1.0",
@@ -93,6 +97,14 @@ class V1Controller(object):
             ]
         }

+    @index.when(method='POST')
+    @index.when(method='PUT')
+    @index.when(method='DELETE')
+    @index.when(method='HEAD')
+    @index.when(method='PATCH')
+    def not_supported(self):
+        pecan.abort(405)


 def _extract_context_from_environ(environ):
     context_paras = {'auth_token': 'HTTP_X_AUTH_TOKEN',
@@ -114,83 +126,3 @@

 def _get_environment():
     return request.environ
-
-
-class SitesController(rest.RestController):
-
-    """ReST controller to handle CRUD operations of site resource"""
-
-    @expose()
-    def put(self, site_id, **kw):
-        return {'message': 'PUT'}
-
-    @expose()
-    def get_one(self, site_id):
-        context = _extract_context_from_environ(_get_environment())
-        try:
-            return {'site': models.get_site(context, site_id)}
-        except exception.ResourceNotFound:
-            pecan.abort(404, 'Site with id %s not found' % site_id)
-
-    @expose()
-    def get_all(self):
-        context = _extract_context_from_environ(_get_environment())
-        sites = models.list_sites(context, [])
-        return {'sites': sites}
-
-    @expose()
-    def post(self, **kw):
-        context = _extract_context_from_environ(_get_environment())
-        if not context.is_admin:
-            pecan.abort(400, 'Admin role required to create sites')
-            return
-
-        site_name = kw.get('name')
-        is_top_site = kw.get('top', False)
-
-        if not site_name:
-            pecan.abort(400, 'Name of site required')
-            return
-
-        site_filters = [{'key': 'site_name', 'comparator': 'eq',
-                         'value': site_name}]
-        sites = models.list_sites(context, site_filters)
-        if sites:
-            pecan.abort(409, 'Site with name %s exists' % site_name)
-            return
-
-        ag_name = utils.get_ag_name(site_name)
-        # top site doesn't need az
-        az_name = utils.get_az_name(site_name) if not is_top_site else ''
-
-        try:
-            site_dict = {'site_id': str(uuid.uuid4()),
-                         'site_name': site_name,
-                         'az_id': az_name}
-            site = models.create_site(context, site_dict)
-        except Exception as e:
-            LOG.debug(e.message)
-            pecan.abort(500, 'Fail to create site')
-            return
-
-        # top site doesn't need aggregate
-        if is_top_site:
-            pecan.response.status = 201
-            return {'site': site}
-        else:
-            try:
-                top_client = client.Client()
-                top_client.create_aggregates(context, ag_name, az_name)
-                site_api = cascading_site_api.CascadingSiteNotifyAPI()
-                site_api.create_site(context, site_name)
-            except Exception as e:
-                LOG.debug(e.message)
-                # delete previously created site
-                models.delete_site(context, site['site_id'])
-                pecan.abort(500, 'Fail to create aggregate')
-                return
-            pecan.response.status = 201
-            return {'site': site}
-
-    @expose()
-    def delete(self, site_id):
-        return {'message': 'DELETE'}

tricircle/api/opts.py Normal file

@@ -0,0 +1,22 @@
# Copyright 2015 Huawei Technologies Co., Ltd.
# All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import tricircle.api.app
def list_opts():
return [
('DEFAULT', tricircle.api.app.common_opts),
]

76
tricircle/cinder_apigw/app.py Normal file
View File

@ -0,0 +1,76 @@
# Copyright (c) 2015 Huawei, Tech. Co,. Ltd.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import pecan
from oslo_config import cfg
from tricircle.common.i18n import _
from tricircle.common import restapp
common_opts = [
cfg.StrOpt('bind_host', default='0.0.0.0',
help=_("The host IP to bind to")),
cfg.IntOpt('bind_port', default=19997,
help=_("The port to bind to")),
cfg.IntOpt('api_workers', default=1,
help=_("number of api workers")),
cfg.StrOpt('api_extensions_path', default="",
help=_("The path for API extensions")),
cfg.StrOpt('auth_strategy', default='keystone',
help=_("The type of authentication to use")),
cfg.BoolOpt('allow_bulk', default=True,
help=_("Allow the usage of the bulk API")),
cfg.BoolOpt('allow_pagination', default=False,
help=_("Allow the usage of the pagination")),
cfg.BoolOpt('allow_sorting', default=False,
help=_("Allow the usage of the sorting")),
cfg.StrOpt('pagination_max_limit', default="-1",
help=_("The maximum number of items returned in a single "
"response, value was 'infinite' or negative integer "
"means no limit")),
]
def setup_app(*args, **kwargs):
config = {
'server': {
'port': cfg.CONF.bind_port,
'host': cfg.CONF.bind_host
},
'app': {
'root': 'tricircle.cinder_apigw.controllers.root.RootController',
'modules': ['tricircle.cinder_apigw'],
'errors': {
400: '/error',
'__force_dict__': True
}
}
}
pecan_config = pecan.configuration.conf_from_dict(config)
# app_hooks = [], hook collection will be put here later
app = pecan.make_app(
pecan_config.app.root,
debug=False,
wrap_app=restapp.auth_app,
force_canonical=False,
hooks=[],
guess_content_type_from_ext=True
)
return app
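For local experimentation, the WSGI app returned by setup_app() can be served with any WSGI server. A rough sketch, assuming common_opts has not already been registered via tricircle.common.config.init and that this module lives at tricircle.cinder_apigw.app as the pecan config suggests:
from wsgiref import simple_server

from oslo_config import cfg

from tricircle.cinder_apigw import app

cfg.CONF.register_opts(app.common_opts)
wsgi_app = app.setup_app()
# serves the Cinder API gateway on the configured host/port (19997)
simple_server.make_server(cfg.CONF.bind_host,
                          cfg.CONF.bind_port,
                          wsgi_app).serve_forever()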

118
tricircle/cinder_apigw/controllers/root.py Normal file
View File

@ -0,0 +1,118 @@
# Copyright (c) 2015 Huawei Tech. Co., Ltd.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import pecan
import oslo_log.log as logging
from tricircle.cinder_apigw.controllers import volume
LOG = logging.getLogger(__name__)
class RootController(object):
@pecan.expose()
def _lookup(self, version, *remainder):
if version == 'v2':
return V2Controller(), remainder
@pecan.expose(generic=True, template='json')
def index(self):
return {
"versions": [
{
"status": "CURRENT",
"updated": "2012-11-21T11:33:21Z",
"id": "v2.0",
"links": [
{
"href": pecan.request.application_url + "/v2/",
"rel": "self"
}
]
}
]
}
@index.when(method='POST')
@index.when(method='PUT')
@index.when(method='DELETE')
@index.when(method='HEAD')
@index.when(method='PATCH')
def not_supported(self):
pecan.abort(405)
class V2Controller(object):
_media_type1 = "application/vnd.openstack.volume+xml;version=1"
_media_type2 = "application/vnd.openstack.volume+json;version=1"
def __init__(self):
self.resource_controller = {
'volumes': volume.VolumeController,
}
@pecan.expose()
def _lookup(self, tenant_id, *remainder):
if not remainder:
pecan.abort(404)
return
resource = remainder[0]
if resource not in self.resource_controller:
pecan.abort(404)
return
return self.resource_controller[resource](tenant_id), remainder[1:]
@pecan.expose(generic=True, template='json')
def index(self):
return {
"version": {
"status": "CURRENT",
"updated": "2012-11-21T11:33:21Z",
"media-types": [
{
"base": "application/xml",
"type": self._media_type1
},
{
"base": "application/json",
"type": self._media_type2
}
],
"id": "v2.0",
"links": [
{
"href": pecan.request.application_url + "/v2/",
"rel": "self"
},
{
"href": "http://docs.openstack.org/",
"type": "text/html",
"rel": "describedby"
}
]
}
}
@index.when(method='POST')
@index.when(method='PUT')
@index.when(method='DELETE')
@index.when(method='HEAD')
@index.when(method='PATCH')
def not_supported(self):
pecan.abort(405)

332
tricircle/cinder_apigw/controllers/volume.py Normal file
View File

@ -0,0 +1,332 @@
# Copyright (c) 2015 Huawei Tech. Co., Ltd.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import pecan
from pecan import expose
from pecan import request
from pecan import response
from pecan import Response
from pecan import rest
from oslo_log import log as logging
from oslo_serialization import jsonutils
from tricircle.common import az_ag
from tricircle.common import constants as cons
import tricircle.common.context as t_context
from tricircle.common import httpclient as hclient
from tricircle.common.i18n import _
from tricircle.common.i18n import _LE
import tricircle.db.api as db_api
from tricircle.db import core
from tricircle.db import models
LOG = logging.getLogger(__name__)
class VolumeController(rest.RestController):
def __init__(self, tenant_id):
self.tenant_id = tenant_id
@expose(generic=True, template='json')
def post(self, **kw):
context = t_context.extract_context_from_environ()
if 'volume' not in kw:
pecan.abort(400, _('Volume not found in request body'))
return
if 'availability_zone' not in kw['volume']:
pecan.abort(400, _('Availability zone not set in request'))
return
pod, pod_az = az_ag.get_pod_by_az_tenant(
context,
az_name=kw['volume']['availability_zone'],
tenant_id=self.tenant_id)
if not pod:
LOG.error(_LE("Pod not configured or scheduling failure"))
pecan.abort(500, _('Pod not configured or scheduling failure'))
return
t_pod = db_api.get_top_pod(context)
if not t_pod:
LOG.error(_LE("Top Pod not configured"))
pecan.abort(500, _('Top Pod not configured'))
return
# TODO(joehuang): get release from pod configuration,
# to convert the content
# b_release = pod['release']
# t_release = t_pod['release']
t_release = 'Mitaka'
b_release = 'Mitaka'
s_ctx = hclient.get_pod_service_ctx(
context,
request.url,
pod['pod_name'],
s_type=cons.ST_CINDER)
if s_ctx['b_url'] == '':
LOG.error(_LE("bottom pod endpoint incorrect %s") %
pod['pod_name'])
pecan.abort(500, _('bottom pod endpoint incorrect'))
return
b_headers = self._convert_header(t_release,
b_release,
request.headers)
t_vol = kw['volume']
# add or remove key-values in the request for different versions
b_vol_req = self._convert_object(t_release, b_release, t_vol,
res_type=cons.RT_VOLUME)
# convert az to the configured one
# remove the AZ parameter from the bottom request to use the default
b_vol_req['availability_zone'] = pod['pod_az_name']
if b_vol_req['availability_zone'] == '':
b_vol_req.pop("availability_zone", None)
b_body = jsonutils.dumps({'volume': b_vol_req})
resp = hclient.forward_req(
context,
'POST',
b_headers,
s_ctx['b_url'],
b_body)
b_status = resp.status_code
b_ret_body = jsonutils.loads(resp.content)
# build routing and convert response from the bottom pod
# for different version.
response.status = b_status
if b_status == 202:
if b_ret_body.get('volume') is not None:
b_vol_ret = b_ret_body['volume']
try:
with context.session.begin():
core.create_resource(
context, models.ResourceRouting,
{'top_id': b_vol_ret['id'],
'bottom_id': b_vol_ret['id'],
'pod_id': pod['pod_id'],
'project_id': self.tenant_id,
'resource_type': cons.RT_VOLUME})
except Exception as e:
LOG.error(_LE('Failed to create volume: %(exception)s'),
{'exception': e})
return Response(_('Failed to create volume'), 500)
ret_vol = self._convert_object(b_release, t_release,
b_vol_ret,
res_type=cons.RT_VOLUME)
ret_vol['availability_zone'] = pod['az_name']
return {'volume': ret_vol}
return {'error': b_ret_body}
@expose(generic=True, template='json')
def get_one(self, _id):
context = t_context.extract_context_from_environ()
if _id == 'detail':
return {'volumes': self._get_all(context)}
# TODO(joehuang): get the release of top and bottom
t_release = 'Mitaka'
b_release = 'Mitaka'
b_headers = self._convert_header(t_release,
b_release,
request.headers)
s_ctx = self._get_res_routing_ref(context, _id, request.url)
if not s_ctx:
return Response(_('Failed to find resource'), 404)
if s_ctx['b_url'] == '':
return Response(_('bottom pod endpoint incorrect'), 404)
resp = hclient.forward_req(context, 'GET',
b_headers,
s_ctx['b_url'],
request.body)
b_ret_body = jsonutils.loads(resp.content)
b_status = resp.status_code
response.status = b_status
if b_status == 200:
if b_ret_body.get('volume') is not None:
b_vol_ret = b_ret_body['volume']
ret_vol = self._convert_object(b_release, t_release,
b_vol_ret,
res_type=cons.RT_VOLUME)
pod = self._get_pod_by_top_id(context, _id)
if pod:
ret_vol['availability_zone'] = pod['az_name']
return {'volume': ret_vol}
# resource not found but routing exists, remove the routing
if b_status == 404:
filters = [{'key': 'top_id', 'comparator': 'eq', 'value': _id},
{'key': 'resource_type',
'comparator': 'eq',
'value': cons.RT_VOLUME}]
with context.session.begin():
core.delete_resources(context,
models.ResourceRouting,
filters)
return b_ret_body
@expose(generic=True, template='json')
def get_all(self):
# TODO(joehuang): this should return links instead;
# for now it is combined with 'detail'
context = t_context.extract_context_from_environ()
return {'volumes': self._get_all(context)}
def _get_all(self, context):
# TODO(joehuang): query optimization for pagination, sort, etc
ret = []
pods = az_ag.list_pods_by_tenant(context, self.tenant_id)
for pod in pods:
if pod['pod_name'] == '':
continue
s_ctx = hclient.get_pod_service_ctx(
context,
request.url,
pod['pod_name'],
s_type=cons.ST_CINDER)
if s_ctx['b_url'] == '':
LOG.error(_LE("bottom pod endpoint incorrect %s")
% pod['pod_name'])
continue
# TODO(joehuang): convert header and body content
resp = hclient.forward_req(context, 'GET',
request.headers,
s_ctx['b_url'],
request.body)
if resp.status_code == 200:
routings = db_api.get_bottom_mappings_by_tenant_pod(
context, self.tenant_id,
pod['pod_id'], cons.RT_VOLUME
)
b_ret_body = jsonutils.loads(resp.content)
if b_ret_body.get('volumes'):
# iterate over a copy: removing items from the list being
# iterated would skip elements
for vol in list(b_ret_body['volumes']):
if not routings.get(vol['id']):
b_ret_body['volumes'].remove(vol)
continue
vol['availability_zone'] = pod['az_name']
ret.extend(b_ret_body['volumes'])
return ret
@expose(generic=True, template='json')
def delete(self, _id):
context = t_context.extract_context_from_environ()
# TODO(joehuang): get the release of top and bottom
t_release = 'Mitaka'
b_release = 'Mitaka'
s_ctx = self._get_res_routing_ref(context, _id, request.url)
if not s_ctx:
return Response(_('Failed to find resource'), 404)
if s_ctx['b_url'] == '':
return Response(_('bottom pod endpoint incorrect'), 404)
b_headers = self._convert_header(t_release,
b_release,
request.headers)
resp = hclient.forward_req(context, 'DELETE',
b_headers,
s_ctx['b_url'],
request.body)
response.status = resp.status_code
# don't remove the resource routing here: delete is an async
# operation, so the routing is removed when a later query finds
# the resource gone; no content in the resp actually
return {}
# move to a common function if other modules need it
def _get_res_routing_ref(self, context, _id, t_url):
pod = self._get_pod_by_top_id(context, _id)
if not pod:
return None
pod_name = pod['pod_name']
s_ctx = hclient.get_pod_service_ctx(
context,
t_url,
pod_name,
s_type=cons.ST_CINDER)
if s_ctx['b_url'] == '':
LOG.error(_LE("bottom pod endpoint incorrect %s") %
pod_name)
return s_ctx
# move to a common function if other modules need it
def _get_pod_by_top_id(self, context, _id):
mappings = db_api.get_bottom_mappings_by_top_id(
context, _id,
cons.RT_VOLUME)
if not mappings or len(mappings) != 1:
return None
return mappings[0][0]
def _convert_header(self, from_release, to_release, header):
return header
def _convert_object(self, from_release, to_release, res_object,
res_type=cons.RT_VOLUME):
return res_object
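End to end, the controller above accepts a plain Cinder request, schedules a pod by availability zone, forwards the converted body and records a routing entry. An illustrative client-side call (host, token and tenant id are placeholders):
import json

import requests

token = 'ADMIN_TOKEN'          # placeholder
tenant_id = 'demo-tenant-id'   # placeholder
url = 'http://127.0.0.1:19997/v2/%s/volumes' % tenant_id
body = {'volume': {'name': 'vol1',
                   'size': 1,
                   # required: used to pick the bottom pod
                   'availability_zone': 'az1'}}
resp = requests.post(url, data=json.dumps(body),
                     headers={'Content-Type': 'application/json',
                              'X-Auth-Token': token})
print(resp.status_code, resp.json())  # 202 plus the converted volume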

22
tricircle/cinder_apigw/opts.py Normal file
View File

@ -0,0 +1,22 @@
# Copyright 2015 Huawei Technologies Co., Ltd.
# All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import tricircle.cinder_apigw.app
def list_opts():
return [
('DEFAULT', tricircle.cinder_apigw.app.common_opts),
]

164
tricircle/common/az_ag.py Normal file
View File

@ -0,0 +1,164 @@
# Copyright 2015 Huawei Technologies Co., Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_log import log as logging
from oslo_utils import uuidutils
from tricircle.common.i18n import _LE
from tricircle.db import api as db_api
from tricircle.db import core
from tricircle.db import models
LOG = logging.getLogger(__name__)
def create_ag_az(context, ag_name, az_name):
aggregate = core.create_resource(context, models.Aggregate,
{'name': ag_name})
core.create_resource(
context, models.AggregateMetadata,
{'key': 'availability_zone',
'value': az_name,
'aggregate_id': aggregate['id']})
extra_fields = {
'availability_zone': az_name,
'metadata': {'availability_zone': az_name}
}
aggregate.update(extra_fields)
return aggregate
def get_one_ag(context, aggregate_id):
aggregate = core.get_resource(context, models.Aggregate, aggregate_id)
metadatas = core.query_resource(
context, models.AggregateMetadata,
[{'key': 'key', 'comparator': 'eq',
'value': 'availability_zone'},
{'key': 'aggregate_id', 'comparator': 'eq',
'value': aggregate['id']}], [])
if metadatas:
aggregate['availability_zone'] = metadatas[0]['value']
aggregate['metadata'] = {
'availability_zone': metadatas[0]['value']}
else:
aggregate['availability_zone'] = ''
aggregate['metadata'] = {}
return aggregate
def get_ag_by_name(context, ag_name):
filters = [{'key': 'name',
'comparator': 'eq',
'value': ag_name}]
aggregates = get_all_ag(context, filters)
if aggregates is not None:
if len(aggregates) == 1:
return aggregates[0]
return None
def delete_ag(context, aggregate_id):
core.delete_resources(context, models.AggregateMetadata,
[{'key': 'aggregate_id',
'comparator': 'eq',
'value': aggregate_id}])
core.delete_resource(context, models.Aggregate, aggregate_id)
return
def get_all_ag(context, filters=None, sorts=None):
aggregates = core.query_resource(context,
models.Aggregate,
filters or [],
sorts or [])
metadatas = core.query_resource(
context, models.AggregateMetadata,
[{'key': 'key',
'comparator': 'eq',
'value': 'availability_zone'}], [])
agg_meta_map = {}
for metadata in metadatas:
agg_meta_map[metadata['aggregate_id']] = metadata
for aggregate in aggregates:
extra_fields = {
'availability_zone': '',
'metadata': {}
}
if aggregate['id'] in agg_meta_map:
metadata = agg_meta_map[aggregate['id']]
extra_fields['availability_zone'] = metadata['value']
extra_fields['metadata'] = {
'availability_zone': metadata['value']}
aggregate.update(extra_fields)
return aggregates
def get_pod_by_az_tenant(context, az_name, tenant_id):
pod_bindings = core.query_resource(context,
models.PodBinding,
[{'key': 'tenant_id',
'comparator': 'eq',
'value': tenant_id}],
[])
for pod_b in pod_bindings:
pod = core.get_resource(context,
models.Pod,
pod_b['pod_id'])
if pod['az_name'] == az_name:
return pod, pod['pod_az_name']
# TODO(joehuang): schedule one dynamically in the future
filters = [{'key': 'az_name', 'comparator': 'eq', 'value': az_name}]
pods = db_api.list_pods(context, filters=filters)
for pod in pods:
if pod['pod_name'] != '':
try:
with context.session.begin():
core.create_resource(
context, models.PodBinding,
{'id': uuidutils.generate_uuid(),
'tenant_id': tenant_id,
'pod_id': pod['pod_id']})
return pod, pod['pod_az_name']
except Exception as e:
LOG.error(_LE('Failed to create pod binding: %(exception)s'),
{'exception': e})
return None, None
return None, None
def list_pods_by_tenant(context, tenant_id):
pod_bindings = core.query_resource(context,
models.PodBinding,
[{'key': 'tenant_id',
'comparator': 'eq',
'value': tenant_id}],
[])
pods = []
if pod_bindings:
for pod_b in pod_bindings:
pod = core.get_resource(context,
models.Pod,
pod_b['pod_id'])
pods.append(pod)
return pods
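A hedged usage sketch for the helpers above (AZ, tenant and pod values are invented): the first lookup binds the tenant to a pod registered in the AZ, later lookups return the bound pod.
from tricircle.common import az_ag
import tricircle.common.context as t_context

ctx = t_context.get_admin_context()

# first call: no PodBinding yet, so a pod registered in 'az1' is
# picked and a binding row is created for the tenant
pod, pod_az = az_ag.get_pod_by_az_tenant(ctx, 'az1', 'demo-tenant-id')

# later calls reuse the binding
same_pod, _ = az_ag.get_pod_by_az_tenant(ctx, 'az1', 'demo-tenant-id')
assert same_pod['pod_id'] == pod['pod_id']

# every pod the tenant has been bound to
pods = az_ag.list_pods_by_tenant(ctx, 'demo-tenant-id')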

75
tricircle/common/baserpc.py Executable file
View File

@ -0,0 +1,75 @@
#
# Copyright 2013 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# copy and modify from OpenStack Nova
"""
Base RPC client and server common to all services.
"""
from oslo_config import cfg
import oslo_messaging as messaging
from oslo_serialization import jsonutils
from tricircle.common import rpc
CONF = cfg.CONF
rpcapi_cap_opt = cfg.StrOpt('baseapi',
help='Set a version cap for messages sent to the '
'base api in any service')
CONF.register_opt(rpcapi_cap_opt, 'upgrade_levels')
_NAMESPACE = 'baseclientapi'
class BaseClientAPI(object):
"""Client side of the base rpc API.
API version history:
1.0 - Initial version.
"""
VERSION_ALIASES = {
# baseapi was added in the first version of Tricircle
}
def __init__(self, topic):
super(BaseClientAPI, self).__init__()
target = messaging.Target(topic=topic,
namespace=_NAMESPACE,
version='1.0')
version_cap = self.VERSION_ALIASES.get(CONF.upgrade_levels.baseapi,
CONF.upgrade_levels.baseapi)
self.client = rpc.get_client(target, version_cap=version_cap)
def ping(self, context, arg, timeout=None):
arg_p = jsonutils.to_primitive(arg)
cctxt = self.client.prepare(timeout=timeout)
return cctxt.call(context, 'ping', arg=arg_p)
class BaseServerRPCAPI(object):
"""Server side of the base RPC API."""
target = messaging.Target(namespace=_NAMESPACE, version='1.0')
def __init__(self, service_name):
self.service_name = service_name
def ping(self, context, arg):
resp = {'service': self.service_name, 'arg': arg}
return jsonutils.to_primitive(resp)
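The ping pair above gives every service a liveness probe over RPC. A hedged sketch of the client side (the topic name is made up; rpc.init must already have run, e.g. via tricircle.common.config.init):
import tricircle.common.context as t_context
from tricircle.common import baserpc

ctx = t_context.get_admin_context()
client_api = baserpc.BaseClientAPI(topic='xjob')  # illustrative topic
# blocks until the server's BaseServerRPCAPI.ping echoes the argument
result = client_api.ping(ctx, {'hello': 'world'}, timeout=10)
# result == {'service': '<service_name>', 'arg': {'hello': 'world'}}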

76
tricircle/common/cascading_networking_api.py
View File

@ -1,76 +0,0 @@
# Copyright 2015 Huawei Technologies Co., Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_log import log as logging
import oslo_messaging
from neutron.common import rpc as n_rpc
from tricircle.common.serializer import CascadeSerializer as Serializer
from tricircle.common import topics
LOG = logging.getLogger(__name__)
class CascadingNetworkingNotifyAPI(object):
"""API for to notify Cascading service for the networking API."""
def __init__(self, topic=topics.CASCADING_SERVICE):
target = oslo_messaging.Target(topic=topic,
exchange="tricircle",
namespace="networking",
version='1.0',
fanout=True)
self.client = n_rpc.get_client(
target,
serializer=Serializer(),
)
def _cast_message(self, context, method, payload):
"""Cast the payload to the running cascading service instances."""
cctx = self.client.prepare()
LOG.debug('Fanout notify at %(topic)s.%(namespace)s the message '
'%(method)s for CascadingNetwork. payload: %(payload)s',
{'topic': cctx.target.topic,
'namespace': cctx.target.namespace,
'payload': payload,
'method': method})
cctx.cast(context, method, payload=payload)
def create_network(self, context, network):
self._cast_message(context, "create_network", network)
def delete_network(self, context, network_id):
self._cast_message(context,
"delete_network",
{'network_id': network_id})
def update_network(self, context, network_id, network):
payload = {
'network_id': network_id,
'network': network
}
self._cast_message(context, "update_network", payload)
def create_port(self, context, port):
self._cast_message(context, "create_port", port)
def delete_port(self, context, port_id, l3_port_check=True):
payload = {
'port_id': port_id,
'l3_port_check': l3_port_check
}
self._cast_message(context, "delete_port", payload)

50
tricircle/common/cascading_site_api.py
View File

@ -1,50 +0,0 @@
# Copyright 2015 Huawei Technologies Co., Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_log import log as logging
import oslo_messaging
from tricircle.common import rpc
from tricircle.common import topics
LOG = logging.getLogger(__name__)
class CascadingSiteNotifyAPI(object):
"""API for to notify Cascading service for the site API."""
def __init__(self, topic=topics.CASCADING_SERVICE):
target = oslo_messaging.Target(topic=topic,
exchange="tricircle",
namespace="site",
version='1.0',
fanout=True)
self.client = rpc.create_client(target)
def _cast_message(self, context, method, payload):
"""Cast the payload to the running cascading service instances."""
cctx = self.client.prepare()
LOG.debug('Fanout notify at %(topic)s.%(namespace)s the message '
'%(method)s for CascadingSite. payload: %(payload)s',
{'topic': cctx.target.topic,
'namespace': cctx.target.namespace,
'payload': payload,
'method': method})
cctx.cast(context, method, payload=payload)
def create_site(self, context, site_name):
self._cast_message(context, "create_site", site_name)

tricircle/common/client.py
View File

@ -27,9 +27,11 @@ from oslo_config import cfg
from oslo_log import log as logging from oslo_log import log as logging
import tricircle.common.context as tricircle_context import tricircle.common.context as tricircle_context
from tricircle.db import exception from tricircle.common import exceptions
from tricircle.common import resource_handle
from tricircle.db import api
from tricircle.db import models from tricircle.db import models
from tricircle.db import resource_handle
client_opts = [ client_opts = [
cfg.StrOpt('auth_url', cfg.StrOpt('auth_url',
@ -42,8 +44,8 @@ client_opts = [
default=False, default=False,
help='if set to True, endpoint will be automatically' help='if set to True, endpoint will be automatically'
'refreshed if timeout accessing endpoint'), 'refreshed if timeout accessing endpoint'),
cfg.StrOpt('top_site_name', cfg.StrOpt('top_pod_name',
help='name of top site which client needs to access'), help='name of top pod which client needs to access'),
cfg.StrOpt('admin_username', cfg.StrOpt('admin_username',
help='username of admin account, needed when' help='username of admin account, needed when'
' auto_refresh_endpoint set to True'), ' auto_refresh_endpoint set to True'),
@ -76,14 +78,16 @@ def _safe_operation(operation_name):
instance, resource, context = args[:3] instance, resource, context = args[:3]
if resource not in instance.operation_resources_map[ if resource not in instance.operation_resources_map[
operation_name]: operation_name]:
raise exception.ResourceNotSupported(resource, operation_name) raise exceptions.ResourceNotSupported(resource, operation_name)
retries = 1 retries = 1
for _ in xrange(retries + 1): for i in xrange(retries + 1):
try: try:
service = instance.resource_service_map[resource] service = instance.resource_service_map[resource]
instance._ensure_endpoint_set(context, service) instance._ensure_endpoint_set(context, service)
return func(*args, **kwargs) return func(*args, **kwargs)
except exception.EndpointNotAvailable as e: except exceptions.EndpointNotAvailable as e:
if i == retries:
raise
if cfg.CONF.client.auto_refresh_endpoint: if cfg.CONF.client.auto_refresh_endpoint:
LOG.warn(e.message + ', update endpoint and try again') LOG.warn(e.message + ', update endpoint and try again')
instance._update_endpoint_from_keystone(context, True) instance._update_endpoint_from_keystone(context, True)
@ -94,11 +98,14 @@ def _safe_operation(operation_name):
class Client(object): class Client(object):
def __init__(self): def __init__(self, pod_name=None):
self.auth_url = cfg.CONF.client.auth_url self.auth_url = cfg.CONF.client.auth_url
self.resource_service_map = {} self.resource_service_map = {}
self.operation_resources_map = collections.defaultdict(set) self.operation_resources_map = collections.defaultdict(set)
self.service_handle_map = {} self.service_handle_map = {}
self.pod_name = pod_name
if not self.pod_name:
self.pod_name = cfg.CONF.client.top_pod_name
for _, handle_class in inspect.getmembers(resource_handle): for _, handle_class in inspect.getmembers(resource_handle):
if not inspect.isclass(handle_class): if not inspect.isclass(handle_class):
continue continue
@ -108,6 +115,7 @@ class Client(object):
self.service_handle_map[handle_obj.service_type] = handle_obj self.service_handle_map[handle_obj.service_type] = handle_obj
for resource in handle_obj.support_resource: for resource in handle_obj.support_resource:
self.resource_service_map[resource] = handle_obj.service_type self.resource_service_map[resource] = handle_obj.service_type
self.operation_resources_map['client'].add(resource)
for operation, index in six.iteritems( for operation, index in six.iteritems(
resource_handle.operation_index_map): resource_handle.operation_index_map):
# add parentheses to emphasize we mean to do bitwise and # add parentheses to emphasize we mean to do bitwise and
@ -160,35 +168,35 @@ class Client(object):
region_service_endpoint_map[region_id][service_name] = url region_service_endpoint_map[region_id][service_name] = url
return region_service_endpoint_map return region_service_endpoint_map
def _get_config_with_retry(self, cxt, filters, site, service, retry): def _get_config_with_retry(self, cxt, filters, pod, service, retry):
conf_list = models.list_site_service_configurations(cxt, filters) conf_list = api.list_pod_service_configurations(cxt, filters)
if len(conf_list) > 1: if len(conf_list) > 1:
raise exception.EndpointNotUnique(site, service) raise exceptions.EndpointNotUnique(pod, service)
if len(conf_list) == 0: if len(conf_list) == 0:
if not retry: if not retry:
raise exception.EndpointNotFound(site, service) raise exceptions.EndpointNotFound(pod, service)
self._update_endpoint_from_keystone(cxt, True) self._update_endpoint_from_keystone(cxt, True)
return self._get_config_with_retry(cxt, return self._get_config_with_retry(cxt,
filters, site, service, False) filters, pod, service, False)
return conf_list return conf_list
def _ensure_endpoint_set(self, cxt, service): def _ensure_endpoint_set(self, cxt, service):
handle = self.service_handle_map[service] handle = self.service_handle_map[service]
if not handle.is_endpoint_url_set(): if not handle.is_endpoint_url_set():
site_filters = [{'key': 'site_name', pod_filters = [{'key': 'pod_name',
'comparator': 'eq', 'comparator': 'eq',
'value': cfg.CONF.client.top_site_name}] 'value': self.pod_name}]
site_list = models.list_sites(cxt, site_filters) pod_list = api.list_pods(cxt, pod_filters)
if len(site_list) == 0: if len(pod_list) == 0:
raise exception.ResourceNotFound(models.Site, raise exceptions.ResourceNotFound(models.Pod,
cfg.CONF.client.top_site_name) self.pod_name)
# site_name is unique key, safe to get the first element # pod_name is unique key, safe to get the first element
site_id = site_list[0]['site_id'] pod_id = pod_list[0]['pod_id']
config_filters = [ config_filters = [
{'key': 'site_id', 'comparator': 'eq', 'value': site_id}, {'key': 'pod_id', 'comparator': 'eq', 'value': pod_id},
{'key': 'service_type', 'comparator': 'eq', 'value': service}] {'key': 'service_type', 'comparator': 'eq', 'value': service}]
conf_list = self._get_config_with_retry( conf_list = self._get_config_with_retry(
cxt, config_filters, site_id, service, cxt, config_filters, pod_id, service,
cfg.CONF.client.auto_refresh_endpoint) cfg.CONF.client.auto_refresh_endpoint)
url = conf_list[0]['service_url'] url = conf_list[0]['service_url']
handle.update_endpoint_url(url) handle.update_endpoint_url(url)
@ -211,54 +219,54 @@ class Client(object):
endpoint_map = self._get_endpoint_from_keystone(cxt) endpoint_map = self._get_endpoint_from_keystone(cxt)
for region in endpoint_map: for region in endpoint_map:
# use region name to query site # use region name to query pod
site_filters = [{'key': 'site_name', 'comparator': 'eq', pod_filters = [{'key': 'pod_name', 'comparator': 'eq',
'value': region}] 'value': region}]
site_list = models.list_sites(cxt, site_filters) pod_list = api.list_pods(cxt, pod_filters)
# skip region/site not registered in cascade service # skip region/pod not registered in cascade service
if len(site_list) != 1: if len(pod_list) != 1:
continue continue
for service in endpoint_map[region]: for service in endpoint_map[region]:
site_id = site_list[0]['site_id'] pod_id = pod_list[0]['pod_id']
config_filters = [{'key': 'site_id', 'comparator': 'eq', config_filters = [{'key': 'pod_id', 'comparator': 'eq',
'value': site_id}, 'value': pod_id},
{'key': 'service_type', 'comparator': 'eq', {'key': 'service_type', 'comparator': 'eq',
'value': service}] 'value': service}]
config_list = models.list_site_service_configurations( config_list = api.list_pod_service_configurations(
cxt, config_filters) cxt, config_filters)
if len(config_list) > 1: if len(config_list) > 1:
raise exception.EndpointNotUnique(site_id, service) raise exceptions.EndpointNotUnique(pod_id, service)
if len(config_list) == 1: if len(config_list) == 1:
config_id = config_list[0]['service_id'] config_id = config_list[0]['service_id']
update_dict = { update_dict = {
'service_url': endpoint_map[region][service]} 'service_url': endpoint_map[region][service]}
models.update_site_service_configuration( api.update_pod_service_configuration(
cxt, config_id, update_dict) cxt, config_id, update_dict)
else: else:
config_dict = { config_dict = {
'service_id': str(uuid.uuid4()), 'service_id': str(uuid.uuid4()),
'site_id': site_id, 'pod_id': pod_id,
'service_type': service, 'service_type': service,
'service_url': endpoint_map[region][service] 'service_url': endpoint_map[region][service]
} }
models.create_site_service_configuration( api.create_pod_service_configuration(
cxt, config_dict) cxt, config_dict)
def get_endpoint(self, cxt, site_id, service): def get_endpoint(self, cxt, pod_id, service):
"""Get endpoint url of given site and service """Get endpoint url of given pod and service
:param cxt: context object :param cxt: context object
:param site_id: site id :param pod_id: pod id
:param service: service type :param service: service type
:return: endpoint url for given site and service :return: endpoint url for given pod and service
:raises: EndpointNotUnique, EndpointNotFound :raises: EndpointNotUnique, EndpointNotFound
""" """
config_filters = [ config_filters = [
{'key': 'site_id', 'comparator': 'eq', 'value': site_id}, {'key': 'pod_id', 'comparator': 'eq', 'value': pod_id},
{'key': 'service_type', 'comparator': 'eq', 'value': service}] {'key': 'service_type', 'comparator': 'eq', 'value': service}]
conf_list = self._get_config_with_retry( conf_list = self._get_config_with_retry(
cxt, config_filters, site_id, service, cxt, config_filters, pod_id, service,
cfg.CONF.client.auto_refresh_endpoint) cfg.CONF.client.auto_refresh_endpoint)
return conf_list[0]['service_url'] return conf_list[0]['service_url']
@ -272,9 +280,27 @@ class Client(object):
""" """
self._update_endpoint_from_keystone(cxt, False) self._update_endpoint_from_keystone(cxt, False)
@_safe_operation('client')
def get_native_client(self, resource, cxt):
"""Get native python client instance
Use this function only for complex operations
:param resource: resource type
:param cxt: context object
:return: client instance
"""
if cxt.is_admin and not cxt.auth_token:
cxt.auth_token = self._get_admin_token()
cxt.tenant = self._get_admin_project_id()
service = self.resource_service_map[resource]
handle = self.service_handle_map[service]
return handle._get_client(cxt)
@_safe_operation('list') @_safe_operation('list')
def list_resources(self, resource, cxt, filters=None): def list_resources(self, resource, cxt, filters=None):
"""Query resource in site of top layer """Query resource in pod of top layer
Directly invoke this method to query resources, or use Directly invoke this method to query resources, or use
list_(resource)s (self, cxt, filters=None), for example, list_(resource)s (self, cxt, filters=None), for example,
@ -283,7 +309,7 @@ class Client(object):
of each ResourceHandle class. of each ResourceHandle class.
:param resource: resource type :param resource: resource type
:param cxt: context object :param cxt: context object
:param filters: list of dict with key 'key', 'comparator', 'value' :param filters: list of dict with key 'key', 'comparator', 'value'
like {'key': 'name', 'comparator': 'eq', 'value': 'private'}, 'key' like {'key': 'name', 'comparator': 'eq', 'value': 'private'}, 'key'
is the field name of resources is the field name of resources
@ -301,7 +327,7 @@ class Client(object):
@_safe_operation('create') @_safe_operation('create')
def create_resources(self, resource, cxt, *args, **kwargs): def create_resources(self, resource, cxt, *args, **kwargs):
"""Create resource in site of top layer """Create resource in pod of top layer
Directly invoke this method to create resources, or use Directly invoke this method to create resources, or use
create_(resource)s (self, cxt, *args, **kwargs). These methods are create_(resource)s (self, cxt, *args, **kwargs). These methods are
@ -315,6 +341,10 @@ class Client(object):
resource -> args -> kwargs resource -> args -> kwargs
-------------------------- --------------------------
aggregate -> name, availability_zone_name -> none aggregate -> name, availability_zone_name -> none
server -> name, image, flavor -> nics
network -> body -> none
subnet -> body -> none
port -> body -> none
-------------------------- --------------------------
:return: a dict containing resource information :return: a dict containing resource information
:raises: EndpointNotAvailable :raises: EndpointNotAvailable
@ -329,7 +359,7 @@ class Client(object):
@_safe_operation('delete') @_safe_operation('delete')
def delete_resources(self, resource, cxt, resource_id): def delete_resources(self, resource, cxt, resource_id):
"""Delete resource in site of top layer """Delete resource in pod of top layer
Directly invoke this method to delete resources, or use Directly invoke this method to delete resources, or use
delete_(resource)s (self, cxt, obj_id). These methods are delete_(resource)s (self, cxt, obj_id). These methods are
@ -349,9 +379,31 @@ class Client(object):
handle = self.service_handle_map[service] handle = self.service_handle_map[service]
handle.handle_delete(cxt, resource, resource_id) handle.handle_delete(cxt, resource, resource_id)
@_safe_operation('get')
def get_resources(self, resource, cxt, resource_id):
"""Get resource in pod of top layer
Directly invoke this method to get resources, or use
get_(resource)s (self, cxt, obj_id). These methods are
automatically generated according to the supported resources
of each ResourceHandle class.
:param resource: resource type
:param cxt: context object
:param resource_id: id of resource
:return: a dict containing resource information
:raises: EndpointNotAvailable
"""
if cxt.is_admin and not cxt.auth_token:
cxt.auth_token = self._get_admin_token()
cxt.tenant = self._get_admin_project_id()
service = self.resource_service_map[resource]
handle = self.service_handle_map[service]
return handle.handle_get(cxt, resource, resource_id)
@_safe_operation('action') @_safe_operation('action')
def action_resources(self, resource, cxt, action, *args, **kwargs): def action_resources(self, resource, cxt, action, *args, **kwargs):
"""Apply action on resource in site of top layer """Apply action on resource in pod of top layer
Directly invoke this method to apply action, or use Directly invoke this method to apply action, or use
action_(resource)s (self, cxt, action, *args, **kwargs). These methods action_(resource)s (self, cxt, action, *args, **kwargs). These methods
@ -366,6 +418,8 @@ class Client(object):
resource -> action -> args -> kwargs resource -> action -> args -> kwargs
-------------------------- --------------------------
aggregate -> add_host -> aggregate, host -> none aggregate -> add_host -> aggregate, host -> none
volume -> set_bootable -> volume, flag -> none
router -> add_interface -> router, body -> none
-------------------------- --------------------------
:return: None :return: None
:raises: EndpointNotAvailable :raises: EndpointNotAvailable
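A hedged example of the reworked Client (pod and filter values are invented). With no argument it targets the top pod named by top_pod_name; passing pod_name targets a bottom pod:
from tricircle.common import client
import tricircle.common.context as t_context

ctx = t_context.get_admin_context()

top_client = client.Client()                  # top pod from top_pod_name
pod1_client = client.Client(pod_name='Pod1')  # a bottom pod

nets = top_client.list_resources(
    'network', ctx,
    filters=[{'key': 'name', 'comparator': 'eq', 'value': 'private'}])

# fall back to the raw python client for anything not wrapped
neutron = pod1_client.get_native_client('network', ctx)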

74
tricircle/common/config.py Executable file → Normal file
View File

@ -17,69 +17,46 @@
Routines for configuring tricircle, largely copy from Neutron Routines for configuring tricircle, largely copy from Neutron
""" """
import os
import sys import sys
from oslo_config import cfg from oslo_config import cfg
from oslo_log import log as logging import oslo_log.log as logging
from paste import deploy
from tricircle.common.i18n import _
from tricircle.common.i18n import _LI from tricircle.common.i18n import _LI
# from tricircle import policy # from tricircle import policy
from tricircle.common import rpc
from tricircle.common import version from tricircle.common import version
LOG = logging.getLogger(__name__) LOG = logging.getLogger(__name__)
common_opts = [
cfg.StrOpt('bind_host', default='0.0.0.0',
help=_("The host IP to bind to")),
cfg.IntOpt('bind_port', default=19999,
help=_("The port to bind to")),
cfg.IntOpt('api_workers', default=1,
help=_("number of api workers")),
cfg.StrOpt('api_paste_config', default="api-paste.ini",
help=_("The API paste config file to use")),
cfg.StrOpt('api_extensions_path', default="",
help=_("The path for API extensions")),
cfg.StrOpt('auth_strategy', default='keystone',
help=_("The type of authentication to use")),
cfg.BoolOpt('allow_bulk', default=True,
help=_("Allow the usage of the bulk API")),
cfg.BoolOpt('allow_pagination', default=False,
help=_("Allow the usage of the pagination")),
cfg.BoolOpt('allow_sorting', default=False,
help=_("Allow the usage of the sorting")),
cfg.StrOpt('pagination_max_limit', default="-1",
help=_("The maximum number of items returned in a single "
"response, value was 'infinite' or negative integer "
"means no limit")),
]
def init(opts, args, **kwargs):
def init(args, **kwargs):
# Register the configuration options # Register the configuration options
cfg.CONF.register_opts(common_opts) cfg.CONF.register_opts(opts)
# ks_session.Session.register_conf_options(cfg.CONF) # ks_session.Session.register_conf_options(cfg.CONF)
# auth.register_conf_options(cfg.CONF) # auth.register_conf_options(cfg.CONF)
logging.register_options(cfg.CONF) logging.register_options(cfg.CONF)
cfg.CONF(args=args, project='tricircle', cfg.CONF(args=args, project='tricircle',
version='%%(prog)s %s' % version.version_info.release_string(), version=version.version_info,
**kwargs) **kwargs)
_setup_logging()
def setup_logging(): rpc.init(cfg.CONF)
def _setup_logging():
"""Sets up the logging options for a log with supplied name.""" """Sets up the logging options for a log with supplied name."""
product_name = "tricircle" product_name = "tricircle"
logging.setup(cfg.CONF, product_name) logging.setup(cfg.CONF, product_name)
LOG.info(_LI("Logging enabled!")) LOG.info(_LI("Logging enabled!"))
LOG.info(_LI("%(prog)s version %(version)s"), LOG.info(_LI("%(prog)s version %(version)s"),
{'prog': sys.argv[0], {'prog': sys.argv[0],
'version': version.version_info.release_string()}) 'version': version.version_info})
LOG.debug("command line: %s", " ".join(sys.argv)) LOG.debug("command line: %s", " ".join(sys.argv))
@ -87,34 +64,7 @@ def reset_service():
# Reset worker in case SIGHUP is called. # Reset worker in case SIGHUP is called.
# Note that this is called only in case a service is running in # Note that this is called only in case a service is running in
# daemon mode. # daemon mode.
setup_logging() _setup_logging()
# TODO(zhiyuan) enforce policy later # TODO(zhiyuan) enforce policy later
# policy.refresh() # policy.refresh()
def load_paste_app(app_name):
"""Builds and returns a WSGI app from a paste config file.
:param app_name: Name of the application to load
:raises ConfigFilesNotFoundError when config file cannot be located
:raises RuntimeError when application cannot be loaded from config file
"""
config_path = cfg.CONF.find_file(cfg.CONF.api_paste_config)
if not config_path:
raise cfg.ConfigFilesNotFoundError(
config_files=[cfg.CONF.api_paste_config])
config_path = os.path.abspath(config_path)
LOG.info(_LI("Config paste file: %s"), config_path)
try:
app = deploy.loadapp("config:%s" % config_path, name=app_name)
except (LookupError, ImportError):
msg = (_("Unable to load %(app_name)s from "
"configuration file %(config_path)s.") %
{'app_name': app_name,
'config_path': config_path})
LOG.exception(msg)
raise RuntimeError(msg)
return app

46
tricircle/common/constants.py Normal file
View File

@ -0,0 +1,46 @@
# Copyright 2015 Huawei Technologies Co., Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# service type
ST_NOVA = 'nova'
# only support cinder v2
ST_CINDER = 'cinderv2'
ST_NEUTRON = 'neutron'
ST_GLANCE = 'glance'
# resource_type
RT_SERVER = 'server'
RT_VOLUME = 'volume'
RT_BACKUP = 'backup'
RT_SNAPSHOT = 'snapshot'
RT_NETWORK = 'network'
RT_SUBNET = 'subnet'
RT_PORT = 'port'
RT_ROUTER = 'router'
# version list
NOVA_VERSION_V21 = 'v2.1'
CINDER_VERSION_V2 = 'v2'
NEUTRON_VERSION_V2 = 'v2'
# supported release
R_LIBERTY = 'liberty'
R_MITAKA = 'mitaka'
# l3 bridge networking elements
bridge_subnet_pool_name = 'bridge_subnet_pool'
bridge_net_name = 'bridge_net_%s'
bridge_subnet_name = 'bridge_subnet_%s'
bridge_port_name = 'bridge_port_%s_%s'
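The bridge element names above are '%'-style templates; a short illustration with made-up ids:
from tricircle.common import constants

net_name = constants.bridge_net_name % 'project-uuid'
port_name = constants.bridge_port_name % ('project-uuid', 'router-uuid')
# -> 'bridge_net_project-uuid' and 'bridge_port_project-uuid_router-uuid'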

tricircle/common/context.py
View File

@ -13,7 +13,9 @@
# License for the specific language governing permissions and limitations # License for the specific language governing permissions and limitations
# under the License. # under the License.
from oslo_context import context as oslo_ctx from pecan import request
import oslo_context.context as oslo_ctx
from tricircle.db import core from tricircle.db import core
@ -28,6 +30,42 @@ def get_admin_context():
return ctx return ctx
def is_admin_context(ctx):
return ctx.is_admin
def extract_context_from_environ():
context_paras = {'auth_token': 'HTTP_X_AUTH_TOKEN',
'user': 'HTTP_X_USER_ID',
'tenant': 'HTTP_X_TENANT_ID',
'user_name': 'HTTP_X_USER_NAME',
'tenant_name': 'HTTP_X_PROJECT_NAME',
'domain': 'HTTP_X_DOMAIN_ID',
'user_domain': 'HTTP_X_USER_DOMAIN_ID',
'project_domain': 'HTTP_X_PROJECT_DOMAIN_ID',
'request_id': 'openstack.request_id'}
environ = request.environ
for key in context_paras:
context_paras[key] = environ.get(context_paras[key])
role = environ.get('HTTP_X_ROLE')
context_paras['is_admin'] = role == 'admin'
return Context(**context_paras)
def get_context_from_neutron_context(context):
ctx = Context()
ctx.auth_token = context.auth_token
ctx.user = context.user_id
ctx.tenant = context.tenant_id
ctx.tenant_name = context.tenant_name
ctx.user_name = context.user_name
ctx.resource_uuid = context.resource_uuid
return ctx
class ContextBase(oslo_ctx.RequestContext): class ContextBase(oslo_ctx.RequestContext):
def __init__(self, auth_token=None, user_id=None, tenant_id=None, def __init__(self, auth_token=None, user_id=None, tenant_id=None,
is_admin=False, request_id=None, overwrite=True, is_admin=False, request_id=None, overwrite=True,
@ -52,10 +90,20 @@ class ContextBase(oslo_ctx.RequestContext):
ctx_dict = super(ContextBase, self).to_dict() ctx_dict = super(ContextBase, self).to_dict()
ctx_dict.update({ ctx_dict.update({
'user_name': self.user_name, 'user_name': self.user_name,
'tenant_name': self.tenant_name 'tenant_name': self.tenant_name,
'tenant_id': self.tenant_id,
'project_id': self.project_id
}) })
return ctx_dict return ctx_dict
@property
def project_id(self):
return self.tenant
@property
def tenant_id(self):
return self.tenant
@classmethod @classmethod
def from_dict(cls, ctx): def from_dict(cls, ctx):
return cls(**ctx) return cls(**ctx)
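A quick illustration of the tenant/project aliasing added above (ids are placeholders; this assumes Context forwards tenant_id to the underlying oslo context, as extract_context_from_environ implies):
from tricircle.common import context

ctx = context.Context(tenant_id='demo-tenant-id', user_id='demo-user-id')
# both properties read the same underlying 'tenant' field
assert ctx.project_id == ctx.tenant_id
ctx_dict = ctx.to_dict()  # now also carries 'tenant_id' and 'project_id'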

41
tricircle/common/exceptions.py Executable file → Normal file
View File

@ -17,8 +17,9 @@
Tricircle base exception handling. Tricircle base exception handling.
""" """
from oslo_utils import excutils
import six import six
from oslo_utils import excutils
from tricircle.common.i18n import _ from tricircle.common.i18n import _
@ -81,3 +82,41 @@ class InUse(TricircleException):
class InvalidConfigurationOption(TricircleException): class InvalidConfigurationOption(TricircleException):
message = _("An invalid value was provided for %(opt_name)s: " message = _("An invalid value was provided for %(opt_name)s: "
"%(opt_value)s") "%(opt_value)s")
class EndpointNotAvailable(TricircleException):
message = "Endpoint %(url)s for %(service)s is not available"
def __init__(self, service, url):
super(EndpointNotAvailable, self).__init__(service=service, url=url)
class EndpointNotUnique(TricircleException):
message = "Endpoint for %(service)s in %(pod)s not unique"
def __init__(self, pod, service):
super(EndpointNotUnique, self).__init__(pod=pod, service=service)
class EndpointNotFound(TricircleException):
message = "Endpoint for %(service)s in %(pod)s not found"
def __init__(self, pod, service):
super(EndpointNotFound, self).__init__(pod=pod, service=service)
class ResourceNotFound(TricircleException):
message = "Could not find %(resource_type)s: %(unique_key)s"
def __init__(self, model, unique_key):
resource_type = model.__name__.lower()
super(ResourceNotFound, self).__init__(resource_type=resource_type,
unique_key=unique_key)
class ResourceNotSupported(TricircleException):
message = "%(method)s method not supported for %(resource)s"
def __init__(self, resource, method):
super(ResourceNotSupported, self).__init__(resource=resource,
method=method)
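The new exception classes fill the %(...)s placeholders from constructor kwargs (assuming the TricircleException base interpolates them, as the templates imply). For example:
from tricircle.common import exceptions
from tricircle.db import models

try:
    raise exceptions.EndpointNotFound('pod1', 'cinderv2')
except exceptions.TricircleException as e:
    print(e.message)  # Endpoint for cinderv2 in pod1 not found

# resource_type is derived from the model class name
raise exceptions.ResourceNotFound(models.Pod, 'no-such-pod')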

138
tricircle/common/httpclient.py Normal file
View File

@ -0,0 +1,138 @@
# Copyright 2015 Huawei Technologies Co., Ltd.
# All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import urlparse
from requests import Request
from requests import Session
from tricircle.common import client
from tricircle.common import constants as cons
from tricircle.db import api as db_api
# the url could be an endpoint registered in keystone,
# or a url sent to the tricircle service, which is stored
# in pecan.request.url
def get_version_from_url(url):
components = urlparse.urlsplit(url)
path = components.path
pos = path.find('/')
ver = ''
if pos == 0:
path = path[1:]
i = path.find('/')
if i >= 0:
ver = path[:i]
else:
ver = path
elif pos > 0:
ver = path[:pos]
else:
ver = path
return ver
def get_bottom_url(t_ver, t_url, b_ver, b_endpoint):
"""get_bottom_url
convert url received by Tricircle service to bottom OpenStack
request url through the configured endpoint in the KeyStone
:param t_ver: version of top service
:param t_url: request url to the top service
:param b_ver: version of bottom service
:param b_endpoint: endpoint registered in keystone for bottom service
:return: request url to bottom service
"""
t_parse = urlparse.urlsplit(t_url)
after_ver = t_parse.path
remove_ver = '/' + t_ver + '/'
pos = after_ver.find(remove_ver)
if pos == 0:
after_ver = after_ver[len(remove_ver):]
else:
remove_ver = t_ver + '/'
pos = after_ver.find(remove_ver)
if pos == 0:
after_ver = after_ver[len(remove_ver):]
if after_ver == t_parse.path:
# wrong t_url
return ''
b_parse = urlparse.urlsplit(b_endpoint)
scheme = b_parse.scheme
netloc = b_parse.netloc
path = '/' + b_ver + '/' + after_ver
if b_ver == '':
path = '/' + after_ver
query = t_parse.query
fragment = t_parse.fragment
b_url = urlparse.urlunsplit((scheme,
netloc,
path,
query,
fragment))
return b_url
def get_pod_service_endpoint(context, pod_name, st):
pod = db_api.get_pod_by_name(context, pod_name)
if pod:
c = client.Client()
return c.get_endpoint(context, pod['pod_id'], st)
return ''
def get_pod_service_ctx(context, t_url, pod_name, s_type=cons.ST_NOVA):
t_ver = get_version_from_url(t_url)
b_endpoint = get_pod_service_endpoint(context,
pod_name,
s_type)
b_ver = get_version_from_url(b_endpoint)
b_url = ''
if b_endpoint != '':
b_url = get_bottom_url(t_ver, t_url, b_ver, b_endpoint)
return {'t_ver': t_ver, 'b_ver': b_ver,
't_url': t_url, 'b_url': b_url}
def forward_req(context, action, b_headers, b_url, b_body):
s = Session()
req = Request(action, b_url,
data=b_body,
headers=b_headers)
prepped = req.prepare()
# do something with prepped.body
# do something with prepped.headers
resp = s.send(prepped,
timeout=60)
return resp
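A worked example of the URL mapping helpers above, with made-up endpoints:
from tricircle.common import httpclient as hclient

t_url = 'http://top-host:19997/v2/demo-tenant/volumes'
b_endpoint = 'http://pod1-host:8776/v2'

t_ver = hclient.get_version_from_url(t_url)       # 'v2'
b_ver = hclient.get_version_from_url(b_endpoint)  # 'v2'
b_url = hclient.get_bottom_url(t_ver, t_url, b_ver, b_endpoint)
# -> 'http://pod1-host:8776/v2/demo-tenant/volumes'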

124
tricircle/common/lock_handle.py Normal file
View File

@ -0,0 +1,124 @@
# Copyright 2015 Huawei Technologies Co., Ltd.
# All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import datetime
import eventlet
import oslo_db.exception as db_exc
from tricircle.db import core
from tricircle.db import models
def get_or_create_route(t_ctx, q_ctx,
project_id, pod, _id, _type, list_ele_method):
# use configuration option later
route_expire_threshold = 30
with t_ctx.session.begin():
routes = core.query_resource(
t_ctx, models.ResourceRouting,
[{'key': 'top_id', 'comparator': 'eq', 'value': _id},
{'key': 'pod_id', 'comparator': 'eq',
'value': pod['pod_id']}], [])
if routes:
route = routes[0]
if route['bottom_id']:
return route, False
else:
route_time = route['updated_at'] or route['created_at']
current_time = datetime.datetime.utcnow()
delta = current_time - route_time
if delta.seconds > route_expire_threshold:
# NOTE(zhiyuan) cannot directly remove the route, we have
# a race here that other worker is updating this route, we
# need to check if the corresponding element has been
# created by other worker
eles = list_ele_method(t_ctx, q_ctx, pod, _id, _type)
if eles:
route['bottom_id'] = eles[0]['id']
core.update_resource(t_ctx,
models.ResourceRouting,
route['id'], route)
return route, False
try:
core.delete_resource(t_ctx,
models.ResourceRouting,
route['id'])
except db_exc.ResourceNotFound:
pass
try:
# NOTE(zhiyuan) try/except block inside a with block will cause
# problem, so move them out of the block and manually handle the
# session context
t_ctx.session.begin()
route = core.create_resource(t_ctx, models.ResourceRouting,
{'top_id': _id,
'pod_id': pod['pod_id'],
'project_id': project_id,
'resource_type': _type})
t_ctx.session.commit()
return route, True
except db_exc.DBDuplicateEntry:
t_ctx.session.rollback()
return None, False
finally:
t_ctx.session.close()
def get_or_create_element(t_ctx, q_ctx,
project_id, pod, ele, _type, body,
list_ele_method, create_ele_method):
# use configuration option later
max_tries = 5
for _ in xrange(max_tries):
route, is_new = get_or_create_route(
t_ctx, q_ctx, project_id, pod, ele['id'], _type, list_ele_method)
if not route:
eventlet.sleep(0)
continue
if not is_new and not route['bottom_id']:
eventlet.sleep(0)
continue
if not is_new and route['bottom_id']:
break
if is_new:
try:
ele = create_ele_method(t_ctx, q_ctx, pod, body, _type)
except Exception:
with t_ctx.session.begin():
try:
core.delete_resource(t_ctx,
models.ResourceRouting,
route['id'])
except db_exc.ResourceNotFound:
# NOTE(zhiyuan) this is a rare case that other worker
# considers the route expires and delete it though it
# was just created, maybe caused by out-of-sync time
pass
raise
with t_ctx.session.begin():
# NOTE(zhiyuan) it's safe to update the route: the bottom element
# has been successfully created, so other workers will not
# delete this route
route['bottom_id'] = ele['id']
core.update_resource(t_ctx, models.ResourceRouting,
route['id'], route)
break
if not route:
raise Exception('Failed to create %s routing entry' % _type)
if not route['bottom_id']:
raise Exception('Failed to bind top and bottom %s' % _type)
return is_new, route['bottom_id']
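A hedged sketch of the two callbacks get_or_create_element expects (the Client-based bodies and the name-encoding convention are assumptions for illustration; the module path follows the file above):
from tricircle.common import client
from tricircle.common import lock_handle


def _list_networks(t_ctx, q_ctx, pod, ele_id, _type):
    # report bottom elements already created for this top id;
    # an empty list means "not created yet"
    b_client = client.Client(pod_name=pod['pod_name'])
    return b_client.list_resources(
        _type, t_ctx,
        filters=[{'key': 'name', 'comparator': 'eq', 'value': ele_id}])


def _create_network(t_ctx, q_ctx, pod, body, _type):
    b_client = client.Client(pod_name=pod['pod_name'])
    return b_client.create_resources(_type, t_ctx, body)


def ensure_bottom_network(t_ctx, q_ctx, project_id, pod, top_net_id):
    # retries internally until the routing carries a bottom id
    return lock_handle.get_or_create_element(
        t_ctx, q_ctx, project_id, pod,
        {'id': top_net_id}, 'network',
        {'network': {'name': top_net_id}},
        _list_networks, _create_network)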

65
tricircle/common/nova_lib.py
View File

@ -1,65 +0,0 @@
# Copyright 2015 Huawei Technologies Co., Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import nova.block_device
import nova.cloudpipe.pipelib
import nova.compute.manager
import nova.compute.task_states
import nova.compute.utils
import nova.compute.vm_states
import nova.conductor
import nova.conductor.rpcapi
import nova.context
import nova.db.api
import nova.exception
import nova.manager
import nova.network
import nova.network.model
import nova.network.security_group.openstack_driver
import nova.objects
import nova.objects.base
import nova.quota
import nova.rpc
import nova.service
import nova.utils
import nova.version
import nova.virt.block_device
import nova.volume
block_device = nova.block_device
pipelib = nova.cloudpipe.pipelib
compute_manager = nova.compute.manager
task_states = nova.compute.task_states
vm_states = nova.compute.vm_states
compute_utils = nova.compute.utils
conductor = nova.conductor
conductor_rpcapi = nova.conductor.rpcapi
context = nova.context
db_api = nova.db.api
exception = nova.exception
manager = nova.manager
network = nova.network
network_model = nova.network.model
openstack_driver = nova.network.security_group.openstack_driver
objects = nova.objects
objects_base = nova.objects.base
quota = nova.quota
rpc = nova.rpc
service = nova.service
utils = nova.utils
driver_block_device = nova.virt.block_device
volume = nova.volume
version = nova.version

tricircle/common/opts.py Normal file

@ -0,0 +1,26 @@
# Copyright 2015 Huawei Technologies Co., Ltd.
# All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import tricircle.common.client
# TODO: add rpc cap negotiation configuration after first release
# import tricircle.common.xrpcapi
def list_opts():
return [
('client', tricircle.common.client.client_opts),
# ('upgrade_levels', tricircle.common.xrpcapi.rpcapi_cap_opt),
]
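list_opts is the hook that oslo-config-generator discovers through an entry point; a quick, hypothetical way to sanity-check what it exposes (not part of the commit):

from tricircle.common import opts

for group, options in opts.list_opts():
    for opt in options:
        # prints e.g. "client cinder_timeout 60"
        print('%s %s %s' % (group, opt.name, opt.default))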


@ -0,0 +1,320 @@
# Copyright 2015 Huawei Technologies Co., Ltd.
# All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from cinderclient import client as c_client
from cinderclient import exceptions as c_exceptions
import glanceclient as g_client
import glanceclient.exc as g_exceptions
from neutronclient.common import exceptions as q_exceptions
from neutronclient.neutron import client as q_client
from novaclient import client as n_client
from novaclient import exceptions as n_exceptions
from oslo_config import cfg
from oslo_log import log as logging
from requests import exceptions as r_exceptions
from tricircle.common import constants as cons
from tricircle.common import exceptions
client_opts = [
cfg.IntOpt('cinder_timeout',
default=60,
help='timeout for cinder client in seconds'),
cfg.IntOpt('glance_timeout',
default=60,
help='timeout for glance client in seconds'),
cfg.IntOpt('neutron_timeout',
default=60,
help='timeout for neutron client in seconds'),
cfg.IntOpt('nova_timeout',
default=60,
help='timeout for nova client in seconds'),
]
cfg.CONF.register_opts(client_opts, group='client')
LIST, CREATE, DELETE, GET, ACTION = 1, 2, 4, 8, 16
operation_index_map = {'list': LIST, 'create': CREATE,
'delete': DELETE, 'get': GET, 'action': ACTION}
LOG = logging.getLogger(__name__)
def _transform_filters(filters):
filter_dict = {}
for query_filter in filters:
# only eq filter supported at first
if query_filter['comparator'] != 'eq':
continue
key = query_filter['key']
value = query_filter['value']
filter_dict[key] = value
return filter_dict
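For illustration (not part of the commit), the transformation above keeps only 'eq' comparators and flattens them into a plain dict usable as keyword arguments:

filters = [{'key': 'name', 'comparator': 'eq', 'value': 'net1'},
           {'key': 'status', 'comparator': 'ne', 'value': 'ERROR'}]
# the 'ne' filter is silently dropped
assert _transform_filters(filters) == {'name': 'net1'}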
class ResourceHandle(object):
def __init__(self, auth_url):
self.auth_url = auth_url
self.endpoint_url = None
def is_endpoint_url_set(self):
return self.endpoint_url is not None
def update_endpoint_url(self, url):
self.endpoint_url = url
class GlanceResourceHandle(ResourceHandle):
service_type = cons.ST_GLANCE
support_resource = {'image': LIST | GET}
def _get_client(self, cxt):
return g_client.Client('1',
token=cxt.auth_token,
auth_url=self.auth_url,
endpoint=self.endpoint_url,
timeout=cfg.CONF.client.glance_timeout)
def handle_list(self, cxt, resource, filters):
try:
client = self._get_client(cxt)
collection = '%ss' % resource
return [res.to_dict() for res in getattr(
client, collection).list(filters=_transform_filters(filters))]
except g_exceptions.InvalidEndpoint:
self.endpoint_url = None
raise exceptions.EndpointNotAvailable('glance',
client.http_client.endpoint)
def handle_get(self, cxt, resource, resource_id):
try:
client = self._get_client(cxt)
collection = '%ss' % resource
return getattr(client, collection).get(resource_id).to_dict()
except g_exceptions.InvalidEndpoint:
self.endpoint_url = None
raise exceptions.EndpointNotAvailable('glance',
client.http_client.endpoint)
except g_exceptions.HTTPNotFound:
LOG.debug("%(resource)s %(resource_id)s not found",
{'resource': resource, 'resource_id': resource_id})
class NeutronResourceHandle(ResourceHandle):
service_type = cons.ST_NEUTRON
support_resource = {'network': LIST | CREATE | DELETE | GET,
'subnet': LIST | CREATE | DELETE | GET,
'port': LIST | CREATE | DELETE | GET,
'router': LIST | CREATE | ACTION,
'security_group': LIST,
'security_group_rule': LIST}
def _get_client(self, cxt):
return q_client.Client('2.0',
token=cxt.auth_token,
auth_url=self.auth_url,
endpoint_url=self.endpoint_url,
timeout=cfg.CONF.client.neutron_timeout)
def handle_list(self, cxt, resource, filters):
try:
client = self._get_client(cxt)
collection = '%ss' % resource
search_opts = _transform_filters(filters)
return [res for res in getattr(
client, 'list_%s' % collection)(**search_opts)[collection]]
except q_exceptions.ConnectionFailed:
self.endpoint_url = None
raise exceptions.EndpointNotAvailable(
'neutron', client.httpclient.endpoint_url)
def handle_create(self, cxt, resource, *args, **kwargs):
try:
client = self._get_client(cxt)
return getattr(client, 'create_%s' % resource)(
*args, **kwargs)[resource]
except q_exceptions.ConnectionFailed:
self.endpoint_url = None
raise exceptions.EndpointNotAvailable(
'neutron', client.httpclient.endpoint_url)
def handle_get(self, cxt, resource, resource_id):
try:
client = self._get_client(cxt)
return getattr(client, 'show_%s' % resource)(resource_id)[resource]
except q_exceptions.ConnectionFailed:
self.endpoint_url = None
raise exceptions.EndpointNotAvailable(
'neutron', client.httpclient.endpoint_url)
except q_exceptions.NotFound:
LOG.debug("%(resource)s %(resource_id)s not found",
{'resource': resource, 'resource_id': resource_id})
def handle_delete(self, cxt, resource, resource_id):
try:
client = self._get_client(cxt)
return getattr(client, 'delete_%s' % resource)(resource_id)
except q_exceptions.ConnectionFailed:
self.endpoint_url = None
raise exceptions.EndpointNotAvailable(
'neutron', client.httpclient.endpoint_url)
except q_exceptions.NotFound:
LOG.debug("Delete %(resource)s %(resource_id)s which not found",
{'resource': resource, 'resource_id': resource_id})
def handle_action(self, cxt, resource, action, *args, **kwargs):
try:
client = self._get_client(cxt)
return getattr(client, '%s_%s' % (action, resource))(*args,
**kwargs)
except q_exceptions.ConnectionFailed:
self.endpoint_url = None
raise exceptions.EndpointNotAvailable(
'neutron', client.httpclient.endpoint_url)
class NovaResourceHandle(ResourceHandle):
service_type = cons.ST_NOVA
support_resource = {'flavor': LIST,
'server': LIST | CREATE | GET,
'aggregate': LIST | CREATE | DELETE | ACTION}
def _get_client(self, cxt):
cli = n_client.Client('2',
auth_token=cxt.auth_token,
auth_url=self.auth_url,
timeout=cfg.CONF.client.nova_timeout)
cli.set_management_url(
self.endpoint_url.replace('$(tenant_id)s', cxt.tenant))
return cli
def handle_list(self, cxt, resource, filters):
try:
client = self._get_client(cxt)
collection = '%ss' % resource
# only server list supports filter
if resource == 'server':
search_opts = _transform_filters(filters)
return [res.to_dict() for res in getattr(
client, collection).list(search_opts=search_opts)]
else:
return [res.to_dict() for res in getattr(client,
collection).list()]
except r_exceptions.ConnectTimeout:
self.endpoint_url = None
raise exceptions.EndpointNotAvailable('nova',
client.client.management_url)
def handle_create(self, cxt, resource, *args, **kwargs):
try:
client = self._get_client(cxt)
collection = '%ss' % resource
return getattr(client, collection).create(
*args, **kwargs).to_dict()
except r_exceptions.ConnectTimeout:
self.endpoint_url = None
raise exceptions.EndpointNotAvailable('nova',
client.client.management_url)
def handle_get(self, cxt, resource, resource_id):
try:
client = self._get_client(cxt)
collection = '%ss' % resource
return getattr(client, collection).get(resource_id).to_dict()
except r_exceptions.ConnectTimeout:
self.endpoint_url = None
raise exceptions.EndpointNotAvailable('nova',
client.client.management_url)
except n_exceptions.NotFound:
LOG.debug("%(resource)s %(resource_id)s not found",
{'resource': resource, 'resource_id': resource_id})
def handle_delete(self, cxt, resource, resource_id):
try:
client = self._get_client(cxt)
collection = '%ss' % resource
return getattr(client, collection).delete(resource_id)
except r_exceptions.ConnectTimeout:
self.endpoint_url = None
raise exceptions.EndpointNotAvailable('nova',
client.client.management_url)
except n_exceptions.NotFound:
LOG.debug("Delete %(resource)s %(resource_id)s which not found",
{'resource': resource, 'resource_id': resource_id})
def handle_action(self, cxt, resource, action, *args, **kwargs):
try:
client = self._get_client(cxt)
collection = '%ss' % resource
resource_manager = getattr(client, collection)
getattr(resource_manager, action)(*args, **kwargs)
except r_exceptions.ConnectTimeout:
self.endpoint_url = None
raise exceptions.EndpointNotAvailable('nova',
client.client.management_url)
class CinderResourceHandle(ResourceHandle):
service_type = cons.ST_CINDER
support_resource = {'volume': GET | ACTION,
'transfer': CREATE | ACTION}
def _get_client(self, cxt):
cli = c_client.Client('2',
auth_token=cxt.auth_token,
auth_url=self.auth_url,
timeout=cfg.CONF.client.cinder_timeout)
cli.set_management_url(
self.endpoint_url.replace('$(tenant_id)s', cxt.tenant))
return cli
def handle_get(self, cxt, resource, resource_id):
try:
client = self._get_client(cxt)
collection = '%ss' % resource
res = getattr(client, collection).get(resource_id)
info = {}
info.update(res._info)
return info
except r_exceptions.ConnectTimeout:
self.endpoint_url = None
raise exceptions.EndpointNotAvailable('cinder',
client.client.management_url)
def handle_delete(self, cxt, resource, resource_id):
try:
client = self._get_client(cxt)
collection = '%ss' % resource
return getattr(client, collection).delete(resource_id)
except r_exceptions.ConnectTimeout:
self.endpoint_url = None
raise exceptions.EndpointNotAvailable('cinder',
client.client.management_url)
except c_exceptions.NotFound:
LOG.debug("Delete %(resource)s %(resource_id)s which not found",
{'resource': resource, 'resource_id': resource_id})
def handle_action(self, cxt, resource, action, *args, **kwargs):
try:
client = self._get_client(cxt)
collection = '%ss' % resource
resource_manager = getattr(client, collection)
getattr(resource_manager, action)(*args, **kwargs)
except r_exceptions.ConnectTimeout:
self.endpoint_url = None
raise exceptions.EndpointNotAvailable('cinder',
client.client.management_url)
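A hypothetical direct use of one of these handles; in the real code a wrapper resolves the endpoint from the pod service configuration first, and the context object cxt is assumed to carry a valid token for the bottom pod (URLs here are illustrative):

handle = NeutronResourceHandle('http://127.0.0.1:5000/v3')
handle.update_endpoint_url('http://127.0.0.1:9696')
# list all ACTIVE networks in the bottom pod
networks = handle.handle_list(
    cxt, 'network',
    [{'key': 'status', 'comparator': 'eq', 'value': 'ACTIVE'}])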


@ -0,0 +1,52 @@
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from keystonemiddleware import auth_token
from oslo_config import cfg
from oslo_middleware import request_id
from oslo_service import service
import exceptions as t_exc
from i18n import _
def auth_app(app):
app = request_id.RequestId(app)
if cfg.CONF.auth_strategy == 'noauth':
pass
elif cfg.CONF.auth_strategy == 'keystone':
# NOTE(zhiyuan) pkg_resources will try to load tricircle to get module
# version, passing "project" as empty string to bypass it
app = auth_token.AuthProtocol(app, {'project': ''})
else:
raise t_exc.InvalidConfigurationOption(
opt_name='auth_strategy', opt_value=cfg.CONF.auth_strategy)
return app
_launcher = None
def serve(api_service, conf, workers=1):
global _launcher
if _launcher:
raise RuntimeError(_('serve() can only be called once'))
_launcher = service.launch(conf, api_service, workers=workers)
def wait():
_launcher.wait()
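A rough sketch of wiring these helpers together, assuming a WSGI application factory named make_wsgi_app (hypothetical), an initialized CONF, and that oslo.service's wsgi Server is available in this environment:

from oslo_config import cfg
from oslo_service import wsgi

app = auth_app(make_wsgi_app())   # make_wsgi_app is a stand-in
server = wsgi.Server(cfg.CONF, 'tricircle-api', app,
                     host='0.0.0.0', port=19999)
serve(server, cfg.CONF, workers=1)
wait()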

tricircle/common/rpc.py Normal file → Executable file

@@ -1,119 +1,135 @@
-# Copyright 2015 Huawei Technologies Co., Ltd.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
-# implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from inspect import stack
-
-import neutron.common.rpc as neutron_rpc
-import neutron.common.topics as neutron_topics
-import neutron.context as neutron_context
-from oslo_config import cfg
-from oslo_log import log as logging
-import oslo_messaging
-
-from tricircle.common.serializer import CascadeSerializer as Serializer
-
-TRANSPORT = oslo_messaging.get_transport(cfg.CONF)
-LOG = logging.getLogger(__name__)
-
-
-class NetworkingRpcApi(object):
-    def __init__(self):
-        if not neutron_rpc.TRANSPORT:
-            neutron_rpc.init(cfg.CONF)
-        target = oslo_messaging.Target(topic=neutron_topics.PLUGIN,
-                                       version='1.0')
-        self.client = neutron_rpc.get_client(target)
-
-    # adapt tricircle context to neutron context
-    def _make_neutron_context(self, context):
-        return neutron_context.ContextBase(context.user, context.tenant,
-                                           auth_token=context.auth_token,
-                                           is_admin=context.is_admin,
-                                           request_id=context.request_id,
-                                           user_name=context.user_name,
-                                           tenant_name=context.tenant_name)
-
-    def update_port_up(self, context, port_id):
-        call_context = self.client.prepare()
-        return call_context.call(self._make_neutron_context(context),
-                                 'update_port_up', port_id=port_id)
-
-    def update_port_down(self, context, port_id):
-        call_context = self.client.prepare()
-        return call_context.call(self._make_neutron_context(context),
-                                 'update_port_down', port_id=port_id)
-
-
-def create_client(target):
-    return oslo_messaging.RPCClient(
-        TRANSPORT,
-        target,
-        serializer=Serializer(),
-    )
-
-
-class AutomaticRpcWrapper(object):
-    def __init__(self, send_message_callback):
-        self._send_message = send_message_callback
-
-    def _send_message(self, context, method, payload, cast=False):
-        """Cast the payload to the running cascading service instances."""
-        cctx = self._client.prepare(
-            fanout=cast,
-        )
-        LOG.debug(
-            '%(what)s at %(topic)s.%(namespace)s the message %(method)s',
-            {
-                'topic': cctx.target.topic,
-                'namespace': cctx.target.namespace,
-                'method': method,
-                'what': {True: 'Fanout notify', False: 'Method call'}[cast],
-            }
-        )
-        if cast:
-            cctx.cast(context, method, payload=payload)
-        else:
-            return cctx.call(context, method, payload=payload)
-
-    def send(self, cast):
-        """Autowrap an API call with a send_message() call
-
-        This function uses python tricks to implement a passthrough call from
-        the calling API to the cascade service
-        """
-        caller = stack()[1]
-        frame = caller[0]
-        method_name = caller[3]
-        context = frame.f_locals.get('context', {})
-
-        payload = {}
-        for varname in frame.f_code.co_varnames:
-            if varname in ("self", "context"):
-                continue
-            try:
-                payload[varname] = frame.f_locals[varname]
-            except KeyError:
-                pass
-
-        LOG.info(
-            "Farwarding request to %s(%s)",
-            method_name,
-            payload,
-        )
-        return self._send_message(context, method_name, payload, cast)
+# Copyright 2015 Huawei Technologies Co., Ltd.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+#
+# copy and modify from Nova
+
+__all__ = [
+    'init',
+    'cleanup',
+    'set_defaults',
+    'add_extra_exmods',
+    'clear_extra_exmods',
+    'get_allowed_exmods',
+    'RequestContextSerializer',
+    'get_client',
+    'get_server',
+    'get_notifier',
+]
+
+from oslo_config import cfg
+import oslo_messaging as messaging
+from oslo_serialization import jsonutils
+
+import tricircle.common.context
+import tricircle.common.exceptions
+
+CONF = cfg.CONF
+TRANSPORT = None
+NOTIFIER = None
+
+ALLOWED_EXMODS = [
+    tricircle.common.exceptions.__name__,
+]
+EXTRA_EXMODS = []
+
+
+def init(conf):
+    global TRANSPORT, NOTIFIER
+    exmods = get_allowed_exmods()
+    TRANSPORT = messaging.get_transport(conf,
+                                        allowed_remote_exmods=exmods)
+    serializer = RequestContextSerializer(JsonPayloadSerializer())
+    NOTIFIER = messaging.Notifier(TRANSPORT, serializer=serializer)
+
+
+def cleanup():
+    global TRANSPORT, NOTIFIER
+    assert TRANSPORT is not None
+    assert NOTIFIER is not None
+    TRANSPORT.cleanup()
+    TRANSPORT = NOTIFIER = None
+
+
+def set_defaults(control_exchange):
+    messaging.set_transport_defaults(control_exchange)
+
+
+def add_extra_exmods(*args):
+    EXTRA_EXMODS.extend(args)
+
+
+def clear_extra_exmods():
+    del EXTRA_EXMODS[:]
+
+
+def get_allowed_exmods():
+    return ALLOWED_EXMODS + EXTRA_EXMODS
+
+
+class JsonPayloadSerializer(messaging.NoOpSerializer):
+    @staticmethod
+    def serialize_entity(context, entity):
+        return jsonutils.to_primitive(entity, convert_instances=True)
+
+
+class RequestContextSerializer(messaging.Serializer):
+
+    def __init__(self, base):
+        self._base = base
+
+    def serialize_entity(self, context, entity):
+        if not self._base:
+            return entity
+        return self._base.serialize_entity(context, entity)
+
+    def deserialize_entity(self, context, entity):
+        if not self._base:
+            return entity
+        return self._base.deserialize_entity(context, entity)
+
+    def serialize_context(self, context):
+        return context.to_dict()
+
+    def deserialize_context(self, context):
+        return tricircle.common.context.Context.from_dict(context)
+
+
+def get_transport_url(url_str=None):
+    return messaging.TransportURL.parse(CONF, url_str)
+
+
+def get_client(target, version_cap=None, serializer=None):
+    assert TRANSPORT is not None
+    serializer = RequestContextSerializer(serializer)
+    return messaging.RPCClient(TRANSPORT,
+                               target,
+                               version_cap=version_cap,
+                               serializer=serializer)
+
+
+def get_server(target, endpoints, serializer=None):
+    assert TRANSPORT is not None
+    serializer = RequestContextSerializer(serializer)
+    return messaging.get_rpc_server(TRANSPORT,
+                                    target,
+                                    endpoints,
+                                    executor='eventlet',
+                                    serializer=serializer)
+
+
+def get_notifier(service, host=None, publisher_id=None):
+    assert NOTIFIER is not None
+    if not publisher_id:
+        publisher_id = "%s.%s" % (service, host or CONF.host)
+    return NOTIFIER.prepare(publisher_id=publisher_id)

tricircle/common/serializer.py Normal file → Executable file

@@ -14,10 +14,9 @@
 # limitations under the License.

 import six

-from neutron.api.v2.attributes import ATTR_NOT_SPECIFIED
 from oslo_messaging import Serializer

-import tricircle.common.context as t_context
+ATTR_NOT_SPECIFIED = object()


 class Mapping(object):
@@ -32,9 +31,9 @@ _SINGLETON_MAPPING = Mapping({
 })


-class CascadeSerializer(Serializer):
+class TricircleSerializer(Serializer):
     def __init__(self, base=None):
-        super(CascadeSerializer, self).__init__()
+        super(TricircleSerializer, self).__init__()
         self._base = base

     def serialize_entity(self, context, entity):
@@ -72,7 +71,13 @@ class CascadeSerializer(Serializer):
         return entity

     def serialize_context(self, context):
-        return context.to_dict()
+        if self._base is not None:
+            context = self._base.serialize_context(context)
+        return context

     def deserialize_context(self, context):
-        return t_context.Context.from_dict(context)
+        if self._base is not None:
+            context = self._base.deserialize_context(context)
+        return context


@ -1,80 +0,0 @@
# Copyright 2015 Huawei Technologies Co., Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from tricircle.common.nova_lib import rpc as nova_rpc
from tricircle.common.nova_lib import service as nova_service
from tricircle.common.nova_lib import version as nova_version
def fix_compute_service_exchange(service):
"""Fix service exchange value for nova"""
_manager = service.manager
client_paths = [
('compute_rpcapi', 'client'),
('compute_task_api', 'conductor_compute_rpcapi', 'client'),
('consoleauth_rpcapi', 'client'),
('scheduler_client', 'queryclient', 'scheduler_rpcapi', 'client'),
('proxy_client',),
('conductor_api', '_manager', 'client')
]
for client_path in client_paths:
if not hasattr(_manager, client_path[0]):
continue
obj = getattr(_manager, client_path[0])
for part in client_path[1:]:
obj = getattr(obj, part)
obj.target.exchange = 'nova'
def _patch_nova_service():
if nova_version.loaded:
return
nova_version.NOVA_PACKAGE = "tricircle"
nova_rpc.TRANSPORT.conf.set_override('control_exchange', 'nova')
nova_version.loaded = True
class NovaService(nova_service.Service):
def __init__(self, *args, **kwargs):
_patch_nova_service()
self._conductor_api = None
self._rpcserver = None
super(NovaService, self).__init__(*args, **kwargs)
@property
def conductor_api(self):
return self._conductor_api
@conductor_api.setter
def conductor_api(self, value):
self._conductor_api = value
for client in (
self._conductor_api.base_rpcapi.client,
self._conductor_api._manager.client,
):
client.target.exchange = "nova"
@property
def rpcserver(self):
return self._rpcserver
@rpcserver.setter
def rpcserver(self, value):
self._rpcserver = value
if value is not None:
value.dispatcher._target.exchange = "nova"


@ -1,30 +0,0 @@
# Copyright 2015 Huawei Technologies Co., Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from threading import Lock
class Singleton(object):
def __init__(self, factory_method):
self._factory_method = factory_method
self._instance = None
self._instanceLock = Lock()
def get_instance(self):
if self._instance is None:
with self._instanceLock:
if self._instance is None:
self._instance = self._factory_method()
return self._instance
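Example (for illustration only, not from the commit) of the removed helper's double-checked locking behaviour, using a stand-in factory:

def build_expensive_client():
    return object()  # stand-in for a costly construction

holder = Singleton(build_expensive_client)
c1 = holder.get_instance()
c2 = holder.get_instance()
assert c1 is c2  # the factory ran exactly once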

tricircle/common/topics.py Normal file → Executable file

@@ -13,13 +13,8 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

-NETWORK = 'network'
-SUBNET = 'subnet'
-PORT = 'port'
-SECURITY_GROUP = 'security_group'
-
 CREATE = 'create'
 DELETE = 'delete'
 UPDATE = 'update'

-CASCADING_SERVICE = 'k-cascading'
+TOPIC_XJOB = 'xjob'


@@ -18,13 +18,20 @@ def get_import_path(cls):
     return cls.__module__ + "." + cls.__name__


-def get_ag_name(site_name):
-    return 'ag_%s' % site_name
+def get_ag_name(pod_name):
+    return 'ag_%s' % pod_name


-def get_az_name(site_name):
-    return 'az_%s' % site_name
+def get_az_name(pod_name):
+    return 'az_%s' % pod_name


-def get_node_name(site_name):
-    return "cascade_%s" % site_name
+def get_node_name(pod_name):
+    return "cascade_%s" % pod_name
+
+
+def validate_required_fields_set(body, fields):
+    for field in fields:
+        if field not in body:
+            return False
+    return True
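Example for the new validator (illustrative values, not from the commit): it returns True only when every required field is present in the request body.

body = {'pod_name': 'pod1', 'az_name': 'az1'}
assert validate_required_fields_set(body, ['pod_name', 'az_name'])
assert not validate_required_fields_set(body, ['pod_name', 'dc_name'])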


@@ -12,6 +12,4 @@
 # License for the specific language governing permissions and limitations
 # under the License.

-import pbr.version
-
-version_info = pbr.version.VersionInfo('tricircle')
+version_info = "tricircle 1.0"

tricircle/common/xrpcapi.py Executable file

@ -0,0 +1,74 @@
# Copyright 2015 Huawei Technologies Co., Ltd.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Client side of the job daemon RPC API.
"""
from oslo_config import cfg
from oslo_log import log as logging
import oslo_messaging as messaging
import rpc
from serializer import TricircleSerializer as Serializer
import topics
CONF = cfg.CONF
rpcapi_cap_opt = cfg.StrOpt('xjobapi',
default='1.0',
help='Set a version cap for messages sent to the '
'xjob api in any service')
CONF.register_opt(rpcapi_cap_opt, 'upgrade_levels')
LOG = logging.getLogger(__name__)
class XJobAPI(object):
"""Client side of the xjob rpc API.
API version history:
* 1.0 - Initial version.
"""
VERSION_ALIASES = {
'mitaka': '1.0',
}
def __init__(self):
super(XJobAPI, self).__init__()
rpc.init(CONF)
target = messaging.Target(topic=topics.TOPIC_XJOB, version='1.0')
upgrade_level = CONF.upgrade_levels.xjobapi
version_cap = 1.0
if upgrade_level == 'auto':
version_cap = self._determine_version_cap(target)
else:
version_cap = self.VERSION_ALIASES.get(upgrade_level,
upgrade_level)
serializer = Serializer()
self.client = rpc.get_client(target,
version_cap=version_cap,
serializer=serializer)
# placeholder for future version compatibility negotiation
def _determine_version_cap(self, target):
version_cap = 1.0
return version_cap
def test_rpc(self, ctxt, payload):
return self.client.call(ctxt, 'test_rpc', payload=payload)
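A hypothetical round trip through this client (not from the commit); it assumes a messaging transport configured in CONF, a running xjob server listening on topics.TOPIC_XJOB, and ctx being a tricircle context object:

api = XJobAPI()
# the server side is expected to handle 'test_rpc' and return a reply
reply = api.test_rpc(ctx, 'ping')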

tricircle/db/api.py Normal file

@ -0,0 +1,168 @@
# Copyright 2015 Huawei Technologies Co., Ltd.
# All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from tricircle.db import core
from tricircle.db import models
def create_pod(context, pod_dict):
with context.session.begin():
return core.create_resource(context, models.Pod, pod_dict)
def delete_pod(context, pod_id):
with context.session.begin():
return core.delete_resource(context, models.Pod, pod_id)
def get_pod(context, pod_id):
with context.session.begin():
return core.get_resource(context, models.Pod, pod_id)
def list_pods(context, filters=None, sorts=None):
with context.session.begin():
return core.query_resource(context, models.Pod, filters or [],
sorts or [])
def update_pod(context, pod_id, update_dict):
with context.session.begin():
return core.update_resource(context, models.Pod, pod_id, update_dict)
def create_pod_service_configuration(context, config_dict):
with context.session.begin():
return core.create_resource(context, models.PodServiceConfiguration,
config_dict)
def delete_pod_service_configuration(context, config_id):
with context.session.begin():
return core.delete_resource(context, models.PodServiceConfiguration,
config_id)
def get_pod_service_configuration(context, config_id):
with context.session.begin():
return core.get_resource(context, models.PodServiceConfiguration,
config_id)
def list_pod_service_configurations(context, filters=None, sorts=None):
with context.session.begin():
return core.query_resource(context, models.PodServiceConfiguration,
filters or [], sorts or [])
def update_pod_service_configuration(context, config_id, update_dict):
with context.session.begin():
return core.update_resource(
context, models.PodServiceConfiguration, config_id, update_dict)
def get_bottom_mappings_by_top_id(context, top_id, resource_type):
"""Get resource id and pod name on bottom
:param context: context object
:param top_id: resource id on top
:return: a list of tuple (pod dict, bottom_id)
"""
route_filters = [{'key': 'top_id', 'comparator': 'eq', 'value': top_id},
{'key': 'resource_type',
'comparator': 'eq',
'value': resource_type}]
mappings = []
with context.session.begin():
routes = core.query_resource(
context, models.ResourceRouting, route_filters, [])
for route in routes:
if not route['bottom_id']:
continue
pod = core.get_resource(context, models.Pod, route['pod_id'])
mappings.append((pod, route['bottom_id']))
return mappings
def get_bottom_mappings_by_tenant_pod(context,
tenant_id,
pod_id,
resource_type):
"""Get resource routing for specific tenant and pod
:param context: context object
:param tenant_id: tenant id to look up
:param pod_id: pod to look up
:param resource_type: specific resource
:return: a dict {top_id: route}
"""
route_filters = [{'key': 'pod_id',
'comparator': 'eq',
'value': pod_id},
{'key': 'project_id',
'comparator': 'eq',
'value': tenant_id},
{'key': 'resource_type',
'comparator': 'eq',
'value': resource_type}]
routings = {}
with context.session.begin():
routes = core.query_resource(
context, models.ResourceRouting, route_filters, [])
for _route in routes:
if not _route['bottom_id']:
continue
routings[_route['top_id']] = _route
return routings
def get_next_bottom_pod(context, current_pod_id=None):
pods = list_pods(context, sorts=[(models.Pod.pod_id, True)])
# NOTE(zhiyuan) number of pods is small, just traverse to filter top pod
pods = [pod for pod in pods if pod['az_name']]
for index, pod in enumerate(pods):
if not current_pod_id:
return pod
if pod['pod_id'] == current_pod_id and index < len(pods) - 1:
return pods[index + 1]
return None
def get_top_pod(context):
filters = [{'key': 'az_name', 'comparator': 'eq', 'value': ''}]
pods = list_pods(context, filters=filters)
# at most one top pod should exist
for pod in pods:
if (pod['pod_name'] != '') and \
(pod['az_name'] == ''):
return pod
return None
def get_pod_by_name(context, pod_name):
filters = [{'key': 'pod_name', 'comparator': 'eq', 'value': pod_name}]
pods = list_pods(context, filters=filters)
# at most one pod with this name should exist
for pod in pods:
if pod['pod_name'] == pod_name:
return pod
return None
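A hypothetical lookup with the mapping helpers above (not from the commit); ctx is assumed to be a tricircle context object and the ids are illustrative:

mappings = get_bottom_mappings_by_top_id(ctx, 'top-server-uuid', 'server')
for pod, bottom_id in mappings:
    # e.g. prints "pod1 bottom-server-uuid"
    print('%s %s' % (pod['pod_name'], bottom_id))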


@@ -16,13 +16,20 @@
 from oslo_config import cfg
 import oslo_db.options as db_options
-from oslo_db.sqlalchemy import session as db_session
+import oslo_db.sqlalchemy.session as db_session
 from oslo_utils import strutils
 import sqlalchemy as sql
 from sqlalchemy.ext import declarative
 from sqlalchemy.inspection import inspect

-import tricircle.db.exception as db_exception
+from tricircle.common import exceptions
+
+db_opts = [
+    cfg.StrOpt('tricircle_db_connection',
+               help='db connection string for tricircle'),
+]
+cfg.CONF.register_opts(db_opts)

 _engine_facade = None
 ModelBase = declarative.declarative_base()
@@ -34,7 +41,7 @@ def _filter_query(model, query, filters):
     :param model:
     :param query:
     :param filters: list of filter dict with key 'key', 'comparator', 'value'
-    like {'key': 'site_id', 'comparator': 'eq', 'value': 'test_site_uuid'}
+    like {'key': 'pod_id', 'comparator': 'eq', 'value': 'test_pod_uuid'}
     :return:
     """
     filter_dict = {}
@@ -60,15 +67,15 @@ def _get_engine_facade():
     global _engine_facade

     if not _engine_facade:
-        _engine_facade = db_session.EngineFacade.from_config(cfg.CONF)
+        t_connection = cfg.CONF.tricircle_db_connection
+        _engine_facade = db_session.EngineFacade(t_connection, _conf=cfg.CONF)
     return _engine_facade


 def _get_resource(context, model, pk_value):
     res_obj = context.session.query(model).get(pk_value)
     if not res_obj:
-        raise db_exception.ResourceNotFound(model, pk_value)
+        raise exceptions.ResourceNotFound(model, pk_value)
     return res_obj
@@ -86,6 +93,14 @@ def delete_resource(context, model, pk_value):
     context.session.delete(res_obj)


+def delete_resources(context, model, filters, delete_all=False):
+    # passing empty filter requires delete_all confirmation
+    assert filters or delete_all
+    query = context.session.query(model)
+    query = _filter_query(model, query, filters)
+    query.delete(synchronize_session=False)
+
+
 def get_engine():
     return _get_engine_facade().get_engine()
@@ -104,10 +119,13 @@ def initialize():
         connection='sqlite:///:memory:')


-def query_resource(context, model, filters):
+def query_resource(context, model, filters, sorts):
     query = context.session.query(model)
-    objs = _filter_query(model, query, filters)
-    return [obj.to_dict() for obj in objs]
+    query = _filter_query(model, query, filters)
+    for sort_key, sort_dir in sorts:
+        sort_dir_func = sql.asc if sort_dir else sql.desc
+        query = query.order_by(sort_dir_func(sort_key))
+    return [obj.to_dict() for obj in query]


 def update_resource(context, model, pk_value, update_dict):


@ -1,70 +0,0 @@
# Copyright 2015 Huawei Technologies Co., Ltd.
# All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
class EndpointNotAvailable(Exception):
def __init__(self, service, url):
self.service = service
self.url = url
message = "Endpoint %(url)s for %(service)s is not available" % {
'url': url,
'service': service
}
super(EndpointNotAvailable, self).__init__(message)
class EndpointNotUnique(Exception):
def __init__(self, site, service):
self.site = site
self.service = service
message = "Endpoint for %(service)s in %(site)s not unique" % {
'site': site,
'service': service
}
super(EndpointNotUnique, self).__init__(message)
class EndpointNotFound(Exception):
def __init__(self, site, service):
self.site = site
self.service = service
message = "Endpoint for %(service)s in %(site)s not found" % {
'site': site,
'service': service
}
super(EndpointNotFound, self).__init__(message)
class ResourceNotFound(Exception):
def __init__(self, model, unique_key):
resource_type = model.__name__.lower()
self.resource_type = resource_type
self.unique_key = unique_key
message = "Could not find %(resource_type)s: %(unique_key)s" % {
'resource_type': resource_type,
'unique_key': unique_key
}
super(ResourceNotFound, self).__init__(message)
class ResourceNotSupported(Exception):
def __init__(self, resource, method):
self.resource = resource
self.method = method
message = "%(method)s method not supported for %(resource)s" % {
'resource': resource,
'method': method
}
super(ResourceNotSupported, self).__init__(message)


@@ -22,35 +22,52 @@ def upgrade(migrate_engine):
     meta = sql.MetaData()
     meta.bind = migrate_engine

-    cascaded_sites = sql.Table(
-        'cascaded_sites', meta,
-        sql.Column('site_id', sql.String(length=64), primary_key=True),
-        sql.Column('site_name', sql.String(length=64), unique=True,
+    cascaded_pods = sql.Table(
+        'cascaded_pods', meta,
+        sql.Column('pod_id', sql.String(length=36), primary_key=True),
+        sql.Column('pod_name', sql.String(length=255), unique=True,
                    nullable=False),
-        sql.Column('az_id', sql.String(length=64), nullable=False),
+        sql.Column('pod_az_name', sql.String(length=255), nullable=True),
+        sql.Column('dc_name', sql.String(length=255), nullable=True),
+        sql.Column('az_name', sql.String(length=255), nullable=False),
         mysql_engine='InnoDB',
         mysql_charset='utf8')
-    cascaded_site_service_configuration = sql.Table(
-        'cascaded_site_service_configuration', meta,
+
+    cascaded_pod_service_configuration = sql.Table(
+        'cascaded_pod_service_configuration', meta,
         sql.Column('service_id', sql.String(length=64), primary_key=True),
-        sql.Column('site_id', sql.String(length=64), nullable=False),
+        sql.Column('pod_id', sql.String(length=64), nullable=False),
         sql.Column('service_type', sql.String(length=64), nullable=False),
         sql.Column('service_url', sql.String(length=512), nullable=False),
         mysql_engine='InnoDB',
         mysql_charset='utf8')
-    cascaded_site_services = sql.Table(
-        'cascaded_site_services', meta,
-        sql.Column('site_id', sql.String(length=64), primary_key=True),
+
+    pod_binding = sql.Table(
+        'pod_binding', meta,
+        sql.Column('id', sql.String(36), primary_key=True),
+        sql.Column('tenant_id', sql.String(length=255), nullable=False),
+        sql.Column('pod_id', sql.String(length=255), nullable=False),
+        sql.Column('created_at', sql.DateTime),
+        sql.Column('updated_at', sql.DateTime),
+        migrate.UniqueConstraint(
+            'tenant_id', 'pod_id',
+            name='pod_binding0tenant_id0pod_id'),
         mysql_engine='InnoDB',
         mysql_charset='utf8')

-    tables = [cascaded_sites, cascaded_site_service_configuration,
-              cascaded_site_services]
+    tables = [cascaded_pods, cascaded_pod_service_configuration,
+              pod_binding]
     for table in tables:
         table.create()

-    fkey = {'columns': [cascaded_site_service_configuration.c.site_id],
-            'references': [cascaded_sites.c.site_id]}
+    fkey = {'columns': [cascaded_pod_service_configuration.c.pod_id],
+            'references': [cascaded_pods.c.pod_id]}
+    migrate.ForeignKeyConstraint(columns=fkey['columns'],
+                                 refcolumns=fkey['references'],
+                                 name=fkey.get('name')).create()
+
+    fkey = {'columns': [pod_binding.c.pod_id],
+            'references': [cascaded_pods.c.pod_id]}
     migrate.ForeignKeyConstraint(columns=fkey['columns'],
                                  refcolumns=fkey['references'],
                                  name=fkey.get('name')).create()


@ -0,0 +1,196 @@
# Copyright 2015 Huawei Technologies Co., Ltd.
# All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import migrate
import sqlalchemy as sql
from sqlalchemy.dialects import mysql
def MediumText():
return sql.Text().with_variant(mysql.MEDIUMTEXT(), 'mysql')
def upgrade(migrate_engine):
meta = sql.MetaData()
meta.bind = migrate_engine
aggregates = sql.Table(
'aggregates', meta,
sql.Column('id', sql.Integer, primary_key=True),
sql.Column('name', sql.String(255), unique=True),
sql.Column('created_at', sql.DateTime),
sql.Column('updated_at', sql.DateTime),
mysql_engine='InnoDB',
mysql_charset='utf8')
aggregate_metadata = sql.Table(
'aggregate_metadata', meta,
sql.Column('id', sql.Integer, primary_key=True),
sql.Column('key', sql.String(255), nullable=False),
sql.Column('value', sql.String(255), nullable=False),
sql.Column('aggregate_id', sql.Integer, nullable=False),
sql.Column('created_at', sql.DateTime),
sql.Column('updated_at', sql.DateTime),
migrate.UniqueConstraint(
'aggregate_id', 'key',
name='uniq_aggregate_metadata0aggregate_id0key'),
mysql_engine='InnoDB',
mysql_charset='utf8')
instance_types = sql.Table(
'instance_types', meta,
sql.Column('id', sql.Integer, primary_key=True),
sql.Column('name', sql.String(255), unique=True),
sql.Column('memory_mb', sql.Integer, nullable=False),
sql.Column('vcpus', sql.Integer, nullable=False),
sql.Column('root_gb', sql.Integer),
sql.Column('ephemeral_gb', sql.Integer),
sql.Column('flavorid', sql.String(255), unique=True),
sql.Column('swap', sql.Integer, nullable=False, default=0),
sql.Column('rxtx_factor', sql.Float, default=1),
sql.Column('vcpu_weight', sql.Integer),
sql.Column('disabled', sql.Boolean, default=False),
sql.Column('is_public', sql.Boolean, default=True),
sql.Column('created_at', sql.DateTime),
sql.Column('updated_at', sql.DateTime),
mysql_engine='InnoDB',
mysql_charset='utf8')
instance_type_projects = sql.Table(
'instance_type_projects', meta,
sql.Column('id', sql.Integer, primary_key=True),
sql.Column('instance_type_id', sql.Integer, nullable=False),
sql.Column('project_id', sql.String(255)),
sql.Column('created_at', sql.DateTime),
sql.Column('updated_at', sql.DateTime),
migrate.UniqueConstraint(
'instance_type_id', 'project_id',
name='uniq_instance_type_projects0instance_type_id0project_id'),
mysql_engine='InnoDB',
mysql_charset='utf8')
instance_type_extra_specs = sql.Table(
'instance_type_extra_specs', meta,
sql.Column('id', sql.Integer, primary_key=True),
sql.Column('key', sql.String(255)),
sql.Column('value', sql.String(255)),
sql.Column('instance_type_id', sql.Integer, nullable=False),
sql.Column('created_at', sql.DateTime),
sql.Column('updated_at', sql.DateTime),
migrate.UniqueConstraint(
'instance_type_id', 'key',
name='uniq_instance_type_extra_specs0instance_type_id0key'),
mysql_engine='InnoDB',
mysql_charset='utf8')
enum = sql.Enum('ssh', 'x509', metadata=meta, name='keypair_types')
enum.create()
key_pairs = sql.Table(
'key_pairs', meta,
sql.Column('id', sql.Integer, primary_key=True, nullable=False),
sql.Column('name', sql.String(255), nullable=False),
sql.Column('user_id', sql.String(255)),
sql.Column('fingerprint', sql.String(255)),
sql.Column('public_key', MediumText()),
sql.Column('type', enum, nullable=False, server_default='ssh'),
sql.Column('created_at', sql.DateTime),
sql.Column('updated_at', sql.DateTime),
migrate.UniqueConstraint(
'user_id', 'name',
name='uniq_key_pairs0user_id0name'),
mysql_engine='InnoDB',
mysql_charset='utf8')
quotas = sql.Table(
'quotas', meta,
sql.Column('id', sql.Integer, primary_key=True),
sql.Column('project_id', sql.String(255)),
sql.Column('resource', sql.String(255), nullable=False),
sql.Column('hard_limit', sql.Integer),
sql.Column('created_at', sql.DateTime),
sql.Column('updated_at', sql.DateTime),
migrate.UniqueConstraint(
'project_id', 'resource',
name='uniq_quotas0project_id0resource'),
mysql_engine='InnoDB',
mysql_charset='utf8')
volume_types = sql.Table(
'volume_types', meta,
sql.Column('id', sql.String(36), primary_key=True),
sql.Column('name', sql.String(255), unique=True),
sql.Column('description', sql.String(255)),
sql.Column('qos_specs_id', sql.String(36)),
sql.Column('is_public', sql.Boolean, default=True),
sql.Column('created_at', sql.DateTime),
sql.Column('updated_at', sql.DateTime),
mysql_engine='InnoDB',
mysql_charset='utf8')
quality_of_service_specs = sql.Table(
'quality_of_service_specs', meta,
sql.Column('id', sql.String(36), primary_key=True),
sql.Column('specs_id', sql.String(36)),
sql.Column('key', sql.String(255)),
sql.Column('value', sql.String(255)),
sql.Column('created_at', sql.DateTime),
sql.Column('updated_at', sql.DateTime),
mysql_engine='InnoDB',
mysql_charset='utf8')
cascaded_pods_resource_routing = sql.Table(
'cascaded_pods_resource_routing', meta,
sql.Column('id', sql.Integer, primary_key=True),
sql.Column('top_id', sql.String(length=127), nullable=False),
sql.Column('bottom_id', sql.String(length=36)),
sql.Column('pod_id', sql.String(length=64), nullable=False),
sql.Column('project_id', sql.String(length=36)),
sql.Column('resource_type', sql.String(length=64), nullable=False),
sql.Column('created_at', sql.DateTime),
sql.Column('updated_at', sql.DateTime),
mysql_engine='InnoDB',
mysql_charset='utf8')
tables = [aggregates, aggregate_metadata, instance_types,
instance_type_projects, instance_type_extra_specs, key_pairs,
quotas, volume_types, quality_of_service_specs,
cascaded_pods_resource_routing]
for table in tables:
table.create()
cascaded_pods = sql.Table('cascaded_pods', meta, autoload=True)
fkeys = [{'columns': [instance_type_projects.c.instance_type_id],
'references': [instance_types.c.id]},
{'columns': [instance_type_extra_specs.c.instance_type_id],
'references': [instance_types.c.id]},
{'columns': [volume_types.c.qos_specs_id],
'references': [quality_of_service_specs.c.id]},
{'columns': [quality_of_service_specs.c.specs_id],
'references': [quality_of_service_specs.c.id]},
{'columns': [aggregate_metadata.c.aggregate_id],
'references': [aggregates.c.id]},
{'columns': [cascaded_pods_resource_routing.c.pod_id],
'references': [cascaded_pods.c.pod_id]}]
for fkey in fkeys:
migrate.ForeignKeyConstraint(columns=fkey['columns'],
refcolumns=fkey['references'],
name=fkey.get('name')).create()
def downgrade(migrate_engine):
raise NotImplementedError('downgrade is not supported')


@ -14,83 +14,280 @@
# under the License. # under the License.
from oslo_db.sqlalchemy import models
import sqlalchemy as sql import sqlalchemy as sql
from sqlalchemy.dialects import mysql
from sqlalchemy import schema
from tricircle.db import core from tricircle.db import core
def create_site(context, site_dict): def MediumText():
with context.session.begin(): return sql.Text().with_variant(mysql.MEDIUMTEXT(), 'mysql')
return core.create_resource(context, Site, site_dict)
def delete_site(context, site_id): # Resource Model
with context.session.begin(): class Aggregate(core.ModelBase, core.DictBase, models.TimestampMixin):
return core.delete_resource(context, Site, site_id) """Represents a cluster of hosts that exists in this zone."""
__tablename__ = 'aggregates'
attributes = ['id', 'name', 'created_at', 'updated_at']
id = sql.Column(sql.Integer, primary_key=True)
name = sql.Column(sql.String(255), unique=True)
def get_site(context, site_id): class AggregateMetadata(core.ModelBase, core.DictBase, models.TimestampMixin):
with context.session.begin(): """Represents a metadata key/value pair for an aggregate."""
return core.get_resource(context, Site, site_id) __tablename__ = 'aggregate_metadata'
__table_args__ = (
sql.Index('aggregate_metadata_key_idx', 'key'),
schema.UniqueConstraint(
'aggregate_id', 'key',
name='uniq_aggregate_metadata0aggregate_id0key'),
)
attributes = ['id', 'key', 'value', 'aggregate_id',
'created_at', 'updated_at']
id = sql.Column(sql.Integer, primary_key=True)
key = sql.Column(sql.String(255), nullable=False)
value = sql.Column(sql.String(255), nullable=False)
aggregate_id = sql.Column(sql.Integer,
sql.ForeignKey('aggregates.id'), nullable=False)
def list_sites(context, filters): class InstanceTypes(core.ModelBase, core.DictBase, models.TimestampMixin):
with context.session.begin(): """Represents possible flavors for instances.
return core.query_resource(context, Site, filters)
Note: instance_type and flavor are synonyms and the term instance_type is
deprecated and in the process of being removed.
"""
__tablename__ = 'instance_types'
attributes = ['id', 'name', 'memory_mb', 'vcpus', 'root_gb',
'ephemeral_gb', 'flavorid', 'swap', 'rxtx_factor',
'vcpu_weight', 'disabled', 'is_public', 'created_at',
'updated_at']
# Internal only primary key/id
id = sql.Column(sql.Integer, primary_key=True)
name = sql.Column(sql.String(255), unique=True)
memory_mb = sql.Column(sql.Integer, nullable=False)
vcpus = sql.Column(sql.Integer, nullable=False)
root_gb = sql.Column(sql.Integer)
ephemeral_gb = sql.Column(sql.Integer)
# Public facing id will be renamed public_id
flavorid = sql.Column(sql.String(255), unique=True)
swap = sql.Column(sql.Integer, nullable=False, default=0)
rxtx_factor = sql.Column(sql.Float, default=1)
vcpu_weight = sql.Column(sql.Integer)
disabled = sql.Column(sql.Boolean, default=False)
is_public = sql.Column(sql.Boolean, default=True)
def update_site(context, site_id, update_dict): class InstanceTypeProjects(core.ModelBase, core.DictBase,
with context.session.begin(): models.TimestampMixin):
return core.update_resource(context, Site, site_id, update_dict) """Represent projects associated instance_types."""
__tablename__ = 'instance_type_projects'
__table_args__ = (schema.UniqueConstraint(
'instance_type_id', 'project_id',
name='uniq_instance_type_projects0instance_type_id0project_id'),
)
attributes = ['id', 'instance_type_id', 'project_id', 'created_at',
'updated_at']
id = sql.Column(sql.Integer, primary_key=True)
instance_type_id = sql.Column(sql.Integer,
sql.ForeignKey('instance_types.id'),
nullable=False)
project_id = sql.Column(sql.String(255))
def create_site_service_configuration(context, config_dict): class InstanceTypeExtraSpecs(core.ModelBase, core.DictBase,
with context.session.begin(): models.TimestampMixin):
return core.create_resource(context, SiteServiceConfiguration, """Represents additional specs as key/value pairs for an instance_type."""
config_dict) __tablename__ = 'instance_type_extra_specs'
__table_args__ = (
sql.Index('instance_type_extra_specs_instance_type_id_key_idx',
'instance_type_id', 'key'),
schema.UniqueConstraint(
'instance_type_id', 'key',
name='uniq_instance_type_extra_specs0instance_type_id0key'),
{'mysql_collate': 'utf8_bin'},
)
attributes = ['id', 'key', 'value', 'instance_type_id', 'created_at',
'updated_at']
id = sql.Column(sql.Integer, primary_key=True)
key = sql.Column(sql.String(255))
value = sql.Column(sql.String(255))
instance_type_id = sql.Column(sql.Integer,
sql.ForeignKey('instance_types.id'),
nullable=False)
def delete_site_service_configuration(context, config_id): class KeyPair(core.ModelBase, core.DictBase, models.TimestampMixin):
with context.session.begin(): """Represents a public key pair for ssh / WinRM."""
return core.delete_resource(context, __tablename__ = 'key_pairs'
SiteServiceConfiguration, config_id) __table_args__ = (
schema.UniqueConstraint('user_id', 'name',
name='uniq_key_pairs0user_id0name'),
)
attributes = ['id', 'name', 'user_id', 'fingerprint', 'public_key', 'type',
'created_at', 'updated_at']
id = sql.Column(sql.Integer, primary_key=True, nullable=False)
name = sql.Column(sql.String(255), nullable=False)
user_id = sql.Column(sql.String(255))
fingerprint = sql.Column(sql.String(255))
public_key = sql.Column(MediumText())
type = sql.Column(sql.Enum('ssh', 'x509', name='keypair_types'),
nullable=False, server_default='ssh')
def list_site_service_configurations(context, filters): class Quota(core.ModelBase, core.DictBase, models.TimestampMixin):
with context.session.begin(): """Represents a single quota override for a project.
return core.query_resource(context, SiteServiceConfiguration, filters)
If there is no row for a given project id and resource, then the
default for the quota class is used. If there is no row for a
given quota class and resource, then the default for the
deployment is used. If the row is present but the hard limit is
Null, then the resource is unlimited.
"""
__tablename__ = 'quotas'
__table_args__ = (
schema.UniqueConstraint('project_id', 'resource',
name='uniq_quotas0project_id0resource'),
)
attributes = ['id', 'project_id', 'resource', 'hard_limit',
'created_at', 'updated_at']
id = sql.Column(sql.Integer, primary_key=True)
project_id = sql.Column(sql.String(255))
resource = sql.Column(sql.String(255), nullable=False)
hard_limit = sql.Column(sql.Integer)
def update_site_service_configuration(context, config_id, update_dict): class VolumeTypes(core.ModelBase, core.DictBase, models.TimestampMixin):
with context.session.begin(): """Represent possible volume_types of volumes offered."""
return core.update_resource( __tablename__ = "volume_types"
context, SiteServiceConfiguration, config_id, update_dict) attributes = ['id', 'name', 'description', 'qos_specs_id', 'is_public',
'created_at', 'updated_at']
id = sql.Column(sql.String(36), primary_key=True)
name = sql.Column(sql.String(255), unique=True)
description = sql.Column(sql.String(255))
# A reference to qos_specs entity
qos_specs_id = sql.Column(sql.String(36),
sql.ForeignKey('quality_of_service_specs.id'))
is_public = sql.Column(sql.Boolean, default=True)
class Site(core.ModelBase, core.DictBase): class QualityOfServiceSpecs(core.ModelBase, core.DictBase,
__tablename__ = 'cascaded_sites' models.TimestampMixin):
attributes = ['site_id', 'site_name', 'az_id'] """Represents QoS specs as key/value pairs.
site_id = sql.Column('site_id', sql.String(length=64), primary_key=True)
site_name = sql.Column('site_name', sql.String(length=64), unique=True, QoS specs is standalone entity that can be associated/disassociated
nullable=False) with volume types (one to many relation). Adjacency list relationship
az_id = sql.Column('az_id', sql.String(length=64), nullable=False) pattern is used in this model in order to represent following hierarchical
data with in flat table, e.g, following structure
qos-specs-1 'Rate-Limit'
|
+------> consumer = 'front-end'
+------> total_bytes_sec = 1048576
+------> total_iops_sec = 500
qos-specs-2 'QoS_Level1'
|
+------> consumer = 'back-end'
+------> max-iops = 1000
+------> min-iops = 200
is represented by:
id specs_id key value
------ -------- ------------- -----
UUID-1 NULL QoSSpec_Name Rate-Limit
UUID-2 UUID-1 consumer front-end
UUID-3 UUID-1 total_bytes_sec 1048576
UUID-4 UUID-1 total_iops_sec 500
UUID-5 NULL QoSSpec_Name QoS_Level1
UUID-6 UUID-5 consumer back-end
UUID-7 UUID-5 max-iops 1000
UUID-8 UUID-5 min-iops 200
"""
__tablename__ = 'quality_of_service_specs'
attributes = ['id', 'specs_id', 'key', 'value', 'created_at', 'updated_at']
id = sql.Column(sql.String(36), primary_key=True)
specs_id = sql.Column(sql.String(36), sql.ForeignKey(id))
key = sql.Column(sql.String(255))
value = sql.Column(sql.String(255))
class SiteServiceConfiguration(core.ModelBase, core.DictBase):
    __tablename__ = 'cascaded_site_service_configuration'
    attributes = ['service_id', 'site_id', 'service_type', 'service_url']
    service_id = sql.Column('service_id', sql.String(length=64),
                            primary_key=True)
    site_id = sql.Column('site_id', sql.String(length=64),
                         sql.ForeignKey('cascaded_sites.site_id'),
                         nullable=False)
    service_type = sql.Column('service_type', sql.String(length=64),
                              nullable=False)
    service_url = sql.Column('service_url', sql.String(length=512),
                             nullable=False)


# Pod Model
class Pod(core.ModelBase, core.DictBase):
    __tablename__ = 'cascaded_pods'
    attributes = ['pod_id', 'pod_name', 'pod_az_name', 'dc_name', 'az_name']
    pod_id = sql.Column('pod_id', sql.String(length=36), primary_key=True)
    pod_name = sql.Column('pod_name', sql.String(length=255), unique=True,
                          nullable=False)
    pod_az_name = sql.Column('pod_az_name', sql.String(length=255),
                             nullable=True)
    dc_name = sql.Column('dc_name', sql.String(length=255), nullable=True)
    az_name = sql.Column('az_name', sql.String(length=255), nullable=False)


class PodServiceConfiguration(core.ModelBase, core.DictBase):
    __tablename__ = 'cascaded_pod_service_configuration'
    attributes = ['service_id', 'pod_id', 'service_type', 'service_url']
    service_id = sql.Column('service_id', sql.String(length=64),
                            primary_key=True)
    pod_id = sql.Column('pod_id', sql.String(length=64),
                        sql.ForeignKey('cascaded_pods.pod_id'),
                        nullable=False)
    service_type = sql.Column('service_type', sql.String(length=64),
                              nullable=False)
    service_url = sql.Column('service_url', sql.String(length=512),
                             nullable=False)
class SiteService(core.ModelBase, core.DictBase):
    __tablename__ = 'cascaded_site_services'
    attributes = ['site_id']
    site_id = sql.Column('site_id', sql.String(length=64), primary_key=True)


# Tenant and pod binding model
class PodBinding(core.ModelBase, core.DictBase, models.TimestampMixin):
    __tablename__ = 'pod_binding'
    __table_args__ = (
        schema.UniqueConstraint(
            'tenant_id', 'pod_id',
            name='pod_binding0tenant_id0pod_id'),
    )
    attributes = ['id', 'tenant_id', 'pod_id',
                  'created_at', 'updated_at']
    id = sql.Column(sql.String(36), primary_key=True)
    tenant_id = sql.Column('tenant_id', sql.String(36), nullable=False)
    pod_id = sql.Column('pod_id', sql.String(36),
                        sql.ForeignKey('cascaded_pods.pod_id'),
                        nullable=False)
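For illustration, resolving the pod a tenant is bound to follows the filter convention used throughout this module (a sketch under that assumption, not code from this commit):

def get_bound_pod_id(context, tenant_id):
    filters = [{'key': 'tenant_id', 'comparator': 'eq', 'value': tenant_id}]
    with context.session.begin():
        bindings = core.query_resource(context, PodBinding, filters, [])
        # a tenant is bound to at most one pod per the unique constraint
        return bindings[0]['pod_id'] if bindings else None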
# Routing Model
class ResourceRouting(core.ModelBase, core.DictBase, models.TimestampMixin):
    __tablename__ = 'cascaded_pods_resource_routing'
    __table_args__ = (
        schema.UniqueConstraint(
            'top_id', 'pod_id',
            name='cascaded_pods_resource_routing0top_id0pod_id'),
    )
    attributes = ['id', 'top_id', 'bottom_id', 'pod_id', 'project_id',
                  'resource_type', 'created_at', 'updated_at']
    id = sql.Column('id', sql.Integer, primary_key=True)
    top_id = sql.Column('top_id', sql.String(length=127), nullable=False)
    bottom_id = sql.Column('bottom_id', sql.String(length=36))
    pod_id = sql.Column('pod_id', sql.String(length=64),
                        sql.ForeignKey('cascaded_pods.pod_id'),
                        nullable=False)
    project_id = sql.Column('project_id', sql.String(length=36))
    resource_type = sql.Column('resource_type', sql.String(length=64),
                               nullable=False)
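The routing table lets the plugin map a top-level resource id to its bottom pod and id on demand instead of keeping synchronized state. A sketch of that lookup (compare db_api.get_bottom_mappings_by_top_id used by the Neutron plugin below; this helper is illustrative, not part of this commit):

def map_top_to_bottom(context, top_id, resource_type):
    filters = [{'key': 'top_id', 'comparator': 'eq', 'value': top_id},
               {'key': 'resource_type', 'comparator': 'eq',
                'value': resource_type}]
    with context.session.begin():
        routes = core.query_resource(context, ResourceRouting, filters, [])
        # skip routing entries whose bottom resource is not yet created
        return [(r['pod_id'], r['bottom_id'])
                for r in routes if r['bottom_id']]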

tricircle/db/opts.py (new file)
@@ -0,0 +1,22 @@
# Copyright 2015 Huawei Technologies Co., Ltd.
# All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import tricircle.db.core
def list_opts():
return [
('DEFAULT', tricircle.db.core.db_opts),
]
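list_opts is the standard hook that oslo-config-generator discovers, normally through an 'oslo.config.opts' entry point declared in setup.cfg; equivalent manual use would look roughly like this (sketch only):

from tricircle.db import opts

for group, options in opts.list_opts():
    for opt in options:
        # each opt is an oslo.config Opt with name/help metadata
        print('%s: %s (%s)' % (group, opt.name, opt.help))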

@@ -1,191 +0,0 @@ (file removed)
# Copyright 2015 Huawei Technologies Co., Ltd.
# All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import glanceclient as g_client
import glanceclient.exc as g_exceptions
from neutronclient.common import exceptions as q_exceptions
from neutronclient.neutron import client as q_client
from novaclient import client as n_client
from novaclient import exceptions as n_exceptions
from oslo_config import cfg
from oslo_log import log as logging
from requests import exceptions as r_exceptions
from tricircle.db import exception as exception
client_opts = [
cfg.IntOpt('glance_timeout',
default=60,
help='timeout for glance client in seconds'),
cfg.IntOpt('neutron_timeout',
default=60,
help='timeout for neutron client in seconds'),
cfg.IntOpt('nova_timeout',
default=60,
help='timeout for nova client in seconds'),
]
cfg.CONF.register_opts(client_opts, group='client')
LIST, CREATE, DELETE, ACTION = 1, 2, 4, 8
operation_index_map = {'list': LIST, 'create': CREATE,
'delete': DELETE, 'action': ACTION}
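The integers form a bitmask, so a handler can declare several supported operations per resource (e.g. LIST | CREATE | DELETE | ACTION below); a capability check is then a single AND (sketch, not part of this commit):

def supports(handle, resource, operation):
    # operation is one of 'list', 'create', 'delete', 'action'
    required = operation_index_map[operation]
    return bool(handle.support_resource.get(resource, 0) & required)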
LOG = logging.getLogger(__name__)
def _transform_filters(filters):
filter_dict = {}
for query_filter in filters:
# only eq filter supported at first
if query_filter['comparator'] != 'eq':
continue
key = query_filter['key']
value = query_filter['value']
filter_dict[key] = value
return filter_dict
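For example, only 'eq' comparators survive the transformation; anything else is silently dropped:

# _transform_filters([{'key': 'name', 'comparator': 'eq', 'value': 'net1'},
#                     {'key': 'admin_state_up', 'comparator': 'neq',
#                      'value': True}])
# => {'name': 'net1'}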
class ResourceHandle(object):
def __init__(self, auth_url):
self.auth_url = auth_url
self.endpoint_url = None
def is_endpoint_url_set(self):
return self.endpoint_url is not None
def update_endpoint_url(self, url):
self.endpoint_url = url
class GlanceResourceHandle(ResourceHandle):
service_type = 'glance'
support_resource = {'image': LIST}
def _get_client(self, cxt):
return g_client.Client('1',
token=cxt.auth_token,
auth_url=self.auth_url,
endpoint=self.endpoint_url,
timeout=cfg.CONF.client.glance_timeout)
def handle_list(self, cxt, resource, filters):
try:
client = self._get_client(cxt)
collection = '%ss' % resource
return [res.to_dict() for res in getattr(
client, collection).list(filters=_transform_filters(filters))]
except g_exceptions.InvalidEndpoint:
self.endpoint_url = None
raise exception.EndpointNotAvailable('glance',
client.http_client.endpoint)
class NeutronResourceHandle(ResourceHandle):
service_type = 'neutron'
support_resource = {'network': LIST,
'subnet': LIST,
'port': LIST,
'router': LIST,
'security_group': LIST,
'security_group_rule': LIST}
def _get_client(self, cxt):
return q_client.Client('2.0',
token=cxt.auth_token,
auth_url=self.auth_url,
endpoint_url=self.endpoint_url,
timeout=cfg.CONF.client.neutron_timeout)
def handle_list(self, cxt, resource, filters):
try:
client = self._get_client(cxt)
collection = '%ss' % resource
search_opts = _transform_filters(filters)
return [res for res in getattr(
client, 'list_%s' % collection)(**search_opts)[collection]]
except q_exceptions.ConnectionFailed:
self.endpoint_url = None
raise exception.EndpointNotAvailable(
'neutron', client.httpclient.endpoint_url)
class NovaResourceHandle(ResourceHandle):
service_type = 'nova'
support_resource = {'flavor': LIST,
'server': LIST,
'aggregate': LIST | CREATE | DELETE | ACTION}
def _get_client(self, cxt):
cli = n_client.Client('2',
auth_token=cxt.auth_token,
auth_url=self.auth_url,
timeout=cfg.CONF.client.nova_timeout)
cli.set_management_url(
self.endpoint_url.replace('$(tenant_id)s', cxt.tenant))
return cli
def handle_list(self, cxt, resource, filters):
try:
client = self._get_client(cxt)
collection = '%ss' % resource
# only server list supports filter
if resource == 'server':
search_opts = _transform_filters(filters)
return [res.to_dict() for res in getattr(
client, collection).list(search_opts=search_opts)]
else:
return [res.to_dict() for res in getattr(client,
collection).list()]
except r_exceptions.ConnectTimeout:
self.endpoint_url = None
raise exception.EndpointNotAvailable('nova',
client.client.management_url)
def handle_create(self, cxt, resource, *args, **kwargs):
try:
client = self._get_client(cxt)
collection = '%ss' % resource
return getattr(client, collection).create(
*args, **kwargs).to_dict()
except r_exceptions.ConnectTimeout:
self.endpoint_url = None
raise exception.EndpointNotAvailable('nova',
client.client.management_url)
def handle_delete(self, cxt, resource, resource_id):
try:
client = self._get_client(cxt)
collection = '%ss' % resource
return getattr(client, collection).delete(resource_id)
except r_exceptions.ConnectTimeout:
self.endpoint_url = None
raise exception.EndpointNotAvailable('nova',
client.client.management_url)
except n_exceptions.NotFound:
LOG.debug("Delete %(resource)s %(resource_id)s which not found",
{'resource': resource, 'resource_id': resource_id})
def handle_action(self, cxt, resource, action, *args, **kwargs):
try:
client = self._get_client(cxt)
collection = '%ss' % resource
resource_manager = getattr(client, collection)
getattr(resource_manager, action)(*args, **kwargs)
except r_exceptions.ConnectTimeout:
self.endpoint_url = None
raise exception.EndpointNotAvailable('nova',
client.client.management_url)

@@ -1,126 +0,0 @@ (file removed)
# Copyright 2015 Huawei Technologies Co., Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import oslo_log.log as logging
import oslo_messaging as messaging
from tricircle.common.i18n import _LI
from tricircle.common.i18n import _LW
from tricircle.common.nova_lib import context as nova_context
from tricircle.common.nova_lib import exception
from tricircle.common.nova_lib import manager
from tricircle.common.nova_lib import objects
from tricircle.common.nova_lib import objects_base
from tricircle.common.nova_lib import rpc as nova_rpc
from tricircle.common import utils
LOG = logging.getLogger(__name__)
class DispatcherComputeManager(manager.Manager):
target = messaging.Target(version='4.0')
def __init__(self, site_manager=None, *args, **kwargs):
self._site_manager = site_manager
target = messaging.Target(topic="proxy", version='4.0')
serializer = objects_base.NovaObjectSerializer()
self.proxy_client = nova_rpc.get_client(target, '4.0', serializer)
super(DispatcherComputeManager, self).__init__(service_name="compute",
*args, **kwargs)
def _get_compute_node(self, context):
"""Returns compute node for the host and nodename."""
try:
return objects.ComputeNode.get_by_host_and_nodename(
context, self.host, utils.get_node_name(self.host))
except exception.NotFound:
LOG.warning(_LW("No compute node record for %(host)s:%(node)s"),
{'host': self.host,
'node': utils.get_node_name(self.host)})
def _copy_resources(self, compute_node, resources):
"""Copy resource values to initialise compute_node"""
# update the allocation ratios for the related ComputeNode object
compute_node.ram_allocation_ratio = 1
compute_node.cpu_allocation_ratio = 1
# now copy rest to compute_node
for key in resources:
compute_node[key] = resources[key]
def _init_compute_node(self, context, resources):
"""Initialise the compute node if it does not already exist.
The nova scheduler will be inoperable if compute_node
is not defined. The compute_node will remain undefined if
we fail to create it or if there is no associated service
registered.
If this method has to create a compute node it needs initial
values - these come from resources.
:param context: security context
:param resources: initial values
"""
# try to get the compute node record from the
# database. If we get one we use resources to initialize
compute_node = self._get_compute_node(context)
if compute_node:
self._copy_resources(compute_node, resources)
compute_node.save()
return
# there was no local copy and none in the database
# so we need to create a new compute node. This needs
# to be initialised with resource values.
compute_node = objects.ComputeNode(context)
service = objects.Service.get_by_host_and_binary(
context, self.host, 'nova-compute')
compute_node.host = self.host
compute_node.service_id = service['id']
self._copy_resources(compute_node, resources)
compute_node.create()
LOG.info(_LI('Compute_service record created for '
'%(host)s:%(node)s'),
{'host': self.host, 'node': utils.get_node_name(self.host)})
# NOTE(zhiyuan) register fake compute node information in db so nova
# scheduler can properly select destination
def pre_start_hook(self):
site = self._site_manager.get_site(self.host)
node = site.get_nodes()[0]
resources = node.get_available_resource()
context = nova_context.get_admin_context()
self._init_compute_node(context, resources)
def build_and_run_instance(self, context, instance, image, request_spec,
filter_properties, admin_password=None,
injected_files=None, requested_networks=None,
security_groups=None, block_device_mapping=None,
node=None, limits=None):
version = '4.0'
cctxt = self.proxy_client.prepare(version=version)
cctxt.cast(context, 'build_and_run_instance', host=self.host,
instance=instance, image=image, request_spec=request_spec,
filter_properties=filter_properties,
admin_password=admin_password,
injected_files=injected_files,
requested_networks=requested_networks,
security_groups=security_groups,
block_device_mapping=block_device_mapping, node=node,
limits=limits)

@@ -1,52 +0,0 @@ (file removed)
# Copyright 2015 Huawei Technologies Co., Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import oslo_messaging
from oslo_log import log as logging
LOG = logging.getLogger(__name__)
class CascadeNetworkingServiceEndpoint(object):
target = oslo_messaging.Target(namespace="networking",
version='1.0')
def create_network(self, ctx, payload):
# TODO(gampel, saggi): STUB
LOG.info("(create_network) payload: %s", payload)
return True
def delete_network(self, ctx, payload):
# TODO(gampel, saggi): STUB
LOG.info("(delete_network) payload: %s", payload)
return True
def update_network(self, ctx, payload):
# TODO(gampel, saggi): STUB
LOG.info("(update_network) payload: %s", payload)
return True
def create_port(self, ctx, payload):
# TODO(gampel, saggi): STUB
LOG.info("(create_port) payload: %s", payload)
return True
def delete_port(self, ctx, payload):
# TODO(gampel, saggi): STUB
LOG.info("(delete_port) payload: %s", payload)
return True

@@ -1,32 +0,0 @@ (file removed)
# Copyright 2015 Huawei Technologies Co., Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import oslo_messaging
from oslo_log import log as logging
from tricircle.dispatcher import site_manager
LOG = logging.getLogger(__name__)
class CascadeSiteServiceEndpoint(object):
target = oslo_messaging.Target(namespace="site",
version='1.0')
def create_site(self, ctx, payload):
site_manager.get_instance().create_site(ctx, payload)

@@ -1,50 +0,0 @@ (file removed)
# Copyright 2015 Huawei Technologies Co., Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tricircle.common.service as t_service
from tricircle.common.utils import get_import_path
from tricircle.dispatcher.compute_manager import DispatcherComputeManager
_REPORT_INTERVAL = 30
_REPORT_INTERVAL_MAX = 60
class ComputeHostManager(object):
def __init__(self, site_manager):
self._compute_nodes = []
self._site_manager = site_manager
def _create_compute_node_service(self, host):
service = t_service.NovaService(
host=host,
binary="nova-compute",
topic="compute", # TODO(saggi): get from conf
db_allowed=False,
periodic_enable=True,
report_interval=_REPORT_INTERVAL,
periodic_interval_max=_REPORT_INTERVAL_MAX,
manager=get_import_path(DispatcherComputeManager),
site_manager=self._site_manager
)
t_service.fix_compute_service_exchange(service)
return service
def create_host_adapter(self, host):
"""Creates an adapter between the nova compute API and Site object"""
service = self._create_compute_node_service(host)
service.start()
self._compute_nodes.append(service)

@@ -1,77 +0,0 @@ (file removed)
# Copyright 2015 Huawei Technologies Co., Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from socket import gethostname
from oslo_config import cfg
from oslo_log import log as logging
import oslo_messaging
from tricircle.common.serializer import CascadeSerializer as Serializer
from tricircle.common import topics
from tricircle.dispatcher import site_manager
# import endpoints here
from tricircle.dispatcher.endpoints.networking import (
CascadeNetworkingServiceEndpoint)
from tricircle.dispatcher.endpoints.site import (
CascadeSiteServiceEndpoint)
LOG = logging.getLogger(__name__)
class ServerControlEndpoint(object):
target = oslo_messaging.Target(namespace='control',
version='1.0')
def __init__(self, server):
self.server = server
def stop(self, ctx):
if self.server:
self.server.stop()
def _create_main_cascade_server():
transport = oslo_messaging.get_transport(cfg.CONF)
target = oslo_messaging.Target(
exchange="tricircle",
topic=topics.CASCADING_SERVICE,
server=gethostname(),
)
server_control_endpoint = ServerControlEndpoint(None)
endpoints = [
server_control_endpoint,
CascadeNetworkingServiceEndpoint(),
CascadeSiteServiceEndpoint()
]
server = oslo_messaging.get_rpc_server(
transport,
target,
endpoints,
executor='eventlet',
serializer=Serializer(),
)
server_control_endpoint.server = server
# init _SiteManager to start fake nodes
site_manager.get_instance()
return server
def setup_server():
return _create_main_cascade_server()

@@ -1,140 +0,0 @@ (file removed)
# Copyright 2015 Huawei Technologies Co., Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tricircle.common.context as t_context
from tricircle.common.singleton import Singleton
from tricircle.common import utils
from tricircle.db import client
from tricircle.db import models
from tricircle.dispatcher.host_manager import ComputeHostManager
class Node(object):
def __init__(self, name):
self.vcpus = 20
self.memory_mb = 1024 * 32 # 32 GB
self.memory_mb_used = self.memory_mb * 0.1
self.free_ram_mb = self.memory_mb - self.memory_mb_used
self.local_gb = 1024 * 10 # 10 TB
self.local_gb_used = self.local_gb * 0.3
self.free_disk_gb = self.local_gb - self.local_gb_used
self.vcpus_used = 0
self.hypervisor_type = "Cascade Site"
self.hypervisor_version = 1
self.current_workload = 1
self.hypervisor_hostname = name
self.running_vms = 0
self.cpu_info = ""
self.disk_available_least = 1
self.supported_hv_specs = []
self.metrics = None
self.pci_stats = None
self.extra_resources = None
self.stats = {}
self.numa_topology = None
def get_available_resource(self):
return {
"vcpus": self.vcpus,
"memory_mb": self.memory_mb,
"local_gb": self.local_gb,
"vcpus_used": self.vcpus_used,
"memory_mb_used": self.memory_mb_used,
"local_gb_used": self.local_gb_used,
"hypervisor_type": self.hypervisor_type,
"hypervisor_version": self.hypervisor_version,
"hypervisor_hostname": self.hypervisor_hostname,
"free_ram_mb": self.free_ram_mb,
"free_disk_gb": self.free_disk_gb,
"current_workload": self.current_workload,
"running_vms": self.running_vms,
"cpu_info": self.cpu_info,
"disk_available_least": self.disk_available_least,
"supported_hv_specs": self.supported_hv_specs,
"metrics": self.metrics,
"pci_stats": self.pci_stats,
"extra_resources": self.extra_resources,
"stats": self.stats,
"numa_topology": self.numa_topology,
}
class Site(object):
def __init__(self, name):
self.name = name
# We currently just hold one aggregate subnode representing the
# resources owned by all the site's nodes.
self._aggregate_node = Node(utils.get_node_name(name))
self._instance_launch_information = {}
def get_nodes(self):
return [self._aggregate_node]
def get_node(self, name):
return self._aggregate_node
def get_num_instances(self):
return 0
def prepare_for_instance(self, request_spec, filter_properties):
instance_uuid = request_spec[u'instance_properties']['uuid']
self._instance_launch_information[instance_uuid] = (
request_spec,
filter_properties
)
class _SiteManager(object):
def __init__(self):
self._sites = {}
self.compute_host_manager = ComputeHostManager(self)
sites = models.list_sites(t_context.get_db_context(), [])
for site in sites:
# skip top site
if not site['az_id']:
continue
self.create_site(t_context.get_admin_context(), site['site_name'])
def create_site(self, context, site_name):
"""creates a fake node as nova-compute and add it to az"""
# TODO(saggi): thread safety
if site_name in self._sites:
raise RuntimeError("Site already exists in site map")
# TODO(zhiyuan): use DHT to judge whether host this site or not
self._sites[site_name] = Site(site_name)
self.compute_host_manager.create_host_adapter(site_name)
ag_name = utils.get_ag_name(site_name)
top_client = client.Client()
aggregates = top_client.list_resources('aggregate', context)
for aggregate in aggregates:
if aggregate['name'] == ag_name:
if site_name in aggregate['hosts']:
return
else:
top_client.action_resources('aggregate', context,
'add_host', aggregate['id'],
site_name)
return
def get_site(self, site_name):
return self._sites[site_name]
get_instance = Singleton(_SiteManager).get_instance

tricircle/network/plugin.py (new file)
@@ -0,0 +1,850 @@
# Copyright 2015 Huawei Technologies Co., Ltd.
# All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
import oslo_log.helpers as log_helpers
from oslo_log import log
from neutron.api.v2 import attributes
from neutron.common import exceptions
from neutron.db import common_db_mixin
from neutron.db import db_base_plugin_v2
from neutron.db import external_net_db
from neutron.db import extradhcpopt_db
# NOTE(zhiyuan) though not used, this import cannot be removed because Router
# relies on one table defined in l3_agentschedulers_db
from neutron.db import l3_agentschedulers_db # noqa
from neutron.db import l3_db
from neutron.db import models_v2
from neutron.db import portbindings_db
from neutron.db import securitygroups_db
from neutron.db import sqlalchemyutils
from neutron.extensions import availability_zone as az_ext
from sqlalchemy import sql
from tricircle.common import az_ag
import tricircle.common.client as t_client
import tricircle.common.constants as t_constants
import tricircle.common.context as t_context
from tricircle.common.i18n import _
from tricircle.common.i18n import _LI
import tricircle.common.lock_handle as t_lock
import tricircle.db.api as db_api
from tricircle.db import core
from tricircle.db import models
tricircle_opts = [
# TODO(zhiyuan) change to segmentation range
# currently all tenants share one VLAN id for bridge networks, should
# allocate one isolated segmentation id for each tenant later
cfg.IntOpt('bridge_segmentation_id',
default=0,
help='vlan id of l3 bridge network'),
cfg.StrOpt('bridge_physical_network',
default='',
help='name of l3 bridge physical network')
]
tricircle_opt_group = cfg.OptGroup('tricircle')
cfg.CONF.register_group(tricircle_opt_group)
cfg.CONF.register_opts(tricircle_opts, group=tricircle_opt_group)
LOG = log.getLogger(__name__)
class TricirclePlugin(db_base_plugin_v2.NeutronDbPluginV2,
securitygroups_db.SecurityGroupDbMixin,
external_net_db.External_net_db_mixin,
portbindings_db.PortBindingMixin,
extradhcpopt_db.ExtraDhcpOptMixin,
l3_db.L3_NAT_dbonly_mixin):
__native_bulk_support = True
__native_pagination_support = True
__native_sorting_support = True
# NOTE(zhiyuan) we don't support the "agent" and "availability_zone"
# extensions and have no need to, but "network_availability_zone"
# depends on these two extensions so we need to register them
supported_extension_aliases = ["agent",
"quotas",
"extra_dhcp_opt",
"binding",
"security-group",
"external-net",
"availability_zone",
"network_availability_zone",
"router"]
def __init__(self):
super(TricirclePlugin, self).__init__()
LOG.info(_LI("Starting Tricircle Neutron Plugin"))
self.clients = {}
self._setup_rpc()
def _setup_rpc(self):
self.endpoints = []
def _get_client(self, pod_name):
if pod_name not in self.clients:
self.clients[pod_name] = t_client.Client(pod_name)
return self.clients[pod_name]
@log_helpers.log_method_call
def start_rpc_listeners(self):
pass
# NOTE(zhiyuan) use later
# self.topic = topics.PLUGIN
# self.conn = n_rpc.create_connection(new=True)
# self.conn.create_consumer(self.topic, self.endpoints, fanout=False)
# return self.conn.consume_in_threads()
@staticmethod
def _validate_availability_zones(context, az_list):
if not az_list:
return
t_ctx = t_context.get_context_from_neutron_context(context)
with context.session.begin():
pods = core.query_resource(t_ctx, models.Pod, [], [])
az_set = set(az_list)
known_az_set = set([pod['az_name'] for pod in pods])
diff = az_set - known_az_set
if diff:
raise az_ext.AvailabilityZoneNotFound(
availability_zone=diff.pop())
@staticmethod
def _extend_availability_zone(net_res, net_db):
net_res[az_ext.AZ_HINTS] = az_ext.convert_az_string_to_list(
net_db[az_ext.AZ_HINTS])
common_db_mixin.CommonDbMixin.register_dict_extend_funcs(
attributes.NETWORKS, ['_extend_availability_zone'])
@property
def _core_plugin(self):
return self
def create_network(self, context, network):
net_data = network['network']
res = super(TricirclePlugin, self).create_network(context, network)
if az_ext.AZ_HINTS in net_data:
self._validate_availability_zones(context,
net_data[az_ext.AZ_HINTS])
az_hints = az_ext.convert_az_list_to_string(
net_data[az_ext.AZ_HINTS])
update_res = super(TricirclePlugin, self).update_network(
context, res['id'], {'network': {az_ext.AZ_HINTS: az_hints}})
res[az_ext.AZ_HINTS] = update_res[az_ext.AZ_HINTS]
return res
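A request that exercises this path carries availability_zone_hints in the network body; the hints are validated against registered pods' az_name values, stored as a string, and echoed back in the result. A sketch of such a body (names and values are illustrative):

network = {'network': {'name': 'net1',
                       'tenant_id': 'demo-tenant-id',
                       'admin_state_up': True,
                       # must match the az_name of a registered pod
                       'availability_zone_hints': ['az1']}}
# res = plugin.create_network(context, network)
# res['availability_zone_hints'] == ['az1']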
def delete_network(self, context, network_id):
t_ctx = t_context.get_context_from_neutron_context(context)
try:
mappings = db_api.get_bottom_mappings_by_top_id(
t_ctx, network_id, t_constants.RT_NETWORK)
for mapping in mappings:
pod_name = mapping[0]['pod_name']
bottom_network_id = mapping[1]
self._get_client(pod_name).delete_networks(
t_ctx, bottom_network_id)
with t_ctx.session.begin():
core.delete_resources(
t_ctx, models.ResourceRouting,
filters=[{'key': 'top_id', 'comparator': 'eq',
'value': network_id},
{'key': 'pod_id', 'comparator': 'eq',
'value': mapping[0]['pod_id']}])
except Exception:
raise
with t_ctx.session.begin():
core.delete_resources(t_ctx, models.ResourceRouting,
filters=[{'key': 'top_id',
'comparator': 'eq',
'value': network_id}])
super(TricirclePlugin, self).delete_network(context, network_id)
def update_network(self, context, network_id, network):
return super(TricirclePlugin, self).update_network(
context, network_id, network)
def create_subnet(self, context, subnet):
return super(TricirclePlugin, self).create_subnet(context, subnet)
def delete_subnet(self, context, subnet_id):
t_ctx = t_context.get_context_from_neutron_context(context)
try:
mappings = db_api.get_bottom_mappings_by_top_id(
t_ctx, subnet_id, t_constants.RT_SUBNET)
for mapping in mappings:
pod_name = mapping[0]['pod_name']
bottom_subnet_id = mapping[1]
self._get_client(pod_name).delete_subnets(
t_ctx, bottom_subnet_id)
with t_ctx.session.begin():
core.delete_resources(
t_ctx, models.ResourceRouting,
filters=[{'key': 'top_id', 'comparator': 'eq',
'value': subnet_id},
{'key': 'pod_id', 'comparator': 'eq',
'value': mapping[0]['pod_id']}])
except Exception:
raise
super(TricirclePlugin, self).delete_subnet(context, subnet_id)
def update_subnet(self, context, subnet_id, subnet):
return super(TricirclePlugin, self).update_subnet(
context, subnet_id, subnet)
def create_port(self, context, port):
return super(TricirclePlugin, self).create_port(context, port)
def update_port(self, context, port_id, port):
# TODO(zhiyuan) handle bottom port update
# be careful that l3_db will call update_port to update device_id of
# router interface, we cannot directly update bottom port in this case,
# otherwise we will fail when attaching bottom port to bottom router
# because its device_id is not empty
return super(TricirclePlugin, self).update_port(context, port_id, port)
def delete_port(self, context, port_id, l3_port_check=True):
t_ctx = t_context.get_context_from_neutron_context(context)
try:
mappings = db_api.get_bottom_mappings_by_top_id(
t_ctx, port_id, t_constants.RT_PORT)
if mappings:
pod_name = mappings[0][0]['pod_name']
bottom_port_id = mappings[0][1]
self._get_client(pod_name).delete_ports(
t_ctx, bottom_port_id)
except Exception:
raise
with t_ctx.session.begin():
core.delete_resources(t_ctx, models.ResourceRouting,
filters=[{'key': 'top_id',
'comparator': 'eq',
'value': port_id}])
super(TricirclePlugin, self).delete_port(context, port_id)
def get_port(self, context, port_id, fields=None):
t_ctx = t_context.get_context_from_neutron_context(context)
mappings = db_api.get_bottom_mappings_by_top_id(
t_ctx, port_id, t_constants.RT_PORT)
if mappings:
pod_name = mappings[0][0]['pod_name']
bottom_port_id = mappings[0][1]
port = self._get_client(pod_name).get_ports(
t_ctx, bottom_port_id)
port['id'] = port_id
if fields:
port = dict(
[(k, v) for k, v in port.iteritems() if k in fields])
if 'network_id' not in port and 'fixed_ips' not in port:
return port
bottom_top_map = {}
with t_ctx.session.begin():
for resource in (t_constants.RT_SUBNET, t_constants.RT_NETWORK,
t_constants.RT_ROUTER):
route_filters = [{'key': 'resource_type',
'comparator': 'eq',
'value': resource}]
routes = core.query_resource(
t_ctx, models.ResourceRouting, route_filters, [])
for route in routes:
if route['bottom_id']:
bottom_top_map[
route['bottom_id']] = route['top_id']
self._map_port_from_bottom_to_top(port, bottom_top_map)
return port
else:
return super(TricirclePlugin, self).get_port(context,
port_id, fields)
@staticmethod
def _apply_ports_filters(query, model, filters):
if not filters:
return query
for key, value in filters.iteritems():
column = getattr(model, key, None)
if column is not None:
if not value:
query = query.filter(sql.false())
return query
query = query.filter(column.in_(value))
return query
def _get_ports_from_db_with_number(self, context,
number, last_port_id, top_bottom_map,
filters=None):
query = context.session.query(models_v2.Port)
# set the step to twice the requested number to have a better chance
# of obtaining all the ports we need
search_step = number * 2
if search_step < 100:
search_step = 100
query = self._apply_ports_filters(query, models_v2.Port, filters)
query = sqlalchemyutils.paginate_query(
query, models_v2.Port, search_step, [('id', False)],
# create a dummy port object
marker_obj=models_v2.Port(
id=last_port_id) if last_port_id else None)
total = 0
ret = []
for port in query:
total += 1
if port['id'] not in top_bottom_map:
ret.append(port)
if len(ret) == number:
return ret
# NOTE(zhiyuan) we have traverse all the ports
if total < search_step:
return ret
else:
ret.extend(self._get_ports_from_db_with_number(
context, number - len(ret), ret[-1]['id'], top_bottom_map))
def _get_ports_from_top_with_number(self, context,
number, last_port_id, top_bottom_map,
filters=None):
with context.session.begin():
ret = self._get_ports_from_db_with_number(
context, number, last_port_id, top_bottom_map, filters)
return {'ports': ret}
def _get_ports_from_top(self, context, top_bottom_map, filters=None):
with context.session.begin():
ret = []
query = context.session.query(models_v2.Port)
query = self._apply_ports_filters(query, models_v2.Port, filters)
for port in query:
if port['id'] not in top_bottom_map:
ret.append(port)
return ret
@staticmethod
def _map_port_from_bottom_to_top(port, bottom_top_map):
if 'network_id' in port and port['network_id'] in bottom_top_map:
port['network_id'] = bottom_top_map[port['network_id']]
if 'fixed_ips' in port:
for ip in port['fixed_ips']:
if ip['subnet_id'] in bottom_top_map:
ip['subnet_id'] = bottom_top_map[ip['subnet_id']]
if 'device_id' in port and port['device_id'] in bottom_top_map:
port['device_id'] = bottom_top_map[port['device_id']]
@staticmethod
def _map_ports_from_bottom_to_top(ports, bottom_top_map):
# TODO(zhiyuan) judge if it's fine to remove unmapped port
port_list = []
for port in ports:
if port['id'] not in bottom_top_map:
continue
port['id'] = bottom_top_map[port['id']]
TricirclePlugin._map_port_from_bottom_to_top(port, bottom_top_map)
port_list.append(port)
return port_list
@staticmethod
def _get_map_filter_ids(key, value, top_bottom_map):
if key in ('id', 'network_id', 'device_id'):
id_list = []
for _id in value:
if _id in top_bottom_map:
id_list.append(top_bottom_map[_id])
else:
id_list.append(_id)
return id_list
def _get_ports_from_pod_with_number(self, context,
current_pod, number, last_port_id,
bottom_top_map, top_bottom_map,
filters=None):
# NOTE(zhiyuan) last_port_id is top id, also id in returned port dict
# also uses top id. when interacting with bottom pod, need to map
# top to bottom in request and map bottom to top in response
t_ctx = t_context.get_context_from_neutron_context(context)
q_client = self._get_client(
current_pod['pod_name']).get_native_client('port', t_ctx)
params = {'limit': number}
if filters:
_filters = dict(filters)
for key, value in _filters.items():
id_list = self._get_map_filter_ids(key, value, top_bottom_map)
if id_list:
_filters[key] = id_list
params.update(_filters)
if last_port_id:
# map top id to bottom id in request
params['marker'] = top_bottom_map[last_port_id]
res = q_client.get(q_client.ports_path, params=params)
# map bottom id to top id in client response
mapped_port_list = self._map_ports_from_bottom_to_top(res['ports'],
bottom_top_map)
del res['ports']
res['ports'] = mapped_port_list
if len(res['ports']) == number:
return res
else:
next_pod = db_api.get_next_bottom_pod(
t_ctx, current_pod_id=current_pod['pod_id'])
if not next_pod:
# _get_ports_from_top_with_number uses top id, no need to map
next_res = self._get_ports_from_top_with_number(
context, number - len(res['ports']), '', top_bottom_map,
filters)
next_res['ports'].extend(res['ports'])
return next_res
else:
# _get_ports_from_pod_with_number itself returns top id, no
# need to map
next_res = self._get_ports_from_pod_with_number(
context, next_pod, number - len(res['ports']), '',
bottom_top_map, top_bottom_map, filters)
next_res['ports'].extend(res['ports'])
return next_res
def get_ports(self, context, filters=None, fields=None, sorts=None,
limit=None, marker=None, page_reverse=False):
t_ctx = t_context.get_context_from_neutron_context(context)
with t_ctx.session.begin():
bottom_top_map = {}
top_bottom_map = {}
for resource in (t_constants.RT_PORT, t_constants.RT_SUBNET,
t_constants.RT_NETWORK, t_constants.RT_ROUTER):
route_filters = [{'key': 'resource_type',
'comparator': 'eq',
'value': resource}]
routes = core.query_resource(t_ctx, models.ResourceRouting,
route_filters, [])
for route in routes:
if route['bottom_id']:
bottom_top_map[route['bottom_id']] = route['top_id']
top_bottom_map[route['top_id']] = route['bottom_id']
if limit:
if marker:
mappings = db_api.get_bottom_mappings_by_top_id(
t_ctx, marker, t_constants.RT_PORT)
# NOTE(zhiyuan) if mapping exists, we retrieve port information
# from bottom, otherwise from top
if mappings:
pod_id = mappings[0][0]['pod_id']
current_pod = db_api.get_pod(t_ctx, pod_id)
res = self._get_ports_from_pod_with_number(
context, current_pod, limit, marker,
bottom_top_map, top_bottom_map, filters)
else:
res = self._get_ports_from_top_with_number(
context, limit, marker, top_bottom_map, filters)
else:
current_pod = db_api.get_next_bottom_pod(t_ctx)
# only top pod registered
if current_pod:
res = self._get_ports_from_pod_with_number(
context, current_pod, limit, '',
bottom_top_map, top_bottom_map, filters)
else:
res = self._get_ports_from_top_with_number(
context, limit, marker, top_bottom_map, filters)
# NOTE(zhiyuan) we can safely return ports, neutron controller will
# generate links for us so we do not need to worry about it.
#
# _get_ports_from_pod_with_number already traverses all the pods
# to try to get ports equal to limit, so pod is transparent for
# controller.
return res['ports']
else:
ret = []
pods = db_api.list_pods(t_ctx)
for pod in pods:
if not pod['az_name']:
continue
_filters = []
if filters:
for key, value in filters.iteritems():
id_list = self._get_map_filter_ids(key, value,
top_bottom_map)
if id_list:
_filters.append({'key': key,
'comparator': 'eq',
'value': id_list})
else:
_filters.append({'key': key,
'comparator': 'eq',
'value': value})
client = self._get_client(pod['pod_name'])
ret.extend(client.list_ports(t_ctx, filters=_filters))
ret = self._map_ports_from_bottom_to_top(ret, bottom_top_map)
ret.extend(self._get_ports_from_top(context, top_bottom_map,
filters))
return ret
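Driving the paginated branch looks like normal Neutron paging from the caller's side: each call returns at most 'limit' ports and the last returned id is fed back as the marker (a sketch; plugin, context and handle_page are placeholders, not names from this commit):

marker = None
while True:
    page = plugin.get_ports(context, limit=100, marker=marker)
    if not page:
        break
    handle_page(page)           # placeholder for real handling
    marker = page[-1]['id']     # top-level id, mapped internally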
def create_router(self, context, router):
return super(TricirclePlugin, self).create_router(context, router)
def delete_router(self, context, _id):
super(TricirclePlugin, self).delete_router(context, _id)
def _judge_network_across_pods(self, context, interface, add_by_port):
if add_by_port:
port = self.get_port(context, interface['port_id'])
net_id = port['network_id']
else:
subnet = self.get_subnet(context, interface['subnet_id'])
net_id = subnet['network_id']
network = self.get_network(context, net_id)
if len(network.get(az_ext.AZ_HINTS, [])) != 1:
# cross-pod L3 networking is not supported yet, so
# raise an exception here
raise Exception('Cross-pod L3 networking not supported')
return network[az_ext.AZ_HINTS][0], network
def _prepare_top_element(self, t_ctx, q_ctx,
project_id, pod, ele, _type, body):
def list_resources(t_ctx_, q_ctx_, pod_, _id_, _type_):
return getattr(self, 'get_%ss' % _type_)(
q_ctx_, filters={'name': _id_})
def create_resources(t_ctx_, q_ctx_, pod_, body_, _type_):
return getattr(self, 'create_%s' % _type_)(q_ctx_, body_)
return t_lock.get_or_create_element(
t_ctx, q_ctx,
project_id, pod, ele, _type, body,
list_resources, create_resources)
def _prepare_bottom_element(self, t_ctx,
project_id, pod, ele, _type, body):
def list_resources(t_ctx_, q_ctx, pod_, _id_, _type_):
client = self._get_client(pod_['pod_name'])
return client.list_resources(_type_, t_ctx_, [{'key': 'name',
'comparator': 'eq',
'value': _id_}])
def create_resources(t_ctx_, q_ctx, pod_, body_, _type_):
client = self._get_client(pod_['pod_name'])
return client.create_resources(_type_, t_ctx_, body_)
return t_lock.get_or_create_element(
t_ctx, None, # we don't need neutron context, so pass None
project_id, pod, ele, _type, body,
list_resources, create_resources)
def _get_bridge_subnet_pool_id(self, t_ctx, q_ctx, project_id, pod):
pool_name = t_constants.bridge_subnet_pool_name
pool_cidr = '100.0.0.0/8'
pool_ele = {'id': pool_name}
body = {'subnetpool': {'tenant_id': project_id,
'name': pool_name,
'shared': True,
'is_default': False,
'prefixes': [pool_cidr]}}
is_admin = q_ctx.is_admin
q_ctx.is_admin = True
_, pool_id = self._prepare_top_element(t_ctx, q_ctx, project_id, pod,
pool_ele, 'subnetpool', body)
q_ctx.is_admin = is_admin
return pool_id
def _get_bridge_network_subnet(self, t_ctx, q_ctx,
project_id, pod, pool_id):
bridge_net_name = t_constants.bridge_net_name % project_id
bridge_net_ele = {'id': bridge_net_name}
bridge_subnet_name = t_constants.bridge_subnet_name % project_id
bridge_subnet_ele = {'id': bridge_subnet_name}
is_admin = q_ctx.is_admin
q_ctx.is_admin = True
net_body = {'network': {'tenant_id': project_id,
'name': bridge_net_name,
'shared': False,
'admin_state_up': True}}
_, net_id = self._prepare_top_element(
t_ctx, q_ctx, project_id, pod, bridge_net_ele, 'network', net_body)
subnet_body = {
'subnet': {
'network_id': net_id,
'name': bridge_subnet_name,
'prefixlen': 24,
'ip_version': 4,
'allocation_pools': attributes.ATTR_NOT_SPECIFIED,
'dns_nameservers': attributes.ATTR_NOT_SPECIFIED,
'host_routes': attributes.ATTR_NOT_SPECIFIED,
'cidr': attributes.ATTR_NOT_SPECIFIED,
'subnetpool_id': pool_id,
'enable_dhcp': False,
'tenant_id': project_id
}
}
_, subnet_id = self._prepare_top_element(
t_ctx, q_ctx,
project_id, pod, bridge_subnet_ele, 'subnet', subnet_body)
q_ctx.is_admin = is_admin
net = self.get_network(q_ctx, net_id)
subnet = self.get_subnet(q_ctx, subnet_id)
return net, subnet
def _get_bottom_elements(self, t_ctx, project_id, pod,
t_net, t_subnet, t_port):
net_body = {
'network': {
'tenant_id': project_id,
'name': t_net['id'],
'admin_state_up': True
}
}
_, net_id = self._prepare_bottom_element(
t_ctx, project_id, pod, t_net, 'network', net_body)
subnet_body = {
'subnet': {
'network_id': net_id,
'name': t_subnet['id'],
'ip_version': t_subnet['ip_version'],
'cidr': t_subnet['cidr'],
'gateway_ip': t_subnet['gateway_ip'],
'allocation_pools': t_subnet['allocation_pools'],
'enable_dhcp': t_subnet['enable_dhcp'],
'tenant_id': project_id
}
}
_, subnet_id = self._prepare_bottom_element(
t_ctx, project_id, pod, t_subnet, 'subnet', subnet_body)
port_body = {
'port': {
'network_id': net_id,
'name': t_port['id'],
'admin_state_up': True,
'fixed_ips': [
{'subnet_id': subnet_id,
'ip_address': t_port['fixed_ips'][0]['ip_address']}],
'mac_address': t_port['mac_address']
}
}
_, port_id = self._prepare_bottom_element(
t_ctx, project_id, pod, t_port, 'port', port_body)
return port_id
def _get_bridge_interface(self, t_ctx, q_ctx, project_id, pod,
t_net_id, b_router_id):
bridge_port_name = t_constants.bridge_port_name % (project_id,
b_router_id)
bridge_port_ele = {'id': bridge_port_name}
port_body = {
'port': {
'tenant_id': project_id,
'admin_state_up': True,
'name': bridge_port_name,
'network_id': t_net_id,
'device_id': '',
'device_owner': '',
'mac_address': attributes.ATTR_NOT_SPECIFIED,
'fixed_ips': attributes.ATTR_NOT_SPECIFIED
}
}
_, port_id = self._prepare_top_element(
t_ctx, q_ctx, project_id, pod, bridge_port_ele, 'port', port_body)
return self.get_port(q_ctx, port_id)
def _get_bottom_bridge_elements(self, q_ctx, project_id,
pod, t_net, t_subnet, t_port):
t_ctx = t_context.get_context_from_neutron_context(q_ctx)
phy_net = cfg.CONF.tricircle.bridge_physical_network
vlan = cfg.CONF.tricircle.bridge_segmentation_id
net_body = {'network': {'tenant_id': project_id,
'name': t_net['id'],
'provider:network_type': 'vlan',
'provider:physical_network': phy_net,
'provider:segmentation_id': vlan,
'admin_state_up': True}}
_, b_net_id = self._prepare_bottom_element(
t_ctx, project_id, pod, t_net, 'network', net_body)
subnet_body = {'subnet': {'network_id': b_net_id,
'name': t_subnet['id'],
'ip_version': 4,
'cidr': t_subnet['cidr'],
'enable_dhcp': False,
'tenant_id': project_id}}
_, b_subnet_id = self._prepare_bottom_element(
t_ctx, project_id, pod, t_subnet, 'subnet', subnet_body)
port_body = {
'port': {
'tenant_id': project_id,
'admin_state_up': True,
'name': t_port['id'],
'network_id': b_net_id,
'fixed_ips': [
{'subnet_id': b_subnet_id,
'ip_address': t_port['fixed_ips'][0]['ip_address']}]
}
}
is_new, b_port_id = self._prepare_bottom_element(
t_ctx, project_id, pod, t_port, 'port', port_body)
return is_new, b_port_id
# NOTE(zhiyuan) the origin implementation in l3_db uses port returned from
# get_port in core plugin to check, change it to base plugin, since only
# top port information should be checked.
def _check_router_port(self, context, port_id, device_id):
port = super(TricirclePlugin, self).get_port(context, port_id)
if port['device_id'] != device_id:
raise exceptions.PortInUse(net_id=port['network_id'],
port_id=port['id'],
device_id=port['device_id'])
if not port['fixed_ips']:
msg = _('Router port must have at least one fixed IP')
raise exceptions.BadRequest(resource='router', msg=msg)
return port
def _unbound_top_interface(self, context, router_id, port_id):
super(TricirclePlugin, self).update_port(
context, port_id, {'port': {'device_id': '',
'device_owner': ''}})
with context.session.begin():
query = context.session.query(l3_db.RouterPort)
query.filter_by(port_id=port_id, router_id=router_id).delete()
def add_router_interface(self, context, router_id, interface_info):
t_ctx = t_context.get_context_from_neutron_context(context)
router = self._get_router(context, router_id)
project_id = router['tenant_id']
admin_project_id = 'admin_project_id'
add_by_port, _ = self._validate_interface_info(interface_info)
# make sure network not crosses pods
# TODO(zhiyuan) support cross-pod tenant network
az, t_net = self._judge_network_across_pods(
context, interface_info, add_by_port)
b_pod, b_az = az_ag.get_pod_by_az_tenant(t_ctx, az, project_id)
t_pod = None
for pod in db_api.list_pods(t_ctx):
if not pod['az_name']:
t_pod = pod
assert t_pod
router_body = {'router': {'name': router_id,
'distributed': False}}
_, b_router_id = self._prepare_bottom_element(
t_ctx, project_id, b_pod, router, 'router', router_body)
pool_id = self._get_bridge_subnet_pool_id(
t_ctx, context, admin_project_id, t_pod)
t_bridge_net, t_bridge_subnet = self._get_bridge_network_subnet(
t_ctx, context, project_id, t_pod, pool_id)
t_bridge_port = self._get_bridge_interface(
t_ctx, context, project_id, t_pod, t_bridge_net['id'],
b_router_id)
is_new, b_bridge_port_id = self._get_bottom_bridge_elements(
context, project_id, b_pod, t_bridge_net, t_bridge_subnet,
t_bridge_port)
# NOTE(zhiyuan) subnet pool, network, subnet are reusable resource,
# we decide not to remove them when operation fails, so before adding
# router interface, no clearing is needed.
is_success = False
for _ in xrange(2):
try:
return_info = super(TricirclePlugin,
self).add_router_interface(
context, router_id, interface_info)
is_success = True
except exceptions.PortInUse:
# NOTE(zhiyuan) so top interface is already bound to top
# router, we need to check if bottom interface is bound.
# safe to get port_id since only adding interface by port will
# get PortInUse exception
t_port_id = interface_info['port_id']
mappings = db_api.get_bottom_mappings_by_top_id(
t_ctx, t_port_id, t_constants.RT_PORT)
if not mappings:
# bottom interface does not exist, ignore this exception
# and continue to create bottom interface
self._unbound_top_interface(context, router_id, t_port_id)
else:
pod, b_port_id = mappings[0]
b_port = self._get_client(pod['pod_name']).get_ports(
t_ctx, b_port_id)
if not b_port['device_id']:
# bottom interface exists but is not bound, ignore this
# exception and continue to bind bottom interface
self._unbound_top_interface(context, router_id,
t_port_id)
else:
# bottom interface already bound, re-raise exception
raise
if is_success:
break
if not is_success:
raise Exception()
t_port_id = return_info['port_id']
t_port = self.get_port(context, t_port_id)
t_subnet = self.get_subnet(context,
t_port['fixed_ips'][0]['subnet_id'])
try:
b_port_id = self._get_bottom_elements(
t_ctx, project_id, b_pod, t_net, t_subnet, t_port)
except Exception:
# NOTE(zhiyuan) remove_router_interface will delete top interface.
# if mapping is already built between top and bottom interface,
# bottom interface and resource routing entry will also be deleted.
#
# but remove_router_interface may fail when deleting bottom
# interface, in this case, top and bottom interfaces are both left,
# user needs to manually delete top interface.
super(TricirclePlugin, self).remove_router_interface(
context, router_id, interface_info)
raise
client = self._get_client(b_pod['pod_name'])
try:
if is_new:
# only attach bridge port the first time
client.action_routers(t_ctx, 'add_interface', b_router_id,
{'port_id': b_bridge_port_id})
else:
# still need to check if the bridge port is bound
port = client.get_ports(t_ctx, b_bridge_port_id)
if not port.get('device_id'):
client.action_routers(t_ctx, 'add_interface', b_router_id,
{'port_id': b_bridge_port_id})
client.action_routers(t_ctx, 'add_interface', b_router_id,
{'port_id': b_port_id})
except Exception:
super(TricirclePlugin, self).remove_router_interface(
context, router_id, interface_info)
raise
return return_info

@@ -1,149 +0,0 @@ (file removed)
# Copyright 2015 Huawei Technologies Co., Ltd.
# All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import oslo_log.helpers as log_helpers
from oslo_log import log
from neutron.extensions import portbindings
from neutron.common import exceptions as n_exc
from neutron.common import rpc as n_rpc
from neutron.common import topics
from neutron.db import agentschedulers_db
from neutron.db import db_base_plugin_v2
from neutron.db import extradhcpopt_db
from neutron.db import portbindings_db
from neutron.db import securitygroups_db
from neutron.i18n import _LI
from tricircle.common import cascading_networking_api as c_net_api
from tricircle.networking import rpc as c_net_rpc
LOG = log.getLogger(__name__)
class TricirclePlugin(db_base_plugin_v2.NeutronDbPluginV2,
securitygroups_db.SecurityGroupDbMixin,
portbindings_db.PortBindingMixin,
extradhcpopt_db.ExtraDhcpOptMixin,
agentschedulers_db.DhcpAgentSchedulerDbMixin):
__native_bulk_support = True
__native_pagination_support = True
__native_sorting_support = True
supported_extension_aliases = ["quotas",
"extra_dhcp_opt",
"binding",
"security-group",
"external-net"]
def __init__(self):
super(TricirclePlugin, self).__init__()
LOG.info(_LI("Starting TricirclePlugin"))
self.vif_type = portbindings.VIF_TYPE_OVS
# When set to True, Nova plugs the VIF directly into the ovs bridge
# instead of using the hybrid mode.
self.vif_details = {portbindings.CAP_PORT_FILTER: True}
self._cascading_rpc_api = c_net_api.CascadingNetworkingNotifyAPI()
self._setup_rpc()
def _setup_rpc(self):
self.endpoints = [c_net_rpc.RpcCallbacks()]
@log_helpers.log_method_call
def start_rpc_listeners(self):
self.topic = topics.PLUGIN
self.conn = n_rpc.create_connection(new=True)
self.conn.create_consumer(self.topic, self.endpoints, fanout=False)
return self.conn.consume_in_threads()
def create_network(self, context, network):
with context.session.begin(subtransactions=True):
result = super(TricirclePlugin, self).create_network(
context,
network)
self._process_l3_create(context, result, network['network'])
LOG.debug("New network %s ", network['network']['name'])
if self._cascading_rpc_api:
self._cascading_rpc_api.create_network(context, network)
return result
def delete_network(self, context, network_id):
net = super(TricirclePlugin, self).delete_network(
context,
network_id)
if self._cascading_rpc_api:
self._cascading_rpc_api.delete_network(context, network_id)
return net
def update_network(self, context, network_id, network):
with context.session.begin(subtransactions=True):
net = super(TricirclePlugin, self).update_network(
context,
network_id,
network)
if self._cascading_rpc_api:
self._cascading_rpc_api.update_network(
context,
network_id,
network)
return net
def create_port(self, context, port):
with context.session.begin(subtransactions=True):
neutron_db = super(TricirclePlugin, self).create_port(
context, port)
self._process_portbindings_create_and_update(context,
port['port'],
neutron_db)
neutron_db[portbindings.VNIC_TYPE] = portbindings.VNIC_NORMAL
# Call create port to the cascading API
LOG.debug("New port %s ", port['port'])
if self._cascading_rpc_api:
self._cascading_rpc_api.create_port(context, port)
return neutron_db
def delete_port(self, context, port_id, l3_port_check=True):
with context.session.begin():
ret_val = super(TricirclePlugin, self).delete_port(
context, port_id)
if self._cascading_rpc_api:
self._cascading_rpc_api.delete_port(context,
port_id,
l3_port_check=True)
return ret_val
def update_port_status(self, context, port_id, port_status):
with context.session.begin(subtransactions=True):
try:
port = super(TricirclePlugin, self).get_port(context, port_id)
port['status'] = port_status
neutron_db = super(TricirclePlugin, self).update_port(
context, port_id, {'port': port})
except n_exc.PortNotFound:
LOG.debug("Port %(port)s update to %(status)s not found",
{'port': port_id, 'status': port_status})
return None
return neutron_db
def extend_port_dict_binding(self, port_res, port_db):
super(TricirclePlugin, self).extend_port_dict_binding(
port_res, port_db)
port_res[portbindings.VNIC_TYPE] = portbindings.VNIC_NORMAL

@@ -1,38 +0,0 @@ (file removed)
# Copyright 2015 Huawei Technologies Co., Ltd.
# All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import neutron.common.constants as neutron_const
from neutron import manager
from oslo_log import log
import oslo_messaging
LOG = log.getLogger(__name__)
class RpcCallbacks(object):
target = oslo_messaging.Target(version='1.0')
def update_port_up(self, context, **kwargs):
port_id = kwargs.get('port_id')
plugin = manager.NeutronManager.get_plugin()
plugin.update_port_status(context, port_id,
neutron_const.PORT_STATUS_ACTIVE)
def update_port_down(self, context, **kwargs):
port_id = kwargs.get('port_id')
plugin = manager.NeutronManager.get_plugin()
plugin.update_port_status(context, port_id,
neutron_const.PORT_STATUS_DOWN)

@@ -0,0 +1,76 @@ (new file)
# Copyright (c) 2015 Huawei, Tech. Co,. Ltd.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import pecan
from oslo_config import cfg
from tricircle.common.i18n import _
from tricircle.common import restapp
from tricircle.nova_apigw.controllers import root
common_opts = [
cfg.StrOpt('bind_host', default='0.0.0.0',
help=_("The host IP to bind to")),
cfg.IntOpt('bind_port', default=19998,
help=_("The port to bind to")),
    cfg.IntOpt('api_workers', default=1,
               help=_("Number of API workers")),
    cfg.StrOpt('api_extensions_path', default="",
               help=_("The path for API extensions")),
    cfg.StrOpt('auth_strategy', default='keystone',
               help=_("The type of authentication to use")),
    cfg.BoolOpt('allow_bulk', default=True,
                help=_("Allow the usage of the bulk API")),
    cfg.BoolOpt('allow_pagination', default=False,
                help=_("Allow the usage of pagination")),
    cfg.BoolOpt('allow_sorting', default=False,
                help=_("Allow the usage of sorting")),
    cfg.StrOpt('pagination_max_limit', default="-1",
               help=_("The maximum number of items returned in a single "
                      "response; a value of 'infinite' or a negative "
                      "integer means no limit")),
]
def setup_app(*args, **kwargs):
config = {
'server': {
'port': cfg.CONF.bind_port,
'host': cfg.CONF.bind_host
},
'app': {
'root': 'tricircle.nova_apigw.controllers.root.RootController',
'modules': ['tricircle.nova_apigw'],
'errors': {
400: '/error',
'__force_dict__': True
}
}
}
pecan_config = pecan.configuration.conf_from_dict(config)
app_hooks = [root.ErrorHook()]
app = pecan.make_app(
pecan_config.app.root,
debug=False,
wrap_app=restapp.auth_app,
force_canonical=False,
hooks=app_hooks,
guess_content_type_from_ext=True
)
return app
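For reference, the options registered in common_opts above map to the service configuration file like this (a sketch showing the defaults, not a shipped config):

    [DEFAULT]
    bind_host = 0.0.0.0
    bind_port = 19998
    api_workers = 1
    auth_strategy = keystone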

View File

@@ -0,0 +1,128 @@
# Copyright (c) 2015 Huawei Tech. Co., Ltd.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import pecan
from pecan import expose
from pecan import rest
import oslo_db.exception as db_exc
from tricircle.common import az_ag
import tricircle.common.context as t_context
import tricircle.common.exceptions as t_exc
from tricircle.db import core
from tricircle.db import models
class AggregateActionController(rest.RestController):
def __init__(self, project_id, aggregate_id):
self.project_id = project_id
self.aggregate_id = aggregate_id
@expose(generic=True, template='json')
def post(self, **kw):
context = t_context.extract_context_from_environ()
if not context.is_admin:
pecan.abort(400, 'Admin role required to operate aggregates')
return
try:
with context.session.begin():
core.get_resource(context, models.Aggregate, self.aggregate_id)
except t_exc.ResourceNotFound:
pecan.abort(400, 'Aggregate not found')
return
if 'add_host' in kw or 'remove_host' in kw:
pecan.abort(400, 'Add and remove host action not supported')
return
# TODO(zhiyuan) handle aggregate metadata updating
aggregate = az_ag.get_one_ag(context, self.aggregate_id)
return {'aggregate': aggregate}
class AggregateController(rest.RestController):
def __init__(self, project_id):
self.project_id = project_id
@pecan.expose()
def _lookup(self, aggregate_id, action, *remainder):
if action == 'action':
return AggregateActionController(self.project_id,
aggregate_id), remainder
@expose(generic=True, template='json')
def post(self, **kw):
context = t_context.extract_context_from_environ()
if not context.is_admin:
pecan.abort(400, 'Admin role required to create aggregates')
return
if 'aggregate' not in kw:
pecan.abort(400, 'Request body not found')
return
host_aggregate = kw['aggregate']
name = host_aggregate['name'].strip()
avail_zone = host_aggregate.get('availability_zone')
if avail_zone:
avail_zone = avail_zone.strip()
try:
with context.session.begin():
aggregate = az_ag.create_ag_az(context,
ag_name=name,
az_name=avail_zone)
except db_exc.DBDuplicateEntry:
pecan.abort(409, 'Aggregate already exists')
return
except Exception:
            pecan.abort(500, 'Failed to create host aggregate')
return
return {'aggregate': aggregate}
@expose(generic=True, template='json')
def get_one(self, _id):
context = t_context.extract_context_from_environ()
try:
with context.session.begin():
aggregate = az_ag.get_one_ag(context, _id)
return {'aggregate': aggregate}
except t_exc.ResourceNotFound:
pecan.abort(404, 'Aggregate not found')
return
@expose(generic=True, template='json')
def get_all(self):
context = t_context.extract_context_from_environ()
try:
with context.session.begin():
aggregates = az_ag.get_all_ag(context)
except Exception:
            pecan.abort(500, 'Failed to get all host aggregates')
return
return {'aggregates': aggregates}
@expose(generic=True, template='json')
def delete(self, _id):
context = t_context.extract_context_from_environ()
try:
with context.session.begin():
az_ag.delete_ag(context, _id)
pecan.response.status = 200
except t_exc.ResourceNotFound:
pecan.abort(404, 'Aggregate not found')
return
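A usage sketch for this controller (paths follow the resource map in root.py later in this commit; names and IDs are placeholders):

    POST /v2.1/{project_id}/os-aggregates
    {"aggregate": {"name": "ag1", "availability_zone": "az1"}}

    GET /v2.1/{project_id}/os-aggregates          list all aggregates
    GET /v2.1/{project_id}/os-aggregates/{id}     show one aggregate
    DELETE /v2.1/{project_id}/os-aggregates/{id}  delete an aggregate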

View File

@@ -0,0 +1,198 @@
# Copyright (c) 2015 Huawei Tech. Co., Ltd.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import pecan
from pecan import expose
from pecan import rest
import oslo_db.exception as db_exc
import tricircle.common.context as t_context
from tricircle.common import utils
from tricircle.db import core
from tricircle.db import models
class FlavorManageController(rest.RestController):
    # NOTE(zhiyuan) according to the nova API reference, flavor creation and
    # deletion should use the '/flavors/os-flavor-manage' path, but
    # '/flavors/' also supports these two operations to stay compatible with
    # the nova client
def __init__(self, project_id):
self.project_id = project_id
@expose(generic=True, template='json')
def post(self, **kw):
context = t_context.extract_context_from_environ()
if not context.is_admin:
pecan.abort(400, 'Admin role required to create flavors')
return
required_fields = ['name', 'ram', 'vcpus', 'disk']
        if 'flavor' not in kw:
            pecan.abort(400, 'Request body not found')
            return
        if not utils.validate_required_fields_set(kw['flavor'],
                                                  required_fields):
            pecan.abort(400, 'Required field not set')
            return
flavor_dict = {
'name': kw['flavor']['name'],
'flavorid': kw['flavor'].get('id'),
'memory_mb': kw['flavor']['ram'],
'vcpus': kw['flavor']['vcpus'],
'root_gb': kw['flavor']['disk'],
'ephemeral_gb': kw['flavor'].get('OS-FLV-EXT-DATA:ephemeral', 0),
'swap': kw['flavor'].get('swap', 0),
'rxtx_factor': kw['flavor'].get('rxtx_factor', 1.0),
'is_public': kw['flavor'].get('os-flavor-access:is_public', True),
}
try:
with context.session.begin():
flavor = core.create_resource(
context, models.InstanceTypes, flavor_dict)
except db_exc.DBDuplicateEntry:
pecan.abort(409, 'Flavor already exists')
return
except Exception:
            pecan.abort(500, 'Failed to create flavor')
return
return {'flavor': flavor}
@expose(generic=True, template='json')
def delete(self, _id):
context = t_context.extract_context_from_environ()
with context.session.begin():
flavors = core.query_resource(context, models.InstanceTypes,
[{'key': 'flavorid',
'comparator': 'eq',
'value': _id}], [])
if not flavors:
pecan.abort(404, 'Flavor not found')
return
core.delete_resource(context,
models.InstanceTypes, flavors[0]['id'])
pecan.response.status = 202
return
class FlavorController(rest.RestController):
def __init__(self, project_id):
self.project_id = project_id
@pecan.expose()
def _lookup(self, action, *remainder):
if action == 'os-flavor-manage':
return FlavorManageController(self.project_id), remainder
@expose(generic=True, template='json')
def post(self, **kw):
context = t_context.extract_context_from_environ()
if not context.is_admin:
pecan.abort(400, 'Admin role required to create flavors')
return
required_fields = ['name', 'ram', 'vcpus', 'disk']
if 'flavor' not in kw:
pecan.abort(400, 'Request body not found')
return
if not utils.validate_required_fields_set(kw['flavor'],
required_fields):
pecan.abort(400, 'Required field not set')
return
flavor_dict = {
'name': kw['flavor']['name'],
'flavorid': kw['flavor'].get('id'),
'memory_mb': kw['flavor']['ram'],
'vcpus': kw['flavor']['vcpus'],
'root_gb': kw['flavor']['disk'],
'ephemeral_gb': kw['flavor'].get('OS-FLV-EXT-DATA:ephemeral', 0),
'swap': kw['flavor'].get('swap', 0),
'rxtx_factor': kw['flavor'].get('rxtx_factor', 1.0),
'is_public': kw['flavor'].get('os-flavor-access:is_public', True),
}
try:
with context.session.begin():
flavor = core.create_resource(
context, models.InstanceTypes, flavor_dict)
except db_exc.DBDuplicateEntry:
pecan.abort(409, 'Flavor already exists')
return
except Exception:
            pecan.abort(500, 'Failed to create flavor')
return
flavor['id'] = flavor['flavorid']
del flavor['flavorid']
return {'flavor': flavor}
@expose(generic=True, template='json')
def get_one(self, _id):
# NOTE(zhiyuan) this function handles two kinds of requests
# GET /flavors/flavor_id
# GET /flavors/detail
context = t_context.extract_context_from_environ()
if _id == 'detail':
with context.session.begin():
flavors = core.query_resource(context, models.InstanceTypes,
[], [])
for flavor in flavors:
flavor['id'] = flavor['flavorid']
del flavor['flavorid']
return {'flavors': flavors}
else:
with context.session.begin():
flavors = core.query_resource(context, models.InstanceTypes,
[{'key': 'flavorid',
'comparator': 'eq',
'value': _id}], [])
if not flavors:
pecan.abort(404, 'Flavor not found')
return
flavor = flavors[0]
flavor['id'] = flavor['flavorid']
del flavor['flavorid']
return {'flavor': flavor}
@expose(generic=True, template='json')
def get_all(self):
context = t_context.extract_context_from_environ()
with context.session.begin():
flavors = core.query_resource(context, models.InstanceTypes,
[], [])
return {'flavors': [dict(
[('id', flavor['flavorid']),
('name', flavor['name'])]) for flavor in flavors]}
@expose(generic=True, template='json')
def delete(self, _id):
# TODO(zhiyuan) handle foreign key constraint
context = t_context.extract_context_from_environ()
with context.session.begin():
flavors = core.query_resource(context, models.InstanceTypes,
[{'key': 'flavorid',
'comparator': 'eq',
'value': _id}], [])
if not flavors:
pecan.abort(404, 'Flavor not found')
return
core.delete_resource(context,
models.InstanceTypes, flavors[0]['id'])
pecan.response.status = 202
return
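A usage sketch based on the required fields checked above (values are placeholders; note that get_one and get_all rename 'flavorid' to 'id' in responses):

    POST /v2.1/{project_id}/flavors
    {"flavor": {"name": "m1.tiny", "ram": 512, "vcpus": 1, "disk": 1}}

    GET /v2.1/{project_id}/flavors         id/name pairs only
    GET /v2.1/{project_id}/flavors/detail  full flavor records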

View File

@@ -0,0 +1,43 @@
# Copyright (c) 2015 Huawei Tech. Co., Ltd.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import pecan
from pecan import expose
from pecan import rest
import tricircle.common.client as t_client
import tricircle.common.context as t_context
class ImageController(rest.RestController):
def __init__(self, project_id):
self.project_id = project_id
self.client = t_client.Client()
@expose(generic=True, template='json')
def get_one(self, _id):
context = t_context.extract_context_from_environ()
image = self.client.get_images(context, _id)
if not image:
pecan.abort(404, 'Image not found')
return
return {'image': image}
@expose(generic=True, template='json')
def get_all(self):
context = t_context.extract_context_from_environ()
images = self.client.list_images(context)
return {'images': images}
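The image controller only proxies reads to the top OpenStack instance, so its usage sketch is simply:

    GET /v2.1/{project_id}/images       list images via the top client
    GET /v2.1/{project_id}/images/{id}  show one image, 404 when absent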

View File

@@ -0,0 +1,163 @@
# Copyright (c) 2015 Huawei Tech. Co., Ltd.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import pecan
from pecan import expose
from pecan import hooks
from pecan import rest
import oslo_log.log as logging
import webob.exc as web_exc
from tricircle.common import context as ctx
from tricircle.common import xrpcapi
from tricircle.nova_apigw.controllers import aggregate
from tricircle.nova_apigw.controllers import flavor
from tricircle.nova_apigw.controllers import image
from tricircle.nova_apigw.controllers import server
LOG = logging.getLogger(__name__)
class ErrorHook(hooks.PecanHook):
# NOTE(zhiyuan) pecan's default error body is not compatible with nova
# client, clear body in this hook
def on_error(self, state, exc):
if isinstance(exc, web_exc.HTTPException):
exc.body = ''
return exc
class RootController(object):
@pecan.expose()
def _lookup(self, version, *remainder):
if version == 'v2.1':
return V21Controller(), remainder
@pecan.expose(generic=True, template='json')
def index(self):
return {
"versions": [
{
"status": "CURRENT",
"updated": "2013-07-23T11:33:21Z",
"links": [
{
"href": pecan.request.application_url + "/v2.1/",
"rel": "self"
}
],
"min_version": "2.1",
"version": "2.12",
"id": "v2.1"
}
]
}
@index.when(method='POST')
@index.when(method='PUT')
@index.when(method='DELETE')
@index.when(method='HEAD')
@index.when(method='PATCH')
def not_supported(self):
pecan.abort(405)
class V21Controller(object):
_media_type = "application/vnd.openstack.compute+json;version=2.1"
def __init__(self):
self.resource_controller = {
'flavors': flavor.FlavorController,
'os-aggregates': aggregate.AggregateController,
'servers': server.ServerController,
'images': image.ImageController,
}
def _get_resource_controller(self, project_id, remainder):
if not remainder:
pecan.abort(404)
return
resource = remainder[0]
if resource not in self.resource_controller:
pecan.abort(404)
return
return self.resource_controller[resource](project_id), remainder[1:]
@pecan.expose()
def _lookup(self, project_id, *remainder):
if project_id == 'testrpc':
return TestRPCController(), remainder
else:
return self._get_resource_controller(project_id, remainder)
@pecan.expose(generic=True, template='json')
def index(self):
return {
"version": {
"status": "CURRENT",
"updated": "2013-07-23T11:33:21Z",
"links": [
{
"href": pecan.request.application_url + "/v2.1/",
"rel": "self"
},
{
"href": "http://docs.openstack.org/",
"type": "text/html",
"rel": "describedby"
}
],
"min_version": "2.1",
"version": "2.12",
"media-types": [
{
"base": "application/json",
"type": self._media_type
}
],
"id": "v2.1"
}
}
@index.when(method='POST')
@index.when(method='PUT')
@index.when(method='DELETE')
@index.when(method='HEAD')
@index.when(method='PATCH')
def not_supported(self):
pecan.abort(405)
class TestRPCController(rest.RestController):
def __init__(self, *args, **kwargs):
super(TestRPCController, self).__init__(*args, **kwargs)
self.xjobapi = xrpcapi.XJobAPI()
@expose(generic=True, template='json')
def index(self):
if pecan.request.method != 'GET':
pecan.abort(405)
context = ctx.extract_context_from_environ()
payload = '#result from xjob rpc'
return self.xjobapi.test_rpc(context, payload)
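A quick liveness check for the gateway (a sketch; the port is the bind_port default from app.py): GET / returns the version document above, and GET /v2.1/testrpc round-trips a payload through the XJob RPC API:

    GET http://127.0.0.1:19998/
    GET http://127.0.0.1:19998/v2.1/testrpc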

View File

@@ -0,0 +1,384 @@
# Copyright (c) 2015 Huawei Tech. Co., Ltd.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import pecan
from pecan import expose
from pecan import rest
from tricircle.common import az_ag
import tricircle.common.client as t_client
from tricircle.common import constants
import tricircle.common.context as t_context
import tricircle.common.lock_handle as t_lock
import tricircle.db.api as db_api
from tricircle.db import core
from tricircle.db import models
class ServerController(rest.RestController):
def __init__(self, project_id):
self.project_id = project_id
self.clients = {'top': t_client.Client()}
def _get_client(self, pod_name='top'):
if pod_name not in self.clients:
self.clients[pod_name] = t_client.Client(pod_name)
return self.clients[pod_name]
def _get_or_create_route(self, context, pod, _id, _type):
def list_resources(t_ctx, q_ctx, pod_, _id_, _type_):
client = self._get_client(pod_['pod_name'])
return client.list_resources(_type_, t_ctx, [{'key': 'name',
'comparator': 'eq',
'value': _id_}])
return t_lock.get_or_create_route(context, None,
self.project_id, pod, _id, _type,
list_resources)
def _get_create_network_body(self, network):
body = {
'network': {
'tenant_id': self.project_id,
'name': network['id'],
'admin_state_up': True
}
}
return body
def _get_create_subnet_body(self, subnet, bottom_net_id):
body = {
'subnet': {
'network_id': bottom_net_id,
'name': subnet['id'],
'ip_version': subnet['ip_version'],
'cidr': subnet['cidr'],
'gateway_ip': subnet['gateway_ip'],
'allocation_pools': subnet['allocation_pools'],
'enable_dhcp': subnet['enable_dhcp'],
'tenant_id': self.project_id
}
}
return body
def _get_create_port_body(self, port, subnet_map, bottom_net_id):
bottom_fixed_ips = []
for ip in port['fixed_ips']:
bottom_ip = {'subnet_id': subnet_map[ip['subnet_id']],
'ip_address': ip['ip_address']}
bottom_fixed_ips.append(bottom_ip)
body = {
'port': {
'tenant_id': self.project_id,
'admin_state_up': True,
'name': port['id'],
'network_id': bottom_net_id,
'mac_address': port['mac_address'],
'fixed_ips': bottom_fixed_ips
}
}
return body
def _get_create_dhcp_port_body(self, port, bottom_subnet_id,
bottom_net_id):
body = {
'port': {
'tenant_id': self.project_id,
'admin_state_up': True,
'name': port['id'],
'network_id': bottom_net_id,
'fixed_ips': [
{'subnet_id': bottom_subnet_id,
'ip_address': port['fixed_ips'][0]['ip_address']}
],
'mac_address': port['mac_address'],
'binding:profile': {},
'device_id': 'reserved_dhcp_port',
'device_owner': 'network:dhcp',
}
}
return body
def _prepare_neutron_element(self, context, pod, ele, _type, body):
def list_resources(t_ctx, q_ctx, pod_, _id_, _type_):
client = self._get_client(pod_['pod_name'])
return client.list_resources(_type_, t_ctx, [{'key': 'name',
'comparator': 'eq',
'value': _id_}])
def create_resources(t_ctx, q_ctx, pod_, body_, _type_):
client = self._get_client(pod_['pod_name'])
return client.create_resources(_type_, t_ctx, body_)
_, ele_id = t_lock.get_or_create_element(
context, None, # we don't need neutron context, so pass None
self.project_id, pod, ele, _type, body,
list_resources, create_resources)
return ele_id
def _handle_network(self, context, pod, net, subnets, port=None):
# network
net_body = self._get_create_network_body(net)
bottom_net_id = self._prepare_neutron_element(context, pod, net,
'network', net_body)
# subnet
subnet_map = {}
for subnet in subnets:
subnet_body = self._get_create_subnet_body(subnet, bottom_net_id)
bottom_subnet_id = self._prepare_neutron_element(
context, pod, subnet, 'subnet', subnet_body)
subnet_map[subnet['id']] = bottom_subnet_id
top_client = self._get_client()
top_port_body = {'port': {'network_id': net['id'],
'admin_state_up': True}}
# dhcp port
client = self._get_client(pod['pod_name'])
t_dhcp_port_filters = [
{'key': 'device_owner', 'comparator': 'eq',
'value': 'network:dhcp'},
{'key': 'network_id', 'comparator': 'eq',
'value': net['id']},
]
b_dhcp_port_filters = [
{'key': 'device_owner', 'comparator': 'eq',
'value': 'network:dhcp'},
{'key': 'network_id', 'comparator': 'eq',
'value': bottom_net_id},
]
top_dhcp_port_body = {
'port': {
'tenant_id': self.project_id,
'admin_state_up': True,
'name': 'dhcp_port',
'network_id': net['id'],
'binding:profile': {},
'device_id': 'reserved_dhcp_port',
'device_owner': 'network:dhcp',
}
}
t_dhcp_ports = top_client.list_ports(context, t_dhcp_port_filters)
t_subnet_dhcp_map = {}
for dhcp_port in t_dhcp_ports:
subnet_id = dhcp_port['fixed_ips'][0]['subnet_id']
t_subnet_dhcp_map[subnet_id] = dhcp_port
        for t_subnet_id, b_subnet_id in subnet_map.items():
if t_subnet_id in t_subnet_dhcp_map:
t_dhcp_port = t_subnet_dhcp_map[t_subnet_id]
else:
t_dhcp_port = top_client.create_ports(context,
top_dhcp_port_body)
mappings = db_api.get_bottom_mappings_by_top_id(
context, t_dhcp_port['id'], constants.RT_PORT)
pod_list = [mapping[0]['pod_id'] for mapping in mappings]
if pod['pod_id'] in pod_list:
# mapping exists, skip this subnet
continue
dhcp_port_body = self._get_create_dhcp_port_body(
t_dhcp_port, b_subnet_id, bottom_net_id)
t_dhcp_ip = t_dhcp_port['fixed_ips'][0]['ip_address']
b_dhcp_port = None
try:
b_dhcp_port = client.create_ports(context, dhcp_port_body)
except Exception:
                # examine whether we conflicted with a dhcp port that was
                # automatically created by the bottom pod
b_dhcp_ports = client.list_ports(context,
b_dhcp_port_filters)
dhcp_port_match = False
for dhcp_port in b_dhcp_ports:
subnet_id = dhcp_port['fixed_ips'][0]['subnet_id']
ip = dhcp_port['fixed_ips'][0]['ip_address']
if b_subnet_id == subnet_id and t_dhcp_ip == ip:
with context.session.begin():
core.create_resource(
context, models.ResourceRouting,
{'top_id': t_dhcp_port['id'],
'bottom_id': dhcp_port['id'],
'pod_id': pod['pod_id'],
'project_id': self.project_id,
'resource_type': constants.RT_PORT})
dhcp_port_match = True
break
if not dhcp_port_match:
# so we didn't conflict with a dhcp port, raise exception
raise
if b_dhcp_port:
with context.session.begin():
core.create_resource(context, models.ResourceRouting,
{'top_id': t_dhcp_port['id'],
'bottom_id': b_dhcp_port['id'],
'pod_id': pod['pod_id'],
'project_id': self.project_id,
'resource_type': constants.RT_PORT})
            # there is still one thing to do: the bottom pod may have
            # created other dhcp ports automatically, and we need to
            # delete them
b_dhcp_ports = client.list_ports(context,
b_dhcp_port_filters)
remove_port_list = []
for dhcp_port in b_dhcp_ports:
subnet_id = dhcp_port['fixed_ips'][0]['subnet_id']
ip = dhcp_port['fixed_ips'][0]['ip_address']
if b_subnet_id == subnet_id and t_dhcp_ip != ip:
remove_port_list.append(dhcp_port['id'])
for dhcp_port_id in remove_port_list:
# NOTE(zhiyuan) dhcp agent will receive this port-delete
# notification and re-configure dhcp so our newly created
# dhcp port can be used
client.delete_ports(context, dhcp_port_id)
# port
if not port:
port = top_client.create_ports(context, top_port_body)
port_body = self._get_create_port_body(port, subnet_map, bottom_net_id)
bottom_port_id = self._prepare_neutron_element(context, pod, port,
'port', port_body)
return bottom_port_id
def _handle_port(self, context, pod, port):
mappings = db_api.get_bottom_mappings_by_top_id(context, port['id'],
constants.RT_PORT)
if mappings:
# TODO(zhiyuan) judge return or raise exception
# NOTE(zhiyuan) user provides a port that already has mapped
# bottom port, return bottom id or raise an exception?
return mappings[0][1]
top_client = self._get_client()
# NOTE(zhiyuan) at this moment, bottom port has not been created,
# neutron plugin directly retrieves information from top, so the
# network id and subnet id in this port dict are safe to use
net = top_client.get_networks(context, port['network_id'])
subnets = []
for fixed_ip in port['fixed_ips']:
subnets.append(top_client.get_subnets(context,
fixed_ip['subnet_id']))
return self._handle_network(context, pod, net, subnets, port)
@staticmethod
def _get_create_server_body(origin, bottom_az):
body = {}
copy_fields = ['name', 'imageRef', 'flavorRef',
'max_count', 'min_count']
if bottom_az:
body['availability_zone'] = bottom_az
for field in copy_fields:
if field in origin:
body[field] = origin[field]
return body
def _get_all(self, context):
ret = []
pods = db_api.list_pods(context)
for pod in pods:
if not pod['az_name']:
continue
client = self._get_client(pod['pod_name'])
ret.extend(client.list_servers(context))
return ret
@expose(generic=True, template='json')
def get_one(self, _id):
context = t_context.extract_context_from_environ()
if _id == 'detail':
return {'servers': self._get_all(context)}
mappings = db_api.get_bottom_mappings_by_top_id(
context, _id, constants.RT_SERVER)
if not mappings:
pecan.abort(404, 'Server not found')
return
pod, bottom_id = mappings[0]
client = self._get_client(pod['pod_name'])
server = client.get_servers(context, bottom_id)
if not server:
pecan.abort(404, 'Server not found')
return
else:
return {'server': server}
@expose(generic=True, template='json')
def get_all(self):
context = t_context.extract_context_from_environ()
return {'servers': self._get_all(context)}
@expose(generic=True, template='json')
def post(self, **kw):
context = t_context.extract_context_from_environ()
if 'server' not in kw:
pecan.abort(400, 'Request body not found')
return
if 'availability_zone' not in kw['server']:
pecan.abort(400, 'Availability zone not set')
return
pod, b_az = az_ag.get_pod_by_az_tenant(
context, kw['server']['availability_zone'], self.project_id)
if not pod:
pecan.abort(400, 'No pod bound to availability zone')
return
server_body = self._get_create_server_body(kw['server'], b_az)
top_client = self._get_client()
if 'networks' in kw['server']:
server_body['networks'] = []
for net_info in kw['server']['networks']:
if 'uuid' in net_info:
network = top_client.get_networks(context,
net_info['uuid'])
if not network:
pecan.abort(400, 'Network not found')
return
subnets = top_client.list_subnets(
context, [{'key': 'network_id',
'comparator': 'eq',
'value': network['id']}])
if not subnets:
                        pecan.abort(400, 'Network does not contain subnets')
return
bottom_port_id = self._handle_network(context, pod,
network, subnets)
elif 'port' in net_info:
port = top_client.get_ports(context, net_info['port'])
if not port:
pecan.abort(400, 'Port not found')
return
bottom_port_id = self._handle_port(context, pod, port)
server_body['networks'].append({'port': bottom_port_id})
client = self._get_client(pod['pod_name'])
nics = [
{'port-id': _port['port']} for _port in server_body['networks']]
server = client.create_servers(context,
name=server_body['name'],
image=server_body['imageRef'],
flavor=server_body['flavorRef'],
nics=nics)
with context.session.begin():
core.create_resource(context, models.ResourceRouting,
{'top_id': server['id'],
'bottom_id': server['id'],
'pod_id': pod['pod_id'],
'project_id': self.project_id,
'resource_type': constants.RT_SERVER})
return {'server': server}
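A boot-request sketch matching the checks in post() above (IDs are placeholders; availability_zone is mandatory, and each network entry carries either a top network uuid or a top port id):

    POST /v2.1/{project_id}/servers
    {"server": {"name": "vm1",
                "imageRef": "<image-id>",
                "flavorRef": "<flavor-id>",
                "availability_zone": "az1",
                "networks": [{"uuid": "<top-network-id>"}]}}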

View File

@@ -0,0 +1,22 @@
# Copyright 2015 Huawei Technologies Co., Ltd.
# All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import tricircle.nova_apigw.app
def list_opts():
return [
('DEFAULT', tricircle.nova_apigw.app.common_opts),
]
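list_opts is the hook that oslo-config-generator consumes; a matching setup.cfg entry point would look like the sketch below (the opts module path is an assumption, not taken from this commit):

    [entry_points]
    oslo.config.opts =
        tricircle.nova_apigw = tricircle.nova_apigw.opts:list_opts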

View File

@@ -1,751 +0,0 @@
# Copyright 2015 Huawei Technologies Co., Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import base64
import contextlib
import functools
import six
import sys
import time
import traceback
from oslo_config import cfg
import oslo_log.log as logging
import oslo_messaging as messaging
from oslo_utils import excutils
from oslo_utils import strutils
from tricircle.common.i18n import _
from tricircle.common.i18n import _LE
from tricircle.common.i18n import _LW
from tricircle.common.nova_lib import block_device
from tricircle.common.nova_lib import compute_manager
from tricircle.common.nova_lib import compute_utils
from tricircle.common.nova_lib import conductor
from tricircle.common.nova_lib import driver_block_device
from tricircle.common.nova_lib import exception
from tricircle.common.nova_lib import manager
from tricircle.common.nova_lib import network
from tricircle.common.nova_lib import network_model
from tricircle.common.nova_lib import objects
from tricircle.common.nova_lib import openstack_driver
from tricircle.common.nova_lib import pipelib
from tricircle.common.nova_lib import rpc
from tricircle.common.nova_lib import task_states
from tricircle.common.nova_lib import utils
from tricircle.common.nova_lib import vm_states
from tricircle.common.nova_lib import volume
import tricircle.common.utils as t_utils
CONF = cfg.CONF
compute_opts = [
cfg.StrOpt('default_access_ip_network_name',
help='Name of network to use to set access IPs for instances'),
cfg.IntOpt('network_allocate_retries',
default=0,
help="Number of times to retry network allocation on failures"),
]
CONF.register_opts(compute_opts)
LOG = logging.getLogger(__name__)
SERVICE_NAME = 'proxy_compute'
get_notifier = functools.partial(rpc.get_notifier, service=SERVICE_NAME)
wrap_exception = functools.partial(exception.wrap_exception,
get_notifier=get_notifier)
reverts_task_state = compute_manager.reverts_task_state
wrap_instance_fault = compute_manager.wrap_instance_fault
wrap_instance_event = compute_manager.wrap_instance_event
class ProxyComputeManager(manager.Manager):
target = messaging.Target(version='4.0')
def __init__(self, *args, **kwargs):
self.is_neutron_security_groups = (
openstack_driver.is_neutron_security_groups())
self.use_legacy_block_device_info = False
self.network_api = network.API()
self.volume_api = volume.API()
self.conductor_api = conductor.API()
self.compute_task_api = conductor.ComputeTaskAPI()
super(ProxyComputeManager, self).__init__(
service_name=SERVICE_NAME, *args, **kwargs)
def _decode_files(self, injected_files):
"""Base64 decode the list of files to inject."""
if not injected_files:
return []
def _decode(f):
path, contents = f
try:
decoded = base64.b64decode(contents)
return path, decoded
except TypeError:
raise exception.Base64Exception(path=path)
return [_decode(f) for f in injected_files]
def _cleanup_allocated_networks(self, context, instance,
requested_networks):
try:
self._deallocate_network(context, instance, requested_networks)
except Exception:
msg = _LE('Failed to deallocate networks')
LOG.exception(msg, instance=instance)
return
instance.system_metadata['network_allocated'] = 'False'
try:
instance.save()
except exception.InstanceNotFound:
pass
def _deallocate_network(self, context, instance,
requested_networks=None):
LOG.debug('Deallocating network for instance', instance=instance)
self.network_api.deallocate_for_instance(
context, instance, requested_networks=requested_networks)
def _cleanup_volumes(self, context, instance_uuid, bdms, raise_exc=True):
exc_info = None
for bdm in bdms:
LOG.debug("terminating bdm %s", bdm,
instance_uuid=instance_uuid)
if bdm.volume_id and bdm.delete_on_termination:
try:
self.volume_api.delete(context, bdm.volume_id)
except Exception as exc:
exc_info = sys.exc_info()
LOG.warn(_LW('Failed to delete volume: %(volume_id)s due '
'to %(exc)s'), {'volume_id': bdm.volume_id,
                                          'exc': six.text_type(exc)})
if exc_info is not None and raise_exc:
six.reraise(exc_info[0], exc_info[1], exc_info[2])
def _instance_update(self, context, instance, **kwargs):
"""Update an instance in the database using kwargs as value."""
for k, v in kwargs.items():
setattr(instance, k, v)
instance.save()
def _set_instance_obj_error_state(self, context, instance,
clean_task_state=False):
try:
instance.vm_state = vm_states.ERROR
if clean_task_state:
instance.task_state = None
instance.save()
except exception.InstanceNotFound:
LOG.debug('Instance has been destroyed from under us while '
'trying to set it to ERROR', instance=instance)
def _notify_about_instance_usage(self, context, instance, event_suffix,
network_info=None, system_metadata=None,
extra_usage_info=None, fault=None):
compute_utils.notify_about_instance_usage(
self.notifier, context, instance, event_suffix,
network_info=network_info,
system_metadata=system_metadata,
extra_usage_info=extra_usage_info, fault=fault)
def _validate_instance_group_policy(self, context, instance,
filter_properties):
# NOTE(russellb) Instance group policy is enforced by the scheduler.
# However, there is a race condition with the enforcement of
# anti-affinity. Since more than one instance may be scheduled at the
# same time, it's possible that more than one instance with an
# anti-affinity policy may end up here. This is a validation step to
# make sure that starting the instance here doesn't violate the policy.
scheduler_hints = filter_properties.get('scheduler_hints') or {}
group_hint = scheduler_hints.get('group')
if not group_hint:
return
@utils.synchronized(group_hint)
def _do_validation(context, instance, group_hint):
group = objects.InstanceGroup.get_by_hint(context, group_hint)
if 'anti-affinity' not in group.policies and (
'affinity' not in group.policies):
return
group_hosts = group.get_hosts(context, exclude=[instance.uuid])
if self.host in group_hosts:
if 'anti-affinity' in group.policies:
msg = _("Anti-affinity instance group policy "
"was violated.")
raise exception.RescheduledException(
instance_uuid=instance.uuid,
reason=msg)
elif group_hosts and [self.host] != group_hosts:
                # NOTE(huawei) Native code only considered the anti-affinity
                # policy, but the affinity policy has the same problem,
                # so we add a check for the affinity policy as well.
if 'affinity' in group.policies:
msg = _("affinity instance group policy was violated.")
raise exception.RescheduledException(
instance_uuid=instance.uuid,
reason=msg)
_do_validation(context, instance, group_hint)
@wrap_exception()
@reverts_task_state
@wrap_instance_fault
def build_and_run_instance(
self, context, host, instance, image, request_spec,
filter_properties, admin_password=None, injected_files=None,
requested_networks=None, security_groups=None,
block_device_mapping=None, node=None, limits=None):
if (requested_networks and
not isinstance(requested_networks,
objects.NetworkRequestList)):
requested_networks = objects.NetworkRequestList(
objects=[objects.NetworkRequest.from_tuple(t)
for t in requested_networks])
@utils.synchronized(instance.uuid)
def _locked_do_build_and_run_instance(*args, **kwargs):
self._do_build_and_run_instance(*args, **kwargs)
utils.spawn_n(_locked_do_build_and_run_instance,
context, host, instance, image, request_spec,
filter_properties, admin_password, injected_files,
requested_networks, security_groups,
block_device_mapping, node, limits)
@wrap_exception()
@reverts_task_state
@wrap_instance_event
@wrap_instance_fault
def _do_build_and_run_instance(self, context, host, instance, image,
request_spec, filter_properties,
admin_password, injected_files,
requested_networks, security_groups,
block_device_mapping, node=None,
limits=None):
try:
LOG.debug(_('Starting instance...'), context=context,
instance=instance)
instance.vm_state = vm_states.BUILDING
instance.task_state = None
instance.save(expected_task_state=(task_states.SCHEDULING, None))
except exception.InstanceNotFound:
msg = 'Instance disappeared before build.'
LOG.debug(msg, instance=instance)
return
except exception.UnexpectedTaskStateError as e:
LOG.debug(e.format_message(), instance=instance)
return
# b64 decode the files to inject:
decoded_files = self._decode_files(injected_files)
if limits is None:
limits = {}
if node is None:
node = t_utils.get_node_name(host)
LOG.debug('No node specified, defaulting to %s', node,
instance=instance)
try:
self._build_and_run_instance(
context, host, instance, image, request_spec, decoded_files,
admin_password, requested_networks, security_groups,
block_device_mapping, node, limits, filter_properties)
except exception.RescheduledException as e:
LOG.debug(e.format_message(), instance=instance)
retry = filter_properties.get('retry', None)
if not retry:
# no retry information, do not reschedule.
LOG.debug("Retry info not present, will not reschedule",
instance=instance)
self._cleanup_allocated_networks(context, instance,
requested_networks)
compute_utils.add_instance_fault_from_exc(
context, instance, e, sys.exc_info())
self._set_instance_obj_error_state(context, instance,
clean_task_state=True)
return
retry['exc'] = traceback.format_exception(*sys.exc_info())
self.network_api.cleanup_instance_network_on_host(
context, instance, self.host)
instance.task_state = task_states.SCHEDULING
instance.save()
self.compute_task_api.build_instances(
context, [instance], image, filter_properties, admin_password,
injected_files, requested_networks, security_groups,
block_device_mapping)
except (exception.InstanceNotFound,
exception.UnexpectedDeletingTaskStateError):
msg = 'Instance disappeared during build.'
LOG.debug(msg, instance=instance)
self._cleanup_allocated_networks(context, instance,
requested_networks)
except exception.BuildAbortException as e:
LOG.exception(e.format_message(), instance=instance)
self._cleanup_allocated_networks(context, instance,
requested_networks)
self._cleanup_volumes(context, instance.uuid,
block_device_mapping, raise_exc=False)
compute_utils.add_instance_fault_from_exc(
context, instance, e, sys.exc_info())
self._set_instance_obj_error_state(context, instance,
clean_task_state=True)
except Exception as e:
# should not reach here.
msg = _LE('Unexpected build failure, not rescheduling build.')
LOG.exception(msg, instance=instance)
self._cleanup_allocated_networks(context, instance,
requested_networks)
self._cleanup_volumes(context, instance.uuid,
block_device_mapping, raise_exc=False)
compute_utils.add_instance_fault_from_exc(context, instance,
e, sys.exc_info())
self._set_instance_obj_error_state(context, instance,
clean_task_state=True)
def _get_instance_nw_info(self, context, instance, use_slave=False):
"""Get a list of dictionaries of network data of an instance."""
return self.network_api.get_instance_nw_info(context, instance,
use_slave=use_slave)
def _allocate_network(self, context, instance, requested_networks, macs,
security_groups, dhcp_options):
"""Start network allocation asynchronously.
Return an instance of NetworkInfoAsyncWrapper that can be used to
retrieve the allocated networks when the operation has finished.
"""
# NOTE(comstud): Since we're allocating networks asynchronously,
# this task state has little meaning, as we won't be in this
# state for very long.
instance.vm_state = vm_states.BUILDING
instance.task_state = task_states.NETWORKING
instance.save(expected_task_state=[None])
is_vpn = pipelib.is_vpn_image(instance.image_ref)
return network_model.NetworkInfoAsyncWrapper(
self._allocate_network_async, context, instance,
requested_networks, macs, security_groups, is_vpn, dhcp_options)
def _allocate_network_async(self, context, instance, requested_networks,
macs, security_groups, is_vpn, dhcp_options):
"""Method used to allocate networks in the background.
Broken out for testing.
"""
LOG.debug("Allocating IP information in the background.",
instance=instance)
retries = CONF.network_allocate_retries
if retries < 0:
LOG.warn(_("Treating negative config value (%(retries)s) for "
"'network_allocate_retries' as 0."),
{'retries': retries})
attempts = retries > 1 and retries + 1 or 1
retry_time = 1
for attempt in range(1, attempts + 1):
try:
nwinfo = self.network_api.allocate_for_instance(
context, instance, vpn=is_vpn,
requested_networks=requested_networks,
macs=macs, security_groups=security_groups,
dhcp_options=dhcp_options)
LOG.debug('Instance network_info: |%s|', nwinfo,
instance=instance)
instance.system_metadata['network_allocated'] = 'True'
# NOTE(JoshNang) do not save the instance here, as it can cause
# races. The caller shares a reference to instance and waits
# for this async greenthread to finish before calling
# instance.save().
return nwinfo
except Exception:
exc_info = sys.exc_info()
log_info = {'attempt': attempt,
'attempts': attempts}
if attempt == attempts:
LOG.exception(_LE('Instance failed network setup '
'after %(attempts)d attempt(s)'),
log_info)
                    six.reraise(exc_info[0], exc_info[1], exc_info[2])
LOG.warn(_('Instance failed network setup '
'(attempt %(attempt)d of %(attempts)d)'),
log_info, instance=instance)
time.sleep(retry_time)
retry_time *= 2
if retry_time > 30:
retry_time = 30
# Not reached.
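        # Illustration (not part of this commit): with
        # network_allocate_retries = 3 the loop above makes attempts = 4
        # passes and sleeps 1s, 2s and 4s between failed attempts; the
        # sleep doubles each time and is capped at 30 seconds.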
def _build_networks_for_instance(self, context, instance,
requested_networks, security_groups):
# If we're here from a reschedule the network may already be allocated.
if strutils.bool_from_string(
instance.system_metadata.get('network_allocated', 'False')):
# NOTE(alex_xu): The network_allocated is True means the network
# resource already allocated at previous scheduling, and the
# network setup is cleanup at previous. After rescheduling, the
# network resource need setup on the new host.
self.network_api.setup_instance_network_on_host(
context, instance, instance.host)
return self._get_instance_nw_info(context, instance)
if not self.is_neutron_security_groups:
security_groups = []
# NOTE(zhiyuan) in ComputeManager, driver method "macs_for_instance"
# and "dhcp_options_for_instance" are called to get macs and
# dhcp_options, here we just set them to None
macs = None
dhcp_options = None
network_info = self._allocate_network(context, instance,
requested_networks, macs,
security_groups, dhcp_options)
if not instance.access_ip_v4 and not instance.access_ip_v6:
# If CONF.default_access_ip_network_name is set, grab the
# corresponding network and set the access ip values accordingly.
# Note that when there are multiple ips to choose from, an
# arbitrary one will be chosen.
network_name = CONF.default_access_ip_network_name
if not network_name:
return network_info
for vif in network_info:
if vif['network']['label'] == network_name:
for ip in vif.fixed_ips():
if ip['version'] == 4:
instance.access_ip_v4 = ip['address']
if ip['version'] == 6:
instance.access_ip_v6 = ip['address']
instance.save()
break
return network_info
    # NOTE(zhiyuan) this function prepares block devices for the driver and
    # cinder volumes, but in the nova proxy _proxy_run_instance does that
    # job instead; remove this function after the cinder proxy is ready and
    # we confirm it is useless
def _prep_block_device(self, context, instance, bdms,
do_check_attach=True):
"""Set up the block device for an instance with error logging."""
try:
block_device_info = {
'root_device_name': instance['root_device_name'],
'swap': driver_block_device.convert_swap(bdms),
'ephemerals': driver_block_device.convert_ephemerals(bdms),
'block_device_mapping': (
driver_block_device.attach_block_devices(
driver_block_device.convert_volumes(bdms),
context, instance, self.volume_api,
self.driver, do_check_attach=do_check_attach) +
driver_block_device.attach_block_devices(
driver_block_device.convert_snapshots(bdms),
context, instance, self.volume_api,
self.driver, self._await_block_device_map_created,
do_check_attach=do_check_attach) +
driver_block_device.attach_block_devices(
driver_block_device.convert_images(bdms),
context, instance, self.volume_api,
self.driver, self._await_block_device_map_created,
do_check_attach=do_check_attach) +
driver_block_device.attach_block_devices(
driver_block_device.convert_blanks(bdms),
context, instance, self.volume_api,
self.driver, self._await_block_device_map_created,
do_check_attach=do_check_attach))
}
if self.use_legacy_block_device_info:
for bdm_type in ('swap', 'ephemerals', 'block_device_mapping'):
block_device_info[bdm_type] = \
driver_block_device.legacy_block_devices(
block_device_info[bdm_type])
# Get swap out of the list
block_device_info['swap'] = driver_block_device.get_swap(
block_device_info['swap'])
return block_device_info
except exception.OverQuota:
msg = _LW('Failed to create block device for instance due to '
'being over volume resource quota')
LOG.warn(msg, instance=instance)
raise exception.InvalidBDM()
except Exception:
LOG.exception(_LE('Instance failed block device setup'),
instance=instance)
raise exception.InvalidBDM()
def _default_block_device_names(self, context, instance,
image_meta, block_devices):
"""Verify that all the devices have the device_name set.
If not, provide a default name. It also ensures that there is a
root_device_name and is set to the first block device in the boot
sequence (boot_index=0).
"""
root_bdm = block_device.get_root_bdm(block_devices)
if not root_bdm:
return
# Get the root_device_name from the root BDM or the instance
root_device_name = None
update_instance = False
update_root_bdm = False
if root_bdm.device_name:
root_device_name = root_bdm.device_name
instance.root_device_name = root_device_name
update_instance = True
elif instance.root_device_name:
root_device_name = instance.root_device_name
root_bdm.device_name = root_device_name
update_root_bdm = True
else:
# NOTE(zhiyuan) if driver doesn't implement related function,
# function in compute_utils will be called
root_device_name = compute_utils.get_next_device_name(instance, [])
instance.root_device_name = root_device_name
root_bdm.device_name = root_device_name
update_instance = update_root_bdm = True
if update_instance:
instance.save()
if update_root_bdm:
root_bdm.save()
ephemerals = filter(block_device.new_format_is_ephemeral,
block_devices)
swap = filter(block_device.new_format_is_swap,
block_devices)
block_device_mapping = filter(
driver_block_device.is_block_device_mapping, block_devices)
# NOTE(zhiyuan) if driver doesn't implement related function,
# function in compute_utils will be called
compute_utils.default_device_names_for_instance(
instance, root_device_name, ephemerals, swap, block_device_mapping)
@contextlib.contextmanager
def _build_resources(self, context, instance, requested_networks,
security_groups, image, block_device_mapping):
resources = {}
network_info = None
try:
network_info = self._build_networks_for_instance(
context, instance, requested_networks, security_groups)
resources['network_info'] = network_info
except (exception.InstanceNotFound,
exception.UnexpectedDeletingTaskStateError):
raise
except exception.UnexpectedTaskStateError as e:
raise exception.BuildAbortException(instance_uuid=instance.uuid,
reason=e.format_message())
except Exception:
# Because this allocation is async any failures are likely to occur
# when the driver accesses network_info during spawn().
LOG.exception(_LE('Failed to allocate network(s)'),
instance=instance)
msg = _('Failed to allocate the network(s), not rescheduling.')
raise exception.BuildAbortException(instance_uuid=instance.uuid,
reason=msg)
try:
# Verify that all the BDMs have a device_name set and assign a
# default to the ones missing it with the help of the driver.
self._default_block_device_names(context, instance, image,
block_device_mapping)
instance.vm_state = vm_states.BUILDING
instance.task_state = task_states.BLOCK_DEVICE_MAPPING
instance.save()
# NOTE(zhiyuan) remove this commented code after cinder proxy is
# ready and we confirm _prep_block_device is useless
#
# block_device_info = self._prep_block_device(
# context, instance, block_device_mapping)
#
block_device_info = None
resources['block_device_info'] = block_device_info
except (exception.InstanceNotFound,
exception.UnexpectedDeletingTaskStateError):
with excutils.save_and_reraise_exception() as ctxt:
# Make sure the async call finishes
if network_info is not None:
network_info.wait(do_raise=False)
except exception.UnexpectedTaskStateError as e:
# Make sure the async call finishes
if network_info is not None:
network_info.wait(do_raise=False)
raise exception.BuildAbortException(instance_uuid=instance.uuid,
reason=e.format_message())
except Exception:
LOG.exception(_LE('Failure prepping block device'),
instance=instance)
# Make sure the async call finishes
if network_info is not None:
network_info.wait(do_raise=False)
msg = _('Failure prepping block device.')
raise exception.BuildAbortException(instance_uuid=instance.uuid,
reason=msg)
self._heal_proxy_networks(context, instance, network_info)
cascaded_ports = self._heal_proxy_ports(
context, instance, network_info)
resources['cascaded_ports'] = cascaded_ports
try:
yield resources
except Exception as exc:
with excutils.save_and_reraise_exception() as ctxt:
if not isinstance(exc, (
exception.InstanceNotFound,
exception.UnexpectedDeletingTaskStateError)):
LOG.exception(_LE('Instance failed to spawn'),
instance=instance)
# Make sure the async call finishes
if network_info is not None:
network_info.wait(do_raise=False)
try:
self._shutdown_instance(context, instance,
block_device_mapping,
requested_networks,
try_deallocate_networks=False)
except Exception:
ctxt.reraise = False
msg = _('Could not clean up failed build,'
' not rescheduling')
raise exception.BuildAbortException(
instance_uuid=instance.uuid, reason=msg)
def _build_and_run_instance(self, context, host, instance, image,
request_spec, injected_files, admin_password,
requested_networks, security_groups,
block_device_mapping, node, limits,
filter_properties):
image_name = image.get('name')
self._notify_about_instance_usage(context, instance, 'create.start',
extra_usage_info={
'image_name': image_name})
try:
self._validate_instance_group_policy(context, instance,
filter_properties)
with self._build_resources(context, instance, requested_networks,
security_groups, image,
block_device_mapping) as resources:
instance.vm_state = vm_states.BUILDING
instance.task_state = task_states.SPAWNING
instance.save(
expected_task_state=task_states.BLOCK_DEVICE_MAPPING)
cascaded_ports = resources['cascaded_ports']
request_spec['block_device_mapping'] = block_device_mapping
request_spec['security_group'] = security_groups
self._proxy_run_instance(
context, instance, request_spec, filter_properties,
requested_networks, injected_files, admin_password,
None, host, node, None, cascaded_ports)
except (exception.InstanceNotFound,
exception.UnexpectedDeletingTaskStateError) as e:
with excutils.save_and_reraise_exception():
self._notify_about_instance_usage(context, instance,
'create.end', fault=e)
except exception.ComputeResourcesUnavailable as e:
LOG.debug(e.format_message(), instance=instance)
self._notify_about_instance_usage(context, instance,
'create.error', fault=e)
raise exception.RescheduledException(
instance_uuid=instance.uuid, reason=e.format_message())
except exception.BuildAbortException as e:
with excutils.save_and_reraise_exception():
LOG.debug(e.format_message(), instance=instance)
self._notify_about_instance_usage(context, instance,
'create.error', fault=e)
except (exception.FixedIpLimitExceeded,
exception.NoMoreNetworks) as e:
LOG.warn(_LW('No more network or fixed IP to be allocated'),
instance=instance)
self._notify_about_instance_usage(context, instance,
'create.error', fault=e)
msg = _('Failed to allocate the network(s) with error %s, '
'not rescheduling.') % e.format_message()
raise exception.BuildAbortException(instance_uuid=instance.uuid,
reason=msg)
except (exception.VirtualInterfaceCreateException,
exception.VirtualInterfaceMacAddressException) as e:
LOG.exception(_LE('Failed to allocate network(s)'),
instance=instance)
self._notify_about_instance_usage(context, instance,
'create.error', fault=e)
msg = _('Failed to allocate the network(s), not rescheduling.')
raise exception.BuildAbortException(instance_uuid=instance.uuid,
reason=msg)
except (exception.FlavorDiskTooSmall,
exception.FlavorMemoryTooSmall,
exception.ImageNotActive,
exception.ImageUnacceptable) as e:
self._notify_about_instance_usage(context, instance,
'create.error', fault=e)
raise exception.BuildAbortException(instance_uuid=instance.uuid,
reason=e.format_message())
except Exception as e:
self._notify_about_instance_usage(context, instance,
'create.error', fault=e)
raise exception.RescheduledException(
instance_uuid=instance.uuid, reason=six.text_type(e))
def _shutdown_instance(self, context, instance, bdms,
requested_networks=None, notify=True,
try_deallocate_networks=True):
LOG.debug('Proxy stop instance')
# proxy new function below
def _heal_proxy_networks(self, context, instance, network_info):
pass
def _heal_proxy_ports(self, context, instance, network_info):
return []
def _proxy_run_instance(self, context, instance, request_spec=None,
filter_properties=None, requested_networks=None,
injected_files=None, admin_password=None,
is_first_time=False, host=None, node=None,
legacy_bdm_in_spec=True, physical_ports=None):
LOG.debug('Proxy run instance')

View File

@@ -1,42 +0,0 @@
# Copyright 2015 Huawei Technologies Co., Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_config.cfg import CONF
import tricircle.common.service as t_service
from tricircle.common.utils import get_import_path
from tricircle.proxy.compute_manager import ProxyComputeManager
_REPORT_INTERVAL = 30
_REPORT_INTERVAL_MAX = 60
def setup_server():
service = t_service.NovaService(
host=CONF.host,
# NOTE(zhiyuan) binary needs to start with "nova-"
# if nova service is used
binary="nova-proxy",
topic="proxy",
db_allowed=False,
periodic_enable=True,
report_interval=_REPORT_INTERVAL,
periodic_interval_max=_REPORT_INTERVAL_MAX,
manager=get_import_path(ProxyComputeManager),
)
t_service.fix_compute_service_exchange(service)
return service

20
tricircle/tests/base.py Normal file
View File

@@ -0,0 +1,20 @@
# Copyright (c) 2015 Huawei Technologies Co., Ltd.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslotest import base
class TestCase(base.BaseTestCase):
"""Test case base class for all unit tests."""

View File

View File

@@ -0,0 +1,622 @@
# Copyright (c) 2015 Huawei Technologies Co., Ltd.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from mock import patch
import pecan
from pecan.configuration import set_config
from pecan.testing import load_test_app
from oslo_config import cfg
from oslo_config import fixture as fixture_config
from tricircle.api import app
from tricircle.common import az_ag
from tricircle.common import context
from tricircle.common import utils
from tricircle.db import core
from tricircle.tests import base
OPT_GROUP_NAME = 'keystone_authtoken'
cfg.CONF.import_group(OPT_GROUP_NAME, "keystonemiddleware.auth_token")
def fake_is_admin(ctx):
return True
class API_FunctionalTest(base.TestCase):
def setUp(self):
super(API_FunctionalTest, self).setUp()
self.addCleanup(set_config, {}, overwrite=True)
cfg.CONF.register_opts(app.common_opts)
self.CONF = self.useFixture(fixture_config.Config()).conf
self.CONF.set_override('auth_strategy', 'noauth')
self.CONF.set_override('tricircle_db_connection', 'sqlite:///:memory:')
core.initialize()
core.ModelBase.metadata.create_all(core.get_engine())
self.context = context.get_admin_context()
self.app = self._make_app()
def _make_app(self, enable_acl=False):
self.config = {
'app': {
'root': 'tricircle.api.controllers.root.RootController',
'modules': ['tricircle.api'],
'enable_acl': enable_acl,
'errors': {
400: '/error',
'__force_dict__': True
}
},
}
return load_test_app(self.config)
def tearDown(self):
super(API_FunctionalTest, self).tearDown()
cfg.CONF.unregister_opts(app.common_opts)
pecan.set_config({}, overwrite=True)
core.ModelBase.metadata.drop_all(core.get_engine())
class TestPodController(API_FunctionalTest):
"""Test version listing on root URI."""
@patch.object(context, 'is_admin_context',
new=fake_is_admin)
def test_post_no_input(self):
pods = [
# missing pod
{
"pod_xxx":
{
"dc_name": "dc1",
"pod_az_name": "az1"
},
"expected_error": 400
}]
for test_pod in pods:
response = self.app.post_json(
'/v1.0/pods',
dict(pod_xxx=test_pod['pod_xxx']),
expect_errors=True)
self.assertEqual(response.status_int,
test_pod['expected_error'])
@patch.object(context, 'is_admin_context',
new=fake_is_admin)
def test_post_invalid_input(self):
pods = [
# missing az and pod
{
"pod":
{
"dc_name": "dc1",
"pod_az_name": "az1"
},
"expected_error": 422
},
# missing pod
{
"pod":
{
"pod_az_name": "az1",
"dc_name": "dc1",
"az_name": "az1"
},
"expected_error": 422
},
            # missing pod name, empty az name
{
"pod":
{
"pod_az_name": "az1",
"dc_name": "dc1",
"az_name": "",
},
"expected_error": 422
},
            # empty pod name, missing az name
{
"pod":
{
"pod_name": "",
"pod_az_name": "az1",
"dc_name": "dc1"
},
"expected_error": 422
},
# az & pod == ""
{
"pod":
{
"pod_name": "",
"pod_az_name": "az1",
"dc_name": "dc1",
"az_name": ""
},
"expected_error": 422
},
            # empty pod name
{
"pod":
{
"pod_name": "",
"pod_az_name": "az1",
"dc_name": "dc1",
"az_name": "az1"
},
"expected_error": 422
}
]
self._test_and_check(pods)
@patch.object(context, 'is_admin_context',
new=fake_is_admin)
def test_post_duplicate_top_region(self):
pods = [
            # create TopRegion for the first time
{
"pod":
{
"pod_name": "TopRegion",
"pod_az_name": "az1",
"dc_name": "dc1"
},
"expected_error": 200
},
            # a second top region conflicts with the existing one
            {
"pod":
{
"pod_name": "TopRegion2",
"pod_az_name": "",
"dc_name": "dc1"
},
"expected_error": 409
},
]
self._test_and_check(pods)
@patch.object(context, 'is_admin_context',
new=fake_is_admin)
def test_post_duplicate_pod(self):
pods = [
{
"pod":
{
"pod_name": "Pod1",
"pod_az_name": "az1",
"dc_name": "dc1",
"az_name": "AZ1"
},
"expected_error": 200
},
{
"pod":
{
"pod_name": "Pod1",
"pod_az_name": "az2",
"dc_name": "dc2",
"az_name": "AZ1"
},
"expected_error": 409
},
]
self._test_and_check(pods)
@patch.object(context, 'is_admin_context',
new=fake_is_admin)
def test_post_pod_duplicate_top_region(self):
pods = [
            # create TopRegion for the first time
{
"pod":
{
"pod_name": "TopRegion",
"pod_az_name": "az1",
"dc_name": "dc1"
},
"expected_error": 200
},
{
"pod":
{
"pod_name": "TopRegion",
"pod_az_name": "az2",
"dc_name": "dc2",
"az_name": "AZ1"
},
"expected_error": 409
},
]
self._test_and_check(pods)
def _test_and_check(self, pods):
for test_pod in pods:
response = self.app.post_json(
'/v1.0/pods',
dict(pod=test_pod['pod']),
expect_errors=True)
self.assertEqual(response.status_int,
test_pod['expected_error'])
@patch.object(context, 'is_admin_context',
new=fake_is_admin)
def test_get_all(self):
pods = [
            # create TopRegion for the first time
{
"pod":
{
"pod_name": "TopRegion",
"pod_az_name": "",
"dc_name": "dc1",
"az_name": ""
},
"expected_error": 200
},
{
"pod":
{
"pod_name": "Pod1",
"pod_az_name": "az1",
"dc_name": "dc2",
"az_name": "AZ1"
},
"expected_error": 200
},
{
"pod":
{
"pod_name": "Pod2",
"pod_az_name": "az1",
"dc_name": "dc2",
"az_name": "AZ1"
},
"expected_error": 200
},
]
self._test_and_check(pods)
response = self.app.get('/v1.0/pods')
self.assertEqual(response.status_int, 200)
self.assertIn('TopRegion', response)
self.assertIn('Pod1', response)
self.assertIn('Pod2', response)
@patch.object(context, 'is_admin_context',
new=fake_is_admin)
@patch.object(context, 'extract_context_from_environ')
def test_get_delete_one(self, mock_context):
mock_context.return_value = self.context
pods = [
{
"pod":
{
"pod_name": "Pod1",
"pod_az_name": "az1",
"dc_name": "dc2",
"az_name": "AZ1"
},
"expected_error": 200,
},
{
"pod":
{
"pod_name": "Pod2",
"pod_az_name": "az1",
"dc_name": "dc2",
"az_name": "AZ1"
},
"expected_error": 200,
},
{
"pod":
{
"pod_name": "Pod3",
"pod_az_name": "az1",
"dc_name": "dc2",
"az_name": "AZ2"
},
"expected_error": 200,
},
]
self._test_and_check(pods)
response = self.app.get('/v1.0/pods')
self.assertEqual(response.status_int, 200)
return_pods = response.json
for ret_pod in return_pods['pods']:
_id = ret_pod['pod_id']
single_ret = self.app.get('/v1.0/pods/' + str(_id))
self.assertEqual(single_ret.status_int, 200)
one_pod_ret = single_ret.json
get_one_pod = one_pod_ret['pod']
self.assertEqual(get_one_pod['pod_id'],
ret_pod['pod_id'])
self.assertEqual(get_one_pod['pod_name'],
ret_pod['pod_name'])
self.assertEqual(get_one_pod['pod_az_name'],
ret_pod['pod_az_name'])
self.assertEqual(get_one_pod['dc_name'],
ret_pod['dc_name'])
self.assertEqual(get_one_pod['az_name'],
ret_pod['az_name'])
_id = ret_pod['pod_id']
            # check that the aggregate and az were added automatically
ag_name = utils.get_ag_name(ret_pod['pod_name'])
ag = az_ag.get_ag_by_name(self.context, ag_name)
self.assertIsNotNone(ag)
self.assertEqual(ag['name'],
utils.get_ag_name(ret_pod['pod_name']))
self.assertEqual(ag['availability_zone'], ret_pod['az_name'])
single_ret = self.app.delete('/v1.0/pods/' + str(_id))
self.assertEqual(single_ret.status_int, 200)
# make sure ag is deleted
ag = az_ag.get_ag_by_name(self.context, ag_name)
self.assertIsNone(ag)
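
For reference, the well-formed pod payload these cases revolve around looks like this (field values are illustrative):

pod_payload = {
    "pod": {
        "pod_name": "Pod1",    # name of the bottom OpenStack instance
        "pod_az_name": "az1",  # availability zone inside that pod
        "dc_name": "dc1",      # data center the pod belongs to
        "az_name": "AZ1"       # AZ exposed to the end user; "" in the top region cases above
    }
}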
class TestBindingController(API_FunctionalTest):
"""Test version listing on root URI."""
@patch.object(context, 'is_admin_context',
new=fake_is_admin)
def test_post_no_input(self):
pod_bindings = [
# missing pod_binding
{
"pod_xxx":
{
"tenant_id": "dddddd",
"pod_id": "0ace0db2-ef33-43a6-a150-42703ffda643"
},
"expected_error": 400
}]
for test_pod in pod_bindings:
response = self.app.post_json(
'/v1.0/bindings',
dict(pod_xxx=test_pod['pod_xxx']),
expect_errors=True)
self.assertEqual(response.status_int,
test_pod['expected_error'])
@patch.object(context, 'is_admin_context',
new=fake_is_admin)
def test_post_invalid_input(self):
pod_bindings = [
            # missing or empty tenant_id / pod_id
{
"pod_binding":
{
"tenant_id": "dddddd",
"pod_id": ""
},
"expected_error": 422
},
{
"pod_binding":
{
"tenant_id": "",
"pod_id": "0ace0db2-ef33-43a6-a150-42703ffda643"
},
"expected_error": 422
},
{
"pod_binding":
{
"tenant_id": "dddddd",
},
"expected_error": 422
},
{
"pod_binding":
{
"pod_id": "0ace0db2-ef33-43a6-a150-42703ffda643"
},
"expected_error": 422
}
]
self._test_and_check(pod_bindings)
@patch.object(context, 'is_admin_context',
new=fake_is_admin)
def test_bindings(self):
pods = [
{
"pod":
{
"pod_name": "Pod1",
"pod_az_name": "az1",
"dc_name": "dc2",
"az_name": "AZ1"
},
"expected_error": 200
}
]
pod_bindings = [
{
"pod_binding":
{
"tenant_id": "dddddd",
"pod_id": "0ace0db2-ef33-43a6-a150-42703ffda643"
},
"expected_error": 200
},
{
"pod_binding":
{
"tenant_id": "aaaaa",
"pod_id": "0ace0db2-ef33-43a6-a150-42703ffda643"
},
"expected_error": 200
},
{
"pod_binding":
{
"tenant_id": "dddddd",
"pod_id": "0ace0db2-ef33-43a6-a150-42703ffda643"
},
"expected_error": 409
}
]
self._test_and_check_pod(pods)
_id = self._get_az_pod_id()
self._test_and_check(pod_bindings, _id)
# get all
response = self.app.get('/v1.0/bindings')
self.assertEqual(response.status_int, 200)
# get one
return_pod_bindings = response.json
for ret_pod in return_pod_bindings['pod_bindings']:
_id = ret_pod['id']
single_ret = self.app.get('/v1.0/bindings/' + str(_id))
self.assertEqual(single_ret.status_int, 200)
            one_pod_ret = single_ret.json
            get_one_pod = one_pod_ret['pod_binding']
self.assertEqual(get_one_pod['id'],
ret_pod['id'])
self.assertEqual(get_one_pod['tenant_id'],
ret_pod['tenant_id'])
self.assertEqual(get_one_pod['pod_id'],
ret_pod['pod_id'])
_id = ret_pod['id']
single_ret = self.app.delete('/v1.0/bindings/' + str(_id))
self.assertEqual(single_ret.status_int, 200)
def _get_az_pod_id(self):
response = self.app.get('/v1.0/pods')
self.assertEqual(response.status_int, 200)
return_pods = response.json
for ret_pod in return_pods['pods']:
_id = ret_pod['pod_id']
return _id
def _test_and_check(self, pod_bindings, _id=None):
for test_pod in pod_bindings:
if _id is not None:
test_pod['pod_binding']['pod_id'] = str(_id)
response = self.app.post_json(
'/v1.0/bindings',
dict(pod_binding=test_pod['pod_binding']),
expect_errors=True)
self.assertEqual(response.status_int,
test_pod['expected_error'])
def _test_and_check_pod(self, pods):
for test_pod in pods:
response = self.app.post_json(
'/v1.0/pods',
dict(pod=test_pod['pod']),
expect_errors=True)
self.assertEqual(response.status_int,
test_pod['expected_error'])
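
Condensed, the binding flow exercised above (payload shape copied from the test data):

pod_binding = {
    "pod_binding": {
        "tenant_id": "dddddd",                             # tenant to place
        "pod_id": "0ace0db2-ef33-43a6-a150-42703ffda643"   # pod it is bound to
    }
}
# POSTing the same tenant/pod pair twice yields 409, as test_bindings asserts.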


@@ -0,0 +1,171 @@
# Copyright (c) 2015 Huawei Technologies Co., Ltd.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import pecan
from pecan.configuration import set_config
from pecan.testing import load_test_app
from oslo_config import cfg
from oslo_config import fixture as fixture_config
from oslo_serialization import jsonutils
from oslo_utils import uuidutils
from tricircle.api import app
from tricircle.tests import base
OPT_GROUP_NAME = 'keystone_authtoken'
cfg.CONF.import_group(OPT_GROUP_NAME, "keystonemiddleware.auth_token")
class API_FunctionalTest(base.TestCase):
def setUp(self):
super(API_FunctionalTest, self).setUp()
self.addCleanup(set_config, {}, overwrite=True)
cfg.CONF.register_opts(app.common_opts)
self.CONF = self.useFixture(fixture_config.Config()).conf
self.CONF.set_override('auth_strategy', 'noauth')
self.app = self._make_app()
def _make_app(self, enable_acl=False):
self.config = {
'app': {
'root': 'tricircle.api.controllers.root.RootController',
'modules': ['tricircle.api'],
'enable_acl': enable_acl,
'errors': {
400: '/error',
'__force_dict__': True
}
},
}
return load_test_app(self.config)
def tearDown(self):
super(API_FunctionalTest, self).tearDown()
cfg.CONF.unregister_opts(app.common_opts)
pecan.set_config({}, overwrite=True)
class TestRootController(API_FunctionalTest):
"""Test version listing on root URI."""
def test_get(self):
response = self.app.get('/')
self.assertEqual(response.status_int, 200)
json_body = jsonutils.loads(response.body)
versions = json_body.get('versions')
self.assertEqual(1, len(versions))
self.assertEqual(versions[0]["id"], "v1.0")
def _test_method_returns_405(self, method):
api_method = getattr(self.app, method)
response = api_method('/', expect_errors=True)
self.assertEqual(response.status_int, 405)
def test_post(self):
self._test_method_returns_405('post')
def test_put(self):
self._test_method_returns_405('put')
def test_patch(self):
self._test_method_returns_405('patch')
def test_delete(self):
self._test_method_returns_405('delete')
def test_head(self):
self._test_method_returns_405('head')
class TestV1Controller(API_FunctionalTest):
def test_get(self):
response = self.app.get('/v1.0')
self.assertEqual(response.status_int, 200)
json_body = jsonutils.loads(response.body)
version = json_body.get('version')
self.assertEqual(version, "1.0")
def _test_method_returns_405(self, method):
api_method = getattr(self.app, method)
response = api_method('/v1.0', expect_errors=True)
self.assertEqual(response.status_int, 405)
def test_post(self):
self._test_method_returns_405('post')
def test_put(self):
self._test_method_returns_405('put')
def test_patch(self):
self._test_method_returns_405('patch')
def test_delete(self):
self._test_method_returns_405('delete')
def test_head(self):
self._test_method_returns_405('head')
class TestErrors(API_FunctionalTest):
def test_404(self):
response = self.app.get('/fake_path', expect_errors=True)
self.assertEqual(response.status_int, 404)
def test_bad_method(self):
response = self.app.patch('/v1.0/123',
expect_errors=True)
self.assertEqual(response.status_int, 404)
class TestRequestID(API_FunctionalTest):
def test_request_id(self):
response = self.app.get('/')
self.assertIn('x-openstack-request-id', response.headers)
self.assertTrue(
response.headers['x-openstack-request-id'].startswith('req-'))
id_part = response.headers['x-openstack-request-id'].split('req-')[1]
self.assertTrue(uuidutils.is_uuid_like(id_part))
class TestKeystoneAuth(API_FunctionalTest):
def setUp(self):
        # resolve past API_FunctionalTest on purpose: its setUp forces 'noauth'
        super(API_FunctionalTest, self).setUp()
self.addCleanup(set_config, {}, overwrite=True)
cfg.CONF.register_opts(app.common_opts)
self.CONF = self.useFixture(fixture_config.Config()).conf
cfg.CONF.set_override('auth_strategy', 'keystone')
self.app = self._make_app()
def test_auth_enforced(self):
response = self.app.get('/', expect_errors=True)
self.assertEqual(response.status_int, 401)
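
The only functional difference between TestKeystoneAuth and the fixture above is the auth strategy override:

cfg.CONF.set_override('auth_strategy', 'noauth')    # requests pass straight through
cfg.CONF.set_override('auth_strategy', 'keystone')  # unauthenticated requests get 401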


@@ -0,0 +1,172 @@
# Copyright (c) 2015 Huawei Technologies Co., Ltd.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import pecan
from pecan.configuration import set_config
from pecan.testing import load_test_app
from oslo_config import cfg
from oslo_config import fixture as fixture_config
from oslo_serialization import jsonutils
from oslo_utils import uuidutils
from tricircle.cinder_apigw import app
from tricircle.tests import base
OPT_GROUP_NAME = 'keystone_authtoken'
cfg.CONF.import_group(OPT_GROUP_NAME, "keystonemiddleware.auth_token")
class Cinder_API_GW_FunctionalTest(base.TestCase):
def setUp(self):
super(Cinder_API_GW_FunctionalTest, self).setUp()
self.addCleanup(set_config, {}, overwrite=True)
cfg.CONF.register_opts(app.common_opts)
self.CONF = self.useFixture(fixture_config.Config()).conf
self.CONF.set_override('auth_strategy', 'noauth')
self.app = self._make_app()
def _make_app(self, enable_acl=False):
self.config = {
'app': {
'root':
'tricircle.cinder_apigw.controllers.root.RootController',
'modules': ['tricircle.cinder_apigw'],
'enable_acl': enable_acl,
'errors': {
400: '/error',
'__force_dict__': True
}
},
}
return load_test_app(self.config)
def tearDown(self):
super(Cinder_API_GW_FunctionalTest, self).tearDown()
cfg.CONF.unregister_opts(app.common_opts)
pecan.set_config({}, overwrite=True)
class TestRootController(Cinder_API_GW_FunctionalTest):
"""Test version listing on root URI."""
def test_get(self):
response = self.app.get('/')
self.assertEqual(response.status_int, 200)
json_body = jsonutils.loads(response.body)
versions = json_body.get('versions')
self.assertEqual(1, len(versions))
self.assertEqual(versions[0]["id"], "v2.0")
def _test_method_returns_405(self, method):
api_method = getattr(self.app, method)
response = api_method('/', expect_errors=True)
self.assertEqual(response.status_int, 405)
def test_post(self):
self._test_method_returns_405('post')
def test_put(self):
self._test_method_returns_405('put')
def test_patch(self):
self._test_method_returns_405('patch')
def test_delete(self):
self._test_method_returns_405('delete')
def test_head(self):
self._test_method_returns_405('head')
class TestV2Controller(Cinder_API_GW_FunctionalTest):
def test_get(self):
response = self.app.get('/v2/')
self.assertEqual(response.status_int, 200)
json_body = jsonutils.loads(response.body)
version = json_body.get('version')
self.assertEqual(version["id"], "v2.0")
def _test_method_returns_405(self, method):
api_method = getattr(self.app, method)
response = api_method('/v2/', expect_errors=True)
self.assertEqual(response.status_int, 405)
def test_post(self):
self._test_method_returns_405('post')
def test_put(self):
self._test_method_returns_405('put')
def test_patch(self):
self._test_method_returns_405('patch')
def test_delete(self):
self._test_method_returns_405('delete')
def test_head(self):
self._test_method_returns_405('head')
class TestErrors(Cinder_API_GW_FunctionalTest):
def test_404(self):
response = self.app.get('/assert_called_once', expect_errors=True)
self.assertEqual(response.status_int, 404)
def test_bad_method(self):
response = self.app.patch('/v2/123',
expect_errors=True)
self.assertEqual(response.status_int, 404)
class TestRequestID(Cinder_API_GW_FunctionalTest):
def test_request_id(self):
response = self.app.get('/')
self.assertIn('x-openstack-request-id', response.headers)
self.assertTrue(
response.headers['x-openstack-request-id'].startswith('req-'))
id_part = response.headers['x-openstack-request-id'].split('req-')[1]
self.assertTrue(uuidutils.is_uuid_like(id_part))
class TestKeystoneAuth(Cinder_API_GW_FunctionalTest):
def setUp(self):
        # resolve past Cinder_API_GW_FunctionalTest: its setUp forces 'noauth'
        super(Cinder_API_GW_FunctionalTest, self).setUp()
self.addCleanup(set_config, {}, overwrite=True)
cfg.CONF.register_opts(app.common_opts)
self.CONF = self.useFixture(fixture_config.Config()).conf
cfg.CONF.set_override('auth_strategy', 'keystone')
self.app = self._make_app()
def test_auth_enforced(self):
response = self.app.get('/', expect_errors=True)
self.assertEqual(response.status_int, 401)


@@ -0,0 +1,460 @@
# Copyright (c) 2015 Huawei Technologies Co., Ltd.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from mock import patch
import pecan
from pecan.configuration import set_config
from pecan.testing import load_test_app
from requests import Response
from oslo_config import cfg
from oslo_config import fixture as fixture_config
from oslo_serialization import jsonutils
from oslo_utils import uuidutils
from tricircle.cinder_apigw import app
from tricircle.common import constants as cons
from tricircle.common import context
from tricircle.common import httpclient as hclient
from tricircle.db import api as db_api
from tricircle.db import core
from tricircle.tests import base
OPT_GROUP_NAME = 'keystone_authtoken'
cfg.CONF.import_group(OPT_GROUP_NAME, "keystonemiddleware.auth_token")
FAKE_AZ = 'fake_az'
# in-memory volume store shared by the fake forwarder below
fake_volumes = []
def fake_volumes_forward_req(ctx, action, b_header, b_url, b_req_body):
    """Stand-in for hclient.forward_req, serving volumes from module memory."""
resp = Response()
resp.status_code = 404
if action == 'POST':
b_body = jsonutils.loads(b_req_body)
if b_body.get('volume'):
vol = b_body['volume']
vol['id'] = uuidutils.generate_uuid()
stored_vol = {
'volume': vol,
'url': b_url
}
fake_volumes.append(stored_vol)
resp.status_code = 202
vol_dict = {'volume': vol}
resp._content = jsonutils.dumps(vol_dict)
# resp.json = vol_dict
return resp
pos = b_url.rfind('/volumes')
op = ''
cmp_url = b_url
if pos > 0:
op = b_url[pos:]
cmp_url = b_url[:pos] + '/volumes'
op = op[len('/volumes'):]
if action == 'GET':
if op == '' or op == '/detail':
tenant_id = b_url[:pos]
pos2 = tenant_id.rfind('/')
if pos2 > 0:
tenant_id = tenant_id[(pos2 + 1):]
else:
resp.status_code = 404
return resp
ret_vols = []
for temp_vol in fake_volumes:
if temp_vol['url'] != cmp_url:
continue
if temp_vol['volume']['project_id'] == tenant_id:
ret_vols.append(temp_vol['volume'])
vol_dicts = {'volumes': ret_vols}
resp._content = jsonutils.dumps(vol_dicts)
resp.status_code = 200
return resp
elif op != '':
if op[0] == '/':
_id = op[1:]
for vol in fake_volumes:
if vol['volume']['id'] == _id:
vol_dict = {'volume': vol['volume']}
resp._content = jsonutils.dumps(vol_dict)
resp.status_code = 200
return resp
if action == 'DELETE':
if op != '':
if op[0] == '/':
_id = op[1:]
for vol in fake_volumes:
if vol['volume']['id'] == _id:
fake_volumes.remove(vol)
resp.status_code = 202
return resp
else:
resp.status_code = 404
return resp
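
A quick illustration (not part of the test module) of what the fake forwarder answers for a volume creation:

from oslo_serialization import jsonutils

body = jsonutils.dumps({'volume': {'name': 'vol_1',
                                   'project_id': 'my_tenant_id'}})
resp = fake_volumes_forward_req(None, 'POST', None,
                                'http://127.0.0.1:8774/v2/my_tenant_id/volumes',
                                body)
assert resp.status_code == 202                     # creation accepted
assert jsonutils.loads(resp._content)['volume']['name'] == 'vol_1'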
class CinderVolumeFunctionalTest(base.TestCase):
def setUp(self):
super(CinderVolumeFunctionalTest, self).setUp()
self.addCleanup(set_config, {}, overwrite=True)
cfg.CONF.register_opts(app.common_opts)
self.CONF = self.useFixture(fixture_config.Config()).conf
self.CONF.set_override('auth_strategy', 'noauth')
self.app = self._make_app()
self._init_db()
def _make_app(self, enable_acl=False):
self.config = {
'app': {
'root':
'tricircle.cinder_apigw.controllers.root.RootController',
'modules': ['tricircle.cinder_apigw'],
'enable_acl': enable_acl,
'errors': {
400: '/error',
'__force_dict__': True
}
},
}
return load_test_app(self.config)
def _init_db(self):
core.initialize()
core.ModelBase.metadata.create_all(core.get_engine())
# enforce foreign key constraint for sqlite
core.get_engine().execute('pragma foreign_keys=on')
self.context = context.Context()
pod_dict = {
'pod_id': 'fake_pod_id',
'pod_name': 'fake_pod_name',
'az_name': FAKE_AZ
}
config_dict = {
'service_id': 'fake_service_id',
'pod_id': 'fake_pod_id',
'service_type': cons.ST_CINDER,
'service_url': 'http://127.0.0.1:8774/v2/$(tenant_id)s'
}
pod_dict2 = {
'pod_id': 'fake_pod_id' + '2',
'pod_name': 'fake_pod_name' + '2',
'az_name': FAKE_AZ + '2'
}
config_dict2 = {
'service_id': 'fake_service_id' + '2',
'pod_id': 'fake_pod_id' + '2',
'service_type': cons.ST_CINDER,
'service_url': 'http://10.0.0.2:8774/v2/$(tenant_id)s'
}
top_pod = {
'pod_id': 'fake_top_pod_id',
'pod_name': 'RegionOne',
'az_name': ''
}
top_config = {
'service_id': 'fake_top_service_id',
'pod_id': 'fake_top_pod_id',
'service_type': cons.ST_CINDER,
'service_url': 'http://127.0.0.1:19998/v2/$(tenant_id)s'
}
db_api.create_pod(self.context, pod_dict)
db_api.create_pod(self.context, pod_dict2)
db_api.create_pod(self.context, top_pod)
db_api.create_pod_service_configuration(self.context, config_dict)
db_api.create_pod_service_configuration(self.context, config_dict2)
db_api.create_pod_service_configuration(self.context, top_config)
def tearDown(self):
super(CinderVolumeFunctionalTest, self).tearDown()
cfg.CONF.unregister_opts(app.common_opts)
pecan.set_config({}, overwrite=True)
core.ModelBase.metadata.drop_all(core.get_engine())
class TestVolumeController(CinderVolumeFunctionalTest):
@patch.object(hclient, 'forward_req',
new=fake_volumes_forward_req)
def test_post_error_case(self):
volumes = [
# no 'volume' parameter
{
"volume_xxx":
{
"name": 'vol_1',
"size": 10,
"project_id": 'my_tenant_id',
"metadata": {}
},
"expected_error": 400
},
# no AZ parameter
{
"volume":
{
"name": 'vol_1',
"size": 10,
"project_id": 'my_tenant_id',
"metadata": {}
},
"expected_error": 400
},
# incorrect AZ parameter
{
"volume":
{
"name": 'vol_1',
"availability_zone": FAKE_AZ + FAKE_AZ,
"size": 10,
"project_id": 'my_tenant_id',
"metadata": {}
},
"expected_error": 500
},
]
self._test_and_check(volumes, 'my_tenant_id')
@patch.object(hclient, 'forward_req',
new=fake_volumes_forward_req)
def test_post_one_and_get_one(self):
tenant1_volumes = [
# normal volume with correct parameter
{
"volume":
{
"name": 'vol_1',
"availability_zone": FAKE_AZ,
"source_volid": '',
"consistencygroup_id": '',
"snapshot_id": '',
"source_replica": '',
"size": 10,
"user_id": '',
"imageRef": '',
"attach_status": "detached",
"volume_type": '',
"project_id": 'my_tenant_id',
"metadata": {}
},
"expected_error": 202
},
# same tenant, multiple volumes
{
"volume":
{
"name": 'vol_2',
"availability_zone": FAKE_AZ,
"source_volid": '',
"consistencygroup_id": '',
"snapshot_id": '',
"source_replica": '',
"size": 20,
"user_id": '',
"imageRef": '',
"attach_status": "detached",
"volume_type": '',
"project_id": 'my_tenant_id',
"metadata": {}
},
"expected_error": 202
},
# same tenant, different az
{
"volume":
{
"name": 'vol_3',
"availability_zone": FAKE_AZ + '2',
"source_volid": '',
"consistencygroup_id": '',
"snapshot_id": '',
"source_replica": '',
"size": 20,
"user_id": '',
"imageRef": '',
"attach_status": "detached",
"volume_type": '',
"project_id": 'my_tenant_id',
"metadata": {}
},
"expected_error": 202
},
]
tenant2_volumes = [
# different tenant, same az
{
"volume":
{
"name": 'vol_4',
"availability_zone": FAKE_AZ,
"source_volid": '',
"consistencygroup_id": '',
"snapshot_id": '',
"source_replica": '',
"size": 20,
"user_id": '',
"imageRef": '',
"attach_status": "detached",
"volume_type": '',
"project_id": 'my_tenant_id_2',
"metadata": {}
},
"expected_error": 202
},
]
self._test_and_check(tenant1_volumes, 'my_tenant_id')
self._test_and_check(tenant2_volumes, 'my_tenant_id_2')
self._test_detail_check('my_tenant_id', 3)
self._test_detail_check('my_tenant_id_2', 1)
@patch.object(hclient, 'forward_req',
new=fake_volumes_forward_req)
def test_post_one_and_delete_one(self):
volumes = [
# normal volume with correct parameter
{
"volume":
{
"name": 'vol_1',
"availability_zone": FAKE_AZ,
"source_volid": '',
"consistencygroup_id": '',
"snapshot_id": '',
"source_replica": '',
"size": 10,
"user_id": '',
"imageRef": '',
"attach_status": "detached",
"volume_type": '',
"project_id": 'my_tenant_id',
"metadata": {}
},
"expected_error": 202
},
]
self._test_and_check_delete(volumes, 'my_tenant_id')
@patch.object(hclient, 'forward_req',
new=fake_volumes_forward_req)
def test_get(self):
response = self.app.get('/v2/my_tenant_id/volumes')
self.assertEqual(response.status_int, 200)
json_body = jsonutils.loads(response.body)
vols = json_body.get('volumes')
self.assertEqual(0, len(vols))
def _test_and_check(self, volumes, tenant_id):
for test_vol in volumes:
if test_vol.get('volume'):
response = self.app.post_json(
'/v2/' + tenant_id + '/volumes',
dict(volume=test_vol['volume']),
expect_errors=True)
elif test_vol.get('volume_xxx'):
response = self.app.post_json(
'/v2/' + tenant_id + '/volumes',
dict(volume=test_vol['volume_xxx']),
expect_errors=True)
else:
return
self.assertEqual(response.status_int,
test_vol['expected_error'])
if response.status_int == 202:
json_body = jsonutils.loads(response.body)
res_vol = json_body.get('volume')
query_resp = self.app.get(
'/v2/' + tenant_id + '/volumes/' + res_vol['id'])
self.assertEqual(query_resp.status_int, 200)
json_body = jsonutils.loads(query_resp.body)
query_vol = json_body.get('volume')
self.assertEqual(res_vol['id'], query_vol['id'])
self.assertEqual(res_vol['name'], query_vol['name'])
self.assertEqual(res_vol['availability_zone'],
query_vol['availability_zone'])
self.assertIn(res_vol['availability_zone'],
[FAKE_AZ, FAKE_AZ + '2'])
def _test_and_check_delete(self, volumes, tenant_id):
for test_vol in volumes:
if test_vol.get('volume'):
response = self.app.post_json(
'/v2/' + tenant_id + '/volumes',
dict(volume=test_vol['volume']),
expect_errors=True)
self.assertEqual(response.status_int,
test_vol['expected_error'])
if response.status_int == 202:
json_body = jsonutils.loads(response.body)
_id = json_body.get('volume')['id']
query_resp = self.app.get(
'/v2/' + tenant_id + '/volumes/' + _id)
self.assertEqual(query_resp.status_int, 200)
delete_resp = self.app.delete(
'/v2/' + tenant_id + '/volumes/' + _id)
self.assertEqual(delete_resp.status_int, 202)
def _test_detail_check(self, tenant_id, vol_size):
resp = self.app.get(
'/v2/' + tenant_id + '/volumes' + '/detail',
expect_errors=True)
self.assertEqual(resp.status_int, 200)
json_body = jsonutils.loads(resp.body)
ret_vols = json_body.get('volumes')
self.assertEqual(len(ret_vols), vol_size)


@@ -0,0 +1,173 @@
# Copyright (c) 2015 Huawei Technologies Co., Ltd.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import pecan
from pecan.configuration import set_config
from pecan.testing import load_test_app
from oslo_config import cfg
from oslo_config import fixture as fixture_config
from oslo_serialization import jsonutils
from oslo_utils import uuidutils
from tricircle.nova_apigw import app
from tricircle.tests import base
OPT_GROUP_NAME = 'keystone_authtoken'
cfg.CONF.import_group(OPT_GROUP_NAME, "keystonemiddleware.auth_token")
class Nova_API_GW_FunctionalTest(base.TestCase):
def setUp(self):
super(Nova_API_GW_FunctionalTest, self).setUp()
self.addCleanup(set_config, {}, overwrite=True)
cfg.CONF.register_opts(app.common_opts)
self.CONF = self.useFixture(fixture_config.Config()).conf
self.CONF.set_override('auth_strategy', 'noauth')
self.app = self._make_app()
def _make_app(self, enable_acl=False):
self.config = {
'app': {
'root': 'tricircle.nova_apigw.controllers.root.RootController',
'modules': ['tricircle.nova_apigw'],
'enable_acl': enable_acl,
'errors': {
400: '/error',
'__force_dict__': True
}
},
}
return load_test_app(self.config)
def tearDown(self):
super(Nova_API_GW_FunctionalTest, self).tearDown()
cfg.CONF.unregister_opts(app.common_opts)
pecan.set_config({}, overwrite=True)
class TestRootController(Nova_API_GW_FunctionalTest):
"""Test version listing on root URI."""
def test_get(self):
response = self.app.get('/')
self.assertEqual(response.status_int, 200)
json_body = jsonutils.loads(response.body)
versions = json_body.get('versions')
self.assertEqual(1, len(versions))
self.assertEqual(versions[0]["min_version"], "2.1")
self.assertEqual(versions[0]["id"], "v2.1")
def _test_method_returns_405(self, method):
api_method = getattr(self.app, method)
response = api_method('/', expect_errors=True)
self.assertEqual(response.status_int, 405)
def test_post(self):
self._test_method_returns_405('post')
def test_put(self):
self._test_method_returns_405('put')
def test_patch(self):
self._test_method_returns_405('patch')
def test_delete(self):
self._test_method_returns_405('delete')
def test_head(self):
self._test_method_returns_405('head')
class TestV21Controller(Nova_API_GW_FunctionalTest):
def test_get(self):
response = self.app.get('/v2.1/')
self.assertEqual(response.status_int, 200)
json_body = jsonutils.loads(response.body)
version = json_body.get('version')
self.assertEqual(version["min_version"], "2.1")
self.assertEqual(version["id"], "v2.1")
def _test_method_returns_405(self, method):
api_method = getattr(self.app, method)
response = api_method('/v2.1', expect_errors=True)
self.assertEqual(response.status_int, 405)
def test_post(self):
self._test_method_returns_405('post')
def test_put(self):
self._test_method_returns_405('put')
def test_patch(self):
self._test_method_returns_405('patch')
def test_delete(self):
self._test_method_returns_405('delete')
def test_head(self):
self._test_method_returns_405('head')
class TestErrors(Nova_API_GW_FunctionalTest):
def test_404(self):
response = self.app.get('/assert_called_once', expect_errors=True)
self.assertEqual(response.status_int, 404)
def test_bad_method(self):
response = self.app.patch('/v2.1/123',
expect_errors=True)
self.assertEqual(response.status_int, 404)
class TestRequestID(Nova_API_GW_FunctionalTest):
def test_request_id(self):
response = self.app.get('/')
self.assertIn('x-openstack-request-id', response.headers)
self.assertTrue(
response.headers['x-openstack-request-id'].startswith('req-'))
id_part = response.headers['x-openstack-request-id'].split('req-')[1]
self.assertTrue(uuidutils.is_uuid_like(id_part))
class TestKeystoneAuth(Nova_API_GW_FunctionalTest):
def setUp(self):
        # resolve past Nova_API_GW_FunctionalTest: its setUp forces 'noauth'
        super(Nova_API_GW_FunctionalTest, self).setUp()
self.addCleanup(set_config, {}, overwrite=True)
cfg.CONF.register_opts(app.common_opts)
self.CONF = self.useFixture(fixture_config.Config()).conf
cfg.CONF.set_override('auth_strategy', 'keystone')
self.app = self._make_app()
def test_auth_enforced(self):
response = self.app.get('/', expect_errors=True)
self.assertEqual(response.status_int, 401)
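
Reconstructed from the assertions above, the version document the Nova gateway is expected to serve looks roughly like (abbreviated; fields not asserted here are omitted):

expected = {
    "versions": [{
        "id": "v2.1",
        "min_version": "2.1"
    }]
}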


@@ -0,0 +1,135 @@
# Copyright (c) 2015 Huawei Tech. Co., Ltd.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from mock import patch
import unittest
import pecan
from tricircle.api.controllers import pod
from tricircle.common import context
from tricircle.common import utils
from tricircle.db import core
from tricircle.db import models
class PodsControllerTest(unittest.TestCase):
def setUp(self):
core.initialize()
core.ModelBase.metadata.create_all(core.get_engine())
self.controller = pod.PodsController()
self.context = context.get_admin_context()
@patch.object(context, 'extract_context_from_environ')
def test_post_top_pod(self, mock_context):
mock_context.return_value = self.context
kw = {'pod': {'pod_name': 'TopPod', 'az_name': ''}}
pod_id = self.controller.post(**kw)['pod']['pod_id']
with self.context.session.begin():
pod = core.get_resource(self.context, models.Pod, pod_id)
self.assertEqual(pod['pod_name'], 'TopPod')
self.assertEqual(pod['az_name'], '')
pods = core.query_resource(self.context, models.Pod,
[{'key': 'pod_name',
'comparator': 'eq',
'value': 'TopPod'}], [])
self.assertEqual(len(pods), 1)
@patch.object(context, 'extract_context_from_environ')
def test_post_bottom_pod(self, mock_context):
mock_context.return_value = self.context
kw = {'pod': {'pod_name': 'BottomPod', 'az_name': 'TopAZ'}}
pod_id = self.controller.post(**kw)['pod']['pod_id']
with self.context.session.begin():
pod = core.get_resource(self.context, models.Pod, pod_id)
self.assertEqual(pod['pod_name'], 'BottomPod')
self.assertEqual(pod['az_name'], 'TopAZ')
pods = core.query_resource(self.context, models.Pod,
[{'key': 'pod_name',
'comparator': 'eq',
'value': 'BottomPod'}], [])
self.assertEqual(len(pods), 1)
ag_name = utils.get_ag_name('BottomPod')
aggregates = core.query_resource(self.context, models.Aggregate,
[{'key': 'name',
'comparator': 'eq',
'value': ag_name}], [])
self.assertEqual(len(aggregates), 1)
metadatas = core.query_resource(
self.context, models.AggregateMetadata,
[{'key': 'key', 'comparator': 'eq',
'value': 'availability_zone'},
{'key': 'aggregate_id', 'comparator': 'eq',
'value': aggregates[0]['id']}], [])
self.assertEqual(len(metadatas), 1)
self.assertEqual(metadatas[0]['value'], 'TopAZ')
@patch.object(context, 'extract_context_from_environ')
def test_get_one(self, mock_context):
mock_context.return_value = self.context
kw = {'pod': {'pod_name': 'TopPod', 'az_name': ''}}
pod_id = self.controller.post(**kw)['pod']['pod_id']
pod = self.controller.get_one(pod_id)
self.assertEqual(pod['pod']['pod_name'], 'TopPod')
self.assertEqual(pod['pod']['az_name'], '')
@patch.object(context, 'extract_context_from_environ')
def test_get_all(self, mock_context):
mock_context.return_value = self.context
kw1 = {'pod': {'pod_name': 'TopPod', 'az_name': ''}}
kw2 = {'pod': {'pod_name': 'BottomPod', 'az_name': 'TopAZ'}}
self.controller.post(**kw1)
self.controller.post(**kw2)
pods = self.controller.get_all()
actual = [(pod['pod_name'],
pod['az_name']) for pod in pods['pods']]
expect = [('TopPod', ''), ('BottomPod', 'TopAZ')]
self.assertItemsEqual(expect, actual)
@patch.object(pecan, 'response', new=mock.Mock)
@patch.object(context, 'extract_context_from_environ')
def test_delete(self, mock_context):
mock_context.return_value = self.context
kw = {'pod': {'pod_name': 'BottomPod', 'az_name': 'TopAZ'}}
pod_id = self.controller.post(**kw)['pod']['pod_id']
self.controller.delete(pod_id)
with self.context.session.begin():
pods = core.query_resource(self.context, models.Pod,
[{'key': 'pod_name',
'comparator': 'eq',
'value': 'BottomPod'}], [])
self.assertEqual(len(pods), 0)
ag_name = utils.get_ag_name('BottomPod')
aggregates = core.query_resource(self.context, models.Aggregate,
[{'key': 'name',
'comparator': 'eq',
'value': ag_name}], [])
self.assertEqual(len(aggregates), 0)
metadatas = core.query_resource(
self.context, models.AggregateMetadata,
[{'key': 'key', 'comparator': 'eq',
'value': 'availability_zone'},
{'key': 'value', 'comparator': 'eq',
'value': 'TopAZ'}], [])
self.assertEqual(len(metadatas), 0)
def tearDown(self):
core.ModelBase.metadata.drop_all(core.get_engine())
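
The filter structure passed to core.query_resource in these tests, pulled out for clarity (the trailing empty list mirrors the calls above):

filters = [{'key': 'pod_name', 'comparator': 'eq', 'value': 'TopPod'}]
pods = core.query_resource(self.context, models.Pod, filters, [])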

Some files were not shown because too many files have changed in this diff.