Init Patch for Stateless Architecture

Initial patch for the stateless architecture. The API, db and client modules
are borrowed from the master branch, while the dispatcher and proxy modules
are removed. The new OpenStack project template is also adopted.

Change-Id: I9f84cb029195cf26e9ad71ae3cbd69b2cc765bb2
zhiyuan_cai 2015-12-03 11:12:19 +08:00
parent 552c49474c
commit 16015ac91a
74 changed files with 3009 additions and 10064 deletions

88
.gitignore vendored

@@ -1,45 +1,53 @@
*.DS_Store
*.egg*
*.log
*.mo
*.pyc
*.swo
*.swp
*.sqlite
*.iml
*~
.autogenerated
*.py[cod]
# C extensions
*.so
# Packages
*.egg
*.egg-info
dist
build
eggs
parts
var
sdist
develop-eggs
.installed.cfg
lib
lib64
# Installer logs
pip-log.txt
# Unit test / coverage reports
.coverage
.nova-venv
.tox
nosetests.xml
.testrepository
.venv
# Translations
*.mo
# Mr Developer
.mr.developer.cfg
.project
.pydevproject
.ropeproject
.testrepository/
.tox
.idea
.venv
# Complexity
output/*.html
output/*/index.html
# Sphinx
doc/build
# pbr generates these
AUTHORS
Authors
build-stamp
build/*
CA/
ChangeLog
coverage.xml
cover/*
covhtml
dist/*
doc/source/api/*
doc/build/*
etc/nova/nova.conf.sample
instances
keeper
keys
local_settings.py
MANIFEST
nosetests.xml
nova/tests/cover/*
nova/vcsversion.py
tools/conf/nova.conf*
tools/lintstack.head.py
tools/pylint_exceptions
etc/nova/nova.conf.sample
# Editors
*~
.*.swp
.*sw?

7
.testr.conf Normal file

@@ -0,0 +1,7 @@
[DEFAULT]
test_command=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-60} \
${PYTHON:-python} -m subunit.run discover -t ./ . $LISTOPT $IDOPTION
test_id_option=--load-list $IDFILE
test_list_option=--list
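For reference, a minimal sketch of exercising this configuration with the `testr` CLI from the testrepository package; the exact workflow (for example running through tox) is an assumption not shown in this patch:
```
# Hypothetical local test run, assuming testrepository and subunit are installed
pip install testrepository python-subunit
testr init            # creates the .testrepository directory
testr run             # invokes the test_command defined in .testr.conf
testr run --parallel  # optional parallel run using the same discovery settings
```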

311
README.md

@@ -1,308 +1,9 @@
Tricircle
===============================
# Tricircle
Tricircle is a project for the [OpenStack cascading solution](https://wiki.openstack.org/wiki/OpenStack_cascading_solution), including the source code of the Nova Proxy, Cinder Proxy, Neutron L2/L3 Proxy, Glance sync manager and Ceilometer Proxy (not implemented yet).
(For the PoC source code, please switch to the ["poc"](https://github.com/openstack/tricircle/tree/poc) tag or the ["stable/fortest"](https://github.com/openstack/tricircle/tree/stable/fortest) branch.)
The project name "Tricircle" comes from a fractal. See the blog ["OpenStack cascading and fractal"](https://www.linkedin.com/today/post/article/20140729022031-23841540-openstack-cascading-and-fractal) for more information.
Tricircle is an OpenStack project that aims to deal with OpenStack deployments across multiple sites. It provides users with a single management view by exposing only one OpenStack instance on behalf of all the involved ones. It essentially serves as a communication bus between the central OpenStack instance and the other OpenStack instances that are called upon.
Important to know
-----------
* Only about 15k lines of code were developed for OpenStack cascading.
* The current source code, based on Juno, is for PoC only. Refactoring will be done continuously to reach the OpenStack acceptance standard.
* Neutron cascading uses the provider network feature, but Horizon doesn't support provider networks very well, so you have to use the Neutron CLI to create a network.
* L2 networking (VxLAN) across cascaded OpenStack instances is supported, but only point-to-point remote host IP tunneling is supported for now. L2 networking through an L2 gateway, to reduce population traffic and simplify the networking topology, will be developed in the near future.
* L3 networking across cascaded OpenStack instances sets up a tunneling network for the piggybacked data path, using GRE tunneling over extra_route to bridge the routers in different cascaded OpenStack instances. Therefore, a local L2 network (VLAN, VxLAN) in one cascaded OpenStack can reach an L2 network (VLAN, VxLAN) located in another cascaded OpenStack.
* Glance cascading uses the Glance V2 API. Only the CLI/python client supports the V2 API; Horizon doesn't support that version. Image management should therefore be done through the CLI, using V2 only; otherwise Glance cascading cannot work properly.
* Glance cascading is not used by default, i.e., a global Glance is used by default. If Glance cascading is required, extra configuration is needed.
Key modules
-----------
* Nova proxy
The hypervisor driver for Nova running on the Nova-Compute node. It transfers VM operations to the cascaded Nova and is also responsible for attaching volumes and networks to VMs in the cascaded OpenStack.
* Cinder proxy
The Cinder-Volume driver for Cinder running on the Cinder-Volume node. It transfers volume operations to the cascaded Cinder.
* Neutron proxy
Includes the L2 proxy and L3 proxy, playing a role similar to the OVS-Agent/L3-Agent. They complete L2/L3 networking in the cascaded OpenStack, including cross-OpenStack networking.
* Glance sync
Synchronizes images between the cascading OpenStack and the policy-determined cascaded OpenStacks.
Patches required
------------------
* Juno-Patches
Patches for the OpenStack Juno version, including patches for the cascading level and the cascaded level.
Feature Supported
------------------
* Nova cascading
Launch/Reboot/Terminate/Resize/Rescue/Pause/Un-pause/Suspend/Resume/VNC Console/Attach Volume/Detach Volume/Snapshot/KeyPair/Flavor
* Cinder cascading
Create Volume/Delete Volume/Attach Volume/Detach Volume/Extend Volume/Create Snapshot/Delete Snapshot/List Snapshots/Create Volume from Snapshot/Create Volume from Image/Create Volume from Volume (Clone)/Create Image from Volume
* Neutron cascading
Network/Subnet/Port/Router. Including L2/L3 networking across cascaded OpenStacks
* Glance cascading
Only the V2 API is supported. Create Image/Delete Image/List Image/Update Image/Upload Image/Patch Location/VM Snapshot/Image Synchronization
Known Issues
------------------
* Launching a VM only supports "boot from image", "boot from volume" and "boot from snapshot"
* Only newly created flavors are synchronized to the cascaded OpenStack; synchronization of flavor updates to the cascaded OpenStack is not supported yet.
Installation without Glance cascading
------------
* **Prerequisites**
- The minimal installation requires three OpenStack Juno installations to experience the cross-cascaded-OpenStack L2/L3 functions. The minimal setup needs four nodes; see the following picture:
![minimal_setup](./minimal_setup.png?raw=true)
- The cascading OpenStack needs two nodes, Node1 and Node2. Add Node1 to AZ1 and Node2 to AZ2 in the cascading OpenStack for both Nova and Cinder.
- It's recommended to name the cascading OpenStack region "Cascading" or "Region1".
- Node1 is an all-in-one OpenStack installation with Keystone and Glance. Node1 also functions as the Nova-Compute/Cinder-Volume/Neutron OVS-Agent/L3-Agent node and will be converted into the proxy node for AZ1.
- Node2 is a general Nova-Compute node with Cinder-Volume and Neutron OVS-Agent/L3-Agent functions installed, and will be converted into the proxy node for AZ2.
- The all-in-one cascaded OpenStack installed on Node3 functions as AZ1. Node3 will also function as the Nova-Compute/Cinder-Volume/Neutron OVS-Agent/L3-Agent node in order to be able to create VMs/volumes/networking in AZ1. Glance only needs to be installed if Glance cascading is required. Add Node3 to AZ1 in the cascaded OpenStack for both Nova and Cinder. It's recommended to name the cascaded OpenStack region for Node3 "AZ1".
- The all-in-one cascaded OpenStack installed on Node4 functions as AZ2. Node4 will also function as the Nova-Compute/Cinder-Volume/Neutron OVS-Agent/L3-Agent node in order to be able to create VMs/volumes/networking in AZ2. Glance only needs to be installed if Glance cascading is required. Add Node4 to AZ2 in the cascaded OpenStack for both Nova and Cinder. It's recommended to name the cascaded OpenStack region for Node4 "AZ2".
Make sure the time of these four nodes is synchronized. Because the Nova Proxy/Cinder Proxy/Neutron L2/L3 Proxy query the cascaded OpenStack using timestamps, an incorrect time will cause VM/Volume/Port status synchronization to not work properly.
Register all service endpoints in the globally shared Keystone.
Make sure the three OpenStack instances can work independently before cascading is introduced, e.g., you can boot a VM with a network, create a volume and attach a volume in each OpenStack. After verifying that the three OpenStack instances work independently, clean up all created VM/Volume/Network resources.
After all OpenStack installations are ready, it's time to install the Juno patches for both the cascading and the cascaded OpenStack, and then replace the Nova-Compute/Cinder-Volume/Neutron OVS-Agent/L3-Agent with the Nova Proxy / Cinder Proxy / Neutron L2/L3 Proxy.
* **Juno patches installation step by step**
1. Node1
- Patches for Neutron - neutron_cascading_l3_patch
This patch enables L3 routing across cascaded OpenStack instances over extra route. It also configures the mapping between a cascaded OpenStack and its on-link external network, which is used for the GRE tunneling data path.
Navigate to the folder
```
cd ./tricircle/juno-patches/neutron/neutron_cascading_l3_patch
```
Follow the README.md instructions to install the patch.
2. Node3
- Patches for Cinder - timestamp-query-patch
This patch enables the cascaded Cinder to execute queries with a timestamp filter instead of returning all objects.
Navigate to the folder
```
cd ./tricircle/juno-patches/cinder/timestamp-query-patch
```
Follow the README.md instructions to install the patch.
- Patches for Neutron - neutron_timestamp_cascaded_patch
This patch enables Neutron to provide timestamp-based port queries.
Navigate to the folder
```
cd ./tricircle/juno-patches/neutron/neutron_timestamp_cascaded_patch
```
Follow the README.md instructions to install the patch.
- Patches for Neutron - neutron_cascaded_l3_patch
This patch enables L3 routing across cascaded OpenStack instances over extra route.
Navigate to the folder
```
cd ./tricircle/juno-patches/neutron/neutron_cascaded_l3_patch
```
Follow the README.md instructions to install the patch.
3. Node4
- Patches for Cinder - timestamp-query-patch
This patch enables the cascaded Cinder to execute queries with a timestamp filter instead of returning all objects.
Navigate to the folder
```
cd ./tricircle/juno-patches/cinder/timestamp-query-patch
```
Follow the README.md instructions to install the patch.
- Patches for Neutron - neutron_timestamp_cascaded_patch
This patch enables Neutron to provide timestamp-based port queries.
Navigate to the folder
```
cd ./tricircle/juno-patches/neutron/neutron_timestamp_cascaded_patch
```
Follow the README.md instructions to install the patch.
- Patches for Neutron - neutron_cascaded_l3_patch
This patch enables L3 routing across cascaded OpenStack instances over extra route.
Navigate to the folder
```
cd ./tricircle/juno-patches/neutron/neutron_cascaded_l3_patch
```
Follow the README.md instructions to install the patch.
* **Proxy installation step by step**
1. Node1
- Nova proxy
Navigate to the folder
```
cd ./tricircle/novaproxy
```
Follow the README.md instructions to install the proxy. Please change the configuration values in install.sh according to your environment settings.
- Cinder proxy
Navigate to the folder
```
cd ./tricircle/cinderproxy
```
Follow the README.md instructions to install the proxy. Please change the configuration values in install.sh according to your environment settings.
- L2 proxy
Navigate to the folder
```
cd ./tricircle/neutronproxy/l2-proxy
```
Follow the README.md instructions to install the proxy. Please change the configuration values in install.sh according to your environment settings.
- L3 proxy
Navigate to the folder
```
cd ./tricircle/neutronproxy/l3-proxy
```
Follow the README.md instructions to install the proxy. Please change the configuration values in install.sh according to your environment settings.
2. Node2
- Nova proxy
Navigate to the folder
```
cd ./tricircle/novaproxy
```
Follow the README.md instructions to install the proxy. Please change the configuration values in install.sh according to your environment settings.
- Cinder proxy
Navigate to the folder
```
cd ./tricircle/cinderproxy
```
Follow the README.md instructions to install the proxy. Please change the configuration values in install.sh according to your environment settings.
- L2 proxy
Navigate to the folder
```
cd ./tricircle/neutronproxy/l2-proxy
```
Follow the README.md instructions to install the proxy. Please change the configuration values in install.sh according to your environment settings.
- L3 proxy
Navigate to the folder
```
cd ./tricircle/neutronproxy/l3-proxy
```
Follow the README.md instructions to install the proxy. Please change the configuration values in install.sh according to your environment settings.
Upgrade to Glance cascading
------------
* **Prerequisites**
- To experience the Glance cascading feature, you can simply upgrade the current installation in a few steps; see the following picture:
![minimal_setup_with_glance_cascading](./minimal_setup_with_glance_cascading.png?raw=true)
1. Node1
- Patches for Glance - glance_location_patch
This patch enables Glance to handle an HTTP URL location. The patch also inserts the sync manager into the chain of responsibility.
Navigate to the folder
```
cd ./tricircle/juno-patches/glance/glance_location_patch
```
Follow the README.md instructions to install the patch.
- Patches for Glance - glance_store_patch
This patch enables Glance to handle an HTTP URL location.
Navigate to the folder
```
cd ./tricircle/juno-patches/glance_store/glance_store_patch
```
Follow the README.md instructions to install the patch.
- Sync Manager
Navigate to the folder
```
cd ./tricircle/glancesync
```
Modify the storage scheme configuration for the cascading and cascaded levels
```
vi ./tricircle/glancesync/etc/glance/glance_store.yaml
```
Follow the README.md instructions to install the sync manager. Please change the configuration values in install.sh according to your environment settings, especially the following options:
sync_enabled=True
sync_server_port=9595
sync_server_host=127.0.0.1
2. Node3
- Glance Installation
Please install Glance on Node3 as the cascaded Glance.
Register the service endpoint in Keystone.
Change the Glance endpoint in nova.conf and cinder.conf to the Glance located on Node3.
3. Node4
- Glance Installation
Please install Glance on Node4 as the cascaded Glance.
Register the service endpoint in Keystone.
Change the Glance endpoint in nova.conf and cinder.conf to the Glance located on Node4.
4. Configuration
- Change the Nova proxy configuration on Node1, setting "cascaded_glance_flag" to True and adding the "cascaded_glance_url" of Node3, according to the Nova-proxy README.md instructions.
- Change the Cinder proxy configuration on Node1, setting "glance_cascading_flag" to True and adding the "cascaded_glance_url" of Node3, according to the Nova-proxy README.md instructions.
- Change the Nova proxy configuration on Node2, setting "cascaded_glance_flag" to True and adding the "cascaded_glance_url" of Node4, according to the Nova-proxy README.md instructions.
- Change the Cinder proxy configuration on Node2, setting "glance_cascading_flag" to True and adding the "cascaded_glance_url" of Node4, according to the Nova-proxy README.md instructions.
5. Experience Glance cascading
- Restart all related services.
- Use the Glance V2 API to create an image, upload image data, or patch a location for an image. The image should be able to sync to the distributed Glance if sync_enabled is set to True.
- Syncing an image only on first use, rather than at upload or patch-location time, is still in the testing phase and may not work properly.
- Create VMs/Volumes/etc. from Horizon.
## Project Resources
- Project status, bugs, and blueprints are tracked on [Launchpad](https://launchpad.net/tricircle)
- Additional resources are linked from the project [Wiki](https://wiki.openstack.org/wiki/Tricircle) page

61
cmd/api.py Normal file

@@ -0,0 +1,61 @@
# Copyright 2015 Huawei Technologies Co., Ltd.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# Much of this module is based on the work of the Ironic team
# see http://git.openstack.org/cgit/openstack/ironic/tree/ironic/cmd/api.py
import logging as std_logging
import sys
from oslo_config import cfg
from oslo_log import log as logging
from oslo_service import wsgi
from tricircle.api import app
from tricircle.common import config
from tricircle.common.i18n import _LI
from tricircle.common.i18n import _LW
CONF = cfg.CONF
LOG = logging.getLogger(__name__)
def main():
config.init(app.common_opts, sys.argv[1:])
application = app.setup_app()
host = CONF.bind_host
port = CONF.bind_port
workers = CONF.api_workers
if workers < 1:
LOG.warning(_LW("Wrong worker number, worker = %(workers)s"), workers)
workers = 1
LOG.info(_LI("Server on http://%(host)s:%(port)s with %(workers)s"),
{'host': host, 'port': port, 'workers': workers})
service = wsgi.Server(CONF, 'Tricircle', application, host, port)
app.serve(service, CONF, workers)
LOG.info(_LI("Configuration:"))
CONF.log_opt_values(LOG, std_logging.INFO)
app.wait()
if __name__ == '__main__':
main()
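Based on how devstack/plugin.sh launches the service later in this patch, the API process can be started by hand roughly as follows; the config file path is an assumption taken from the DevStack settings:
```
# Hypothetical manual start; --config-file is parsed by oslo.config via config.init()
python cmd/api.py --config-file /etc/tricircle/api.conf
```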

36
cmd/manage.py Normal file

@@ -0,0 +1,36 @@
# Copyright 2015 Huawei Technologies Co., Ltd.
# All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import sys
from oslo_config import cfg
from tricircle.db import core
from tricircle.db import migration_helpers
def main(argv=None, config_files=None):
core.initialize()
cfg.CONF(args=argv[2:],
project='tricircle',
default_config_files=config_files)
migration_helpers.find_migrate_repo()
migration_helpers.sync_repo(1)
if __name__ == '__main__':
config_file = sys.argv[1]
main(argv=sys.argv, config_files=[config_file])
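As devstack/plugin.sh shows further down, this script takes the config file as its first positional argument; a manual invocation would look roughly like this (the path is an assumption):
```
# Hypothetical manual DB sync: sys.argv[1] is read as the config file
python cmd/manage.py /etc/tricircle/api.conf
```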


@@ -1,247 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo.config import cfg
from oslo.utils import importutils
from oslo_log import log as logging
logger = logging.getLogger(__name__)
from nova.compute import compute_keystoneclient as hkc
from novaclient import client as novaclient
from novaclient import shell as novashell
try:
from swiftclient import client as swiftclient
except ImportError:
swiftclient = None
logger.info('swiftclient not available')
try:
from neutronclient.v2_0 import client as neutronclient
except ImportError:
neutronclient = None
logger.info('neutronclient not available')
try:
from cinderclient import client as cinderclient
except ImportError:
cinderclient = None
logger.info('cinderclient not available')
try:
from ceilometerclient.v2 import client as ceilometerclient
except ImportError:
ceilometerclient = None
logger.info('ceilometerclient not available')
cloud_opts = [
cfg.StrOpt('cloud_backend',
default=None,
help="Cloud module to use as a backend. Defaults to OpenStack.")
]
cfg.CONF.register_opts(cloud_opts)
class OpenStackClients(object):
'''
Convenience class to create and cache client instances.
'''
def __init__(self, context):
self.context = context
self._nova = {}
self._keystone = None
self._swift = None
self._neutron = None
self._cinder = None
self._ceilometer = None
@property
def auth_token(self):
# if there is no auth token in the context
# attempt to get one using the context username and password
return self.context.auth_token or self.keystone().auth_token
def keystone(self):
if self._keystone:
return self._keystone
self._keystone = hkc.KeystoneClient(self.context)
return self._keystone
def url_for(self, **kwargs):
return self.keystone().url_for(**kwargs)
def nova(self, service_type='compute'):
if service_type in self._nova:
return self._nova[service_type]
con = self.context
if self.auth_token is None:
logger.error("Nova connection failed, no auth_token!")
return None
computeshell = novashell.OpenStackComputeShell()
extensions = computeshell._discover_extensions("1.1")
args = {
'project_id': con.tenant,
'auth_url': con.auth_url,
'service_type': service_type,
'username': con.username,
'api_key': con.password,
'region_name':con.region_name,
'extensions': extensions
}
if con.password is not None:
if self.context.region_name is None:
management_url = self.url_for(service_type=service_type)
else:
management_url = self.url_for(
service_type=service_type,
attr='region',
filter_value=self.context.region_name)
else:
management_url = con.nova_url + '/' + con.tenant_id
client = novaclient.Client(2, **args)
client.client.auth_token = self.auth_token
client.client.management_url = management_url
self._nova[service_type] = client
return client
def swift(self):
if swiftclient is None:
return None
if self._swift:
return self._swift
con = self.context
if self.auth_token is None:
logger.error("Swift connection failed, no auth_token!")
return None
args = {
'auth_version': '2.0',
'tenant_name': con.tenant_id,
'user': con.username,
'key': None,
'authurl': None,
'preauthtoken': self.auth_token,
'preauthurl': self.url_for(service_type='object-store')
}
self._swift = swiftclient.Connection(**args)
return self._swift
def neutron(self):
if neutronclient is None:
return None
if self._neutron:
return self._neutron
con = self.context
if self.auth_token is None:
logger.error("Neutron connection failed, no auth_token!")
return None
if con.password is not None:
if self.context.region_name is None:
management_url = self.url_for(service_type='network')
else:
management_url = self.url_for(
service_type='network',
attr='region',
filter_value=self.context.region_name)
else:
management_url = con.neutron_url
args = {
'auth_url': con.auth_url,
'service_type': 'network',
'token': self.auth_token,
'endpoint_url': management_url
}
self._neutron = neutronclient.Client(**args)
return self._neutron
def cinder(self):
if cinderclient is None:
return self.nova('volume')
if self._cinder:
return self._cinder
con = self.context
if self.auth_token is None:
logger.error("Cinder connection failed, no auth_token!")
return None
args = {
'service_type': 'volume',
'auth_url': con.auth_url,
'project_id': con.tenant_id,
'username': None,
'api_key': None
}
self._cinder = cinderclient.Client('1', **args)
if con.password is not None:
if self.context.region_name is None:
management_url = self.url_for(service_type='volume')
else:
management_url = self.url_for(
service_type='volume',
attr='region',
filter_value=self.context.region_name)
else:
management_url = con.cinder_url + '/' + con.tenant_id
self._cinder.client.auth_token = self.auth_token
self._cinder.client.management_url = management_url
return self._cinder
def ceilometer(self):
if ceilometerclient is None:
return None
if self._ceilometer:
return self._ceilometer
if self.auth_token is None:
logger.error("Ceilometer connection failed, no auth_token!")
return None
con = self.context
args = {
'auth_url': con.auth_url,
'service_type': 'metering',
'project_id': con.tenant_id,
'token': lambda: self.auth_token,
'endpoint': self.url_for(service_type='metering'),
}
client = ceilometerclient.Client(**args)
self._ceilometer = client
return self._ceilometer
if cfg.CONF.cloud_backend:
cloud_backend_module = importutils.import_module(cfg.CONF.cloud_backend)
Clients = cloud_backend_module.Clients
else:
Clients = OpenStackClients
logger.debug('Using backend %s' % Clients)


@@ -1,199 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo.config import cfg
from nova.openstack.common import local
from nova import exception
from nova import wsgi
from oslo_context import context
from oslo.utils import importutils
from oslo.utils import uuidutils
def generate_request_id():
return 'req-' + uuidutils.generate_uuid()
class RequestContext(context.RequestContext):
"""
Stores information about the security context under which the user
accesses the system, as well as additional request information.
"""
def __init__(self, auth_token=None, username=None, password=None,
aws_creds=None, tenant=None,
tenant_id=None, auth_url=None, roles=None,
is_admin=False, region_name=None,
nova_url=None, cinder_url=None, neutron_url=None,
read_only=False, show_deleted=False,
owner_is_tenant=True, overwrite=True,
trust_id=None, trustor_user_id=None,
**kwargs):
"""
:param overwrite: Set to False to ensure that the greenthread local
copy of the index is not overwritten.
:param kwargs: Extra arguments that might be present, but we ignore
because they possibly came in from older rpc messages.
"""
super(RequestContext, self).__init__(auth_token=auth_token,
user=username, tenant=tenant,
is_admin=is_admin,
read_only=read_only,
show_deleted=show_deleted,
request_id='unused')
self.username = username
self.password = password
self.aws_creds = aws_creds
self.tenant_id = tenant_id
self.auth_url = auth_url
self.roles = roles or []
self.owner_is_tenant = owner_is_tenant
if overwrite or not hasattr(local.store, 'context'):
self.update_store()
self._session = None
self.trust_id = trust_id
self.trustor_user_id = trustor_user_id
self.nova_url = nova_url
self.cinder_url = cinder_url
self.neutron_url = neutron_url
self.region_name = region_name
def update_store(self):
local.store.context = self
def to_dict(self):
return {'auth_token': self.auth_token,
'username': self.username,
'password': self.password,
'aws_creds': self.aws_creds,
'tenant': self.tenant,
'tenant_id': self.tenant_id,
'trust_id': self.trust_id,
'trustor_user_id': self.trustor_user_id,
'auth_url': self.auth_url,
'roles': self.roles,
'is_admin': self.is_admin}
@classmethod
def from_dict(cls, values):
return cls(**values)
@property
def owner(self):
"""Return the owner to correlate with an image."""
return self.tenant if self.owner_is_tenant else self.user
def get_admin_context(read_deleted="no"):
return RequestContext(is_admin=True)
class ContextMiddleware(wsgi.Middleware):
opts = [cfg.BoolOpt('owner_is_tenant', default=True),
cfg.StrOpt('admin_role', default='admin')]
def __init__(self, app, conf, **local_conf):
cfg.CONF.register_opts(self.opts)
# Determine the context class to use
self.ctxcls = RequestContext
if 'context_class' in local_conf:
self.ctxcls = importutils.import_class(local_conf['context_class'])
super(ContextMiddleware, self).__init__(app)
def make_context(self, *args, **kwargs):
"""
Create a context with the given arguments.
"""
kwargs.setdefault('owner_is_tenant', cfg.CONF.owner_is_tenant)
return self.ctxcls(*args, **kwargs)
def process_request(self, req):
"""
Extract any authentication information in the request and
construct an appropriate context from it.
A few scenarios exist:
1. If X-Auth-Token is passed in, then consult TENANT and ROLE headers
to determine permissions.
2. An X-Auth-Token was passed in, but the Identity-Status is not
confirmed. For now, just raising a NotAuthenticated exception.
3. X-Auth-Token is omitted. If we were using Keystone, then the
tokenauth middleware would have rejected the request, so we must be
using NoAuth. In that case, assume that is_admin=True.
"""
headers = req.headers
try:
"""
This sets the username/password to the admin user because you
need this information in order to perform token authentication.
The real 'username' is the 'tenant'.
We should also check here to see if X-Auth-Token is not set and
in that case we should assign the user/pass directly as the real
username/password and token as None. 'tenant' should still be
the username.
"""
username = None
password = None
aws_creds = None
if headers.get('X-Auth-User') is not None:
username = headers.get('X-Auth-User')
password = headers.get('X-Auth-Key')
elif headers.get('X-Auth-EC2-Creds') is not None:
aws_creds = headers.get('X-Auth-EC2-Creds')
token = headers.get('X-Auth-Token')
tenant = headers.get('X-Tenant-Name')
tenant_id = headers.get('X-Tenant-Id')
auth_url = headers.get('X-Auth-Url')
roles = headers.get('X-Roles')
if roles is not None:
roles = roles.split(',')
except Exception:
raise exception.NotAuthenticated()
req.context = self.make_context(auth_token=token,
tenant=tenant, tenant_id=tenant_id,
aws_creds=aws_creds,
username=username,
password=password,
auth_url=auth_url, roles=roles,
is_admin=True)
def ContextMiddleware_filter_factory(global_conf, **local_conf):
"""
Factory method for paste.deploy
"""
conf = global_conf.copy()
conf.update(local_conf)
def filter(app):
return ContextMiddleware(app, conf)
return filter


@@ -1,315 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_context import context
from nova import exception
import eventlet
from keystoneclient.v2_0 import client as kc
from keystoneclient.v3 import client as kc_v3
from oslo.config import cfg
from oslo.utils import importutils
from oslo_log import log as logging
logger = logging.getLogger('nova.compute.keystoneclient')
class KeystoneClient(object):
"""
Wrap keystone client so we can encapsulate logic used in resources
Note this is intended to be initialized from a resource on a per-session
basis, so the session context is passed in on initialization
Also note that a copy of this is created for every resource as self.keystone()
via the code in engine/client.py, so there should not be any need to
directly instantiate instances of this class inside resources themselves
"""
def __init__(self, context):
# We have to maintain two clients authenticated with keystone:
# - ec2 interface is v2.0 only
# - trusts is v3 only
# If a trust_id is specified in the context, we immediately
# authenticate so we can populate the context with a trust token
# otherwise, we delay client authentication until needed to avoid
# unnecessary calls to keystone.
#
# Note that when you obtain a token using a trust, it cannot be
# used to reauthenticate and get another token, so we have to
# get a new trust-token even if context.auth_token is set.
#
# - context.auth_url is expected to contain the v2.0 keystone endpoint
self.context = context
self._client_v2 = None
self._client_v3 = None
if self.context.trust_id:
# Create a connection to the v2 API, with the trust_id, this
# populates self.context.auth_token with a trust-scoped token
self._client_v2 = self._v2_client_init()
@property
def client_v3(self):
if not self._client_v3:
# Create connection to v3 API
self._client_v3 = self._v3_client_init()
return self._client_v3
@property
def client_v2(self):
if not self._client_v2:
self._client_v2 = self._v2_client_init()
return self._client_v2
def _v2_client_init(self):
kwargs = {
'auth_url': self.context.auth_url
}
auth_kwargs = {}
# Note try trust_id first, as we can't reuse auth_token in that case
if self.context.trust_id is not None:
# We got a trust_id, so we use the admin credentials
# to authenticate, then re-scope the token to the
# trust impersonating the trustor user.
# Note that this currently requires the trustor tenant_id
# to be passed to the authenticate(), unlike the v3 call
kwargs.update(self._service_admin_creds(api_version=2))
auth_kwargs['trust_id'] = self.context.trust_id
auth_kwargs['tenant_id'] = self.context.tenant_id
elif self.context.auth_token is not None:
kwargs['tenant_name'] = self.context.tenant
kwargs['token'] = self.context.auth_token
elif self.context.password is not None:
kwargs['username'] = self.context.username
kwargs['password'] = self.context.password
kwargs['tenant_name'] = self.context.tenant
kwargs['tenant_id'] = self.context.tenant_id
else:
logger.error("Keystone v2 API connection failed, no password or "
"auth_token!")
raise exception.AuthorizationFailure()
client_v2 = kc.Client(**kwargs)
client_v2.authenticate(**auth_kwargs)
# If we are authenticating with a trust auth_kwargs are set, so set
# the context auth_token with the re-scoped trust token
if auth_kwargs:
# Sanity check
if not client_v2.auth_ref.trust_scoped:
logger.error("v2 trust token re-scoping failed!")
raise exception.AuthorizationFailure()
# All OK so update the context with the token
self.context.auth_token = client_v2.auth_ref.auth_token
self.context.auth_url = kwargs.get('auth_url')
return client_v2
@staticmethod
def _service_admin_creds(api_version=2):
# Import auth_token to have keystone_authtoken settings setup.
importutils.import_module('keystoneclient.middleware.auth_token')
creds = {
'username': cfg.CONF.keystone_authtoken.admin_user,
'password': cfg.CONF.keystone_authtoken.admin_password,
}
if api_version >= 3:
creds['auth_url'] =\
cfg.CONF.keystone_authtoken.auth_uri.replace('v2.0', 'v3')
creds['project_name'] =\
cfg.CONF.keystone_authtoken.admin_tenant_name
else:
creds['auth_url'] = cfg.CONF.keystone_authtoken.auth_uri
creds['tenant_name'] =\
cfg.CONF.keystone_authtoken.admin_tenant_name
return creds
def _v3_client_init(self):
kwargs = {}
if self.context.auth_token is not None:
kwargs['project_name'] = self.context.tenant
kwargs['token'] = self.context.auth_token
kwargs['auth_url'] = self.context.auth_url.replace('v2.0', 'v3')
kwargs['endpoint'] = kwargs['auth_url']
elif self.context.trust_id is not None:
# We got a trust_id, so we use the admin credentials and get a
# Token back impersonating the trustor user
kwargs.update(self._service_admin_creds(api_version=3))
kwargs['trust_id'] = self.context.trust_id
elif self.context.password is not None:
kwargs['username'] = self.context.username
kwargs['password'] = self.context.password
kwargs['project_name'] = self.context.tenant
kwargs['project_id'] = self.context.tenant_id
kwargs['auth_url'] = self.context.auth_url.replace('v2.0', 'v3')
kwargs['endpoint'] = kwargs['auth_url']
else:
logger.error("Keystone v3 API connection failed, no password or "
"auth_token!")
raise exception.AuthorizationFailure()
client = kc_v3.Client(**kwargs)
# Have to explicitly authenticate() or client.auth_ref is None
client.authenticate()
return client
def create_trust_context(self):
"""
If cfg.CONF.deferred_auth_method is trusts, we create a
trust using the trustor identity in the current context, with the
trustee as the heat service user and return a context containing
the new trust_id
If deferred_auth_method != trusts, or the current context already
contains a trust_id, we do nothing and return the current context
"""
if self.context.trust_id:
return self.context
# We need the service admin user ID (not name), as the trustor user
# can't lookup the ID in keystoneclient unless they're admin
# workaround this by creating a temporary admin client connection
# then getting the user ID from the auth_ref
admin_creds = self._service_admin_creds()
admin_client = kc.Client(**admin_creds)
trustee_user_id = admin_client.auth_ref.user_id
trustor_user_id = self.client_v3.auth_ref.user_id
trustor_project_id = self.client_v3.auth_ref.project_id
roles = cfg.CONF.trusts_delegated_roles
trust = self.client_v3.trusts.create(trustor_user=trustor_user_id,
trustee_user=trustee_user_id,
project=trustor_project_id,
impersonation=True,
role_names=roles)
trust_context = context.RequestContext.from_dict(
self.context.to_dict())
trust_context.trust_id = trust.id
trust_context.trustor_user_id = trustor_user_id
return trust_context
def delete_trust(self, trust_id):
"""
Delete the specified trust.
"""
self.client_v3.trusts.delete(trust_id)
def create_stack_user(self, username, password=''):
"""
Create a user defined as part of a stack, either via template
or created internally by a resource. This user will be added to
the heat_stack_user_role as defined in the config
Returns the keystone ID of the resulting user
"""
if(len(username) > 64):
logger.warning("Truncating the username %s to the last 64 "
"characters." % username)
# get the last 64 characters of the username
username = username[-64:]
user = self.client_v2.users.create(username,
password,
'%s@heat-api.org' %
username,
tenant_id=self.context.tenant_id,
enabled=True)
# We add the new user to a special keystone role
# This role is designed to allow easier differentiation of the
# heat-generated "stack users" which will generally have credentials
# deployed on an instance (hence are implicitly untrusted)
roles = self.client_v2.roles.list()
stack_user_role = [r.id for r in roles
if r.name == cfg.CONF.heat_stack_user_role]
if len(stack_user_role) == 1:
role_id = stack_user_role[0]
logger.debug("Adding user %s to role %s" % (user.id, role_id))
self.client_v2.roles.add_user_role(user.id, role_id,
self.context.tenant_id)
else:
logger.error("Failed to add user %s to role %s, check role exists!"
% (username, cfg.CONF.heat_stack_user_role))
return user.id
def delete_stack_user(self, user_id):
user = self.client_v2.users.get(user_id)
# FIXME (shardy) : need to test, do we still need this retry logic?
# Copied from user.py, but seems like something we really shouldn't
# need to do, no bug reference in the original comment (below)...
# temporary hack to work around an openstack bug.
# seems you can't delete a user first time - you have to try
# a couple of times - go figure!
tmo = eventlet.Timeout(10)
status = 'WAITING'
reason = 'Timed out trying to delete user'
try:
while status == 'WAITING':
try:
user.delete()
status = 'DELETED'
except Exception as ce:
reason = str(ce)
logger.warning("Problem deleting user %s: %s" %
(user_id, reason))
eventlet.sleep(1)
except eventlet.Timeout as t:
if t is not tmo:
# not my timeout
raise
else:
status = 'TIMEDOUT'
finally:
tmo.cancel()
if status != 'DELETED':
raise exception.Error(reason)
def delete_ec2_keypair(self, user_id, accesskey):
self.client_v2.ec2.delete(user_id, accesskey)
def get_ec2_keypair(self, user_id):
# We make the assumption that each user will only have one
# ec2 keypair, it's not clear if AWS allow multiple AccessKey resources
# to be associated with a single User resource, but for simplicity
# we assume that here for now
cred = self.client_v2.ec2.list(user_id)
if len(cred) == 0:
return self.client_v2.ec2.create(user_id, self.context.tenant_id)
if len(cred) == 1:
return cred[0]
else:
logger.error("Unexpected number of ec2 credentials %s for %s" %
(len(cred), user_id))
def disable_stack_user(self, user_id):
# FIXME : This won't work with the v3 keystone API
self.client_v2.users.update_enabled(user_id, False)
def enable_stack_user(self, user_id):
# FIXME : This won't work with the v3 keystone API
self.client_v2.users.update_enabled(user_id, True)
def url_for(self, **kwargs):
return self.client_v2.service_catalog.url_for(**kwargs)
@property
def auth_token(self):
return self.client_v2.auth_token

File diff suppressed because it is too large


@@ -0,0 +1,49 @@
#
# Sample DevStack local.conf.
#
# This sample file is intended to be used for your typical Cascade DevStack Top
# environment that's running all of OpenStack on a single host. This can also
#
# No changes to this sample configuration are required for this to work.
#
[[local|localrc]]
DATABASE_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=password
ADMIN_PASSWORD=password
LOGFILE=/opt/stack/logs/stack.sh.log
VERBOSE=True
LOG_COLOR=True
SCREEN_LOGDIR=/opt/stack/logs
HOST_IP=172.16.10.10
FIXED_RANGE=10.0.0.0/24
NETWORK_GATEWAY=10.0.0.1
FIXED_NETWORK_SIZE=256
FLOATING_RANGE=10.100.100.160/24
Q_FLOATING_ALLOCATION_POOL=start=10.100.100.160,end=10.100.100.192
PUBLIC_NETWORK_GATEWAY=10.100.100.3
Q_ENABLE_TRICIRCLE=True
enable_plugin tricircle https://git.openstack.org/openstack/tricircle master
# Tricircle Services
enable_service t-api
# Use Neutron instead of nova-network
disable_service n-net
disable_service n-cpu
disable_service n-sch
enable_service q-svc
disable_service q-dhcp
disable_service q-l3
disable_service q-agt
disable_service c-api
disable_service c-vol
disable_service c-bak
disable_service c-sch
disable_service cinder
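A minimal sketch of using this sample with DevStack, assuming it is copied into the DevStack checkout as local.conf (the sample file name and paths below are assumptions):
```
# Hypothetical DevStack run using the sample above as local.conf
git clone https://git.openstack.org/openstack-dev/devstack
cp local.conf.sample devstack/local.conf   # assumed name of this sample file
cd devstack && ./stack.sh
```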

114
devstack/plugin.sh Normal file

@@ -0,0 +1,114 @@
# Devstack extras script to install Tricircle
# Test if any tricircle services are enabled
# is_tricircle_enabled
function is_tricircle_enabled {
[[ ,${ENABLED_SERVICES} =~ ,"t-" ]] && return 0
return 1
}
# create_tricircle_accounts() - Set up common required tricircle
# service accounts in keystone
# Project User Roles
# -------------------------------------------------------------------------
# $SERVICE_TENANT_NAME tricircle service
function create_tricircle_accounts {
if [[ "$ENABLED_SERVICES" =~ "t-api" ]]; then
create_service_user "tricircle"
if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
local tricircle_api=$(get_or_create_service "tricircle" \
"Cascading" "OpenStack Cascading Service")
get_or_create_endpoint $tricircle_api \
"$REGION_NAME" \
"$SERVICE_PROTOCOL://$TRICIRCLE_API_HOST:$TRICIRCLE_API_PORT/v1.0" \
"$SERVICE_PROTOCOL://$TRICIRCLE_API_HOST:$TRICIRCLE_API_PORT/v1.0" \
"$SERVICE_PROTOCOL://$TRICIRCLE_API_HOST:$TRICIRCLE_API_PORT/v1.0"
fi
fi
}
# create_tricircle_cache_dir() - Set up cache dir for tricircle
function create_tricircle_cache_dir {
# Delete existing dir
sudo rm -rf $TRICIRCLE_AUTH_CACHE_DIR
sudo mkdir -p $TRICIRCLE_AUTH_CACHE_DIR
sudo chown `whoami` $TRICIRCLE_AUTH_CACHE_DIR
}
function configure_tricircle_api {
if is_service_enabled t-api ; then
echo "Configuring Tricircle API"
touch $TRICIRCLE_API_CONF
iniset $TRICIRCLE_API_CONF DEFAULT debug $ENABLE_DEBUG_LOG_LEVEL
iniset $TRICIRCLE_API_CONF DEFAULT verbose True
iniset $TRICIRCLE_API_CONF DEFAULT use_syslog $SYSLOG
iniset $TRICIRCLE_API_CONF database connection `database_connection_url tricircle`
iniset $TRICIRCLE_API_CONF client admin_username admin
iniset $TRICIRCLE_API_CONF client admin_password $ADMIN_PASSWORD
iniset $TRICIRCLE_API_CONF client admin_tenant demo
iniset $TRICIRCLE_API_CONF client auto_refresh_endpoint True
iniset $TRICIRCLE_API_CONF client top_site_name $OS_REGION_NAME
iniset $TRICIRCLE_API_CONF oslo_concurrency lock_path $TRICIRCLE_STATE_PATH/lock
setup_colorized_logging $TRICIRCLE_API_CONF DEFAULT tenant_name
if is_service_enabled keystone; then
create_tricircle_cache_dir
# Configure auth token middleware
configure_auth_token_middleware $TRICIRCLE_API_CONF tricircle \
$TRICIRCLE_AUTH_CACHE_DIR
else
iniset $TRICIRCLE_API_CONF DEFAULT auth_strategy noauth
fi
fi
}
if [[ "$Q_ENABLE_TRICIRCLE" == "True" ]]; then
if [[ "$1" == "stack" && "$2" == "pre-install" ]]; then
echo_summary "Tricircle pre-install"
elif [[ "$1" == "stack" && "$2" == "install" ]]; then
echo_summary "Installing Tricircle"
git_clone $TRICIRCLE_REPO $TRICIRCLE_DIR $TRICIRCLE_BRANCH
elif [[ "$1" == "stack" && "$2" == "post-config" ]]; then
echo_summary "Configuring Tricircle"
configure_tricircle_api
echo export PYTHONPATH=\$PYTHONPATH:$TRICIRCLE_DIR >> $RC_DIR/.localrc.auto
recreate_database tricircle
python "$TRICIRCLE_DIR/cmd/manage.py" "$TRICIRCLE_API_CONF"
elif [[ "$1" == "stack" && "$2" == "extra" ]]; then
echo_summary "Initializing Tricircle Service"
if is_service_enabled t-api; then
create_tricircle_accounts
run_process t-api "python $TRICIRCLE_API --config-file $TRICIRCLE_API_CONF"
fi
fi
if [[ "$1" == "unstack" ]]; then
if is_service_enabled t-api; then
stop_process t-api
fi
fi
fi

21
devstack/settings Normal file

@@ -0,0 +1,21 @@
# Git information
TRICIRCLE_REPO=${TRICIRCLE_REPO:-https://git.openstack.org/cgit/openstack/tricircle/}
TRICIRCLE_DIR=$DEST/tricircle
TRICIRCLE_BRANCH=${TRICIRCLE_BRANCH:-master}
# common variables
TRICIRCLE_CONF_DIR=${TRICIRCLE_CONF_DIR:-/etc/tricircle}
TRICIRCLE_STATE_PATH=${TRICIRCLE_STATE_PATH:-/var/lib/tricircle}
# tricircle rest api
TRICIRCLE_API=$TRICIRCLE_DIR/cmd/api.py
TRICIRCLE_API_CONF=$TRICIRCLE_CONF_DIR/api.conf
TRICIRCLE_API_LISTEN_ADDRESS=${TRICIRCLE_API_LISTEN_ADDRESS:-0.0.0.0}
TRICIRCLE_API_HOST=${TRICIRCLE_API_HOST:-$SERVICE_HOST}
TRICIRCLE_API_PORT=${TRICIRCLE_API_PORT:-19999}
TRICIRCLE_API_PROTOCOL=${TRICIRCLE_API_PROTOCOL:-$SERVICE_PROTOCOL}
TRICIRCLE_AUTH_CACHE_DIR=${TRICIRCLE_AUTH_CACHE_DIR:-/var/cache/tricircle}
export PYTHONPATH=$PYTHONPATH:$TRICIRCLE_DIR

118
doc/source/api_v1.rst Normal file

@@ -0,0 +1,118 @@
================
Tricircle API v1
================
This API describes the ways of interacting with the Tricircle (Cascade) service via
the HTTP protocol, using Representational State Transfer (REST).
Application Root [/]
====================
Application Root provides links to all possible API methods for Tricircle. URLs
for other resources described below are relative to Application Root.
API v1 Root [/v1/]
==================
All API v1 URLs are relative to API v1 root.
Site [/sites/{site_id}]
=======================
A site represents a region in Keystone. When operating on a site, Tricircle
decides the correct endpoints to send requests to based on the region of the site.
Considering the two-layer architecture of Tricircle, we also have two kinds of
sites: top site and bottom site. A site has the following attributes:
- site_id
- site_name
- az_id
**site_id** is automatically generated when creating a site. **site_name** is
specified by the user but **MUST** match the region name registered in Keystone.
When creating a bottom site, Tricircle automatically creates a host aggregate
and assigns the new availability zone id to **az_id**. A top site doesn't need a
host aggregate, so **az_id** is left empty.
URL Parameters
--------------
- site_id: Site id
Models
------
::
{
"site_id": "302e02a6-523c-4a92-a8d1-4939b31a788c",
"site_name": "Site1",
"az_id": "az_Site1"
}
Retrieve Site List [GET]
------------------------
- URL: /sites
- Status: 200
- Returns: List of Sites
Response
::
{
"sites": [
{
"site_id": "f91ca3a5-d5c6-45d6-be4c-763f5a2c4aa3",
"site_name": "RegionOne",
"az_id": ""
},
{
"site_id": "302e02a6-523c-4a92-a8d1-4939b31a788c",
"site_name": "Site1",
"az_id": "az_Site1"
}
]
}
Retrieve a Single Site [GET]
----------------------------
- URL: /sites/site_id
- Status: 200
- Returns: Site
Response
::
{
"site": {
"site_id": "302e02a6-523c-4a92-a8d1-4939b31a788c",
"site_name": "Site1",
"az_id": "az_Site1"
}
}
Create a Site [POST]
--------------------
- URL: /sites
- Status: 201
- Returns: Created Site
Request (application/json)
.. csv-table::
:header: "Parameter", "Type", "Description"
name, string, name of the Site
top, bool, "indicate whether it's a top Site, optional, default false"
::
{
"name": "RegionOne"
"top": true
}
Response
::
{
"site": {
"site_id": "f91ca3a5-d5c6-45d6-be4c-763f5a2c4aa3",
"site_name": "RegionOne",
"az_id": ""
}
}
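For illustration only, a hedged sketch of calling these endpoints with curl, assuming the DevStack defaults from this patch (port 19999, /v1.0 prefix) and a valid Keystone token in $TOKEN:
```
# List sites (host/port taken from devstack/settings; adjust for your deployment)
curl -s -H "X-Auth-Token: $TOKEN" http://127.0.0.1:19999/v1.0/sites

# Create a bottom site named "Site1"; "top" defaults to false
curl -s -X POST -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \
     -d '{"name": "Site1"}' http://127.0.0.1:19999/v1.0/sites
```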

75
doc/source/conf.py Executable file

@@ -0,0 +1,75 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import sys
sys.path.insert(0, os.path.abspath('../..'))
# -- General configuration ----------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
'sphinx.ext.autodoc',
# 'sphinx.ext.intersphinx',
'oslosphinx'
]
# autodoc generation is a bit aggressive and a nuisance when doing heavy
# text edit cycles.
# execute "export SPHINX_DEBUG=1" in your terminal to disable
# The suffix of source filenames.
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'tricircle'
copyright = u'2015, OpenStack Foundation'
# If true, '()' will be appended to :func: etc. cross-reference text.
add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
add_module_names = True
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# -- Options for HTML output --------------------------------------------------
# The theme to use for HTML and HTML Help pages. Major themes that come with
# Sphinx are currently 'default' and 'sphinxdoc'.
# html_theme_path = ["."]
# html_theme = '_theme'
# html_static_path = ['static']
# Output file base name for HTML help builder.
htmlhelp_basename = '%sdoc' % project
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass
# [howto/manual]).
latex_documents = [
('index',
'%s.tex' % project,
u'%s Documentation' % project,
u'OpenStack Foundation', 'manual'),
]
# Example configuration for intersphinx: refer to the Python standard library.
# intersphinx_mapping = {'http://docs.python.org/': None}
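A sketch of building these docs locally, assuming Sphinx and oslosphinx are installed; the output path is an assumption:
```
# Hypothetical local docs build
pip install sphinx oslosphinx
sphinx-build -b html doc/source doc/build/html
```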

21
doc/source/index.rst Normal file

@@ -0,0 +1,21 @@
.. tricircle documentation master file, created by
sphinx-quickstart on Wed Dec 2 17:00:36 2015.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Welcome to tricircle's documentation!
========================================================
Contents:
.. toctree::
:maxdepth: 2
api_v1
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`

15
etc/config-generator.conf Normal file

@@ -0,0 +1,15 @@
[DEFAULT]
output_file = etc/api.conf.sample
wrap_width = 79
namespace = tricircle.api
namespace = tricircle.client
namespace = oslo.log
namespace = oslo.messaging
namespace = oslo.policy
namespace = oslo.service.periodic_task
namespace = oslo.service.service
namespace = oslo.service.sslutils
namespace = oslo.db
namespace = oslo.middleware
namespace = oslo.concurrency
namespace = keystonemiddleware.auth_token
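A minimal sketch of generating the sample config from this file with the standard oslo-config-generator CLI, assuming the tricircle option namespaces are exposed through the project's entry points:
```
# Writes etc/api.conf.sample, per output_file above
oslo-config-generator --config-file etc/config-generator.conf
```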


@@ -1,185 +0,0 @@
"""Implementation of an cascading image service that uses to sync the image
from cascading glance to the special cascaded glance.
"""
import logging
import os
import urlparse
from oslo.config import cfg
from nova.image import glance
from nova.image.sync import drivers as drivermgr
CONF = cfg.CONF
LOG = logging.getLogger(__name__)
glance_cascading_opt = [
cfg.StrOpt('image_copy_dest_location_url',
default='file:///var/lib/glance/images',
help=("The path cascaded image_data copy to."),
deprecated_opts=[cfg.DeprecatedOpt('dest_location_url',
group='DEFAULT')]),
cfg.StrOpt('image_copy_dest_host',
default='127.0.0.1',
help=("The host name where image_data copy to."),
deprecated_opts=[cfg.DeprecatedOpt('dest_host',
group='DEFAULT')]),
cfg.StrOpt('image_copy_dest_user',
default='glance',
help=("The user name of cascaded glance for copy."),
deprecated_opts=[cfg.DeprecatedOpt('dest_user',
group='DEFAULT')]),
cfg.StrOpt('image_copy_dest_password',
default='openstack',
help=("The passowrd of cascaded glance for copy."),
deprecated_opts=[cfg.DeprecatedOpt('dest_password',
group='DEFAULT')]),
cfg.StrOpt('image_copy_source_location_url',
default='file:///var/lib/glance/images',
help=("where the cascaded image data from"),
deprecated_opts=[cfg.DeprecatedOpt('source_location_url',
group='DEFAULT')]),
cfg.StrOpt('image_copy_source_host',
default='0.0.0.1',
help=("The host name where image_data copy from."),
deprecated_opts=[cfg.DeprecatedOpt('source_host',
group='DEFAULT')]),
cfg.StrOpt('image_copy_source_user',
default='glance',
help=("The user name of glance for copy."),
deprecated_opts=[cfg.DeprecatedOpt('source_user',
group='DEFAULT')]),
cfg.StrOpt('image_copy_source_password',
default='openstack',
help=("The passowrd of glance for copy."),
deprecated_opts=[cfg.DeprecatedOpt('source_password',
group='DEFAULT')]),
]
CONF.register_opts(glance_cascading_opt)
_V2_IMAGE_CREATE_PROPERTIES = ['container_format', 'disk_format', 'min_disk',
'min_ram', 'name', 'protected']
def get_adding_image_properties(image):
_tags = list(image.tags) or []
kwargs = {}
for key in _V2_IMAGE_CREATE_PROPERTIES:
try:
value = getattr(image, key, None)
if value and value != 'None':
kwargs[key] = value
except KeyError:
pass
if _tags:
kwargs['tags'] = _tags
return kwargs
def get_candidate_path(image, scheme='file'):
locations = image.locations or []
for loc in locations:
if loc['url'].startswith(scheme):
return loc['url'] if scheme != 'file' \
else loc['url'][len('file://'):]
return None
def get_copy_driver(scheme_key):
return drivermgr.get_store_driver(scheme_key)
def get_host_port(url):
if not url:
return None, None
pieces = urlparse.urlparse(url)
return pieces.netloc.split(":")[0], pieces.netloc.split(":")[1]
class GlanceCascadingService(object):
def __init__(self, cascading_client=None):
self._client = cascading_client or glance.GlanceClientWrapper()
def sync_image(self, context, cascaded_url, cascading_image):
cascaded_glance_url = cascaded_url
_host, _port = get_host_port(cascaded_glance_url)
_cascaded_client = glance.GlanceClientWrapper(context=context,
host=_host,
port=_port,
version=2)
image_meta = get_adding_image_properties(cascading_image)
cascaded_image = _cascaded_client.call(context, 2, 'create',
**image_meta)
image_id = cascading_image.id
cascaded_id = cascaded_image.id
candidate_path = get_candidate_path(cascading_image)
LOG.debug("the candidate path is %s." % (candidate_path))
# copy image
try:
image_loc = self._copy_data(image_id, cascaded_id, candidate_path)
except Exception as e:
LOG.exception(("copy image failed, reason=%s") % e)
raise
else:
if not image_loc:
LOG.exception(("copy image Exception, no cascaded_loc"))
try:
# patch loc to the cascaded image
csd_locs = [{'url': image_loc,
'metadata': {}
}]
_cascaded_client.call(context, 2, 'update', cascaded_id,
remove_props=None,
locations=csd_locs)
except Exception as e:
LOG.exception(("patch loc to cascaded image Exception, reason: %s"
% e))
raise
try:
# patch glance-loc to cascading image
csg_locs = cascading_image.locations
glance_loc = '%s/v2/images/%s' % (cascaded_glance_url,
cascaded_id)
csg_locs.append({'url': glance_loc,
'metadata': {'image_id': str(cascaded_id),
'action': 'upload'
}
})
self._client.call(context, 2, 'update', image_id,
remove_props=None, locations=csg_locs)
except Exception as e:
LOG.exception(("patch loc to cascading image Exception, reason: %s"
% e))
raise
return cascaded_id
@staticmethod
def _copy_data(cascading_id, cascaded_id, candidate_path):
source_pieces = urlparse.urlparse(CONF.image_copy_source_location_url)
dest_pieces = urlparse.urlparse(CONF.image_copy_dest_location_url)
source_scheme = source_pieces.scheme
dest_scheme = dest_pieces.scheme
_key = ('%s:%s' % (source_scheme, dest_scheme))
copy_driver = get_copy_driver(_key)
source_path = os.path.join(source_pieces.path, cascading_id)
dest_path = os.path.join(dest_pieces.path, cascaded_id)
source_location = {'host': CONF.image_copy_source_host,
'login_user': CONF.image_copy_source_user,
'login_password': CONF.image_copy_source_password,
'path': source_path
}
dest_location = {'host': CONF.image_copy_dest_host,
'login_user': CONF.image_copy_dest_user,
'login_password': CONF.image_copy_dest_password,
'path': dest_path
}
return copy_driver.copy_to(source_location,
dest_location,
candidate_path=candidate_path)
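For illustration only, a minimal caller-side sketch of the service defined above; ctx, cascaded_glance_url and cascading_image are assumptions standing in for the request context, the cascaded glance endpoint and the cascading image object that the surrounding nova code would supply:
service = GlanceCascadingService()
# sync_image creates the image in the cascaded glance, copies the bits and
# patches the locations on both sides, returning the cascaded image id.
cascaded_id = service.sync_image(ctx, cascaded_glance_url, cascading_image)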

View File

@ -1,6 +0,0 @@
from nova.exception import NovaException
from nova.i18n import _
class GlanceSyncException(NovaException):
msg_fmt = _("Sync image failed: %(reason)s")

View File

@ -1 +0,0 @@

View File

@ -1,12 +0,0 @@
from nova.image.sync.drivers import filesystem
_store_drivers_map = {
'file:file':filesystem.Store
}
def get_store_driver(scheme_key):
cls = _store_drivers_map.get(scheme_key)
return cls()
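A minimal sketch of how this registry might be used; 'file:file' is the only scheme pair defined above:
# Returns a filesystem.Store instance whose copy_to() performs the transfer.
copy_driver = get_store_driver('file:file')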

View File

@ -1,106 +0,0 @@
import logging
import sys
from oslo.config import cfg
import pxssh
import pexpect
from nova.i18n import _
from nova.image import exception
LOG = logging.getLogger(__name__)
CONF = cfg.CONF
sync_opt = [
cfg.IntOpt('scp_copy_timeout', default=3600,
help=_('Maximum wait time (in seconds) for the scp image '
'copy to complete.'),
deprecated_opts=[cfg.DeprecatedOpt('scp_copy_timeout',
group='DEFAULT')]),
]
CONF.register_opts(sync_opt, group='sync')
def _get_ssh(hostname, username, password):
s = pxssh.pxssh()
s.login(hostname, username, password, original_prompt='[#$>]')
s.logfile = sys.stdout
return s
class Store(object):
def copy_to(self, from_location, to_location, candidate_path=None):
from_store_loc = from_location
to_store_loc = to_location
LOG.debug(_('from_store_loc is: %s'), from_store_loc)
if from_store_loc['host'] == to_store_loc['host'] and \
from_store_loc['path'] == to_store_loc['path']:
LOG.info(_('The from_loc is the same as to_loc, no need to copy. The '
'host:path is %s:%s') % (from_store_loc['host'],
from_store_loc['path']))
return 'file://%s' % to_store_loc['path']
to_host = r"""{username}@{host}""".format(
username=to_store_loc['login_user'],
host=to_store_loc['host'])
to_path = r"""{to_host}:{path}""".format(to_host=to_host,
path=to_store_loc['path'])
copy_path = from_store_loc['path']
try:
from_ssh = _get_ssh(from_store_loc['host'],
from_store_loc['login_user'],
from_store_loc['login_password'])
except Exception:
msg = _('ssh login failed to %(user)s:%(passwd)s %(host)s' %
{'user': from_store_loc['login_user'],
'passwd': from_store_loc['login_password'],
'host': from_store_loc['host']
})
LOG.exception(msg)
raise exception.GlanceSyncException(reason=msg)
from_ssh.sendline('ls %s' % copy_path)
from_ssh.prompt()
if 'cannot access' in from_ssh.before or \
'No such file' in from_ssh.before:
if candidate_path:
from_ssh.sendline('ls %s' % candidate_path)
from_ssh.prompt()
if 'cannot access' not in from_ssh.before and \
'No such file' not in from_ssh.before:
copy_path = candidate_path
else:
msg = _("the image path for copy to is not exists, file copy"
"failed: path is %s" % copy_path)
LOG.exception(msg)
raise exception.GlanceSyncException(reason=msg)
from_ssh.sendline('scp -P 22 %s %s' % (copy_path, to_path))
while True:
scp_index = from_ssh.expect(['.yes/no.', '.assword:.',
pexpect.TIMEOUT])
if scp_index == 0:
from_ssh.sendline('yes')
from_ssh.prompt()
elif scp_index == 1:
from_ssh.sendline(to_store_loc['login_password'])
from_ssh.prompt(timeout=CONF.sync.scp_copy_timeout)
break
else:
msg = _("scp commond execute failed, with copy_path %s and "
"to_path %s" % (copy_path, to_path))
LOG.exception(msg)
raise exception.GlanceSyncException(reason=msg)
if from_ssh:
from_ssh.logout()
return 'file://%s' % to_store_loc['path']
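A hedged usage sketch of the driver above. The hosts, credentials and paths are placeholders; in the real flow these dicts are built by GlanceCascadingService._copy_data from the image_copy_* options:
# Placeholder values for illustration only.
src = {'host': '10.0.0.1', 'login_user': 'glance',
'login_password': 'secret', 'path': '/var/lib/glance/images/src-image-id'}
dst = {'host': '10.0.0.2', 'login_user': 'glance',
'login_password': 'secret', 'path': '/var/lib/glance/images/dst-image-id'}
# Returns a 'file://...' location URL pointing at the destination path.
location_url = Store().copy_to(src, dst, candidate_path=None)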

View File

@ -1,4 +0,0 @@
The Open vSwitch (OVS) Neutron plugin has been removed and replaced by ML2. You
must run the migration manually to upgrade to Juno.
See neutron/db/migration/migrate_to_ml2.py

View File

@ -1,237 +0,0 @@
# Copyright 2014, Huawei, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# @author: Haojie Jia, Huawei
from oslo.config import cfg
#from heat.openstack.common import importutils
#from heat.openstack.common import log as logging
from oslo.utils import importutils
from oslo_log import log as logging
logger = logging.getLogger(__name__)
from neutron.plugins.l2_proxy.agent import neutron_keystoneclient as hkc
from novaclient import client as novaclient
from novaclient import shell as novashell
try:
from swiftclient import client as swiftclient
except ImportError:
swiftclient = None
logger.info('swiftclient not available')
try:
from neutronclient.v2_0 import client as neutronclient
except ImportError:
neutronclient = None
logger.info('neutronclient not available')
try:
from cinderclient import client as cinderclient
except ImportError:
cinderclient = None
logger.info('cinderclient not available')
try:
from ceilometerclient.v2 import client as ceilometerclient
except ImportError:
ceilometerclient = None
logger.info('ceilometerclient not available')
cloud_opts = [
cfg.StrOpt('cloud_backend',
default=None,
help="Cloud module to use as a backend. Defaults to OpenStack.")
]
cfg.CONF.register_opts(cloud_opts)
class OpenStackClients(object):
'''
Convenience class to create and cache client instances.
'''
def __init__(self, context):
self.context = context
self._nova = {}
self._keystone = None
self._swift = None
self._neutron = None
self._cinder = None
self._ceilometer = None
@property
def auth_token(self):
# if there is no auth token in the context
# attempt to get one using the context username and password
return self.context.auth_token or self.keystone().auth_token
def keystone(self):
if self._keystone:
return self._keystone
self._keystone = hkc.KeystoneClient(self.context)
return self._keystone
def url_for(self, **kwargs):
return self.keystone().url_for(**kwargs)
def nova(self, service_type='compute'):
if service_type in self._nova:
return self._nova[service_type]
con = self.context
if self.auth_token is None:
logger.error("Nova connection failed, no auth_token!")
return None
computeshell = novashell.OpenStackComputeShell()
extensions = computeshell._discover_extensions("1.1")
args = {
'project_id': con.tenant_id,
'auth_url': con.auth_url,
'service_type': service_type,
'username': None,
'api_key': None,
'extensions': extensions
}
client = novaclient.Client(1.1, **args)
management_url = self.url_for(
service_type=service_type,
attr='region',
filter_value='RegionTwo')
client.client.auth_token = self.auth_token
client.client.management_url = management_url
# management_url = self.url_for(service_type=service_type,attr='region',filter_value='RegionTwo')
# client.client.auth_token = self.auth_token
# client.client.management_url = 'http://172.31.127.32:8774/v2/49a3d7c4bbb34a6f843ccc87bab844aa'
self._nova[service_type] = client
return client
def swift(self):
if swiftclient is None:
return None
if self._swift:
return self._swift
con = self.context
if self.auth_token is None:
logger.error("Swift connection failed, no auth_token!")
return None
args = {
'auth_version': '2.0',
'tenant_name': con.tenant_id,
'user': con.username,
'key': None,
'authurl': None,
'preauthtoken': self.auth_token,
'preauthurl': self.url_for(service_type='object-store')
}
self._swift = swiftclient.Connection(**args)
return self._swift
def neutron(self):
if neutronclient is None:
return None
if self._neutron:
return self._neutron
con = self.context
if self.auth_token is None:
logger.error("Neutron connection failed, no auth_token!")
return None
if self.context.region_name is None:
management_url = self.url_for(service_type='network')
else:
management_url = self.url_for(
service_type='network',
attr='region',
filter_value=self.context.region_name)
args = {
'auth_url': con.auth_url,
'service_type': 'network',
'token': self.auth_token,
'endpoint_url': management_url
}
self._neutron = neutronclient.Client(**args)
return self._neutron
def cinder(self):
if cinderclient is None:
return self.nova('volume')
if self._cinder:
return self._cinder
con = self.context
if self.auth_token is None:
logger.error("Cinder connection failed, no auth_token!")
return None
args = {
'service_type': 'volume',
'auth_url': con.auth_url,
'project_id': con.tenant_id,
'username': None,
'api_key': None
}
self._cinder = cinderclient.Client('1', **args)
management_url = self.url_for(service_type='volume')
self._cinder.client.auth_token = self.auth_token
self._cinder.client.management_url = management_url
return self._cinder
def ceilometer(self):
if ceilometerclient is None:
return None
if self._ceilometer:
return self._ceilometer
if self.auth_token is None:
logger.error("Ceilometer connection failed, no auth_token!")
return None
con = self.context
args = {
'auth_url': con.auth_url,
'service_type': 'metering',
'project_id': con.tenant_id,
'token': lambda: self.auth_token,
'endpoint': self.url_for(service_type='metering'),
}
client = ceilometerclient.Client(**args)
self._ceilometer = client
return self._ceilometer
if cfg.CONF.cloud_backend:
cloud_backend_module = importutils.import_module(cfg.CONF.cloud_backend)
Clients = cloud_backend_module.Clients
else:
Clients = OpenStackClients
logger.debug('Using backend %s' % Clients)
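A minimal sketch of driving the wrapper above; ctx is an assumed RequestContext carrying auth_url, tenant_id and an auth_token:
clients = OpenStackClients(ctx)
nova = clients.nova()        # cached per service_type
neutron = clients.neutron()  # returns None if neutronclient is not installed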

File diff suppressed because it is too large

View File

@ -1,317 +0,0 @@
# Copyright 2014, Huawei, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# @author: Haojie Jia, Huawei
from oslo_context import context
from neutron.common import exceptions as exception
import eventlet
from keystoneclient.v2_0 import client as kc
from keystoneclient.v3 import client as kc_v3
from oslo.config import cfg
from oslo.utils import importutils
from oslo_log import log as logging
logger = logging.getLogger(
'neutron.plugins.cascading_proxy_agent.keystoneclient')
class KeystoneClient(object):
"""
Wrap keystone client so we can encapsulate logic used in resources
Note this is intended to be initialized from a resource on a per-session
basis, so the session context is passed in on initialization
Also note that a copy of this is created for every resource as self.keystone()
via the code in engine/client.py, so there should not be any need to
directly instantiate instances of this class inside resources themselves
"""
def __init__(self, context):
# We have to maintain two clients authenticated with keystone:
# - ec2 interface is v2.0 only
# - trusts is v3 only
# If a trust_id is specified in the context, we immediately
# authenticate so we can populate the context with a trust token
# otherwise, we delay client authentication until needed to avoid
# unnecessary calls to keystone.
#
# Note that when you obtain a token using a trust, it cannot be
# used to reauthenticate and get another token, so we have to
# get a new trust-token even if context.auth_token is set.
#
# - context.auth_url is expected to contain the v2.0 keystone endpoint
self.context = context
self._client_v2 = None
self._client_v3 = None
if self.context.trust_id:
# Create a connection to the v2 API, with the trust_id, this
# populates self.context.auth_token with a trust-scoped token
self._client_v2 = self._v2_client_init()
@property
def client_v3(self):
if not self._client_v3:
# Create connection to v3 API
self._client_v3 = self._v3_client_init()
return self._client_v3
@property
def client_v2(self):
if not self._client_v2:
self._client_v2 = self._v2_client_init()
return self._client_v2
def _v2_client_init(self):
kwargs = {
'auth_url': self.context.auth_url
}
auth_kwargs = {}
# Note try trust_id first, as we can't reuse auth_token in that case
if self.context.trust_id is not None:
# We got a trust_id, so we use the admin credentials
# to authenticate, then re-scope the token to the
# trust impersonating the trustor user.
# Note that this currently requires the trustor tenant_id
# to be passed to the authenticate(), unlike the v3 call
kwargs.update(self._service_admin_creds(api_version=2))
auth_kwargs['trust_id'] = self.context.trust_id
auth_kwargs['tenant_id'] = self.context.tenant_id
elif self.context.auth_token is not None:
kwargs['tenant_name'] = self.context.tenant
kwargs['token'] = self.context.auth_token
elif self.context.password is not None:
kwargs['username'] = self.context.username
kwargs['password'] = self.context.password
kwargs['tenant_name'] = self.context.tenant
kwargs['tenant_id'] = self.context.tenant_id
else:
logger.error("Keystone v2 API connection failed, no password or "
"auth_token!")
raise exception.AuthorizationFailure()
client_v2 = kc.Client(**kwargs)
client_v2.authenticate(**auth_kwargs)
# If we are authenticating with a trust auth_kwargs are set, so set
# the context auth_token with the re-scoped trust token
if auth_kwargs:
# Sanity check
if not client_v2.auth_ref.trust_scoped:
logger.error("v2 trust token re-scoping failed!")
raise exception.AuthorizationFailure()
# All OK so update the context with the token
self.context.auth_token = client_v2.auth_ref.auth_token
self.context.auth_url = kwargs.get('auth_url')
return client_v2
@staticmethod
def _service_admin_creds(api_version=2):
# Import auth_token to have keystone_authtoken settings setup.
importutils.import_module('keystoneclient.middleware.auth_token')
creds = {
'username': cfg.CONF.keystone_authtoken.admin_user,
'password': cfg.CONF.keystone_authtoken.admin_password,
}
if api_version >= 3:
creds['auth_url'] =\
cfg.CONF.keystone_authtoken.auth_uri.replace('v2.0', 'v3')
creds['project_name'] =\
cfg.CONF.keystone_authtoken.admin_tenant_name
else:
creds['auth_url'] = cfg.CONF.keystone_authtoken.auth_uri
creds['tenant_name'] =\
cfg.CONF.keystone_authtoken.admin_tenant_name
return creds
def _v3_client_init(self):
kwargs = {}
if self.context.auth_token is not None:
kwargs['project_name'] = self.context.tenant
kwargs['token'] = self.context.auth_token
kwargs['auth_url'] = self.context.auth_url.replace('v2.0', 'v3')
kwargs['endpoint'] = kwargs['auth_url']
elif self.context.trust_id is not None:
# We got a trust_id, so we use the admin credentials and get a
# Token back impersonating the trustor user
kwargs.update(self._service_admin_creds(api_version=3))
kwargs['trust_id'] = self.context.trust_id
elif self.context.password is not None:
kwargs['username'] = self.context.username
kwargs['password'] = self.context.password
kwargs['project_name'] = self.context.tenant
kwargs['project_id'] = self.context.tenant_id
kwargs['auth_url'] = self.context.auth_url.replace('v2.0', 'v3')
kwargs['endpoint'] = kwargs['auth_url']
else:
logger.error("Keystone v3 API connection failed, no password or "
"auth_token!")
raise exception.AuthorizationFailure()
client = kc_v3.Client(**kwargs)
# Have to explicitly authenticate() or client.auth_ref is None
client.authenticate()
return client
def create_trust_context(self):
"""
If cfg.CONF.deferred_auth_method is trusts, we create a
trust using the trustor identity in the current context, with the
trustee as the heat service user and return a context containing
the new trust_id
If deferred_auth_method != trusts, or the current context already
contains a trust_id, we do nothing and return the current context
"""
if self.context.trust_id:
return self.context
# We need the service admin user ID (not name), as the trustor user
# can't lookup the ID in keystoneclient unless they're admin
# workaround this by creating a temporary admin client connection
# then getting the user ID from the auth_ref
admin_creds = self._service_admin_creds()
admin_client = kc.Client(**admin_creds)
trustee_user_id = admin_client.auth_ref.user_id
trustor_user_id = self.client_v3.auth_ref.user_id
trustor_project_id = self.client_v3.auth_ref.project_id
roles = cfg.CONF.trusts_delegated_roles
trust = self.client_v3.trusts.create(trustor_user=trustor_user_id,
trustee_user=trustee_user_id,
project=trustor_project_id,
impersonation=True,
role_names=roles)
trust_context = context.RequestContext.from_dict(
self.context.to_dict())
trust_context.trust_id = trust.id
trust_context.trustor_user_id = trustor_user_id
return trust_context
def delete_trust(self, trust_id):
"""
Delete the specified trust.
"""
self.client_v3.trusts.delete(trust_id)
def create_stack_user(self, username, password=''):
"""
Create a user defined as part of a stack, either via template
or created internally by a resource. This user will be added to
the heat_stack_user_role as defined in the config
Returns the keystone ID of the resulting user
"""
if(len(username) > 64):
logger.warning("Truncating the username %s to the last 64 "
"characters." % username)
# get the last 64 characters of the username
username = username[-64:]
user = self.client_v2.users.create(username,
password,
'%s@heat-api.org' %
username,
tenant_id=self.context.tenant_id,
enabled=True)
# We add the new user to a special keystone role
# This role is designed to allow easier differentiation of the
# heat-generated "stack users" which will generally have credentials
# deployed on an instance (hence are implicitly untrusted)
roles = self.client_v2.roles.list()
stack_user_role = [r.id for r in roles
if r.name == cfg.CONF.heat_stack_user_role]
if len(stack_user_role) == 1:
role_id = stack_user_role[0]
logger.debug("Adding user %s to role %s" % (user.id, role_id))
self.client_v2.roles.add_user_role(user.id, role_id,
self.context.tenant_id)
else:
logger.error("Failed to add user %s to role %s, check role exists!"
% (username, cfg.CONF.heat_stack_user_role))
return user.id
def delete_stack_user(self, user_id):
user = self.client_v2.users.get(user_id)
# FIXME (shardy) : need to test, do we still need this retry logic?
# Copied from user.py, but seems like something we really shouldn't
# need to do, no bug reference in the original comment (below)...
# temporary hack to work around an openstack bug.
# seems you can't delete a user first time - you have to try
# a couple of times - go figure!
tmo = eventlet.Timeout(10)
status = 'WAITING'
reason = 'Timed out trying to delete user'
try:
while status == 'WAITING':
try:
user.delete()
status = 'DELETED'
except Exception as ce:
reason = str(ce)
logger.warning("Problem deleting user %s: %s" %
(user_id, reason))
eventlet.sleep(1)
except eventlet.Timeout as t:
if t is not tmo:
# not my timeout
raise
else:
status = 'TIMEDOUT'
finally:
tmo.cancel()
if status != 'DELETED':
raise exception.Error(reason)
def delete_ec2_keypair(self, user_id, accesskey):
self.client_v2.ec2.delete(user_id, accesskey)
def get_ec2_keypair(self, user_id):
# We make the assumption that each user will only have one
# ec2 keypair; it's not clear if AWS allows multiple AccessKey resources
# to be associated with a single User resource, but for simplicity
# we assume that here for now
cred = self.client_v2.ec2.list(user_id)
if len(cred) == 0:
return self.client_v2.ec2.create(user_id, self.context.tenant_id)
if len(cred) == 1:
return cred[0]
else:
logger.error("Unexpected number of ec2 credentials %s for %s" %
(len(cred), user_id))
def disable_stack_user(self, user_id):
# FIXME : This won't work with the v3 keystone API
self.client_v2.users.update_enabled(user_id, False)
def enable_stack_user(self, user_id):
# FIXME : This won't work with the v3 keystone API
self.client_v2.users.update_enabled(user_id, True)
def url_for(self, **kwargs):
return self.client_v2.service_catalog.url_for(**kwargs)
@property
def auth_token(self):
return self.client_v2.auth_token
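A hedged sketch of how a resource might drive the wrapper above; ctx is an assumed RequestContext with auth_url and either an auth_token, a password or a trust_id, and the trusts workflow also assumes the keystone_authtoken admin credentials are configured:
kc = KeystoneClient(ctx)
trust_ctx = kc.create_trust_context()  # returns ctx unchanged if ctx.trust_id is set
kc.delete_trust(trust_ctx.trust_id)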

View File

@ -1,206 +0,0 @@
# Copyright 2014, Huawei, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# @author: Haojie Jia, Huawei
from oslo.config import cfg
#from heat.openstack.common import local
#from neutron.openstack.common import local
#from heat.common import exception
from neutron.common import exceptions as exception
#from heat.common import wsgi
from neutron import wsgi
#from neutron.openstack.common import context
from oslo_context import context
#from heat.openstack.common import importutils
#from neutron.openstack.common import importutils
from oslo.utils import importutils
#from heat.openstack.common import uuidutils
#from neutron.openstack.common import uuidutils
from oslo.utils import uuidutils
def generate_request_id():
return 'req-' + uuidutils.generate_uuid()
class RequestContext(context.RequestContext):
"""
Stores information about the security context under which the user
accesses the system, as well as additional request information.
"""
def __init__(self, auth_token=None, username=None, password=None,
aws_creds=None, tenant=None,
tenant_id=None, auth_url=None, roles=None, is_admin=False,
region_name=None, read_only=False, show_deleted=False,
owner_is_tenant=True, overwrite=True,
trust_id=None, trustor_user_id=None,
**kwargs):
"""
:param overwrite: Set to False to ensure that the greenthread local
copy of the index is not overwritten.
:param kwargs: Extra arguments that might be present, but we ignore
because they possibly came in from older rpc messages.
"""
super(RequestContext, self).__init__(auth_token=auth_token,
user=username, tenant=tenant,
is_admin=is_admin,
read_only=read_only,
show_deleted=show_deleted,
request_id='unused')
self.username = username
self.password = password
self.aws_creds = aws_creds
self.tenant_id = tenant_id
self.auth_url = auth_url
self.roles = roles or []
self.region_name = region_name
self.owner_is_tenant = owner_is_tenant
# if overwrite or not hasattr(local.store, 'context'):
# self.update_store()
self._session = None
self.trust_id = trust_id
self.trustor_user_id = trustor_user_id
# def update_store(self):
# local.store.context = self
def to_dict(self):
return {'auth_token': self.auth_token,
'username': self.username,
'password': self.password,
'aws_creds': self.aws_creds,
'tenant': self.tenant,
'tenant_id': self.tenant_id,
'trust_id': self.trust_id,
'trustor_user_id': self.trustor_user_id,
'auth_url': self.auth_url,
'roles': self.roles,
'is_admin': self.is_admin,
'region_name': self.region_name}
@classmethod
def from_dict(cls, values):
return cls(**values)
@property
def owner(self):
"""Return the owner to correlate with an image."""
return self.tenant if self.owner_is_tenant else self.user
def get_admin_context(read_deleted="no"):
return RequestContext(is_admin=True)
class ContextMiddleware(wsgi.Middleware):
opts = [cfg.BoolOpt('owner_is_tenant', default=True),
cfg.StrOpt('admin_role', default='admin')]
def __init__(self, app, conf, **local_conf):
cfg.CONF.register_opts(self.opts)
# Determine the context class to use
self.ctxcls = RequestContext
if 'context_class' in local_conf:
self.ctxcls = importutils.import_class(local_conf['context_class'])
super(ContextMiddleware, self).__init__(app)
def make_context(self, *args, **kwargs):
"""
Create a context with the given arguments.
"""
kwargs.setdefault('owner_is_tenant', cfg.CONF.owner_is_tenant)
return self.ctxcls(*args, **kwargs)
def process_request(self, req):
"""
Extract any authentication information in the request and
construct an appropriate context from it.
A few scenarios exist:
1. If X-Auth-Token is passed in, then consult TENANT and ROLE headers
to determine permissions.
2. An X-Auth-Token was passed in, but the Identity-Status is not
confirmed. For now, just raising a NotAuthenticated exception.
3. X-Auth-Token is omitted. If we were using Keystone, then the
tokenauth middleware would have rejected the request, so we must be
using NoAuth. In that case, assume that is_admin=True.
"""
headers = req.headers
try:
"""
This sets the username/password to the admin user because you
need this information in order to perform token authentication.
The real 'username' is the 'tenant'.
We should also check here to see if X-Auth-Token is not set and
in that case we should assign the user/pass directly as the real
username/password and token as None. 'tenant' should still be
the username.
"""
username = None
password = None
aws_creds = None
if headers.get('X-Auth-User') is not None:
username = headers.get('X-Auth-User')
password = headers.get('X-Auth-Key')
elif headers.get('X-Auth-EC2-Creds') is not None:
aws_creds = headers.get('X-Auth-EC2-Creds')
token = headers.get('X-Auth-Token')
tenant = headers.get('X-Tenant-Name')
tenant_id = headers.get('X-Tenant-Id')
auth_url = headers.get('X-Auth-Url')
roles = headers.get('X-Roles')
if roles is not None:
roles = roles.split(',')
except Exception:
raise exception.NotAuthenticated()
req.context = self.make_context(auth_token=token,
tenant=tenant, tenant_id=tenant_id,
aws_creds=aws_creds,
username=username,
password=password,
auth_url=auth_url, roles=roles,
is_admin=True)
def ContextMiddleware_filter_factory(global_conf, **local_conf):
"""
Factory method for paste.deploy
"""
conf = global_conf.copy()
conf.update(local_conf)
def filter(app):
return ContextMiddleware(app, conf)
return filter
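A minimal sketch of wiring the middleware above into a WSGI pipeline directly in code; app is an assumed existing WSGI application and the empty dict stands in for paste's global_conf:
make_filter = ContextMiddleware_filter_factory({})
wrapped_app = make_filter(app)
# each request handled by wrapped_app now carries req.context, built by
# process_request() from the X-Auth-* / X-Tenant-* headers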

View File

@ -1,718 +0,0 @@
# Copyright 2014, Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from neutron.api.rpc.handlers import dvr_rpc
from neutron.common import constants as n_const
from neutron.common import utils as n_utils
from neutron.openstack.common import log as logging
from neutron.plugins.openvswitch.common import constants
LOG = logging.getLogger(__name__)
# A class to represent a DVR-hosted subnet including vif_ports resident on
# that subnet
class LocalDVRSubnetMapping:
def __init__(self, subnet, csnat_ofport=constants.OFPORT_INVALID):
# set of compute ports on this dvr subnet
self.compute_ports = {}
self.subnet = subnet
self.csnat_ofport = csnat_ofport
self.dvr_owned = False
def __str__(self):
return ("subnet = %s compute_ports = %s csnat_port = %s"
" is_dvr_owned = %s" %
(self.subnet, self.get_compute_ofports(),
self.get_csnat_ofport(), self.is_dvr_owned()))
def get_subnet_info(self):
return self.subnet
def set_dvr_owned(self, owned):
self.dvr_owned = owned
def is_dvr_owned(self):
return self.dvr_owned
def add_compute_ofport(self, vif_id, ofport):
self.compute_ports[vif_id] = ofport
def remove_compute_ofport(self, vif_id):
self.compute_ports.pop(vif_id, 0)
def remove_all_compute_ofports(self):
self.compute_ports.clear()
def get_compute_ofports(self):
return self.compute_ports
def set_csnat_ofport(self, ofport):
self.csnat_ofport = ofport
def get_csnat_ofport(self):
return self.csnat_ofport
class OVSPort:
def __init__(self, id, ofport, mac, device_owner):
self.id = id
self.mac = mac
self.ofport = ofport
self.subnets = set()
self.device_owner = device_owner
def __str__(self):
return ("OVSPort: id = %s, ofport = %s, mac = %s,"
"device_owner = %s, subnets = %s" %
(self.id, self.ofport, self.mac,
self.device_owner, self.subnets))
def add_subnet(self, subnet_id):
self.subnets.add(subnet_id)
def remove_subnet(self, subnet_id):
self.subnets.remove(subnet_id)
def remove_all_subnets(self):
self.subnets.clear()
def get_subnets(self):
return self.subnets
def get_device_owner(self):
return self.device_owner
def get_mac(self):
return self.mac
def get_ofport(self):
return self.ofport
class OVSDVRNeutronAgent(dvr_rpc.DVRAgentRpcApiMixin):
'''
Implements OVS-based DVR (Distributed Virtual Router) for overlay networks.
'''
# history
# 1.0 Initial version
def __init__(self, context, plugin_rpc, integ_br, tun_br,
patch_int_ofport=constants.OFPORT_INVALID,
patch_tun_ofport=constants.OFPORT_INVALID,
host=None, enable_tunneling=False,
enable_distributed_routing=False):
self.context = context
self.plugin_rpc = plugin_rpc
self.int_br = integ_br
self.tun_br = tun_br
self.patch_int_ofport = patch_int_ofport
self.patch_tun_ofport = patch_tun_ofport
self.host = host
self.enable_tunneling = enable_tunneling
self.enable_distributed_routing = enable_distributed_routing
def reset_ovs_parameters(self, integ_br, tun_br,
patch_int_ofport, patch_tun_ofport):
'''Reset the openvswitch parameters'''
if not (self.enable_tunneling and self.enable_distributed_routing):
return
self.int_br = integ_br
self.tun_br = tun_br
self.patch_int_ofport = patch_int_ofport
self.patch_tun_ofport = patch_tun_ofport
def setup_dvr_flows_on_integ_tun_br(self):
'''Setup up initial dvr flows into br-int and br-tun'''
if not (self.enable_tunneling and self.enable_distributed_routing):
return
LOG.debug("L2 Agent operating in DVR Mode")
self.dvr_mac_address = None
self.local_dvr_map = {}
self.local_csnat_map = {}
self.local_ports = {}
self.registered_dvr_macs = set()
# get the local DVR MAC Address
try:
details = self.plugin_rpc.get_dvr_mac_address_by_host(
self.context, self.host)
LOG.debug("L2 Agent DVR: Received response for "
"get_dvr_mac_address_by_host() from "
"plugin: %r", details)
self.dvr_mac_address = details['mac_address']
except Exception:
LOG.error(_("DVR: Failed to obtain local DVR Mac address"))
self.enable_distributed_routing = False
# switch all traffic using L2 learning
self.int_br.add_flow(table=constants.LOCAL_SWITCHING,
priority=1, actions="normal")
return
# Remove existing flows in integration bridge
self.int_br.remove_all_flows()
# Add a canary flow to int_br to track OVS restarts
self.int_br.add_flow(table=constants.CANARY_TABLE, priority=0,
actions="drop")
# Insert 'drop' action as the default for Table DVR_TO_SRC_MAC
self.int_br.add_flow(table=constants.DVR_TO_SRC_MAC,
priority=1,
actions="drop")
# Insert 'normal' action as the default for Table LOCAL_SWITCHING
self.int_br.add_flow(table=constants.LOCAL_SWITCHING,
priority=1,
actions="normal")
dvr_macs = self.plugin_rpc.get_dvr_mac_address_list(self.context)
LOG.debug("L2 Agent DVR: Received these MACs: %r", dvr_macs)
for mac in dvr_macs:
if mac['mac_address'] == self.dvr_mac_address:
continue
# Table 0 (default) will now sort DVR traffic from other
# traffic depending on in_port
self.int_br.add_flow(table=constants.LOCAL_SWITCHING,
priority=2,
in_port=self.patch_tun_ofport,
dl_src=mac['mac_address'],
actions="resubmit(,%s)" %
constants.DVR_TO_SRC_MAC)
# Table DVR_NOT_LEARN ensures unique dvr macs in the cloud
# are not learnt, as they may
# result in flow explosions
self.tun_br.add_flow(table=constants.DVR_NOT_LEARN,
priority=1,
dl_src=mac['mac_address'],
actions="output:%s" % self.patch_int_ofport)
self.registered_dvr_macs.add(mac['mac_address'])
self.tun_br.add_flow(priority=1,
in_port=self.patch_int_ofport,
actions="resubmit(,%s)" %
constants.DVR_PROCESS)
# table-miss should be sent to learning table
self.tun_br.add_flow(table=constants.DVR_NOT_LEARN,
priority=0,
actions="resubmit(,%s)" %
constants.LEARN_FROM_TUN)
self.tun_br.add_flow(table=constants.DVR_PROCESS,
priority=0,
actions="resubmit(,%s)" %
constants.PATCH_LV_TO_TUN)
def dvr_mac_address_update(self, dvr_macs):
if not (self.enable_tunneling and self.enable_distributed_routing):
return
LOG.debug("DVR Mac address update with host-mac: %s", dvr_macs)
if not self.dvr_mac_address:
LOG.debug("Self mac unknown, ignoring this "
"dvr_mac_address_update() ")
return
dvr_host_macs = set()
for entry in dvr_macs:
if entry['mac_address'] == self.dvr_mac_address:
continue
dvr_host_macs.add(entry['mac_address'])
if dvr_host_macs == self.registered_dvr_macs:
LOG.debug("DVR Mac address already up to date")
return
dvr_macs_added = dvr_host_macs - self.registered_dvr_macs
dvr_macs_removed = self.registered_dvr_macs - dvr_host_macs
for oldmac in dvr_macs_removed:
self.int_br.delete_flows(table=constants.LOCAL_SWITCHING,
in_port=self.patch_tun_ofport,
dl_src=oldmac)
self.tun_br.delete_flows(table=constants.DVR_NOT_LEARN,
dl_src=oldmac)
LOG.debug("Removed DVR MAC flow for %s", oldmac)
self.registered_dvr_macs.remove(oldmac)
for newmac in dvr_macs_added:
self.int_br.add_flow(table=constants.LOCAL_SWITCHING,
priority=2,
in_port=self.patch_tun_ofport,
dl_src=newmac,
actions="resubmit(,%s)" %
constants.DVR_TO_SRC_MAC)
self.tun_br.add_flow(table=constants.DVR_NOT_LEARN,
priority=1,
dl_src=newmac,
actions="output:%s" % self.patch_int_ofport)
LOG.debug("Added DVR MAC flow for %s", newmac)
self.registered_dvr_macs.add(newmac)
def is_dvr_router_interface(self, device_owner):
return device_owner == n_const.DEVICE_OWNER_DVR_INTERFACE
def process_tunneled_network(self, network_type, lvid, segmentation_id):
if not (self.enable_tunneling and self.enable_distributed_routing):
return
self.tun_br.add_flow(table=constants.TUN_TABLE[network_type],
priority=1,
tun_id=segmentation_id,
actions="mod_vlan_vid:%s,"
"resubmit(,%s)" %
(lvid, constants.DVR_NOT_LEARN))
def _bind_distributed_router_interface_port(self, port, fixed_ips,
device_owner, local_vlan):
# since router port must have only one fixed IP, directly
# use fixed_ips[0]
subnet_uuid = fixed_ips[0]['subnet_id']
csnat_ofport = constants.OFPORT_INVALID
ldm = None
if subnet_uuid in self.local_dvr_map:
ldm = self.local_dvr_map[subnet_uuid]
csnat_ofport = ldm.get_csnat_ofport()
if csnat_ofport == constants.OFPORT_INVALID:
LOG.error(_("DVR: Duplicate DVR router interface detected "
"for subnet %s"), subnet_uuid)
return
else:
# set up LocalDVRSubnetMapping available for this subnet
subnet_info = self.plugin_rpc.get_subnet_for_dvr(self.context,
subnet_uuid)
if not subnet_info:
LOG.error(_("DVR: Unable to retrieve subnet information"
" for subnet_id %s"), subnet_uuid)
return
LOG.debug("get_subnet_for_dvr for subnet %s returned with %s" %
(subnet_uuid, subnet_info))
ldm = LocalDVRSubnetMapping(subnet_info)
self.local_dvr_map[subnet_uuid] = ldm
# DVR takes over
ldm.set_dvr_owned(True)
subnet_info = ldm.get_subnet_info()
ip_subnet = subnet_info['cidr']
local_compute_ports = (
self.plugin_rpc.get_ports_on_host_by_subnet(
self.context, self.host, subnet_uuid))
LOG.debug("DVR: List of ports received from "
"get_ports_on_host_by_subnet %s",
local_compute_ports)
for prt in local_compute_ports:
vif = self.int_br.get_vif_port_by_id(prt['id'])
if not vif:
continue
ldm.add_compute_ofport(vif.vif_id, vif.ofport)
if vif.vif_id in self.local_ports:
# ensure if a compute port is already on
# a different dvr routed subnet
# if yes, queue this subnet to that port
ovsport = self.local_ports[vif.vif_id]
ovsport.add_subnet(subnet_uuid)
else:
# the compute port is discovered here first, on a dvr routed
# subnet; queue this subnet to that port
ovsport = OVSPort(vif.vif_id, vif.ofport,
vif.vif_mac, prt['device_owner'])
ovsport.add_subnet(subnet_uuid)
self.local_ports[vif.vif_id] = ovsport
# create rule for just this vm port
self.int_br.add_flow(table=constants.DVR_TO_SRC_MAC,
priority=4,
dl_vlan=local_vlan,
dl_dst=ovsport.get_mac(),
actions="strip_vlan,mod_dl_src:%s,"
"output:%s" %
(subnet_info['gateway_mac'],
ovsport.get_ofport()))
# create rule to forward broadcast/multicast frames from dvr
# router interface to appropriate local tenant ports
ofports = ','.join(map(str, ldm.get_compute_ofports().values()))
if csnat_ofport != constants.OFPORT_INVALID:
ofports = str(csnat_ofport) + ',' + ofports
if ofports:
self.int_br.add_flow(table=constants.DVR_TO_SRC_MAC,
priority=2,
proto='ip',
dl_vlan=local_vlan,
nw_dst=ip_subnet,
actions="strip_vlan,mod_dl_src:%s,"
"output:%s" %
(subnet_info['gateway_mac'], ofports))
self.tun_br.add_flow(table=constants.DVR_PROCESS,
priority=3,
dl_vlan=local_vlan,
proto='arp',
nw_dst=subnet_info['gateway_ip'],
actions="drop")
self.tun_br.add_flow(table=constants.DVR_PROCESS,
priority=2,
dl_vlan=local_vlan,
dl_dst=port.vif_mac,
actions="drop")
self.tun_br.add_flow(table=constants.DVR_PROCESS,
priority=1,
dl_vlan=local_vlan,
dl_src=port.vif_mac,
actions="mod_dl_src:%s,resubmit(,%s)" %
(self.dvr_mac_address,
constants.PATCH_LV_TO_TUN))
# the dvr router interface is itself a port, so capture it
# queue this subnet to that port. A subnet appears only once as
# a router interface on any given router
ovsport = OVSPort(port.vif_id, port.ofport,
port.vif_mac, device_owner)
ovsport.add_subnet(subnet_uuid)
self.local_ports[port.vif_id] = ovsport
def _bind_port_on_dvr_subnet(self, port, fixed_ips,
device_owner, local_vlan):
# Handle new compute port added use-case
subnet_uuid = None
for ips in fixed_ips:
if ips['subnet_id'] not in self.local_dvr_map:
continue
subnet_uuid = ips['subnet_id']
ldm = self.local_dvr_map[subnet_uuid]
if not ldm.is_dvr_owned():
# well this is CSNAT stuff, let dvr come in
# and do plumbing for this vm later
continue
# This confirms that this compute port belongs
# to a dvr hosted subnet.
# Accommodate this VM Port into the existing rule in
# the integration bridge
LOG.debug("DVR: Plumbing compute port %s", port.vif_id)
subnet_info = ldm.get_subnet_info()
ip_subnet = subnet_info['cidr']
csnat_ofport = ldm.get_csnat_ofport()
ldm.add_compute_ofport(port.vif_id, port.ofport)
if port.vif_id in self.local_ports:
# ensure if a compute port is already on a different
# dvr routed subnet
# if yes, queue this subnet to that port
ovsport = self.local_ports[port.vif_id]
ovsport.add_subnet(subnet_uuid)
else:
# the compute port is discovered here first, on a dvr routed
# subnet; queue this subnet to that port
ovsport = OVSPort(port.vif_id, port.ofport,
port.vif_mac, device_owner)
ovsport.add_subnet(subnet_uuid)
self.local_ports[port.vif_id] = ovsport
# create a rule for this vm port
self.int_br.add_flow(table=constants.DVR_TO_SRC_MAC,
priority=4,
dl_vlan=local_vlan,
dl_dst=ovsport.get_mac(),
actions="strip_vlan,mod_dl_src:%s,"
"output:%s" %
(subnet_info['gateway_mac'],
ovsport.get_ofport()))
ofports = ','.join(map(str, ldm.get_compute_ofports().values()))
if csnat_ofport != constants.OFPORT_INVALID:
ofports = str(csnat_ofport) + ',' + ofports
self.int_br.add_flow(table=constants.DVR_TO_SRC_MAC,
priority=2,
proto='ip',
dl_vlan=local_vlan,
nw_dst=ip_subnet,
actions="strip_vlan,mod_dl_src:%s,"
" output:%s" %
(subnet_info['gateway_mac'], ofports))
def _bind_centralized_snat_port_on_dvr_subnet(self, port, fixed_ips,
device_owner, local_vlan):
if port.vif_id in self.local_ports:
# throw an error if CSNAT port is already on a different
# dvr routed subnet
ovsport = self.local_ports[port.vif_id]
subs = list(ovsport.get_subnets())
LOG.error(_("Centralized-SNAT port %s already seen on "),
port.vif_id)
LOG.error(_("a different subnet %s"), subs[0])
return
# since centralized-SNAT (CSNAT) port must have only one fixed
# IP, directly use fixed_ips[0]
subnet_uuid = fixed_ips[0]['subnet_id']
ldm = None
subnet_info = None
if subnet_uuid not in self.local_dvr_map:
# no csnat ports seen on this subnet - create csnat state
# for this subnet
subnet_info = self.plugin_rpc.get_subnet_for_dvr(self.context,
subnet_uuid)
ldm = LocalDVRSubnetMapping(subnet_info, port.ofport)
self.local_dvr_map[subnet_uuid] = ldm
else:
ldm = self.local_dvr_map[subnet_uuid]
subnet_info = ldm.get_subnet_info()
# Store csnat OF Port in the existing DVRSubnetMap
ldm.set_csnat_ofport(port.ofport)
# create ovsPort footprint for csnat port
ovsport = OVSPort(port.vif_id, port.ofport,
port.vif_mac, device_owner)
ovsport.add_subnet(subnet_uuid)
self.local_ports[port.vif_id] = ovsport
self.int_br.add_flow(table=constants.DVR_TO_SRC_MAC,
priority=4,
dl_vlan=local_vlan,
dl_dst=ovsport.get_mac(),
actions="strip_vlan,mod_dl_src:%s,"
" output:%s" %
(subnet_info['gateway_mac'],
ovsport.get_ofport()))
ofports = ','.join(map(str, ldm.get_compute_ofports().values()))
ofports = str(ldm.get_csnat_ofport()) + ',' + ofports
ip_subnet = subnet_info['cidr']
self.int_br.add_flow(table=constants.DVR_TO_SRC_MAC,
priority=2,
proto='ip',
dl_vlan=local_vlan,
nw_dst=ip_subnet,
actions="strip_vlan,mod_dl_src:%s,"
" output:%s" %
(subnet_info['gateway_mac'], ofports))
def bind_port_to_dvr(self, port, network_type, fixed_ips,
device_owner, local_vlan_id):
# a port coming up as distributed router interface
if not (self.enable_tunneling and self.enable_distributed_routing):
return
if network_type not in constants.TUNNEL_NETWORK_TYPES:
return
if device_owner == n_const.DEVICE_OWNER_DVR_INTERFACE:
self._bind_distributed_router_interface_port(port, fixed_ips,
device_owner,
local_vlan_id)
if device_owner and n_utils.is_dvr_serviced(device_owner):
self._bind_port_on_dvr_subnet(port, fixed_ips,
device_owner,
local_vlan_id)
if device_owner == n_const.DEVICE_OWNER_ROUTER_SNAT:
self._bind_centralized_snat_port_on_dvr_subnet(port, fixed_ips,
device_owner,
local_vlan_id)
def _unbind_distributed_router_interface_port(self, port, local_vlan):
ovsport = self.local_ports[port.vif_id]
# removal of distributed router interface
subnet_ids = ovsport.get_subnets()
subnet_set = set(subnet_ids)
# ensure we process for all the subnets laid on this removed port
for sub_uuid in subnet_set:
if sub_uuid not in self.local_dvr_map:
continue
ldm = self.local_dvr_map[sub_uuid]
subnet_info = ldm.get_subnet_info()
ip_subnet = subnet_info['cidr']
# DVR is no more owner
ldm.set_dvr_owned(False)
# remove all vm rules for this dvr subnet
# clear of compute_ports altogether
compute_ports = ldm.get_compute_ofports()
for vif_id in compute_ports:
ovsport = self.local_ports[vif_id]
self.int_br.delete_flows(table=constants.DVR_TO_SRC_MAC,
dl_vlan=local_vlan,
dl_dst=ovsport.get_mac())
ldm.remove_all_compute_ofports()
if ldm.get_csnat_ofport() != -1:
# If there is a csnat port on this agent, preserve
# the local_dvr_map state
ofports = str(ldm.get_csnat_ofport())
self.int_br.add_flow(table=constants.DVR_TO_SRC_MAC,
priority=2,
proto='ip',
dl_vlan=local_vlan,
nw_dst=ip_subnet,
actions="strip_vlan,mod_dl_src:%s,"
" output:%s" %
(subnet_info['gateway_mac'], ofports))
else:
# removed port is a distributed router interface
self.int_br.delete_flows(table=constants.DVR_TO_SRC_MAC,
proto='ip', dl_vlan=local_vlan,
nw_dst=ip_subnet)
# remove subnet from local_dvr_map as no dvr (or) csnat
# ports available on this agent anymore
self.local_dvr_map.pop(sub_uuid, None)
self.tun_br.delete_flows(table=constants.DVR_PROCESS,
dl_vlan=local_vlan,
proto='arp',
nw_dst=subnet_info['gateway_ip'])
ovsport.remove_subnet(sub_uuid)
self.tun_br.delete_flows(table=constants.DVR_PROCESS,
dl_vlan=local_vlan,
dl_dst=port.vif_mac)
self.tun_br.delete_flows(table=constants.DVR_PROCESS,
dl_vlan=local_vlan,
dl_src=port.vif_mac)
# release port state
self.local_ports.pop(port.vif_id, None)
def _unbind_port_on_dvr_subnet(self, port, local_vlan):
ovsport = self.local_ports[port.vif_id]
# This confirms that the compute port being removed belonged
# to a dvr hosted subnet.
# Remove this VM port from the existing rules in
# the integration bridge
LOG.debug("DVR: Removing plumbing for compute port %s", port)
subnet_ids = ovsport.get_subnets()
# ensure we process for all the subnets laid on this port
for sub_uuid in subnet_ids:
if sub_uuid not in self.local_dvr_map:
continue
ldm = self.local_dvr_map[sub_uuid]
subnet_info = ldm.get_subnet_info()
ldm.remove_compute_ofport(port.vif_id)
ofports = ','.join(map(str, ldm.get_compute_ofports().values()))
ip_subnet = subnet_info['cidr']
# first remove this vm port rule
self.int_br.delete_flows(table=constants.DVR_TO_SRC_MAC,
dl_vlan=local_vlan,
dl_dst=ovsport.get_mac())
if ldm.get_csnat_ofport() != -1:
# If there is a csnat port on this agent, preserve
# the local_dvr_map state
ofports = str(ldm.get_csnat_ofport()) + ',' + ofports
self.int_br.add_flow(table=constants.DVR_TO_SRC_MAC,
priority=2,
proto='ip',
dl_vlan=local_vlan,
nw_dst=ip_subnet,
actions="strip_vlan,mod_dl_src:%s,"
" output:%s" %
(subnet_info['gateway_mac'], ofports))
else:
if ofports:
self.int_br.add_flow(table=constants.DVR_TO_SRC_MAC,
priority=2,
proto='ip',
dl_vlan=local_vlan,
nw_dst=ip_subnet,
actions="strip_vlan,mod_dl_src:%s,"
" output:%s" %
(subnet_info['gateway_mac'],
ofports))
else:
# remove the flow altogether, as no ports (both csnat/
# compute) are available on this subnet in this
# agent
self.int_br.delete_flows(table=constants.DVR_TO_SRC_MAC,
proto='ip',
dl_vlan=local_vlan,
nw_dst=ip_subnet)
# release port state
self.local_ports.pop(port.vif_id, None)
def _unbind_centralized_snat_port_on_dvr_subnet(self, port, local_vlan):
ovsport = self.local_ports[port.vif_id]
# This confirms that the csnat port being removed belonged
# to a dvr hosted subnet.
# Remove the csnat port from the existing rules in
# the integration bridge
LOG.debug("DVR: Removing plumbing for csnat port %s", port)
sub_uuid = list(ovsport.get_subnets())[0]
# ensure we process for all the subnets laid on this port
if sub_uuid not in self.local_dvr_map:
return
ldm = self.local_dvr_map[sub_uuid]
subnet_info = ldm.get_subnet_info()
ip_subnet = subnet_info['cidr']
ldm.set_csnat_ofport(constants.OFPORT_INVALID)
# then remove csnat port rule
self.int_br.delete_flows(table=constants.DVR_TO_SRC_MAC,
dl_vlan=local_vlan,
dl_dst=ovsport.get_mac())
ofports = ','.join(map(str, ldm.get_compute_ofports().values()))
if ofports:
self.int_br.add_flow(table=constants.DVR_TO_SRC_MAC,
priority=2,
proto='ip',
dl_vlan=local_vlan,
nw_dst=ip_subnet,
actions="strip_vlan,mod_dl_src:%s,"
" output:%s" %
(subnet_info['gateway_mac'], ofports))
else:
self.int_br.delete_flows(table=constants.DVR_TO_SRC_MAC,
proto='ip',
dl_vlan=local_vlan,
nw_dst=ip_subnet)
if not ldm.is_dvr_owned():
# if not owned by DVR (only used for csnat), remove this
# subnet state altogether
self.local_dvr_map.pop(sub_uuid, None)
# release port state
self.local_ports.pop(port.vif_id, None)
def unbind_port_from_dvr(self, vif_port, local_vlan_id):
if not (self.enable_tunneling and self.enable_distributed_routing):
return
# Handle port removed use-case
if vif_port and vif_port.vif_id not in self.local_ports:
LOG.debug("DVR: Non distributed port, ignoring %s", vif_port)
return
ovsport = self.local_ports[vif_port.vif_id]
device_owner = ovsport.get_device_owner()
if device_owner == n_const.DEVICE_OWNER_DVR_INTERFACE:
self._unbind_distributed_router_interface_port(vif_port,
local_vlan_id)
if device_owner and n_utils.is_dvr_serviced(device_owner):
self._unbind_port_on_dvr_subnet(vif_port, local_vlan_id)
if device_owner == n_const.DEVICE_OWNER_ROUTER_SNAT:
self._unbind_centralized_snat_port_on_dvr_subnet(vif_port,
local_vlan_id)
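A hedged construction sketch for the helper above. In practice the OVS L2 proxy agent creates it; ctx, plugin_rpc and the OVSBridge objects int_br and tun_br, as well as the patch-port ofports, are assumptions here:
dvr_helper = OVSDVRNeutronAgent(ctx, plugin_rpc, int_br, tun_br,
patch_int_ofport=5, patch_tun_ofport=6,
host='compute-1', enable_tunneling=True,
enable_distributed_routing=True)
# installs the initial DVR flow tables on br-int and br-tun
dvr_helper.setup_dvr_flows_on_integ_tun_br()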

View File

@ -1,16 +0,0 @@
This directory contains files that are required for the XenAPI support.
They should be installed in the XenServer / Xen Cloud Platform dom0.
If you install them manually, you will need to ensure that the newly
added files are executable. You can do this by running the following
command (from dom0):
chmod a+x /etc/xapi.d/plugins/*
Otherwise, you can build an rpm by running the following command:
./contrib/build-rpm.sh
and install the rpm by running the following command (from dom0):
rpm -i openstack-neutron-xen-plugins.rpm

View File

@ -1,34 +0,0 @@
#!/bin/bash
set -eux
thisdir=$(dirname $(readlink -f "$0"))
export NEUTRON_ROOT="$thisdir/../../../../../../"
export PYTHONPATH=$NEUTRON_ROOT
cd $NEUTRON_ROOT
VERSION=$(sh -c "(cat $NEUTRON_ROOT/neutron/version.py; \
echo 'print version_info.release_string()') | \
python")
cd -
PACKAGE=openstack-neutron-xen-plugins
RPMBUILD_DIR=$PWD/rpmbuild
if [ ! -d $RPMBUILD_DIR ]; then
echo $RPMBUILD_DIR is missing
exit 1
fi
for dir in BUILD BUILDROOT SRPMS RPMS SOURCES; do
rm -rf $RPMBUILD_DIR/$dir
mkdir -p $RPMBUILD_DIR/$dir
done
rm -rf /tmp/$PACKAGE
mkdir /tmp/$PACKAGE
cp -r ../etc/xapi.d /tmp/$PACKAGE
tar czf $RPMBUILD_DIR/SOURCES/$PACKAGE.tar.gz -C /tmp $PACKAGE
rpmbuild -ba --nodeps --define "_topdir $RPMBUILD_DIR" \
--define "version $VERSION" \
$RPMBUILD_DIR/SPECS/$PACKAGE.spec

View File

@ -1,30 +0,0 @@
Name: openstack-neutron-xen-plugins
Version: %{version}
Release: 1
Summary: Files for XenAPI support.
License: ASL 2.0
Group: Applications/Utilities
Source0: openstack-neutron-xen-plugins.tar.gz
BuildArch: noarch
BuildRoot: %{_tmppath}/%{name}-%{version}-%{release}-root-%(%{__id_u} -n)
%define debug_package %{nil}
%description
This package contains files that are required for XenAPI support for Neutron.
%prep
%setup -q -n openstack-neutron-xen-plugins
%install
rm -rf $RPM_BUILD_ROOT
mkdir -p $RPM_BUILD_ROOT/etc
cp -r xapi.d $RPM_BUILD_ROOT/etc
chmod a+x $RPM_BUILD_ROOT/etc/xapi.d/plugins/*
%clean
rm -rf $RPM_BUILD_ROOT
%files
%defattr(-,root,root,-)
/etc/xapi.d/plugins/*

View File

@ -1,72 +0,0 @@
#!/usr/bin/env python
# Copyright 2012 OpenStack Foundation
# Copyright 2012 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# XenAPI plugin for executing network commands (ovs, iptables, etc) on dom0
#
import gettext
gettext.install('neutron', unicode=1)
try:
import json
except ImportError:
import simplejson as json
import subprocess
import XenAPIPlugin
ALLOWED_CMDS = [
'ip',
'ovs-ofctl',
'ovs-vsctl',
]
class PluginError(Exception):
"""Base Exception class for all plugin errors."""
def __init__(self, *args):
Exception.__init__(self, *args)
def _run_command(cmd, cmd_input):
"""Abstracts out the basics of issuing system commands. If the command
returns anything in stderr, a PluginError is raised with that information.
Otherwise, the output from stdout is returned.
"""
pipe = subprocess.PIPE
proc = subprocess.Popen(cmd, shell=False, stdin=pipe, stdout=pipe,
stderr=pipe, close_fds=True)
(out, err) = proc.communicate(cmd_input)
if err:
raise PluginError(err)
return out
def run_command(session, args):
cmd = json.loads(args.get('cmd'))
if cmd and cmd[0] not in ALLOWED_CMDS:
msg = _("Dom0 execution of '%s' is not permitted") % cmd[0]
raise PluginError(msg)
result = _run_command(cmd, json.loads(args.get('cmd_input', 'null')))
return json.dumps(result)
if __name__ == "__main__":
XenAPIPlugin.dispatch({"run_command": run_command})
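A hedged sketch of calling this plugin through XenAPI; the session and host_ref are assumed to exist already, and the plugin name 'netwrap' is an assumption based on the customary file name for this plugin:
args = {'cmd': json.dumps(['ovs-vsctl', 'show']),
'cmd_input': json.dumps(None)}
output = session.xenapi.host.call_plugin(host_ref, 'netwrap', 'run_command', args)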

View File

@ -1,135 +0,0 @@
# Copyright 2012 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo.config import cfg
from neutron.agent.common import config
from neutron.plugins.common import constants as p_const
from neutron.plugins.l2_proxy.common import constants
DEFAULT_BRIDGE_MAPPINGS = []
DEFAULT_VLAN_RANGES = []
DEFAULT_TUNNEL_RANGES = []
DEFAULT_TUNNEL_TYPES = []
ovs_opts = [
cfg.StrOpt('integration_bridge', default='br-int',
help=_("Integration bridge to use.")),
cfg.BoolOpt('enable_tunneling', default=False,
help=_("Enable tunneling support.")),
cfg.StrOpt('tunnel_bridge', default='br-tun',
help=_("Tunnel bridge to use.")),
cfg.StrOpt('int_peer_patch_port', default='patch-tun',
help=_("Peer patch port in integration bridge for tunnel "
"bridge.")),
cfg.StrOpt('tun_peer_patch_port', default='patch-int',
help=_("Peer patch port in tunnel bridge for integration "
"bridge.")),
cfg.StrOpt('local_ip', default='',
help=_("Local IP address of GRE tunnel endpoints.")),
cfg.ListOpt('bridge_mappings',
default=DEFAULT_BRIDGE_MAPPINGS,
help=_("List of <physical_network>:<bridge>. "
"Deprecated for ofagent.")),
cfg.StrOpt('tenant_network_type', default='local',
help=_("Network type for tenant networks "
"(local, vlan, gre, vxlan, or none).")),
cfg.ListOpt('network_vlan_ranges',
default=DEFAULT_VLAN_RANGES,
help=_("List of <physical_network>:<vlan_min>:<vlan_max> "
"or <physical_network>.")),
cfg.ListOpt('tunnel_id_ranges',
default=DEFAULT_TUNNEL_RANGES,
help=_("List of <tun_min>:<tun_max>.")),
cfg.StrOpt('tunnel_type', default='',
help=_("The type of tunnels to use when utilizing tunnels, "
"either 'gre' or 'vxlan'.")),
cfg.BoolOpt('use_veth_interconnection', default=False,
help=_("Use veths instead of patch ports to interconnect the "
"integration bridge to physical bridges.")),
]
agent_opts = [
cfg.IntOpt('polling_interval', default=2,
help=_("The number of seconds the agent will wait between "
"polling for local device changes.")),
cfg.BoolOpt('minimize_polling',
default=True,
help=_("Minimize polling by monitoring ovsdb for interface "
"changes.")),
cfg.IntOpt('ovsdb_monitor_respawn_interval',
default=constants.DEFAULT_OVSDBMON_RESPAWN,
help=_("The number of seconds to wait before respawning the "
"ovsdb monitor after losing communication with it.")),
cfg.ListOpt('tunnel_types', default=DEFAULT_TUNNEL_TYPES,
help=_("Network types supported by the agent "
"(gre and/or vxlan).")),
cfg.IntOpt('vxlan_udp_port', default=p_const.VXLAN_UDP_PORT,
help=_("The UDP port to use for VXLAN tunnels.")),
cfg.IntOpt('veth_mtu',
help=_("MTU size of veth interfaces")),
cfg.BoolOpt('l2_population', default=False,
help=_("Use ML2 l2population mechanism driver to learn "
"remote MAC and IPs and improve tunnel scalability.")),
cfg.BoolOpt('arp_responder', default=False,
help=_("Enable local ARP responder if it is supported. "
"Requires OVS 2.1 and ML2 l2population driver. "
"Allows the switch (when supporting an overlay) "
"to respond to an ARP request locally without "
"performing a costly ARP broadcast into the overlay.")),
cfg.BoolOpt('dont_fragment', default=True,
help=_("Set or un-set the don't fragment (DF) bit on "
"outgoing IP packet carrying GRE/VXLAN tunnel.")),
cfg.BoolOpt('enable_distributed_routing', default=False,
help=_("Make the l2 agent run in DVR mode.")),
# add by jiahaojie 00209498
cfg.StrOpt('os_region_name', default=None,
help=_("region name to use")),
cfg.StrOpt('keystone_auth_url', default='http://127.0.0.1:35357/v2.0',
help=_("keystone auth url to use")),
cfg.StrOpt('neutron_user_name',
help=_("access neutron user name to use")),
cfg.StrOpt('neutron_password',
help=_("access neutron password to use")),
cfg.StrOpt('neutron_tenant_name',
help=_("access neutron tenant to use")),
# add by jiahaojie 00209498
cfg.StrOpt('cascading_os_region_name', default=None,
help=_("region name to use")),
cfg.StrOpt('cascading_auth_url', default='http://127.0.0.1:35357/v2.0',
help=_("keystone auth url to use")),
cfg.StrOpt('cascading_user_name',
help=_("access neutron user name to use")),
cfg.StrOpt('cascading_password',
help=_("access neutron password to use")),
cfg.StrOpt('cascading_tenant_name',
help=_("access neutron tenant to use")),
cfg.IntOpt('pagination_limit', default=-1,
help=_("List-ports pagination limit; the default value -1 "
"means no pagination.")),
cfg.StrOpt('query_ports_mode', default='nova_proxy',
help=_("Query ports mode; the default value nova_proxy "
"means ports are queried from the nova proxy.")),
cfg.StrOpt('proxy_sock_path', default='/var/l2proxysock',
help=_("Socket path used when querying ports from the nova proxy.")),
]
cfg.CONF.register_opts(ovs_opts, "OVS")
cfg.CONF.register_opts(agent_opts, "AGENT")
config.register_agent_state_opts_helper(cfg.CONF)
config.register_root_helper(cfg.CONF)
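A minimal sketch of reading a few of the options registered above once this module has been imported and the configuration parsed; the config-file path is an assumption:
from oslo.config import cfg
# parse an assumed agent configuration file
cfg.CONF(['--config-file', '/etc/neutron/plugins/l2_proxy/l2_proxy_agent.ini'])
print(cfg.CONF.OVS.integration_bridge)  # 'br-int' unless overridden
print(cfg.CONF.AGENT.tunnel_types)      # e.g. ['vxlan']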

View File

@ -1,73 +0,0 @@
# Copyright (c) 2012 OpenStack Foundation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from neutron.plugins.common import constants as p_const
# Special vlan_id value in ovs_vlan_allocations table indicating flat network
FLAT_VLAN_ID = -1
# Topic for tunnel notifications between the plugin and agent
TUNNEL = 'tunnel'
# Name prefixes for veth device or patch port pair linking the integration
# bridge with the physical bridge for a physical network
PEER_INTEGRATION_PREFIX = 'int-'
PEER_PHYSICAL_PREFIX = 'phy-'
# Nonexistent peer used to create patch ports without associating them, it
# allows to define flows before association
NONEXISTENT_PEER = 'nonexistent-peer'
# The different types of tunnels
TUNNEL_NETWORK_TYPES = [p_const.TYPE_GRE, p_const.TYPE_VXLAN]
# Various tables for DVR use of integration bridge flows
LOCAL_SWITCHING = 0
DVR_TO_SRC_MAC = 1
# Various tables for tunneling flows
DVR_PROCESS = 1
PATCH_LV_TO_TUN = 2
GRE_TUN_TO_LV = 3
VXLAN_TUN_TO_LV = 4
DVR_NOT_LEARN = 9
LEARN_FROM_TUN = 10
UCAST_TO_TUN = 20
ARP_RESPONDER = 21
FLOOD_TO_TUN = 22
# Tables for integration bridge
# Table 0 is used for forwarding.
CANARY_TABLE = 23
# Map tunnel types to tables number
TUN_TABLE = {p_const.TYPE_GRE: GRE_TUN_TO_LV,
p_const.TYPE_VXLAN: VXLAN_TUN_TO_LV}
# The default respawn interval for the ovsdb monitor
DEFAULT_OVSDBMON_RESPAWN = 30
# Represent invalid OF Port
OFPORT_INVALID = -1
ARP_RESPONDER_ACTIONS = ('move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],'
'mod_dl_src:%(mac)s,'
'load:0x2->NXM_OF_ARP_OP[],'
'move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],'
'move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],'
'load:%(mac)#x->NXM_NX_ARP_SHA[],'
'load:%(ip)#x->NXM_OF_ARP_SPA[],'
'in_port')


@ -1,107 +0,0 @@
# Copyright 2011 VMware, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from sqlalchemy import Boolean, Column, ForeignKey, Integer, String
from sqlalchemy.schema import UniqueConstraint
from neutron.db import model_base
from neutron.db import models_v2
from sqlalchemy import orm
class VlanAllocation(model_base.BASEV2):
"""Represents allocation state of vlan_id on physical network."""
__tablename__ = 'ovs_vlan_allocations'
physical_network = Column(String(64), nullable=False, primary_key=True)
vlan_id = Column(Integer, nullable=False, primary_key=True,
autoincrement=False)
allocated = Column(Boolean, nullable=False)
def __init__(self, physical_network, vlan_id):
self.physical_network = physical_network
self.vlan_id = vlan_id
self.allocated = False
def __repr__(self):
return "<VlanAllocation(%s,%d,%s)>" % (self.physical_network,
self.vlan_id, self.allocated)
class TunnelAllocation(model_base.BASEV2):
"""Represents allocation state of tunnel_id."""
__tablename__ = 'ovs_tunnel_allocations'
tunnel_id = Column(Integer, nullable=False, primary_key=True,
autoincrement=False)
allocated = Column(Boolean, nullable=False)
def __init__(self, tunnel_id):
self.tunnel_id = tunnel_id
self.allocated = False
def __repr__(self):
return "<TunnelAllocation(%d,%s)>" % (self.tunnel_id, self.allocated)
class NetworkBinding(model_base.BASEV2):
"""Represents binding of virtual network to physical realization."""
__tablename__ = 'ovs_network_bindings'
network_id = Column(String(36),
ForeignKey('networks.id', ondelete="CASCADE"),
primary_key=True)
# 'gre', 'vlan', 'flat', 'local'
network_type = Column(String(32), nullable=False)
physical_network = Column(String(64))
segmentation_id = Column(Integer) # tunnel_id or vlan_id
network = orm.relationship(
models_v2.Network,
backref=orm.backref("binding", lazy='joined',
uselist=False, cascade='delete'))
def __init__(self, network_id, network_type, physical_network,
segmentation_id):
self.network_id = network_id
self.network_type = network_type
self.physical_network = physical_network
self.segmentation_id = segmentation_id
def __repr__(self):
return "<NetworkBinding(%s,%s,%s,%d)>" % (self.network_id,
self.network_type,
self.physical_network,
self.segmentation_id)
class TunnelEndpoint(model_base.BASEV2):
"""Represents tunnel endpoint in RPC mode."""
__tablename__ = 'ovs_tunnel_endpoints'
__table_args__ = (
UniqueConstraint('id', name='uniq_ovs_tunnel_endpoints0id'),
model_base.BASEV2.__table_args__,
)
ip_address = Column(String(64), primary_key=True)
id = Column(Integer, nullable=False)
def __init__(self, ip_address, id):
self.ip_address = ip_address
self.id = id
def __repr__(self):
return "<TunnelEndpoint(%s,%s)>" % (self.ip_address, self.id)

Two binary image files removed (43 KiB and 75 KiB); contents not shown.

46
requirements.txt Normal file

@ -0,0 +1,46 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
pbr>=1.6
Babel>=1.3
Paste
PasteDeploy>=1.5.0
Routes!=2.0,!=2.1,>=1.12.3;python_version=='2.7'
Routes!=2.0,>=1.12.3;python_version!='2.7'
debtcollector>=0.3.0 # Apache-2.0
eventlet>=0.17.4
pecan>=1.0.0
greenlet>=0.3.2
httplib2>=0.7.5
requests!=2.8.0,>=2.5.2
Werkzeug>=0.7 # BSD License
Jinja2>=2.8 # BSD License (3 clause)
keystonemiddleware!=2.4.0,>=2.0.0
netaddr!=0.7.16,>=0.7.12
retrying!=1.3.0,>=1.2.3 # Apache-2.0
SQLAlchemy<1.1.0,>=0.9.9
WebOb>=1.2.3
python-cinderclient>=1.3.1
python-keystoneclient!=1.8.0,>=1.6.0
python-neutronclient>=2.6.0
python-novaclient>=2.29.0,!=2.33.0
alembic>=0.8.0
six>=1.9.0
stevedore>=1.5.0 # Apache-2.0
oslo.concurrency>=2.3.0 # Apache-2.0
oslo.config>=2.6.0 # Apache-2.0
oslo.context>=0.2.0 # Apache-2.0
oslo.db>=3.0.0 # Apache-2.0
oslo.i18n>=1.5.0 # Apache-2.0
oslo.log>=1.12.0 # Apache-2.0
oslo.messaging!=2.8.0,>2.6.1 # Apache-2.0
oslo.middleware>=2.9.0 # Apache-2.0
oslo.policy>=0.5.0 # Apache-2.0
oslo.rootwrap>=2.0.0 # Apache-2.0
oslo.serialization>=1.10.0 # Apache-2.0
oslo.service>=0.12.0 # Apache-2.0
oslo.utils!=2.6.0,>=2.4.0 # Apache-2.0
oslo.versionedobjects>=0.9.0
SQLAlchemy<1.1.0,>=0.9.9
sqlalchemy-migrate>=0.9.6

51
setup.cfg Normal file

@ -0,0 +1,51 @@
[metadata]
name = tricircle
summary = Tricircle is an OpenStack project that aims to deal with OpenStack deployment across multiple sites.
description-file =
README.md
author = OpenStack
author-email = openstack-dev@lists.openstack.org
home-page = http://www.openstack.org/
classifier =
Environment :: OpenStack
Intended Audience :: Information Technology
Intended Audience :: System Administrators
License :: OSI Approved :: Apache Software License
Operating System :: POSIX :: Linux
Programming Language :: Python
Programming Language :: Python :: 2
Programming Language :: Python :: 2.7
Programming Language :: Python :: 3
Programming Language :: Python :: 3.3
Programming Language :: Python :: 3.4
[files]
packages =
tricircle
[build_sphinx]
source-dir = doc/source
build-dir = doc/build
all_files = 1
[upload_sphinx]
upload-dir = doc/build/html
[compile_catalog]
directory = tricircle/locale
domain = tricircle
[update_catalog]
domain = tricircle
output_dir = tricircle/locale
input_file = tricircle/locale/tricircle.pot
[extract_messages]
keywords = _ gettext ngettext l_ lazy_gettext
mapping_file = babel.cfg
output_file = tricircle/locale/tricircle.pot
[entry_points]
oslo.config.opts =
tricircle.api = tricircle.api.opts:list_opts
tricircle.client = tricircle.common.opts:list_opts

29
setup.py Normal file

@ -0,0 +1,29 @@
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT
import setuptools
# In python < 2.7.4, a lazy loading of package `pbr` will break
# setuptools if some other modules registered functions in `atexit`.
# solution from: http://bugs.python.org/issue15881#msg170215
try:
import multiprocessing # noqa
except ImportError:
pass
setuptools.setup(
setup_requires=['pbr'],
pbr=True)

23
test-requirements.txt Normal file

@ -0,0 +1,23 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
hacking<0.11,>=0.10.2
cliff>=1.14.0 # Apache-2.0
coverage>=3.6
fixtures>=1.3.1
mock>=1.2
python-subunit>=0.0.18
requests-mock>=0.6.0 # Apache-2.0
sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2
oslosphinx>=2.5.0 # Apache-2.0
testrepository>=0.0.18
testtools>=1.4.0
testresources>=0.2.4
testscenarios>=0.4
WebTest>=2.0
oslotest>=1.10.0 # Apache-2.0
os-testr>=0.4.1
tempest-lib>=0.10.0
ddt>=0.7.0
pylint==1.4.4 # GNU GPL v2

40
tox.ini Normal file

@ -0,0 +1,40 @@
[tox]
minversion = 1.6
envlist = py34,py27,pypy,pep8
skipsdist = True
[testenv]
sitepackages = True
usedevelop = True
install_command = pip install -U --force-reinstall {opts} {packages}
setenv =
VIRTUAL_ENV={envdir}
deps = -r{toxinidir}/test-requirements.txt
commands = python setup.py testr --slowest --testr-args='{posargs}'
whitelist_externals = rm
[testenv:pep8]
commands = flake8
[testenv:venv]
commands = {posargs}
[testenv:cover]
commands = python setup.py testr --coverage --testr-args='{posargs}'
[testenv:genconfig]
commands = oslo-config-generator --config-file=etc/config-generator.conf
[testenv:docs]
commands = python setup.py build_sphinx
[testenv:debug]
commands = oslo_debug_helper {posargs}
[flake8]
# E123, E125 skipped as they are invalid PEP-8.
show-source = True
ignore = E123,E125
builtins = _
exclude=.venv,.git,.tox,dist,doc,*openstack/common*,*lib/python*,*egg,build

109
tricircle/api/app.py Normal file

@ -0,0 +1,109 @@
# Copyright (c) 2015 Huawei, Tech. Co,. Ltd.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from keystonemiddleware import auth_token
from oslo_config import cfg
from oslo_middleware import request_id
from oslo_service import service
import pecan
import tricircle.common.exceptions as t_exc
from tricircle.common.i18n import _
common_opts = [
cfg.StrOpt('bind_host', default='0.0.0.0',
help=_("The host IP to bind to")),
cfg.IntOpt('bind_port', default=19999,
help=_("The port to bind to")),
cfg.IntOpt('api_workers', default=1,
help=_("number of api workers")),
cfg.StrOpt('api_extensions_path', default="",
help=_("The path for API extensions")),
cfg.StrOpt('auth_strategy', default='keystone',
help=_("The type of authentication to use")),
cfg.BoolOpt('allow_bulk', default=True,
help=_("Allow the usage of the bulk API")),
cfg.BoolOpt('allow_pagination', default=False,
help=_("Allow the usage of the pagination")),
cfg.BoolOpt('allow_sorting', default=False,
help=_("Allow the usage of the sorting")),
cfg.StrOpt('pagination_max_limit', default="-1",
help=_("The maximum number of items returned in a single "
"response, value was 'infinite' or negative integer "
"means no limit")),
]
def setup_app(*args, **kwargs):
config = {
'server': {
'port': cfg.CONF.bind_port,
'host': cfg.CONF.bind_host
},
'app': {
'root': 'tricircle.api.controllers.root.RootController',
'modules': ['tricircle.api'],
'errors': {
400: '/error',
'__force_dict__': True
}
}
}
pecan_config = pecan.configuration.conf_from_dict(config)
# app_hooks = [], hook collection will be put here later
app = pecan.make_app(
pecan_config.app.root,
debug=False,
wrap_app=_wrap_app,
force_canonical=False,
hooks=[],
guess_content_type_from_ext=True
)
return app
def _wrap_app(app):
app = request_id.RequestId(app)
if cfg.CONF.auth_strategy == 'noauth':
pass
elif cfg.CONF.auth_strategy == 'keystone':
# NOTE(zhiyuan) pkg_resources will try to load tricircle to get module
# version, passing "project" as empty string to bypass it
app = auth_token.AuthProtocol(app, {'project': ''})
else:
raise t_exc.InvalidConfigurationOption(
opt_name='auth_strategy', opt_value=cfg.CONF.auth_strategy)
return app
_launcher = None
def serve(api_service, conf, workers=1):
global _launcher
if _launcher:
raise RuntimeError(_('serve() can only be called once'))
_launcher = service.launch(conf, api_service, workers=workers)
def wait():
_launcher.wait()
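
For illustration only, not part of this patch: an API entry point could wire setup_app() and serve() together roughly as below. The module path tricircle.common.config, the server name 'tricircle-api' and the use of oslo_service.wsgi are assumptions here.

    import sys

    from oslo_config import cfg
    from oslo_service import wsgi

    from tricircle.api import app
    from tricircle.common import config

    def main():
        # register common_opts, parse the command line, set up logging
        config.init(app.common_opts, sys.argv[1:])
        application = app.setup_app()
        server = wsgi.Server(cfg.CONF, 'tricircle-api', application,
                             host=cfg.CONF.bind_host, port=cfg.CONF.bind_port)
        app.serve(server, cfg.CONF, workers=cfg.CONF.api_workers)
        app.wait()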


193
tricircle/api/controllers/root.py Executable file

@ -0,0 +1,193 @@
# Copyright (c) 2015 Huawei Tech. Co., Ltd.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import uuid
import oslo_log.log as logging
import pecan
from pecan import request
from pecan import rest
from tricircle.common import client
import tricircle.common.context as t_context
from tricircle.common import exceptions
from tricircle.common import utils
from tricircle.db import models
LOG = logging.getLogger(__name__)
def expose(*args, **kwargs):
kwargs.setdefault('content_type', 'application/json')
kwargs.setdefault('template', 'json')
return pecan.expose(*args, **kwargs)
def when(index, *args, **kwargs):
kwargs.setdefault('content_type', 'application/json')
kwargs.setdefault('template', 'json')
return index.when(*args, **kwargs)
class RootController(object):
@expose()
def _lookup(self, version, *remainder):
if version == 'v1.0':
return V1Controller(), remainder
@pecan.expose('json')
def index(self):
return {
"versions": [
{
"status": "CURRENT",
"links": [
{
"rel": "self",
"href": pecan.request.application_url + "/v1.0/"
}
],
"id": "v1.0",
"updated": "2015-09-09"
}
]
}
class V1Controller(object):
def __init__(self):
self.sub_controllers = {
"sites": SitesController()
}
for name, ctrl in self.sub_controllers.items():
setattr(self, name, ctrl)
@pecan.expose('json')
def index(self):
return {
"version": "1.0",
"links": [
{"rel": "self",
"href": pecan.request.application_url + "/v1.0"}
] + [
{"rel": name,
"href": pecan.request.application_url + "/v1.0/" + name}
for name in sorted(self.sub_controllers)
]
}
def _extract_context_from_environ(environ):
context_paras = {'auth_token': 'HTTP_X_AUTH_TOKEN',
'user': 'HTTP_X_USER_ID',
'tenant': 'HTTP_X_TENANT_ID',
'user_name': 'HTTP_X_USER_NAME',
'tenant_name': 'HTTP_X_PROJECT_NAME',
'domain': 'HTTP_X_DOMAIN_ID',
'user_domain': 'HTTP_X_USER_DOMAIN_ID',
'project_domain': 'HTTP_X_PROJECT_DOMAIN_ID',
'request_id': 'openstack.request_id'}
for key in context_paras:
context_paras[key] = environ.get(context_paras[key])
role = environ.get('HTTP_X_ROLE')
# TODO(zhiyuan): replace with policy check
context_paras['is_admin'] = role == 'admin'
return t_context.Context(**context_paras)
def _get_environment():
return request.environ
class SitesController(rest.RestController):
"""ReST controller to handle CRUD operations of site resource"""
@expose()
def put(self, site_id, **kw):
return {'message': 'PUT'}
@expose()
def get_one(self, site_id):
context = _extract_context_from_environ(_get_environment())
try:
return {'site': models.get_site(context, site_id)}
except exceptions.ResourceNotFound:
pecan.abort(404, 'Site with id %s not found' % site_id)
@expose()
def get_all(self):
context = _extract_context_from_environ(_get_environment())
sites = models.list_sites(context, [])
return {'sites': sites}
@expose()
def post(self, **kw):
context = _extract_context_from_environ(_get_environment())
if not context.is_admin:
pecan.abort(400, 'Admin role required to create sites')
return
site_name = kw.get('name')
is_top_site = kw.get('top', False)
if not site_name:
pecan.abort(400, 'Name of site required')
return
site_filters = [{'key': 'site_name', 'comparator': 'eq',
'value': site_name}]
sites = models.list_sites(context, site_filters)
if sites:
pecan.abort(409, 'Site with name %s exists' % site_name)
return
ag_name = utils.get_ag_name(site_name)
# top site doesn't need az
az_name = utils.get_az_name(site_name) if not is_top_site else ''
try:
site_dict = {'site_id': str(uuid.uuid4()),
'site_name': site_name,
'az_id': az_name}
site = models.create_site(context, site_dict)
except Exception as e:
LOG.debug(e.message)
pecan.abort(500, 'Failed to create site')
return
# top site doesn't need aggregate
if is_top_site:
pecan.response.status = 201
return {'site': site}
else:
try:
top_client = client.Client()
top_client.create_aggregates(context, ag_name, az_name)
except Exception as e:
LOG.debug(e.message)
# delete previously created site
models.delete_site(context, site['site_id'])
pecan.abort(500, 'Failed to create aggregate')
return
pecan.response.status = 201
return {'site': site}
@expose()
def delete(self, site_id):
return {'message': 'DELETE'}
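
As an illustrative usage sketch, not part of this patch, a site could be created through the controller above with a plain HTTP client. The endpoint (default bind_port 19999), the token value and pecan's mapping of JSON bodies to keyword arguments are assumptions here:

    import json

    import requests

    ADMIN_TOKEN = 'replace-with-a-valid-admin-scoped-token'  # assumed value

    resp = requests.post(
        'http://127.0.0.1:19999/v1.0/sites',
        headers={'Content-Type': 'application/json',
                 'X-Auth-Token': ADMIN_TOKEN},
        data=json.dumps({'name': 'Site1', 'top': False}))

    print(resp.status_code)     # 201 when the site and its aggregate are created
    print(resp.json()['site'])  # e.g. {'site_id': ..., 'site_name': 'Site1', ...}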

22
tricircle/api/opts.py Normal file

@ -0,0 +1,22 @@
# Copyright 2015 Huawei Technologies Co., Ltd.
# All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import tricircle.api.app
def list_opts():
return [
('DEFAULT', tricircle.api.app.common_opts),
]


406
tricircle/common/client.py Normal file

@ -0,0 +1,406 @@
# Copyright 2015 Huawei Technologies Co., Ltd.
# All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import collections
import functools
import inspect
import six
import uuid
from keystoneclient.auth.identity import v3 as auth_identity
from keystoneclient.auth import token_endpoint
from keystoneclient import session
from keystoneclient.v3 import client as keystone_client
from oslo_config import cfg
from oslo_log import log as logging
import tricircle.common.context as tricircle_context
from tricircle.common import exceptions
from tricircle.common import resource_handle
from tricircle.db import models
client_opts = [
cfg.StrOpt('auth_url',
default='http://127.0.0.1:5000/v3',
help='keystone authorization url'),
cfg.StrOpt('identity_url',
default='http://127.0.0.1:35357/v3',
help='keystone service url'),
cfg.BoolOpt('auto_refresh_endpoint',
default=False,
help='if set to True, endpoint will be automatically '
'refreshed if a timeout occurs when accessing the endpoint'),
cfg.StrOpt('top_site_name',
help='name of top site which client needs to access'),
cfg.StrOpt('admin_username',
help='username of admin account, needed when'
' auto_refresh_endpoint set to True'),
cfg.StrOpt('admin_password',
help='password of admin account, needed when'
' auto_refresh_endpoint set to True'),
cfg.StrOpt('admin_tenant',
help='tenant name of admin account, needed when'
' auto_refresh_endpoint set to True'),
cfg.StrOpt('admin_user_domain_name',
default='Default',
help='user domain name of admin account, needed when'
' auto_refresh_endpoint set to True'),
cfg.StrOpt('admin_tenant_domain_name',
default='Default',
help='tenant domain name of admin account, needed when'
' auto_refresh_endpoint set to True')
]
client_opt_group = cfg.OptGroup('client')
cfg.CONF.register_group(client_opt_group)
cfg.CONF.register_opts(client_opts, group=client_opt_group)
LOG = logging.getLogger(__name__)
def _safe_operation(operation_name):
def handle_func(func):
@six.wraps(func)
def handle_args(*args, **kwargs):
instance, resource, context = args[:3]
if resource not in instance.operation_resources_map[
operation_name]:
raise exceptions.ResourceNotSupported(resource, operation_name)
retries = 1
for _ in six.moves.range(retries + 1):
try:
service = instance.resource_service_map[resource]
instance._ensure_endpoint_set(context, service)
return func(*args, **kwargs)
except exceptions.EndpointNotAvailable as e:
if cfg.CONF.client.auto_refresh_endpoint:
LOG.warn(e.message + ', update endpoint and try again')
instance._update_endpoint_from_keystone(context, True)
else:
raise
return handle_args
return handle_func
class Client(object):
def __init__(self, site_name=None):
self.auth_url = cfg.CONF.client.auth_url
self.resource_service_map = {}
self.operation_resources_map = collections.defaultdict(set)
self.service_handle_map = {}
self.site_name = site_name
if not self.site_name:
self.site_name = cfg.CONF.client.top_site_name
for _, handle_class in inspect.getmembers(resource_handle):
if not inspect.isclass(handle_class):
continue
if not hasattr(handle_class, 'service_type'):
continue
handle_obj = handle_class(self.auth_url)
self.service_handle_map[handle_obj.service_type] = handle_obj
for resource in handle_obj.support_resource:
self.resource_service_map[resource] = handle_obj.service_type
for operation, index in six.iteritems(
resource_handle.operation_index_map):
# add parentheses to emphasize we mean to do bitwise and
if (handle_obj.support_resource[resource] & index) == 0:
continue
self.operation_resources_map[operation].add(resource)
setattr(self, '%s_%ss' % (operation, resource),
functools.partial(
getattr(self, '%s_resources' % operation),
resource))
def _get_keystone_session(self):
auth = auth_identity.Password(
auth_url=cfg.CONF.client.identity_url,
username=cfg.CONF.client.admin_username,
password=cfg.CONF.client.admin_password,
project_name=cfg.CONF.client.admin_tenant,
user_domain_name=cfg.CONF.client.admin_user_domain_name,
project_domain_name=cfg.CONF.client.admin_tenant_domain_name)
return session.Session(auth=auth)
def _get_admin_token(self):
return self._get_keystone_session().get_token()
def _get_admin_project_id(self):
return self._get_keystone_session().get_project_id()
def _get_endpoint_from_keystone(self, cxt):
auth = token_endpoint.Token(cfg.CONF.client.identity_url,
cxt.auth_token)
sess = session.Session(auth=auth)
cli = keystone_client.Client(session=sess)
service_id_name_map = {}
for service in cli.services.list():
service_dict = service.to_dict()
service_id_name_map[service_dict['id']] = service_dict['name']
region_service_endpoint_map = {}
for endpoint in cli.endpoints.list():
endpoint_dict = endpoint.to_dict()
if endpoint_dict['interface'] != 'public':
continue
region_id = endpoint_dict['region']
service_id = endpoint_dict['service_id']
url = endpoint_dict['url']
service_name = service_id_name_map[service_id]
if region_id not in region_service_endpoint_map:
region_service_endpoint_map[region_id] = {}
region_service_endpoint_map[region_id][service_name] = url
return region_service_endpoint_map
def _get_config_with_retry(self, cxt, filters, site, service, retry):
conf_list = models.list_site_service_configurations(cxt, filters)
if len(conf_list) > 1:
raise exceptions.EndpointNotUnique(site, service)
if len(conf_list) == 0:
if not retry:
raise exceptions.EndpointNotFound(site, service)
self._update_endpoint_from_keystone(cxt, True)
return self._get_config_with_retry(cxt,
filters, site, service, False)
return conf_list
def _ensure_endpoint_set(self, cxt, service):
handle = self.service_handle_map[service]
if not handle.is_endpoint_url_set():
site_filters = [{'key': 'site_name',
'comparator': 'eq',
'value': self.site_name}]
site_list = models.list_sites(cxt, site_filters)
if len(site_list) == 0:
raise exceptions.ResourceNotFound(models.Site,
self.site_name)
# site_name is unique key, safe to get the first element
site_id = site_list[0]['site_id']
config_filters = [
{'key': 'site_id', 'comparator': 'eq', 'value': site_id},
{'key': 'service_type', 'comparator': 'eq', 'value': service}]
conf_list = self._get_config_with_retry(
cxt, config_filters, site_id, service,
cfg.CONF.client.auto_refresh_endpoint)
url = conf_list[0]['service_url']
handle.update_endpoint_url(url)
def _update_endpoint_from_keystone(self, cxt, is_internal):
"""Update the database by querying service endpoint url from Keystone
:param cxt: context object
:param is_internal: if True, this method uses the pre-configured admin
username and password to apply for a new admin token; this happens only
when auto_refresh_endpoint is set to True. If False, the token in cxt is
used directly, so callers should prepare an admin token themselves
:return: None
"""
if is_internal:
admin_context = tricircle_context.Context()
admin_context.auth_token = self._get_admin_token()
endpoint_map = self._get_endpoint_from_keystone(admin_context)
else:
endpoint_map = self._get_endpoint_from_keystone(cxt)
for region in endpoint_map:
# use region name to query site
site_filters = [{'key': 'site_name', 'comparator': 'eq',
'value': region}]
site_list = models.list_sites(cxt, site_filters)
# skip region/site not registered in cascade service
if len(site_list) != 1:
continue
for service in endpoint_map[region]:
site_id = site_list[0]['site_id']
config_filters = [{'key': 'site_id', 'comparator': 'eq',
'value': site_id},
{'key': 'service_type', 'comparator': 'eq',
'value': service}]
config_list = models.list_site_service_configurations(
cxt, config_filters)
if len(config_list) > 1:
raise exceptions.EndpointNotUnique(site_id, service)
if len(config_list) == 1:
config_id = config_list[0]['service_id']
update_dict = {
'service_url': endpoint_map[region][service]}
models.update_site_service_configuration(
cxt, config_id, update_dict)
else:
config_dict = {
'service_id': str(uuid.uuid4()),
'site_id': site_id,
'service_type': service,
'service_url': endpoint_map[region][service]
}
models.create_site_service_configuration(
cxt, config_dict)
def get_endpoint(self, cxt, site_id, service):
"""Get endpoint url of given site and service
:param cxt: context object
:param site_id: site id
:param service: service type
:return: endpoint url for given site and service
:raises: EndpointNotUnique, EndpointNotFound
"""
config_filters = [
{'key': 'site_id', 'comparator': 'eq', 'value': site_id},
{'key': 'service_type', 'comparator': 'eq', 'value': service}]
conf_list = self._get_config_with_retry(
cxt, config_filters, site_id, service,
cfg.CONF.client.auto_refresh_endpoint)
return conf_list[0]['service_url']
def update_endpoint_from_keystone(self, cxt):
"""Update the database by querying service endpoint url from Keystone
Only admin should invoke this method since it requires admin token
:param cxt: context object containing admin token
:return: None
"""
self._update_endpoint_from_keystone(cxt, False)
@_safe_operation('list')
def list_resources(self, resource, cxt, filters=None):
"""Query resource in site of top layer
Directly invoke this method to query resources, or use
list_(resource)s (self, cxt, filters=None), for example,
list_servers (self, cxt, filters=None). These methods are
automatically generated according to the supported resources
of each ResourceHandle class.
:param resource: resource type
:param cxt: context object
:param filters: list of dict with key 'key', 'comparator', 'value'
like {'key': 'name', 'comparator': 'eq', 'value': 'private'}, 'key'
is the field name of resources
:return: list of dict containing resources information
:raises: EndpointNotAvailable
"""
if cxt.is_admin and not cxt.auth_token:
cxt.auth_token = self._get_admin_token()
cxt.tenant = self._get_admin_project_id()
service = self.resource_service_map[resource]
handle = self.service_handle_map[service]
filters = filters or []
return handle.handle_list(cxt, resource, filters)
@_safe_operation('create')
def create_resources(self, resource, cxt, *args, **kwargs):
"""Create resource in site of top layer
Directly invoke this method to create resources, or use
create_(resource)s (self, cxt, *args, **kwargs). These methods are
automatically generated according to the supported resources of each
ResourceHandle class.
:param resource: resource type
:param cxt: context object
:param args, kwargs: passed according to resource type
--------------------------
resource -> args -> kwargs
--------------------------
aggregate -> name, availability_zone_name -> none
--------------------------
:return: a dict containing resource information
:raises: EndpointNotAvailable
"""
if cxt.is_admin and not cxt.auth_token:
cxt.auth_token = self._get_admin_token()
cxt.tenant = self._get_admin_project_id()
service = self.resource_service_map[resource]
handle = self.service_handle_map[service]
return handle.handle_create(cxt, resource, *args, **kwargs)
@_safe_operation('delete')
def delete_resources(self, resource, cxt, resource_id):
"""Delete resource in site of top layer
Directly invoke this method to delete resources, or use
delete_(resource)s (self, cxt, obj_id). These methods are
automatically generated according to the supported resources
of each ResourceHandle class.
:param resource: resource type
:param cxt: context object
:param resource_id: id of resource
:return: None
:raises: EndpointNotAvailable
"""
if cxt.is_admin and not cxt.auth_token:
cxt.auth_token = self._get_admin_token()
cxt.tenant = self._get_admin_project_id()
service = self.resource_service_map[resource]
handle = self.service_handle_map[service]
handle.handle_delete(cxt, resource, resource_id)
@_safe_operation('get')
def get_resources(self, resource, cxt, resource_id):
"""Get resource in site of top layer
Directly invoke this method to get resources, or use
get(resource)s (self, cxt, obj_id). These methods are
automatically generated according to the supported resources
of each ResourceHandle class.
:param resource: resource type
:param cxt: context object
:param resource_id: id of resource
:return: a dict containing resource information
:raises: EndpointNotAvailable
"""
if cxt.is_admin and not cxt.auth_token:
cxt.auth_token = self._get_admin_token()
cxt.tenant = self._get_admin_project_id()
service = self.resource_service_map[resource]
handle = self.service_handle_map[service]
return handle.handle_get(cxt, resource, resource_id)
@_safe_operation('action')
def action_resources(self, resource, cxt, action, *args, **kwargs):
"""Apply action on resource in site of top layer
Directly invoke this method to apply action, or use
action_(resource)s (self, cxt, action, *args, **kwargs). These methods
are automatically generated according to the supported resources of
each ResourceHandle class.
:param resource: resource type
:param cxt: context object
:param action: action applied on resource
:param args, kwargs: passed according to resource type
--------------------------
resource -> action -> args -> kwargs
--------------------------
aggregate -> add_host -> aggregate, host -> none
volume -> set_bootable -> volume, flag -> none
--------------------------
:return: None
:raises: EndpointNotAvailable
"""
if cxt.is_admin and not cxt.auth_token:
cxt.auth_token = self._get_admin_token()
cxt.tenant = self._get_admin_project_id()
service = self.resource_service_map[resource]
handle = self.service_handle_map[service]
return handle.handle_action(cxt, resource, action, *args, **kwargs)
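
A brief usage sketch, not part of this patch, of the Client wrapper defined above. It assumes the [client] options (auth_url, top_site_name, admin credentials) are already configured; the 'Site1' names are made-up examples:

    from tricircle.common import client
    from tricircle.common import context

    cxt = context.get_admin_context()   # admin token is fetched automatically
    top_client = client.Client()        # defaults to CONF.client.top_site_name

    # the generic call and the generated per-resource shortcut are equivalent
    images = top_client.list_resources('image', cxt)
    images = top_client.list_images(cxt)

    # aggregate and az names follow the helpers in tricircle.common.utils
    top_client.create_aggregates(cxt, 'ag_Site1', 'az_Site1')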


@ -0,0 +1,67 @@
# Copyright 2015 Huawei Technologies Co., Ltd.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Routines for configuring tricircle, largely copied from Neutron
"""
import sys
from oslo_config import cfg
import oslo_log.log as logging
from tricircle.common.i18n import _LI
# from tricircle import policy
from tricircle.common import version
LOG = logging.getLogger(__name__)
def init(opts, args, **kwargs):
# Register the configuration options
cfg.CONF.register_opts(opts)
# ks_session.Session.register_conf_options(cfg.CONF)
# auth.register_conf_options(cfg.CONF)
logging.register_options(cfg.CONF)
cfg.CONF(args=args, project='tricircle',
version='%%(prog)s %s' % version.version_info.release_string(),
**kwargs)
_setup_logging()
def _setup_logging():
"""Sets up the logging options for a log with supplied name."""
product_name = "tricircle"
logging.setup(cfg.CONF, product_name)
LOG.info(_LI("Logging enabled!"))
LOG.info(_LI("%(prog)s version %(version)s"),
{'prog': sys.argv[0],
'version': version.version_info.release_string()})
LOG.debug("command line: %s", " ".join(sys.argv))
def reset_service():
# Reset worker in case SIGHUP is called.
# Note that this is called only in case a service is running in
# daemon mode.
_setup_logging()
# TODO(zhiyuan) enforce policy later
# policy.refresh()


@ -0,0 +1,73 @@
# Copyright 2015 Huawei Technologies Co., Ltd.
# All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import oslo_context.context as oslo_ctx
from tricircle.db import core
def get_db_context():
return Context()
def get_admin_context():
ctx = Context()
ctx.is_admin = True
return ctx
class ContextBase(oslo_ctx.RequestContext):
def __init__(self, auth_token=None, user_id=None, tenant_id=None,
is_admin=False, request_id=None, overwrite=True,
user_name=None, tenant_name=None, **kwargs):
super(ContextBase, self).__init__(
auth_token=auth_token,
user=user_id or kwargs.get('user', None),
tenant=tenant_id or kwargs.get('tenant', None),
domain=kwargs.get('domain', None),
user_domain=kwargs.get('user_domain', None),
project_domain=kwargs.get('project_domain', None),
is_admin=is_admin,
read_only=kwargs.get('read_only', False),
show_deleted=kwargs.get('show_deleted', False),
request_id=request_id,
resource_uuid=kwargs.get('resource_uuid', None),
overwrite=overwrite)
self.user_name = user_name
self.tenant_name = tenant_name
def to_dict(self):
ctx_dict = super(ContextBase, self).to_dict()
ctx_dict.update({
'user_name': self.user_name,
'tenant_name': self.tenant_name
})
return ctx_dict
@classmethod
def from_dict(cls, ctx):
return cls(**ctx)
class Context(ContextBase):
def __init__(self, **kwargs):
super(Context, self).__init__(**kwargs)
self._session = None
@property
def session(self):
if not self._session:
self._session = core.get_session()
return self._session


@ -0,0 +1,122 @@
# Copyright 2015 Huawei Technologies Co., Ltd.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Tricircle base exception handling.
"""
import six
from oslo_utils import excutils
from tricircle.common.i18n import _
class TricircleException(Exception):
"""Base Tricircle Exception.
To correctly use this class, inherit from it and define
a 'message' property. That message will get printf'd
with the keyword arguments provided to the constructor.
"""
message = _("An unknown exception occurred.")
def __init__(self, **kwargs):
try:
super(TricircleException, self).__init__(self.message % kwargs)
self.msg = self.message % kwargs
except Exception:
with excutils.save_and_reraise_exception() as ctxt:
if not self.use_fatal_exceptions():
ctxt.reraise = False
# at least get the core message out if something happened
super(TricircleException, self).__init__(self.message)
if six.PY2:
def __unicode__(self):
return unicode(self.msg)
def use_fatal_exceptions(self):
return False
class BadRequest(TricircleException):
message = _('Bad %(resource)s request: %(msg)s')
class NotFound(TricircleException):
pass
class Conflict(TricircleException):
pass
class NotAuthorized(TricircleException):
message = _("Not authorized.")
class ServiceUnavailable(TricircleException):
message = _("The service is unavailable")
class AdminRequired(NotAuthorized):
message = _("User does not have admin privileges: %(reason)s")
class InUse(TricircleException):
message = _("The resource is inuse")
class InvalidConfigurationOption(TricircleException):
message = _("An invalid value was provided for %(opt_name)s: "
"%(opt_value)s")
class EndpointNotAvailable(TricircleException):
message = "Endpoint %(url)s for %(service)s is not available"
def __init__(self, service, url):
super(EndpointNotAvailable, self).__init__(service=service, url=url)
class EndpointNotUnique(TricircleException):
message = "Endpoint for %(service)s in %(site)s not unique"
def __init__(self, site, service):
super(EndpointNotUnique, self).__init__(site=site, service=service)
class EndpointNotFound(TricircleException):
message = "Endpoint for %(service)s in %(site)s not found"
def __init__(self, site, service):
super(EndpointNotFound, self).__init__(site=site, service=service)
class ResourceNotFound(TricircleException):
message = "Could not find %(resource_type)s: %(unique_key)s"
def __init__(self, model, unique_key):
resource_type = model.__name__.lower()
super(ResourceNotFound, self).__init__(resource_type=resource_type,
unique_key=unique_key)
class ResourceNotSupported(TricircleException):
message = "%(method)s method not supported for %(resource)s"
def __init__(self, resource, method):
super(ResourceNotSupported, self).__init__(resource=resource,
method=method)

30
tricircle/common/i18n.py Normal file

@ -0,0 +1,30 @@
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import oslo_i18n
_translators = oslo_i18n.TranslatorFactory(domain='tricircle')
# The primary translation function using the well-known name "_"
_ = _translators.primary
# Translators for log levels.
#
# The abbreviated names are meant to reflect the usual use of a short
# name like '_'. The "L" is for "log" and the other letter comes from
# the level.
_LI = _translators.log_info
_LW = _translators.log_warning
_LE = _translators.log_error
_LC = _translators.log_critical

22
tricircle/common/opts.py Normal file

@ -0,0 +1,22 @@
# Copyright 2015 Huawei Technologies Co., Ltd.
# All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import tricircle.common.client
def list_opts():
return [
('client', tricircle.common.client.client_opts),
]


@ -0,0 +1,248 @@
# Copyright 2015 Huawei Technologies Co., Ltd.
# All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from cinderclient import client as c_client
from cinderclient import exceptions as c_exceptions
import glanceclient as g_client
import glanceclient.exc as g_exceptions
from neutronclient.common import exceptions as q_exceptions
from neutronclient.neutron import client as q_client
from novaclient import client as n_client
from novaclient import exceptions as n_exceptions
from oslo_config import cfg
from oslo_log import log as logging
from requests import exceptions as r_exceptions
from tricircle.common import exceptions
client_opts = [
cfg.IntOpt('cinder_timeout',
default=60,
help='timeout for cinder client in seconds'),
cfg.IntOpt('glance_timeout',
default=60,
help='timeout for glance client in seconds'),
cfg.IntOpt('neutron_timeout',
default=60,
help='timeout for neutron client in seconds'),
cfg.IntOpt('nova_timeout',
default=60,
help='timeout for nova client in seconds'),
]
cfg.CONF.register_opts(client_opts, group='client')
LIST, CREATE, DELETE, GET, ACTION = 1, 2, 4, 8, 16
operation_index_map = {'list': LIST, 'create': CREATE,
'delete': DELETE, 'get': GET, 'action': ACTION}
LOG = logging.getLogger(__name__)
def _transform_filters(filters):
filter_dict = {}
for query_filter in filters:
# only eq filter supported at first
if query_filter['comparator'] != 'eq':
continue
key = query_filter['key']
value = query_filter['value']
filter_dict[key] = value
return filter_dict
class ResourceHandle(object):
def __init__(self, auth_url):
self.auth_url = auth_url
self.endpoint_url = None
def is_endpoint_url_set(self):
return self.endpoint_url is not None
def update_endpoint_url(self, url):
self.endpoint_url = url
class GlanceResourceHandle(ResourceHandle):
service_type = 'glance'
support_resource = {'image': LIST}
def _get_client(self, cxt):
return g_client.Client('1',
token=cxt.auth_token,
auth_url=self.auth_url,
endpoint=self.endpoint_url,
timeout=cfg.CONF.client.glance_timeout)
def handle_list(self, cxt, resource, filters):
try:
client = self._get_client(cxt)
collection = '%ss' % resource
return [res.to_dict() for res in getattr(
client, collection).list(filters=_transform_filters(filters))]
except g_exceptions.InvalidEndpoint:
self.endpoint_url = None
raise exceptions.EndpointNotAvailable('glance',
client.http_client.endpoint)
class NeutronResourceHandle(ResourceHandle):
service_type = 'neutron'
support_resource = {'network': LIST,
'subnet': LIST,
'port': LIST,
'router': LIST,
'security_group': LIST,
'security_group_rule': LIST}
def _get_client(self, cxt):
return q_client.Client('2.0',
token=cxt.auth_token,
auth_url=self.auth_url,
endpoint_url=self.endpoint_url,
timeout=cfg.CONF.client.neutron_timeout)
def handle_list(self, cxt, resource, filters):
try:
client = self._get_client(cxt)
collection = '%ss' % resource
search_opts = _transform_filters(filters)
return [res for res in getattr(
client, 'list_%s' % collection)(**search_opts)[collection]]
except q_exceptions.ConnectionFailed:
self.endpoint_url = None
raise exceptions.EndpointNotAvailable(
'neutron', client.httpclient.endpoint_url)
class NovaResourceHandle(ResourceHandle):
service_type = 'nova'
support_resource = {'flavor': LIST,
'server': LIST,
'aggregate': LIST | CREATE | DELETE | ACTION}
def _get_client(self, cxt):
cli = n_client.Client('2',
auth_token=cxt.auth_token,
auth_url=self.auth_url,
timeout=cfg.CONF.client.nova_timeout)
cli.set_management_url(
self.endpoint_url.replace('$(tenant_id)s', cxt.tenant))
return cli
def handle_list(self, cxt, resource, filters):
try:
client = self._get_client(cxt)
collection = '%ss' % resource
# only server list supports filter
if resource == 'server':
search_opts = _transform_filters(filters)
return [res.to_dict() for res in getattr(
client, collection).list(search_opts=search_opts)]
else:
return [res.to_dict() for res in getattr(client,
collection).list()]
except r_exceptions.ConnectTimeout:
self.endpoint_url = None
raise exceptions.EndpointNotAvailable('nova',
client.client.management_url)
def handle_create(self, cxt, resource, *args, **kwargs):
try:
client = self._get_client(cxt)
collection = '%ss' % resource
return getattr(client, collection).create(
*args, **kwargs).to_dict()
except r_exceptions.ConnectTimeout:
self.endpoint_url = None
raise exceptions.EndpointNotAvailable('nova',
client.client.management_url)
def handle_delete(self, cxt, resource, resource_id):
try:
client = self._get_client(cxt)
collection = '%ss' % resource
return getattr(client, collection).delete(resource_id)
except r_exceptions.ConnectTimeout:
self.endpoint_url = None
raise exceptions.EndpointNotAvailable('nova',
client.client.management_url)
except n_exceptions.NotFound:
LOG.debug("Delete %(resource)s %(resource_id)s which not found",
{'resource': resource, 'resource_id': resource_id})
def handle_action(self, cxt, resource, action, *args, **kwargs):
try:
client = self._get_client(cxt)
collection = '%ss' % resource
resource_manager = getattr(client, collection)
getattr(resource_manager, action)(*args, **kwargs)
except r_exceptions.ConnectTimeout:
self.endpoint_url = None
raise exceptions.EndpointNotAvailable('nova',
client.client.management_url)
class CinderResourceHandle(ResourceHandle):
service_type = 'cinder'
support_resource = {'volume': GET | ACTION,
'transfer': CREATE | ACTION}
def _get_client(self, cxt):
cli = c_client.Client('2',
auth_token=cxt.auth_token,
auth_url=self.auth_url,
timeout=cfg.CONF.client.cinder_timeout)
cli.set_management_url(
self.endpoint_url.replace('$(tenant_id)s', cxt.tenant))
return cli
def handle_get(self, cxt, resource, resource_id):
try:
client = self._get_client(cxt)
collection = '%ss' % resource
res = getattr(client, collection).get(resource_id)
info = {}
info.update(res._info)
return info
except r_exceptions.ConnectTimeout:
self.endpoint_url = None
raise exceptions.EndpointNotAvailable('cinder',
client.client.management_url)
def handle_delete(self, cxt, resource, resource_id):
try:
client = self._get_client(cxt)
collection = '%ss' % resource
return getattr(client, collection).delete(resource_id)
except r_exceptions.ConnectTimeout:
self.endpoint_url = None
raise exceptions.EndpointNotAvailable('cinder',
client.client.management_url)
except c_exceptions.NotFound:
LOG.debug("Delete %(resource)s %(resource_id)s which not found",
{'resource': resource, 'resource_id': resource_id})
def handle_action(self, cxt, resource, action, *args, **kwargs):
try:
client = self._get_client(cxt)
collection = '%ss' % resource
resource_manager = getattr(client, collection)
getattr(resource_manager, action)(*args, **kwargs)
except r_exceptions.ConnectTimeout:
self.endpoint_url = None
raise exceptions.EndpointNotAvailable('cinder',
client.client.management_url)
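
A small illustrative check, not part of this patch, of the bitmask scheme above (LIST, CREATE, DELETE, GET, ACTION combined per resource); the keystone URL is simply the [client] auth_url default:

    from tricircle.common import resource_handle as rh

    handle = rh.NovaResourceHandle('http://127.0.0.1:5000/v3')
    # aggregates can be created through this handle, servers can only be listed
    assert handle.support_resource['aggregate'] & rh.CREATE
    assert not (handle.support_resource['server'] & rh.DELETE)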

30
tricircle/common/utils.py Normal file

@ -0,0 +1,30 @@
# Copyright 2015 Huawei Technologies Co., Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
def get_import_path(cls):
return cls.__module__ + "." + cls.__name__
def get_ag_name(site_name):
return 'ag_%s' % site_name
def get_az_name(site_name):
return 'az_%s' % site_name
def get_node_name(site_name):
return "cascade_%s" % site_name


@ -0,0 +1,17 @@
# Copyright 2011 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import pbr.version
version_info = pbr.version.VersionInfo('tricircle')

0
tricircle/db/__init__.py Normal file

143
tricircle/db/core.py Normal file

@ -0,0 +1,143 @@
# Copyright 2015 Huawei Technologies Co., Ltd.
# All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
import oslo_db.options as db_options
import oslo_db.sqlalchemy.session as db_session
from oslo_utils import strutils
import sqlalchemy as sql
from sqlalchemy.ext import declarative
from sqlalchemy.inspection import inspect
from tricircle.common import exceptions
_engine_facade = None
ModelBase = declarative.declarative_base()
def _filter_query(model, query, filters):
"""Apply filter to query
:param model: model class of the resources being queried
:param query: sqlalchemy query object to apply the filters to
:param filters: list of filter dict with key 'key', 'comparator', 'value',
like {'key': 'site_id', 'comparator': 'eq', 'value': 'test_site_uuid'}
:return: the query with the filters applied
"""
filter_dict = {}
for query_filter in filters:
# only eq filter supported at first
if query_filter['comparator'] != 'eq':
continue
key = query_filter['key']
if key not in model.attributes:
continue
if isinstance(inspect(model).columns[key].type, sql.Boolean):
filter_dict[key] = strutils.bool_from_string(query_filter['value'])
else:
filter_dict[key] = query_filter['value']
if filter_dict:
return query.filter_by(**filter_dict)
else:
return query
def _get_engine_facade():
global _engine_facade
if not _engine_facade:
_engine_facade = db_session.EngineFacade.from_config(cfg.CONF)
return _engine_facade
def _get_resource(context, model, pk_value):
res_obj = context.session.query(model).get(pk_value)
if not res_obj:
raise exceptions.ResourceNotFound(model, pk_value)
return res_obj
def create_resource(context, model, res_dict):
res_obj = model.from_dict(res_dict)
context.session.add(res_obj)
context.session.flush()
# retrieve auto-generated fields
context.session.refresh(res_obj)
return res_obj.to_dict()
def delete_resource(context, model, pk_value):
res_obj = _get_resource(context, model, pk_value)
context.session.delete(res_obj)
def get_engine():
return _get_engine_facade().get_engine()
def get_resource(context, model, pk_value):
return _get_resource(context, model, pk_value).to_dict()
def get_session(expire_on_commit=False):
return _get_engine_facade().get_session(expire_on_commit=expire_on_commit)
def initialize():
db_options.set_defaults(
cfg.CONF,
connection='sqlite:///:memory:')
def query_resource(context, model, filters):
query = context.session.query(model)
objs = _filter_query(model, query, filters)
return [obj.to_dict() for obj in objs]
def update_resource(context, model, pk_value, update_dict):
res_obj = _get_resource(context, model, pk_value)
for key in update_dict:
if key not in model.attributes:
continue
skip = False
for pkey in inspect(model).primary_key:
if pkey.name == key:
skip = True
break
if skip:
continue
setattr(res_obj, key, update_dict[key])
return res_obj.to_dict()
class DictBase(object):
attributes = []
@classmethod
def from_dict(cls, d):
return cls(**d)
def to_dict(self):
d = {}
for attr in self.__class__.attributes:
d[attr] = getattr(self, attr)
return d
def __getitem__(self, key):
return getattr(self, key)
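
An illustrative sketch, not part of this patch, combining the helpers above with the in-memory sqlite default; the Site model comes from tricircle.db.models (defined later in this patch) and the id values are made up:

    from tricircle.common import context
    from tricircle.db import core
    from tricircle.db import models

    core.initialize()                                       # sqlite:///:memory:
    core.ModelBase.metadata.create_all(core.get_engine())   # create the tables

    cxt = context.get_db_context()
    with cxt.session.begin():
        core.create_resource(cxt, models.Site,
                             {'site_id': 'uuid-1',
                              'site_name': 'Site1',
                              'az_id': 'az_Site1'})

    sites = core.query_resource(cxt, models.Site,
                                [{'key': 'site_name', 'comparator': 'eq',
                                  'value': 'Site1'}])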


@ -1,4 +1,5 @@
# Copyright 2012 Red Hat, Inc.
# Copyright 2015 Huawei Technologies Co., Ltd.
# All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
@ -11,3 +12,6 @@
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
DB_INIT_VERSION = 0


@ -0,0 +1,26 @@
[db_settings]
# Used to identify which repository this database is versioned under.
# You can use the name of your project.
repository_id=tricircle
# The name of the database table used to track the schema version.
# This name shouldn't already be used by your project.
# If this is changed once a database is under version control, you'll need to
# change the table name in each database too.
version_table=migrate_version
# When committing a change script, Migrate will attempt to generate the
# sql for all supported databases; normally, if one of them fails - probably
# because you don't have that database installed - it is ignored and the
# commit continues, perhaps ending successfully.
# Databases in this list MUST compile successfully during a commit, or the
# entire commit will fail. List the databases your application will actually
# be using to ensure your updates to that database work properly.
# This must be a list; example: ['postgres','sqlite']
required_dbs=[]
# When creating new change scripts, Migrate will stamp the new script with
# a version number. By default this is latest_version + 1. You can set this
# to 'true' to tell Migrate to use the UTC timestamp instead.
use_timestamp_numbering=False


@ -0,0 +1,54 @@
# Copyright 2015 Huawei Technologies Co., Ltd.
# All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import migrate
import sqlalchemy as sql
def upgrade(migrate_engine):
meta = sql.MetaData()
meta.bind = migrate_engine
cascaded_sites = sql.Table(
'cascaded_sites', meta,
sql.Column('site_id', sql.String(length=64), primary_key=True),
sql.Column('site_name', sql.String(length=64), unique=True,
nullable=False),
sql.Column('az_id', sql.String(length=64), nullable=False),
mysql_engine='InnoDB',
mysql_charset='utf8')
cascaded_site_service_configuration = sql.Table(
'cascaded_site_service_configuration', meta,
sql.Column('service_id', sql.String(length=64), primary_key=True),
sql.Column('site_id', sql.String(length=64), nullable=False),
sql.Column('service_type', sql.String(length=64), nullable=False),
sql.Column('service_url', sql.String(length=512), nullable=False),
mysql_engine='InnoDB',
mysql_charset='utf8')
tables = [cascaded_sites, cascaded_site_service_configuration]
for table in tables:
table.create()
fkey = {'columns': [cascaded_site_service_configuration.c.site_id],
'references': [cascaded_sites.c.site_id]}
migrate.ForeignKeyConstraint(columns=fkey['columns'],
refcolumns=fkey['references'],
name=fkey.get('name')).create()
def downgrade(migrate_engine):
raise NotImplementedError('can not downgrade from init repo.')


@ -0,0 +1,38 @@
# Copyright 2015 Huawei Technologies Co., Ltd.
# All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
from oslo_db.sqlalchemy import migration
from tricircle import db
from tricircle.db import core
from tricircle.db import migrate_repo
def find_migrate_repo(package=None, repo_name='migrate_repo'):
package = package or db
path = os.path.abspath(os.path.join(
os.path.dirname(package.__file__), repo_name))
# TODO(zhiyuan) handle path not valid exception
return path
def sync_repo(version):
repo_abs_path = find_migrate_repo()
init_version = migrate_repo.DB_INIT_VERSION
engine = core.get_engine()
migration.db_sync(engine, repo_abs_path, version, init_version)
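`find_migrate_repo` resolves the on-disk path of the `migrate_repo` package and `sync_repo` hands it to oslo.db together with `DB_INIT_VERSION`, so bringing a fresh database up to the current schema is a one-liner once a connection is configured. A rough sketch, assuming this helper module is importable as `tricircle.db.migration_helpers` (the file name is not shown in this hunk) and that a real MySQL database is targeted, since the 001 script above adds a foreign key after creating the tables:

```python
# Sketch only: the module path below is an assumption, and the connection
# string is a placeholder for a real database.
from oslo_config import cfg
from oslo_db import options as db_options

from tricircle.db import migration_helpers

db_options.set_defaults(
    cfg.CONF, connection='mysql://user:password@127.0.0.1/tricircle')
# Passing None asks oslo.db for the latest version in the repository,
# which is currently the 001 initial script.
migration_helpers.sync_repo(None)
```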

90
tricircle/db/models.py Normal file
View File

@ -0,0 +1,90 @@
# Copyright 2015 Huawei Technologies Co., Ltd.
# All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import sqlalchemy as sql
from tricircle.db import core
def create_site(context, site_dict):
with context.session.begin():
return core.create_resource(context, Site, site_dict)
def delete_site(context, site_id):
with context.session.begin():
return core.delete_resource(context, Site, site_id)
def get_site(context, site_id):
with context.session.begin():
return core.get_resource(context, Site, site_id)
def list_sites(context, filters):
with context.session.begin():
return core.query_resource(context, Site, filters)
def update_site(context, site_id, update_dict):
with context.session.begin():
return core.update_resource(context, Site, site_id, update_dict)
def create_site_service_configuration(context, config_dict):
with context.session.begin():
return core.create_resource(context, SiteServiceConfiguration,
config_dict)
def delete_site_service_configuration(context, config_id):
with context.session.begin():
return core.delete_resource(context,
SiteServiceConfiguration, config_id)
def list_site_service_configurations(context, filters):
with context.session.begin():
return core.query_resource(context, SiteServiceConfiguration, filters)
def update_site_service_configuration(context, config_id, update_dict):
with context.session.begin():
return core.update_resource(
context, SiteServiceConfiguration, config_id, update_dict)
class Site(core.ModelBase, core.DictBase):
__tablename__ = 'cascaded_sites'
attributes = ['site_id', 'site_name', 'az_id']
site_id = sql.Column('site_id', sql.String(length=64), primary_key=True)
site_name = sql.Column('site_name', sql.String(length=64), unique=True,
nullable=False)
az_id = sql.Column('az_id', sql.String(length=64), nullable=False)
class SiteServiceConfiguration(core.ModelBase, core.DictBase):
__tablename__ = 'cascaded_site_service_configuration'
attributes = ['service_id', 'site_id', 'service_type', 'service_url']
service_id = sql.Column('service_id', sql.String(length=64),
primary_key=True)
site_id = sql.Column('site_id', sql.String(length=64),
sql.ForeignKey('cascaded_sites.site_id'),
nullable=False)
service_type = sql.Column('service_type', sql.String(length=64),
nullable=False)
service_url = sql.Column('service_url', sql.String(length=512),
nullable=False)
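The functions above are thin wrappers that pin a transaction around the generic helpers in `core`, so callers never touch the session directly. A minimal sketch of how they might be exercised, mirroring the setup used by the unit tests later in this patch; all identifiers are illustrative only:

```python
from tricircle.common import context
from tricircle.db import core
from tricircle.db import models

core.initialize()
core.ModelBase.metadata.create_all(core.get_engine())
ctx = context.Context()

# illustrative identifiers, not real deployment data
models.create_site(ctx, {'site_id': 'site-uuid',
                         'site_name': 'Site1',
                         'az_id': 'az_Site1'})
models.create_site_service_configuration(
    ctx, {'service_id': 'svc-uuid',
          'site_id': 'site-uuid',
          'service_type': 'nova',
          'service_url': 'http://127.0.0.1:8774/v2'})
# the same filter dicts accepted by core.query_resource
sites = models.list_sites(ctx, [{'key': 'az_id',
                                 'comparator': 'eq',
                                 'value': 'az_Site1'}])
```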


@ -0,0 +1,150 @@
# Copyright 2015 Huawei Technologies Co., Ltd.
# All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from mock import patch
import unittest
import pecan
import tricircle.api.controllers.root as root_controller
from tricircle.common import client
from tricircle.common import context
from tricircle.db import core
from tricircle.db import models
class ControllerTest(unittest.TestCase):
def setUp(self):
core.initialize()
core.ModelBase.metadata.create_all(core.get_engine())
self.context = context.Context()
self.context.is_admin = True
root_controller._get_environment = mock.Mock(return_value={})
root_controller._extract_context_from_environ = mock.Mock(
return_value=self.context)
pecan.abort = mock.Mock()
pecan.response = mock.Mock()
def tearDown(self):
core.ModelBase.metadata.drop_all(core.get_engine())
class SitesControllerTest(ControllerTest):
def setUp(self):
super(SitesControllerTest, self).setUp()
self.controller = root_controller.SitesController()
def test_post_top_site(self):
kw = {'name': 'TopSite', 'top': True}
site_id = self.controller.post(**kw)['site']['site_id']
site = models.get_site(self.context, site_id)
self.assertEqual(site['site_name'], 'TopSite')
self.assertEqual(site['az_id'], '')
@patch.object(client.Client, 'create_resources')
def test_post_bottom_site(self, mock_method):
kw = {'name': 'BottomSite'}
site_id = self.controller.post(**kw)['site']['site_id']
site = models.get_site(self.context, site_id)
self.assertEqual(site['site_name'], 'BottomSite')
self.assertEqual(site['az_id'], 'az_BottomSite')
mock_method.assert_called_once_with('aggregate', self.context,
'ag_BottomSite', 'az_BottomSite')
def test_post_site_name_missing(self):
kw = {'top': True}
self.controller.post(**kw)
pecan.abort.assert_called_once_with(400, 'Name of site required')
def test_post_conflict(self):
kw = {'name': 'TopSite', 'top': True}
self.controller.post(**kw)
self.controller.post(**kw)
pecan.abort.assert_called_once_with(409,
'Site with name TopSite exists')
def test_post_not_admin(self):
self.context.is_admin = False
kw = {'name': 'TopSite', 'top': True}
self.controller.post(**kw)
pecan.abort.assert_called_once_with(
400, 'Admin role required to create sites')
@patch.object(client.Client, 'create_resources')
def test_post_decide_top(self, mock_method):
        # 'top' defaults to False
# top site
kw = {'name': 'Site1', 'top': True}
self.controller.post(**kw)
# bottom site
kw = {'name': 'Site2', 'top': False}
self.controller.post(**kw)
kw = {'name': 'Site3'}
self.controller.post(**kw)
calls = [mock.call('aggregate', self.context, 'ag_Site%d' % i,
'az_Site%d' % i) for i in xrange(2, 4)]
mock_method.assert_has_calls(calls)
@patch.object(models, 'create_site')
def test_post_create_site_exception(self, mock_method):
mock_method.side_effect = Exception
kw = {'name': 'BottomSite'}
self.controller.post(**kw)
pecan.abort.assert_called_once_with(500, 'Fail to create site')
@patch.object(client.Client, 'create_resources')
def test_post_create_aggregate_exception(self, mock_method):
mock_method.side_effect = Exception
kw = {'name': 'BottomSite'}
self.controller.post(**kw)
pecan.abort.assert_called_once_with(500, 'Fail to create aggregate')
# make sure site is deleted
site_filter = [{'key': 'site_name',
'comparator': 'eq',
'value': 'BottomSite'}]
sites = models.list_sites(self.context, site_filter)
self.assertEqual(len(sites), 0)
def test_get_one(self):
kw = {'name': 'TopSite', 'top': True}
site_id = self.controller.post(**kw)['site']['site_id']
return_site = self.controller.get_one(site_id)['site']
self.assertEqual(return_site, {'site_id': site_id,
'site_name': 'TopSite',
'az_id': ''})
def test_get_one_not_found(self):
self.controller.get_one('fake_id')
pecan.abort.assert_called_once_with(404,
'Site with id fake_id not found')
@patch.object(client.Client, 'create_resources', new=mock.Mock)
def test_get_all(self):
kw1 = {'name': 'TopSite', 'top': True}
kw2 = {'name': 'BottomSite'}
self.controller.post(**kw1)
self.controller.post(**kw2)
sites = self.controller.get_all()
actual_result = [(site['site_name'],
site['az_id']) for site in sites['sites']]
expect_result = [('BottomSite', 'az_BottomSite'), ('TopSite', '')]
self.assertItemsEqual(actual_result, expect_result)
def tearDown(self):
core.ModelBase.metadata.drop_all(core.get_engine())


@ -0,0 +1,303 @@
# Copyright 2015 Huawei Technologies Co., Ltd.
# All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import unittest
import uuid
import mock
from mock import patch
from oslo_config import cfg
from tricircle.common import client
from tricircle.common import context
from tricircle.common import exceptions
from tricircle.common import resource_handle
from tricircle.db import core
from tricircle.db import models
FAKE_AZ = 'fake_az'
FAKE_RESOURCE = 'fake_res'
FAKE_SITE_ID = 'fake_site_id'
FAKE_SITE_NAME = 'fake_site_name'
FAKE_SERVICE_ID = 'fake_service_id'
FAKE_TYPE = 'fake_type'
FAKE_URL = 'http://127.0.0.1:12345'
FAKE_URL_INVALID = 'http://127.0.0.1:23456'
FAKE_RESOURCES = [{'name': 'res1'}, {'name': 'res2'}]
class FakeException(Exception):
pass
class FakeClient(object):
def __init__(self, url):
self.endpoint = url
def list_fake_res(self, search_opts):
# make sure endpoint is correctly set
if self.endpoint != FAKE_URL:
raise FakeException()
if not search_opts:
return [res for res in FAKE_RESOURCES]
else:
return [res for res in FAKE_RESOURCES if (
res['name'] == search_opts['name'])]
def create_fake_res(self, name):
if self.endpoint != FAKE_URL:
raise FakeException()
FAKE_RESOURCES.append({'name': name})
return {'name': name}
def delete_fake_res(self, name):
if self.endpoint != FAKE_URL:
raise FakeException()
try:
FAKE_RESOURCES.remove({'name': name})
except ValueError:
pass
def action_fake_res(self, name, rename):
if self.endpoint != FAKE_URL:
raise FakeException()
for res in FAKE_RESOURCES:
if res['name'] == name:
res['name'] = rename
break
class FakeResHandle(resource_handle.ResourceHandle):
def _get_client(self, cxt):
return FakeClient(self.endpoint_url)
def handle_list(self, cxt, resource, filters):
try:
cli = self._get_client(cxt)
return cli.list_fake_res(
resource_handle._transform_filters(filters))
except FakeException:
self.endpoint_url = None
raise exceptions.EndpointNotAvailable(FAKE_TYPE, cli.endpoint)
def handle_create(self, cxt, resource, name):
try:
cli = self._get_client(cxt)
return cli.create_fake_res(name)
except FakeException:
self.endpoint_url = None
raise exceptions.EndpointNotAvailable(FAKE_TYPE, cli.endpoint)
def handle_delete(self, cxt, resource, name):
try:
cli = self._get_client(cxt)
cli.delete_fake_res(name)
except FakeException:
self.endpoint_url = None
raise exceptions.EndpointNotAvailable(FAKE_TYPE, cli.endpoint)
def handle_action(self, cxt, resource, action, name, rename):
try:
cli = self._get_client(cxt)
cli.action_fake_res(name, rename)
except FakeException:
self.endpoint_url = None
raise exceptions.EndpointNotAvailable(FAKE_TYPE, cli.endpoint)
class ClientTest(unittest.TestCase):
def setUp(self):
core.initialize()
core.ModelBase.metadata.create_all(core.get_engine())
# enforce foreign key constraint for sqlite
core.get_engine().execute('pragma foreign_keys=on')
self.context = context.Context()
site_dict = {
'site_id': FAKE_SITE_ID,
'site_name': FAKE_SITE_NAME,
'az_id': FAKE_AZ
}
config_dict = {
'service_id': FAKE_SERVICE_ID,
'site_id': FAKE_SITE_ID,
'service_type': FAKE_TYPE,
'service_url': FAKE_URL
}
models.create_site(self.context, site_dict)
models.create_site_service_configuration(self.context, config_dict)
global FAKE_RESOURCES
FAKE_RESOURCES = [{'name': 'res1'}, {'name': 'res2'}]
cfg.CONF.set_override(name='top_site_name', override=FAKE_SITE_NAME,
group='client')
self.client = client.Client()
self.client.resource_service_map[FAKE_RESOURCE] = FAKE_TYPE
self.client.operation_resources_map['list'].add(FAKE_RESOURCE)
self.client.operation_resources_map['create'].add(FAKE_RESOURCE)
self.client.operation_resources_map['delete'].add(FAKE_RESOURCE)
self.client.operation_resources_map['action'].add(FAKE_RESOURCE)
self.client.service_handle_map[FAKE_TYPE] = FakeResHandle(None)
def test_list(self):
resources = self.client.list_resources(
FAKE_RESOURCE, self.context, [])
self.assertEqual(resources, [{'name': 'res1'}, {'name': 'res2'}])
def test_list_with_filters(self):
resources = self.client.list_resources(
FAKE_RESOURCE, self.context, [{'key': 'name',
'comparator': 'eq',
'value': 'res2'}])
self.assertEqual(resources, [{'name': 'res2'}])
def test_create(self):
resource = self.client.create_resources(FAKE_RESOURCE, self.context,
'res3')
self.assertEqual(resource, {'name': 'res3'})
resources = self.client.list_resources(FAKE_RESOURCE, self.context)
self.assertEqual(resources, [{'name': 'res1'}, {'name': 'res2'},
{'name': 'res3'}])
def test_delete(self):
self.client.delete_resources(FAKE_RESOURCE, self.context, 'res1')
resources = self.client.list_resources(FAKE_RESOURCE, self.context)
self.assertEqual(resources, [{'name': 'res2'}])
def test_action(self):
self.client.action_resources(FAKE_RESOURCE, self.context,
'rename', 'res1', 'res3')
resources = self.client.list_resources(FAKE_RESOURCE, self.context)
self.assertEqual(resources, [{'name': 'res3'}, {'name': 'res2'}])
def test_list_endpoint_not_found(self):
cfg.CONF.set_override(name='auto_refresh_endpoint', override=False,
group='client')
# delete the configuration so endpoint cannot be found
models.delete_site_service_configuration(self.context, FAKE_SERVICE_ID)
        # auto refresh is disabled, so the exception is raised directly
self.assertRaises(exceptions.EndpointNotFound,
self.client.list_resources,
FAKE_RESOURCE, self.context, [])
def test_resource_not_supported(self):
# no such resource
self.assertRaises(exceptions.ResourceNotSupported,
self.client.list_resources,
'no_such_resource', self.context, [])
# remove "create" entry for FAKE_RESOURCE
self.client.operation_resources_map['create'].remove(FAKE_RESOURCE)
# operation not supported
self.assertRaises(exceptions.ResourceNotSupported,
self.client.create_resources,
FAKE_RESOURCE, self.context, [])
def test_list_endpoint_not_found_retry(self):
cfg.CONF.set_override(name='auto_refresh_endpoint', override=True,
group='client')
# delete the configuration so endpoint cannot be found
models.delete_site_service_configuration(self.context, FAKE_SERVICE_ID)
self.client._get_admin_token = mock.Mock()
self.client._get_endpoint_from_keystone = mock.Mock()
self.client._get_endpoint_from_keystone.return_value = {
FAKE_SITE_NAME: {FAKE_TYPE: FAKE_URL}
}
resources = self.client.list_resources(
FAKE_RESOURCE, self.context, [])
self.assertEqual(resources, [{'name': 'res1'}, {'name': 'res2'}])
def test_list_endpoint_not_unique(self):
# add a new configuration with same site and service type
config_dict = {
'service_id': FAKE_SERVICE_ID + '_new',
'site_id': FAKE_SITE_ID,
'service_type': FAKE_TYPE,
'service_url': FAKE_URL
}
models.create_site_service_configuration(self.context, config_dict)
self.assertRaises(exceptions.EndpointNotUnique,
self.client.list_resources,
FAKE_RESOURCE, self.context, [])
def test_list_endpoint_not_valid(self):
cfg.CONF.set_override(name='auto_refresh_endpoint', override=False,
group='client')
update_dict = {'service_url': FAKE_URL_INVALID}
# update url to an invalid one
models.update_site_service_configuration(self.context,
FAKE_SERVICE_ID,
update_dict)
        # auto refresh is disabled, so the exception is raised directly
self.assertRaises(exceptions.EndpointNotAvailable,
self.client.list_resources,
FAKE_RESOURCE, self.context, [])
def test_list_endpoint_not_valid_retry(self):
cfg.CONF.set_override(name='auto_refresh_endpoint', override=True,
group='client')
update_dict = {'service_url': FAKE_URL_INVALID}
# update url to an invalid one
models.update_site_service_configuration(self.context,
FAKE_SERVICE_ID,
update_dict)
self.client._get_admin_token = mock.Mock()
self.client._get_endpoint_from_keystone = mock.Mock()
self.client._get_endpoint_from_keystone.return_value = {
FAKE_SITE_NAME: {FAKE_TYPE: FAKE_URL}
}
resources = self.client.list_resources(
FAKE_RESOURCE, self.context, [])
self.assertEqual(resources, [{'name': 'res1'}, {'name': 'res2'}])
@patch.object(models, 'create_site_service_configuration')
@patch.object(models, 'update_site_service_configuration')
def test_update_endpoint_from_keystone(self, update_mock, create_mock):
self.client._get_admin_token = mock.Mock()
self.client._get_endpoint_from_keystone = mock.Mock()
self.client._get_endpoint_from_keystone.return_value = {
FAKE_SITE_NAME: {FAKE_TYPE: FAKE_URL,
'another_fake_type': 'http://127.0.0.1:34567'},
'not_registered_site': {FAKE_TYPE: FAKE_URL}
}
uuid.uuid4 = mock.Mock()
uuid.uuid4.return_value = 'another_fake_service_id'
self.client.update_endpoint_from_keystone(self.context)
update_dict = {'service_url': FAKE_URL}
create_dict = {'service_id': 'another_fake_service_id',
'site_id': FAKE_SITE_ID,
'service_type': 'another_fake_type',
'service_url': 'http://127.0.0.1:34567'}
        # sites not registered in the database are skipped
update_mock.assert_called_once_with(
self.context, FAKE_SERVICE_ID, update_dict)
create_mock.assert_called_once_with(self.context, create_dict)
def test_get_endpoint(self):
cfg.CONF.set_override(name='auto_refresh_endpoint', override=False,
group='client')
url = self.client.get_endpoint(self.context, FAKE_SITE_ID, FAKE_TYPE)
self.assertEqual(url, FAKE_URL)
def tearDown(self):
core.ModelBase.metadata.drop_all(core.get_engine())


@ -0,0 +1,101 @@
# Copyright 2015 Huawei Technologies Co., Ltd.
# All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import unittest
from tricircle.common import context
from tricircle.common import exceptions
from tricircle.db import core
from tricircle.db import models
class ModelsTest(unittest.TestCase):
def setUp(self):
core.initialize()
core.ModelBase.metadata.create_all(core.get_engine())
self.context = context.Context()
def test_obj_to_dict(self):
site = {'site_id': 'test_site_uuid',
'site_name': 'test_site',
'az_id': 'test_az_uuid'}
site_obj = models.Site.from_dict(site)
for attr in site_obj.attributes:
self.assertEqual(getattr(site_obj, attr), site[attr])
def test_create(self):
site = {'site_id': 'test_site_uuid',
'site_name': 'test_site',
'az_id': 'test_az_uuid'}
site_ret = models.create_site(self.context, site)
self.assertEqual(site_ret, site)
configuration = {
'service_id': 'test_config_uuid',
'site_id': 'test_site_uuid',
'service_type': 'nova',
'service_url': 'http://test_url'
}
config_ret = models.create_site_service_configuration(self.context,
configuration)
self.assertEqual(config_ret, configuration)
def test_update(self):
site = {'site_id': 'test_site_uuid',
'site_name': 'test_site',
'az_id': 'test_az1_uuid'}
models.create_site(self.context, site)
update_dict = {'site_id': 'fake_uuid',
'site_name': 'test_site2',
'az_id': 'test_az2_uuid'}
ret = models.update_site(self.context, 'test_site_uuid', update_dict)
# primary key value will not be updated
self.assertEqual(ret['site_id'], 'test_site_uuid')
self.assertEqual(ret['site_name'], 'test_site2')
self.assertEqual(ret['az_id'], 'test_az2_uuid')
def test_delete(self):
site = {'site_id': 'test_site_uuid',
'site_name': 'test_site',
'az_id': 'test_az_uuid'}
models.create_site(self.context, site)
models.delete_site(self.context, 'test_site_uuid')
self.assertRaises(exceptions.ResourceNotFound, models.get_site,
self.context, 'test_site_uuid')
def test_query(self):
site1 = {'site_id': 'test_site1_uuid',
'site_name': 'test_site1',
'az_id': 'test_az1_uuid'}
site2 = {'site_id': 'test_site2_uuid',
'site_name': 'test_site2',
'az_id': 'test_az2_uuid'}
models.create_site(self.context, site1)
models.create_site(self.context, site2)
filters = [{'key': 'site_name',
'comparator': 'eq',
'value': 'test_site2'}]
sites = models.list_sites(self.context, filters)
self.assertEqual(len(sites), 1)
self.assertEqual(sites[0], site2)
filters = [{'key': 'site_name',
'comparator': 'eq',
'value': 'test_site3'}]
sites = models.list_sites(self.context, filters)
self.assertEqual(len(sites), 0)
def tearDown(self):
core.ModelBase.metadata.drop_all(core.get_engine())