Init billing service 'bilean' from heat.

Change-Id: I8da3a902346f7c5d101dc55bda5b5b0a95243d49
lvdongbing 2015-12-24 04:07:45 -05:00
parent 96aadf3751
commit 625bd5acd9
109 changed files with 11058 additions and 0 deletions

24
.gitignore vendored Normal file

@@ -0,0 +1,24 @@
*.db
*.log
*.pyc
*.swp
.DS_Store
.coverage
.tox
AUTHORS
ChangeLog
bilean.egg-info/
bilean/versioninfo
build/
covhtml
dist/
doc/build
doc/source/bilean.*
doc/source/modules.rst
etc/bilean.conf
nosetests.xml
pep8.txt
requirements.txt
tests/test.db.pristine
vendor
etc/bilean/bilean.conf.sample

4
.gitreview Executable file

@@ -0,0 +1,4 @@
[gerrit]
host=dev.kylincloud.me
port=29418
project=ubuntu-14.04/bilean.git

2
AUTHORS Normal file

@@ -0,0 +1,2 @@
admin <kylincloud@163.com>
lvdongbing <dongbing.lv@kylin-cloud.com>

5
ChangeLog Normal file

@@ -0,0 +1,5 @@
CHANGES
=======
* Billing service for KylinCloud
* Initial empty repository

176
LICENSE Normal file

@@ -0,0 +1,176 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

4
README.rst Normal file

@@ -0,0 +1,4 @@
Bilean
======
/Todo/

0
bilean/__init__.py Normal file

0
bilean/api/__init__.py Normal file

@@ -0,0 +1,134 @@
# -*- coding: utf-8 -*-
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

'''
A middleware that turns exceptions into parsable string.
'''

import traceback

from oslo_config import cfg
import six
import webob

from bilean.common import exception
from bilean.common import serializers
from bilean.common import wsgi


class Fault(object):

    def __init__(self, error):
        self.error = error

    @webob.dec.wsgify(RequestClass=wsgi.Request)
    def __call__(self, req):
        serializer = serializers.JSONResponseSerializer()
        resp = webob.Response(request=req)
        default_webob_exc = webob.exc.HTTPInternalServerError()
        resp.status_code = self.error.get('code', default_webob_exc.code)
        serializer.default(resp, self.error)
        return resp


class FaultWrapper(wsgi.Middleware):
    """Replace error body with something the client can parse."""

    error_map = {
        'Forbidden': webob.exc.HTTPForbidden,
        'InternalError': webob.exc.HTTPInternalServerError,
        'InvalidParameter': webob.exc.HTTPBadRequest,
        'InvalidSchemaError': webob.exc.HTTPBadRequest,
        'MultipleChoices': webob.exc.HTTPBadRequest,
        'RuleNotFound': webob.exc.HTTPNotFound,
        'RuleTypeNotFound': webob.exc.HTTPNotFound,
        'RuleTypeNotMatch': webob.exc.HTTPBadRequest,
        'ReceiverNotFound': webob.exc.HTTPNotFound,
        'RequestLimitExceeded': webob.exc.HTTPBadRequest,
        'ResourceInUse': webob.exc.HTTPConflict,
        'BileanBadRequest': webob.exc.HTTPBadRequest,
        'SpecValidationFailed': webob.exc.HTTPBadRequest,
    }

    def _map_exception_to_error(self, class_exception):
        if class_exception == Exception:
            return webob.exc.HTTPInternalServerError

        if class_exception.__name__ not in self.error_map:
            return self._map_exception_to_error(class_exception.__base__)

        return self.error_map[class_exception.__name__]

    def _error(self, ex):
        trace = None
        traceback_marker = 'Traceback (most recent call last)'
        webob_exc = None
        if isinstance(ex, exception.HTTPExceptionDisguise):
            # An HTTP exception was disguised so it could make it here
            # let's remove the disguise and set the original HTTP exception
            if cfg.CONF.debug:
                trace = ''.join(traceback.format_tb(ex.tb))
            ex = ex.exc
            webob_exc = ex

        ex_type = ex.__class__.__name__

        is_remote = ex_type.endswith('_Remote')
        if is_remote:
            ex_type = ex_type[:-len('_Remote')]

        full_message = six.text_type(ex)
        if '\n' in full_message and is_remote:
            message, msg_trace = full_message.split('\n', 1)
        elif traceback_marker in full_message:
            message, msg_trace = full_message.split(traceback_marker, 1)
            message = message.rstrip('\n')
            msg_trace = traceback_marker + msg_trace
        else:
            if six.PY3:
                msg_trace = traceback.format_exception(type(ex), ex,
                                                       ex.__traceback__)
            else:
                msg_trace = traceback.format_exc()
            message = full_message

        if isinstance(ex, exception.BileanException):
            message = ex.message

        if cfg.CONF.debug and not trace:
            trace = msg_trace

        if not webob_exc:
            webob_exc = self._map_exception_to_error(ex.__class__)

        error = {
            'code': webob_exc.code,
            'title': webob_exc.title,
            'explanation': webob_exc.explanation,
            'error': {
                'code': webob_exc.code,
                'message': message,
                'type': ex_type,
                'traceback': trace,
            }
        }

        return error

    def process_request(self, req):
        try:
            return req.get_response(self.application)
        except Exception as exc:
            return req.get_response(Fault(self._error(exc)))
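FaultWrapper resolves an exception class to an HTTP error by walking the class's `__base__` chain until a name appears in `error_map`, falling back to an internal-server-error default at `Exception`. A minimal, self-contained sketch of that lookup, with stand-in exception classes and plain status codes instead of webob exception types:

```python
# Stand-in map: exception class name -> HTTP status code.
# (The real middleware maps names to webob.exc classes instead.)
ERROR_MAP = {
    'RuleNotFound': 404,
    'BileanBadRequest': 400,
}


class BileanBadRequest(Exception):
    pass


class SpecValidationFailed(BileanBadRequest):
    pass


def map_exception_to_code(cls):
    # Base case: nothing matched anywhere in the hierarchy -> 500.
    if cls is Exception:
        return 500
    # Walk up the inheritance chain until a mapped name is found.
    if cls.__name__ not in ERROR_MAP:
        return map_exception_to_code(cls.__base__)
    return ERROR_MAP[cls.__name__]


print(map_exception_to_code(SpecValidationFailed))  # matches via BileanBadRequest
print(map_exception_to_code(KeyError))              # unmapped -> default
```

The recursion is why a subclass such as `SpecValidationFailed` still gets a 400 even when only its parent is listed.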


@@ -0,0 +1,38 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from oslo_config import cfg
from oslo_middleware import ssl

ssl_middleware_opts = [
    cfg.StrOpt('secure_proxy_ssl_header',
               default='X-Forwarded-Proto',
               deprecated_group='DEFAULT',
               help="The HTTP Header that will be used to determine what "
                    "the original request protocol scheme was, even if it "
                    "was removed by an SSL terminating proxy.")
]


class SSLMiddleware(ssl.SSLMiddleware):

    def __init__(self, application, *args, **kwargs):
        # NOTE(cbrandily): calling super(ssl.SSLMiddleware, self).__init__
        # allows to define our opt (including a deprecation).
        super(ssl.SSLMiddleware, self).__init__(application, *args, **kwargs)
        self.oslo_conf.register_opts(
            ssl_middleware_opts, group='oslo_middleware')


def list_opts():
    yield None, ssl_middleware_opts


@@ -0,0 +1,125 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
A filter middleware that inspects the requested URI for a version string
and/or Accept headers and attempts to negotiate an API controller to
return
"""

import re

import webob

from bilean.common import wsgi

from oslo_log import log as logging

LOG = logging.getLogger(__name__)


class VersionNegotiationFilter(wsgi.Middleware):

    def __init__(self, version_controller, app, conf, **local_conf):
        self.versions_app = version_controller(conf)
        self.version_uri_regex = re.compile(r"^v(\d+)\.?(\d+)?")
        self.conf = conf
        super(VersionNegotiationFilter, self).__init__(app)

    def process_request(self, req):
        """Process Accept header or simply return correct API controller.

        If there is a version identifier in the URI, simply
        return the correct API controller, otherwise, if we
        find an Accept: header, process it
        """
        # See if a version identifier is in the URI passed to
        # us already. If so, simply return the right version
        # API controller
        msg = ("Processing request: %(method)s %(path)s Accept: "
               "%(accept)s" % {'method': req.method,
                               'path': req.path, 'accept': req.accept})
        LOG.debug(msg)

        # If the request is for /versions, just return the versions container
        if req.path_info_peek() in ("versions", ""):
            return self.versions_app

        match = self._match_version_string(req.path_info_peek(), req)
        if match:
            major_version = req.environ['api.major_version']
            minor_version = req.environ['api.minor_version']

            if (major_version == 1 and minor_version == 0):
                LOG.debug("Matched versioned URI. "
                          "Version: %(major_version)d.%(minor_version)d"
                          % {'major_version': major_version,
                             'minor_version': minor_version})
                # Strip the version from the path
                req.path_info_pop()
                return None
            else:
                LOG.debug("Unknown version in versioned URI: "
                          "%(major_version)d.%(minor_version)d. "
                          "Returning version choices."
                          % {'major_version': major_version,
                             'minor_version': minor_version})
                return self.versions_app

        accept = str(req.accept)
        if accept.startswith('application/vnd.openstack.orchestration-'):
            token_loc = len('application/vnd.openstack.orchestration-')
            accept_version = accept[token_loc:]
            match = self._match_version_string(accept_version, req)
            if match:
                major_version = req.environ['api.major_version']
                minor_version = req.environ['api.minor_version']
                if (major_version == 1 and minor_version == 0):
                    LOG.debug("Matched versioned media type. Version: "
                              "%(major_version)d.%(minor_version)d"
                              % {'major_version': major_version,
                                 'minor_version': minor_version})
                    return None
                else:
                    LOG.debug("Unknown version in accept header: "
                              "%(major_version)d.%(minor_version)d..."
                              "returning version choices."
                              % {'major_version': major_version,
                                 'minor_version': minor_version})
                    return self.versions_app
        else:
            if req.accept not in ('*/*', ''):
                LOG.debug("Unknown accept header: %s..."
                          "returning HTTP not found.", req.accept)
                return webob.exc.HTTPNotFound()
        return None

    def _match_version_string(self, subject, req):
        """Given a subject, tries to match a major and/or minor version number.

        If found, sets the api.major_version and api.minor_version environ
        variables.

        Returns True if there was a match, false otherwise.

        :param subject: The string to check
        :param req: Webob.Request object
        """
        match = self.version_uri_regex.match(subject)
        if match:
            major_version, minor_version = match.groups(0)
            major_version = int(major_version)
            minor_version = int(minor_version)
            req.environ['api.major_version'] = major_version
            req.environ['api.minor_version'] = minor_version
        return match is not None
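The `version_uri_regex` above makes the minor version optional, so both `v1` and `v1.0` path segments negotiate the same API version; `match.groups(0)` fills a missing minor component with 0. A small standalone sketch of that parsing (the `parse_version` helper name is illustrative, not from the source):

```python
import re

# Same pattern as VersionNegotiationFilter: "v" + major + optional ".minor".
version_uri_regex = re.compile(r"^v(\d+)\.?(\d+)?")


def parse_version(segment):
    match = version_uri_regex.match(segment)
    if not match:
        return None
    # groups(0) substitutes 0 for any unmatched group, so "v1" -> (1, 0).
    major, minor = match.groups(0)
    return int(major), int(minor)


print(parse_version("v1.0"))
print(parse_version("v1"))
print(parse_version("users"))  # not a version segment
```

This is why the filter treats `/v1/...` and `/v1.0/...` identically before comparing against the single supported (1, 0) version.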


@@ -0,0 +1,35 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from bilean.api.middleware.fault import FaultWrapper
from bilean.api.middleware.ssl import SSLMiddleware
from bilean.api.middleware.version_negotiation import VersionNegotiationFilter
from bilean.api.openstack import versions
from bilean.common import context


def version_negotiation_filter(app, conf, **local_conf):
    return VersionNegotiationFilter(versions.Controller, app,
                                    conf, **local_conf)


def faultwrap_filter(app, conf, **local_conf):
    return FaultWrapper(app)


def sslmiddleware_filter(app, conf, **local_conf):
    return SSLMiddleware(app)


def contextmiddleware_filter(app, conf, **local_conf):
    return context.ContextMiddleware(app)


@@ -0,0 +1,126 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import routes

from bilean.api.openstack.v1 import events
from bilean.api.openstack.v1 import resources
from bilean.api.openstack.v1 import rules
from bilean.api.openstack.v1 import users
from bilean.common import wsgi


class API(wsgi.Router):
    """WSGI router for Bilean v1 ReST API requests."""

    def __init__(self, conf, **local_conf):
        self.conf = conf
        mapper = routes.Mapper()

        # Users
        users_resource = users.create_resource(conf)
        users_path = "/{tenant_id}/users"
        with mapper.submapper(controller=users_resource,
                              path_prefix=users_path) as user_mapper:

            # User collection
            user_mapper.connect("users_index",
                                "",
                                action="index",
                                conditions={'method': 'GET'})

            # User detail
            user_mapper.connect("user_show",
                                "/{user_id}",
                                action="show",
                                conditions={'method': 'GET'})

            # Update user
            user_mapper.connect("user_update",
                                "/{user_id}",
                                action="update",
                                conditions={'method': 'PUT'})

        # Resources
        res_resource = resources.create_resource(conf)
        res_path = "/{tenant_id}/resources"
        with mapper.submapper(controller=res_resource,
                              path_prefix=res_path) as res_mapper:

            # Resource collection
            res_mapper.connect("resource_index",
                               "",
                               action="index",
                               conditions={'method': 'GET'})

            # Resource detail
            res_mapper.connect("resource_show",
                               "/{resource_id}",
                               action="show",
                               conditions={'method': 'GET'})

            # Validate creation
            res_mapper.connect("validate_creation",
                               "",
                               action="validate_creation",
                               conditions={'method': 'POST'})

        # Rules
        rule_resource = rules.create_resource(conf)
        rule_path = "/{tenant_id}/rules"
        with mapper.submapper(controller=rule_resource,
                              path_prefix=rule_path) as rule_mapper:

            # Rule collection
            rule_mapper.connect("rules_index",
                                "",
                                action="index",
                                conditions={'method': 'GET'})

            # Rule detail
            rule_mapper.connect("rule_show",
                                "/{rule_id}",
                                action="show",
                                conditions={'method': 'GET'})

            # Create rule
            rule_mapper.connect("rule_create",
                                "",
                                action="create",
                                conditions={'method': 'POST'})

            # Update rule
            rule_mapper.connect("rule_update",
                                "/{rule_id}",
                                action="update",
                                conditions={'method': 'PUT'})

            # Delete rule
            rule_mapper.connect("rule_delete",
                                "/{rule_id}",
                                action="delete",
                                conditions={'method': 'DELETE'})

        # Events
        event_resource = events.create_resource(conf)
        event_path = "/{tenant_id}/events"
        with mapper.submapper(controller=event_resource,
                              path_prefix=event_path) as event_mapper:

            # Event collection
            event_mapper.connect("events_index",
                                 "",
                                 action="index",
                                 conditions={'method': 'GET'})

        super(API, self).__init__(mapper)


@@ -0,0 +1,87 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import itertools

from bilean.api.openstack.v1 import util
from bilean.common import params
from bilean.common import serializers
from bilean.common import wsgi
from bilean.rpc import client as rpc_client


def format_event(req, res, keys=None):
    keys = keys or []
    include_key = lambda k: k in keys if keys else True

    def transform(key, value):
        if not include_key(key):
            return
        else:
            yield (key, value)

    return dict(itertools.chain.from_iterable(
        transform(k, v) for k, v in res.items()))


class EventController(object):
    """WSGI controller for Events in Bilean v1 API

    Implements the API actions
    """
    # Define request scope (must match what is in policy.json)
    REQUEST_SCOPE = 'events'

    def __init__(self, options):
        self.options = options
        self.rpc_client = rpc_client.EngineClient()

    @util.policy_enforce
    def index(self, req, tenant_id):
        """Lists summary information for all events"""
        filter_fields = {
            'user_id': 'string',
            'resource_type': 'string',
            'action': 'string',
            'start': 'timestamp',
            'end': 'timestamp',
        }
        filter_params = util.get_allowed_params(req.params, filter_fields)

        if 'aggregate' in req.params:
            aggregate = req.params.get('aggregate')
            if aggregate in ['sum', 'avg']:
                filter_params['aggregate'] = aggregate
            events = self.rpc_client.list_events(
                req.context, filters=filter_params)
            event_statistics = self._init_event_statistics()
            for e in events:
                if e[0] in event_statistics:
                    event_statistics[e[0]] = e[1]
            return dict(events=event_statistics)

        events = self.rpc_client.list_events(
            req.context, filters=filter_params)
        return dict(events=events)

    def _init_event_statistics(self):
        event_statistics = {}
        for resource in params.RESOURCE_TYPES:
            event_statistics[resource] = 0
        return event_statistics


def create_resource(options):
    """Event resource factory method."""
    deserializer = wsgi.JSONRequestDeserializer()
    serializer = serializers.JSONResponseSerializer()
    return wsgi.Resource(EventController(options), deserializer, serializer)
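The `format_event` helper above (the same idiom is repeated for resources and users) filters a result dict down to an allowed key set by routing each pair through a tiny generator. A standalone sketch of the idiom, under a hypothetical `format_record` name and without the unused `req` parameter:

```python
import itertools

def format_record(res, keys=None):
    keys = keys or []
    # With an empty key list, every key is included.
    include_key = lambda k: k in keys if keys else True

    def transform(key, value):
        # Yield nothing for excluded keys, one pair for included ones.
        if not include_key(key):
            return
        else:
            yield (key, value)

    # Flatten the per-pair generators back into a single dict.
    return dict(itertools.chain.from_iterable(
        transform(k, v) for k, v in res.items()))


event = {'id': 'e1', 'user_id': 'u1', 'action': 'create'}
print(format_record(event, keys=['id', 'action']))  # only the allowed keys
print(format_record(event))                         # no key list: everything
```

A dict comprehension would do the same job; the generator form mirrors how the Heat-derived code was written.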


@@ -0,0 +1,110 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import itertools

from webob import exc

from bilean.api.openstack.v1 import util
from bilean.api import validator
from bilean.common import exception
from bilean.common.i18n import _
from bilean.common import serializers
from bilean.common import wsgi
from bilean.rpc import client as rpc_client

from oslo_log import log as logging

LOG = logging.getLogger(__name__)


def format_resource(req, res, keys=None):
    keys = keys or []
    include_key = lambda k: k in keys if keys else True

    def transform(key, value):
        if not include_key(key):
            return
        else:
            yield (key, value)

    return dict(itertools.chain.from_iterable(
        transform(k, v) for k, v in res.items()))


class ResourceController(object):
    """WSGI controller for Resources in Bilean v1 API

    Implements the API actions
    """
    # Define request scope (must match what is in policy.json)
    REQUEST_SCOPE = 'resources'

    def __init__(self, options):
        self.options = options
        self.rpc_client = rpc_client.EngineClient()

    @util.policy_enforce
    def index(self, req, tenant_id):
        """Lists summary information for all resources"""
        resource_list = self.rpc_client.list_resources(req.context)
        return dict(resources=resource_list)

    @util.policy_enforce
    def show(self, req, resource_id):
        """Gets detailed information for a resource"""
        resource = self.rpc_client.show_resource(req.context, resource_id)
        return {'resource': format_resource(req, resource)}

    @util.policy_enforce
    def validate_creation(self, req, body):
        """Validate resources creation

        :param user_id: Id of user to validate
        :param body: dict body include resources and count
        :return True|False
        """
        if not validator.is_valid_body(body):
            raise exc.HTTPUnprocessableEntity()

        if not body.get('resources'):
            msg = _("Resources is empty")
            raise exc.HTTPBadRequest(explanation=msg)

        if body.get('count'):
            try:
                validator.validate_integer(
                    body.get('count'), 'count', 0, 1000)
            except exception.InvalidInput as e:
                raise exc.HTTPBadRequest(explanation=e.format_message())

        resources = body.get('resources')
        try:
            for resource in resources:
                validator.validate_resource(resource)
        except exception.InvalidInput as e:
            raise exc.HTTPBadRequest(explanation=e.format_message())
        except Exception as e:
            raise exc.HTTPBadRequest(explanation=e)

        try:
            return self.rpc_client.validate_creation(req.context, body)
        except Exception as e:
            LOG.error(e)


def create_resource(options):
    """Resource resource factory method."""
    deserializer = wsgi.JSONRequestDeserializer()
    serializer = serializers.JSONResponseSerializer()
    return wsgi.Resource(ResourceController(options), deserializer, serializer)


@@ -0,0 +1,107 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import itertools

import six
from webob import exc

from bilean.api.openstack.v1 import util
from bilean.api import validator
from bilean.common.i18n import _
from bilean.common import params
from bilean.common import serializers
from bilean.common import wsgi
from bilean.rpc import client as rpc_client


class RuleData(object):
    '''The data accompanying a POST/PUT request to create/update a rule.'''

    def __init__(self, data):
        self.data = data

    def name(self):
        if params.RULE_NAME not in self.data:
            raise exc.HTTPBadRequest(_("No rule name specified"))
        return self.data[params.RULE_NAME]

    def spec(self):
        if params.RULE_SPEC not in self.data:
            raise exc.HTTPBadRequest(_("No rule spec provided"))
        return self.data[params.RULE_SPEC]

    def metadata(self):
        return self.data.get(params.RULE_METADATA, None)


class RuleController(object):
    """WSGI controller for Rules in Bilean v1 API

    Implements the API actions
    """
    # Define request scope (must match what is in policy.json)
    REQUEST_SCOPE = 'rules'

    def __init__(self, options):
        self.options = options
        self.rpc_client = rpc_client.EngineClient()

    def default(self, req, **args):
        raise exc.HTTPNotFound()

    @util.policy_enforce
    def index(self, req):
        """Lists summary information for all rules"""
        rule_list = self.rpc_client.list_rules(req.context)
        return dict(rules=rule_list)

    @util.policy_enforce
    def show(self, req, rule_id):
        """Gets detailed information for a rule"""
        return self.rpc_client.show_rule(req.context, rule_id)

    @util.policy_enforce
    def create(self, req, body):
        """Create a new rule"""
        if not validator.is_valid_body(body):
            raise exc.HTTPUnprocessableEntity()

        rule_data = body.get('rule')
        data = RuleData(rule_data)
        result = self.rpc_client.rule_create(req.context,
                                             data.name(),
                                             data.spec(),
                                             data.metadata())
        return {'rule': result}

    @util.policy_enforce
    def delete(self, req, rule_id):
        """Delete a rule with given rule_id"""
        res = self.rpc_client.delete_rule(req.context, rule_id)

        if res is not None:
            raise exc.HTTPBadRequest(res['Error'])

        raise exc.HTTPNoContent()


def create_resource(options):
    """Rule resource factory method."""
    deserializer = wsgi.JSONRequestDeserializer()
    serializer = serializers.JSONResponseSerializer()
    return wsgi.Resource(RuleController(options), deserializer, serializer)
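RuleData above validates required fields lazily: the controller builds the object from the raw request body first, and a missing field only raises when its accessor is called. A minimal sketch of the pattern, with plain strings standing in for the `params` constants and `ValueError` standing in for the webob bad-request exception:

```python
class RuleData(object):
    """Wrap a request body; required fields raise on access, not construction."""

    def __init__(self, data):
        self.data = data

    def name(self):
        # Required field: absent -> error at access time.
        if 'name' not in self.data:
            raise ValueError("No rule name specified")
        return self.data['name']

    def metadata(self):
        # Optional field: absent -> None.
        return self.data.get('metadata', None)


data = RuleData({'name': 'price-rule'})
print(data.name())
print(data.metadata())
```

Deferring the check to the accessor means the error surfaces exactly where the controller consumes the field, e.g. inside the `rule_create` RPC call argument list.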


@ -0,0 +1,134 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import itertools
from webob import exc
from bilean.api.openstack.v1 import util
from bilean.api import validator
from bilean.common import exception
from bilean.common.i18n import _
from bilean.common import serializers
from bilean.common import wsgi
from bilean.rpc import client as rpc_client
from oslo_log import log as logging
LOG = logging.getLogger(__name__)
def format_user(req, res, keys=None):
keys = keys or []
include_key = lambda k: k in keys if keys else True
def transform(key, value):
if not include_key(key):
return
else:
yield (key, value)
return dict(itertools.chain.from_iterable(
transform(k, v) for k, v in res.items()))
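A standalone sketch of the key-filtering pattern `format_user` implements, with the `req` parameter dropped and a plain dict standing in for the RPC result (the user data below is hypothetical):

```python
import itertools

def format_user(res, keys=None):
    # Keep every (key, value) pair when no whitelist is given,
    # otherwise keep only whitelisted keys.
    keys = keys or []
    include_key = lambda k: k in keys if keys else True

    def transform(key, value):
        if include_key(key):
            yield (key, value)

    return dict(itertools.chain.from_iterable(
        transform(k, v) for k, v in res.items()))

user = {'id': 'u-1', 'balance': 9.5, 'credit': 100}
print(format_user(user))                          # all keys kept
print(format_user(user, keys=['id', 'balance']))  # whitelist applied
```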
class UserController(object):
"""WSGI controller for Users in Bilean v1 API
Implements the API actions
"""
# Define request scope (must match what is in policy.json)
REQUEST_SCOPE = 'users'
def __init__(self, options):
self.options = options
self.rpc_client = rpc_client.EngineClient()
@util.policy_enforce
def index(self, req):
"""Lists summary information for all users"""
user_list = self.rpc_client.list_users(req.context)
return {'users': [format_user(req, u) for u in user_list]}
@util.policy_enforce
def show(self, req, user_id):
"""Gets detailed information for a user"""
try:
return self.rpc_client.show_user(req.context, user_id)
except exception.NotFound:
msg = _("User with id: %s could not be found") % user_id
raise exc.HTTPNotFound(explanation=msg)
@util.policy_enforce
def update(self, req, user_id, body):
"""Update a specified user
:param user_id: ID of the user to update
"""
if not validator.is_valid_body(body):
raise exc.HTTPUnprocessableEntity()
update_dict = {}
if 'balance' in body:
balance = body.get('balance')
try:
validator.validate_float(balance, 'User_balance', 0, 1000000)
except exception.InvalidInput as e:
raise exc.HTTPBadRequest(explanation=e.format_message())
update_dict['balance'] = balance
if 'credit' in body:
credit = body.get('credit')
try:
validator.validate_integer(credit, 'User_credit', 0, 100000)
except exception.InvalidInput as e:
raise exc.HTTPBadRequest(explanation=e.format_message())
update_dict['credit'] = credit
if 'status' in body:
status = body.get('status')
try:
validator.validate_string(status, 'User_status',
available_fields=['active',
'freeze'])
except exception.InvalidInput as e:
raise exc.HTTPBadRequest(explanation=e.format_message())
update_dict['status'] = status
if 'action' in body:
action = body.get('action')
try:
validator.validate_string(action, 'Action',
available_fields=['recharge',
'update',
'deduct'])
except exception.InvalidInput as e:
raise exc.HTTPBadRequest(explanation=e.format_message())
update_dict['action'] = action
try:
return self.rpc_client.update_user(req.context,
user_id,
update_dict)
except exception.NotFound:
msg = _("User with id: %s could not be found") % user_id
raise exc.HTTPNotFound(explanation=msg)
except Exception as e:
LOG.error(e)
raise
def create_resource(options):
"""User resource factory method."""
deserializer = wsgi.JSONRequestDeserializer()
serializer = serializers.JSONResponseSerializer()
return wsgi.Resource(UserController(options), deserializer, serializer)


@@ -0,0 +1,67 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import functools
import six
from webob import exc
from oslo_utils import timeutils
def policy_enforce(handler):
"""Decorator that enforces policies.
Checks that the path matches the request context and enforces the policy
defined in policy.json.
This is a handler method decorator.
"""
@functools.wraps(handler)
def handle_bilean_method(controller, req, tenant_id, **kwargs):
if req.context.tenant_id != tenant_id:
raise exc.HTTPForbidden()
allowed = req.context.policy.enforce(context=req.context,
action=handler.__name__,
scope=controller.REQUEST_SCOPE)
if not allowed:
raise exc.HTTPForbidden()
return handler(controller, req, **kwargs)
return handle_bilean_method
def get_allowed_params(params, whitelist):
"""Extract from ``params`` all entries listed in ``whitelist``.
The returned dict will contain an entry for a key if, and only if,
there's an entry in ``whitelist`` for that key and at least one entry in
``params``. If ``params`` contains multiple entries for the same key, it
will yield an array of values: ``{key: [v1, v2,...]}``
:param params: a NestedMultiDict from webob.Request.params
:param whitelist: a dict mapping allowed parameter names to their expected
types; values typed 'timestamp' are parsed into naive datetimes
:returns: a dict with {key: value} pairs
"""
allowed_params = {}
for key, key_type in six.iteritems(whitelist):
value = params.get(key)
if value:
if key_type == 'timestamp':
value = timeutils.parse_isotime(value)
value = value.replace(tzinfo=None)
allowed_params[key] = value
return allowed_params
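A minimal usage sketch of `get_allowed_params`, with a plain dict standing in for webob's `NestedMultiDict` and `datetime.strptime` in place of `oslo_utils.timeutils.parse_isotime` (the query values are hypothetical):

```python
from datetime import datetime

def get_allowed_params(params, whitelist):
    # whitelist maps allowed keys to their expected type; values typed
    # 'timestamp' are parsed into naive datetime objects.
    allowed_params = {}
    for key, key_type in whitelist.items():
        value = params.get(key)
        if value:
            if key_type == 'timestamp':
                value = datetime.strptime(value, '%Y-%m-%dT%H:%M:%S')
            allowed_params[key] = value
    return allowed_params

query = {'status': 'active', 'start': '2015-12-24T04:07:45', 'junk': 'x'}
filtered = get_allowed_params(query, {'status': 'string',
                                      'start': 'timestamp'})
# 'junk' is dropped; 'start' is parsed into a datetime
```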


@@ -0,0 +1,54 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Controller that returns information on the bilean API versions
"""
import httplib
import json
import webob.dec
class Controller(object):
"""A controller that produces information on the bilean API versions."""
def __init__(self, conf):
self.conf = conf
@webob.dec.wsgify
def __call__(self, req):
"""Respond to a request for all OpenStack API versions."""
version_objs = [
{
"id": "v1.0",
"status": "CURRENT",
"links": [
{
"rel": "self",
"href": self.get_href(req)
}]
}]
body = json.dumps(dict(versions=version_objs))
response = webob.Response(request=req,
status=httplib.MULTIPLE_CHOICES,
content_type='application/json')
response.body = body
return response
def get_href(self, req):
return "%s/v1/" % req.host_url
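For illustration, the JSON body the version controller above produces, assuming a hypothetical `host_url` of `http://bilean.example.com:8770`:

```python
import json

# Mirrors the version_objs structure built in __call__ above.
version_objs = [{
    "id": "v1.0",
    "status": "CURRENT",
    "links": [{"rel": "self",
               "href": "http://bilean.example.com:8770/v1/"}],
}]
body = json.dumps({"versions": version_objs})
print(body)
```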

202
bilean/api/validator.py Normal file

@@ -0,0 +1,202 @@
# Copyright 2011 Cloudscaling, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import six
from bilean.common import exception
from bilean.common.i18n import _
from bilean.common import params
from oslo_log import log as logging
from oslo_utils import uuidutils
LOG = logging.getLogger(__name__)
def _validate_uuid_format(uid):
return uuidutils.is_uuid_like(uid)
def is_valid_body(body, entity_name=None):
if entity_name is not None:
if not (body and entity_name in body):
return False
def is_dict(d):
try:
d.get(None)
return True
except AttributeError:
return False
return is_dict(body)
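A self-contained sketch of the `is_valid_body` duck-typing check above (the sample bodies are hypothetical):

```python
def is_valid_body(body, entity_name=None):
    # When an entity name is given, the body must contain that key.
    if entity_name is not None:
        if not (body and entity_name in body):
            return False

    # Duck-type check: anything with a dict-like .get() counts as a dict.
    def is_dict(d):
        try:
            d.get(None)
            return True
        except AttributeError:
            return False

    return is_dict(body)

assert is_valid_body({'rule': {'name': 'r1'}}, 'rule')
assert not is_valid_body({}, 'rule')            # missing entity
assert not is_valid_body(['not', 'a', 'dict'])  # not dict-like
```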
def validate(args, validator):
"""Validate values of args against validators in validator.
:param args: Dict of values to be validated.
:param validator: A dict where the keys map to keys in args
and the values are validators.
Applies each validator to ``args[key]``
:returns: True if validation succeeds. Otherwise False.
A validator should be a callable which accepts 1 argument and which
returns True if the argument passes validation. False otherwise.
A validator should not raise an exception to indicate validity of the
argument.
Only validates keys which show up in both args and validator.
"""
for key in validator:
if key not in args:
continue
f = validator[key]
assert callable(f)
if not f(args[key]):
LOG.debug("%(key)s with value %(value)s failed"
" validator %(name)s",
{'key': key, 'value': args[key], 'name': f.__name__})
return False
return True
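A usage sketch of the `validate()` contract described above: each validator is a callable returning True/False, and keys absent from `args` are skipped (the example validators are hypothetical):

```python
def validate(args, validator):
    # Only validates keys present in both args and validator.
    for key in validator:
        if key not in args:
            continue
        f = validator[key]
        assert callable(f)
        if not f(args[key]):
            return False
    return True

checks = {
    'credit': lambda v: isinstance(v, int) and v >= 0,
    'status': lambda v: v in ('active', 'freeze'),
}
assert validate({'credit': 10, 'status': 'active'}, checks)
assert not validate({'credit': -1}, checks)
assert validate({'unrelated': 'ignored'}, checks)  # unknown keys skipped
```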
def validate_string(value, name=None, min_length=0, max_length=None,
available_fields=None):
"""Check the length of specified string
:param value: the value of the string
:param name: the name of the string
:param min_length: the min_length of the string
:param max_length: the max_length of the string
"""
if not isinstance(value, six.string_types):
if name is None:
msg = _("The input is not a string or unicode")
else:
msg = _("%s is not a string or unicode") % name
raise exception.InvalidInput(message=msg)
if name is None:
name = value
if available_fields:
if value not in available_fields:
msg = _("%(name)s must be in %(fields)s") % {
'name': name, 'fields': available_fields}
raise exception.InvalidInput(message=msg)
if len(value) < min_length:
msg = _("%(name)s has a minimum character requirement of "
"%(min_length)s.") % {'name': name, 'min_length': min_length}
raise exception.InvalidInput(message=msg)
if max_length and len(value) > max_length:
msg = _("%(name)s has more than %(max_length)s "
"characters.") % {'name': name, 'max_length': max_length}
raise exception.InvalidInput(message=msg)
def validate_resource(resource):
"""Make sure that resource is valid"""
if not is_valid_body(resource):
msg = _("%s is not a dict") % resource
raise exception.InvalidInput(message=msg)
if resource.get('resource_type'):
validate_string(resource['resource_type'],
available_fields=params.RESOURCE_TYPES)
else:
msg = _('Expected resource_type field for resource')
raise exception.InvalidInput(reason=msg)
if resource.get('value'):
validate_integer(resource['value'], 'resource_value', min_value=1)
else:
msg = _('Expected resource value field for resource')
raise exception.InvalidInput(reason=msg)
def validate_integer(value, name, min_value=None, max_value=None):
"""Make sure that value is a valid integer, potentially within range."""
try:
value = int(str(value))
except (ValueError, UnicodeEncodeError):
msg = _('%(value_name)s must be an integer')
raise exception.InvalidInput(reason=(
msg % {'value_name': name}))
if min_value is not None:
if value < min_value:
msg = _('%(value_name)s must be >= %(min_value)d')
raise exception.InvalidInput(
reason=(msg % {'value_name': name,
'min_value': min_value}))
if max_value is not None:
if value > max_value:
msg = _('%(value_name)s must be <= %(max_value)d')
raise exception.InvalidInput(
reason=(
msg % {'value_name': name,
'max_value': max_value})
)
return value
def validate_float(value, name, min_value=None, max_value=None):
"""Make sure that value is a valid float, potentially within range."""
try:
value = float(str(value))
except (ValueError, UnicodeEncodeError):
msg = _('%(value_name)s must be a float')
raise exception.InvalidInput(reason=(
msg % {'value_name': name}))
if min_value is not None:
if value < min_value:
msg = _('%(value_name)s must be >= %(min_value)s')
raise exception.InvalidInput(
reason=(msg % {'value_name': name,
'min_value': min_value}))
if max_value is not None:
if value > max_value:
msg = _('%(value_name)s must be <= %(max_value)s')
raise exception.InvalidInput(
reason=(
msg % {'value_name': name,
'max_value': max_value}))
return value
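A usage sketch of the range validators above, with a stand-in exception class in place of `bilean.common.exception.InvalidInput`:

```python
class InvalidInput(Exception):
    pass

def validate_integer(value, name, min_value=None, max_value=None):
    # Coerce via str() so both '42' and 42 are accepted.
    try:
        value = int(str(value))
    except (ValueError, UnicodeEncodeError):
        raise InvalidInput('%s must be an integer' % name)
    if min_value is not None and value < min_value:
        raise InvalidInput('%s must be >= %s' % (name, min_value))
    if max_value is not None and value > max_value:
        raise InvalidInput('%s must be <= %s' % (name, max_value))
    return value

assert validate_integer('42', 'User_credit', 0, 100000) == 42
try:
    validate_integer(-1, 'User_credit', min_value=0)
except InvalidInput:
    pass  # out-of-range values raise InvalidInput
```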
def is_none_string(val):
"""Check if a string represents a None value."""
if not isinstance(val, six.string_types):
return False
return val.lower() == 'none'
def check_isinstance(obj, cls):
"""Checks that obj is of type cls, and lets PyLint infer types."""
if isinstance(obj, cls):
return obj
raise Exception(_('Expected object of type: %s') % (str(cls)))

0
bilean/cmd/__init__.py Normal file

91
bilean/cmd/manage.py Normal file

@@ -0,0 +1,91 @@
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
CLI interface for bilean management.
"""
import sys
from oslo_config import cfg
from oslo_log import log as logging
from bilean.common.i18n import _
from bilean.db import api
from bilean.db import utils
from bilean import version
CONF = cfg.CONF
def do_db_version():
"""Print database's current migration level."""
print(api.db_version(api.get_engine()))
def do_db_sync():
"""Place a database under migration control and upgrade it,
creating it first if necessary.
"""
api.db_sync(api.get_engine(), CONF.command.version)
def purge_deleted():
"""Remove database records that have been previously soft deleted."""
utils.purge_deleted(CONF.command.age, CONF.command.granularity)
def add_command_parsers(subparsers):
parser = subparsers.add_parser('db_version')
parser.set_defaults(func=do_db_version)
parser = subparsers.add_parser('db_sync')
parser.set_defaults(func=do_db_sync)
parser.add_argument('version', nargs='?')
parser.add_argument('current_version', nargs='?')
parser = subparsers.add_parser('purge_deleted')
parser.set_defaults(func=purge_deleted)
parser.add_argument('age', nargs='?', default='90',
help=_('How long to preserve deleted data.'))
parser.add_argument(
'-g', '--granularity', default='days',
choices=['days', 'hours', 'minutes', 'seconds'],
help=_('Granularity to use for age argument, defaults to days.'))
command_opt = cfg.SubCommandOpt('command',
title='Commands',
help='Show available commands.',
handler=add_command_parsers)
def main():
logging.register_options(CONF)
logging.setup(CONF, 'bilean-manage')
CONF.register_cli_opt(command_opt)
try:
default_config_files = cfg.find_config_files('bilean', 'bilean-engine')
CONF(sys.argv[1:], project='bilean', prog='bilean-manage',
version=version.version_info.version_string(),
default_config_files=default_config_files)
except RuntimeError as e:
sys.exit("ERROR: %s" % e)
try:
CONF.command.func()
except Exception as e:
sys.exit("ERROR: %s" % e)


167
bilean/common/config.py Normal file

@@ -0,0 +1,167 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Routines for configuring Bilean."""
import logging as sys_logging
import os
import socket
from oslo_concurrency import processutils
from oslo_config import cfg
from oslo_log import log as logging
from bilean.common.i18n import _
from bilean.common import wsgi
LOG = logging.getLogger(__name__)
paste_deploy_group = cfg.OptGroup('paste_deploy')
paste_deploy_opts = [
cfg.StrOpt('api_paste_config', default="api-paste.ini",
help=_("The API paste config file to use."))]
service_opts = [
cfg.IntOpt('periodic_interval',
default=60,
help=_('Seconds between running periodic tasks.')),
cfg.StrOpt('region_name_for_services',
help=_('Default region name used to get services endpoints.')),
cfg.IntOpt('max_response_size',
default=524288,
help=_('Maximum raw byte size of data from web response.')),
cfg.IntOpt('num_engine_workers',
default=processutils.get_worker_count(),
help=_('Number of bilean-engine processes to fork and run.')),
cfg.StrOpt('environment_dir',
default='/etc/bilean/environments',
help=_('The directory to search for environment files.')),]
rpc_opts = [
cfg.StrOpt('host',
default=socket.gethostname(),
help=_('Name of the engine node. '
'This can be an opaque identifier. '
'It is not necessarily a hostname, FQDN, '
'or IP address.'))]
authentication_group = cfg.OptGroup('authentication')
authentication_opts = [
cfg.StrOpt('auth_url', default='',
help=_('Complete public identity V3 API endpoint.')),
cfg.StrOpt('service_username', default='bilean',
help=_('Bilean service user name')),
cfg.StrOpt('service_password', default='',
help=_('Password specified for the Bilean service user.')),
cfg.StrOpt('service_project_name', default='service',
help=_('Name of the service project.')),
cfg.StrOpt('service_user_domain', default='Default',
help=_('Name of the domain for the service user.')),
cfg.StrOpt('service_project_domain', default='Default',
help=_('Name of the domain for the service project.')),
]
clients_group = cfg.OptGroup('clients')
clients_opts = [
cfg.StrOpt('endpoint_type',
help=_(
'Type of endpoint in Identity service catalog to use '
'for communication with the OpenStack service.')),
cfg.StrOpt('ca_file',
help=_('Optional CA cert file to use in SSL connections.')),
cfg.StrOpt('cert_file',
help=_('Optional PEM-formatted certificate chain file.')),
cfg.StrOpt('key_file',
help=_('Optional PEM-formatted file that contains the '
'private key.')),
cfg.BoolOpt('insecure',
help=_("If set, then the server's certificate will not "
"be verified."))]
client_http_log_debug_opts = [
cfg.BoolOpt('http_log_debug',
default=False,
help=_("Allow client's debug log output."))]
revision_group = cfg.OptGroup('revision')
revision_opts = [
cfg.StrOpt('bilean_api_revision', default='1.0',
help=_('Bilean API revision.')),
cfg.StrOpt('bilean_engine_revision', default='1.0',
help=_('Bilean engine revision.'))]
def list_opts():
yield None, rpc_opts
yield None, service_opts
yield paste_deploy_group.name, paste_deploy_opts
yield authentication_group.name, authentication_opts
yield revision_group.name, revision_opts
yield clients_group.name, clients_opts
cfg.CONF.register_group(paste_deploy_group)
cfg.CONF.register_group(authentication_group)
cfg.CONF.register_group(revision_group)
cfg.CONF.register_group(clients_group)
for group, opts in list_opts():
cfg.CONF.register_opts(opts, group=group)
def _get_deployment_config_file():
"""Retrieve the deployment config file (paste_deploy.api_paste_config).
The item is returned as an absolute pathname.
"""
config_path = cfg.CONF.find_file(
cfg.CONF.paste_deploy['api_paste_config'])
if config_path is None:
return None
return os.path.abspath(config_path)
def load_paste_app(app_name=None):
"""Builds and returns a WSGI app from a paste config file.
We assume the last config file specified in the supplied ConfigOpts
object is the paste config file.
:param app_name: name of the application to load
:raises RuntimeError: when the config file cannot be located or the
application cannot be loaded from the config file
"""
if app_name is None:
app_name = cfg.CONF.prog
conf_file = _get_deployment_config_file()
if conf_file is None:
raise RuntimeError(_("Unable to locate config file [%s]") %
cfg.CONF.paste_deploy['api_paste_config'])
try:
app = wsgi.paste_deploy_app(conf_file, app_name, cfg.CONF)
# Log the options used when starting if we're in debug mode...
if cfg.CONF.debug:
cfg.CONF.log_opt_values(logging.getLogger(app_name),
sys_logging.DEBUG)
return app
except (LookupError, ImportError) as e:
raise RuntimeError(_("Unable to load %(app_name)s from "
"configuration file %(conf_file)s."
"\nGot: %(e)r") % {'app_name': app_name,
'conf_file': conf_file,
'e': e})

299
bilean/common/context.py Normal file

@@ -0,0 +1,299 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from keystoneclient import access
from keystoneclient import auth
from keystoneclient.auth.identity import access as access_plugin
from keystoneclient.auth.identity import v3
from keystoneclient.auth import token_endpoint
from oslo_config import cfg
from oslo_context import context
from oslo_log import log as logging
import oslo_messaging
from oslo_middleware import request_id as oslo_request_id
from oslo_utils import importutils
import six
from bilean.common import exception
from bilean.common.i18n import _LE, _LW
from bilean.common import policy
from bilean.common import wsgi
from bilean.db import api as db_api
from bilean.engine import clients
LOG = logging.getLogger(__name__)
TRUSTEE_CONF_GROUP = 'trustee'
auth.register_conf_options(cfg.CONF, TRUSTEE_CONF_GROUP)
cfg.CONF.import_group('authentication', 'bilean.common.config')
class RequestContext(context.RequestContext):
"""Stores information about the security context.
Under the security context the user accesses the system, as well as
additional request information.
"""
def __init__(self, auth_token=None, username=None, password=None,
aws_creds=None, tenant=None, user_id=None,
tenant_id=None, auth_url=None, roles=None, is_admin=None,
read_only=False, show_deleted=False,
overwrite=True, trust_id=None, trustor_user_id=None,
request_id=None, auth_token_info=None, region_name=None,
auth_plugin=None, trusts_auth_plugin=None, **kwargs):
"""Initialisation of the request context.
:param overwrite: Set to False to ensure that the greenthread local
copy of the index is not overwritten.
:param kwargs: Extra arguments that might be present, but we ignore
because they possibly came in from older rpc messages.
"""
super(RequestContext, self).__init__(auth_token=auth_token,
user=username, tenant=tenant,
is_admin=is_admin,
read_only=read_only,
show_deleted=show_deleted,
request_id=request_id)
self.username = username
self.user_id = user_id
self.password = password
self.region_name = region_name
self.aws_creds = aws_creds
self.tenant_id = tenant_id
self.auth_token_info = auth_token_info
self.auth_url = auth_url
self.roles = roles or []
self._session = None
self._clients = None
self.trust_id = trust_id
self.trustor_user_id = trustor_user_id
self.policy = policy.Enforcer()
self._auth_plugin = auth_plugin
self._trusts_auth_plugin = trusts_auth_plugin
if is_admin is None:
self.is_admin = self.policy.check_is_admin(self)
else:
self.is_admin = is_admin
@property
def session(self):
if self._session is None:
self._session = db_api.get_session()
return self._session
@property
def clients(self):
if self._clients is None:
self._clients = clients.Clients(self)
return self._clients
def to_dict(self):
user_idt = '{user} {tenant}'.format(user=self.user_id or '-',
tenant=self.tenant_id or '-')
return {'auth_token': self.auth_token,
'username': self.username,
'user_id': self.user_id,
'password': self.password,
'aws_creds': self.aws_creds,
'tenant': self.tenant,
'tenant_id': self.tenant_id,
'trust_id': self.trust_id,
'trustor_user_id': self.trustor_user_id,
'auth_token_info': self.auth_token_info,
'auth_url': self.auth_url,
'roles': self.roles,
'is_admin': self.is_admin,
'user': self.user,
'request_id': self.request_id,
'show_deleted': self.show_deleted,
'region_name': self.region_name,
'user_identity': user_idt}
@classmethod
def from_dict(cls, values):
return cls(**values)
@property
def keystone_v3_endpoint(self):
if self.auth_url:
return self.auth_url.replace('v2.0', 'v3')
raise exception.AuthorizationFailure()
@property
def trusts_auth_plugin(self):
if self._trusts_auth_plugin:
return self._trusts_auth_plugin
self._trusts_auth_plugin = auth.load_from_conf_options(
cfg.CONF, TRUSTEE_CONF_GROUP, trust_id=self.trust_id)
if self._trusts_auth_plugin:
return self._trusts_auth_plugin
LOG.warning(_LW('Using the keystone_authtoken user as the bilean '
'trustee user directly is deprecated. Please add the '
'trustee credentials you need to the %s section of '
'your bilean.conf file.'), TRUSTEE_CONF_GROUP)
cfg.CONF.import_group('keystone_authtoken',
'keystonemiddleware.auth_token')
self._trusts_auth_plugin = v3.Password(
username=cfg.CONF.keystone_authtoken.admin_user,
password=cfg.CONF.keystone_authtoken.admin_password,
user_domain_id='default',
auth_url=self.keystone_v3_endpoint,
trust_id=self.trust_id)
return self._trusts_auth_plugin
def _create_auth_plugin(self):
if self.auth_token_info:
auth_ref = access.AccessInfo.factory(body=self.auth_token_info,
auth_token=self.auth_token)
return access_plugin.AccessInfoPlugin(
auth_url=self.keystone_v3_endpoint,
auth_ref=auth_ref)
if self.auth_token:
# FIXME(jamielennox): This is broken but consistent. If you
# only have a token but don't load a service catalog then
# url_for won't work. Stub with the keystone endpoint so at
# least it might be right.
return token_endpoint.Token(endpoint=self.keystone_v3_endpoint,
token=self.auth_token)
if self.password:
return v3.Password(username=self.username,
password=self.password,
project_id=self.tenant_id,
user_domain_id='default',
auth_url=self.keystone_v3_endpoint)
LOG.error(_LE("Keystone v3 API connection failed: no password, "
"trust, or auth_token available!"))
raise exception.AuthorizationFailure()
def reload_auth_plugin(self):
self._auth_plugin = None
@property
def auth_plugin(self):
if not self._auth_plugin:
if self.trust_id:
self._auth_plugin = self.trusts_auth_plugin
else:
self._auth_plugin = self._create_auth_plugin()
return self._auth_plugin
def get_admin_context(show_deleted=False):
return RequestContext(is_admin=True, show_deleted=show_deleted)
def get_service_context(show_deleted=False):
conf = cfg.CONF.authentication
return RequestContext(username=conf.service_username,
password=conf.service_password,
tenant=conf.service_project_name,
auth_url=conf.auth_url)
class ContextMiddleware(wsgi.Middleware):
def __init__(self, app, conf, **local_conf):
# Determine the context class to use
self.ctxcls = RequestContext
if 'context_class' in local_conf:
self.ctxcls = importutils.import_class(local_conf['context_class'])
super(ContextMiddleware, self).__init__(app)
def make_context(self, *args, **kwargs):
"""Create a context with the given arguments."""
return self.ctxcls(*args, **kwargs)
def process_request(self, req):
"""Constructs an appropriate context from extracted auth information.
Extract any authentication information in the request and construct an
appropriate context from it.
"""
headers = req.headers
environ = req.environ
try:
username = None
password = None
aws_creds = None
if headers.get('X-Auth-User') is not None:
username = headers.get('X-Auth-User')
password = headers.get('X-Auth-Key')
elif headers.get('X-Auth-EC2-Creds') is not None:
aws_creds = headers.get('X-Auth-EC2-Creds')
user_id = headers.get('X-User-Id')
token = headers.get('X-Auth-Token')
tenant = headers.get('X-Project-Name')
tenant_id = headers.get('X-Project-Id')
region_name = headers.get('X-Region-Name')
auth_url = headers.get('X-Auth-Url')
roles = headers.get('X-Roles')
if roles is not None:
roles = roles.split(',')
token_info = environ.get('keystone.token_info')
auth_plugin = environ.get('keystone.token_auth')
req_id = environ.get(oslo_request_id.ENV_REQUEST_ID)
except Exception:
raise exception.NotAuthenticated()
req.context = self.make_context(auth_token=token,
tenant=tenant, tenant_id=tenant_id,
aws_creds=aws_creds,
username=username,
user_id=user_id,
password=password,
auth_url=auth_url,
roles=roles,
request_id=req_id,
auth_token_info=token_info,
region_name=region_name,
auth_plugin=auth_plugin)
def ContextMiddleware_filter_factory(global_conf, **local_conf):
"""Factory method for paste.deploy."""
conf = global_conf.copy()
conf.update(local_conf)
def filter(app):
return ContextMiddleware(app, conf)
return filter
def request_context(func):
@six.wraps(func)
def wrapped(self, ctx, *args, **kwargs):
try:
return func(self, ctx, *args, **kwargs)
except exception.BileanException:
raise oslo_messaging.rpc.dispatcher.ExpectedException()
return wrapped

277
bilean/common/exception.py Normal file

@@ -0,0 +1,277 @@
#
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
'''
Bilean exception subclasses.
'''
import sys
from oslo_log import log as logging
import six
from bilean.common.i18n import _
from bilean.common.i18n import _LE
_FATAL_EXCEPTION_FORMAT_ERRORS = False
LOG = logging.getLogger(__name__)
class BileanException(Exception):
'''Base Bilean Exception.
To correctly use this class, inherit from it and define
a 'msg_fmt' property. That msg_fmt will get printf'd
with the keyword arguments provided to the constructor.
'''
msg_fmt = _("An unknown exception occurred.")
def __init__(self, **kwargs):
self.kwargs = kwargs
try:
self.message = self.msg_fmt % kwargs
except KeyError:
# exc_info = sys.exc_info()
# if kwargs doesn't match a variable in the message
# log the issue and the kwargs
LOG.exception(_LE('Exception in string format operation'))
for name, value in six.iteritems(kwargs):
LOG.error("%s: %s" % (name, value)) # noqa
if _FATAL_EXCEPTION_FORMAT_ERRORS:
raise
# raise exc_info[0], exc_info[1], exc_info[2]
def __str__(self):
return six.text_type(self.message)
def __unicode__(self):
return six.text_type(self.message)
def __deepcopy__(self, memo):
return self.__class__(**self.kwargs)
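A minimal sketch of the `msg_fmt` pattern the subclasses below rely on: each subclass sets a class-level format string, and the constructor keyword arguments fill it in:

```python
class BileanException(Exception):
    msg_fmt = "An unknown exception occurred."

    def __init__(self, **kwargs):
        self.kwargs = kwargs
        try:
            self.message = self.msg_fmt % kwargs
        except KeyError:
            # Mismatched kwargs: fall back to the raw format string.
            self.message = self.msg_fmt

    def __str__(self):
        return self.message

class RuleTypeNotFound(BileanException):
    msg_fmt = "Rule type (%(rule_type)s) is not found."

err = RuleTypeNotFound(rule_type='os.nova.server')
assert str(err) == "Rule type (os.nova.server) is not found."
```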
class SIGHUPInterrupt(BileanException):
msg_fmt = _("System SIGHUP signal received.")
class NotAuthenticated(BileanException):
msg_fmt = _("You are not authenticated.")
class Forbidden(BileanException):
msg_fmt = _("You are not authorized to complete this action.")
class BileanBadRequest(BileanException):
msg_fmt = _("The request is malformed: %(msg)s")
class MultipleChoices(BileanException):
msg_fmt = _("Multiple results found matching the query criteria %(arg)s. "
"Please be more specific.")
class InvalidParameter(BileanException):
msg_fmt = _("Invalid value '%(value)s' specified for '%(name)s'")
class ClusterNotFound(BileanException):
msg_fmt = _("The cluster (%(cluster)s) could not be found.")
class NodeNotFound(BileanException):
msg_fmt = _("The node (%(node)s) could not be found.")
class RuleTypeNotFound(BileanException):
msg_fmt = _("Rule type (%(rule_type)s) is not found.")
class ProfileTypeNotMatch(BileanException):
msg_fmt = _("%(message)s")
class ProfileNotFound(BileanException):
msg_fmt = _("The profile (%(profile)s) could not be found.")
class ProfileNotSpecified(BileanException):
msg_fmt = _("Profile not specified.")
class ProfileOperationFailed(BileanException):
msg_fmt = _("%(message)s")
class ProfileOperationTimeout(BileanException):
msg_fmt = _("%(message)s")
class PolicyNotSpecified(BileanException):
msg_fmt = _("Policy not specified.")
class PolicyTypeNotFound(BileanException):
msg_fmt = _("Policy type (%(policy_type)s) is not found.")
class PolicyNotFound(BileanException):
msg_fmt = _("The policy (%(policy)s) could not be found.")
class PolicyBindingNotFound(BileanException):
msg_fmt = _("The policy (%(policy)s) is not found attached to the "
"specified cluster (%(identity)s).")
class PolicyTypeConflict(BileanException):
msg_fmt = _("The policy with type (%(policy_type)s) already exists.")
class InvalidSchemaError(BileanException):
msg_fmt = _("%(message)s")
class SpecValidationFailed(BileanException):
msg_fmt = _("%(message)s")
class FeatureNotSupported(BileanException):
msg_fmt = _("%(feature)s is not supported.")
class Error(BileanException):
msg_fmt = "%(message)s"
def __init__(self, msg):
super(Error, self).__init__(message=msg)
class ResourceInUse(BileanException):
msg_fmt = _("The %(resource_type)s (%(resource_id)s) is still in use.")
class InvalidContentType(BileanException):
msg_fmt = _("Invalid content type %(content_type)s")
class RequestLimitExceeded(BileanException):
msg_fmt = _('Request limit exceeded: %(message)s')
class WebhookNotFound(BileanException):
msg_fmt = _("The webhook (%(webhook)s) could not be found.")
class ReceiverNotFound(BileanException):
msg_fmt = _("The receiver (%(receiver)s) could not be found.")
class ActionNotFound(BileanException):
msg_fmt = _("The action (%(action)s) could not be found.")
class ActionInProgress(BileanException):
msg_fmt = _("Cluster %(cluster_name)s already has an action (%(action)s) "
"in progress.")
class EventNotFound(BileanException):
msg_fmt = _("The event (%(event)s) could not be found.")
class NodeNotOrphan(BileanException):
msg_fmt = _("%(message)s")
class InternalError(BileanException):
'''A base class for internal exceptions in bilean.
The internal exception classes which inherit from :class:`InternalError`
class should be translated to a user facing exception type if need to be
made user visible.
'''
msg_fmt = _('ERROR %(code)s happens for %(message)s.')
message = _('Internal error happens')
def __init__(self, **kwargs):
super(InternalError, self).__init__(**kwargs)
        if 'code' in kwargs:
            self.code = kwargs['code']
self.message = kwargs.get('message')
class ResourceBusyError(InternalError):
msg_fmt = _("The %(resource_type)s (%(resource_id)s) is busy now.")
class TrustNotFound(InternalError):
# Internal exception, not to be exposed to end user.
msg_fmt = _("The trust for trustor (%(trustor)s) could not be found.")
class ResourceCreationFailure(InternalError):
# Used when creating resources in other services
msg_fmt = _("Failed in creating %(rtype)s.")
class ResourceUpdateFailure(InternalError):
# Used when updating resources from other services
msg_fmt = _("Failed in updating %(resource)s.")
class ResourceDeletionFailure(InternalError):
# Used when deleting resources from other services
msg_fmt = _("Failed in deleting %(resource)s.")
class ResourceNotFound(InternalError):
# Used when retrieving resources from other services
msg_fmt = _("The resource (%(resource)s) could not be found.")
class ResourceStatusError(InternalError):
msg_fmt = _("The resource %(resource_id)s is in error status "
"- '%(status)s' due to '%(reason)s'.")
class InvalidPlugin(InternalError):
msg_fmt = _("%(message)s")
class InvalidSpec(InternalError):
msg_fmt = _("%(message)s")
class PolicyNotAttached(InternalError):
msg_fmt = _("The policy (%(policy)s) is not attached to the specified "
"cluster (%(cluster)s).")
class HTTPExceptionDisguise(Exception):
"""Disguises HTTP exceptions.
The purpose is to let them be handled by the webob fault application
in the wsgi pipeline.
"""
def __init__(self, exception):
self.exc = exception
self.tb = sys.exc_info()[2]
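The formatting contract of `BileanException` can be exercised in isolation; a minimal sketch with a no-op `_` standing in for oslo_i18n (the class names here are illustrative, not part of bilean):

```python
# Minimal sketch of the msg_fmt interpolation used by BileanException.
# DemoException and the identity `_` are illustrative stand-ins.
_ = lambda s: s


class DemoException(Exception):
    msg_fmt = _("An unknown exception occurred.")

    def __init__(self, **kwargs):
        self.kwargs = kwargs
        try:
            # Keyword arguments fill the named placeholders in msg_fmt.
            self.message = self.msg_fmt % kwargs
        except KeyError:
            # Mismatched kwargs: fall back to the raw format string.
            self.message = self.msg_fmt

    def __str__(self):
        return self.message


class InvalidParameter(DemoException):
    msg_fmt = _("Invalid value '%(value)s' specified for '%(name)s'")


print(InvalidParameter(value=-1, name='size'))
# Invalid value '-1' specified for 'size'
```

Subclasses only need to define `msg_fmt`; the base `__init__` does the interpolation and the error-logging fallback.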

bilean/common/i18n.py Normal file

@ -0,0 +1,35 @@
# Copyright 2014 IBM Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# It's based on oslo.i18n usage in OpenStack Keystone project and
# recommendations from http://docs.openstack.org/developer/oslo.i18n/usage.html
import oslo_i18n
_translators = oslo_i18n.TranslatorFactory(domain='bilean')
# The primary translation function using the well-known name "_"
_ = _translators.primary
# Translators for log levels.
#
# The abbreviated names are meant to reflect the usual use of a short
# name like '_'. The "L" is for "log" and the other letter comes from
# the level.
_LI = _translators.log_info
_LW = _translators.log_warning
_LE = _translators.log_error
_LC = _translators.log_critical

bilean/common/messaging.py Normal file

@ -0,0 +1,139 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import eventlet
from oslo_config import cfg
import oslo_messaging
from oslo_serialization import jsonutils
from bilean.common import context
NOTIFIER = None
TRANSPORTS = {}
TRANSPORT = None
DEFAULT_URL = "__default__"
class RequestContextSerializer(oslo_messaging.Serializer):
def __init__(self, base):
self._base = base
def serialize_entity(self, ctxt, entity):
if not self._base:
return entity
return self._base.serialize_entity(ctxt, entity)
def deserialize_entity(self, ctxt, entity):
if not self._base:
return entity
return self._base.deserialize_entity(ctxt, entity)
@staticmethod
def serialize_context(ctxt):
return ctxt.to_dict()
@staticmethod
def deserialize_context(ctxt):
return context.RequestContext.from_dict(ctxt)
class JsonPayloadSerializer(oslo_messaging.NoOpSerializer):
@classmethod
def serialize_entity(cls, context, entity):
return jsonutils.to_primitive(entity, convert_instances=True)
def setup(url=None, optional=False):
"""Initialise the oslo_messaging layer."""
global TRANSPORT, TRANSPORTS, NOTIFIER
if url and url.startswith("fake://"):
# NOTE(sileht): oslo_messaging fake driver uses time.sleep
# for task switch, so we need to monkey_patch it
eventlet.monkey_patch(time=True)
if not TRANSPORT:
oslo_messaging.set_transport_defaults('bilean')
exmods = ['bilean.common.exception']
try:
TRANSPORT = oslo_messaging.get_transport(
cfg.CONF, url, allowed_remote_exmods=exmods)
except oslo_messaging.InvalidTransportURL as e:
TRANSPORT = None
if not optional or e.url:
# NOTE(sileht): oslo_messaging is configured but unloadable
# so reraise the exception
raise
if not NOTIFIER and TRANSPORT:
serializer = RequestContextSerializer(JsonPayloadSerializer())
NOTIFIER = oslo_messaging.Notifier(TRANSPORT, serializer=serializer)
TRANSPORTS[url] = TRANSPORT
def cleanup():
"""Cleanup the oslo_messaging layer."""
    global TRANSPORT, TRANSPORTS, NOTIFIER
    for url in list(TRANSPORTS):
        TRANSPORTS[url].cleanup()
        del TRANSPORTS[url]
    TRANSPORT = NOTIFIER = None
def get_transport(url=None, optional=False, cache=True):
"""Initialise the oslo_messaging layer."""
global TRANSPORTS, DEFAULT_URL
cache_key = url or DEFAULT_URL
transport = TRANSPORTS.get(cache_key)
if not transport or not cache:
try:
transport = oslo_messaging.get_transport(cfg.CONF, url)
except oslo_messaging.InvalidTransportURL as e:
if not optional or e.url:
# NOTE(sileht): oslo_messaging is configured but unloadable
# so reraise the exception
raise
return None
else:
if cache:
TRANSPORTS[cache_key] = transport
return transport
def get_rpc_server(target, endpoint):
"""Return a configured oslo_messaging rpc server."""
serializer = RequestContextSerializer(JsonPayloadSerializer())
return oslo_messaging.get_rpc_server(TRANSPORT, target, [endpoint],
executor='eventlet',
serializer=serializer)
def get_rpc_client(**kwargs):
"""Return a configured oslo_messaging RPCClient."""
target = oslo_messaging.Target(**kwargs)
serializer = RequestContextSerializer(JsonPayloadSerializer())
return oslo_messaging.RPCClient(TRANSPORT, target,
serializer=serializer)
def get_notification_listener(transport, targets, endpoints,
allow_requeue=False):
"""Return a configured oslo_messaging notification listener."""
return oslo_messaging.get_notification_listener(
transport, targets, endpoints, executor='eventlet',
allow_requeue=allow_requeue)
def get_notifier(publisher_id):
"""Return a configured oslo_messaging notifier."""
return NOTIFIER.prepare(publisher_id=publisher_id)
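The URL-keyed cache in `get_transport()` is the part worth noting; a stdlib-only sketch in which `FakeTransport` stands in for an oslo_messaging transport (an assumption for illustration):

```python
# Stdlib sketch of the URL-keyed transport cache in get_transport().
# FakeTransport stands in for an oslo_messaging transport (assumption).
DEFAULT_URL = "__default__"
TRANSPORTS = {}


class FakeTransport(object):
    def __init__(self, url):
        self.url = url

    def cleanup(self):
        pass


def get_transport(url=None, cache=True):
    cache_key = url or DEFAULT_URL
    transport = TRANSPORTS.get(cache_key)
    if not transport or not cache:
        transport = FakeTransport(url)
        if cache:
            TRANSPORTS[cache_key] = transport
    return transport


t1 = get_transport("rabbit://host")
t2 = get_transport("rabbit://host")               # served from the cache
t3 = get_transport("rabbit://host", cache=False)  # always a new instance
```

Passing `cache=False` bypasses both lookup and storage, which is what lets callers obtain a throwaway transport without touching the shared module state.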

bilean/common/params.py Normal file

@ -0,0 +1,104 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# RESOURCE_TYPES = ["instance", "volume", "bandwidth", "ha", "router", "rdb",
# "load_balancer", "snapshot", "self_image"]
# RESOURCE_STATUS = ["active", "paused"]
MIN_VALUE = "1"
MAX_VALUE = "100000000"
RPC_ATTRs = (
ENGINE_TOPIC,
ENGINE_HEALTH_MGR_TOPIC,
NOTIFICATION_TOPICS,
RPC_API_VERSION,
) = (
'bilean-engine',
'engine-health_mgr',
'billing_notifications',
'1.0',
)
USER_KEYS = (
USER_ID,
USER_BALANCE,
USER_RATE,
USER_CREDIT,
USER_LAST_BILL,
USER_STATUS,
USER_STATUS_REASON,
USER_CREATED_AT,
USER_UPDATED_AT
) = (
'id',
'balance',
'rate',
'credit',
'last_bill',
'status',
'status_reason',
'created_at',
'updated_at'
)
RES_KEYS = (
RES_ID,
RES_RESOURCE_TYPE,
RES_SIZE,
RES_RATE,
RES_STATUS,
RES_STATUS_REASON,
RES_USER_ID,
RES_RULE_ID,
RES_RESOURCE_REF,
RES_CREATED_AT,
RES_UPDATED_AT,
RES_DELETED_AT,
RES_DELETED
) = (
'id',
'resource_type',
'size',
'rate',
'status',
'status_reason',
'user_id',
'rule_id',
'resource_ref',
'created_at',
'updated_at',
'deleted_at',
'deleted'
)
RULE_KEYS = (
RULE_ID,
RULE_NAME,
RULE_TYPE,
RULE_SPEC,
RULE_METADATA,
RULE_UPDATED_AT,
RULE_CREATED_AT,
RULE_DELETED_AT,
) = (
'id',
'name',
'type',
'spec',
'metadata',
'updated_at',
'created_at',
'deleted_at',
)
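The constant groups above use the parallel-tuple idiom: one chained assignment binds the group tuple and unpacks the same tuple into the member names. A minimal demonstration:

```python
# The parallel-tuple idiom: chained assignment binds RULE_KEYS to the
# whole tuple while unpacking that tuple into the member names.
RULE_KEYS = (
    RULE_ID,
    RULE_NAME,
) = (
    'id',
    'name',
)
```

This keeps the symbolic names and their string values aligned positionally, so adding a key means editing both tuples in the same place.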

bilean/common/policy.py Normal file

@ -0,0 +1,115 @@
#
# Copyright (c) 2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# Based on glance/api/policy.py
"""Policy Engine For Bilean."""
from oslo_config import cfg
from oslo_log import log as logging
from oslo_policy import policy
import six
from bilean.common import exception
CONF = cfg.CONF
LOG = logging.getLogger(__name__)
DEFAULT_RULES = policy.Rules.from_dict({'default': '!'})
DEFAULT_RESOURCE_RULES = policy.Rules.from_dict({'default': '@'})
class Enforcer(object):
"""Responsible for loading and enforcing rules."""
def __init__(self, scope='bilean', exc=exception.Forbidden,
default_rule=DEFAULT_RULES['default'], policy_file=None):
self.scope = scope
self.exc = exc
self.default_rule = default_rule
self.enforcer = policy.Enforcer(
CONF, default_rule=default_rule, policy_file=policy_file)
def set_rules(self, rules, overwrite=True):
"""Create a new Rules object based on the provided dict of rules."""
rules_obj = policy.Rules(rules, self.default_rule)
self.enforcer.set_rules(rules_obj, overwrite)
def load_rules(self, force_reload=False):
"""Set the rules found in the json file on disk."""
self.enforcer.load_rules(force_reload)
def _check(self, context, rule, target, exc, *args, **kwargs):
"""Verifies that the action is valid on the target in this context.
:param context: Bilean request context
:param rule: String representing the action to be checked
:param target: Dictionary representing the object of the action.
:raises: self.exc (defaults to bilean.common.exception.Forbidden)
:returns: A non-False value if access is allowed.
"""
        do_raise = bool(exc)
credentials = context.to_dict()
return self.enforcer.enforce(rule, target, credentials,
do_raise, exc=exc, *args, **kwargs)
def enforce(self, context, action, scope=None, target=None):
"""Verifies that the action is valid on the target in this context.
:param context: Bilean request context
:param action: String representing the action to be checked
:param target: Dictionary representing the object of the action.
:raises: self.exc (defaults to bilean.common.exception.Forbidden)
:returns: A non-False value if access is allowed.
"""
_action = '%s:%s' % (scope or self.scope, action)
_target = target or {}
return self._check(context, _action, _target, self.exc, action=action)
def check_is_admin(self, context):
"""Whether or not roles contains 'admin' role according to policy.json.
:param context: Bilean request context
:returns: A non-False value if the user is admin according to policy
"""
return self._check(context, 'context_is_admin', target={}, exc=None)
class ResourceEnforcer(Enforcer):
def __init__(self, default_rule=DEFAULT_RESOURCE_RULES['default'],
**kwargs):
super(ResourceEnforcer, self).__init__(
default_rule=default_rule, **kwargs)
def enforce(self, context, res_type, scope=None, target=None):
# NOTE(pas-ha): try/except just to log the exception
try:
result = super(ResourceEnforcer, self).enforce(
context, res_type,
scope=scope or 'resource_types',
target=target)
except self.exc as ex:
LOG.info(six.text_type(ex))
raise
if not result:
if self.exc:
raise self.exc(action=res_type)
else:
return result
def enforce_stack(self, stack, scope=None, target=None):
for res in stack.resources.values():
self.enforce(stack.context, res.type(), scope=scope, target=target)
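`Enforcer.enforce()` composes the rule name as `scope:action` before delegating to oslo_policy; a small sketch of that composition (the rules dict below is a stand-in for illustration, not oslo_policy's API):

```python
# Sketch of the rule-name composition done by Enforcer.enforce() and
# ResourceEnforcer.enforce(). The rules dict stands in for the compiled
# oslo_policy rules (assumption).
def compose_rule(action, scope=None, default_scope='bilean'):
    return '%s:%s' % (scope or default_scope, action)


rules = {
    'bilean:rule_create': '',            # rule bodies elided
    'resource_types:os.nova.server': '',
}

assert compose_rule('rule_create') == 'bilean:rule_create'
# ResourceEnforcer defaults the scope to 'resource_types':
assert compose_rule('os.nova.server', scope='resource_types') in rules
```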

bilean/common/schema.py Normal file

@ -0,0 +1,430 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import collections
import numbers
import six
from oslo_log import log as logging
from oslo_utils import strutils
from bilean.common import exception
from bilean.common.i18n import _
LOG = logging.getLogger(__name__)
class AnyIndexDict(collections.Mapping):
'''Convenience schema for a list.'''
def __init__(self, value):
self.value = value
def __getitem__(self, key):
if key != '*' and not isinstance(key, six.integer_types):
raise KeyError(_('Invalid key %s') % str(key))
return self.value
def __iter__(self):
yield '*'
def __len__(self):
return 1
class Schema(collections.Mapping):
'''Class for validating rule specifications.'''
KEYS = (
TYPE, DESCRIPTION, DEFAULT, REQUIRED, SCHEMA, UPDATABLE,
CONSTRAINTS, READONLY,
) = (
'type', 'description', 'default', 'required', 'schema', 'updatable',
'constraints', 'readonly',
)
TYPES = (
INTEGER, STRING, NUMBER, BOOLEAN, MAP, LIST,
) = (
'Integer', 'String', 'Number', 'Boolean', 'Map', 'List',
)
def __init__(self, description=None, default=None,
required=False, schema=None, updatable=False,
readonly=False, constraints=None):
if schema is not None:
if type(self) not in (List, Map):
msg = _('Schema valid only for List or Map, not '
'"%s"') % self[self.TYPE]
raise exception.InvalidSchemaError(message=msg)
if self[self.TYPE] == self.LIST:
self.schema = AnyIndexDict(schema)
else:
self.schema = schema
self.description = description
self.default = default
self.required = required
self.updatable = updatable
self.constraints = constraints or []
self.readonly = readonly
self._len = None
def has_default(self):
return self.default is not None
def get_default(self):
return self.resolve(self.default)
def _validate_default(self, context):
if self.default is None:
return
try:
self.validate(self.default, context)
except (ValueError, TypeError) as exc:
raise exception.InvalidSchemaError(
message=_('Invalid default %(default)s (%(exc)s)') %
dict(default=self.default, exc=exc))
def validate(self, context=None):
'''Validates the schema.
This method checks if the schema itself is valid.
'''
self._validate_default(context)
# validated nested schema: List or Map
if self.schema:
if isinstance(self.schema, AnyIndexDict):
self.schema.value.validate(context)
else:
for nested_schema in self.schema.values():
nested_schema.validate(context)
def validate_constraints(self, value, context=None, skipped=None):
if not skipped:
skipped = []
try:
for constraint in self.constraints:
if type(constraint) not in skipped:
constraint.validate(value, context)
except ValueError as ex:
raise exception.SpecValidationFailed(message=six.text_type(ex))
def __getitem__(self, key):
if key == self.DESCRIPTION:
if self.description is not None:
return self.description
elif key == self.DEFAULT:
if self.default is not None:
return self.default
elif key == self.SCHEMA:
if self.schema is not None:
return dict((n, dict(s)) for n, s in self.schema.items())
elif key == self.REQUIRED:
return self.required
elif key == self.READONLY:
return self.readonly
elif key == self.CONSTRAINTS:
if self.constraints:
return [dict(c) for c in self.constraints]
raise KeyError(key)
def __iter__(self):
for k in self.KEYS:
try:
self[k]
except KeyError:
pass
else:
yield k
def __len__(self):
if self._len is None:
self._len = len(list(iter(self)))
return self._len
class Boolean(Schema):
def __getitem__(self, key):
if key == self.TYPE:
return self.BOOLEAN
else:
return super(Boolean, self).__getitem__(key)
def to_schema_type(self, value):
return strutils.bool_from_string(str(value), strict=True)
def resolve(self, value):
if str(value).lower() not in ('true', 'false'):
msg = _('The value "%s" is not a valid Boolean') % value
raise exception.SpecValidationFailed(message=msg)
return strutils.bool_from_string(value, strict=True)
def validate(self, value, context=None):
if isinstance(value, bool):
return
self.resolve(value)
class Integer(Schema):
def __getitem__(self, key):
if key == self.TYPE:
return self.INTEGER
else:
return super(Integer, self).__getitem__(key)
def to_schema_type(self, value):
if isinstance(value, six.integer_types):
return value
try:
num = int(value)
except ValueError:
            raise ValueError(_('%s is not an integer.') % value)
return num
def resolve(self, value):
try:
return int(value)
except (TypeError, ValueError):
msg = _('The value "%s" cannot be converted into an '
'integer.') % value
raise exception.SpecValidationFailed(message=msg)
def validate(self, value, context=None):
if not isinstance(value, six.integer_types):
value = self.resolve(value)
        self.validate_constraints(value, context)
class String(Schema):
def __getitem__(self, key):
if key == self.TYPE:
return self.STRING
else:
return super(String, self).__getitem__(key)
def to_schema_type(self, value):
return str(value)
def resolve(self, value):
        # str() conversion; any TypeError/ValueError propagates as-is.
        return str(value)
def validate(self, value, context=None):
if not isinstance(value, six.string_types):
msg = _('The value "%s" cannot be converted into a '
'string.') % value
raise exception.SpecValidationFailed(message=msg)
self.resolve(value)
        self.validate_constraints(value, context)
class Number(Schema):
def __getitem__(self, key):
if key == self.TYPE:
return self.NUMBER
else:
return super(Number, self).__getitem__(key)
def to_schema_type(self, value):
if isinstance(value, numbers.Number):
return value
try:
return int(value)
except ValueError:
return float(value)
def resolve(self, value):
if isinstance(value, numbers.Number):
return value
try:
return int(value)
except ValueError:
return float(value)
def validate(self, value, context=None):
if isinstance(value, numbers.Number):
return
self.resolve(value)
        self.validate_constraints(value, context)
class List(Schema):
def __getitem__(self, key):
if key == self.TYPE:
return self.LIST
else:
return super(List, self).__getitem__(key)
def _get_children(self, values, keys, context):
sub_schema = self.schema
if sub_schema is not None:
# We have a child schema specified for list elements
# Fake a dict of array elements, since we have only one schema
schema_arr = dict((k, sub_schema[k]) for k in keys)
subspec = Spec(schema_arr, dict(values))
subspec.validate()
return ((k, subspec[k]) for k in keys)
else:
return values
def get_default(self):
if not isinstance(self.default, collections.Sequence):
raise TypeError(_('"%s" is not a List') % self.default)
return self.default
def resolve(self, value, context=None):
if not isinstance(value, collections.Sequence):
raise TypeError(_('"%s" is not a List') % value)
return [v[1] for v in self._get_children(enumerate(value),
list(range(len(value))),
context)]
    def validate(self, value, context=None):
        if not isinstance(value, collections.Sequence):
            raise TypeError(_('"%s" is not a List') % value)
        # Validate the elements by resolving them against the
        # per-element schema.
        self.resolve(value, context)
class Map(Schema):
def __getitem__(self, key):
if key == self.TYPE:
return self.MAP
else:
return super(Map, self).__getitem__(key)
def _get_children(self, values, context=None):
# There are cases where the Map is not specified to the very detailed
# levels, we treat them as valid specs as well.
if self.schema is None:
return values
sub_schema = self.schema
if sub_schema is not None:
            # sub_schema should be a dict here
subspec = Spec(sub_schema, dict(values))
subspec.validate()
return ((k, subspec[k]) for k in sub_schema)
else:
return values
def get_default(self):
if not isinstance(self.default, collections.Mapping):
raise TypeError(_('"%s" is not a Map') % self.default)
return self.default
def resolve(self, value, context=None):
if not isinstance(value, collections.Mapping):
raise TypeError(_('"%s" is not a Map') % value)
return dict(self._get_children(six.iteritems(value), context))
def validate(self, value, context=None):
if not isinstance(value, collections.Mapping):
raise TypeError(_('"%s" is not a Map') % value)
for key, child in self.schema.items():
item_value = value.get(key)
child.validate(item_value, context)
class Spec(collections.Mapping):
'''A class that contains all spec items.'''
def __init__(self, schema, data):
self._schema = schema
self._data = data
def validate(self):
'''Validate the schema.'''
for (k, s) in self._schema.items():
try:
# validate through resolve
self.resolve_value(k)
except (TypeError, ValueError) as err:
msg = _('Spec validation error (%(key)s): %(err)s') % dict(
key=k, err=six.text_type(err))
raise exception.SpecValidationFailed(message=msg)
for key in self._data:
if key not in self._schema:
msg = _('Unrecognizable spec item "%s"') % key
raise exception.SpecValidationFailed(message=msg)
def resolve_value(self, key):
if key not in self:
raise KeyError(_('Invalid spec item: "%s"') % key)
schema_item = self._schema[key]
if key in self._data:
raw_value = self._data[key]
return schema_item.resolve(raw_value)
elif schema_item.has_default():
return schema_item.get_default()
elif schema_item.required:
raise ValueError(_('Required spec item "%s" not assigned') % key)
def __getitem__(self, key):
'''Lazy evaluation for spec items.'''
return self.resolve_value(key)
def __len__(self):
'''Number of items in the spec.
        A spec always contains all keys though some may not be specified.
'''
return len(self._schema)
def __contains__(self, key):
return key in self._schema
def __iter__(self):
return iter(self._schema)
def get_spec_version(spec):
if not isinstance(spec, dict):
msg = _('The provided spec is not a map.')
raise exception.SpecValidationFailed(message=msg)
if 'type' not in spec:
msg = _("The 'type' key is missing from the provided spec map.")
raise exception.SpecValidationFailed(message=msg)
if 'version' not in spec:
msg = _("The 'version' key is missing from the provided spec map.")
raise exception.SpecValidationFailed(message=msg)
return (spec['type'], spec['version'])
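`Spec.resolve_value()` resolves in the order: supplied value, then schema default, then a required-item error. A stripped-down sketch using plain callables in place of the Schema classes (`Item` and its fields are illustrative stand-ins):

```python
# Stripped-down sketch of Spec.resolve_value(): a supplied value wins,
# then the schema default, then required items raise. Item is an
# illustrative stand-in for the Schema classes above.
class Item(object):
    def __init__(self, default=None, required=False, resolve=int):
        self.default = default
        self.required = required
        self.resolve = resolve


def resolve_value(schema, data, key):
    item = schema[key]
    if key in data:
        return item.resolve(data[key])   # validation happens via resolve
    if item.default is not None:
        return item.default
    if item.required:
        raise ValueError('Required spec item "%s" not assigned' % key)


schema = {'size': Item(default=1),
          'rate': Item(required=True, resolve=float)}
assert resolve_value(schema, {'size': '5'}, 'size') == 5
assert resolve_value(schema, {}, 'size') == 1    # falls back to default
```

This is why `Spec.__getitem__` can be "lazy": values are only converted and validated when a key is actually read.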


@ -0,0 +1,41 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Utility methods for serializing responses
"""
import datetime
from oslo_log import log as logging
from oslo_serialization import jsonutils
from oslo_utils import encodeutils
import six
LOG = logging.getLogger(__name__)
class JSONResponseSerializer(object):
def to_json(self, data):
def sanitizer(obj):
if isinstance(obj, datetime.datetime):
return obj.isoformat()
return six.text_type(obj)
response = jsonutils.dumps(data, default=sanitizer, sort_keys=True)
        LOG.debug("JSON response: %s", response)
return response
def default(self, response, result):
response.content_type = 'application/json'
response.body = encodeutils.safe_encode(self.to_json(result))
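The datetime sanitizer is the only non-default behavior in `to_json()`; a stdlib sketch with `json` standing in for oslo_serialization's `jsonutils` (an assumption for illustration):

```python
# Stdlib sketch of the datetime sanitizer in JSONResponseSerializer;
# json stands in for oslo_serialization.jsonutils (assumption).
import datetime
import json


def sanitizer(obj):
    # Mirror to_json(): datetimes become ISO 8601 strings; any other
    # non-serializable object is stringified.
    if isinstance(obj, datetime.datetime):
        return obj.isoformat()
    return str(obj)


data = {'created_at': datetime.datetime(2015, 12, 24, 4, 7, 45)}
body = json.dumps(data, default=sanitizer, sort_keys=True)
# body == '{"created_at": "2015-12-24T04:07:45"}'
```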

bilean/common/utils.py Normal file

@ -0,0 +1,157 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
'''
Utilities module.
'''
import random
import string
from cryptography.fernet import Fernet
import requests
from requests import exceptions
from six.moves import urllib
from oslo_config import cfg
from oslo_log import log as logging
from oslo_utils import encodeutils
from oslo_utils import strutils
from bilean.common import exception
from bilean.common.i18n import _
from bilean.common.i18n import _LI
cfg.CONF.import_opt('max_response_size', 'bilean.common.config')
LOG = logging.getLogger(__name__)
class URLFetchError(exception.Error, IOError):
pass
def parse_int_param(name, value, allow_zero=True, allow_negative=False,
lower_limit=None, upper_limit=None):
if value is None:
return None
if value in ('0', 0):
if allow_zero:
return int(value)
raise exception.InvalidParameter(name=name, value=value)
try:
result = int(value)
except (TypeError, ValueError):
raise exception.InvalidParameter(name=name, value=value)
else:
        if any([(allow_negative is False and result < 0),
                (lower_limit is not None and result < lower_limit),
                (upper_limit is not None and result > upper_limit)]):
            raise exception.InvalidParameter(name=name, value=value)
return result
def parse_bool_param(name, value):
if str(value).lower() not in ('true', 'false'):
raise exception.InvalidParameter(name=name, value=str(value))
return strutils.bool_from_string(value, strict=True)
def url_fetch(url, allowed_schemes=('http', 'https')):
'''Get the data at the specified URL.
The URL must use the http: or https: schemes.
The file: scheme is also supported if you override
the allowed_schemes argument.
Raise an IOError if getting the data fails.
'''
LOG.info(_LI('Fetching data from %s'), url)
components = urllib.parse.urlparse(url)
if components.scheme not in allowed_schemes:
raise URLFetchError(_('Invalid URL scheme %s') % components.scheme)
if components.scheme == 'file':
try:
return urllib.request.urlopen(url).read()
except urllib.error.URLError as uex:
raise URLFetchError(_('Failed to retrieve data: %s') % uex)
try:
resp = requests.get(url, stream=True)
resp.raise_for_status()
        # We cannot use resp.text here because it would download the entire
        # file, and a large enough file would bring down the engine. The
        # 'Content-Length' header could be faked, so it's necessary to
        # download the content in chunks until max_response_size is reached.
        # The chunk_size we use needs to balance CPU-intensive string
        # concatenation with accuracy (e.g. it's possible to fetch 1000 bytes
        # more than max_response_size with a chunk_size of 1000).
reader = resp.iter_content(chunk_size=1000)
result = ""
for chunk in reader:
result += chunk
if len(result) > cfg.CONF.max_response_size:
raise URLFetchError("Data exceeds maximum allowed size (%s"
" bytes)" % cfg.CONF.max_response_size)
return result
except exceptions.RequestException as ex:
raise URLFetchError(_('Failed to retrieve data: %s') % ex)
def encrypt(msg):
    '''Encrypt message with a randomly generated key.
    :param msg: message to be encrypted
    :returns: tuple of (key needed for decryption, encrypted message)
    '''
    key = Fernet.generate_key()
    f = Fernet(key)
    cipher = f.encrypt(encodeutils.safe_encode(msg))
    return encodeutils.safe_decode(key), encodeutils.safe_decode(cipher)
def decrypt(msg, key):
    '''Decrypt message using provided key.
    Note that the argument order matches the tuple returned by encrypt():
    ``msg`` is the key used to decrypt and ``key`` is the encrypted payload.
    :param msg: key used to decrypt
    :param key: encrypted message
    :returns: decrypted message string
    '''
    f = Fernet(encodeutils.safe_encode(msg))
    msg = f.decrypt(encodeutils.safe_encode(key))
    return encodeutils.safe_decode(msg)
def random_name(length=8):
if length <= 0:
return ''
lead = random.choice(string.ascii_letters)
tail = ''.join(random.choice(string.ascii_letters + string.digits)
for i in range(length-1))
return lead + tail
def format_time(value):
"""Cut microsecond and format to isoformat string."""
if value:
value = value.replace(microsecond=0)
value = value.isoformat()
return value
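A quick standalone check of the guarantees made by `random_name()` (leading letter, requested length) and the microsecond truncation in `format_time()`, reproduced with the stdlib only:

```python
# Standalone check of random_name() and format_time() behavior
# (illustrative copies of the helpers above).
import datetime
import random
import string


def random_name(length=8):
    if length <= 0:
        return ''
    # A leading letter keeps names usable as identifiers.
    lead = random.choice(string.ascii_letters)
    tail = ''.join(random.choice(string.ascii_letters + string.digits)
                   for _ in range(length - 1))
    return lead + tail


def format_time(value):
    # Cut microseconds and render as an ISO 8601 string.
    if value:
        value = value.replace(microsecond=0).isoformat()
    return value


name = random_name()
assert len(name) == 8 and name[0].isalpha()
assert format_time(datetime.datetime(2015, 12, 24, 4, 7, 45, 999)) == \
    '2015-12-24T04:07:45'
```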

bilean/common/wsgi.py Normal file

@ -0,0 +1,923 @@
#
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# Copyright 2013 IBM Corp.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Utility methods for working with WSGI servers
"""
import abc
import errno
import logging as std_logging
import os
import signal
import sys
import time
import eventlet
from eventlet.green import socket
from eventlet.green import ssl
import eventlet.greenio
import eventlet.wsgi
import functools
from oslo_config import cfg
import oslo_i18n
from oslo_log import log as logging
from oslo_serialization import jsonutils
from oslo_utils import importutils
from paste import deploy
import routes
import routes.middleware
import six
import webob.dec
import webob.exc
from bilean.common import exception
from bilean.common.i18n import _
from bilean.common.i18n import _LE
from bilean.common.i18n import _LI
from bilean.common.i18n import _LW
from bilean.common import serializers
LOG = logging.getLogger(__name__)
URL_LENGTH_LIMIT = 50000
api_opts = [
cfg.IPOpt('bind_host', default='0.0.0.0',
help=_('Address to bind the server. Useful when '
'selecting a particular network interface.')),
cfg.PortOpt('bind_port', default=8770,
help=_('The port on which the server will listen.')),
cfg.IntOpt('backlog', default=4096,
help=_("Number of backlog requests "
"to configure the socket with.")),
cfg.StrOpt('cert_file',
help=_("Location of the SSL certificate file "
"to use for SSL mode.")),
cfg.StrOpt('key_file',
help=_("Location of the SSL key file to use "
"for enabling SSL mode.")),
cfg.IntOpt('workers', default=0,
help=_("Number of workers for Bilean service.")),
cfg.IntOpt('max_header_line', default=16384,
help=_('Maximum line size of message headers to be accepted. '
'max_header_line may need to be increased when using '
'large tokens (typically those generated by the '
'Keystone v3 API with big service catalogs).')),
cfg.IntOpt('tcp_keepidle', default=600,
help=_('The value for the socket option TCP_KEEPIDLE. This is '
'the time in seconds that the connection must be idle '
'before TCP starts sending keepalive probes.')),
]
api_group = cfg.OptGroup('bilean_api')
cfg.CONF.register_group(api_group)
cfg.CONF.register_opts(api_opts, group=api_group)
wsgi_eventlet_opts = [
cfg.BoolOpt('wsgi_keep_alive', default=True,
help=_("If false, closes the client socket explicitly.")),
cfg.IntOpt('client_socket_timeout', default=900,
help=_("Timeout for client connections' socket operations. "
"If an incoming connection is idle for this number of "
"seconds it will be closed. A value of '0' indicates "
"waiting forever.")),
]
wsgi_eventlet_group = cfg.OptGroup('eventlet_opts')
cfg.CONF.register_group(wsgi_eventlet_group)
cfg.CONF.register_opts(wsgi_eventlet_opts, group=wsgi_eventlet_group)
json_size_opt = cfg.IntOpt('max_json_body_size', default=1048576,
help=_('Maximum raw byte size of JSON request body.'
' Should be larger than max_template_size.'))
cfg.CONF.register_opt(json_size_opt)
def list_opts():
yield None, [json_size_opt]
yield 'bilean_api', api_opts
yield 'eventlet_opts', wsgi_eventlet_opts
def get_bind_addr(conf, default_port=None):
return (conf.bind_host, conf.bind_port or default_port)
def get_socket(conf, default_port):
'''Bind socket to bind ip:port in conf
:param conf: a cfg.ConfigOpts object
:param default_port: port to bind to if none is specified in conf
:returns: a socket object as returned from socket.listen or
ssl.wrap_socket if conf specifies cert_file
'''
bind_addr = get_bind_addr(conf, default_port)
# TODO(jaypipes): eventlet's greened socket module does not actually
# support IPv6 in getaddrinfo(). We need to get around this in the
# future or monitor upstream for a fix
address_family = [addr[0] for addr in socket.getaddrinfo(bind_addr[0],
bind_addr[1], socket.AF_UNSPEC, socket.SOCK_STREAM)
if addr[0] in (socket.AF_INET, socket.AF_INET6)][0]
cert_file = conf.cert_file
key_file = conf.key_file
use_ssl = cert_file or key_file
if use_ssl and (not cert_file or not key_file):
raise RuntimeError(_("When running server in SSL mode, you must "
"specify both a cert_file and key_file "
"option value in your configuration file"))
sock = None
retry_until = time.time() + 30
while not sock and time.time() < retry_until:
try:
sock = eventlet.listen(bind_addr, backlog=conf.backlog,
family=address_family)
except socket.error as err:
if err.args[0] != errno.EADDRINUSE:
raise
eventlet.sleep(0.1)
if not sock:
raise RuntimeError(_("Could not bind to %(bind_addr)s after trying "
" 30 seconds") % {'bind_addr': bind_addr})
return sock
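get_socket() retries EADDRINUSE binds for up to 30 seconds before giving up. The same retry-until-deadline pattern, sketched with the stdlib socket module instead of eventlet (function name and defaults here are illustrative):

```python
import errno
import socket
import time


def bind_with_retry(bind_addr, backlog=128, timeout=30.0):
    # Keep retrying while the address is in use; re-raise anything else.
    deadline = time.time() + timeout
    sock = None
    while sock is None and time.time() < deadline:
        try:
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.bind(bind_addr)
            s.listen(backlog)
            sock = s
        except OSError as err:
            if err.errno != errno.EADDRINUSE:
                raise
            time.sleep(0.1)
    if sock is None:
        raise RuntimeError('Could not bind to %s:%s after trying for '
                           '%s seconds' % (bind_addr[0], bind_addr[1],
                                           timeout))
    return sock


# Port 0 asks the OS for any free port, so this bind succeeds at once.
listener = bind_with_retry(('127.0.0.1', 0))
assert listener.getsockname()[1] > 0
listener.close()
```

The short sleep between attempts lets a previous listener finish releasing the port (e.g. during a restart) without busy-waiting.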
class WritableLogger(object):
"""A thin wrapper that responds to `write` and logs."""
def __init__(self, LOG, level=std_logging.DEBUG):
self.LOG = LOG
self.level = level
def write(self, msg):
self.LOG.log(self.level, msg.rstrip("\n"))
class Server(object):
"""Server class to manage multiple WSGI sockets and applications."""
def __init__(self, name, conf, threads=1000):
os.umask(0o27) # ensure files are created with the correct privileges
self._logger = logging.getLogger("eventlet.wsgi.server")
self._wsgi_logger = WritableLogger(self._logger)
self.name = name
self.threads = threads
self.children = set()
self.stale_children = set()
self.running = True
self.pgid = os.getpid()
self.conf = conf
try:
os.setpgid(self.pgid, self.pgid)
except OSError:
self.pgid = 0
def kill_children(self, *args):
"""Kills the entire process group."""
LOG.error(_LE('SIGTERM received'))
signal.signal(signal.SIGTERM, signal.SIG_IGN)
signal.signal(signal.SIGINT, signal.SIG_IGN)
self.running = False
os.killpg(0, signal.SIGTERM)
def hup(self, *args):
"""Reloads configuration files with zero down time."""
LOG.error(_LE('SIGHUP received'))
signal.signal(signal.SIGHUP, signal.SIG_IGN)
raise exception.SIGHUPInterrupt
def start(self, application, default_port):
"""Run a WSGI server with the given application.
:param application: The application to run in the WSGI server
:param conf: a cfg.ConfigOpts object
:param default_port: Port to bind to if none is specified in conf
"""
eventlet.wsgi.MAX_HEADER_LINE = self.conf.max_header_line
self.application = application
self.default_port = default_port
self.configure_socket()
self.start_wsgi()
def start_wsgi(self):
if self.conf.workers == 0:
# Useful for profiling, test, debug etc.
self.pool = eventlet.GreenPool(size=self.threads)
self.pool.spawn_n(self._single_run, self.application, self.sock)
return
LOG.info(_LI("Starting %d workers") % self.conf.workers)
signal.signal(signal.SIGTERM, self.kill_children)
signal.signal(signal.SIGINT, self.kill_children)
signal.signal(signal.SIGHUP, self.hup)
while len(self.children) < self.conf.workers:
self.run_child()
def wait_on_children(self):
"""Wait on children exit."""
while self.running:
try:
pid, status = os.wait()
if os.WIFEXITED(status) or os.WIFSIGNALED(status):
self._remove_children(pid)
self._verify_and_respawn_children(pid, status)
except OSError as err:
if err.errno not in (errno.EINTR, errno.ECHILD):
raise
except KeyboardInterrupt:
LOG.info(_LI('Caught keyboard interrupt. Exiting.'))
os.killpg(0, signal.SIGTERM)
break
except exception.SIGHUPInterrupt:
self.reload()
continue
eventlet.greenio.shutdown_safe(self.sock)
self.sock.close()
LOG.debug('Exited')
def configure_socket(self, old_conf=None, has_changed=None):
"""Ensure a socket exists and is appropriately configured.
This function is called on start up, and can also be
called in the event of a configuration reload.
When called for the first time a new socket is created.
If reloading and either bind_host or bind port have been
changed the existing socket must be closed and a new
socket opened (laws of physics).
In all other cases (bind_host/bind_port have not changed)
the existing socket is reused.
:param old_conf: Cached old configuration settings (if any)
:param has_changed: callable to determine if a parameter has changed
"""
new_sock = (old_conf is None or (
has_changed('bind_host') or
has_changed('bind_port')))
# check https
use_ssl = not (not self.conf.cert_file or not self.conf.key_file)
# Were we using https before?
old_use_ssl = (old_conf is not None and not (
not old_conf.get('key_file') or
not old_conf.get('cert_file')))
# Do we now need to perform an SSL wrap on the socket?
wrap_sock = use_ssl is True and (old_use_ssl is False or new_sock)
# Do we now need to perform an SSL unwrap on the socket?
unwrap_sock = use_ssl is False and old_use_ssl is True
if new_sock:
self._sock = None
if old_conf is not None:
self.sock.close()
_sock = get_socket(self.conf, self.default_port)
_sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
# sockets can hang around forever without keepalive
_sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
self._sock = _sock
if wrap_sock:
self.sock = ssl.wrap_socket(self._sock,
certfile=self.conf.cert_file,
keyfile=self.conf.key_file)
if unwrap_sock:
self.sock = self._sock
if new_sock and not use_ssl:
self.sock = self._sock
# Pick up newly deployed certs
if old_conf is not None and use_ssl is True and old_use_ssl is True:
if has_changed('cert_file'):
self.sock.certfile = self.conf.cert_file
if has_changed('key_file'):
self.sock.keyfile = self.conf.key_file
if new_sock or (old_conf is not None and has_changed('tcp_keepidle')):
# This option isn't available in the OS X version of eventlet
if hasattr(socket, 'TCP_KEEPIDLE'):
self.sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE,
self.conf.tcp_keepidle)
if old_conf is not None and has_changed('backlog'):
self.sock.listen(self.conf.backlog)
def _remove_children(self, pid):
if pid in self.children:
self.children.remove(pid)
LOG.info(_LI('Removed dead child %s'), pid)
elif pid in self.stale_children:
self.stale_children.remove(pid)
LOG.info(_LI('Removed stale child %s'), pid)
else:
LOG.warning(_LW('Unrecognised child %s'), pid)
def _verify_and_respawn_children(self, pid, status):
if len(self.stale_children) == 0:
LOG.debug('No stale children')
if os.WIFEXITED(status) and os.WEXITSTATUS(status) != 0:
LOG.error(_LE('Not respawning child %d, cannot '
'recover from termination'), pid)
if not self.children and not self.stale_children:
LOG.info(_LI('All workers have terminated. Exiting'))
self.running = False
else:
if len(self.children) < self.conf.workers:
self.run_child()
def stash_conf_values(self):
"""Make a copy of some of the current global CONF's settings.
Allows determining if any of these values have changed
when the config is reloaded.
"""
conf = {}
conf['bind_host'] = self.conf.bind_host
conf['bind_port'] = self.conf.bind_port
conf['backlog'] = self.conf.backlog
conf['key_file'] = self.conf.key_file
conf['cert_file'] = self.conf.cert_file
return conf
def reload(self):
"""Reload and re-apply configuration settings.
Existing child processes are sent a SIGHUP signal and will exit after
completing existing requests. New child processes, which will have the
updated configuration, are spawned. This avoids any interruption
to the service.
"""
def _has_changed(old, new, param):
old = old.get(param)
new = getattr(new, param)
return (new != old)
old_conf = self.stash_conf_values()
has_changed = functools.partial(_has_changed, old_conf, self.conf)
cfg.CONF.reload_config_files()
os.killpg(self.pgid, signal.SIGHUP)
self.stale_children = self.children
self.children = set()
# Ensure any logging config changes are picked up
logging.setup(cfg.CONF, self.name)
self.configure_socket(old_conf, has_changed)
self.start_wsgi()
def wait(self):
"""Wait until all servers have completed running."""
try:
if self.children:
self.wait_on_children()
else:
self.pool.waitall()
except KeyboardInterrupt:
pass
def run_child(self):
def child_hup(*args):
"""Shuts down child processes, existing requests are handled."""
signal.signal(signal.SIGHUP, signal.SIG_IGN)
eventlet.wsgi.is_accepting = False
self.sock.close()
pid = os.fork()
if pid == 0:
signal.signal(signal.SIGHUP, child_hup)
signal.signal(signal.SIGTERM, signal.SIG_DFL)
# ignore the interrupt signal to avoid a race whereby
# a child worker receives the signal before the parent
# and is respawned unnecessarily as a result
signal.signal(signal.SIGINT, signal.SIG_IGN)
# The child has no need to stash the unwrapped
# socket, and the reference prevents a clean
# exit on sighup
self._sock = None
self.run_server()
LOG.info(_LI('Child %d exiting normally'), os.getpid())
# self.pool.waitall() is now called in wsgi's server so
# it's safe to exit here
sys.exit(0)
else:
LOG.info(_LI('Started child %s'), pid)
self.children.add(pid)
def run_server(self):
"""Run a WSGI server."""
eventlet.wsgi.HttpProtocol.default_request_version = "HTTP/1.0"
eventlet.hubs.use_hub('poll')
eventlet.patcher.monkey_patch(all=False, socket=True)
self.pool = eventlet.GreenPool(size=self.threads)
socket_timeout = cfg.CONF.eventlet_opts.client_socket_timeout or None
try:
eventlet.wsgi.server(
self.sock, self.application,
custom_pool=self.pool,
url_length_limit=URL_LENGTH_LIMIT,
log=self._wsgi_logger,
debug=cfg.CONF.debug,
keepalive=cfg.CONF.eventlet_opts.wsgi_keep_alive,
socket_timeout=socket_timeout)
except socket.error as err:
if err.args[0] != errno.EINVAL:
raise
self.pool.waitall()
def _single_run(self, application, sock):
"""Start a WSGI server in a new green thread."""
LOG.info(_LI("Starting single process server"))
eventlet.wsgi.server(sock, application, custom_pool=self.pool,
url_length_limit=URL_LENGTH_LIMIT,
log=self._wsgi_logger, debug=cfg.CONF.debug)
class Middleware(object):
"""Base WSGI middleware wrapper.
These classes require an application to be initialized that will be called
next. By default the middleware will simply call its wrapped app, or you
can override __call__ to customize its behavior.
"""
def __init__(self, application):
self.application = application
def process_request(self, request):
"""Called on each request.
If this returns None, the next application down the stack will be
executed. If it returns a response then that response will be returned
and execution will stop here.
:param request: A request object to be processed.
:returns: None.
"""
return None
def process_response(self, response):
"""Customize the response."""
return response
@webob.dec.wsgify
def __call__(self, request):
response = self.process_request(request)
if response:
return response
response = request.get_response(self.application)
return self.process_response(response)
class Debug(Middleware):
"""Helper class that can be inserted into any WSGI application chain."""
@webob.dec.wsgify
def __call__(self, req):
print(("*" * 40) + " REQUEST ENVIRON")
for key, value in req.environ.items():
print(key, "=", value)
print('')
resp = req.get_response(self.application)
print(("*" * 40) + " RESPONSE HEADERS")
for (key, value) in six.iteritems(resp.headers):
print(key, "=", value)
print('')
resp.app_iter = self.print_generator(resp.app_iter)
return resp
@staticmethod
def print_generator(app_iter):
# Iterator that prints the contents of a wrapper string iterator
# when iterated.
print(("*" * 40) + " BODY")
for part in app_iter:
sys.stdout.write(part)
sys.stdout.flush()
yield part
print('')
def debug_filter(app, conf, **local_conf):
return Debug(app)
class DefaultMethodController(object):
"""A default controller for handling requests.
This controller handles the OPTIONS request method and any of the
HTTP methods that are not explicitly implemented by the application.
"""
def options(self, req, allowed_methods, *args, **kwargs):
"""Handler of the OPTIONS request method.
Return a response that includes the 'Allow' header listing the methods
that are implemented. A 204 status code is used for this response.
"""
raise webob.exc.HTTPNoContent(headers=[('Allow', allowed_methods)])
def reject(self, req, allowed_methods, *args, **kwargs):
"""Return a 405 method not allowed error.
As a convenience, the 'Allow' header with the list of implemented
methods is included in the response as well.
"""
raise webob.exc.HTTPMethodNotAllowed(
headers=[('Allow', allowed_methods)])
class Router(object):
"""WSGI middleware that maps incoming requests to WSGI apps."""
def __init__(self, mapper):
"""Create a router for the given routes.Mapper."""
self.map = mapper
self._router = routes.middleware.RoutesMiddleware(self._dispatch,
self.map)
@webob.dec.wsgify
def __call__(self, req):
"""Route the incoming request to a controller based on self.map."""
return self._router
@staticmethod
@webob.dec.wsgify
def _dispatch(req):
"""Private dispatch method.
Called by self._router() after matching the incoming request to
a route and putting the information into req.environ.
:returns: Either returns 404 or the routed WSGI app's response.
"""
match = req.environ['wsgiorg.routing_args'][1]
if not match:
return webob.exc.HTTPNotFound()
app = match['controller']
return app
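Router defers the matching itself to routes.middleware.RoutesMiddleware, which stores the match dict in `req.environ['wsgiorg.routing_args']` before _dispatch picks out the controller. The lookup step alone can be sketched without routes or webob:

```python
# Sketch of _dispatch's controller lookup: RoutesMiddleware places the
# match dict at environ['wsgiorg.routing_args'][1] before we run.
def dispatch(environ):
    match = environ['wsgiorg.routing_args'][1]
    if not match:
        return '404 Not Found'
    controller = match['controller']
    return controller(match)


def show_user(match):
    # Hypothetical controller, for illustration only.
    return 'user %s' % match['user_id']


environ = {'wsgiorg.routing_args': (None, {'controller': show_user,
                                           'user_id': 'abc'})}
assert dispatch(environ) == 'user abc'
assert dispatch({'wsgiorg.routing_args': (None, {})}) == '404 Not Found'
```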
class Request(webob.Request):
"""Add some OpenStack API-specific logic to the base webob.Request."""
def best_match_content_type(self):
"""Determine the requested response content-type."""
supported = ('application/json',)
bm = self.accept.best_match(supported)
return bm or 'application/json'
def get_content_type(self, allowed_content_types):
"""Determine content type of the request body."""
if "Content-Type" not in self.headers:
raise exception.InvalidContentType(content_type=None)
content_type = self.content_type
if content_type not in allowed_content_types:
raise exception.InvalidContentType(content_type=content_type)
else:
return content_type
def best_match_language(self):
"""Determines best available locale from the Accept-Language header.
:returns: the best language match or None if the 'Accept-Language'
header was not available in the request.
"""
if not self.accept_language:
return None
all_languages = oslo_i18n.get_available_languages('bilean')
return self.accept_language.best_match(all_languages)
def is_json_content_type(request):
content_type = request.content_type
if not content_type or content_type.startswith('text/plain'):
content_type = 'application/json'
if (content_type in ('JSON', 'application/json') and
request.body.startswith(b'{')):
return True
return False
class JSONRequestDeserializer(object):
def has_body(self, request):
"""Returns whether a Webob.Request object will possess an entity body.
:param request: A Webob.Request object
"""
if request is None or request.content_length is None:
return False
if request.content_length > 0 and is_json_content_type(request):
return True
return False
def from_json(self, datastring):
try:
if len(datastring) > cfg.CONF.max_json_body_size:
msg = _('JSON body size (%(len)s bytes) exceeds maximum '
'allowed size (%(limit)s bytes).'
) % {'len': len(datastring),
'limit': cfg.CONF.max_json_body_size}
raise exception.RequestLimitExceeded(message=msg)
return jsonutils.loads(datastring)
except ValueError as ex:
raise webob.exc.HTTPBadRequest(six.text_type(ex))
def default(self, request):
if self.has_body(request):
return {'body': self.from_json(request.body)}
else:
return {}
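from_json() enforces max_json_body_size before parsing so an oversized payload fails fast instead of being decoded. The same guard with the stdlib json module (constant mirrors the option's default):

```python
import json

MAX_JSON_BODY_SIZE = 1048576  # mirrors the max_json_body_size default


def from_json(datastring):
    # Reject oversized payloads before attempting to parse them.
    if len(datastring) > MAX_JSON_BODY_SIZE:
        raise ValueError('JSON body size (%d bytes) exceeds maximum '
                         'allowed size (%d bytes).'
                         % (len(datastring), MAX_JSON_BODY_SIZE))
    return json.loads(datastring)


assert from_json('{"user": "abc"}') == {'user': 'abc'}
raised = False
try:
    from_json('x' * (MAX_JSON_BODY_SIZE + 1))
except ValueError:
    raised = True
assert raised
```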
class Resource(object):
"""WSGI app that handles (de)serialization and controller dispatch.
Reads routing information supplied by RoutesMiddleware and calls
the requested action method upon its deserializer, controller,
and serializer. Those three objects may implement any of the basic
controller action methods (create, update, show, index, delete)
along with any that may be specified in the api router. A 'default'
method may also be implemented to be used in place of any
non-implemented actions. Deserializer methods must accept a request
argument and return a dictionary. Controller methods must accept a
request argument. Additionally, they must also accept keyword
arguments that represent the keys returned by the Deserializer. They
may raise a webob.exc exception or return a dict, which will be
serialized by requested content type.
"""
def __init__(self, controller, deserializer, serializer=None):
"""Initializer.
:param controller: object that implements methods created by routes lib
:param deserializer: object that supports webob request deserialization
through controller-like actions
:param serializer: object that supports webob response serialization
through controller-like actions
"""
self.controller = controller
self.deserializer = deserializer
self.serializer = serializer
@webob.dec.wsgify(RequestClass=Request)
def __call__(self, request):
"""WSGI method that controls (de)serialization and method dispatch."""
action_args = self.get_action_args(request.environ)
action = action_args.pop('action', None)
status_code = action_args.pop('success', None)
try:
deserialized_request = self.dispatch(self.deserializer,
action, request)
action_args.update(deserialized_request)
LOG.debug(('Calling %(controller)s : %(action)s'),
{'controller': self.controller, 'action': action})
action_result = self.dispatch(self.controller, action,
request, **action_args)
except TypeError as err:
LOG.error(_LE('Exception handling resource: %s'), err)
msg = _('The server could not comply with the request since '
'it is either malformed or otherwise incorrect.')
err = webob.exc.HTTPBadRequest(msg)
http_exc = translate_exception(err, request.best_match_language())
# NOTE(luisg): We disguise HTTP exceptions, otherwise they will be
# treated by wsgi as responses ready to be sent back and they
# won't make it into the pipeline app that serializes errors
raise exception.HTTPExceptionDisguise(http_exc)
except webob.exc.HTTPException as err:
if not isinstance(err, webob.exc.HTTPError):
# Some HTTPException are actually not errors, they are
# responses ready to be sent back to the users, so we don't
# create error log, but disguise and translate them to meet
# openstacksdk's need.
http_exc = translate_exception(err,
request.best_match_language())
raise exception.HTTPExceptionDisguise(http_exc)
if isinstance(err, webob.exc.HTTPServerError):
LOG.error(
_LE("Returning %(code)s to user: %(explanation)s"),
{'code': err.code, 'explanation': err.explanation})
http_exc = translate_exception(err, request.best_match_language())
raise exception.HTTPExceptionDisguise(http_exc)
except exception.BileanException as err:
raise translate_exception(err, request.best_match_language())
except Exception as err:
log_exception(err, sys.exc_info())
raise translate_exception(err, request.best_match_language())
serializer = self.serializer or serializers.JSONResponseSerializer()
try:
response = webob.Response(request=request)
# Customize status code if default (200) should be overridden
if status_code is not None:
response.status_code = int(status_code)
# Customize 'location' header if provided
if action_result and isinstance(action_result, dict):
location = action_result.pop('location', None)
if location:
response.location = '/v1%s' % location
if not action_result:
action_result = None
self.dispatch(serializer, action, response, action_result)
return response
# return unserializable result (typically an exception)
except Exception:
return action_result
def dispatch(self, obj, action, *args, **kwargs):
"""Find action-specific method on self and call it."""
try:
method = getattr(obj, action)
except AttributeError:
method = getattr(obj, 'default')
return method(*args, **kwargs)
def get_action_args(self, request_environment):
"""Parse dictionary created by routes library."""
try:
args = request_environment['wsgiorg.routing_args'][1].copy()
except Exception:
return {}
try:
del args['controller']
except KeyError:
pass
try:
del args['format']
except KeyError:
pass
return args
def log_exception(err, exc_info):
args = {'exc_info': exc_info} if cfg.CONF.verbose or cfg.CONF.debug else {}
LOG.error(_LE("Unexpected error occurred serving API: %s"), err, **args)
def translate_exception(exc, locale):
"""Translates all translatable elements of the given exception."""
if isinstance(exc, exception.BileanException):
exc.message = oslo_i18n.translate(exc.message, locale)
else:
exc.message = oslo_i18n.translate(six.text_type(exc), locale)
if isinstance(exc, webob.exc.HTTPError):
exc.explanation = oslo_i18n.translate(exc.explanation, locale)
exc.detail = oslo_i18n.translate(getattr(exc, 'detail', ''), locale)
return exc
@six.add_metaclass(abc.ABCMeta)
class BasePasteFactory(object):
"""A base class for paste app and filter factories.
Sub-classes must override the KEY class attribute and provide
a __call__ method.
"""
KEY = None
def __init__(self, conf):
self.conf = conf
@abc.abstractmethod
def __call__(self, global_conf, **local_conf):
return
def _import_factory(self, local_conf):
"""Import an app/filter class.
Lookup the KEY from the PasteDeploy local conf and import the
class named there. This class can then be used as an app or
filter factory.
"""
class_name = local_conf[self.KEY].replace(':', '.').strip()
return importutils.import_class(class_name)
class AppFactory(BasePasteFactory):
"""A Generic paste.deploy app factory.
The WSGI app constructor must accept a ConfigOpts object and a local
config dict as its arguments.
"""
KEY = 'bilean.app_factory'
def __call__(self, global_conf, **local_conf):
factory = self._import_factory(local_conf)
return factory(self.conf, **local_conf)
class FilterFactory(AppFactory):
"""A Generic paste.deploy filter factory.
This requires bilean.filter_factory to be set to a callable which returns
a WSGI filter when invoked. The WSGI filter constructor must accept a
WSGI app, a ConfigOpts object and a local config dict as its arguments.
"""
KEY = 'bilean.filter_factory'
def __call__(self, global_conf, **local_conf):
factory = self._import_factory(local_conf)
def filter(app):
return factory(app, self.conf, **local_conf)
return filter
def setup_paste_factories(conf):
"""Set up the generic paste app and filter factories.
The app factories are constructed at runtime to allow us to pass a
ConfigOpts object to the WSGI classes.
:param conf: a ConfigOpts object
"""
global app_factory, filter_factory
app_factory = AppFactory(conf)
filter_factory = FilterFactory(conf)
def teardown_paste_factories():
"""Reverse the effect of setup_paste_factories()."""
global app_factory, filter_factory
del app_factory
del filter_factory
def paste_deploy_app(paste_config_file, app_name, conf):
"""Load a WSGI app from a PasteDeploy configuration.
Use deploy.loadapp() to load the app from the PasteDeploy configuration,
ensuring that the supplied ConfigOpts object is passed to the app and
filter constructors.
:param paste_config_file: a PasteDeploy config file
:param app_name: the name of the app/pipeline to load from the file
:param conf: a ConfigOpts object to supply to the app and its filters
:returns: the WSGI app
"""
setup_paste_factories(conf)
try:
return deploy.loadapp("config:%s" % paste_config_file, name=app_name)
finally:
teardown_paste_factories()
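paste_deploy_app() expects a PasteDeploy file whose sections point at these module-level factories. A hypothetical api-paste.ini sketch (the pipeline/app/filter section names and the `bilean.api.openstack.v1:API` target are illustrative, not taken from the repo; `debug_filter` is the helper defined above):

```ini
[pipeline:bilean-api]
pipeline = debug apiv1app

[app:apiv1app]
paste.app_factory = bilean.common.wsgi:app_factory
bilean.app_factory = bilean.api.openstack.v1:API

[filter:debug]
paste.filter_factory = bilean.common.wsgi:filter_factory
bilean.filter_factory = bilean.common.wsgi:debug_filter
```

AppFactory and FilterFactory read the `bilean.app_factory` / `bilean.filter_factory` keys from each section's local conf and import the class named there, which is why both a `paste.*` key and a `bilean.*` key appear in each section.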

0
bilean/db/__init__.py Normal file

193
bilean/db/api.py Normal file

@ -0,0 +1,193 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
from oslo_db import api
CONF = cfg.CONF
_BACKEND_MAPPING = {'sqlalchemy': 'bilean.db.sqlalchemy.api'}
IMPL = api.DBAPI.from_config(CONF, backend_mapping=_BACKEND_MAPPING)
def get_engine():
return IMPL.get_engine()
def get_session():
return IMPL.get_session()
def db_sync(engine, version=None):
"""Migrate the database to `version` or the most recent version."""
return IMPL.db_sync(engine, version=version)
def db_version(engine):
"""Display the current database version."""
return IMPL.db_version(engine)
def user_get(context, user_id):
return IMPL.user_get(context, user_id)
def user_update(context, user_id, values):
return IMPL.user_update(context, user_id, values)
def user_create(context, values):
return IMPL.user_create(context, values)
def user_delete(context, user_id):
return IMPL.user_delete(context, user_id)
def user_get_all(context):
return IMPL.user_get_all(context)
def user_get_by_keystone_user_id(context, user_id):
return IMPL.user_get_by_keystone_user_id(context, user_id)
def user_delete_by_keystone_user_id(context, user_id):
return IMPL.user_delete_by_keystone_user_id(context, user_id)
def user_update_by_keystone_user_id(context, user_id, values):
return IMPL.user_update_by_keystone_user_id(context, user_id, values)
def rule_get(context, rule_id):
return IMPL.rule_get(context, rule_id)
def rule_get_all(context):
return IMPL.rule_get_all(context)
def get_rule_by_filters(context, **filters):
return IMPL.get_rule_by_filters(context, **filters)
def rule_create(context, values):
return IMPL.rule_create(context, values)
def rule_update(context, rule_id, values):
return IMPL.rule_update(context, rule_id, values)
def rule_delete(context, rule_id):
return IMPL.rule_delete(context, rule_id)
def resource_get(context, resource_id):
return IMPL.resource_get(context, resource_id)
def resource_get_all(context, **filters):
return IMPL.resource_get_all(context, **filters)
def resource_get_by_physical_resource_id(context,
physical_resource_id,
resource_type):
return IMPL.resource_get_by_physical_resource_id(
context, physical_resource_id, resource_type)
def resource_create(context, values):
return IMPL.resource_create(context, values)
def resource_update(context, resource_id, values):
return IMPL.resource_update(context, resource_id, values)
def resource_update_by_resource(context, resource):
return IMPL.resource_update_by_resource(context, resource)
def resource_delete(context, resource_id):
IMPL.resource_delete(context, resource_id)
def resource_delete_by_user_id(context, user_id):
IMPL.resource_delete_by_user_id(context, user_id)
def resource_delete_by_physical_resource_id(context,
physical_resource_id,
resource_type):
return IMPL.resource_delete_by_physical_resource_id(
context, physical_resource_id, resource_type)
def event_get(context, event_id):
return IMPL.event_get(context, event_id)
def event_get_by_user_id(context, user_id):
return IMPL.event_get_by_user_id(context, user_id)
def event_get_by_user_and_resource(context,
user_id,
resource_type,
action=None):
return IMPL.event_get_by_user_and_resource(context,
user_id,
resource_type,
action)
def events_get_all_by_filters(context, **filters):
return IMPL.events_get_all_by_filters(context, **filters)
def event_create(context, values):
return IMPL.event_create(context, values)
def event_delete(context, event_id):
return IMPL.event_delete(context, event_id)
def event_delete_by_user_id(context, user_id):
return IMPL.event_delete_by_user_id(context, user_id)
def job_create(context, values):
return IMPL.job_create(context, values)
def job_get(context, job_id):
return IMPL.job_get(context, job_id)
def job_get_by_engine_id(context, engine_id):
return IMPL.job_get_by_engine_id(context, engine_id)
def job_update(context, job_id, values):
return IMPL.job_update(context, job_id, values)
def job_delete(context, job_id):
return IMPL.job_delete(context, job_id)
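Every function in bilean/db/api.py simply forwards to IMPL, which oslo_db's `api.DBAPI.from_config` resolves from `_BACKEND_MAPPING`. The delegation pattern itself reduces to a `__getattr__` forwarder, sketched here in simplified form (the real oslo_db DBAPI also handles lazy loading and deadlock retries; the fake backend is illustrative):

```python
# Simplified stand-in for oslo_db's api.DBAPI: resolve a backend object
# once, then forward every attribute lookup to it.
class DBAPI(object):
    def __init__(self, backend):
        self._backend = backend

    def __getattr__(self, name):
        return getattr(self._backend, name)


class FakeSqlalchemyBackend(object):
    # Illustrative backend exposing one of the calls api.py declares.
    def user_get(self, context, user_id):
        return {'id': user_id}


IMPL = DBAPI(FakeSqlalchemyBackend())
assert IMPL.user_get(None, 'abc') == {'id': 'abc'}
```

Because lookups are resolved per attribute, swapping `_BACKEND_MAPPING` to another module changes the implementation of every db function without touching api.py.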


415
bilean/db/sqlalchemy/api.py Normal file

@ -0,0 +1,415 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
'''Implementation of SQLAlchemy backend.'''
import sys
from oslo_config import cfg
from oslo_db.sqlalchemy import session as db_session
from oslo_log import log as logging
from sqlalchemy.orm.session import Session
from sqlalchemy.sql import func
from bilean.common import exception
from bilean.common.i18n import _
from bilean.db.sqlalchemy import migration
from bilean.db.sqlalchemy import models
LOG = logging.getLogger(__name__)
CONF = cfg.CONF
_facade = None
def get_facade():
global _facade
if not _facade:
_facade = db_session.EngineFacade.from_config(CONF)
return _facade
get_engine = lambda: get_facade().get_engine()
get_session = lambda: get_facade().get_session()
def get_backend():
"""The backend is this module itself."""
return sys.modules[__name__]
def model_query(context, *args):
session = _session(context)
query = session.query(*args)
return query
def soft_delete_aware_query(context, *args, **kwargs):
"""Query helper that accounts for context's `show_deleted` field.
:param show_deleted: if True, overrides context's show_deleted field.
"""
query = model_query(context, *args)
show_deleted = kwargs.get('show_deleted') or context.show_deleted
if not show_deleted:
query = query.filter_by(deleted_at=None)
return query
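The `or` in `soft_delete_aware_query` has a subtle consequence: an explicit `show_deleted=True` overrides the context, but an explicit `False` cannot override a context whose `show_deleted` is `True`. A minimal stand-alone sketch of the flag resolution (the `FakeContext` class here is hypothetical, for illustration only):

```python
# Stand-alone sketch of the show_deleted resolution used above; the
# FakeContext class is invented for illustration.
class FakeContext(object):
    def __init__(self, show_deleted):
        self.show_deleted = show_deleted

def resolve_show_deleted(context, **kwargs):
    # Same expression as in soft_delete_aware_query: an explicit True
    # wins, but an explicit False cannot override a True context,
    # because `False or True` evaluates to True.
    return kwargs.get('show_deleted') or context.show_deleted

print(resolve_show_deleted(FakeContext(False), show_deleted=True))   # True
print(resolve_show_deleted(FakeContext(True), show_deleted=False))   # True, not False
print(resolve_show_deleted(FakeContext(False)))                      # False
```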
def _session(context):
return (context and context.session) or get_session()
def db_sync(engine, version=None):
"""Migrate the database to `version` or the most recent version."""
return migration.db_sync(engine, version=version)
def db_version(engine):
"""Display the current database version."""
return migration.db_version(engine)
def user_get(context, user_id):
result = model_query(context, models.User).get(user_id)
if not result:
raise exception.NotFound(_('User with id %s not found') % user_id)
return result
def user_update(context, user_id, values):
user = user_get(context, user_id)
if not user:
raise exception.NotFound(_('Attempt to update a user with id: '
'%(id)s %(msg)s') % {
'id': user_id,
'msg': 'that does not exist'})
user.update(values)
user.save(_session(context))
return user_get(context, user_id)
def user_create(context, values):
user_ref = models.User()
user_ref.update(values)
user_ref.save(_session(context))
return user_ref
def user_delete(context, user_id):
user = user_get(context, user_id)
session = Session.object_session(user)
session.delete(user)
session.flush()
def user_get_all(context):
results = model_query(context, models.User).all()
if not results:
return None
return results
def rule_get(context, rule_id):
result = model_query(context, models.Rule).get(rule_id)
if not result:
raise exception.NotFound(_('Rule with id %s not found') % rule_id)
return result
def rule_get_all(context):
return model_query(context, models.Rule).all()
def get_rule_by_filters(context, **filters):
filter_keys = filters.keys()
query = model_query(context, models.Rule)
if "resource_type" in filter_keys:
query = query.filter_by(resource_type=filters["resource_type"])
if "size" in filter_keys:
query = query.filter_by(size=filters["size"])
return query.all()
def rule_create(context, values):
rule_ref = models.Rule()
rule_ref.update(values)
rule_ref.save(_session(context))
return rule_ref
def rule_update(context, rule_id, values):
rule = rule_get(context, rule_id)
if not rule:
raise exception.NotFound(_('Attempt to update a rule with id: '
'%(id)s %(msg)s') % {
'id': rule_id,
'msg': 'that does not exist'})
rule.update(values)
rule.save(_session(context))
def rule_delete(context, rule_id):
rule = rule_get(context, rule_id)
session = Session.object_session(rule)
session.delete(rule)
session.flush()
def resource_get(context, resource_id):
result = model_query(context, models.Resource).get(resource_id)
if not result:
raise exception.NotFound(_('Resource with id %s not found') %
resource_id)
return result
def resource_get_by_physical_resource_id(context,
physical_resource_id,
resource_type):
result = (model_query(context, models.Resource)
.filter_by(resource_ref=physical_resource_id)
.filter_by(resource_type=resource_type)
.first())
if not result:
raise exception.NotFound(_('Resource with physical_resource_id: '
'%(resource_id)s, resource_type: '
'%(resource_type)s not found.') % {
'resource_id': physical_resource_id,
'resource_type': resource_type})
return result
def resource_get_all(context, **filters):
if filters.get('show_deleted') is None:
filters['show_deleted'] = False
query = soft_delete_aware_query(context, models.Resource, **filters)
if "resource_type" in filters:
query = query.filter_by(resource_type=filters["resource_type"])
if "user_id" in filters:
query = query.filter_by(user_id=filters["user_id"])
return query.all()
def resource_get_by_user_id(context, user_id, show_deleted=False):
query = soft_delete_aware_query(
context, models.Resource, show_deleted=show_deleted
).filter_by(user_id=user_id).all()
return query
def resource_create(context, values):
resource_ref = models.Resource()
resource_ref.update(values)
resource_ref.save(_session(context))
return resource_ref
def resource_update(context, resource_id, values):
resource = resource_get(context, resource_id)
if not resource:
raise exception.NotFound(_('Attempt to update a resource with id: '
'%(id)s %(msg)s') % {
'id': resource_id,
'msg': 'that does not exist'})
resource.update(values)
resource.save(_session(context))
return resource
def resource_update_by_resource(context, res):
resource = resource_get_by_physical_resource_id(
context, res['resource_ref'], res['resource_type'])
if not resource:
raise exception.NotFound(_('Attempt to update a resource: '
'%(res)s %(msg)s') % {
'res': res,
'msg': 'that does not exist'})
resource.update(res)
resource.save(_session(context))
return resource
def resource_delete(context, resource_id, soft_delete=True):
resource = resource_get(context, resource_id)
session = Session.object_session(resource)
if soft_delete:
resource.soft_delete(session=session)
else:
session.delete(resource)
session.flush()
def resource_delete_by_physical_resource_id(context,
physical_resource_id,
resource_type,
soft_delete=True):
resource = resource_get_by_physical_resource_id(
context, physical_resource_id, resource_type)
session = Session.object_session(resource)
if soft_delete:
resource.soft_delete(session=session)
else:
session.delete(resource)
session.flush()
def resource_delete_by_user_id(context, user_id):
    # resource_get_by_user_id() returns a list, so delete each row
    # rather than passing the list itself to the session.
    resources = resource_get_by_user_id(context, user_id)
    if not resources:
        return
    session = Session.object_session(resources[0])
    for resource in resources:
        session.delete(resource)
    session.flush()
def event_get(context, event_id):
result = model_query(context, models.Event).get(event_id)
if not result:
raise exception.NotFound(_('Event with id %s not found') % event_id)
return result
def event_get_by_user_id(context, user_id):
query = model_query(context, models.Event).filter_by(user_id=user_id)
return query
def event_get_by_user_and_resource(context,
user_id,
resource_type,
action=None):
query = (model_query(context, models.Event)
.filter_by(user_id=user_id)
.filter_by(resource_type=resource_type)
.filter_by(action=action).all())
return query
def events_get_all_by_filters(context,
user_id=None,
resource_type=None,
start=None,
end=None,
action=None,
aggregate=None):
if aggregate == 'sum':
query_prefix = model_query(
context, models.Event.resource_type, func.sum(models.Event.value)
).group_by(models.Event.resource_type)
elif aggregate == 'avg':
query_prefix = model_query(
context, models.Event.resource_type, func.avg(models.Event.value)
).group_by(models.Event.resource_type)
else:
query_prefix = model_query(context, models.Event)
if not context.is_admin:
if context.tenant_id:
query_prefix = query_prefix.filter_by(user_id=context.tenant_id)
elif user_id:
query_prefix = query_prefix.filter_by(user_id=user_id)
if resource_type:
query_prefix = query_prefix.filter_by(resource_type=resource_type)
if action:
query_prefix = query_prefix.filter_by(action=action)
if start:
query_prefix = query_prefix.filter(models.Event.created_at >= start)
if end:
query_prefix = query_prefix.filter(models.Event.created_at <= end)
return query_prefix.all()
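Without a database, the aggregate branch above can be pictured as a group-by over events; a rough sketch of what `aggregate='sum'` computes (the sample events are invented for illustration):

```python
from collections import defaultdict

# Database-free sketch of the aggregate='sum' branch: events are grouped
# by resource_type and their values summed, mirroring the
# func.sum(...).group_by(...) query. The sample events are made up.
def sum_by_resource_type(events):
    totals = defaultdict(float)
    for event in events:
        totals[event['resource_type']] += event['value']
    return sorted(totals.items())

events = [{'resource_type': 'instance', 'value': 2.0},
          {'resource_type': 'volume', 'value': 1.0},
          {'resource_type': 'instance', 'value': 3.0}]
print(sum_by_resource_type(events))  # [('instance', 5.0), ('volume', 1.0)]
```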
def event_create(context, values):
event_ref = models.Event()
event_ref.update(values)
event_ref.save(_session(context))
return event_ref
def event_delete(context, event_id):
event = event_get(context, event_id)
session = Session.object_session(event)
session.delete(event)
session.flush()
def event_delete_by_user_id(context, user_id):
    # Look events up by user id (not by event id) and delete each row;
    # event_get_by_user_id() returns a query, not a single object.
    events = event_get_by_user_id(context, user_id).all()
    if not events:
        return
    session = Session.object_session(events[0])
    for event in events:
        session.delete(event)
    session.flush()
def job_create(context, values):
job_ref = models.Job()
job_ref.update(values)
job_ref.save(_session(context))
return job_ref
def job_get(context, job_id):
result = model_query(context, models.Job).get(job_id)
if not result:
raise exception.NotFound(_('Job with id %s not found') % job_id)
return result
def job_get_by_engine_id(context, engine_id):
query = (model_query(context, models.Job)
.filter_by(engine_id=engine_id).all())
return query
def job_update(context, job_id, values):
job = job_get(context, job_id)
if not job:
raise exception.NotFound(_('Attempt to update a job with id: '
'%(id)s %(msg)s') % {
'id': job_id,
'msg': 'that does not exist'})
job.update(values)
job.save(_session(context))
return job
def job_delete(context, job_id):
job = job_get(context, job_id)
session = Session.object_session(job)
session.delete(job)
session.flush()


@ -0,0 +1,44 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
def exact_filter(query, model, filters):
"""Applies exact match filtering to a query.
Returns the updated query. Modifies filters argument to remove
filters consumed.
:param query: query to apply filters to
:param model: model object the query applies to, for IN-style
filtering
:param filters: dictionary of filters; values that are lists,
tuples, sets, or frozensets cause an 'IN' test to
be performed, while exact matching ('==' operator)
is used for other values
"""
filter_dict = {}
if filters is None:
filters = {}
for key, value in filters.iteritems():
if isinstance(value, (list, tuple, set, frozenset)):
column_attr = getattr(model, key)
query = query.filter(column_attr.in_(value))
else:
filter_dict[key] = value
if filter_dict:
query = query.filter_by(**filter_dict)
return query
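The split between IN-style and exact matching in `exact_filter` can be sketched without a database; this pure-Python analogue mimics the same dispatch (the rows and field names are invented for illustration):

```python
# Database-free sketch of exact_filter's semantics: list/tuple/set values
# become membership ('IN') tests, scalars become equality tests.
def matches(row, filters):
    for key, value in filters.items():
        if isinstance(value, (list, tuple, set, frozenset)):
            if row[key] not in value:        # IN-style test
                return False
        elif row[key] != value:              # exact '==' test
            return False
    return True

rows = [{'resource_type': 'instance', 'size': '1'},
        {'resource_type': 'volume', 'size': '2'}]
selected = [r for r in rows
            if matches(r, {'resource_type': ('instance', 'image')})]
print(selected)  # only the instance row
```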


@ -0,0 +1,4 @@
This is a database migration repository.
More information at
http://code.google.com/p/sqlalchemy-migrate/


@ -0,0 +1,5 @@
#!/usr/bin/env python
from migrate.versioning.shell import main
if __name__ == '__main__':
main(debug='False')


@ -0,0 +1,25 @@
[db_settings]
# Used to identify which repository this database is versioned under.
# You can use the name of your project.
repository_id=bilean
# The name of the database table used to track the schema version.
# This name shouldn't already be used by your project.
# If this is changed once a database is under version control, you'll need to
# change the table name in each database too.
version_table=migrate_version
# When committing a change script, Migrate will attempt to generate the
# sql for all supported databases; normally, if one of them fails - probably
# because you don't have that database installed - it is ignored and the
# commit continues, perhaps ending successfully.
# Databases in this list MUST compile successfully during a commit, or the
# entire commit will fail. List the databases your application will actually
# be using to ensure your updates to that database work properly.
# This must be a list; example: ['postgres','sqlite']
required_dbs=[]
# When creating new change scripts, Migrate will stamp the new script with
# a version number. By default this is latest_version + 1. You can set this
# to 'true' to tell Migrate to use the UTC timestamp instead.
use_timestamp_numbering=False


@ -0,0 +1,117 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import sqlalchemy
from bilean.db.sqlalchemy import types
def upgrade(migrate_engine):
meta = sqlalchemy.MetaData()
meta.bind = migrate_engine
user = sqlalchemy.Table(
'user', meta,
sqlalchemy.Column('id', sqlalchemy.String(36), primary_key=True,
nullable=False),
sqlalchemy.Column('created_at', sqlalchemy.DateTime),
sqlalchemy.Column('updated_at', sqlalchemy.DateTime),
sqlalchemy.Column('balance', sqlalchemy.Float),
sqlalchemy.Column('rate', sqlalchemy.Float),
sqlalchemy.Column('credit', sqlalchemy.Integer),
sqlalchemy.Column('last_bill', sqlalchemy.DateTime),
sqlalchemy.Column('status', sqlalchemy.String(10)),
sqlalchemy.Column('status_reason', sqlalchemy.String(255)),
mysql_engine='InnoDB',
mysql_charset='utf8'
)
rule = sqlalchemy.Table(
'rule', meta,
sqlalchemy.Column('id', sqlalchemy.String(36), primary_key=True,
nullable=False),
sqlalchemy.Column('name', sqlalchemy.String(255)),
sqlalchemy.Column('type', sqlalchemy.String(255)),
sqlalchemy.Column('spec', types.Dict()),
sqlalchemy.Column('meta_data', types.Dict()),
sqlalchemy.Column('created_at', sqlalchemy.DateTime),
sqlalchemy.Column('updated_at', sqlalchemy.DateTime),
sqlalchemy.Column('deleted_at', sqlalchemy.DateTime),
mysql_engine='InnoDB',
mysql_charset='utf8'
)
resource = sqlalchemy.Table(
'resource', meta,
sqlalchemy.Column('id', sqlalchemy.String(36), primary_key=True,
nullable=False),
sqlalchemy.Column('resource_ref', sqlalchemy.String(36),
nullable=False),
sqlalchemy.Column('created_at', sqlalchemy.DateTime),
sqlalchemy.Column('updated_at', sqlalchemy.DateTime),
sqlalchemy.Column('deleted_at', sqlalchemy.DateTime),
sqlalchemy.Column('deleted', sqlalchemy.Boolean, default=False),
sqlalchemy.Column('user_id',
sqlalchemy.String(36),
sqlalchemy.ForeignKey('user.id'),
nullable=False),
sqlalchemy.Column('rule_id',
sqlalchemy.String(36),
sqlalchemy.ForeignKey('rule.id'),
nullable=False),
sqlalchemy.Column('resource_type', sqlalchemy.String(36),
nullable=False),
sqlalchemy.Column('size', sqlalchemy.String(36), nullable=False),
sqlalchemy.Column('rate', sqlalchemy.Float, nullable=False),
sqlalchemy.Column('status', sqlalchemy.String(10)),
sqlalchemy.Column('status_reason', sqlalchemy.String(255)),
mysql_engine='InnoDB',
mysql_charset='utf8'
)
event = sqlalchemy.Table(
'event', meta,
sqlalchemy.Column('id', sqlalchemy.String(36),
primary_key=True, nullable=False),
sqlalchemy.Column('user_id', sqlalchemy.String(36),
sqlalchemy.ForeignKey('user.id'), nullable=False),
sqlalchemy.Column('created_at', sqlalchemy.DateTime),
sqlalchemy.Column('updated_at', sqlalchemy.DateTime),
sqlalchemy.Column('resource_id', sqlalchemy.String(36)),
sqlalchemy.Column('resource_type', sqlalchemy.String(36)),
sqlalchemy.Column('action', sqlalchemy.String(36)),
sqlalchemy.Column('value', sqlalchemy.Float),
mysql_engine='InnoDB',
mysql_charset='utf8'
)
tables = (
user,
rule,
resource,
event,
)
for index, table in enumerate(tables):
try:
table.create()
except Exception:
# If an error occurs, drop all tables created so far to return
# to the previously existing state.
meta.drop_all(tables=tables[:index])
raise
def downgrade(migrate_engine):
raise NotImplementedError('Database downgrade not supported - '
'would drop all tables')


@ -0,0 +1,57 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import sqlalchemy
from bilean.db.sqlalchemy import types
from oslo_log import log as logging
LOG = logging.getLogger(__name__)
def upgrade(migrate_engine):
meta = sqlalchemy.MetaData()
meta.bind = migrate_engine
job = sqlalchemy.Table(
'job', meta,
sqlalchemy.Column('id', sqlalchemy.String(50),
primary_key=True, nullable=False),
sqlalchemy.Column('engine_id', sqlalchemy.String(36),
nullable=False),
sqlalchemy.Column('job_type', sqlalchemy.String(10),
nullable=False),
sqlalchemy.Column('parameters', types.Dict()),
sqlalchemy.Column('created_at', sqlalchemy.DateTime),
sqlalchemy.Column('updated_at', sqlalchemy.DateTime),
mysql_engine='InnoDB',
mysql_charset='utf8'
)
try:
job.create()
except Exception:
LOG.error("Table |%s| not created!", repr(job))
raise
def downgrade(migrate_engine):
meta = sqlalchemy.MetaData()
meta.bind = migrate_engine
job = sqlalchemy.Table('job', meta, autoload=True)
try:
job.drop()
except Exception:
LOG.error("Job table not dropped")
raise


@ -0,0 +1,57 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import sqlalchemy
from oslo_log import log as logging
LOG = logging.getLogger(__name__)
def upgrade(migrate_engine):
meta = sqlalchemy.MetaData()
meta.bind = migrate_engine
services = sqlalchemy.Table(
'services', meta,
sqlalchemy.Column('id', sqlalchemy.String(36),
primary_key=True, nullable=False),
sqlalchemy.Column('created_at', sqlalchemy.DateTime),
sqlalchemy.Column('updated_at', sqlalchemy.DateTime),
sqlalchemy.Column('deleted_at', sqlalchemy.DateTime),
sqlalchemy.Column('deleted', sqlalchemy.Boolean),
sqlalchemy.Column('host', sqlalchemy.String(length=255)),
sqlalchemy.Column('binary', sqlalchemy.String(length=255)),
sqlalchemy.Column('topic', sqlalchemy.String(length=255)),
sqlalchemy.Column('report_count', sqlalchemy.Integer, nullable=False),
sqlalchemy.Column('disabled', sqlalchemy.Boolean),
mysql_engine='InnoDB',
mysql_charset='utf8'
)
try:
services.create()
except Exception:
LOG.error("Table |%s| not created!", repr(services))
raise
def downgrade(migrate_engine):
meta = sqlalchemy.MetaData()
meta.bind = migrate_engine
services = sqlalchemy.Table('services', meta, autoload=True)
try:
services.drop()
except Exception:
LOG.error("services table not dropped")
raise


@ -0,0 +1,38 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
from oslo_db.sqlalchemy import migration as oslo_migration
INIT_VERSION = 0
def db_sync(engine, version=None):
path = os.path.join(os.path.abspath(os.path.dirname(__file__)),
'migrate_repo')
return oslo_migration.db_sync(engine, path, version,
init_version=INIT_VERSION)
def db_version(engine):
path = os.path.join(os.path.abspath(os.path.dirname(__file__)),
'migrate_repo')
return oslo_migration.db_version(engine, path, INIT_VERSION)
def db_version_control(engine, version=None):
path = os.path.join(os.path.abspath(os.path.dirname(__file__)),
'migrate_repo')
return oslo_migration.db_version_control(engine, path, version)


@ -0,0 +1,179 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
SQLAlchemy models for bilean data.
"""
import uuid
from bilean.db.sqlalchemy import types
from oslo_db.sqlalchemy import models
from oslo_serialization import jsonutils
from oslo_utils import timeutils
import sqlalchemy
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import backref
from sqlalchemy.orm import relationship
from sqlalchemy.orm.session import Session
BASE = declarative_base()
def get_session():
from bilean.db.sqlalchemy import api as db_api
return db_api.get_session()
class BileanBase(models.ModelBase, models.TimestampMixin):
    """Base class for Bilean models."""
__table_args__ = {'mysql_engine': 'InnoDB'}
def expire(self, session=None, attrs=None):
        """Expire this object."""
if not session:
session = Session.object_session(self)
if not session:
session = get_session()
session.expire(self, attrs)
def refresh(self, session=None, attrs=None):
"""Refresh this object."""
if not session:
session = Session.object_session(self)
if not session:
session = get_session()
session.refresh(self, attrs)
def delete(self, session=None):
"""Delete this object."""
if not session:
session = Session.object_session(self)
if not session:
session = get_session()
session.delete(self)
session.flush()
def update_and_save(self, values, session=None):
if not session:
session = Session.object_session(self)
if not session:
session = get_session()
session.begin()
for k, v in values.iteritems():
setattr(self, k, v)
session.commit()
class SoftDelete(object):
deleted_at = sqlalchemy.Column(sqlalchemy.DateTime)
deleted = sqlalchemy.Column(sqlalchemy.Boolean, default=False)
def soft_delete(self, session=None):
"""Mark this object as deleted."""
self.update_and_save({'deleted_at': timeutils.utcnow(),
'deleted': True}, session=session)
class StateAware(object):
status = sqlalchemy.Column('status', sqlalchemy.String(10))
_status_reason = sqlalchemy.Column('status_reason', sqlalchemy.String(255))
@property
def status_reason(self):
return self._status_reason
@status_reason.setter
def status_reason(self, reason):
self._status_reason = reason and reason[:255] or ''
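The setter above silently clips the reason to the 255-character column width and maps a missing reason to an empty string; a stand-alone sketch of that rule:

```python
# Stand-alone sketch of the status_reason setter's truncation rule:
# a reason longer than the 255-character column is silently clipped,
# and a None or empty reason is stored as ''.
def clip_reason(reason):
    return reason and reason[:255] or ''

print(len(clip_reason('x' * 300)))  # 255
print(repr(clip_reason(None)))      # ''
```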
class User(BASE, BileanBase, StateAware):
    """Represents a user account for billing."""
__tablename__ = 'user'
id = sqlalchemy.Column(sqlalchemy.String(36), primary_key=True)
balance = sqlalchemy.Column(sqlalchemy.Float, default=0.0)
rate = sqlalchemy.Column(sqlalchemy.Float, default=0.0)
credit = sqlalchemy.Column(sqlalchemy.Integer, default=0)
    # Pass the callable so the default is evaluated at insert time,
    # not once at import time.
    last_bill = sqlalchemy.Column(
        sqlalchemy.DateTime, default=timeutils.utcnow)
updated_at = sqlalchemy.Column(sqlalchemy.DateTime)
class Rule(BASE, BileanBase):
    """Represents a rule used to bill a user's resource."""
__tablename__ = 'rule'
id = sqlalchemy.Column(sqlalchemy.String(36), primary_key=True,
default=lambda: str(uuid.uuid4()))
name = sqlalchemy.Column(sqlalchemy.String(255))
type = sqlalchemy.Column(sqlalchemy.String(255))
spec = sqlalchemy.Column(types.Dict)
meta_data = sqlalchemy.Column(types.Dict)
updated_at = sqlalchemy.Column(sqlalchemy.DateTime)
class Resource(BASE, BileanBase, StateAware, SoftDelete):
"""Represents a meta resource with rate"""
__tablename__ = 'resource'
id = sqlalchemy.Column(sqlalchemy.String(36), primary_key=True,
default=lambda: str(uuid.uuid4()))
resource_ref = sqlalchemy.Column(sqlalchemy.String(36), nullable=False)
user_id = sqlalchemy.Column(
sqlalchemy.String(36),
sqlalchemy.ForeignKey('user.id'),
nullable=False)
rule_id = sqlalchemy.Column(
sqlalchemy.String(36),
sqlalchemy.ForeignKey('rule.id'),
nullable=False)
user = relationship(User, backref=backref('resource'))
rule = relationship(Rule, backref=backref('resource'))
resource_type = sqlalchemy.Column(sqlalchemy.String(36), nullable=False)
size = sqlalchemy.Column(sqlalchemy.String(36), nullable=False)
rate = sqlalchemy.Column(sqlalchemy.Float, nullable=False)
updated_at = sqlalchemy.Column(sqlalchemy.DateTime)
class Event(BASE, BileanBase):
"""Represents an event generated by the bilean engine."""
__tablename__ = 'event'
id = sqlalchemy.Column(sqlalchemy.String(36), primary_key=True,
default=lambda: str(uuid.uuid4()),
unique=True)
user_id = sqlalchemy.Column(sqlalchemy.String(36),
sqlalchemy.ForeignKey('user.id'),
nullable=False)
user = relationship(User, backref=backref('event'))
resource_id = sqlalchemy.Column(sqlalchemy.String(36))
action = sqlalchemy.Column(sqlalchemy.String(36))
resource_type = sqlalchemy.Column(sqlalchemy.String(36))
value = sqlalchemy.Column(sqlalchemy.Float)
class Job(BASE, BileanBase):
    """Represents a billing job for a user."""
__tablename__ = 'job'
id = sqlalchemy.Column(sqlalchemy.String(50), primary_key=True,
unique=True)
engine_id = sqlalchemy.Column(sqlalchemy.String(36))
job_type = sqlalchemy.Column(sqlalchemy.String(10))
parameters = sqlalchemy.Column(types.Dict())


@ -0,0 +1,112 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import json
from sqlalchemy.dialects import mysql
from sqlalchemy.ext import mutable
from sqlalchemy import types
class MutableList(mutable.Mutable, list):
@classmethod
def coerce(cls, key, value):
if not isinstance(value, MutableList):
if isinstance(value, list):
return MutableList(value)
return mutable.Mutable.coerce(key, value)
else:
return value
def __init__(self, initval=None):
list.__init__(self, initval or [])
def __getitem__(self, key):
value = list.__getitem__(self, key)
for obj, key in self._parents.items():
value._parents[obj] = key
return value
def __setitem__(self, key, value):
list.__setitem__(self, key, value)
self.changed()
def __getstate__(self):
return list(self)
def __setstate__(self, state):
self[:] = state
def append(self, value):
list.append(self, value)
self.changed()
def extend(self, iterable):
list.extend(self, iterable)
self.changed()
def insert(self, index, item):
list.insert(self, index, item)
self.changed()
def __setslice__(self, i, j, other):
list.__setslice__(self, i, j, other)
self.changed()
def pop(self, index=-1):
item = list.pop(self, index)
self.changed()
return item
def remove(self, value):
list.remove(self, value)
self.changed()
class Dict(types.TypeDecorator):
impl = types.Text
def load_dialect_impl(self, dialect):
if dialect.name == 'mysql':
return dialect.type_descriptor(mysql.LONGTEXT())
else:
return self.impl
def process_bind_param(self, value, dialect):
return json.dumps(value)
def process_result_value(self, value, dialect):
if value is None:
return None
return json.loads(value)
class List(types.TypeDecorator):
impl = types.Text
def load_dialect_impl(self, dialect):
if dialect.name == 'mysql':
return dialect.type_descriptor(mysql.LONGTEXT())
else:
return self.impl
def process_bind_param(self, value, dialect):
return json.dumps(value)
def process_result_value(self, value, dialect):
if value is None:
return None
return json.loads(value)
mutable.MutableDict.associate_with(Dict)
MutableList.associate_with(List)
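The `Dict` and `List` type decorators above share one serialization contract: values are dumped to JSON text on the way into the database and decoded on the way out, with `None` passed through on load. That contract can be exercised without SQLAlchemy:

```python
import json

# Sketch of the serialization contract implemented by the Dict/List
# types: JSON on bind, decoded on result, None passed through on load.
def bind(value):
    return json.dumps(value)         # mirrors process_bind_param

def result(value):
    if value is None:                # mirrors process_result_value
        return None
    return json.loads(value)

spec = {'type': 'os.nova.server', 'properties': {'flavor': '1'}}
assert result(bind(spec)) == spec    # round-trips losslessly
assert result(None) is None
```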

47 bilean/db/utils.py Normal file

@ -0,0 +1,47 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
class LazyPluggable(object):
"""A pluggable backend loaded lazily based on some value."""
def __init__(self, pivot, **backends):
self.__backends = backends
self.__pivot = pivot
self.__backend = None
def __get_backend(self):
if not self.__backend:
backend_name = 'sqlalchemy'
backend = self.__backends[backend_name]
if isinstance(backend, tuple):
name = backend[0]
fromlist = backend[1]
else:
name = backend
fromlist = backend
self.__backend = __import__(name, None, None, fromlist)
return self.__backend
def __getattr__(self, key):
backend = self.__get_backend()
return getattr(backend, key)
IMPL = LazyPluggable('backend',
                     sqlalchemy='bilean.db.sqlalchemy.api')
def purge_deleted(age, granularity='days'):
IMPL.purge_deleted(age, granularity)
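`LazyPluggable` defers the backend import until the first attribute access, so merely importing this module never pulls in SQLAlchemy. A self-contained demonstration, using a stub module registered in `sys.modules` in place of the real backend (the `fake_backend` name is invented for illustration):

```python
import sys
import types

# Register a stub backend module so the lazy import has something to find.
fake = types.ModuleType('fake_backend')
fake.purge_deleted = lambda age, granularity='days': (age, granularity)
sys.modules['fake_backend'] = fake

class LazyPluggable(object):
    """A pluggable backend loaded lazily, as in bilean/db/utils.py."""
    def __init__(self, pivot, **backends):
        self.__backends = backends
        self.__pivot = pivot
        self.__backend = None

    def __get_backend(self):
        # The import only happens on first use, not at module load time.
        if not self.__backend:
            name = self.__backends['sqlalchemy']
            self.__backend = __import__(name, None, None, [name])
        return self.__backend

    def __getattr__(self, key):
        return getattr(self.__get_backend(), key)

IMPL = LazyPluggable('backend', sqlalchemy='fake_backend')
print(IMPL.purge_deleted(90))  # (90, 'days')
```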


86 bilean/engine/api.py Normal file

@ -0,0 +1,86 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_log import log as logging
from oslo_utils import timeutils
from bilean.common import params
LOG = logging.getLogger(__name__)
def format_user(user, detail=False):
'''Format user object to dict
Return a representation of the given user that matches the API output
expectations.
'''
updated_at = user.updated_at and timeutils.isotime(user.updated_at)
info = {
params.USER_ID: user.id,
params.USER_BALANCE: user.balance,
params.USER_RATE: user.rate,
params.USER_CREDIT: user.credit,
params.USER_STATUS: user.status,
params.USER_UPDATED_AT: updated_at,
params.USER_LAST_BILL: user.last_bill
}
if detail:
info[params.USER_CREATED_AT] = user.created_at
info[params.USER_STATUS_REASION] = user.status_reason
return info
def format_bilean_resource(resource, detail=False):
'''Format resource object to dict
Return a representation of the given resource that matches the API output
expectations.
'''
updated_at = resource.updated_at and timeutils.isotime(resource.updated_at)
info = {
params.RES_ID: resource.id,
params.RES_RESOURCE_TYPE: resource.resource_type,
params.RES_SIZE: resource.size,
params.RES_RATE: resource.rate,
params.RES_STATUS: resource.status,
params.RES_USER_ID: resource.user_id,
params.RES_RESOURCE_REF: resource.resource_ref,
params.RES_UPDATED_AT: updated_at,
}
if detail:
info[params.RES_CREATED_AT] = resource.created_at
info[params.RES_RULE_ID] = resource.rule_id
info[params.RES_STATUS_REASION] = resource.status_reason
return info
def format_rule(rule):
'''Format rule object to dict
Return a representation of the given rule that matches the API output
expectations.
'''
updated_at = rule.updated_at and timeutils.isotime(rule.updated_at)
info = {
params.RULE_ID: rule.id,
params.RULE_RESOURCE_TYPE: rule.resource_type,
params.RULE_SIZE: rule.size,
params.RULE_PARAMS: rule.params,
params.RULE_UPDATED_AT: updated_at,
params.RULE_CREATED_AT: rule.created_at,
}
return info


@ -0,0 +1,151 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from apscheduler.schedulers.background import BackgroundScheduler
import random
from bilean.common import exception
from bilean.common.i18n import _
from oslo_config import cfg
from oslo_log import log as logging
bilean_task_opts = [
cfg.StrOpt('time_zone',
               default='utc',
               help=_('Time zone used by scheduled jobs, default is utc.')),
    cfg.IntOpt('prior_notify_time',
               default=3,
               help=_("Days in advance to notify a user before the user's "
                      "balance is used up, default is 3 days.")),
cfg.IntOpt('misfire_grace_time',
default=3600,
help=_('Seconds after the designated run time that the job is '
'still allowed to be run.')),
cfg.BoolOpt('store_ap_job',
default=False,
help=_('Allow bilean to store apscheduler job.')),
cfg.StrOpt('backend',
default='sqlalchemy',
               help='The backend to use for the job store database.'),
cfg.StrOpt('connection',
help='The SQLAlchemy connection string used to connect to the '
'database')
]
bilean_task_group = cfg.OptGroup('bilean_task')
cfg.CONF.register_group(bilean_task_group)
cfg.CONF.register_opts(bilean_task_opts, group=bilean_task_group)
LOG = logging.getLogger(__name__)
class BileanTask(object):
_scheduler = None
def __init__(self):
super(BileanTask, self).__init__()
self._scheduler = BackgroundScheduler()
if cfg.CONF.bilean_task.store_ap_job:
self._scheduler.add_jobstore(cfg.CONF.bilean_task.backend,
url=cfg.CONF.bilean_task.connection)
self.job_trigger_mappings = {'notify': 'date',
'daily': 'cron',
'freeze': 'date'}
def add_job(self, task, job_id, job_type='daily', params=None):
        """Add a job to the scheduler from the given data.

        :param task: the callable to run
        :param str|unicode job_id: unique job identifier; its second
            '-'-separated segment is used as the user id
        :param str job_type: one of 'notify', 'daily' or 'freeze'
        :param dict params: trigger parameters: 'run_date' for date
            triggers, 'hour' and 'minute' for cron triggers
        """
mg_time = cfg.CONF.bilean_task.misfire_grace_time
job_time_zone = cfg.CONF.bilean_task.time_zone
user_id = job_id.split('-')[1]
trigger_type = self.job_trigger_mappings[job_type]
if trigger_type == 'date':
run_date = params.get('run_date')
if run_date is None:
msg = "Param run_date cannot be None for trigger type 'date'."
raise exception.InvalidInput(reason=msg)
self._scheduler.add_job(task, 'date',
timezone=job_time_zone,
run_date=run_date,
args=[user_id],
id=job_id,
misfire_grace_time=mg_time)
else:
if params is None:
hour, minute = self._generate_timer()
else:
hour = params.get('hour', None)
minute = params.get('minute', None)
if hour is None or minute is None:
msg = "Param hour or minute cannot be None."
raise exception.InvalidInput(reason=msg)
self._scheduler.add_job(task, 'cron',
timezone=job_time_zone,
hour=hour,
minute=minute,
args=[user_id],
id=job_id,
misfire_grace_time=mg_time)
return job_id
def modify_job(self, job_id, **changes):
"""Modifies the properties of a single job.
Modifications are passed to this method as extra keyword arguments.
:param str|unicode job_id: the identifier of the job
"""
self._scheduler.modify_job(job_id, **changes)
def remove_job(self, job_id):
"""Removes a job, preventing it from being run any more.
:param str|unicode job_id: the identifier of the job
"""
self._scheduler.remove_job(job_id)
def start(self):
LOG.info(_('Starting Billing scheduler'))
self._scheduler.start()
def stop(self):
LOG.info(_('Stopping Billing scheduler'))
self._scheduler.shutdown()
def is_exist(self, job_id):
        """Return True if a job matching the given ``job_id`` exists.
:param str|unicode job_id: the identifier of the job
:return: True|False
"""
job = self._scheduler.get_job(job_id)
return job is not None
def _generate_timer(self):
        """Generate a random (hour, minute) timer."""
hour = random.randint(0, 23)
minute = random.randint(0, 59)
return (hour, minute)
def list_opts():
yield bilean_task_group.name, bilean_task_opts
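The scheduling helpers above reduce to a few pure functions; this sketch mirrors `job_trigger_mappings`, `_generate_timer()` and the `job_id.split('-')[1]` convention without touching apscheduler:

```python
import random

# mirrors BileanTask.job_trigger_mappings
JOB_TRIGGER_MAPPINGS = {'notify': 'date', 'daily': 'cron', 'freeze': 'date'}

def trigger_for(job_type):
    """Map a bilean job type to an apscheduler trigger type."""
    return JOB_TRIGGER_MAPPINGS[job_type]

def generate_timer():
    """Pick a random (hour, minute) so daily billing jobs spread out."""
    return random.randint(0, 23), random.randint(0, 59)

def user_from_job_id(job_id):
    """Extract the user id embedded as the job id's second segment."""
    return job_id.split('-')[1]
```

Randomizing the cron time avoids piling every user's daily billing job onto the same minute.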


@@ -0,0 +1,142 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import weakref
from oslo_config import cfg
from oslo_log import log as logging
from oslo_utils import importutils
import six
from stevedore import enabled
from bilean.common import exception
from bilean.common.i18n import _
from bilean.common.i18n import _LW
LOG = logging.getLogger(__name__)
_default_backend = "bilean.engine.clients.OpenStackClients"
cloud_opts = [
cfg.StrOpt('client_backend',
default=_default_backend,
help="Fully qualified class name to use as a client backend.")
]
cfg.CONF.register_opts(cloud_opts)
class OpenStackClients(object):
"""Convenience class to create and cache client instances."""
def __init__(self, context):
self._context = weakref.ref(context)
self._clients = {}
self._client_plugins = {}
@property
def context(self):
ctxt = self._context()
assert ctxt is not None, "Need a reference to the context"
return ctxt
def invalidate_plugins(self):
"""Used to force plugins to clear any cached client."""
for name in self._client_plugins:
self._client_plugins[name].invalidate()
def client_plugin(self, name):
global _mgr
if name in self._client_plugins:
return self._client_plugins[name]
if _mgr and name in _mgr.names():
client_plugin = _mgr[name].plugin(self.context)
self._client_plugins[name] = client_plugin
return client_plugin
def client(self, name):
client_plugin = self.client_plugin(name)
if client_plugin:
return client_plugin.client()
if name in self._clients:
return self._clients[name]
# call the local method _<name>() if a real client plugin
# doesn't exist
method_name = '_%s' % name
if callable(getattr(self, method_name, None)):
client = getattr(self, method_name)()
self._clients[name] = client
return client
LOG.warn(_LW('Requested client "%s" not found'), name)
@property
def auth_token(self):
# Always use the auth_token from the keystone() client, as
# this may be refreshed if the context contains credentials
# which allow reissuing of a new token before the context
# auth_token expiry (e.g trust_id or username/password)
return self.client('keystone').auth_token
class ClientBackend(object):
"""Class for delaying choosing the backend client module.
Delay choosing the backend client module until the client's class needs
to be initialized.
"""
def __new__(cls, context):
if cfg.CONF.client_backend == _default_backend:
return OpenStackClients(context)
else:
try:
                return importutils.import_object(cfg.CONF.client_backend,
                                                 context)
            except (ImportError, RuntimeError, cfg.NoSuchOptError) as err:
                msg = _('Invalid client_backend setting in bilean.conf '
                        'detected - %s') % six.text_type(err)
LOG.error(msg)
raise exception.Invalid(reason=msg)
Clients = ClientBackend
_mgr = None
def has_client(name):
return _mgr and name in _mgr.names()
def initialise():
global _mgr
if _mgr:
return
def client_is_available(client_plugin):
if not hasattr(client_plugin.plugin, 'is_available'):
# if the client does not have a is_available() class method, then
# we assume it wants to be always available
return True
# let the client plugin decide if it wants to register or not
return client_plugin.plugin.is_available()
_mgr = enabled.EnabledExtensionManager(
namespace='bilean.clients',
check_func=client_is_available,
invoke_on_load=False)
def list_opts():
yield None, cloud_opts
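The lookup order in `OpenStackClients.client()` — cached instance first, then a `_<name>()` factory method — is plain memoization. A dependency-free sketch (the `_keystone` factory here is hypothetical):

```python
class ClientCache(object):
    """Minimal sketch of the OpenStackClients memoization pattern."""

    def __init__(self):
        self._clients = {}
        self.created = 0

    def _keystone(self):
        # hypothetical factory; a real one would build a keystone client
        self.created += 1
        return {'service': 'keystone'}

    def client(self, name):
        # cached instance wins; otherwise fall back to a _<name>() factory
        if name in self._clients:
            return self._clients[name]
        factory = getattr(self, '_%s' % name, None)
        if callable(factory):
            client = factory()
            self._clients[name] = client
            return client
        # unknown client name: nothing cached, nothing returned
```

Repeated calls for the same name reuse one instance, so each heavyweight client is constructed at most once per context.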


@@ -0,0 +1,92 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import abc
import six
from oslo_config import cfg
@six.add_metaclass(abc.ABCMeta)
class ClientPlugin(object):
# Module which contains all exceptions classes which the client
# may emit
exceptions_module = None
def __init__(self, context):
self.context = context
self.clients = context.clients
self._client = None
def client(self):
if not self._client:
self._client = self._create()
return self._client
@abc.abstractmethod
def _create(self):
'''Return a newly created client.'''
pass
@property
def auth_token(self):
# Always use the auth_token from the keystone client, as
# this may be refreshed if the context contains credentials
# which allow reissuing of a new token before the context
# auth_token expiry (e.g trust_id or username/password)
return self.clients.client('keystone').auth_token
def url_for(self, **kwargs):
kc = self.clients.client('keystone')
return kc.service_catalog.url_for(**kwargs)
def _get_client_option(self, client, option):
# look for the option in the [clients_${client}] section
# unknown options raise cfg.NoSuchOptError
try:
group_name = 'clients_' + client
cfg.CONF.import_opt(option, 'bilean.common.config',
group=group_name)
v = getattr(getattr(cfg.CONF, group_name), option)
if v is not None:
return v
except cfg.NoSuchGroupError:
pass # do not error if the client is unknown
# look for the option in the generic [clients] section
cfg.CONF.import_opt(option, 'bilean.common.config', group='clients')
return getattr(cfg.CONF.clients, option)
def is_client_exception(self, ex):
'''Returns True if the current exception comes from the client.'''
if self.exceptions_module:
if isinstance(self.exceptions_module, list):
for m in self.exceptions_module:
if type(ex) in m.__dict__.values():
return True
else:
return type(ex) in self.exceptions_module.__dict__.values()
return False
def is_not_found(self, ex):
'''Returns True if the exception is a not-found.'''
return False
def is_over_limit(self, ex):
'''Returns True if the exception is an over-limit.'''
return False
def ignore_not_found(self, ex):
'''Raises the exception unless it is a not-found.'''
if not self.is_not_found(ex):
raise ex
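The `is_client_exception()` membership test above checks whether an exception's type lives in the client's exceptions module namespace. A self-contained sketch with a fabricated stand-in module:

```python
import types

# a fabricated exceptions module, standing in for e.g. glanceclient.exc
fake_exc = types.ModuleType('fake_exc')

class HTTPNotFound(Exception):
    pass

fake_exc.HTTPNotFound = HTTPNotFound

def is_client_exception(ex, exceptions_module):
    # same membership test ClientPlugin uses over a module namespace
    return type(ex) in exceptions_module.__dict__.values()
```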



@@ -0,0 +1,52 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from ceilometerclient import client as cc
from ceilometerclient import exc
from ceilometerclient.openstack.common.apiclient import exceptions as api_exc
from bilean.engine.clients import client_plugin
class CeilometerClientPlugin(client_plugin.ClientPlugin):
exceptions_module = [exc, api_exc]
def _create(self):
con = self.context
endpoint_type = self._get_client_option('ceilometer', 'endpoint_type')
endpoint = self.url_for(service_type='metering',
endpoint_type=endpoint_type)
args = {
'auth_url': con.auth_url,
'service_type': 'metering',
'project_id': con.tenant,
'token': lambda: self.auth_token,
'endpoint_type': endpoint_type,
'cacert': self._get_client_option('ceilometer', 'ca_file'),
'cert_file': self._get_client_option('ceilometer', 'cert_file'),
'key_file': self._get_client_option('ceilometer', 'key_file'),
'insecure': self._get_client_option('ceilometer', 'insecure')
}
return cc.Client('2', endpoint, **args)
def is_not_found(self, ex):
return isinstance(ex, (exc.HTTPNotFound, api_exc.NotFound))
def is_over_limit(self, ex):
return isinstance(ex, exc.HTTPOverLimit)
def is_conflict(self, ex):
return isinstance(ex, exc.HTTPConflict)


@@ -0,0 +1,99 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from bilean.common import exception
from bilean.common.i18n import _
from bilean.common.i18n import _LI
from bilean.engine.clients import client_plugin
from cinderclient import client as cc
from cinderclient import exceptions
from keystoneclient import exceptions as ks_exceptions
from oslo_log import log as logging
LOG = logging.getLogger(__name__)
class CinderClientPlugin(client_plugin.ClientPlugin):
exceptions_module = exceptions
def get_volume_api_version(self):
'''Returns the most recent API version.'''
endpoint_type = self._get_client_option('cinder', 'endpoint_type')
try:
self.url_for(service_type='volumev2', endpoint_type=endpoint_type)
return 2
except ks_exceptions.EndpointNotFound:
try:
self.url_for(service_type='volume',
endpoint_type=endpoint_type)
return 1
except ks_exceptions.EndpointNotFound:
return None
def _create(self):
con = self.context
volume_api_version = self.get_volume_api_version()
if volume_api_version == 1:
service_type = 'volume'
client_version = '1'
elif volume_api_version == 2:
service_type = 'volumev2'
client_version = '2'
else:
raise exception.Error(_('No volume service available.'))
LOG.info(_LI('Creating Cinder client with volume API version %d.'),
volume_api_version)
endpoint_type = self._get_client_option('cinder', 'endpoint_type')
args = {
'service_type': service_type,
'auth_url': con.auth_url or '',
'project_id': con.tenant,
'username': None,
'api_key': None,
'endpoint_type': endpoint_type,
'http_log_debug': self._get_client_option('cinder',
'http_log_debug'),
'cacert': self._get_client_option('cinder', 'ca_file'),
'insecure': self._get_client_option('cinder', 'insecure')
}
client = cc.Client(client_version, **args)
management_url = self.url_for(service_type=service_type,
endpoint_type=endpoint_type)
client.client.auth_token = self.auth_token
client.client.management_url = management_url
client.volume_api_version = volume_api_version
return client
def is_not_found(self, ex):
return isinstance(ex, exceptions.NotFound)
def is_over_limit(self, ex):
return isinstance(ex, exceptions.OverLimit)
def is_conflict(self, ex):
return (isinstance(ex, exceptions.ClientException) and
ex.code == 409)
def delete(self, volume_id):
        """Delete a volume given its volume id."""
self.client().volumes.delete(volume_id)


@@ -0,0 +1,103 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from bilean.common import exception
from bilean.common.i18n import _
from bilean.common.i18n import _LI
from bilean.engine.clients import client_plugin
from oslo_log import log as logging
from oslo_utils import uuidutils
from glanceclient import client as gc
from glanceclient import exc
LOG = logging.getLogger(__name__)
class GlanceClientPlugin(client_plugin.ClientPlugin):
exceptions_module = exc
def _create(self):
con = self.context
endpoint_type = self._get_client_option('glance', 'endpoint_type')
endpoint = self.url_for(service_type='image',
endpoint_type=endpoint_type)
args = {
'auth_url': con.auth_url,
'service_type': 'image',
'project_id': con.tenant,
'token': self.auth_token,
'endpoint_type': endpoint_type,
'cacert': self._get_client_option('glance', 'ca_file'),
'cert_file': self._get_client_option('glance', 'cert_file'),
'key_file': self._get_client_option('glance', 'key_file'),
'insecure': self._get_client_option('glance', 'insecure')
}
return gc.Client('1', endpoint, **args)
def is_not_found(self, ex):
return isinstance(ex, exc.HTTPNotFound)
def is_over_limit(self, ex):
return isinstance(ex, exc.HTTPOverLimit)
def is_conflict(self, ex):
return isinstance(ex, exc.HTTPConflict)
def get_image_id(self, image_identifier):
'''Return an id for the specified image name or identifier.
:param image_identifier: image name or a UUID-like identifier
:returns: the id of the requested :image_identifier:
:raises: exception.ImageNotFound,
exception.PhysicalResourceNameAmbiguity
'''
if uuidutils.is_uuid_like(image_identifier):
try:
image_id = self.client().images.get(image_identifier).id
except exc.HTTPNotFound:
image_id = self.get_image_id_by_name(image_identifier)
else:
image_id = self.get_image_id_by_name(image_identifier)
return image_id
def get_image_id_by_name(self, image_identifier):
'''Return an id for the specified image name.
:param image_identifier: image name
:returns: the id of the requested :image_identifier:
:raises: exception.ImageNotFound,
exception.PhysicalResourceNameAmbiguity
'''
try:
filters = {'name': image_identifier}
image_list = list(self.client().images.list(filters=filters))
except exc.ClientException as ex:
raise exception.Error(
_("Error retrieving image list from glance: %s") % ex)
num_matches = len(image_list)
if num_matches == 0:
LOG.info(_LI("Image %s was not found in glance"),
image_identifier)
raise exception.ImageNotFound(image_name=image_identifier)
elif num_matches > 1:
            LOG.info(_LI("Multiple images were found in glance with name "
                         "%s"), image_identifier)
raise exception.PhysicalResourceNameAmbiguity(
name=image_identifier)
else:
return image_list[0].id


@@ -0,0 +1,65 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from bilean.engine.clients import client_plugin
from heatclient import client as hc
from heatclient import exc
class HeatClientPlugin(client_plugin.ClientPlugin):
exceptions_module = exc
def _create(self):
args = {
'auth_url': self.context.auth_url,
'token': self.auth_token,
'username': None,
'password': None,
'ca_file': self._get_client_option('heat', 'ca_file'),
'cert_file': self._get_client_option('heat', 'cert_file'),
'key_file': self._get_client_option('heat', 'key_file'),
'insecure': self._get_client_option('heat', 'insecure')
}
endpoint = self.get_heat_url()
if self._get_client_option('heat', 'url'):
# assume that the heat API URL is manually configured because
# it is not in the keystone catalog, so include the credentials
# for the standalone auth_password middleware
args['username'] = self.context.username
args['password'] = self.context.password
            del args['token']
return hc.Client('1', endpoint, **args)
def is_not_found(self, ex):
return isinstance(ex, exc.HTTPNotFound)
def is_over_limit(self, ex):
return isinstance(ex, exc.HTTPOverLimit)
def is_conflict(self, ex):
return isinstance(ex, exc.HTTPConflict)
def get_heat_url(self):
heat_url = self._get_client_option('heat', 'url')
if heat_url:
tenant_id = self.context.tenant_id
heat_url = heat_url % {'tenant_id': tenant_id}
else:
endpoint_type = self._get_client_option('heat', 'endpoint_type')
heat_url = self.url_for(service_type='orchestration',
endpoint_type=endpoint_type)
return heat_url
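When the `heat.url` option is set, `get_heat_url()` expands a tenant-id template. A sketch (the URL shown is an illustrative value, not a required format):

```python
def expand_heat_url(template, tenant_id):
    """Expand a manually configured heat URL template, as get_heat_url
    does when the 'url' client option is set."""
    return template % {'tenant_id': tenant_id}
```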


@@ -0,0 +1,44 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from bilean.engine.clients import client_plugin
from oslo_config import cfg
from keystoneclient import exceptions
from keystoneclient.v2_0 import client as keystone_client
class KeystoneClientPlugin(client_plugin.ClientPlugin):
exceptions_module = exceptions
@property
def kclient(self):
return keystone_client.Client(
username=cfg.CONF.authentication.service_username,
password=cfg.CONF.authentication.service_password,
tenant_name=cfg.CONF.authentication.service_project_name,
auth_url=cfg.CONF.authentication.auth_url)
def _create(self):
return self.kclient
def is_not_found(self, ex):
return isinstance(ex, exceptions.NotFound)
def is_over_limit(self, ex):
return isinstance(ex, exceptions.RequestEntityTooLarge)
def is_conflict(self, ex):
return isinstance(ex, exceptions.Conflict)


@@ -0,0 +1,119 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from bilean.common import exception
from bilean.engine.clients import client_plugin
from oslo_utils import uuidutils
from neutronclient.common import exceptions
from neutronclient.neutron import v2_0 as neutronV20
from neutronclient.v2_0 import client as nc
class NeutronClientPlugin(client_plugin.ClientPlugin):
exceptions_module = exceptions
def _create(self):
con = self.context
endpoint_type = self._get_client_option('neutron', 'endpoint_type')
endpoint = self.url_for(service_type='network',
endpoint_type=endpoint_type)
args = {
'auth_url': con.auth_url,
'service_type': 'network',
'token': self.auth_token,
'endpoint_url': endpoint,
'endpoint_type': endpoint_type,
'ca_cert': self._get_client_option('neutron', 'ca_file'),
'insecure': self._get_client_option('neutron', 'insecure')
}
return nc.Client(**args)
def is_not_found(self, ex):
if isinstance(ex, (exceptions.NotFound,
exceptions.NetworkNotFoundClient,
exceptions.PortNotFoundClient)):
return True
return (isinstance(ex, exceptions.NeutronClientException) and
ex.status_code == 404)
def is_conflict(self, ex):
if not isinstance(ex, exceptions.NeutronClientException):
return False
return ex.status_code == 409
def is_over_limit(self, ex):
if not isinstance(ex, exceptions.NeutronClientException):
return False
return ex.status_code == 413
def find_neutron_resource(self, props, key, key_type):
return neutronV20.find_resourceid_by_name_or_id(
self.client(), key_type, props.get(key))
def resolve_network(self, props, net_key, net_id_key):
if props.get(net_key):
props[net_id_key] = self.find_neutron_resource(
props, net_key, 'network')
props.pop(net_key)
return props[net_id_key]
def resolve_subnet(self, props, subnet_key, subnet_id_key):
if props.get(subnet_key):
props[subnet_id_key] = self.find_neutron_resource(
props, subnet_key, 'subnet')
props.pop(subnet_key)
return props[subnet_id_key]
def network_id_from_subnet_id(self, subnet_id):
subnet_info = self.client().show_subnet(subnet_id)
return subnet_info['subnet']['network_id']
def get_secgroup_uuids(self, security_groups):
        '''Returns a list of security group UUIDs.

        :param security_groups: list of security group names or UUIDs
        '''
seclist = []
all_groups = None
for sg in security_groups:
if uuidutils.is_uuid_like(sg):
seclist.append(sg)
else:
if not all_groups:
response = self.client().list_security_groups()
all_groups = response['security_groups']
same_name_groups = [g for g in all_groups if g['name'] == sg]
groups = [g['id'] for g in same_name_groups]
if len(groups) == 0:
raise exception.PhysicalResourceNotFound(resource_id=sg)
elif len(groups) == 1:
seclist.append(groups[0])
else:
# for admin roles, can get the other users'
# securityGroups, so we should match the tenant_id with
# the groups, and return the own one
own_groups = [g['id'] for g in same_name_groups
if g['tenant_id'] == self.context.tenant_id]
if len(own_groups) == 1:
seclist.append(own_groups[0])
else:
raise exception.PhysicalResourceNameAmbiguity(name=sg)
return seclist
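The resolution rules in `get_secgroup_uuids()` — pass UUIDs through, map unique names to ids, and break name ties by tenant ownership — can be exercised in isolation; `is_uuid_like` here is a simplified stand-in for `oslo_utils.uuidutils.is_uuid_like`:

```python
import uuid

def is_uuid_like(val):
    # simplified stand-in for oslo_utils.uuidutils.is_uuid_like
    try:
        uuid.UUID(val)
        return True
    except (TypeError, ValueError, AttributeError):
        return False

def resolve_secgroups(names_or_ids, all_groups, tenant_id):
    """Resolve security group names/UUIDs to ids, disambiguating
    duplicate names by tenant ownership (mirrors get_secgroup_uuids)."""
    seclist = []
    for sg in names_or_ids:
        if is_uuid_like(sg):
            seclist.append(sg)
            continue
        matches = [g for g in all_groups if g['name'] == sg]
        if len(matches) == 1:
            seclist.append(matches[0]['id'])
        elif not matches:
            raise LookupError('security group not found: %s' % sg)
        else:
            # admins can see other tenants' groups; keep only our own
            own = [g['id'] for g in matches
                   if g['tenant_id'] == tenant_id]
            if len(own) != 1:
                raise ValueError('ambiguous security group name: %s' % sg)
            seclist.append(own[0])
    return seclist
```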


@@ -0,0 +1,294 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import collections
import json
import six
from novaclient import client as nc
from novaclient import exceptions
from novaclient import shell as novashell
from bilean.common import exception
from bilean.common.i18n import _
from bilean.common.i18n import _LW
from bilean.engine.clients import client_plugin
from oslo_log import log as logging
LOG = logging.getLogger(__name__)
class NovaClientPlugin(client_plugin.ClientPlugin):
deferred_server_statuses = ['BUILD',
'HARD_REBOOT',
'PASSWORD',
'REBOOT',
'RESCUE',
'RESIZE',
'REVERT_RESIZE',
'SHUTOFF',
'SUSPENDED',
'VERIFY_RESIZE']
exceptions_module = exceptions
def _create(self):
computeshell = novashell.OpenStackComputeShell()
extensions = computeshell._discover_extensions("1.1")
endpoint_type = self._get_client_option('nova', 'endpoint_type')
args = {
'project_id': self.context.tenant,
'auth_url': self.context.auth_url,
'service_type': 'compute',
'username': None,
'api_key': None,
'extensions': extensions,
'endpoint_type': endpoint_type,
'http_log_debug': self._get_client_option('nova',
'http_log_debug'),
'cacert': self._get_client_option('nova', 'ca_file'),
'insecure': self._get_client_option('nova', 'insecure')
}
client = nc.Client(1.1, **args)
management_url = self.url_for(service_type='compute',
endpoint_type=endpoint_type)
client.client.auth_token = self.auth_token
client.client.management_url = management_url
return client
def is_not_found(self, ex):
return isinstance(ex, exceptions.NotFound)
def is_over_limit(self, ex):
return isinstance(ex, exceptions.OverLimit)
def is_bad_request(self, ex):
return isinstance(ex, exceptions.BadRequest)
def is_conflict(self, ex):
return isinstance(ex, exceptions.Conflict)
def is_unprocessable_entity(self, ex):
http_status = (getattr(ex, 'http_status', None) or
getattr(ex, 'code', None))
return (isinstance(ex, exceptions.ClientException) and
http_status == 422)
def refresh_server(self, server):
'''Refresh server's attributes.
Log warnings for non-critical API errors.
'''
try:
server.get()
except exceptions.OverLimit as exc:
LOG.warn(_LW("Server %(name)s (%(id)s) received an OverLimit "
"response during server.get(): %(exception)s"),
{'name': server.name,
'id': server.id,
'exception': exc})
except exceptions.ClientException as exc:
if ((getattr(exc, 'http_status', getattr(exc, 'code', None)) in
(500, 503))):
LOG.warn(_LW('Server "%(name)s" (%(id)s) received the '
'following exception during server.get(): '
'%(exception)s'),
{'name': server.name,
'id': server.id,
'exception': exc})
else:
raise
def get_ip(self, server, net_type, ip_version):
"""Return the server's IP of the given type and version."""
if net_type in server.addresses:
for ip in server.addresses[net_type]:
if ip['version'] == ip_version:
return ip['addr']
def get_status(self, server):
'''Return the server's status.
:param server: server object
:returns: status as a string
'''
# Some clouds append extra (STATUS) strings to the status, strip it
return server.status.split('(')[0]
def get_flavor_id(self, flavor):
'''Get the id for the specified flavor name.
If the specified value is flavor id, just return it.
:param flavor: the name of the flavor to find
:returns: the id of :flavor:
:raises: exception.FlavorMissing
'''
flavor_id = None
flavor_list = self.client().flavors.list()
for o in flavor_list:
if o.name == flavor:
flavor_id = o.id
break
if o.id == flavor:
flavor_id = o.id
break
if flavor_id is None:
raise exception.FlavorMissing(flavor_id=flavor)
return flavor_id
def get_keypair(self, key_name):
'''Get the public key specified by :key_name:
:param key_name: the name of the key to look for
:returns: the keypair (name, public_key) for :key_name:
:raises: exception.UserKeyPairMissing
'''
try:
return self.client().keypairs.get(key_name)
except exceptions.NotFound:
raise exception.UserKeyPairMissing(key_name=key_name)
def delete_server(self, server):
'''Deletes a server and waits for it to disappear from Nova.'''
if not server:
return
try:
server.delete()
except Exception as exc:
self.ignore_not_found(exc)
return
while True:
yield
try:
self.refresh_server(server)
except Exception as exc:
self.ignore_not_found(exc)
break
else:
# Some clouds append extra (STATUS) strings to the status
short_server_status = server.status.split('(')[0]
if short_server_status in ("DELETED", "SOFT_DELETED"):
break
if short_server_status == "ERROR":
fault = getattr(server, 'fault', {})
message = fault.get('message', 'Unknown')
code = fault.get('code')
errmsg = (_("Server %(name)s delete failed: (%(code)s) "
"%(message)s"))
raise exception.Error(errmsg % {"name": server.name,
"code": code,
"message": message})
def delete(self, server_id):
        '''Delete a server given its server id.'''
self.client().servers.delete(server_id)
def resize(self, server, flavor, flavor_id):
"""Resize the server and then call check_resize task to verify."""
server.resize(flavor_id)
yield self.check_resize(server, flavor, flavor_id)
def rename(self, server, name):
"""Update the name for a server."""
server.update(name)
def check_resize(self, server, flavor, flavor_id):
"""Verify that a resizing server is properly resized.
If that's the case, confirm the resize, if not raise an error.
"""
self.refresh_server(server)
while server.status == 'RESIZE':
yield
self.refresh_server(server)
if server.status == 'VERIFY_RESIZE':
server.confirm_resize()
else:
raise exception.Error(
_("Resizing to '%(flavor)s' failed, status '%(status)s'") %
dict(flavor=flavor, status=server.status))
def rebuild(self, server, image_id, preserve_ephemeral=False):
"""Rebuild the server and call check_rebuild to verify."""
server.rebuild(image_id, preserve_ephemeral=preserve_ephemeral)
yield self.check_rebuild(server, image_id)
def check_rebuild(self, server, image_id):
"""Verify that a rebuilding server is rebuilt.
Raise error if it ends up in an ERROR state.
"""
self.refresh_server(server)
while server.status == 'REBUILD':
yield
self.refresh_server(server)
if server.status == 'ERROR':
raise exception.Error(
_("Rebuilding server failed, status '%s'") % server.status)
def meta_serialize(self, metadata):
"""Serialize non-string metadata values before sending them to Nova."""
if not isinstance(metadata, collections.Mapping):
raise exception.StackValidationFailed(message=_(
"nova server metadata needs to be a Map."))
return dict((key, (value if isinstance(value,
six.string_types)
else json.dumps(value))
) for (key, value) in metadata.items())
def meta_update(self, server, metadata):
"""Delete/Add the metadata in nova as needed."""
metadata = self.meta_serialize(metadata)
current_md = server.metadata
to_del = [key for key in current_md.keys() if key not in metadata]
client = self.client()
if len(to_del) > 0:
client.servers.delete_meta(server, to_del)
client.servers.set_meta(server, metadata)
def server_to_ipaddress(self, server):
'''Return the server's IP address, fetching it from Nova.'''
try:
server = self.client().servers.get(server)
except exceptions.NotFound as ex:
LOG.warn(_LW('Instance (%(server)s) not found: %(ex)s'),
{'server': server, 'ex': ex})
else:
for n in server.networks:
if len(server.networks[n]) > 0:
return server.networks[n][0]
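server.networks maps each network name to a list of addresses, and server_to_ipaddress simply returns the first address it finds. The lookup over a plain dict, as a sketch (note that which address comes back depends on the mapping's iteration order):

```python
def first_ip(networks):
    """Return the first address from a Nova-style networks mapping."""
    for name, addrs in networks.items():
        if addrs:                    # skip networks with no addresses
            return addrs[0]
    return None

ip = first_ip({'mgmt': [], 'private': ['10.0.0.5', 'fe80::1']})
```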
def get_server(self, server):
try:
return self.client().servers.get(server)
except exceptions.NotFound as ex:
LOG.warn(_LW('Server (%(server)s) not found: %(ex)s'),
{'server': server, 'ex': ex})
raise exception.ServerNotFound(server=server)
def absolute_limits(self):
"""Return the absolute limits as a dictionary."""
limits = self.client().limits.get()
return dict([(limit.name, limit.value)
for limit in list(limits.absolute)])


@ -0,0 +1,51 @@
# Copyright (c) 2014 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from bilean.engine.clients import client_plugin
from saharaclient.api import base as sahara_base
from saharaclient import client as sahara_client
class SaharaClientPlugin(client_plugin.ClientPlugin):
exceptions_module = sahara_base
def _create(self):
con = self.context
endpoint_type = self._get_client_option('sahara', 'endpoint_type')
endpoint = self.url_for(service_type='data_processing',
endpoint_type=endpoint_type)
args = {
'service_type': 'data_processing',
'input_auth_token': self.auth_token,
'auth_url': con.auth_url,
'project_name': con.tenant,
'sahara_url': endpoint
}
client = sahara_client.Client('1.1', **args)
return client
def is_not_found(self, ex):
return (isinstance(ex, sahara_base.APIException) and
ex.error_code == 404)
def is_over_limit(self, ex):
return (isinstance(ex, sahara_base.APIException) and
ex.error_code == 413)
def is_conflict(self, ex):
return (isinstance(ex, sahara_base.APIException) and
ex.error_code == 409)


@ -0,0 +1,77 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from troveclient import client as tc
from troveclient.openstack.common.apiclient import exceptions
from bilean.common import exception
from bilean.engine.clients import client_plugin
class TroveClientPlugin(client_plugin.ClientPlugin):
exceptions_module = exceptions
def _create(self):
con = self.context
endpoint_type = self._get_client_option('trove', 'endpoint_type')
args = {
'service_type': 'database',
'auth_url': con.auth_url or '',
'proxy_token': con.auth_token,
'username': None,
'password': None,
'cacert': self._get_client_option('trove', 'ca_file'),
'insecure': self._get_client_option('trove', 'insecure'),
'endpoint_type': endpoint_type
}
client = tc.Client('1.0', **args)
management_url = self.url_for(service_type='database',
endpoint_type=endpoint_type)
client.client.auth_token = con.auth_token
client.client.management_url = management_url
return client
def is_not_found(self, ex):
return isinstance(ex, exceptions.NotFound)
def is_over_limit(self, ex):
return isinstance(ex, exceptions.RequestEntityTooLarge)
def is_conflict(self, ex):
return isinstance(ex, exceptions.Conflict)
def get_flavor_id(self, flavor):
        '''Get the id for the specified flavor name.
        If the specified value is a flavor id, just return it.
        :param flavor: the name or id of the flavor to find
        :returns: the id of :flavor:
        :raises: exception.FlavorMissing
        '''
flavor_id = None
flavor_list = self.client().flavors.list()
        for o in flavor_list:
            if flavor in (o.name, o.id):
                flavor_id = o.id
                break
if flavor_id is None:
raise exception.FlavorMissing(flavor_id=flavor)
return flavor_id
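get_flavor_id accepts either a flavor name or a flavor id and resolves both against the same list. The lookup can be sketched independently of troveclient; `Flavor` below is a stand-in type, not the client's class:

```python
from collections import namedtuple

Flavor = namedtuple('Flavor', 'id name')

def resolve_flavor(flavors, name_or_id):
    """Return the id matching a flavor name, or echo back a valid id."""
    for f in flavors:
        if name_or_id in (f.name, f.id):
            return f.id
    raise LookupError('flavor %r not found' % name_or_id)

flavors = [Flavor('1', 'm1.tiny'), Flavor('2', 'm1.small')]
```

Both `resolve_flavor(flavors, 'm1.small')` and `resolve_flavor(flavors, '2')` resolve to the same id.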

112
bilean/engine/dispatcher.py Normal file

@ -0,0 +1,112 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_context import context as oslo_context
from oslo_log import log as logging
import oslo_messaging
from oslo_service import service
from bilean.common import params
from bilean.common.i18n import _LI
from bilean.common import messaging as bilean_messaging
LOG = logging.getLogger(__name__)
OPERATIONS = (
START_ACTION, CANCEL_ACTION, STOP
) = (
'start_action', 'cancel_action', 'stop'
)
class Dispatcher(service.Service):
'''Listen on an AMQP queue named for the engine.
Receive notification from engine services and schedule actions.
'''
def __init__(self, engine_service, topic, version, thread_group_mgr):
super(Dispatcher, self).__init__()
self.TG = thread_group_mgr
self.engine_id = engine_service.engine_id
self.topic = topic
self.version = version
def start(self):
super(Dispatcher, self).start()
self.target = oslo_messaging.Target(server=self.engine_id,
topic=self.topic,
version=self.version)
server = bilean_messaging.get_rpc_server(self.target, self)
server.start()
def listening(self, ctxt):
'''Respond affirmatively to confirm that engine is still alive.'''
return True
def start_action(self, ctxt, action_id=None):
self.TG.start_action(self.engine_id, action_id)
def cancel_action(self, ctxt, action_id):
'''Cancel an action.'''
self.TG.cancel_action(action_id)
def suspend_action(self, ctxt, action_id):
'''Suspend an action.'''
self.TG.suspend_action(action_id)
def resume_action(self, ctxt, action_id):
'''Resume an action.'''
self.TG.resume_action(action_id)
def stop(self):
super(Dispatcher, self).stop()
# Wait for all action threads to be finished
LOG.info(_LI("Stopping all action threads of engine %s"),
self.engine_id)
# Stop ThreadGroup gracefully
self.TG.stop(True)
LOG.info(_LI("All action threads have been finished"))
def notify(method, engine_id=None, **kwargs):
'''Send notification to dispatcher
:param method: remote method to call
:param engine_id: dispatcher to notify; None implies broadcast
'''
client = bilean_messaging.get_rpc_client(version=params.RPC_API_VERSION)
if engine_id:
# Notify specific dispatcher identified by engine_id
call_context = client.prepare(
version=params.RPC_API_VERSION,
topic=params.ENGINE_DISPATCHER_TOPIC,
server=engine_id)
else:
        # Broadcast to all dispatchers
call_context = client.prepare(
version=params.RPC_API_VERSION,
topic=params.ENGINE_DISPATCHER_TOPIC)
try:
        # We don't actually use the ctxt parameter in action
        # progress, but RPCClient.call() requires it, so we
        # pass the current oslo context here.
call_context.call(oslo_context.get_current(), method, **kwargs)
return True
except oslo_messaging.MessagingTimeout:
return False
def start_action(engine_id=None, **kwargs):
return notify(START_ACTION, engine_id, **kwargs)


@ -0,0 +1,191 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import glob
import os.path
import six
from stevedore import extension
from oslo_config import cfg
from oslo_log import log as logging
from bilean.common import exception
from bilean.common.i18n import _
from bilean.common.i18n import _LE
from bilean.common.i18n import _LI
from bilean.engine import parser
from bilean.engine import registry
LOG = logging.getLogger(__name__)
_environment = None
def global_env():
global _environment
if _environment is None:
initialize()
return _environment
class Environment(object):
'''An object that contains all rules, policies and customizations.'''
SECTIONS = (
PARAMETERS, CUSTOM_RULES,
) = (
'parameters', 'custom_rules'
)
def __init__(self, env=None, is_global=False):
'''Create an Environment from a dict.
:param env: the json environment
        :param is_global: boolean indicating whether this is the global
                          environment.
'''
self.params = {}
if is_global:
self.rule_registry = registry.Registry('rules')
self.driver_registry = registry.Registry('drivers')
else:
self.rule_registry = registry.Registry(
'rules', global_env().rule_registry)
self.driver_registry = registry.Registry(
'drivers', global_env().driver_registry)
if env is not None:
# Merge user specified keys with current environment
self.params = env.get(self.PARAMETERS, {})
custom_rules = env.get(self.CUSTOM_RULES, {})
self.rule_registry.load(custom_rules)
def parse(self, env_str):
'''Parse a string format environment file into a dictionary.'''
if env_str is None:
return {}
env = parser.simple_parse(env_str)
# Check unknown sections
for sect in env:
if sect not in self.SECTIONS:
msg = _('environment has unknown section "%s"') % sect
raise ValueError(msg)
# Fill in default values for missing sections
for sect in self.SECTIONS:
if sect not in env:
env[sect] = {}
return env
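parse() rejects unknown top-level sections and back-fills an empty dict for each missing one, so callers can always index both sections safely. The validation step on its own, using the stdlib json module in place of the project's parser:

```python
import json

SECTIONS = ('parameters', 'custom_rules')

def parse_environment(env_str):
    """Validate section names and default missing sections to {}."""
    env = json.loads(env_str) if env_str else {}
    for sect in env:
        if sect not in SECTIONS:
            raise ValueError('environment has unknown section "%s"' % sect)
    for sect in SECTIONS:
        env.setdefault(sect, {})
    return env

env = parse_environment('{"parameters": {"region": "r1"}}')
# env == {'parameters': {'region': 'r1'}, 'custom_rules': {}}
```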
def load(self, env_dict):
'''Load environment from the given dictionary.'''
self.params.update(env_dict.get(self.PARAMETERS, {}))
self.rule_registry.load(env_dict.get(self.CUSTOM_RULES, {}))
def _check_plugin_name(self, plugin_type, name):
if name is None or name == "":
msg = _('%s type name not specified') % plugin_type
raise exception.InvalidPlugin(message=msg)
elif not isinstance(name, six.string_types):
msg = _('%s type name is not a string') % plugin_type
raise exception.InvalidPlugin(message=msg)
def register_rule(self, name, plugin):
self._check_plugin_name('Rule', name)
self.rule_registry.register_plugin(name, plugin)
def get_rule(self, name):
self._check_plugin_name('Rule', name)
plugin = self.rule_registry.get_plugin(name)
if plugin is None:
raise exception.RuleTypeNotFound(rule_type=name)
return plugin
def get_rule_types(self):
return self.rule_registry.get_types()
def register_driver(self, name, plugin):
self._check_plugin_name('Driver', name)
self.driver_registry.register_plugin(name, plugin)
def get_driver(self, name):
self._check_plugin_name('Driver', name)
plugin = self.driver_registry.get_plugin(name)
if plugin is None:
msg = _('Driver plugin %(name)s is not found.') % {'name': name}
raise exception.InvalidPlugin(message=msg)
return plugin
def get_driver_types(self):
return self.driver_registry.get_types()
def read_global_environment(self):
'''Read and parse global environment files.'''
cfg.CONF.import_opt('environment_dir', 'bilean.common.config')
env_dir = cfg.CONF.environment_dir
try:
files = glob.glob(os.path.join(env_dir, '*'))
except OSError as ex:
LOG.error(_LE('Failed to read %s'), env_dir)
LOG.exception(ex)
return
for fname in files:
try:
with open(fname) as f:
LOG.info(_LI('Loading environment from %s'), fname)
self.load(self.parse(f.read()))
except ValueError as vex:
LOG.error(_LE('Failed to parse %s'), fname)
LOG.exception(six.text_type(vex))
except IOError as ioex:
LOG.error(_LE('Failed to read %s'), fname)
LOG.exception(six.text_type(ioex))
def _get_mapping(namespace):
mgr = extension.ExtensionManager(
namespace=namespace,
invoke_on_load=False)
return [[name, mgr[name].plugin] for name in mgr.names()]
def initialize():
global _environment
if _environment is not None:
return
env = Environment(is_global=True)
# Register global plugins when initialized
entries = _get_mapping('bilean.rules')
for name, plugin in entries:
env.register_rule(name, plugin)
try:
entries = _get_mapping('bilean.drivers')
for name, plugin in entries:
env.register_driver(name, plugin)
except Exception:
pass
env.read_global_environment()
_environment = env

138
bilean/engine/events.py Normal file

@ -0,0 +1,138 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import six
from bilean.common import exception
from bilean.common import utils
from bilean.common.i18n import _
from bilean.db import api as db_api
from bilean.engine import resources as bilean_resources
from oslo_log import log as logging
LOG = logging.getLogger(__name__)
class Event(object):
"""Class to deal with consumption record."""
def __init__(self, timestamp, **kwargs):
self.timestamp = timestamp
        self.id = kwargs.get('id', None)
        self.user_id = kwargs.get('user_id', None)
        self.action = kwargs.get('action', None)
        self.resource_type = kwargs.get('resource_type', None)
        self.value = kwargs.get('value', 0)
@classmethod
def from_db_record(cls, record):
'''Construct an event object from a database record.'''
kwargs = {
'id': record.id,
'user_id': record.user_id,
'action': record.action,
'resource_type': record.resource_type,
'value': record.value,
}
return cls(record.timestamp, **kwargs)
@classmethod
def load(cls, context, db_event=None, event_id=None, project_safe=True):
'''Retrieve an event record from database.'''
if db_event is not None:
return cls.from_db_record(db_event)
record = db_api.event_get(context, event_id, project_safe=project_safe)
if record is None:
raise exception.EventNotFound(event=event_id)
return cls.from_db_record(record)
@classmethod
def load_all(cls, context, filters=None, limit=None, marker=None,
sort_keys=None, sort_dir=None, project_safe=True,
show_deleted=False):
'''Retrieve all events from database.'''
records = db_api.event_get_all(context, limit=limit, marker=marker,
sort_keys=sort_keys, sort_dir=sort_dir,
filters=filters,
project_safe=project_safe,
show_deleted=show_deleted)
for record in records:
yield cls.from_db_record(record)
def store(self, context):
'''Store the event into database and return its ID.'''
values = {
'id': self.id,
'user_id': self.user_id,
'action': self.action,
'resource_type': self.resource_type,
'value': self.value,
}
event = db_api.event_create(context, values)
self.id = event.id
return self.id
@classmethod
def from_dict(cls, **kwargs):
timestamp = kwargs.pop('timestamp')
        return cls(timestamp, **kwargs)
def to_dict(self):
evt = {
'id': self.id,
'user_id': self.user_id,
'action': self.action,
'resource_type': self.resource_type,
'value': self.value,
'timestamp': utils.format_time(self.timestamp),
}
return evt
def record(context, user_id, action=None, seconds=0, value=0):
    """Generate events for a specific user.
    :param context: oslo.messaging context
    :param user_id: ID of the user to record events for
    :param action: action of the event, either 'charge' or 'recharge'
    :param seconds: usage time in seconds, needed when action is 'charge'
    :param value: recharge value, needed when action is 'recharge'
    """
    try:
        if action == 'charge':
            resources = bilean_resources.resource_get_all(
                context, user_id=user_id)
            for resource in resources:
                usage = resource['rate'] / 3600.0 * seconds
                db_api.event_create(context,
                                    {'user_id': user_id,
                                     'resource_id': resource['id'],
                                     'resource_type': resource['resource_type'],
                                     'action': action,
                                     'value': usage})
        else:
            db_api.event_create(context,
                                {'user_id': user_id,
                                 'action': action,
                                 'value': value})
    except Exception as exc:
        LOG.error(_("Error generating events: %s") % six.text_type(exc))
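Each charge event values a resource's usage as its hourly rate divided by 3600, times the elapsed seconds. The arithmetic on its own:

```python
def charge_value(rate_per_hour, seconds):
    """Cost of running one resource for `seconds` at an hourly rate."""
    return rate_per_hour / 3600.0 * seconds

# Half an hour at a rate of 0.36 per hour costs 0.18.
cost = charge_value(0.36, 1800)
```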

86
bilean/engine/parser.py Normal file

@ -0,0 +1,86 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
from oslo_serialization import jsonutils
import six
from six.moves import urllib
import yaml
from oslo_log import log as logging
from bilean.common.i18n import _
LOG = logging.getLogger(__name__)
# Try LibYAML if available
if hasattr(yaml, 'CSafeLoader'):
Loader = yaml.CSafeLoader
else:
Loader = yaml.SafeLoader
if hasattr(yaml, 'CSafeDumper'):
Dumper = yaml.CSafeDumper
else:
Dumper = yaml.SafeDumper
class YamlLoader(Loader):
def normalise_file_path_to_url(self, path):
if urllib.parse.urlparse(path).scheme:
return path
path = os.path.abspath(path)
return urllib.parse.urljoin('file:',
urllib.request.pathname2url(path))
def include(self, node):
try:
url = self.normalise_file_path_to_url(self.construct_scalar(node))
tmpl = urllib.request.urlopen(url).read()
return yaml.load(tmpl, Loader)
except urllib.error.URLError as ex:
raise IOError('Failed retrieving file %s: %s' %
(url, six.text_type(ex)))
def process_unicode(self, node):
# Override the default string handling function to always return
# unicode objects
return self.construct_scalar(node)
YamlLoader.add_constructor('!include', YamlLoader.include)
YamlLoader.add_constructor(u'tag:yaml.org,2002:str',
YamlLoader.process_unicode)
YamlLoader.add_constructor(u'tag:yaml.org,2002:timestamp',
YamlLoader.process_unicode)
def simple_parse(in_str):
try:
out_dict = jsonutils.loads(in_str)
except ValueError:
try:
out_dict = yaml.load(in_str, Loader=YamlLoader)
except yaml.YAMLError as yea:
yea = six.text_type(yea)
msg = _('Error parsing input: %s') % yea
raise ValueError(msg)
else:
if out_dict is None:
out_dict = {}
if not isinstance(out_dict, dict):
msg = _('The input is not a JSON object or YAML mapping.')
raise ValueError(msg)
return out_dict
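simple_parse tries JSON first and falls back to YAML only when JSON parsing fails, then insists the result is a mapping. The control flow, sketched with a toy line parser standing in for yaml.load:

```python
import json

def toy_yaml(text):
    """Stand-in for yaml.load: parse simple 'key: value' lines."""
    return dict(line.split(': ', 1) for line in text.strip().splitlines())

def parse_input(in_str):
    """Try JSON first, then fall back to the YAML-style parser."""
    try:
        out = json.loads(in_str)
    except ValueError:
        out = toy_yaml(in_str)
    if out is None:
        out = {}
    if not isinstance(out, dict):
        raise ValueError('The input is not a JSON object or YAML mapping.')
    return out

as_json = parse_input('{"a": 1}')     # JSON path
as_yaml = parse_input('name: web')    # fallback path
```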

136
bilean/engine/registry.py Normal file

@ -0,0 +1,136 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import itertools
import six
from oslo_log import log as logging
from bilean.common.i18n import _LI
from bilean.common.i18n import _LW
LOG = logging.getLogger(__name__)
class PluginInfo(object):
'''Base mapping of plugin type to implementation.'''
def __new__(cls, registry, name, plugin, **kwargs):
'''Create a new PluginInfo of the appropriate class.
Placeholder for class hierarchy extensibility
'''
return super(PluginInfo, cls).__new__(cls)
def __init__(self, registry, name, plugin):
self.registry = registry
self.name = name
self.plugin = plugin
self.user_provided = True
def __eq__(self, other):
if other is None:
return False
return (self.name == other.name and
self.plugin == other.plugin and
self.user_provided == other.user_provided)
def __ne__(self, other):
return not self.__eq__(other)
def __lt__(self, other):
if self.user_provided != other.user_provided:
# user provided ones must be sorted above system ones.
return self.user_provided > other.user_provided
if len(self.name) != len(other.name):
            # a more specific (longer) name must be sorted above shorter ones.
return len(self.name) > len(other.name)
return self.name < other.name
def __gt__(self, other):
return other.__lt__(self)
def __str__(self):
return '[Plugin](User:%s) %s -> %s' % (self.user_provided,
self.name, str(self.plugin))
class Registry(object):
'''A registry for managing rule classes.'''
def __init__(self, registry_name, global_registry=None):
self.registry_name = registry_name
self._registry = {}
self.is_global = False if global_registry else True
self.global_registry = global_registry
def _register_info(self, name, info):
        '''Place the new info in the correct location in the registry.
        :param name: the plugin name.
        :param info: reference to a PluginInfo data structure; deregister
                     the named plugin if specified as None.
        '''
'''
registry = self._registry
if info is None:
# delete this entry.
LOG.warn(_LW('Removing %(item)s from registry'), {'item': name})
registry.pop(name, None)
return
if name in registry and isinstance(registry[name], PluginInfo):
if registry[name] == info:
return
details = {
'name': name,
'old': str(registry[name].plugin),
'new': str(info.plugin)
}
LOG.warn(_LW('Changing %(name)s from %(old)s to %(new)s'), details)
else:
LOG.info(_LI('Registering %(name)s -> %(value)s'), {
'name': name, 'value': str(info.plugin)})
info.user_provided = not self.is_global
registry[name] = info
def register_plugin(self, name, plugin):
pi = PluginInfo(self, name, plugin)
self._register_info(name, pi)
def load(self, json_snippet):
for k, v in iter(json_snippet.items()):
if v is None:
self._register_info(k, None)
else:
self.register_plugin(k, v)
def iterable_by(self, name):
plugin = self._registry.get(name)
if plugin:
yield plugin
def get_plugin(self, name):
giter = []
if not self.is_global:
giter = self.global_registry.iterable_by(name)
matches = itertools.chain(self.iterable_by(name), giter)
infoes = sorted(matches)
return infoes[0].plugin if infoes else None
def as_dict(self):
return dict((k, v.plugin) for k, v in self._registry.items())
def get_types(self):
'''Return a list of valid plugin types.'''
return [{'name': name} for name in six.iterkeys(self._registry)]
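get_plugin chains the local and global candidates and picks the smallest under PluginInfo's ordering, so user-provided entries always shadow built-ins of the same name. The precedence rule in isolation (`Info` is a minimal stand-in for PluginInfo):

```python
class Info:
    """Minimal stand-in for PluginInfo: just the sort key."""
    def __init__(self, name, plugin, user_provided):
        self.name, self.plugin, self.user_provided = name, plugin, user_provided

    def __lt__(self, other):
        if self.user_provided != other.user_provided:
            return self.user_provided > other.user_provided  # user first
        if len(self.name) != len(other.name):
            return len(self.name) > len(other.name)          # longer first
        return self.name < other.name

builtin = Info('os.nova.server', 'BuiltinRule', user_provided=False)
custom = Info('os.nova.server', 'CustomRule', user_provided=True)
winner = sorted([builtin, custom])[0].plugin   # the user-provided plugin
```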


@ -0,0 +1,60 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from bilean.db import api as db_api
from bilean.engine import api
def resource_get(context, resource_id):
return db_api.resource_get(context, resource_id)
def resource_get_all(context, **search_opts):
resources = db_api.resource_get_all(context, **search_opts)
if resources:
return [api.format_bilean_resource(resource) for resource in resources]
return []
def resource_get_by_physical_resource_id(context,
physical_resource_id,
resource_type):
return db_api.resource_get_by_physical_resource_id(
context, physical_resource_id, resource_type)
def resource_create(context, values):
return db_api.resource_create(context, values)
def resource_update(context, resource_id, values):
return db_api.resource_update(context, resource_id, values)
def resource_update_by_resource(context, resource):
return db_api.resource_update_by_resource(context, resource)
def resource_delete(context, resource_id):
db_api.resource_delete(context, resource_id)
def resource_delete_by_physical_resource_id(context,
physical_resource_id,
resource_type):
db_api.resource_delete_by_physical_resource_id(
context, physical_resource_id, resource_type)
def resource_delete_by_user_id(context, user_id):
db_api.resource_delete(context, user_id)

39
bilean/engine/rules.py Normal file

@ -0,0 +1,39 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from bilean.db import api as db_api
def list_rules(context):
rules = db_api.rule_get_all(context)
return rules
def create_rule(context, values):
return db_api.rule_create(context, values)
def get_rule(context, rule_id):
return db_api.rule_get(context, rule_id)
def delete_rule(context, rule_id):
return db_api.rule_delete(context, rule_id)
def get_rule_by_filters(context, **filters):
return db_api.get_rule_by_filters(context, **filters)
def update_rule(context, rule_id, values):
return db_api.rule_update(context, rule_id, values)

585
bilean/engine/service.py Normal file

@ -0,0 +1,585 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from datetime import timedelta
import functools
import os
import random
import six
import uuid
from oslo_config import cfg
from oslo_log import log as logging
import oslo_messaging
from oslo_service import service
from oslo_utils import timeutils
from bilean.common import context
from bilean.common import exception
from bilean.common.i18n import _
from bilean.common.i18n import _LI
from bilean.common.i18n import _LE
from bilean.common import messaging as rpc_messaging
from bilean.common import params as bilean_params
from bilean.common import schema
from bilean.db import api as db_api
from bilean.engine.bilean_task import BileanTask
from bilean.engine import clients as bilean_clients
from bilean.engine import environment
from bilean.engine import events as events_client
from bilean.engine import resources as resources_client
from bilean.engine import rules as rules_client
from bilean.engine import users as users_client
from bilean import notifier
LOG = logging.getLogger(__name__)
def request_context(func):
@functools.wraps(func)
def wrapped(self, ctx, *args, **kwargs):
if ctx is not None and not isinstance(ctx, context.RequestContext):
ctx = context.RequestContext.from_dict(ctx.to_dict())
try:
return func(self, ctx, *args, **kwargs)
except exception.BileanException:
raise oslo_messaging.rpc.dispatcher.ExpectedException()
return wrapped
class EngineService(service.Service):
"""Manages the running instances from creation to destruction.
All the methods in here are called from the RPC backend. This is
all done dynamically so if a call is made via RPC that does not
have a corresponding method here, an exception will be thrown when
it attempts to call into this class. Arguments to these methods
are also dynamically added and will be named as keyword arguments
by the RPC caller.
"""
RPC_API_VERSION = '1.1'
def __init__(self, host, topic, manager=None, cnxt=None):
super(EngineService, self).__init__()
bilean_clients.initialise()
self.host = host
self.topic = topic
# The following are initialized here, but assigned in start() which
# happens after the fork when spawning multiple worker processes
self.bilean_task = None
self.engine_id = None
self.target = None
self._rpc_server = None
self.notifier = notifier.Notifier()
self.job_task_mapping = {'daily': '_daily_task',
'notify': '_notify_task',
'freeze': '_freeze_task'}
if cnxt is None:
cnxt = context.get_service_context()
self.clients = cnxt.clients
def start(self):
self.engine_id = str(uuid.uuid4())
target = oslo_messaging.Target(version=self.RPC_API_VERSION,
server=cfg.CONF.host,
topic=self.topic)
self.target = target
self._rpc_server = rpc_messaging.get_rpc_server(target, self)
self._rpc_server.start()
super(EngineService, self).start()
def _stop_rpc_server(self):
# Stop RPC connection to prevent new requests
LOG.info(_LI("Stopping engine service..."))
try:
self._rpc_server.stop()
self._rpc_server.wait()
LOG.info(_LI('Engine service stopped successfully'))
except Exception as ex:
LOG.error(_LE('Failed to stop engine service: %s'),
six.text_type(ex))
def stop(self):
self._stop_rpc_server()
# Wait for all active threads to be finished
self.bilean_task.stop()
super(EngineService, self).stop()
def create_bilean_tasks(self):
        LOG.info(_LI("Starting billing task for all users pid=%s"),
                 os.getpid())
if self.bilean_task is None:
self.bilean_task = BileanTask()
self._init_users()
# Init billing job for engine
admin_context = context.get_admin_context()
jobs = db_api.job_get_by_engine_id(admin_context, self.engine_id)
if jobs:
for job in jobs:
if self.bilean_task.is_exist(job.id):
continue
job_type = job.job_type
task = getattr(self, self.job_task_mapping[job_type])
self.bilean_task.add_job(task,
job.id,
job_type=job_type,
params=job.parameters)
self.bilean_task.start()
def _init_users(self):
tenants = self.keystoneclient.tenants.list()
tenant_ids = [t.id for t in tenants]
admin_context = context.get_admin_context()
users = self.list_users(admin_context)
user_ids = [user['id'] for user in users]
for tid in tenant_ids:
if tid not in user_ids:
user = self.create_user(
admin_context,
values={'id': tid,
'status': 'init',
'status_reason': 'Init status'})
def _notify_task(self, user_id):
        msg = {'user': user_id, 'notification': 'The balance is almost used up'}
self.notifier.info('billing.notify', msg)
admin_context = context.get_admin_context()
user = users_client.get_user(admin_context, user_id)
if user['status'] != 'freeze' and user['rate'] > 0:
user = users_client.do_bill(admin_context,
user,
update=True,
bilean_controller=self)
try:
db_api.job_delete(
admin_context, self._generate_job_id(user_id, 'notify'))
except exception.NotFound as e:
LOG.warn(_("Failed in deleting job: %s") % six.text_type(e))
self._add_freeze_job(admin_context, user)
def _daily_task(self, user_id):
admin_context = context.get_admin_context()
user = users_client.get_user(admin_context, user_id)
if user['status'] != 'freeze' and user['rate'] > 0:
users_client.do_bill(admin_context,
user,
update=True,
bilean_controller=self)
try:
db_api.job_delete(
admin_context, self._generate_job_id(user_id, 'daily'))
except exception.NotFound as e:
LOG.warn(_("Failed in deleting job: %s") % six.text_type(e))
def _freeze_task(self, user_id):
admin_context = context.get_admin_context()
user = users_client.get_user(admin_context, user_id)
if user['status'] != 'freeze' and user['rate'] > 0:
users_client.do_bill(admin_context,
user,
update=True,
bilean_controller=self)
try:
db_api.job_delete(
admin_context, self._generate_job_id(user_id, 'freeze'))
except exception.NotFound as e:
LOG.warn(_("Failed in deleting job: %s") % six.text_type(e))
@property
def keystoneclient(self):
return self.clients.client('keystone')
@property
def novaclient(self):
return self.clients.client_plugin('nova')
@property
def cinderclient(self):
return self.clients.client_plugin('cinder')
@request_context
def list_users(self, cnxt, detail=False):
return users_client.list_users(cnxt)
def _add_notify_job(self, cnxt, user):
if user['rate'] > 0:
total_seconds = user['balance'] / user['rate'] * 3600.0
prior_notify_time = cfg.CONF.bilean_task.prior_notify_time
notify_seconds = total_seconds - prior_notify_time * 60
notify_seconds = notify_seconds if notify_seconds > 0 else 0
nf_time = timeutils.utcnow() + timedelta(seconds=notify_seconds)
job_params = {'run_date': nf_time}
job_id = self._generate_job_id(user['id'], 'notify')
self.bilean_task.add_job(self._notify_task,
job_id,
job_type='notify',
params=job_params)
# Save job to database
job = {'id': job_id,
'job_type': 'notify',
'engine_id': self.engine_id,
'parameters': {'run_date': nf_time}}
db_api.job_create(cnxt, job)
def _add_freeze_job(self, cnxt, user):
if user['rate'] > 0:
total_seconds = user['balance'] / user['rate'] * 3600.0
run_time = timeutils.utcnow() + timedelta(seconds=total_seconds)
job_params = {'run_date': run_time}
job_id = self._generate_job_id(user['id'], 'freeze')
self.bilean_task.add_job(self._freeze_task,
job_id,
job_type='freeze',
params=job_params)
# Save job to database
job = {'id': job_id,
'job_type': 'freeze',
'engine_id': self.engine_id,
'parameters': {'run_date': run_time}}
db_api.job_create(cnxt, job)
def _add_daily_job(self, cnxt, user_id):
job_id = self._generate_job_id(user_id, 'daily')
params = {'hour': random.randint(0, 23),
'minute': random.randint(0, 59)}
self.bilean_task.add_job(self._daily_task, job_id, params=params)
# Save job to database
job = {'id': job_id,
'job_type': 'daily',
'engine_id': self.engine_id,
'parameters': params}
db_api.job_create(cnxt, job)
def _delete_all_job(self, cnxt, user_id):
notify_job_id = self._generate_job_id(user_id, 'notify')
freeze_job_id = self._generate_job_id(user_id, 'freeze')
daily_job_id = self._generate_job_id(user_id, 'daily')
for job_id in notify_job_id, freeze_job_id, daily_job_id:
if self.bilean_task.is_exist(job_id):
self.bilean_task.remove_job(job_id)
try:
db_api.job_delete(cnxt, job_id)
except exception.NotFound as e:
LOG.warn(_("Failed in deleting job: %s") % six.text_type(e))
def _update_notify_job(self, cnxt, user):
notify_job_id = self._generate_job_id(user['id'], 'notify')
freeze_job_id = self._generate_job_id(user['id'], 'freeze')
for job_id in notify_job_id, freeze_job_id:
if self.bilean_task.is_exist(job_id):
self.bilean_task.remove_job(job_id)
try:
db_api.job_delete(cnxt, notify_job_id)
db_api.job_delete(cnxt, freeze_job_id)
except exception.NotFound as e:
LOG.warn(_("Failed in deleting job: %s") % six.text_type(e))
self._add_notify_job(cnxt, user)
def _update_freeze_job(self, cnxt, user):
notify_job_id = self._generate_job_id(user['id'], 'notify')
freeze_job_id = self._generate_job_id(user['id'], 'freeze')
for job_id in notify_job_id, freeze_job_id:
if self.bilean_task.is_exist(job_id):
self.bilean_task.remove_job(job_id)
try:
db_api.job_delete(cnxt, notify_job_id)
db_api.job_delete(cnxt, freeze_job_id)
except exception.NotFound as e:
LOG.warn(_("Failed in deleting job: %s") % six.text_type(e))
self._add_freeze_job(cnxt, user)
def _update_user_job(self, cnxt, user):
"""Update user's billing job"""
user_id = user["id"]
no_job_status = ['init', 'free', 'freeze']
if user['status'] in no_job_status:
self._delete_all_job(cnxt, user_id)
elif user['status'] == 'inuse':
self._update_notify_job(cnxt, user)
daily_job_id = self._generate_job_id(user_id, 'daily')
if not self.bilean_task.is_exist(daily_job_id):
self._add_daily_job(cnxt, user_id)
elif user['status'] == 'prefreeze':
self._update_freeze_job(cnxt, user)
@request_context
def create_user(self, cnxt, values):
return users_client.create_user(cnxt, values)
@request_context
def show_user(self, cnxt, user_id):
if cnxt.tenant_id != user_id and not cnxt.is_admin:
raise exception.Forbidden("Only admin can do this.")
user = users_client.get_user(cnxt, user_id)
if user['rate'] > 0 and user['status'] != 'freeze':
return users_client.do_bill(cnxt, user)
return user
@request_context
def update_user(self, cnxt, user_id, values, user=None):
"""Update user info by given values."""
# Do bill before updating user.
if user is None:
user = users_client.get_user(cnxt, user_id)
if user['status'] != 'freeze' and user['rate'] > 0:
user = users_client.do_bill(cnxt, user, update=True)
action = values.get('action', 'update')
if action.lower() == "recharge":
recharge_value = values['balance']
new_balance = recharge_value + user['balance']
values.update(balance=new_balance)
if user['status'] == 'init' and new_balance > 0:
values.update(status='free',
status_reason='Recharge to free')
elif user['status'] == 'freeze' and new_balance > 0:
values.update(status='free',
status_reason='Recharge to free')
if user['status'] == 'prefreeze':
prior_notify_time = cfg.CONF.bilean_task.prior_notify_time
notify_seconds = prior_notify_time * 60
temp_use = notify_seconds * user['rate']
if new_balance > temp_use:
values.update(status='inuse',
status_reason='Recharge to inuse')
events_client.generate_events(
cnxt, user_id, 'recharge', recharge_value=recharge_value)
user.update(values)
# As the user has been updated, the billing job for the user
# should be updated too.
self._update_user_job(cnxt, user)
# Update user
return users_client.update_user(cnxt, user_id, values)
@request_context
def delete_user(self, cnxt, user_id):
raise exception.NotImplement()
@request_context
def rule_create(self, cnxt, name, spec, metadata):
type_name, version = schema.get_spec_version(spec)
try:
plugin = environment.global_env().get_rule(type_name)
except exception.RuleTypeNotFound:
msg = _("The specified rule type (%(type)s) is not supported."
) % {"type": type_name}
raise exception.BileanBadRequest(msg=msg)
LOG.info(_LI("Creating rule type: %(type)s, name: %(name)s."),
{'type': type_name, 'name': name})
rule = plugin(name, spec, metadata=metadata)
try:
rule.validate()
except exception.InvalidSpec as ex:
msg = six.text_type(ex)
LOG.error(_LE("Failed in creating rule: %s"), msg)
raise exception.BileanBadRequest(msg=msg)
rule.store(cnxt)
LOG.info(_LI("Rule %(name)s is created: %(id)s."),
{'name': name, 'id': rule.id})
return rule.to_dict()
@request_context
def rule_list(self, cnxt):
return rules_client.list_rules(cnxt)
@request_context
def rule_show(self, cnxt, rule_id):
return rules_client.get_rule(cnxt, rule_id)
@request_context
def rule_update(self, cnxt, rule_id, values):
return rules.update_rule(cnxt, rule_id, values)
@request_context
def rule_delete(self, cnxt, rule_id):
return rules_client.delete_rule(cnxt, rule_id)
def _get_resource_rule(self, cnxt, resource):
"""Get the exact rule result for given resource."""
resource_type = resource['resource_type']
try:
rules = rules_client.get_rule_by_filters(
cnxt, resource_type=resource_type)
return self._match_rule(rules, resource)
except Exception as ex:
LOG.warn(_("Failed in getting rule of resource: %s") %
six.text_type(ex))
def _match_rule(self, rules, resource):
res_size = resource['size']
if isinstance(res_size, six.string_types) and res_size.isdigit():
res_size = int(res_size)
res_rule = {}
for rule in rules:
start = bilean_params.MIN_VALUE if rule.get('start') == '-1' \
else rule.get('start')
end = bilean_params.MAX_VALUE if rule.get('end') == '-1' \
else rule.get('end')
if start <= res_size <= end:
price = eval(rule.get('price'), {'n': res_size})
res_rule["rate"] = price
res_rule["rule_id"] = rule.get("id")
return res_rule
raise exception.NotFound(_('No exact rule found for resource: %s') %
resource)
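The rule matching above boils down to finding the first rule whose size range covers the resource, then evaluating the rule's price expression with the size bound to `n`. A minimal self-contained sketch of that idea (the rule values and `match_rule` helper here are illustrative, not the engine's API; real rules come from the bilean database):

```python
# Stand-ins for bilean_params.MIN_VALUE / MAX_VALUE (assumed sentinels).
MIN_VALUE, MAX_VALUE = 0, 10 ** 9

def match_rule(rules, size):
    """Return the hourly rate from the first rule whose range covers size."""
    for rule in rules:
        start = MIN_VALUE if rule['start'] == '-1' else int(rule['start'])
        end = MAX_VALUE if rule['end'] == '-1' else int(rule['end'])
        if start <= size <= end:
            # The price field is an expression over 'n' (the size),
            # evaluated the same way the engine does.
            return eval(rule['price'], {'n': size})
    raise LookupError('No rule matched size %s' % size)

# Hypothetical volume-pricing rules: flat per-GB up to 40 GB,
# then a base fee plus a smaller per-GB rate.
rules = [{'start': '1', 'end': '40', 'price': 'n * 0.05'},
         {'start': '41', 'end': '-1', 'price': '2 + n * 0.01'}]
print(match_rule(rules, 20))   # rate for a 20 GB volume
print(match_rule(rules, 100))  # rate for a 100 GB volume
```

Evaluating arbitrary price expressions with `eval` is how this code does it, but it only makes sense when rules are admin-supplied and trusted.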
@request_context
def validate_creation(self, cnxt, resources):
user_id = cnxt.tenant_id
user = users_client.get_user(cnxt, user_id)
ress = resources['resources']
count = resources.get('count', 1)
ratecount = 0
for resource in ress:
res_rule = self._get_resource_rule(cnxt, resource)
ratecount += res_rule['rate']
if count > 1:
ratecount = ratecount * count
# Pre 1 hour bill for resources
pre_bill = ratecount * 1
if pre_bill > user['balance']:
return dict(validation=False)
return dict(validation=True)
@request_context
def resource_create(self, cnxt, resources):
"""Create resource by given database
Cause new resource would update user's rate, user update and billing
would be done.
"""
d_rate = 0
for resource in resources:
user_id = resource.get("user_id")
if user_id is None:
user_id = cnxt.tenant_id
resource['user_id'] = user_id
# Get the rule info and update the resource
res_rule = self._get_resource_rule(cnxt, resource)
resource.update(res_rule)
d_rate += res_rule['rate']
self._change_user_rate(cnxt, user_id, d_rate)
r_resources = []
for resource in resources:
r_resources.append(
resources_client.resource_create(cnxt, resource))
return r_resources
def _change_user_rate(self, cnxt, user_id, df_rate):
old_user = users_client.get_user(cnxt, user_id)
# Update the rate of user
old_rate = old_user.get('rate', 0)
new_user_rate = old_rate + df_rate
user_update_params = {"rate": new_user_rate}
if old_rate == 0 and new_user_rate > 0:
user_update_params['last_bill'] = timeutils.utcnow()
if df_rate > 0 and old_user['status'] == 'free':
user_update_params['status'] = 'inuse'
elif df_rate < 0:
if new_user_rate == 0 and old_user['balance'] > 0:
user_update_params['status'] = 'free'
elif old_user['status'] == 'prefreeze':
prior_notify_time = cfg.CONF.bilean_task.prior_notify_time
notify_seconds = prior_notify_time * 60
temp_use = notify_seconds * new_user_rate
if old_user['balance'] > temp_use:
user_update_params['status'] = 'inuse'
user = self.update_user(cnxt, user_id, user_update_params, old_user)
# As the rate of user has changed, the billing job for the user
# should change too.
self._update_user_job(cnxt, user)
@request_context
def resource_list(self, cnxt, **search_opts):
return resources_client.resource_get_all(cnxt, **search_opts)
@request_context
def resource_get(self, cnxt, resource_id):
return resources_client.resource_get(cnxt, resource_id)
@request_context
def resource_update(self, cnxt, resource):
old_resource = db_api.resource_get_by_physical_resource_id(
cnxt, resource['resource_ref'], resource['resource_type'])
new_size = resource.get('size')
new_status = resource.get('status')
if new_size:
res_rule = self._get_resource_rule(cnxt, resource)
resource.update(res_rule)
d_rate = resource["rate"] - old_resource["rate"]
elif new_status in bilean_params.RESOURCE_STATUS and \
new_status != old_resource['status']:
if new_status == 'paused':
d_rate = - resource["rate"]
else:
d_rate = resource["rate"]
if d_rate:
self._change_user_rate(cnxt, resource['user_id'], d_rate)
return resources_client.resource_update_by_resource(cnxt, resource)
@request_context
def resource_delete(self, cnxt, resources):
"""Delele resources"""
d_rate = 0
for resource in resources:
res = db_api.resource_get_by_physical_resource_id(
cnxt, resource['resource_ref'], resource['resource_type'])
d_rate += res['rate']
d_rate = - d_rate
self._change_user_rate(cnxt, res['user_id'], d_rate)
try:
for resource in resources:
resources_client.resource_delete_by_physical_resource_id(
cnxt, resource['resource_ref'], resource['resource_type'])
except Exception as ex:
LOG.warn(_("Delete resource error %s"), ex)
return
def _delete_real_resource(self, resource):
resource_client_mappings = {'instance': 'novaclient',
'volume': 'cinderclient'}
resource_type = resource['resource_type']
c = getattr(self, resource_client_mappings[resource_type])
LOG.info(_("Delete resource: %s") % resource['resource_ref'])
c.delete(resource['resource_ref'])
def do_freeze_action(self, cnxt, user_id):
"""Freeze user, delete all resource ralated to user"""
resources = resources_client.resource_get_all(cnxt, user_id=user_id)
for resource in resources:
self._delete_real_resource(resource)
@request_context
def list_events(self, cnxt, **filters):
return events_client.events_get_all_by_filters(cnxt, **filters)
def _generate_job_id(self, user_id, job_type):
"""Generate job id by given user_id and job type"""
return "%s-%s" % (job_type, user_id)

bilean/engine/users.py Normal file
@ -0,0 +1,185 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from bilean.common import exception
from bilean.common.i18n import _
from bilean.common import utils
from bilean.db import api as db_api
from bilean.engine import api
from bilean.engine import events
from oslo_log import log as logging
from oslo_utils import timeutils
LOG = logging.getLogger(__name__)
class User(object):
"""User object contains all user operations"""
statuses = (
INIT, ACTIVE, WARNING, FREEZE,
) = (
'INIT', 'ACTIVE', 'WARNING', 'FREEZE',
)
def __init__(self, user_id, policy_id, **kwargs):
self.id = user_id
self.policy_id = policy_id
self.balance = kwargs.get('balance', 0)
self.rate = kwargs.get('rate', 0.0)
self.credit = kwargs.get('credit', 0)
self.last_bill = kwargs.get('last_bill', None)
self.status = kwargs.get('status', self.INIT)
self.status_reason = kwargs.get('status_reason', 'Init user')
self.created_at = kwargs.get('created_at', None)
self.updated_at = kwargs.get('updated_at', None)
self.deleted_at = kwargs.get('deleted_at', None)
def store(self, context):
"""Store the user record into the database table."""
values = {
'policy_id': self.policy_id,
'balance': self.balance,
'rate': self.rate,
'credit': self.credit,
'last_bill': self.last_bill,
'status': self.status,
'status_reason': self.status_reason,
'created_at': self.created_at,
'updated_at': self.updated_at,
'deleted_at': self.deleted_at,
}
if self.created_at:
db_api.user_update(context, self.id, values)
else:
values.update(id=self.id)
user = db_api.user_create(context, values)
self.created_at = user.created_at
return self.id
@classmethod
def _from_db_record(cls, context, record):
'''Construct a user object from database record.
:param context: the context used for DB operations;
:param record: a DB user object that contains all fields;
'''
kwargs = {
'balance': record.balance,
'rate': record.rate,
'credit': record.credit,
'last_bill': record.last_bill,
'status': record.status,
'status_reason': record.status_reason,
'created_at': record.created_at,
'updated_at': record.updated_at,
'deleted_at': record.deleted_at,
}
return cls(record.id, record.policy_id, **kwargs)
@classmethod
def load(cls, context, user_id=None, user=None, show_deleted=False,
project_safe=True):
'''Retrieve a user from database.'''
if user is None:
user = db_api.user_get(context, user_id,
show_deleted=show_deleted,
project_safe=project_safe)
if user is None:
raise exception.UserNotFound(user=user_id)
return cls._from_db_record(context, user)
@classmethod
def load_all(cls, context, show_deleted=False, limit=None,
marker=None, sort_keys=None, sort_dir=None,
filters=None, project_safe=True):
'''Retrieve all users from database.'''
records = db_api.user_get_all(context, show_deleted=show_deleted,
limit=limit, marker=marker,
sort_keys=sort_keys, sort_dir=sort_dir,
filters=filters,
project_safe=project_safe)
return [cls._from_db_record(context, record) for record in records]
def to_dict(self):
user_dict = {
'id': self.id,
'policy_id': self.policy_id,
'balance': self.balance,
'rate': self.rate,
'credit': self.credit,
'last_bill': utils.format_time(self.last_bill),
'status': self.status,
'status_reason': self.status_reason,
'created_at': utils.format_time(self.created_at),
'updated_at': utils.format_time(self.updated_at),
'deleted_at': utils.format_time(self.deleted_at),
}
return user_dict
def set_status(self, context, status, reason=None):
'''Set status of the user.'''
self.status = status
values = {'status': status}
if reason:
self.status_reason = reason
values['status_reason'] = reason
db_api.user_update(context, self.id, values)
def do_delete(self, context):
db_api.user_delete(context, self.id)
return True
def do_bill(self, context, user, update=False, bilean_controller=None):
now = timeutils.utcnow()
last_bill = user['last_bill']
if not last_bill:
LOG.error(_("Last bill info not found"))
return
total_seconds = (now - last_bill).total_seconds()
if total_seconds < 0:
LOG.error(_("Now time is earlier than last bill!"))
return
usage = user['rate'] / 3600.0 * total_seconds
new_balance = user['balance'] - usage
if not update:
user['balance'] = new_balance
return user
else:
update_values = {}
update_values['balance'] = new_balance
update_values['last_bill'] = now
if new_balance < 0:
if bilean_controller:
bilean_controller.do_freeze_action(context, user['id'])
update_values['status'] = 'freeze'
update_values['status_reason'] = 'balance overdraft'
LOG.info(_("Balance of user %s overdraft, change user's status to "
"'freeze'") % user['id'])
new_user = update_user(context, user['id'], update_values)
events.generate_events(context,
user['id'],
'charge',
time_length=total_seconds)
return new_user
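The charge computed by `do_bill` is simply the hourly rate prorated over the seconds since the last bill: `usage = rate / 3600 * elapsed_seconds`, with an overdraft triggering the freeze path. A pure-function sketch of that arithmetic (names are illustrative, not this module's API):

```python
from datetime import datetime

def bill(balance, rate, last_bill, now):
    """Charge for the time elapsed since the last bill.

    rate is per hour, so usage = rate / 3600 * elapsed_seconds.
    Returns (new_balance, frozen) where frozen means an overdraft.
    """
    elapsed = (now - last_bill).total_seconds()
    if elapsed < 0:
        raise ValueError("now is earlier than last_bill")
    usage = rate / 3600.0 * elapsed
    new_balance = balance - usage
    return new_balance, new_balance < 0

new_balance, frozen = bill(
    balance=5.0, rate=2.0,
    last_bill=datetime(2015, 12, 24, 0, 0, 0),
    now=datetime(2015, 12, 24, 2, 0, 0))
print(new_balance, frozen)
```

Two hours at a rate of 2 per hour costs 4, leaving a balance of 1 and no freeze.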


@ -0,0 +1,74 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from bilean.common.i18n import _LE
from bilean.rpc import client as rpc_client
from oslo_log import log as logging
LOG = logging.getLogger(__name__)
class Action(object):
def __init__(self, cnxt, action, data):
self.rpc_client = rpc_client.EngineClient()
self.cnxt = cnxt
self.action = action
self.data = data
def execute(self):
"""Wrapper of action execution."""
action_name = self.action.lower()
method_name = "do_" + action_name
method = getattr(self, method_name, None)
if method is None:
LOG.error(_LE('Unsupported action: %s.') % self.action)
return None
return method()
def do_create(self):
return NotImplemented
def do_update(self):
return NotImplemented
def do_delete(self):
return NotImplemented
class ResourceAction(Action):
"""Notification controller for Resources."""
def do_create(self):
"""Create a new resource"""
return self.rpc_client.resource_create(self.cnxt, self.data)
def do_update(self):
"""Update a resource"""
return self.rpc_client.resource_update(self.cnxt, self.data)
def do_delete(self):
"""Delete a resource"""
return self.rpc_client.resource_delete(self.cnxt, self.data)
class UserAction(Action):
"""Notification controller for Users."""
def do_create(self):
"""Create a new user"""
return self.rpc_client.user_create(self.cnxt, user_id=self.data)
def do_delete(self):
"""Delete a user"""
return self.rpc_client.delete_user(self.cnxt, user_id=self.data)


@ -0,0 +1,278 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import fnmatch
import jsonpath_rw
import os
import six
import yaml
from bilean.common.i18n import _
from oslo_config import cfg
from oslo_log import log as logging
resource_definition_opts = [
cfg.StrOpt('definitions_cfg_file',
default="resource_definitions.yaml",
help="Configuration file for resource definitions."
),
cfg.BoolOpt('drop_unmatched_notifications',
default=True,
help='Drop notifications if no resource definition matches. '
'(Otherwise, we convert them with just the default traits)'),
]
resource_group = cfg.OptGroup('resource_definition')
cfg.CONF.register_group(resource_group)
cfg.CONF.register_opts(resource_definition_opts, group=resource_group)
LOG = logging.getLogger(__name__)
def get_config_file():
config_file = cfg.CONF.resource_definition.definitions_cfg_file
if not os.path.exists(config_file):
config_file = cfg.CONF.find_file(config_file)
return config_file
def setup_resources():
"""Setup the resource definitions from yaml config file."""
config_file = get_config_file()
if config_file is not None:
LOG.debug(_("Resource Definitions configuration file: %s") %
config_file)
with open(config_file) as cf:
config = cf.read()
try:
resources_config = yaml.safe_load(config)
except yaml.YAMLError as err:
if hasattr(err, 'problem_mark'):
mark = err.problem_mark
errmg = (_("Invalid YAML syntax in Resource Definitions "
"file %(file)s at line: %(line)s, column: "
"%(column)s.") % dict(file=config_file,
line=mark.line + 1,
column=mark.column + 1))
else:
errmg = (_("YAML error reading Resource Definitions file "
"%(file)s") % dict(file=config_file))
LOG.error(errmg)
raise
else:
LOG.debug(_("No Resource Definitions configuration file found!"
" Using default config."))
resources_config = []
LOG.info(_("Resource Definitions: %s"), resources_config)
allow_drop = cfg.CONF.resource_definition.drop_unmatched_notifications
return NotificationResourcesConverter(resources_config,
add_catchall=not allow_drop)
class NotificationResourcesConverter(object):
"""Notification Resource Converter."""
def __init__(self, resources_config, add_catchall=True):
self.definitions = [
EventDefinition(event_def)
for event_def in reversed(resources_config)]
if add_catchall and not any(d.is_catchall for d in self.definitions):
event_def = dict(event_type='*', resources={})
self.definitions.append(EventDefinition(event_def))
def to_resources(self, notification_body):
event_type = notification_body['event_type']
edef = None
for d in self.definitions:
if d.match_type(event_type):
edef = d
break
if edef is None:
msg = (_('Dropping Notification %(type)s')
% dict(type=event_type))
if cfg.CONF.resource_definition.drop_unmatched_notifications:
LOG.debug(msg)
else:
# If drop_unmatched_notifications is False, this should
# never happen. (mdragon)
LOG.error(msg)
return None
return edef.to_resources(notification_body)
class EventDefinition(object):
def __init__(self, definition_cfg):
self._included_types = []
self._excluded_types = []
self.cfg = definition_cfg
try:
event_type = definition_cfg['event_type']
self.resources = [ResourceDefinition(resource_def)
for resource_def in definition_cfg['resources']]
except KeyError as err:
raise EventDefinitionException(
_("Required field %s not specified") % err.args[0], self.cfg)
if isinstance(event_type, six.string_types):
event_type = [event_type]
for t in event_type:
if t.startswith('!'):
self._excluded_types.append(t[1:])
else:
self._included_types.append(t)
if self._excluded_types and not self._included_types:
self._included_types.append('*')
def included_type(self, event_type):
for t in self._included_types:
if fnmatch.fnmatch(event_type, t):
return True
return False
def excluded_type(self, event_type):
for t in self._excluded_types:
if fnmatch.fnmatch(event_type, t):
return True
return False
def match_type(self, event_type):
return (self.included_type(event_type)
and not self.excluded_type(event_type))
@property
def is_catchall(self):
return '*' in self._included_types and not self._excluded_types
def to_resources(self, notification_body):
resources = []
for resource in self.resources:
resources.append(resource.to_resource(notification_body))
return resources
class ResourceDefinition(object):
DEFAULT_TRAITS = dict(
user_id=dict(type='string', fields='payload.tenant_id'),
)
def __init__(self, definition_cfg):
self.traits = dict()
try:
self.resource_type = definition_cfg['resource_type']
traits = definition_cfg['traits']
except KeyError as err:
raise EventDefinitionException(
_("Required field %s not specified") % err.args[0], self.cfg)
for trait_name in self.DEFAULT_TRAITS:
self.traits[trait_name] = TraitDefinition(
trait_name,
self.DEFAULT_TRAITS[trait_name])
for trait_name in traits:
self.traits[trait_name] = TraitDefinition(
trait_name,
traits[trait_name])
def to_resource(self, notification_body):
traits = (self.traits[t].to_trait(notification_body)
for t in self.traits)
# Only accept non-None value traits ...
traits = [trait for trait in traits if trait is not None]
resource = {"resource_type": self.resource_type}
for trait in traits:
resource.update(trait)
return resource
class TraitDefinition(object):
def __init__(self, name, trait_cfg):
self.cfg = trait_cfg
self.name = name
type_name = trait_cfg.get('type', 'string')
if 'fields' not in trait_cfg:
raise EventDefinitionException(
_("Required field in trait definition not specified: "
"'%s'") % 'fields',
self.cfg)
fields = trait_cfg['fields']
if not isinstance(fields, six.string_types):
# NOTE(mdragon): if not a string, we assume a list.
if len(fields) == 1:
fields = fields[0]
else:
fields = '|'.join('(%s)' % path for path in fields)
try:
self.fields = jsonpath_rw.parse(fields)
except Exception as e:
raise EventDefinitionException(
_("Parse error in JSONPath specification "
"'%(jsonpath)s' for %(trait)s: %(err)s")
% dict(jsonpath=fields, trait=name, err=e), self.cfg)
self.trait_type = type_name
if self.trait_type is None:
raise EventDefinitionException(
_("Invalid trait type '%(type)s' for trait %(trait)s")
% dict(type=type_name, trait=name), self.cfg)
def to_trait(self, notification_body):
values = [match for match in self.fields.find(notification_body)
if match.value is not None]
value = values[0].value if values else None
if value is None:
return None
if self.trait_type != 'string' and value == '':
return None
if self.trait_type is "int":
value = int(value)
elif self.trait_type is "float":
value = float(value)
elif self.trait_type is "string":
value = str(value)
return {self.name: value}
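`TraitDefinition` pulls each trait out of the notification body with a JSONPath lookup, then applies the declared type. The same behavior can be sketched without `jsonpath_rw` using a simplified dotted-path walk (this helper is a stand-in for illustration only; the real code supports full JSONPath expressions):

```python
def find_trait(body, path, trait_type='string'):
    """Simplified stand-in for TraitDefinition.to_trait: walk a dotted
    path through nested dicts and apply the declared trait type."""
    value = body
    for key in path.split('.'):
        if not isinstance(value, dict) or key not in value:
            return None
        value = value[key]
    if value is None or (trait_type != 'string' and value == ''):
        return None
    cast = {'int': int, 'float': float, 'string': str}[trait_type]
    return cast(value)

# A hypothetical notification body.
notification = {'payload': {'tenant_id': 'abc123', 'size': '40'}}
print(find_trait(notification, 'payload.tenant_id'))    # abc123
print(find_trait(notification, 'payload.size', 'int'))  # 40
```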
class EventDefinitionException(Exception):
def __init__(self, message, definition_cfg):
super(EventDefinitionException, self).__init__(message)
self.message = message
self.definition_cfg = definition_cfg
def __str__(self):
return '%s %s: %s' % (self.__class__.__name__,
self.definition_cfg, self.message)
def list_opts():
yield resource_group.name, resource_definition_opts
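`EventDefinition` matches event types with shell-style wildcards via `fnmatch`, where a leading `!` excludes a pattern and an exclusions-only list implies an implicit `'*'` include. A condensed sketch of that matching logic (the `match_type` helper is illustrative):

```python
import fnmatch

def match_type(event_type, patterns):
    """Mirror of EventDefinition matching: '!' prefixes exclude,
    plain patterns include; exclusions-only implies include '*'."""
    included = [p for p in patterns if not p.startswith('!')]
    excluded = [p[1:] for p in patterns if p.startswith('!')]
    if excluded and not included:
        included = ['*']
    return (any(fnmatch.fnmatch(event_type, p) for p in included)
            and not any(fnmatch.fnmatch(event_type, p) for p in excluded))

patterns = ['compute.instance.*', '!compute.instance.exists']
print(match_type('compute.instance.create.end', patterns))  # True
print(match_type('compute.instance.exists', patterns))      # False
```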


@ -0,0 +1,94 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from bilean.common import context
from bilean.common.i18n import _, _LE
from bilean.notification import converter
from bilean.notification import action as notify_action
from oslo_log import log as logging
import oslo_messaging
LOG = logging.getLogger(__name__)
KEYSTONE_EVENTS = ['identity.project.created',
'identity.project.deleted']
class EventsNotificationEndpoint(object):
def __init__(self):
self.resource_converter = converter.setup_resources()
self.cnxt = context.get_admin_context()
super(EventsNotificationEndpoint, self).__init__()
def info(self, ctxt, publisher_id, event_type, payload, metadata):
"""Convert message to Billing Event.
:param ctxt: oslo_messaging context
:param publisher_id: publisher of the notification
:param event_type: type of notification
:param payload: notification payload
:param metadata: metadata about the notification
"""
notification = dict(event_type=event_type,
payload=payload,
metadata=metadata)
LOG.debug(_("Receive notification: %s") % notification)
if event_type in KEYSTONE_EVENTS:
return self.process_identity_notification(notification)
return self.process_resource_notification(notification)
def process_identity_notification(self, notification):
"""Convert notifcation to user."""
user_id = notification['payload'].get('resource_info', None)
if not user_id:
LOG.error(_LE("Cannot retrieve user_id from notification: %s") %
notification)
return oslo_messaging.NotificationResult.HANDLED
action = self._get_action(notification['event_type'])
if action:
act = notify_action.UserAction(self.cnxt, action, user_id)
LOG.info(_("Notify engine to %(action)s user: %(user)s") %
{'action': action, 'user': user})
#act.execute()
return oslo_messaging.NotificationResult.HANDLED
def process_resource_notification(self, notification):
"""Convert notifcation to resources."""
resources = self.resource_converter.to_resources(notification)
if not resources:
LOG.info('Ignoring notification: no matching resource '
'definitions found.')
return oslo_messaging.NotificationResult.HANDLED
action = self._get_action(notification['event_type'])
if action:
for resource in resources:
act = notify_action.ResourceAction(
self.cnxt, action, resource)
LOG.info(_("Notify engine to %(action)s resource: "
"%(resource)s") % {'action': action,
'resource': resource})
#act.execute()
return oslo_messaging.NotificationResult.HANDLED
def _get_action(self, event_type):
available_actions = ['create', 'delete', 'update']
for action in available_actions:
if action in event_type:
return action
LOG.info(_("Can not get action info in event_type: %s") % event_type)
return None
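`_get_action` infers the CRUD action purely by substring match on the event type, which works because OpenStack notification types embed the verb (e.g. `identity.project.created`). A standalone sketch of the heuristic:

```python
def get_action(event_type):
    """Same substring heuristic as _get_action above."""
    for action in ('create', 'delete', 'update'):
        if action in event_type:
            return action
    return None

print(get_action('identity.project.created'))   # create
print(get_action('compute.instance.resize.end'))  # None
```

Note the heuristic matches `'create'` inside `'created'`, but returns None for verbs it does not know, such as `resize`.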


@ -0,0 +1,53 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_log import log as logging
import oslo_messaging
from oslo_service import service
from bilean.common.i18n import _
from bilean.common import messaging as bilean_messaging
from bilean.common import params
from bilean.notification import endpoint
LOG = logging.getLogger(__name__)
class NotificationService(service.Service):
def __init__(self, *args, **kwargs):
super(NotificationService, self).__init__(*args, **kwargs)
self.targets, self.listeners = [], []
self.transport = None
self.group_id = None
self.endpoints = [endpoint.EventsNotificationEndpoint()]
def start(self):
super(NotificationService, self).start()
self.transport = bilean_messaging.get_transport()
self.targets.append(
oslo_messaging.Target(topic=params.NOTIFICATION_TOPICS))
listener = bilean_messaging.get_notification_listener(
self.transport, self.targets, self.endpoints)
LOG.info(_("Starting listener on topic: %s"),
params.NOTIFICATION_TOPICS)
listener.start()
self.listeners.append(listener)
# Add a dummy thread to have wait() working
self.tg.add_timer(604800, lambda: None)
def stop(self):
for listener in self.listeners:
listener.stop()
super(NotificationService, self).stop()

bilean/notifier.py Normal file

@ -0,0 +1,49 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
from oslo_log import log as logging
import oslo_messaging
notifier_opts = [
cfg.StrOpt('default_publisher_id', default="billing.localhost",
help='Default publisher_id for outgoing notifications.'),
]
CONF = cfg.CONF
CONF.register_opts(notifier_opts)
LOG = logging.getLogger(__name__)
def get_transport():
return oslo_messaging.get_transport(CONF)
class Notifier(object):
"""Uses a notification strategy to send out messages about events."""
def __init__(self):
publisher_id = CONF.default_publisher_id
self._transport = get_transport()
self._notifier = oslo_messaging.Notifier(
self._transport, publisher_id=publisher_id)
def warn(self, event_type, payload):
self._notifier.warn({}, event_type, payload)
def info(self, event_type, payload):
self._notifier.info({}, event_type, payload)
def error(self, event_type, payload):
self._notifier.error({}, event_type, payload)
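All three severity helpers above delegate to the same oslo.messaging notifier with an empty request context. A transport-free sketch of that calling pattern, using a hypothetical in-process stand-in (`FakeMessagingNotifier` is not part of bilean or oslo.messaging):

```python
# Stand-in for oslo_messaging.Notifier that records what Notifier.info()
# and Notifier.warn() would hand to the messaging layer.
class FakeMessagingNotifier(object):
    def __init__(self):
        self.sent = []

    def info(self, ctxt, event_type, payload):
        self.sent.append(('info', event_type, payload))

    def warn(self, ctxt, event_type, payload):
        self.sent.append(('warn', event_type, payload))


notifier = FakeMessagingNotifier()
# Mirrors Notifier.info(): an empty context, an event type and a payload
# (the event type and payload here are made-up illustrative values).
notifier.info({}, 'billing.user.balance_low', {'user_id': 'fake-user'})
assert notifier.sent == [
    ('info', 'billing.user.balance_low', {'user_id': 'fake-user'})]
```

A real deployment would route these through the configured notification driver under the `default_publisher_id` shown above.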

bilean/rpc/__init__.py Normal file

bilean/rpc/client.py Normal file

@@ -0,0 +1,124 @@
#
# Copyright 2012, Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Client side of the bilean engine RPC API.
"""
from bilean.common import messaging
from bilean.common import params
from oslo_config import cfg
class EngineClient(object):
'''Client side of the bilean engine rpc API.'''
BASE_RPC_API_VERSION = '1.0'
def __init__(self):
self._client = messaging.get_rpc_client(
topic=params.ENGINE_TOPIC,
server=cfg.CONF.host,
version=self.BASE_RPC_API_VERSION)
@staticmethod
def make_msg(method, **kwargs):
return method, kwargs
def call(self, ctxt, msg, version=None):
method, kwargs = msg
if version is not None:
client = self._client.prepare(version=version)
else:
client = self._client
return client.call(ctxt, method, **kwargs)
def cast(self, ctxt, msg, version=None):
method, kwargs = msg
if version is not None:
client = self._client.prepare(version=version)
else:
client = self._client
return client.cast(ctxt, method, **kwargs)
def user_list(self, ctxt):
return self.call(ctxt, self.make_msg('user_list'))
def user_get(self, ctxt, user_id):
return self.call(ctxt, self.make_msg('user_get',
user_id=user_id))
def user_create(self, ctxt, user_id, balance=0, credit=0,
status='init'):
values = {'id': user_id,
'balance': balance,
'credit': credit,
'status': status}
return self.call(ctxt, self.make_msg('user_create', values=values))
def user_update(self, ctxt, user_id, values):
return self.call(ctxt, self.make_msg('user_update',
user_id=user_id,
values=values))
def user_delete(self, ctxt, user_id):
return self.call(ctxt, self.make_msg('user_delete',
user_id=user_id))
def rule_list(self, ctxt):
return self.call(ctxt, self.make_msg('rule_list'))
def rule_get(self, ctxt, rule_id):
return self.call(ctxt, self.make_msg('rule_get',
rule_id=rule_id))
def rule_create(self, ctxt, name, spec, metadata):
return self.call(ctxt, self.make_msg('rule_create', name=name,
spec=spec, metadata=metadata))
def rule_update(self, ctxt, values):
return self.call(ctxt, self.make_msg('rule_update',
values=values))
def rule_delete(self, ctxt, rule_id):
return self.call(ctxt, self.make_msg('rule_delete',
rule_id=rule_id))
def resource_list(self, ctxt):
return self.call(ctxt, self.make_msg('resource_list'))
def resource_get(self, ctxt, resource_id):
return self.call(ctxt, self.make_msg('resource_get',
resource_id=resource_id))
    def resource_create(self, ctxt, resource):
        return self.call(ctxt, self.make_msg('resource_create',
                                             resource=resource))
def resource_update(self, ctxt, resource):
return self.call(ctxt, self.make_msg('resource_update',
resource=resource))
def resource_delete(self, ctxt, resource):
return self.call(ctxt, self.make_msg('resource_delete',
resource=resource))
    def event_list(self, ctxt, filters=None):
        # Guard against the default: **None would raise a TypeError.
        return self.call(ctxt, self.make_msg('event_list',
                                             **(filters or {})))
def validate_creation(self, cnxt, resources):
return self.call(cnxt, self.make_msg('validate_creation',
resources=resources))
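Every RPC wrapper above funnels through the same `make_msg` convention: a `(method, kwargs)` tuple unpacked by `call()`/`cast()`. A minimal sketch of that dispatch, with a hypothetical in-process client standing in for the prepared oslo.messaging RPC client:

```python
def make_msg(method, **kwargs):
    # Same convention as EngineClient.make_msg: a (method, kwargs) tuple.
    return method, kwargs


class FakeRpcClient(object):
    """Hypothetical in-process stand-in for the engine RPC client."""
    def call(self, ctxt, method, **kwargs):
        # A real client would serialize this onto the engine topic and
        # block for the reply; here we just echo what would be sent.
        return {'method': method, 'kwargs': kwargs}


client = FakeRpcClient()
method, kwargs = make_msg('user_get', user_id='fake-user-id')
result = client.call({}, method, **kwargs)
assert result == {'method': 'user_get',
                  'kwargs': {'user_id': 'fake-user-id'}}
```

The tuple indirection lets every wrapper stay a one-liner while `call()` alone decides whether a version-pinned client is needed.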

bilean/rules/__init__.py Normal file

bilean/rules/base.py Normal file

@@ -0,0 +1,206 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import copy
from oslo_context import context as oslo_context
from oslo_log import log as logging
from oslo_utils import timeutils
from bilean.common import context
from bilean.common import exception
from bilean.common.i18n import _
from bilean.common import schema
from bilean.common import utils
from bilean.db import api as db_api
from bilean.engine import environment
LOG = logging.getLogger(__name__)
class Rule(object):
'''Base class for rules.'''
KEYS = (
TYPE, VERSION, PROPERTIES,
) = (
'type', 'version', 'properties',
)
spec_schema = {
TYPE: schema.String(
_('Name of the rule type.'),
required=True,
),
VERSION: schema.String(
_('Version number of the rule type.'),
required=True,
),
PROPERTIES: schema.Map(
_('Properties for the rule.'),
required=True,
)
}
properties_schema = {}
def __new__(cls, name, spec, **kwargs):
"""Create a new rule of the appropriate class.
:param name: The name for the rule.
:param spec: A dictionary containing the spec for the rule.
:param kwargs: Keyword arguments for rule creation.
:returns: An instance of a specific sub-class of Rule.
"""
type_name, version = schema.get_spec_version(spec)
if cls != Rule:
RuleClass = cls
else:
RuleClass = environment.global_env().get_rule(type_name)
return super(Rule, cls).__new__(RuleClass)
def __init__(self, name, spec, **kwargs):
"""Initialize a rule instance.
:param name: A string that specifies the name for the rule.
:param spec: A dictionary containing the detailed rule spec.
:param kwargs: Keyword arguments for initializing the rule.
:returns: An instance of a specific sub-class of Rule.
"""
type_name, version = schema.get_spec_version(spec)
self.name = name
self.spec = spec
self.id = kwargs.get('id', None)
self.type = kwargs.get('type', '%s-%s' % (type_name, version))
self.metadata = kwargs.get('metadata', {})
self.created_at = kwargs.get('created_at', None)
self.updated_at = kwargs.get('updated_at', None)
self.deleted_at = kwargs.get('deleted_at', None)
self.spec_data = schema.Spec(self.spec_schema, self.spec)
self.properties = schema.Spec(self.properties_schema,
self.spec.get(self.PROPERTIES, {}))
@classmethod
def from_db_record(cls, record):
        '''Construct a rule object from a database record.
        :param record: a DB Rule record that contains all required fields.
        '''
kwargs = {
'id': record.id,
'type': record.type,
'metadata': record.meta_data,
'created_at': record.created_at,
'updated_at': record.updated_at,
'deleted_at': record.deleted_at,
}
return cls(record.name, record.spec, **kwargs)
@classmethod
def load(cls, ctx, rule_id=None, rule=None):
'''Retrieve a rule object from database.'''
if rule is None:
rule = db_api.rule_get(ctx, rule_id)
if rule is None:
raise exception.RuleNotFound(rule=rule_id)
return cls.from_db_record(rule)
@classmethod
def load_all(cls, ctx):
'''Retrieve all rules from database.'''
records = db_api.rule_get_all(ctx)
for record in records:
yield cls.from_db_record(record)
@classmethod
def delete(cls, ctx, rule_id):
db_api.rule_delete(ctx, rule_id)
def store(self, ctx):
'''Store the rule into database and return its ID.'''
timestamp = timeutils.utcnow()
values = {
'name': self.name,
'type': self.type,
'spec': self.spec,
'meta_data': self.metadata,
}
if self.id:
self.updated_at = timestamp
values['updated_at'] = timestamp
db_api.rule_update(ctx, self.id, values)
else:
self.created_at = timestamp
values['created_at'] = timestamp
rule = db_api.rule_create(ctx, values)
self.id = rule.id
return self.id
def validate(self):
'''Validate the schema and the data provided.'''
# general validation
self.spec_data.validate()
self.properties.validate()
@classmethod
def get_schema(cls):
return dict((name, dict(schema))
for name, schema in cls.properties_schema.items())
def do_get_price(self, resource):
'''For subclass to override.'''
return NotImplemented
def do_delete(self, obj):
'''For subclass to override.'''
return NotImplemented
def do_update(self, obj, new_rule, **params):
'''For subclass to override.'''
return NotImplemented
def to_dict(self):
rule_dict = {
'id': self.id,
'name': self.name,
'type': self.type,
'spec': self.spec,
'metadata': self.metadata,
'created_at': utils.format_time(self.created_at),
'updated_at': utils.format_time(self.updated_at),
'deleted_at': utils.format_time(self.deleted_at),
}
return rule_dict
@classmethod
    def from_dict(cls, **kwargs):
        # Pop the positional arguments; the remaining keys ('type', 'id',
        # 'metadata', ...) are consumed by __init__ as keyword arguments.
        name = kwargs.pop('name')
        spec = kwargs.pop('spec')
        return cls(name, spec, **kwargs)
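The spec handled by the base class always carries the three keys declared in `spec_schema`. A sketch of the type-name derivation performed in `__init__` (the spec values below are made up for illustration, not real bilean rule types):

```python
# Hypothetical spec in the {type, version, properties} shape that
# spec_schema requires.
spec = {
    'type': 'os.nova.server',
    'version': '1.0',
    'properties': {'unit': 'per_hour'},
}

# Mirrors Rule.__init__: when no explicit 'type' kwarg is given, the rule
# type defaults to '<type_name>-<version>'.
type_name, version = spec['type'], spec['version']
rule_type = '%s-%s' % (type_name, version)
assert rule_type == 'os.nova.server-1.0'
```

`__new__` uses the same `type_name` to look up the concrete subclass in the global environment, so the spec's `type` field effectively selects the implementation.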

@@ -0,0 +1,93 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import base64
import copy
from oslo_log import log as logging
from oslo_utils import encodeutils
import six
from bilean.common import exception
from bilean.common.i18n import _
from bilean.common import schema
from bilean.common import utils
from bilean.rules import base
LOG = logging.getLogger(__name__)
class ServerRule(base.Rule):
'''Rule for an OpenStack Nova server.'''
KEYS = (
PRICE_MAPPING, UNIT,
) = (
'price_mapping', 'unit',
)
PM_KEYS = (
FLAVOR, PRICE,
) = (
'flavor', 'price',
)
AVAILABLE_UNIT = (
PER_HOUR, PER_SEC,
) = (
'per_hour', 'per_sec',
)
properties_schema = {
PRICE_MAPPING: schema.List(
            _('A list specifying the price of each flavor.'),
            schema=schema.Map(
                _('A map specifying the price of a flavor.'),
schema={
FLAVOR: schema.String(
_('Flavor id to set price.'),
),
PRICE: schema.Integer(
_('Price of this flavor.'),
),
}
),
required=True,
updatable=True,
),
UNIT: schema.String(
_('Unit of price, per_hour or per_sec.'),
default='per_hour',
),
}
def do_get_price(self, resource):
        '''Get the per-second price of a resource.
        If no exact price is found, the rule for the server's flavor has
        not been set; 0 is returned as the price, notifying the admin to
        set it.
        :param resource: Resource object to find the price for.
        '''
flavor = resource.properties.get('flavor', None)
if not flavor:
            raise exception.Error(msg='Flavor should be provided to get '
                                      'the price of a server.')
p_mapping = self.properties.get(self.PRICE_MAPPING)
price = 0
for pm in p_mapping:
if flavor == pm.get(self.FLAVOR):
price = pm.get(self.PRICE)
if self.PER_HOUR == self.properties.get(self.UNIT) and price > 0:
price = price * 1.0 / 3600
return price
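The lookup above reduces to a linear scan over `price_mapping` plus an optional per-hour to per-second conversion. A standalone sketch of the same arithmetic (the flavor ids and prices are made-up values, not defaults shipped with bilean):

```python
PER_HOUR = 'per_hour'

# Hypothetical rule properties in the shape properties_schema describes.
properties = {
    'price_mapping': [
        {'flavor': 'flavor-small', 'price': 3600},
        {'flavor': 'flavor-large', 'price': 7200},
    ],
    'unit': PER_HOUR,
}


def get_price(properties, flavor):
    # Linear scan, as in ServerRule.do_get_price: unmatched flavors
    # fall through with a price of 0.
    price = 0
    for pm in properties['price_mapping']:
        if flavor == pm['flavor']:
            price = pm['price']
    # Per-hour prices are converted to per-second on the way out.
    if properties['unit'] == PER_HOUR and price > 0:
        price = price * 1.0 / 3600
    return price


assert get_price(properties, 'flavor-small') == 1.0
assert get_price(properties, 'unknown-flavor') == 0
```

Returning 0 for an unknown flavor is deliberate: a zero price is the signal that the admin still has to add a mapping for that flavor.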

bilean/version.py Normal file

@@ -0,0 +1,17 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import pbr.version
version_info = pbr.version.VersionInfo('bilean')

bin/bilean-api Executable file

@@ -0,0 +1,69 @@
#!/usr/bin/env python
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Bilean API Server. An OpenStack ReST API to Bilean
"""
import eventlet
eventlet.monkey_patch(os=False)
import os
import sys
possible_topdir = os.path.normpath(os.path.join(os.path.abspath(sys.argv[0]),
os.pardir,
os.pardir))
if os.path.exists(os.path.join(possible_topdir, 'bilean', '__init__.py')):
sys.path.insert(0, possible_topdir)
from bilean.common import config
from bilean.common.i18n import _LI
from bilean.common import messaging
from bilean.common import wsgi
from bilean import version
from oslo_config import cfg
from oslo_i18n import _lazy
from oslo_log import log as logging
from oslo_service import systemd
import six
_lazy.enable_lazy()
LOG = logging.getLogger('bilean.api')
if __name__ == "__main__":
try:
logging.register_options(cfg.CONF)
cfg.CONF(project='bilean', prog='bilean-api',
version=version.version_info.version_string())
logging.setup(cfg.CONF, 'bilean-api')
messaging.setup()
app = config.load_paste_app()
port = cfg.CONF.bilean_api.bind_port
host = cfg.CONF.bilean_api.bind_host
LOG.info(_LI('Starting Bilean ReST API on %(host)s:%(port)s'),
{'host': host, 'port': port})
server = wsgi.Server('bilean-api', cfg.CONF.bilean_api)
server.start(app, default_port=port)
systemd.notify_once()
server.wait()
except RuntimeError as ex:
sys.exit("ERROR: %s" % six.text_type(ex))

bin/bilean-engine Executable file

@@ -0,0 +1,63 @@
#!/usr/bin/env python
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Bilean Engine Server. This does the work of actually implementing the API
calls made by the user. Normal communication is done via the bilean API,
which then calls into this engine.
"""
import eventlet
eventlet.monkey_patch()
import os
import sys
# If ../bilean/__init__.py exists, add ../ to Python search path, so that
# it will override what happens to be installed in /usr/(local/)lib/python...
POSSIBLE_TOPDIR = os.path.normpath(os.path.join(os.path.abspath(sys.argv[0]),
os.pardir,
os.pardir))
if os.path.exists(os.path.join(POSSIBLE_TOPDIR, 'bilean', '__init__.py')):
sys.path.insert(0, POSSIBLE_TOPDIR)
from bilean.common import messaging
from bilean.common import params
from oslo_config import cfg
from oslo_i18n import _lazy
from oslo_log import log as logging
from oslo_service import service
_lazy.enable_lazy()
LOG = logging.getLogger('bilean.engine')
if __name__ == '__main__':
logging.register_options(cfg.CONF)
cfg.CONF(project='bilean', prog='bilean-engine')
logging.setup(cfg.CONF, 'bilean-engine')
logging.set_defaults()
messaging.setup()
from bilean.engine import service as engine
srv = engine.EngineService(cfg.CONF.host, params.ENGINE_TOPIC)
launcher = service.launch(cfg.CONF, srv,
workers=cfg.CONF.num_engine_workers)
# We create the periodic tasks here, which means they are created
# only in the parent process when num_engine_workers>1 is specified
srv.create_bilean_tasks()
launcher.wait()

bin/bilean-manage Executable file

@@ -0,0 +1,27 @@
#!/usr/bin/env python
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import sys
POSSIBLE_TOPDIR = os.path.normpath(os.path.join(os.path.abspath(sys.argv[0]),
os.pardir,
os.pardir))
if os.path.exists(os.path.join(POSSIBLE_TOPDIR, 'bilean', '__init__.py')):
sys.path.insert(0, POSSIBLE_TOPDIR)
from bilean.cmd import manage
manage.main()

bin/bilean-notification Executable file

@@ -0,0 +1,52 @@
#!/usr/bin/env python
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import eventlet
eventlet.monkey_patch()
import os
import sys
# If ../bilean/__init__.py exists, add ../ to Python search path, so that
# it will override what happens to be installed in /usr/(local/)lib/python...
POSSIBLE_TOPDIR = os.path.normpath(os.path.join(os.path.abspath(sys.argv[0]),
os.pardir,
os.pardir))
if os.path.exists(os.path.join(POSSIBLE_TOPDIR, 'bilean', '__init__.py')):
sys.path.insert(0, POSSIBLE_TOPDIR)
from oslo_config import cfg
from oslo_i18n import _lazy
from oslo_log import log as logging
from oslo_service import service
from bilean.common import messaging
_lazy.enable_lazy()
LOG = logging.getLogger('bilean.notification')
if __name__ == "__main__":
logging.register_options(cfg.CONF)
    cfg.CONF(project='bilean', prog='bilean-notification')
logging.setup(cfg.CONF, 'bilean-notification')
logging.set_defaults()
messaging.setup()
from bilean.notification import notification
srv = notification.NotificationService()
launcher = service.launch(cfg.CONF, srv)
launcher.wait()

@@ -0,0 +1,4 @@
To generate the sample bilean.conf file, run the following
command from the top level of the bilean directory:
tox -egenconfig

etc/bilean/api-paste.ini Normal file

@@ -0,0 +1,31 @@
# bilean-api pipeline
[pipeline:bilean-api]
pipeline = request_id faultwrap ssl versionnegotiation authtoken context apiv1app
[app:apiv1app]
paste.app_factory = bilean.common.wsgi:app_factory
bilean.app_factory = bilean.api.openstack.v1:API
[filter:versionnegotiation]
paste.filter_factory = bilean.common.wsgi:filter_factory
bilean.filter_factory = bilean.api.openstack:version_negotiation_filter
[filter:faultwrap]
paste.filter_factory = bilean.common.wsgi:filter_factory
bilean.filter_factory = bilean.api.openstack:faultwrap_filter
[filter:context]
paste.filter_factory = bilean.common.context:ContextMiddleware_filter_factory
[filter:ssl]
paste.filter_factory = bilean.common.wsgi:filter_factory
bilean.filter_factory = bilean.api.openstack:sslmiddleware_filter
# Auth middleware that validates token against keystone
[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
# Middleware to set x-openstack-request-id in http response header
[filter:request_id]
paste.filter_factory = oslo_middleware.request_id:RequestId.factory

@@ -0,0 +1,977 @@
[DEFAULT]
#
# From bilean.api.middleware.ssl
#
# The HTTP Header that will be used to determine which the original request
# protocol scheme was, even if it was removed by an SSL terminator proxy.
# (string value)
# Deprecated group/name - [DEFAULT]/secure_proxy_ssl_header
#secure_proxy_ssl_header = X-Forwarded-Proto
#
# From bilean.common.config
#
# Name of the engine node. This can be an opaque identifier. It is not
# necessarily a hostname, FQDN, or IP address. (string value)
#host = kylin
#
# From bilean.common.config
#
# Seconds between running periodic tasks. (integer value)
#periodic_interval = 60
# Default region name used to get services endpoints. (string value)
#region_name_for_services = <None>
# Maximum raw byte size of data from web response. (integer value)
#max_response_size = 524288
# Number of bilean-engine processes to fork and run. (integer value)
#num_engine_workers = 4
#
# From bilean.common.wsgi
#
# Maximum raw byte size of JSON request body. Should be larger than
# max_template_size. (integer value)
#max_json_body_size = 1048576
#
# From oslo.log
#
# Print debugging output (set logging level to DEBUG instead of default INFO
# level). (boolean value)
#debug = false
# If set to false, will disable INFO logging level, making WARNING the default.
# (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#verbose = true
# The name of a logging configuration file. This file is appended to any
# existing logging configuration files. For details about logging configuration
# files, see the Python logging module documentation. Note that when logging
# configuration files are used then all logging configuration is set in the
# configuration file and other logging configuration options are ignored (for
# example, log_format). (string value)
# Deprecated group/name - [DEFAULT]/log_config
#log_config_append = <None>
# DEPRECATED. A logging.Formatter log message format string which may use any
# of the available logging.LogRecord attributes. This option is deprecated.
# Please use logging_context_format_string and logging_default_format_string
# instead. This option is ignored if log_config_append is set. (string value)
#log_format = <None>
# Format string for %%(asctime)s in log records. Default: %(default)s . This
# option is ignored if log_config_append is set. (string value)
#log_date_format = %Y-%m-%d %H:%M:%S
# (Optional) Name of log file to output to. If no default is set, logging will
# go to stdout. This option is ignored if log_config_append is set. (string
# value)
# Deprecated group/name - [DEFAULT]/logfile
#log_file = <None>
# (Optional) The base directory used for relative --log-file paths. This option
# is ignored if log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logdir
#log_dir = <None>
# (Optional) Uses logging handler designed to watch file system. When log file
# is moved or removed this handler will open a new log file with specified path
# instantaneously. It makes sense only if log-file option is specified and
# Linux platform is used. This option is ignored if log_config_append is set.
# (boolean value)
#watch_log_file = false
# Use syslog for logging. Existing syslog format is DEPRECATED and will be
# changed later to honor RFC5424. This option is ignored if log_config_append
# is set. (boolean value)
#use_syslog = false
# (Optional) Enables or disables syslog rfc5424 format for logging. If enabled,
# prefixes the MSG part of the syslog message with APP-NAME (RFC5424). The
# format without the APP-NAME is deprecated in Kilo, and will be removed in
# Mitaka, along with this option. This option is ignored if log_config_append
# is set. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#use_syslog_rfc_format = true
# Syslog facility to receive log lines. This option is ignored if
# log_config_append is set. (string value)
#syslog_log_facility = LOG_USER
# Log output to standard error. This option is ignored if log_config_append is
# set. (boolean value)
#use_stderr = true
# Format string to use for log messages with context. (string value)
#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
# Format string to use for log messages without context. (string value)
#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
# Data to append to log format when level is DEBUG. (string value)
#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d
# Prefix each line of exception output with this format. (string value)
#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s
# List of logger=LEVEL pairs. This option is ignored if log_config_append is
# set. (list value)
#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN
# Enables or disables publication of error events. (boolean value)
#publish_errors = false
# The format for an instance that is passed with the log message. (string
# value)
#instance_format = "[instance: %(uuid)s] "
# The format for an instance UUID that is passed with the log message. (string
# value)
#instance_uuid_format = "[instance: %(uuid)s] "
# Format string for user_identity field of the logging_context_format_string
# (string value)
#logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s
# Enables or disables fatal status of deprecations. (boolean value)
#fatal_deprecations = false
#
# From oslo.messaging
#
# Size of RPC connection pool. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_conn_pool_size
#rpc_conn_pool_size = 30
# ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or IP.
# The "host" option should point or resolve to this address. (string value)
#rpc_zmq_bind_address = *
# MatchMaker driver. (string value)
#rpc_zmq_matchmaker = redis
# Type of concurrency used. Either "native" or "eventlet" (string value)
#rpc_zmq_concurrency = eventlet
# Number of ZeroMQ contexts, defaults to 1. (integer value)
#rpc_zmq_contexts = 1
# Maximum number of ingress messages to locally buffer per topic. Default is
# unlimited. (integer value)
#rpc_zmq_topic_backlog = <None>
# Directory for holding IPC sockets. (string value)
#rpc_zmq_ipc_dir = /var/run/openstack
# Name of this node. Must be a valid hostname, FQDN, or IP address. Must match
# "host" option, if running Nova. (string value)
#rpc_zmq_host = localhost
# Seconds to wait before a cast expires (TTL). Only supported by impl_zmq.
# (integer value)
#rpc_cast_timeout = 30
# The default number of seconds that poll should wait. Poll raises timeout
# exception when timeout expired. (integer value)
#rpc_poll_timeout = 1
# Configures zmq-messaging to use proxy with non PUB/SUB patterns. (boolean
# value)
#direct_over_proxy = true
# Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy. (boolean
# value)
#use_pub_sub = true
# Minimal port number for random ports range. (port value)
# Minimum value: 1
# Maximum value: 65535
#rpc_zmq_min_port = 49152
# Maximal port number for random ports range. (integer value)
# Minimum value: 1
# Maximum value: 65536
#rpc_zmq_max_port = 65536
# Number of retries to find free port number before fail with ZMQBindError.
# (integer value)
#rpc_zmq_bind_port_retries = 100
# Host to locate redis. (string value)
#host = 127.0.0.1
# Use this port to connect to redis host. (port value)
# Minimum value: 1
# Maximum value: 65535
#port = 6379
# Password for Redis server (optional). (string value)
#password =
# Size of executor thread pool. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_thread_pool_size
#executor_thread_pool_size = 64
# The Drivers(s) to handle sending notifications. Possible values are
# messaging, messagingv2, routing, log, test, noop (multi valued)
# Deprecated group/name - [DEFAULT]/notification_driver
#driver =
# A URL representing the messaging driver to use for notifications. If not set,
# we fall back to the same configuration used for RPC. (string value)
# Deprecated group/name - [DEFAULT]/notification_transport_url
#transport_url = <None>
# AMQP topic used for OpenStack notifications. (list value)
# Deprecated group/name - [rpc_notifier2]/topics
# Deprecated group/name - [DEFAULT]/notification_topics
#topics = notifications
# Seconds to wait for a response from a call. (integer value)
#rpc_response_timeout = 60
# A URL representing the messaging driver to use and its full configuration. If
# not set, we fall back to the rpc_backend option and driver specific
# configuration. (string value)
#transport_url = <None>
# The messaging driver to use, defaults to rabbit. Other drivers include amqp
# and zmq. (string value)
#rpc_backend = rabbit
# The default exchange under which topics are scoped. May be overridden by an
# exchange name specified in the transport_url option. (string value)
#control_exchange = openstack
#
# From oslo.service.periodic_task
#
# Some periodic tasks can be run in a separate process. Should we run them
# here? (boolean value)
#run_external_periodic_tasks = true
#
# From oslo.service.service
#
# Enable eventlet backdoor. Acceptable values are 0, <port>, and
# <start>:<end>, where 0 results in listening on a random tcp port number;
# <port> results in listening on the specified port number (and not enabling
# backdoor if that port is in use); and <start>:<end> results in listening on
# the smallest unused port number within the specified range of port numbers.
# The chosen port is displayed in the service's log file. (string value)
#backdoor_port = <None>
# Enables or disables logging values of all registered options when starting a
# service (at DEBUG level). (boolean value)
#log_options = true
# Specify a timeout after which a gracefully shutdown server will exit. Zero
# value means endless wait. (integer value)
#graceful_shutdown_timeout = 60
[authentication]
#
# From bilean.common.config
#
# Complete public identity V3 API endpoint. (string value)
#auth_url =
# Bilean service user name (string value)
#service_username = bilean
# Password specified for the Bilean service user. (string value)
#service_password =
# Name of the service project. (string value)
#service_project_name = service
# Name of the domain for the service user. (string value)
#service_user_domain = Default
# Name of the domain for the service project. (string value)
#service_project_domain = Default
[bilean_api]
#
# From bilean.common.wsgi
#
# Address to bind the server. Useful when selecting a particular network
# interface. (IP address value)
#bind_host = 0.0.0.0
# The port on which the server will listen. (port value)
# Minimum value: 1
# Maximum value: 65535
#bind_port = 8770
# Number of backlog requests to configure the socket with. (integer value)
#backlog = 4096
# Location of the SSL certificate file to use for SSL mode. (string value)
#cert_file = <None>
# Location of the SSL key file to use for enabling SSL mode. (string value)
#key_file = <None>
# Number of workers for Bilean service. (integer value)
#workers = 0
# Maximum line size of message headers to be accepted. max_header_line may need
# to be increased when using large tokens (typically those generated by the
# Keystone v3 API with big service catalogs). (integer value)
#max_header_line = 16384
# The value for the socket option TCP_KEEPIDLE. This is the time in seconds
# that the connection must be idle before TCP starts sending keepalive probes.
# (integer value)
#tcp_keepidle = 600
[bilean_task]
#
# From bilean.engine.bilean_task
#
# The time zone of the job; defaults to utc. (string value)
#time_zone = utc
# Number of days in advance to notify the user before the user's balance is
# used up; defaults to 3 days. (integer value)
#prior_notify_time = 3
# Seconds after the designated run time that the job is still allowed to be
# run. (integer value)
#misfire_grace_time = 3600
# Allow bilean to store apscheduler jobs. (boolean value)
#store_ap_job = false
# The backend to use for the database. (string value)
#backend = sqlalchemy
# The SQLAlchemy connection string used to connect to the database (string
# value)
#connection = <None>
[clients]
#
# From bilean.common.config
#
# Type of endpoint in Identity service catalog to use for communication with
# the OpenStack service. (string value)
#endpoint_type = <None>
# Optional CA cert file to use in SSL connections. (string value)
#ca_file = <None>
# Optional PEM-formatted certificate chain file. (string value)
#cert_file = <None>
# Optional PEM-formatted file that contains the private key. (string value)
#key_file = <None>
# If set, then the server's certificate will not be verified. (boolean value)
#insecure = <None>
[database]
#
# From oslo.db
#
# The file name to use with SQLite. (string value)
# Deprecated group/name - [DEFAULT]/sqlite_db
#sqlite_db = oslo.sqlite
# If True, SQLite uses synchronous mode. (boolean value)
# Deprecated group/name - [DEFAULT]/sqlite_synchronous
#sqlite_synchronous = true
# The back end to use for the database. (string value)
# Deprecated group/name - [DEFAULT]/db_backend
#backend = sqlalchemy
# The SQLAlchemy connection string to use to connect to the database. (string
# value)
# Deprecated group/name - [DEFAULT]/sql_connection
# Deprecated group/name - [DATABASE]/sql_connection
# Deprecated group/name - [sql]/connection
#connection = <None>
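For example, pointing the service at a MySQL database looks like the following. The driver, hostname, and credentials are placeholders for illustration:

```ini
[database]
# Placeholder credentials; replace with your own.
connection = mysql+pymysql://bilean:BILEAN_DBPASS@controller/bilean
```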
# The SQLAlchemy connection string to use to connect to the slave database.
# (string value)
#slave_connection = <None>
# The SQL mode to be used for MySQL sessions. This option, including the
# default, overrides any server-set SQL mode. To use whatever SQL mode is set
# by the server configuration, set this to no value. Example: mysql_sql_mode=
# (string value)
#mysql_sql_mode = TRADITIONAL
# Timeout before idle SQL connections are reaped. (integer value)
# Deprecated group/name - [DEFAULT]/sql_idle_timeout
# Deprecated group/name - [DATABASE]/sql_idle_timeout
# Deprecated group/name - [sql]/idle_timeout
#idle_timeout = 3600
# Minimum number of SQL connections to keep open in a pool. (integer value)
# Deprecated group/name - [DEFAULT]/sql_min_pool_size
# Deprecated group/name - [DATABASE]/sql_min_pool_size
#min_pool_size = 1
# Maximum number of SQL connections to keep open in a pool. (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_pool_size
# Deprecated group/name - [DATABASE]/sql_max_pool_size
#max_pool_size = <None>
# Maximum number of database connection retries during startup. Set to -1 to
# specify an infinite retry count. (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_retries
# Deprecated group/name - [DATABASE]/sql_max_retries
#max_retries = 10
# Interval between retries of opening a SQL connection. (integer value)
# Deprecated group/name - [DEFAULT]/sql_retry_interval
# Deprecated group/name - [DATABASE]/reconnect_interval
#retry_interval = 10
# If set, use this value for max_overflow with SQLAlchemy. (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_overflow
# Deprecated group/name - [DATABASE]/sqlalchemy_max_overflow
#max_overflow = <None>
# Verbosity of SQL debugging information: 0=None, 100=Everything. (integer
# value)
# Deprecated group/name - [DEFAULT]/sql_connection_debug
#connection_debug = 0
# Add Python stack traces to SQL as comment strings. (boolean value)
# Deprecated group/name - [DEFAULT]/sql_connection_trace
#connection_trace = false
# If set, use this value for pool_timeout with SQLAlchemy. (integer value)
# Deprecated group/name - [DATABASE]/sqlalchemy_pool_timeout
#pool_timeout = <None>
# Enable the experimental use of database reconnect on connection lost.
# (boolean value)
#use_db_reconnect = false
# Seconds between retries of a database transaction. (integer value)
#db_retry_interval = 1
# If True, increases the interval between retries of a database operation up to
# db_max_retry_interval. (boolean value)
#db_inc_retry_interval = true
# If db_inc_retry_interval is set, the maximum seconds between retries of a
# database operation. (integer value)
#db_max_retry_interval = 10
# Maximum retries in case of connection error or deadlock error before error is
# raised. Set to -1 to specify an infinite retry count. (integer value)
#db_max_retries = 20
[eventlet_opts]
#
# From bilean.common.wsgi
#
# If false, closes the client socket explicitly. (boolean value)
#wsgi_keep_alive = true
# Timeout for client connections' socket operations. If an incoming connection
# is idle for this number of seconds it will be closed. A value of '0'
# indicates waiting forever. (integer value)
#client_socket_timeout = 900
[keystone_authtoken]
#
# From keystonemiddleware.auth_token
#
# Complete public Identity API endpoint. (string value)
#auth_uri = <None>
# API version of the admin Identity API endpoint. (string value)
#auth_version = <None>
# Do not handle authorization requests within the middleware, but delegate the
# authorization decision to downstream WSGI components. (boolean value)
#delay_auth_decision = false
# Request timeout value for communicating with Identity API server. (integer
# value)
#http_connect_timeout = <None>
# How many times to retry reconnecting when communicating with the Identity
# API server. (integer value)
#http_request_max_retries = 3
# Env key for the swift cache. (string value)
#cache = <None>
# Required if identity server requires client certificate (string value)
#certfile = <None>
# Required if identity server requires client certificate (string value)
#keyfile = <None>
# A PEM-encoded Certificate Authority to use when verifying HTTPS connections.
# Defaults to system CAs. (string value)
#cafile = <None>
# Verify HTTPS connections. (boolean value)
#insecure = false
# The region in which the identity server can be found. (string value)
#region_name = <None>
# Directory used to cache files related to PKI tokens. (string value)
#signing_dir = <None>
# Optionally specify a list of memcached server(s) to use for caching. If left
# undefined, tokens will instead be cached in-process. (list value)
# Deprecated group/name - [DEFAULT]/memcache_servers
#memcached_servers = <None>
# In order to prevent excessive effort spent validating tokens, the middleware
# caches previously-seen tokens for a configurable duration (in seconds). Set
# to -1 to disable caching completely. (integer value)
#token_cache_time = 300
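The token_cache_time behaviour described above amounts to a simple TTL cache. Below is a minimal stdlib sketch of the idea; it is an illustration only, not the keystonemiddleware implementation:

```python
import time


class TokenCache:
    """Cache validated tokens for a limited time; -1 disables caching."""

    def __init__(self, token_cache_time=300):
        self.ttl = token_cache_time
        self._store = {}

    def put(self, token, data, now=None):
        if self.ttl == -1:
            return  # caching disabled completely
        self._store[token] = (data, (now or time.time()) + self.ttl)

    def get(self, token, now=None):
        entry = self._store.get(token)
        if entry is None:
            return None
        data, expires = entry
        if (now or time.time()) >= expires:
            del self._store[token]  # expired: force re-validation
            return None
        return data


cache = TokenCache(token_cache_time=300)
cache.put("tok", {"user": "demo"}, now=1000.0)
print(cache.get("tok", now=1200.0))  # {'user': 'demo'} (within 300 s)
print(cache.get("tok", now=1400.0))  # None (expired)
```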
# Determines the frequency at which the list of revoked tokens is retrieved
# from the Identity service (in seconds). A high number of revocation events
# combined with a low cache duration may significantly reduce performance.
# (integer value)
#revocation_cache_time = 10
# (Optional) If defined, indicate whether token data should be authenticated or
# authenticated and encrypted. Acceptable values are MAC or ENCRYPT. If MAC,
# token data is authenticated (with HMAC) in the cache. If ENCRYPT, token data
# is encrypted and authenticated in the cache. If the value is not one of these
# options or empty, auth_token will raise an exception on initialization.
# (string value)
#memcache_security_strategy = <None>
# (Optional, mandatory if memcache_security_strategy is defined) This string is
# used for key derivation. (string value)
#memcache_secret_key = <None>
# (Optional) Number of seconds memcached server is considered dead before it is
# tried again. (integer value)
#memcache_pool_dead_retry = 300
# (Optional) Maximum total number of open connections to every memcached
# server. (integer value)
#memcache_pool_maxsize = 10
# (Optional) Socket timeout in seconds for communicating with a memcached
# server. (integer value)
#memcache_pool_socket_timeout = 3
# (Optional) Number of seconds a connection to memcached is held unused in the
# pool before it is closed. (integer value)
#memcache_pool_unused_timeout = 60
# (Optional) Number of seconds that an operation will wait to get a memcached
# client connection from the pool. (integer value)
#memcache_pool_conn_get_timeout = 10
# (Optional) Use the advanced (eventlet safe) memcached client pool. The
# advanced pool will only work under python 2.x. (boolean value)
#memcache_use_advanced_pool = false
# (Optional) Indicate whether to set the X-Service-Catalog header. If False,
# middleware will not ask for service catalog on token validation and will not
# set the X-Service-Catalog header. (boolean value)
#include_service_catalog = true
# Used to control the use and type of token binding. Can be set to: "disabled"
# to not check token binding; "permissive" (default) to validate binding
# information if the bind type is of a form known to the server, and ignore it
# if not; "strict", like "permissive" but rejecting the token if the bind type
# is unknown; "required", meaning some form of token binding is needed; or the
# name of a binding method that must be present in tokens. (string value)
#enforce_token_bind = permissive
# If true, the revocation list will be checked for cached tokens. This requires
# that PKI tokens are configured on the identity server. (boolean value)
#check_revocations_for_cached = false
# Hash algorithms to use for hashing PKI tokens. This may be a single algorithm
# or multiple. The algorithms are those supported by Python standard
# hashlib.new(). The hashes will be tried in the order given, so put the
# preferred one first for performance. The result of the first hash will be
# stored in the cache. This will typically be set to multiple values only while
# migrating from a less secure algorithm to a more secure one. Once all the old
# tokens are expired this option should be set to a single value for better
# performance. (list value)
#hash_algorithms = md5
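The hash_algorithms option maps onto Python's hashlib.new(). A sketch of how a PKI token could be hashed with a preferred-first fallback list (simplified relative to the real middleware):

```python
import hashlib


def hash_token(token, algorithms=("md5",)):
    """Hash a PKI token with each configured algorithm, preferred first.

    Returns one digest per algorithm; the first is what would be stored
    in the cache, while the rest allow lookups while migrating from a
    less secure algorithm to a more secure one.
    """
    return [hashlib.new(alg, token.encode("utf-8")).hexdigest()
            for alg in algorithms]


# During a migration you would list the new algorithm first:
digests = hash_token("example-pki-token", algorithms=("sha256", "md5"))
print(len(digests))  # 2
```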
# Prefix to prepend to the path. Deprecated, use identity_uri. (string value)
#auth_admin_prefix =
# Host providing the admin Identity API endpoint. Deprecated, use identity_uri.
# (string value)
#auth_host = 127.0.0.1
# Port of the admin Identity API endpoint. Deprecated, use identity_uri.
# (integer value)
#auth_port = 35357
# Protocol of the admin Identity API endpoint (http or https). Deprecated, use
# identity_uri. (string value)
#auth_protocol = https
# Complete admin Identity API endpoint. This should specify the unversioned
# root endpoint e.g. https://localhost:35357/ (string value)
#identity_uri = <None>
# This option is deprecated and may be removed in a future release. Single
# shared secret with the Keystone configuration used for bootstrapping a
# Keystone installation, or otherwise bypassing the normal authentication
# process. This option should not be used, use `admin_user` and
# `admin_password` instead. (string value)
#admin_token = <None>
# Service username. (string value)
#admin_user = <None>
# Service user password. (string value)
#admin_password = <None>
# Service tenant name. (string value)
#admin_tenant_name = admin
# Authentication type to load (unknown value)
# Deprecated group/name - [DEFAULT]/auth_plugin
#auth_type = <None>
# Config Section from which to load plugin specific options (unknown value)
#auth_section = <None>
[matchmaker_redis]
#
# From oslo.messaging
#
# Host to locate redis. (string value)
#host = 127.0.0.1
# Use this port to connect to redis host. (port value)
# Minimum value: 1
# Maximum value: 65535
#port = 6379
# Password for Redis server (optional). (string value)
#password =
[oslo_messaging_amqp]
#
# From oslo.messaging
#
# Address prefix used when sending to a specific server. (string value)
# Deprecated group/name - [amqp1]/server_request_prefix
#server_request_prefix = exclusive
# Address prefix used when broadcasting to all servers. (string value)
# Deprecated group/name - [amqp1]/broadcast_prefix
#broadcast_prefix = broadcast
# Address prefix used when sending to any server in a group. (string value)
# Deprecated group/name - [amqp1]/group_request_prefix
#group_request_prefix = unicast
# Name for the AMQP container (string value)
# Deprecated group/name - [amqp1]/container_name
#container_name = <None>
# Timeout for inactive connections (in seconds) (integer value)
# Deprecated group/name - [amqp1]/idle_timeout
#idle_timeout = 0
# Debug: dump AMQP frames to stdout (boolean value)
# Deprecated group/name - [amqp1]/trace
#trace = false
# CA certificate PEM file to verify server certificate (string value)
# Deprecated group/name - [amqp1]/ssl_ca_file
#ssl_ca_file =
# Identifying certificate PEM file to present to clients (string value)
# Deprecated group/name - [amqp1]/ssl_cert_file
#ssl_cert_file =
# Private key PEM file used to sign cert_file certificate (string value)
# Deprecated group/name - [amqp1]/ssl_key_file
#ssl_key_file =
# Password for decrypting ssl_key_file (if encrypted) (string value)
# Deprecated group/name - [amqp1]/ssl_key_password
#ssl_key_password = <None>
# Accept clients using either SSL or plain TCP (boolean value)
# Deprecated group/name - [amqp1]/allow_insecure_clients
#allow_insecure_clients = false
# Space separated list of acceptable SASL mechanisms (string value)
# Deprecated group/name - [amqp1]/sasl_mechanisms
#sasl_mechanisms =
# Path to directory that contains the SASL configuration (string value)
# Deprecated group/name - [amqp1]/sasl_config_dir
#sasl_config_dir =
# Name of configuration file (without .conf suffix) (string value)
# Deprecated group/name - [amqp1]/sasl_config_name
#sasl_config_name =
# User name for message broker authentication (string value)
# Deprecated group/name - [amqp1]/username
#username =
# Password for message broker authentication (string value)
# Deprecated group/name - [amqp1]/password
#password =
[oslo_messaging_rabbit]
#
# From oslo.messaging
#
# Use durable queues in AMQP. (boolean value)
# Deprecated group/name - [DEFAULT]/amqp_durable_queues
# Deprecated group/name - [DEFAULT]/rabbit_durable_queues
#amqp_durable_queues = false
# Auto-delete queues in AMQP. (boolean value)
# Deprecated group/name - [DEFAULT]/amqp_auto_delete
#amqp_auto_delete = false
# SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and
# SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some
# distributions. (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_version
#kombu_ssl_version =
# SSL key file (valid only if SSL enabled). (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_keyfile
#kombu_ssl_keyfile =
# SSL cert file (valid only if SSL enabled). (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_certfile
#kombu_ssl_certfile =
# SSL certification authority file (valid only if SSL enabled). (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_ca_certs
#kombu_ssl_ca_certs =
# How long to wait before reconnecting in response to an AMQP consumer cancel
# notification. (floating point value)
# Deprecated group/name - [DEFAULT]/kombu_reconnect_delay
#kombu_reconnect_delay = 1.0
# How long to wait for a missing client before abandoning the attempt to send
# it its replies. This value should not be longer than rpc_response_timeout.
# (integer value)
# Deprecated group/name - [DEFAULT]/kombu_reconnect_timeout
#kombu_missing_consumer_retry_timeout = 60
# Determines how the next RabbitMQ node is chosen in case the one we are
# currently connected to becomes unavailable. Takes effect only if more than
# one RabbitMQ node is provided in config. (string value)
# Allowed values: round-robin, shuffle
#kombu_failover_strategy = round-robin
# The RabbitMQ broker address where a single node is used. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_host
#rabbit_host = localhost
# The RabbitMQ broker port where a single node is used. (port value)
# Minimum value: 1
# Maximum value: 65535
# Deprecated group/name - [DEFAULT]/rabbit_port
#rabbit_port = 5672
# RabbitMQ HA cluster host:port pairs. (list value)
# Deprecated group/name - [DEFAULT]/rabbit_hosts
#rabbit_hosts = $rabbit_host:$rabbit_port
# Connect over SSL for RabbitMQ. (boolean value)
# Deprecated group/name - [DEFAULT]/rabbit_use_ssl
#rabbit_use_ssl = false
# The RabbitMQ userid. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_userid
#rabbit_userid = guest
# The RabbitMQ password. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_password
#rabbit_password = guest
# The RabbitMQ login method. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_login_method
#rabbit_login_method = AMQPLAIN
# The RabbitMQ virtual host. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_virtual_host
#rabbit_virtual_host = /
# How frequently to retry connecting with RabbitMQ. (integer value)
#rabbit_retry_interval = 1
# How long to backoff for between retries when connecting to RabbitMQ. (integer
# value)
# Deprecated group/name - [DEFAULT]/rabbit_retry_backoff
#rabbit_retry_backoff = 2
# Maximum number of RabbitMQ connection retries. Default is 0 (infinite retry
# count). (integer value)
# Deprecated group/name - [DEFAULT]/rabbit_max_retries
#rabbit_max_retries = 0
# Use HA queues in RabbitMQ (x-ha-policy: all). If you change this option, you
# must wipe the RabbitMQ database. (boolean value)
# Deprecated group/name - [DEFAULT]/rabbit_ha_queues
#rabbit_ha_queues = false
# Number of seconds after which the Rabbit broker is considered down if
# heartbeat's keep-alive fails (0 disables the heartbeat). EXPERIMENTAL
# (integer value)
#heartbeat_timeout_threshold = 60
# How many times during the heartbeat_timeout_threshold we check the
# heartbeat. (integer value)
#heartbeat_rate = 2
# Deprecated, use rpc_backend=kombu+memory or rpc_backend=fake (boolean value)
# Deprecated group/name - [DEFAULT]/fake_rabbit
#fake_rabbit = false
[oslo_policy]
#
# From oslo.policy
#
# The JSON file that defines policies. (string value)
# Deprecated group/name - [DEFAULT]/policy_file
#policy_file = policy.json
# Default rule. Enforced when a requested rule is not found. (string value)
# Deprecated group/name - [DEFAULT]/policy_default_rule
#policy_default_rule = default
# Directories where policy configuration files are stored. They can be relative
# to any directory in the search path defined by the config_dir option, or
# absolute paths. The file defined by policy_file must exist for these
# directories to be searched. Missing or empty directories are ignored. (multi
# valued)
# Deprecated group/name - [DEFAULT]/policy_dirs
#policy_dirs = policy.d
[paste_deploy]
#
# From bilean.common.config
#
# The API paste config file to use. (string value)
#api_paste_config = api-paste.ini
[resource_definition]
#
# From bilean.notification.converter
#
# Configuration file for resource definitions. (string value)
#definitions_cfg_file = resource_definitions.yaml
# Drop notifications if no resource definition matches. (Otherwise, we convert
# them with just the default traits) (boolean value)
#drop_unmatched_notifications = false
[revision]
#
# From bilean.common.config
#
# Bilean API revision. (string value)
#bilean_api_revision = 1.0
# Bilean engine revision. (string value)
#bilean_engine_revision = 1.0
[ssl]
#
# From oslo.service.sslutils
#
# CA certificate file to use to verify connecting clients. (string value)
# Deprecated group/name - [DEFAULT]/ssl_ca_file
#ca_file = <None>
# Certificate file to use when starting the server securely. (string value)
# Deprecated group/name - [DEFAULT]/ssl_cert_file
#cert_file = <None>
# Private key file to use when starting the server securely. (string value)
# Deprecated group/name - [DEFAULT]/ssl_key_file
#key_file = <None>
# SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and
# SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some
# distributions. (string value)
#version = <None>
# Sets the list of available ciphers. The value should be a string in the
# OpenSSL cipher list format. (string value)
#ciphers = <None>

17
etc/bilean/policy.json Normal file
View File

@ -0,0 +1,17 @@
{
    "context_is_admin": "role:admin",

    "users:index": "role:admin",
    "users:update": "role:admin",
    "users:show": "",

    "resources:index": "role:admin",
    "resources:show": "role:admin",
    "resources:validate_creation": "role:admin",

    "rules:index": "",
    "rules:create": "role:admin",
    "rules:show": "role:admin",
    "rules:update": "role:admin",
    "rules:delete": "role:admin"
}
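The rules above use oslo.policy's "role:&lt;name&gt;" check syntax, where an empty string means the request is always allowed. A minimal stdlib sketch of that evaluation (a hypothetical simplification; real deployments should use oslo.policy's Enforcer):

```python
def enforce(rules, action, roles):
    """Evaluate a simplified 'role:<name>' policy rule.

    Only the two forms used in the policy file above are handled: an
    empty string (always allowed) and 'role:<name>' (the caller must
    hold that role).
    """
    rule = rules.get(action)
    if rule is None:
        return False  # unknown action: deny by default
    if rule == "":
        return True   # empty rule: always allowed
    if rule.startswith("role:"):
        return rule.split(":", 1)[1] in roles
    raise ValueError("unsupported rule syntax: %s" % rule)


rules = {"users:index": "role:admin", "users:show": "", "rules:index": ""}
print(enforce(rules, "users:index", ["admin"]))   # True
print(enforce(rules, "users:index", ["member"]))  # False
print(enforce(rules, "users:show", []))           # True
```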

32
etc/bilean/resource_definitions.yaml Normal file
View File

@ -0,0 +1,32 @@
- event_type: compute.instance.*.end
  resources:
    - resource_type: os.nova.server
      traits:
        instance_flavor_id:
          fields: payload.instance_flavor_id
        resource_ref:
          fields: payload.instance_id
- event_type: volume.*.end
  resources:
    - resource_type: volume
      traits:
        value:
          fields: payload.size
        resource_ref:
          fields: payload.volume_id
- event_type: network.create.end
  resources:
    - resource_type: network
      traits:
        resource_ref:
          fields: payload.network.id
        user_id:
          fields: payload.network.tenant_id
- event_type: network.delete.end
  resources:
    - resource_type: network
      traits:
        resource_ref:
          fields: payload.network_id
        user_id:
          fields: payload.network.tenant_id
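Each fields entry above is a dotted path into the notification payload. A small stdlib sketch of how such a path could be resolved against a notification dict (a hypothetical helper for illustration, not Bilean's actual converter code):

```python
def resolve_field(notification, path):
    """Walk a dotted path such as 'payload.network.id' through nested dicts.

    Returns None when any segment of the path is missing, which mirrors
    a notification that does not carry the expected trait.
    """
    value = notification
    for key in path.split("."):
        if not isinstance(value, dict) or key not in value:
            return None
        value = value[key]
    return value


notification = {
    "event_type": "network.create.end",
    "payload": {"network": {"id": "net-123", "tenant_id": "tenant-9"}},
}
print(resolve_field(notification, "payload.network.id"))         # net-123
print(resolve_field(notification, "payload.network.tenant_id"))  # tenant-9
print(resolve_field(notification, "payload.volume_id"))          # None
```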

Some files were not shown because too many files have changed in this diff.