Retire bilean

This hasn't been touched in over two years. Time to close things off.

Change-Id: I5b8bcba0a5639c6a79a1c0ebba9d11aafa763b8c
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
Author: Stephen Finucane
Date: 2019-05-02 13:51:00 -06:00
parent 31168a829b
commit 17c06ad4ee
133 changed files with 9 additions and 14222 deletions

.gitignore

@@ -1,25 +0,0 @@
*.db
*.log
*.pyc
*.swp
.DS_Store
.coverage
.tox
.testrepository
AUTHORS
ChangeLog
bilean.egg-info/
bilean/versioninfo
build/
covhtml
dist/
doc/build
doc/source/bilean.*
doc/source/modules.rst
etc/bilean.conf
nosetests.xml
pep8.txt
requirements.txt
tests/test.db.pristine
vendor
etc/bilean/bilean.conf.sample

@@ -1,6 +0,0 @@
[DEFAULT]
test_command=
    PYTHON=$(echo ${PYTHON:-python} | sed 's/--source bilean//g')
    ${PYTHON} -m subunit.run discover ${OS_TEST_PATH:-./bilean/tests} -t . $LISTOPT $IDOPTION
test_id_option=--load-list $IDFILE
test_list_option=--list

@@ -1,43 +0,0 @@
======================
Contributing to Bilean
======================

If you're interested in contributing to the Bilean project,
the following will help get you started.

Contributor License Agreement
=============================

In order to contribute to the Bilean project, you need to have
signed OpenStack's contributor's agreement:

* http://docs.openstack.org/infra/manual/developers.html
* http://wiki.openstack.org/CLA

Project Hosting Details
=======================

* Bug trackers

  * General bilean tracker: https://launchpad.net/bilean
  * Python client tracker: https://launchpad.net/python-bileanclient

* Mailing list (prefix subjects with ``[Bilean]`` for faster responses)

  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

* Wiki

  https://wiki.openstack.org/wiki/Bilean

* IRC channel

  * #openstack-bilean at FreeNode

* Code Hosting

  * https://git.openstack.org/cgit/openstack/bilean
  * https://git.openstack.org/cgit/openstack/python-bileanclient

* Code Review

  * https://review.openstack.org/#/q/bilean+AND+status:+open,n,z
  * http://docs.openstack.org/infra/manual/developers.html#development-workflow

LICENSE

@@ -1,176 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

README

@@ -0,0 +1,9 @@
This project is no longer maintained.

The contents of this repository are still available in the Git
source code management system. To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".

For any further questions, please email
openstack-discuss@lists.openstack.org.

@@ -1,36 +0,0 @@
Bilean
======

--------
Overview
--------

Bilean is a billing service for OpenStack clouds. It provides trigger-type
billing based on notifications from other OpenStack services.

---------
Resources
---------

Launchpad Projects
------------------
- Server: https://launchpad.net/bilean

Blueprints
----------
- Blueprints: https://blueprints.launchpad.net/bilean

Bug Tracking
------------
- Bugs: https://bugs.launchpad.net/bilean

Documentation
-------------
- Documentation: http://bilean.readthedocs.io/en/latest

IRC
---
IRC Channel: #openstack-bilean on `Freenode`_.

.. _Freenode: http://freenode.net/

@@ -1,82 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
from oslo_middleware import request_id as oslo_request_id
from oslo_utils import encodeutils

from bilean.common import context
from bilean.common import exception
from bilean.common import wsgi


class ContextMiddleware(wsgi.Middleware):

    def process_request(self, req):
        '''Build context from authentication info extracted from request.'''
        headers = req.headers
        environ = req.environ
        try:
            auth_url = headers.get('X-Auth-Url')
            if not auth_url:
                # Use auth_url defined in bilean.conf
                auth_url = cfg.CONF.authentication.auth_url

            auth_token = headers.get('X-Auth-Token')
            auth_token_info = environ.get('keystone.token_info')

            project = headers.get('X-Project-Id')
            project_name = headers.get('X-Project-Name')
            project_domain = headers.get('X-Project-Domain-Id')
            project_domain_name = headers.get('X-Project-Domain-Name')

            user = headers.get('X-User-Id')
            user_name = headers.get('X-User-Name')
            user_domain = headers.get('X-User-Domain-Id')
            user_domain_name = headers.get('X-User-Domain-Name')

            domain = headers.get('X-Domain-Id')
            domain_name = headers.get('X-Domain-Name')

            region_name = headers.get('X-Region-Name')

            roles = headers.get('X-Roles')
            if roles is not None:
                roles = roles.split(',')

            env_req_id = environ.get(oslo_request_id.ENV_REQUEST_ID)
            if env_req_id is None:
                request_id = None
            else:
                request_id = encodeutils.safe_decode(env_req_id)
        except Exception:
            raise exception.NotAuthenticated()

        req.context = context.RequestContext(
            auth_token=auth_token,
            user=user,
            project=project,
            domain=domain,
            user_domain=user_domain,
            project_domain=project_domain,
            request_id=request_id,
            auth_url=auth_url,
            user_name=user_name,
            project_name=project_name,
            domain_name=domain_name,
            user_domain_name=user_domain_name,
            project_domain_name=project_domain_name,
            auth_token_info=auth_token_info,
            region_name=region_name,
            roles=roles)
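The middleware above collects identity fields from the `X-*` headers set by the Keystone auth pipeline and packs them into a request context. A minimal stdlib sketch of that extraction idea, with a reduced field set — `build_context` and this shape are illustrative, not the bilean API:

```python
def build_context(headers):
    """Collect identity fields from X-* headers into a plain dict."""
    ctx = {
        'auth_token': headers.get('X-Auth-Token'),
        'user': headers.get('X-User-Id'),
        'project': headers.get('X-Project-Id'),
    }
    # Roles arrive as one comma-separated header value.
    roles = headers.get('X-Roles')
    ctx['roles'] = roles.split(',') if roles is not None else []
    return ctx

ctx = build_context({'X-Auth-Token': 'tok', 'X-User-Id': 'u1',
                     'X-Project-Id': 'p1', 'X-Roles': 'admin,member'})
```

As in the middleware, missing headers simply come back as `None` rather than raising.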

@@ -1,135 +0,0 @@
# -*- coding: utf-8 -*-
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
'''
A middleware that turns exceptions into a parsable string.
'''

import traceback

from oslo_config import cfg
import six
import webob

from bilean.common import exception
from bilean.common import serializers
from bilean.common import wsgi


class Fault(object):

    def __init__(self, error):
        self.error = error

    @webob.dec.wsgify(RequestClass=wsgi.Request)
    def __call__(self, req):
        serializer = serializers.JSONResponseSerializer()
        resp = webob.Response(request=req)
        default_webob_exc = webob.exc.HTTPInternalServerError()
        resp.status_code = self.error.get('code', default_webob_exc.code)
        serializer.default(resp, self.error)
        return resp


class FaultWrapper(wsgi.Middleware):
    """Replace error body with something the client can parse."""

    error_map = {
        'Forbidden': webob.exc.HTTPForbidden,
        'InternalError': webob.exc.HTTPInternalServerError,
        'InvalidParameter': webob.exc.HTTPBadRequest,
        'InvalidSchemaError': webob.exc.HTTPBadRequest,
        'MultipleChoices': webob.exc.HTTPBadRequest,
        'UserNotFound': webob.exc.HTTPNotFound,
        'RuleNotFound': webob.exc.HTTPNotFound,
        'RuleTypeNotFound': webob.exc.HTTPNotFound,
        'RuleTypeNotMatch': webob.exc.HTTPBadRequest,
        'ReceiverNotFound': webob.exc.HTTPNotFound,
        'RequestLimitExceeded': webob.exc.HTTPBadRequest,
        'ResourceInUse': webob.exc.HTTPConflict,
        'BileanBadRequest': webob.exc.HTTPBadRequest,
        'SpecValidationFailed': webob.exc.HTTPBadRequest,
    }

    def _map_exception_to_error(self, class_exception):
        if class_exception == Exception:
            return webob.exc.HTTPInternalServerError

        if class_exception.__name__ not in self.error_map:
            return self._map_exception_to_error(class_exception.__base__)

        return self.error_map[class_exception.__name__]

    def _error(self, ex):
        trace = None
        traceback_marker = 'Traceback (most recent call last)'
        webob_exc = None
        if isinstance(ex, exception.HTTPExceptionDisguise):
            # An HTTP exception was disguised so it could make it here;
            # let's remove the disguise and set the original HTTP exception
            if cfg.CONF.debug:
                trace = ''.join(traceback.format_tb(ex.tb))

            ex = ex.exc
            webob_exc = ex

        ex_type = ex.__class__.__name__

        is_remote = ex_type.endswith('_Remote')
        if is_remote:
            ex_type = ex_type[:-len('_Remote')]

        full_message = six.text_type(ex)
        if '\n' in full_message and is_remote:
            message, msg_trace = full_message.split('\n', 1)
        elif traceback_marker in full_message:
            message, msg_trace = full_message.split(traceback_marker, 1)
            message = message.rstrip('\n')
            msg_trace = traceback_marker + msg_trace
        else:
            if six.PY3:
                msg_trace = traceback.format_exception(type(ex), ex,
                                                       ex.__traceback__)
            else:
                msg_trace = traceback.format_exc()

            message = full_message

        if isinstance(ex, exception.BileanException):
            message = ex.message

        if cfg.CONF.debug and not trace:
            trace = msg_trace

        if not webob_exc:
            webob_exc = self._map_exception_to_error(ex.__class__)

        error = {
            'code': webob_exc.code,
            'title': webob_exc.title,
            'explanation': webob_exc.explanation,
            'error': {
                'code': webob_exc.code,
                'message': message,
                'type': ex_type,
                'traceback': trace,
            }
        }

        return error

    def process_request(self, req):
        try:
            return req.get_response(self.application)
        except Exception as exc:
            return req.get_response(Fault(self._error(exc)))
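FaultWrapper resolves an exception class to an HTTP error by walking `__base__` until it finds a mapped name or reaches `Exception`, so subclasses inherit their parent's mapping. A toy sketch of that lookup — the status codes and exception names here are hypothetical, not bilean's map:

```python
# Name-keyed map, as in FaultWrapper.error_map (codes are illustrative).
ERROR_MAP = {'Forbidden': 403, 'NotFound': 404}

class NotFound(Exception):
    pass

class RuleNotFound(NotFound):
    # Not in ERROR_MAP itself; maps via its base class NotFound.
    pass

def map_exception(cls):
    """Walk base classes until a mapped name (or Exception) is found."""
    if cls is Exception:
        return 500  # ultimate fallback: internal server error
    if cls.__name__ not in ERROR_MAP:
        return map_exception(cls.__base__)
    return ERROR_MAP[cls.__name__]
```

An unmapped class like `ValueError` falls through to `Exception` and becomes a 500, mirroring `HTTPInternalServerError` above.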

@@ -1,38 +0,0 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
from oslo_middleware import ssl

ssl_middleware_opts = [
    cfg.StrOpt('secure_proxy_ssl_header',
               default='X-Forwarded-Proto',
               deprecated_group='DEFAULT',
               help="The HTTP header that will be used to determine what "
                    "the original request protocol scheme was, even if it "
                    "was removed by an SSL terminator proxy.")
]


class SSLMiddleware(ssl.SSLMiddleware):

    def __init__(self, application, *args, **kwargs):
        # NOTE(cbrandily): calling super(ssl.SSLMiddleware, self).__init__
        # allows defining our opt (including a deprecation).
        super(ssl.SSLMiddleware, self).__init__(application, *args, **kwargs)
        self.oslo_conf.register_opts(
            ssl_middleware_opts, group='oslo_middleware')


def list_opts():
    yield None, ssl_middleware_opts

@@ -1,125 +0,0 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
A filter middleware that inspects the requested URI for a version string
and/or Accept headers and attempts to negotiate an API controller to
return.
"""

import re

import webob

from bilean.common import wsgi

from oslo_log import log as logging

LOG = logging.getLogger(__name__)


class VersionNegotiationFilter(wsgi.Middleware):

    def __init__(self, version_controller, app, conf, **local_conf):
        self.versions_app = version_controller(conf)
        self.version_uri_regex = re.compile(r"^v(\d+)\.?(\d+)?")
        self.conf = conf
        super(VersionNegotiationFilter, self).__init__(app)

    def process_request(self, req):
        """Process Accept header or simply return correct API controller.

        If there is a version identifier in the URI, simply
        return the correct API controller; otherwise, if we
        find an Accept: header, process it.
        """
        # See if a version identifier is in the URI passed to
        # us already. If so, simply return the right version
        # API controller
        msg = ("Processing request: %(method)s %(path)s Accept: "
               "%(accept)s" % {'method': req.method,
                               'path': req.path, 'accept': req.accept})
        LOG.debug(msg)

        # If the request is for /versions, just return the versions container
        if req.path_info_peek() in ("versions", ""):
            return self.versions_app

        match = self._match_version_string(req.path_info_peek(), req)
        if match:
            major_version = req.environ['api.major_version']
            minor_version = req.environ['api.minor_version']

            if (major_version == 1 and minor_version == 0):
                LOG.debug("Matched versioned URI. "
                          "Version: %(major_version)d.%(minor_version)d"
                          % {'major_version': major_version,
                             'minor_version': minor_version})
                # Strip the version from the path
                req.path_info_pop()
                return None
            else:
                LOG.debug("Unknown version in versioned URI: "
                          "%(major_version)d.%(minor_version)d. "
                          "Returning version choices."
                          % {'major_version': major_version,
                             'minor_version': minor_version})
                return self.versions_app

        accept = str(req.accept)
        if accept.startswith('application/vnd.openstack.orchestration-'):
            token_loc = len('application/vnd.openstack.orchestration-')
            accept_version = accept[token_loc:]
            match = self._match_version_string(accept_version, req)
            if match:
                major_version = req.environ['api.major_version']
                minor_version = req.environ['api.minor_version']
                if (major_version == 1 and minor_version == 0):
                    LOG.debug("Matched versioned media type. Version: "
                              "%(major_version)d.%(minor_version)d"
                              % {'major_version': major_version,
                                 'minor_version': minor_version})
                    return None
                else:
                    LOG.debug("Unknown version in accept header: "
                              "%(major_version)d.%(minor_version)d..."
                              "returning version choices."
                              % {'major_version': major_version,
                                 'minor_version': minor_version})
                    return self.versions_app
        else:
            if req.accept not in ('*/*', ''):
                LOG.debug("Unknown accept header: %s..."
                          "returning HTTP not found.", req.accept)
                return webob.exc.HTTPNotFound()

        return None

    def _match_version_string(self, subject, req):
        """Given a subject, tries to match a major and/or minor version number.

        If found, sets the api.major_version and api.minor_version environ
        variables.

        Returns True if there was a match, false otherwise.

        :param subject: The string to check
        :param req: Webob.Request object
        """
        match = self.version_uri_regex.match(subject)
        if match:
            major_version, minor_version = match.groups(0)
            major_version = int(major_version)
            minor_version = int(minor_version)
            req.environ['api.major_version'] = major_version
            req.environ['api.minor_version'] = minor_version

        return match is not None
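The version pattern `^v(\d+)\.?(\d+)?` accepts both `v1` and `v1.0`, and `match.groups(0)` substitutes 0 for a missing minor version, which is why a bare `v1` negotiates as 1.0. A small sketch of just that matching step (the function name is illustrative):

```python
import re

# Same URI version pattern as VersionNegotiationFilter above.
VERSION_RE = re.compile(r"^v(\d+)\.?(\d+)?")

def parse_version(segment):
    """Return (major, minor) for a path segment like 'v1.0', else None."""
    m = VERSION_RE.match(segment)
    if not m:
        return None
    # groups(0) fills absent groups with 0, so "v1" parses as (1, 0).
    major, minor = m.groups(0)
    return int(major), int(minor)
```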

@@ -1,36 +0,0 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from bilean.api.middleware import context
from bilean.api.middleware import fault
from bilean.api.middleware import ssl
from bilean.api.middleware import version_negotiation
from bilean.api.openstack import versions


def version_negotiation_filter(app, conf, **local_conf):
    return version_negotiation.VersionNegotiationFilter(versions.Controller,
                                                        app,
                                                        conf, **local_conf)


def faultwrap_filter(app, conf, **local_conf):
    return fault.FaultWrapper(app)


def sslmiddleware_filter(app, conf, **local_conf):
    return ssl.SSLMiddleware(app)


def contextmiddleware_filter(app, conf, **local_conf):
    return context.ContextMiddleware(app)
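Each function above is a paste.deploy-style filter factory: it takes the downstream WSGI app plus config and returns a wrapped app. A stdlib-only sketch of that wrapping pattern, using a fault-catching middleware as the example — the names and the bare-bones error body are illustrative, not bilean's implementation:

```python
def faultwrap_filter(app, conf, **local_conf):
    """Return `app` wrapped so unhandled exceptions become a 500 response."""
    def middleware(environ, start_response):
        try:
            return app(environ, start_response)
        except Exception:
            start_response('500 Internal Server Error',
                           [('Content-Type', 'text/plain')])
            return [b'error']
    return middleware

def broken_app(environ, start_response):
    # Stand-in for a downstream app that blows up.
    raise ValueError("boom")

wrapped = faultwrap_filter(broken_app, {})
statuses = []
body = wrapped({}, lambda status, headers: statuses.append(status))
```

The paste pipeline simply chains such factories, so each filter only ever sees the app below it.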

@@ -1,193 +0,0 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import routes

from bilean.api.openstack.v1 import consumptions
from bilean.api.openstack.v1 import events
from bilean.api.openstack.v1 import policies
from bilean.api.openstack.v1 import resources
from bilean.api.openstack.v1 import rules
from bilean.api.openstack.v1 import users
from bilean.common import wsgi


class API(wsgi.Router):
    """WSGI router for Bilean v1 ReST API requests."""

    def __init__(self, conf, **local_conf):
        self.conf = conf
        mapper = routes.Mapper()

        # Users
        users_resource = users.create_resource(conf)
        users_path = "/users"
        with mapper.submapper(controller=users_resource,
                              path_prefix=users_path) as user_mapper:

            # User collection
            user_mapper.connect("user_index",
                                "",
                                action="index",
                                conditions={'method': 'GET'})

            # User detail
            user_mapper.connect("user_get",
                                "/{user_id}",
                                action="get",
                                conditions={'method': 'GET'})

            # Update user
            user_mapper.connect("user_recharge",
                                "/{user_id}",
                                action="recharge",
                                conditions={'method': 'PUT'})

            # Action
            user_mapper.connect("user_action",
                                "/{user_id}/action",
                                action="action",
                                conditions={'method': 'POST'})

        # Resources
        res_resource = resources.create_resource(conf)
        res_path = "/resources"
        with mapper.submapper(controller=res_resource,
                              path_prefix=res_path) as res_mapper:

            # Resource collection
            res_mapper.connect("resource_index",
                               "",
                               action="index",
                               conditions={'method': 'GET'})

            # Resource detail
            res_mapper.connect("resource_get",
                               "/{resource_id}",
                               action="get",
                               conditions={'method': 'GET'})

            # Validate creation
            res_mapper.connect("validate_creation",
                               "",
                               action="validate_creation",
                               conditions={'method': 'POST'})

        # Rules
        rule_resource = rules.create_resource(conf)
        rule_path = "/rules"
        with mapper.submapper(controller=rule_resource,
                              path_prefix=rule_path) as rule_mapper:

            # Rule collection
            rule_mapper.connect("rule_index",
                                "",
                                action="index",
                                conditions={'method': 'GET'})

            # Rule detail
            rule_mapper.connect("rule_get",
                                "/{rule_id}",
                                action="get",
                                conditions={'method': 'GET'})

            # Create rule
            rule_mapper.connect("rule_create",
                                "",
                                action="create",
                                conditions={'method': 'POST'})

            # Update rule
            rule_mapper.connect("rule_update",
                                "/{rule_id}",
                                action="update",
                                conditions={'method': 'PUT'})

            # Delete rule
            rule_mapper.connect("rule_delete",
                                "/{rule_id}",
                                action="delete",
                                conditions={'method': 'DELETE'})

        # Policies
        policy_resource = policies.create_resource(conf)
        policy_path = "/policies"
        with mapper.submapper(controller=policy_resource,
                              path_prefix=policy_path) as policy_mapper:

            # Policy collection
            policy_mapper.connect("policy_index",
                                  "",
                                  action="index",
                                  conditions={'method': 'GET'})

            # Policy detail
            policy_mapper.connect("policy_get",
                                  "/{policy_id}",
                                  action="get",
                                  conditions={'method': 'GET'})

            # Create policy
            policy_mapper.connect("policy_create",
                                  "",
                                  action="create",
                                  conditions={'method': 'POST'})

            # Update policy
            policy_mapper.connect("policy_update",
                                  "/{policy_id}",
                                  action="update",
                                  conditions={'method': 'PUT'})

            # Delete policy
            policy_mapper.connect("policy_delete",
                                  "/{policy_id}",
                                  action="delete",
                                  conditions={'method': 'DELETE'})

            # Action
            policy_mapper.connect("policy_action",
                                  "/{policy_id}/action",
                                  action="action",
                                  conditions={'method': 'POST'})

        # Events
        event_resource = events.create_resource(conf)
        event_path = "/events"
        with mapper.submapper(controller=event_resource,
                              path_prefix=event_path) as event_mapper:

            # Event collection
            event_mapper.connect("event_index",
                                 "",
                                 action="index",
                                 conditions={'method': 'GET'})

        # Consumptions
        cons_resource = consumptions.create_resource(conf)
        cons_path = "/consumptions"
        with mapper.submapper(controller=cons_resource,
                              path_prefix=cons_path) as cons_mapper:

            # Consumption collection
            cons_mapper.connect("consumption_index",
                                "",
                                action="index",
                                conditions={'method': 'GET'})

            cons_mapper.connect("consumption_statistics",
                                "/statistics",
                                action="statistics",
                                conditions={'method': 'GET'})

        super(API, self).__init__(mapper)
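The Router above binds (HTTP method, path template) pairs such as `PUT /users/{user_id}` to controller actions via `routes.Mapper`. A stdlib-only sketch of the same dispatch idea — `MiniRouter` and the two sample routes are illustrative stand-ins, not the routes library:

```python
import re

class MiniRouter:
    """Tiny method+path dispatcher in the spirit of routes.Mapper."""

    def __init__(self):
        self.routes = []  # (method, compiled pattern, action name)

    def connect(self, method, pattern, action):
        # Turn "/users/{user_id}" into a named-group regex.
        regex = re.sub(r"{(\w+)}", r"(?P<\1>[^/]+)", pattern)
        self.routes.append((method, re.compile("^%s$" % regex), action))

    def match(self, method, path):
        """Return (action, path variables) for the first matching route."""
        for m, rx, action in self.routes:
            hit = rx.match(path)
            if m == method and hit:
                return action, hit.groupdict()
        return None

router = MiniRouter()
router.connect("GET", "/users", "index")
router.connect("PUT", "/users/{user_id}", "recharge")
```

`routes.Mapper` adds submappers, named routes, and URL generation on top of this basic idea.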

@@ -1,93 +0,0 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from bilean.api.openstack.v1 import util
from bilean.common import consts
from bilean.common import serializers
from bilean.common import utils
from bilean.common import wsgi
from bilean.rpc import client as rpc_client


class ConsumptionController(object):
    """WSGI controller for Consumptions in Bilean v1 API."""

    # Define request scope (must match what is in policy.json)
    REQUEST_SCOPE = 'consumptions'

    def __init__(self, options):
        self.options = options
        self.rpc_client = rpc_client.EngineClient()

    @util.policy_enforce
    def index(self, req):
        """Lists all consumptions."""
        filter_whitelist = {
            'resource_type': 'mixed',
        }
        param_whitelist = {
            'user_id': 'single',
            'start_time': 'single',
            'end_time': 'single',
            'limit': 'single',
            'marker': 'single',
            'sort_dir': 'single',
            'sort_keys': 'multi',
        }
        params = util.get_allowed_params(req.params, param_whitelist)
        filters = util.get_allowed_params(req.params, filter_whitelist)

        key = consts.PARAM_LIMIT
        if key in params:
            params[key] = utils.parse_int_param(key, params[key])

        if not filters:
            filters = None

        consumptions = self.rpc_client.consumption_list(req.context,
                                                        filters=filters,
                                                        **params)

        return {'consumptions': consumptions}

    @util.policy_enforce
    def statistics(self, req):
        '''Consumption statistics.'''
        filter_whitelist = {
            'resource_type': 'mixed',
        }
        param_whitelist = {
            'user_id': 'single',
            'start_time': 'single',
            'end_time': 'single',
            'summary': 'single',
        }
        params = util.get_allowed_params(req.params, param_whitelist)
        filters = util.get_allowed_params(req.params, filter_whitelist)

        key = consts.PARAM_SUMMARY
        if key in params:
            params[key] = utils.parse_bool_param(key, params[key])

        if not filters:
            filters = None

        statistics = self.rpc_client.consumption_statistics(req.context,
                                                            filters=filters,
                                                            **params)

        return {'statistics': statistics}


def create_resource(ops):
    """Consumption resource factory method."""
    deserializer = wsgi.JSONRequestDeserializer()
    serializer = serializers.JSONResponseSerializer()
    return wsgi.Resource(ConsumptionController(ops), deserializer, serializer)
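`util.get_allowed_params` drops any query parameter not in the whitelist; a `'single'` policy keeps one value while `'multi'` keeps every value of a repeated key (the `'mixed'` policy used for filters is omitted here). A hypothetical reimplementation to illustrate the filtering, not bilean's actual helper:

```python
def get_allowed_params(params, whitelist):
    """Filter (key, value) query pairs against a policy whitelist.

    params: list of (key, value) pairs, as in a parsed query string.
    whitelist: dict mapping allowed key -> 'single' or 'multi'.
    """
    allowed = {}
    for key, policy in whitelist.items():
        values = [v for k, v in params if k == key]
        if not values:
            continue
        # 'multi' keeps all values; 'single' keeps only the first.
        allowed[key] = values if policy == 'multi' else values[0]
    return allowed

query = [('limit', '10'), ('sort_keys', 'id'), ('sort_keys', 'name'),
         ('evil', 'x')]
params = get_allowed_params(
    query, {'limit': 'single', 'sort_keys': 'multi'})
```

Unlisted keys like `evil` never reach the RPC layer, which is the point of whitelisting request parameters.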

View File

@ -1,74 +0,0 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from bilean.api.openstack.v1 import util
from bilean.common import consts
from bilean.common import serializers
from bilean.common import utils
from bilean.common import wsgi
from bilean.rpc import client as rpc_client
class EventController(object):
"""WSGI controller for Events in Bilean v1 API
Implements the API actions
"""
# Define request scope (must match what is in policy.json)
REQUEST_SCOPE = 'events'
def __init__(self, options):
self.options = options
self.rpc_client = rpc_client.EngineClient()
@util.policy_enforce
def index(self, req):
"""Lists summary information for all events"""
filter_whitelist = {
'resource_type': 'mixed',
'action': 'single',
}
param_whitelist = {
'user_id': 'single',
'start_time': 'single',
'end_time': 'single',
'limit': 'single',
'marker': 'single',
'sort_dir': 'single',
'sort_keys': 'multi',
'show_deleted': 'single',
}
params = util.get_allowed_params(req.params, param_whitelist)
filters = util.get_allowed_params(req.params, filter_whitelist)
key = consts.PARAM_LIMIT
if key in params:
params[key] = utils.parse_int_param(key, params[key])
key = consts.PARAM_SHOW_DELETED
if key in params:
params[key] = utils.parse_bool_param(key, params[key])
if not filters:
filters = None
events = self.rpc_client.event_list(req.context, filters=filters,
**params)
return {'events': events}
def create_resource(options):
"""Event resource factory method."""
deserializer = wsgi.JSONRequestDeserializer()
serializer = serializers.JSONResponseSerializer()
return wsgi.Resource(EventController(options), deserializer, serializer)

View File

@ -1,173 +0,0 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from webob import exc
from bilean.api.openstack.v1 import util
from bilean.api import validator
from bilean.common import consts
from bilean.common.i18n import _
from bilean.common import serializers
from bilean.common import utils
from bilean.common import wsgi
from bilean.rpc import client as rpc_client
class PolicyData(object):
'''The data accompanying a POST/PUT request to create/update a policy.'''
def __init__(self, data):
self.data = data
def name(self):
if consts.POLICY_NAME not in self.data:
raise exc.HTTPBadRequest(_("No policy name specified"))
return self.data[consts.POLICY_NAME]
def rules(self):
return self.data.get(consts.POLICY_RULES)
def metadata(self):
return self.data.get(consts.POLICY_METADATA)
class PolicyController(object):
"""WSGI controller for Policies in Bilean v1 API
Implements the API actions
"""
# Define request scope (must match what is in policy.json)
REQUEST_SCOPE = 'policies'
SUPPORTED_ACTIONS = (
ADD_RULES,
) = (
'add_rules',
)
def __init__(self, options):
self.options = options
self.rpc_client = rpc_client.EngineClient()
@util.policy_enforce
def index(self, req):
"""List summary information for all policies"""
filter_whitelist = {
'name': 'mixed',
'metadata': 'mixed',
}
param_whitelist = {
'limit': 'single',
'marker': 'single',
'sort_dir': 'single',
'sort_keys': 'multi',
'show_deleted': 'single',
}
params = util.get_allowed_params(req.params, param_whitelist)
filters = util.get_allowed_params(req.params, filter_whitelist)
key = consts.PARAM_LIMIT
if key in params:
params[key] = utils.parse_int_param(key, params[key])
key = consts.PARAM_SHOW_DELETED
if key in params:
params[key] = utils.parse_bool_param(key, params[key])
if not filters:
filters = None
policies = self.rpc_client.policy_list(req.context, filters=filters,
**params)
return {'policies': policies}
@util.policy_enforce
def get(self, req, policy_id):
"""Get detailed information for a policy"""
policy = self.rpc_client.policy_get(req.context,
policy_id)
return {'policy': policy}
@util.policy_enforce
def create(self, req, body):
"""Create a new policy"""
if not validator.is_valid_body(body):
raise exc.HTTPUnprocessableEntity()
policy_data = body.get('policy')
if policy_data is None:
raise exc.HTTPBadRequest(_("Malformed request data, missing "
"'policy' key in request body."))
data = PolicyData(policy_data)
policy = self.rpc_client.policy_create(req.context,
data.name(),
data.rules(),
data.metadata())
return {'policy': policy}
@util.policy_enforce
def update(self, req, policy_id, body):
if not validator.is_valid_body(body):
raise exc.HTTPUnprocessableEntity()
policy_data = body.get('policy')
if policy_data is None:
raise exc.HTTPBadRequest(_("Malformed request data, missing "
"'policy' key in request body."))
name = policy_data.get(consts.POLICY_NAME)
metadata = policy_data.get(consts.POLICY_METADATA)
is_default = policy_data.get(consts.POLICY_IS_DEFAULT)
policy = self.rpc_client.policy_update(req.context, policy_id, name,
metadata, is_default)
return {'policy': policy}
@util.policy_enforce
def delete(self, req, policy_id):
"""Delete a policy with given policy_id"""
self.rpc_client.policy_delete(req.context, policy_id)
@util.policy_enforce
def action(self, req, policy_id, body=None):
'''Perform specified action on a policy.'''
body = body or {}
if len(body) < 1:
raise exc.HTTPBadRequest(_('No action specified'))
if len(body) > 1:
raise exc.HTTPBadRequest(_('Multiple actions specified'))
action = list(body.keys())[0]
if action not in self.SUPPORTED_ACTIONS:
msg = _("Unrecognized action '%s' specified") % action
raise exc.HTTPBadRequest(msg)
if action == self.ADD_RULES:
rules = body.get(action).get('rules')
if rules is None or not isinstance(rules, list) or len(rules) == 0:
raise exc.HTTPBadRequest(_('No rule to add'))
policy = self.rpc_client.policy_add_rules(
req.context, policy_id, rules)
return {'policy': policy}
def create_resource(options):
"""Policy resource factory method."""
deserializer = wsgi.JSONRequestDeserializer()
serializer = serializers.JSONResponseSerializer()
return wsgi.Resource(PolicyController(options), deserializer, serializer)

View File

@ -1,120 +0,0 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from webob import exc
from bilean.api.openstack.v1 import util
from bilean.api import validator
from bilean.common import consts
from bilean.common import exception
from bilean.common.i18n import _
from bilean.common import serializers
from bilean.common import utils
from bilean.common import wsgi
from bilean.rpc import client as rpc_client
class ResourceController(object):
"""WSGI controller for Resources in Bilean v1 API
Implements the API actions. The 'create' and 'delete' actions are
triggered by notifications, so they are not provided here.
"""
# Define request scope (must match what is in policy.json)
REQUEST_SCOPE = 'resources'
def __init__(self, options):
self.options = options
self.rpc_client = rpc_client.EngineClient()
@util.policy_enforce
def index(self, req):
"""Lists summary information for all resources"""
filter_whitelist = {
'resource_type': 'mixed',
'rule_id': 'mixed',
}
param_whitelist = {
'user_id': 'single',
'limit': 'single',
'marker': 'single',
'sort_dir': 'single',
'sort_keys': 'multi',
'show_deleted': 'single',
}
params = util.get_allowed_params(req.params, param_whitelist)
filters = util.get_allowed_params(req.params, filter_whitelist)
key = consts.PARAM_LIMIT
if key in params:
params[key] = utils.parse_int_param(key, params[key])
key = consts.PARAM_SHOW_DELETED
if key in params:
params[key] = utils.parse_bool_param(key, params[key])
if not filters:
filters = None
resources = self.rpc_client.resource_list(req.context, filters=filters,
**params)
return {'resources': resources}
@util.policy_enforce
def get(self, req, resource_id):
"""Gets detailed information for a resource"""
resource = self.rpc_client.resource_get(req.context, resource_id)
return {'resource': resource}
@util.policy_enforce
def validate_creation(self, req, body):
"""Validate resources creation.
:param req: the request object
:param body: dict body including resources and count
:returns: True|False
"""
if not validator.is_valid_body(body):
raise exc.HTTPUnprocessableEntity()
resources = body.get('resources')
if not resources:
msg = _("'resources' is empty")
raise exc.HTTPBadRequest(explanation=msg)
count = body.get('count')
if count:
try:
validator.validate_integer(count, 'count',
consts.MIN_RESOURCE_NUM,
consts.MAX_RESOURCE_NUM)
except exception.InvalidInput as e:
raise exc.HTTPBadRequest(explanation=e.format_message())
try:
for resource in resources:
validator.validate_resource(resource)
except exception.InvalidInput as e:
raise exc.HTTPBadRequest(explanation=e.format_message())
except Exception as e:
raise exc.HTTPBadRequest(explanation=str(e))
return self.rpc_client.validate_creation(req.context, body)
def create_resource(options):
"""Resource resource factory method."""
deserializer = wsgi.JSONRequestDeserializer()
serializer = serializers.JSONResponseSerializer()
return wsgi.Resource(ResourceController(options), deserializer, serializer)

View File

@ -1,128 +0,0 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from webob import exc
from bilean.api.openstack.v1 import util
from bilean.api import validator
from bilean.common import consts
from bilean.common.i18n import _
from bilean.common import serializers
from bilean.common import utils
from bilean.common import wsgi
from bilean.rpc import client as rpc_client
class RuleData(object):
'''The data accompanying a POST/PUT request to create/update a rule.'''
def __init__(self, data):
self.data = data
def name(self):
if consts.RULE_NAME not in self.data:
raise exc.HTTPBadRequest(_("No rule name specified"))
return self.data[consts.RULE_NAME]
def spec(self):
if consts.RULE_SPEC not in self.data:
raise exc.HTTPBadRequest(_("No rule spec provided"))
return self.data[consts.RULE_SPEC]
def metadata(self):
return self.data.get(consts.RULE_METADATA)
class RuleController(object):
"""WSGI controller for Rules in Bilean v1 API
Implements the API actions
"""
# Define request scope (must match what is in policy.json)
REQUEST_SCOPE = 'rules'
def __init__(self, options):
self.options = options
self.rpc_client = rpc_client.EngineClient()
@util.policy_enforce
def index(self, req):
"""List summary information for all rules"""
filter_whitelist = {
'name': 'mixed',
'type': 'mixed',
'metadata': 'mixed',
}
param_whitelist = {
'limit': 'single',
'marker': 'single',
'sort_dir': 'single',
'sort_keys': 'multi',
'show_deleted': 'single',
}
params = util.get_allowed_params(req.params, param_whitelist)
filters = util.get_allowed_params(req.params, filter_whitelist)
key = consts.PARAM_LIMIT
if key in params:
params[key] = utils.parse_int_param(key, params[key])
key = consts.PARAM_SHOW_DELETED
if key in params:
params[key] = utils.parse_bool_param(key, params[key])
if not filters:
filters = None
rules = self.rpc_client.rule_list(req.context, filters=filters,
**params)
return {'rules': rules}
@util.policy_enforce
def get(self, req, rule_id):
"""Get detailed information for a rule"""
rule = self.rpc_client.rule_get(req.context,
rule_id)
return {'rule': rule}
@util.policy_enforce
def create(self, req, body):
"""Create a new rule"""
if not validator.is_valid_body(body):
raise exc.HTTPUnprocessableEntity()
rule_data = body.get('rule')
if rule_data is None:
raise exc.HTTPBadRequest(_("Malformed request data, missing "
"'rule' key in request body."))
data = RuleData(rule_data)
rule = self.rpc_client.rule_create(req.context,
data.name(),
data.spec(),
data.metadata())
return {'rule': rule}
@util.policy_enforce
def delete(self, req, rule_id):
"""Delete a rule with given rule_id"""
self.rpc_client.rule_delete(req.context, rule_id)
def create_resource(options):
"""Rule resource factory method."""
deserializer = wsgi.JSONRequestDeserializer()
serializer = serializers.JSONResponseSerializer()
return wsgi.Resource(RuleController(options), deserializer, serializer)

View File

@ -1,126 +0,0 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from webob import exc
from bilean.api.openstack.v1 import util
from bilean.api import validator
from bilean.common import consts
from bilean.common import exception
from bilean.common.i18n import _
from bilean.common import serializers
from bilean.common import utils
from bilean.common import wsgi
from bilean.rpc import client as rpc_client
class UserController(object):
"""WSGI controller for Users in Bilean v1 API
Implements the API actions
"""
# Define request scope (must match what is in policy.json)
REQUEST_SCOPE = 'users'
SUPPORTED_ACTIONS = (
RECHARGE, ATTACH_POLICY,
) = (
'recharge', 'attach_policy',
)
def __init__(self, options):
self.options = options
self.rpc_client = rpc_client.EngineClient()
@util.policy_enforce
def index(self, req):
filter_whitelist = {
'status': 'mixed',
}
param_whitelist = {
'limit': 'single',
'marker': 'single',
'sort_dir': 'single',
'sort_keys': 'multi',
'show_deleted': 'single',
}
params = util.get_allowed_params(req.params, param_whitelist)
filters = util.get_allowed_params(req.params, filter_whitelist)
key = consts.PARAM_LIMIT
if key in params:
params[key] = utils.parse_int_param(key, params[key])
key = consts.PARAM_SHOW_DELETED
if key in params:
params[key] = utils.parse_bool_param(key, params[key])
if not filters:
filters = None
users = self.rpc_client.user_list(req.context, filters=filters,
**params)
return {'users': users}
@util.policy_enforce
def get(self, req, user_id):
"""Get detailed information for a user"""
user = self.rpc_client.user_get(req.context, user_id)
return {'user': user}
@util.policy_enforce
def action(self, req, user_id, body=None):
"""Perform specified action on a user."""
if not validator.is_valid_body(body):
raise exc.HTTPUnprocessableEntity()
if len(body) < 1:
raise exc.HTTPBadRequest(_('No action specified'))
if len(body) > 1:
raise exc.HTTPBadRequest(_('Multiple actions specified'))
action = list(body.keys())[0]
if action not in self.SUPPORTED_ACTIONS:
msg = _("Unrecognized action '%s' specified") % action
raise exc.HTTPBadRequest(msg)
if action == self.ATTACH_POLICY:
policy = body.get(action).get('policy')
if policy is None:
raise exc.HTTPBadRequest(_("Malformed request data, no policy "
"specified to attach."))
user = self.rpc_client.user_attach_policy(
req.context, user_id, policy)
elif action == self.RECHARGE:
value = body.get(action).get('value')
if value is None:
raise exc.HTTPBadRequest(_("Malformed request data, missing "
"'value' key in request body."))
try:
validator.validate_float(value, 'recharge_value',
consts.MIN_VALUE, consts.MAX_VALUE)
except exception.InvalidInput as e:
raise exc.HTTPBadRequest(explanation=e.format_message())
user = self.rpc_client.user_recharge(req.context, user_id, value)
return {'user': user}
def create_resource(options):
"""User resource factory method."""
deserializer = wsgi.JSONRequestDeserializer()
serializer = serializers.JSONResponseSerializer()
return wsgi.Resource(UserController(options), deserializer, serializer)
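Both ``UserController.action()`` above and ``PolicyController.action()`` earlier use the same dispatch convention: the request body must contain exactly one key, and that key must be a supported action name. A standalone sketch of that convention (``ValueError`` stands in for ``HTTPBadRequest`` here, an assumption for illustration):

```python
# Single-action dispatch: the body carries exactly one supported key,
# whose value is the payload for that action.
SUPPORTED_ACTIONS = ('recharge', 'attach_policy')


def pick_action(body):
    body = body or {}
    if len(body) < 1:
        raise ValueError('No action specified')
    if len(body) > 1:
        raise ValueError('Multiple actions specified')
    action = next(iter(body))
    if action not in SUPPORTED_ACTIONS:
        raise ValueError("Unrecognized action '%s' specified" % action)
    return action, body[action]


print(pick_action({'recharge': {'value': 10.0}}))
# ('recharge', {'value': 10.0})
```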

View File

@ -1,71 +0,0 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import functools
import six
from webob import exc
from bilean.common import policy
def policy_enforce(handler):
"""Decorator that enforces policies.
Checks that the path matches the request context and enforces the policy
defined in policy.json.
This is a handler method decorator.
"""
@functools.wraps(handler)
def handle_bilean_method(controller, req, **kwargs):
rule = "%s:%s" % (controller.REQUEST_SCOPE, handler.__name__)
allowed = policy.enforce(context=req.context,
rule=rule, target={})
if not allowed:
raise exc.HTTPForbidden()
return handler(controller, req, **kwargs)
return handle_bilean_method
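The enforcement pattern above can be sketched without the bilean plumbing. In this standalone version, an ``allow()`` callable stands in for ``policy.enforce``, the scope is passed as an argument rather than read from ``controller.REQUEST_SCOPE``, and ``PermissionError`` replaces ``HTTPForbidden`` (all three substitutions are assumptions for illustration):

```python
import functools


def policy_enforce(scope, allow):
    """Build a decorator that checks '<scope>:<handler name>' before
    dispatching to the handler."""
    def decorator(handler):
        @functools.wraps(handler)
        def wrapper(req, **kwargs):
            rule = '%s:%s' % (scope, handler.__name__)
            if not allow(rule):
                raise PermissionError(rule)
            return handler(req, **kwargs)
        return wrapper
    return decorator


@policy_enforce('users', allow=lambda rule: rule == 'users:index')
def index(req):
    return {'users': []}


print(index(None))  # {'users': []}
```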
def get_allowed_params(params, whitelist):
"""Extract from ``params`` all entries listed in ``whitelist``.
The returned dict will contain an entry for a key if, and only if,
there's an entry in ``whitelist`` for that key and at least one entry in
``params``. If ``params`` contains multiple entries for the same key, it
will yield an array of values: ``{key: [v1, v2,...]}``
:param params: a NestedMultiDict from webob.Request.params
:param whitelist: a dict mapping allowed parameter names to one of
'single', 'multi' or 'mixed'
:returns: a dict with {key: value} pairs
"""
allowed_params = {}
for key, get_type in six.iteritems(whitelist):
value = None
if get_type == 'single':
value = params.get(key)
elif get_type == 'multi':
value = params.getall(key)
elif get_type == 'mixed':
value = params.getall(key)
if isinstance(value, list) and len(value) == 1:
value = value.pop()
if value:
allowed_params[key] = value
return allowed_params
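The same whitelist logic can be exercised without webob. In this sketch a plain dict of lists stands in for webob's ``NestedMultiDict`` (an assumption for illustration), so each key maps to the list of raw values a parsed query string would carry:

```python
def get_allowed_params(params, whitelist):
    """Extract whitelisted entries; 'single' keeps the first value,
    'multi' keeps the list, 'mixed' collapses one-element lists."""
    allowed = {}
    for key, get_type in whitelist.items():
        values = params.get(key, [])
        if get_type == 'single':
            value = values[0] if values else None
        elif get_type == 'multi':
            value = values
        else:  # 'mixed'
            value = values[0] if len(values) == 1 else values
        if value:
            allowed[key] = value
    return allowed


params = {'limit': ['10'], 'sort_keys': ['name', 'created_at'], 'junk': ['x']}
whitelist = {'limit': 'single', 'sort_keys': 'multi', 'status': 'mixed'}
print(get_allowed_params(params, whitelist))
# {'limit': '10', 'sort_keys': ['name', 'created_at']}
```

Note that ``junk`` is dropped because it is not whitelisted, and ``status`` is dropped because it has no values.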

View File

@ -1,54 +0,0 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Controller that returns information on the bilean API versions
"""
import json
from six.moves import http_client
import webob.dec
class Controller(object):
"""A controller that produces information on the bilean API versions."""
def __init__(self, conf):
self.conf = conf
@webob.dec.wsgify
def __call__(self, req):
"""Respond to a request for all OpenStack API versions."""
version_objs = [
{
"id": "v1.0",
"status": "CURRENT",
"links": [
{
"rel": "self",
"href": self.get_href(req)
}]
}]
body = json.dumps(dict(versions=version_objs))
response = webob.Response(request=req,
status=http_client.MULTIPLE_CHOICES,
content_type='application/json')
response.body = body
return response
def get_href(self, req):
return "%s/v1/" % req.host_url
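The version document the controller above serializes (and returns with a 300 Multiple Choices status) has this shape; the host URL below is a placeholder:

```python
import json


def version_body(host_url):
    """Build the JSON body listing the available API versions."""
    versions = [{
        'id': 'v1.0',
        'status': 'CURRENT',
        'links': [{'rel': 'self', 'href': '%s/v1/' % host_url}],
    }]
    return json.dumps({'versions': versions})


print(version_body('http://bilean.example.com:8770'))
```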

View File

@ -1,202 +0,0 @@
# Copyright 2011 Cloudscaling, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import six
from bilean.common import consts
from bilean.common import exception
from bilean.common.i18n import _
from oslo_log import log as logging
from oslo_utils import uuidutils
LOG = logging.getLogger(__name__)
def _validate_uuid_format(uid):
return uuidutils.is_uuid_like(uid)
def is_valid_body(body, entity_name=None):
if entity_name is not None:
if not (body and entity_name in body):
return False
def is_dict(d):
try:
d.get(None)
return True
except AttributeError:
return False
return is_dict(body)
def validate(args, validator):
"""Validate values of args against validators in validator.
:param args: Dict of values to be validated.
:param validator: A dict where the keys map to keys in args
and the values are validators.
Applies each validator to ``args[key]``
:returns: True if validation succeeds. Otherwise False.
A validator should be a callable which accepts 1 argument and which
returns True if the argument passes validation. False otherwise.
A validator should not raise an exception to indicate validity of the
argument.
Only validates keys which show up in both args and validator.
"""
for key in validator:
if key not in args:
continue
f = validator[key]
assert callable(f)
if not f(args[key]):
LOG.debug("%(key)s with value %(value)s failed"
" validator %(name)s",
{'key': key, 'value': args[key], 'name': f.__name__})
return False
return True
def validate_string(value, name=None, min_length=0, max_length=None,
available_fields=None):
"""Check the length of the specified string
:param value: the value of the string
:param name: the name of the string
:param min_length: the min_length of the string
:param max_length: the max_length of the string
:param available_fields: optional list of values the string must be in
"""
if not isinstance(value, six.string_types):
if name is None:
msg = _("The input is not a string or unicode")
else:
msg = _("%s is not a string or unicode") % name
raise exception.InvalidInput(message=msg)
if name is None:
name = value
if available_fields:
if value not in available_fields:
msg = _("%(name)s must be in %(fields)s") % {
'name': name, 'fields': available_fields}
raise exception.InvalidInput(message=msg)
if len(value) < min_length:
msg = _("%(name)s has a minimum character requirement of "
"%(min_length)s.") % {'name': name, 'min_length': min_length}
raise exception.InvalidInput(message=msg)
if max_length and len(value) > max_length:
msg = _("%(name)s has more than %(max_length)s "
"characters.") % {'name': name, 'max_length': max_length}
raise exception.InvalidInput(message=msg)
def validate_resource(resource):
"""Make sure that resource is valid"""
if not is_valid_body(resource):
msg = _("%s is not a dict") % resource
raise exception.InvalidInput(message=msg)
if resource.get('resource_type'):
validate_string(resource['resource_type'],
available_fields=consts.RESOURCE_TYPES)
else:
msg = _('Expected resource_type field for resource')
raise exception.InvalidInput(reason=msg)
if resource.get('value'):
validate_integer(resource['value'], 'resource_value', min_value=1)
else:
msg = _('Expected resource value field for resource')
raise exception.InvalidInput(reason=msg)
def validate_integer(value, name, min_value=None, max_value=None):
"""Make sure that value is a valid integer, potentially within range."""
try:
value = int(str(value))
except (ValueError, UnicodeEncodeError):
msg = _('%(value_name)s must be an integer')
raise exception.InvalidInput(reason=(
msg % {'value_name': name}))
if min_value is not None:
if value < min_value:
msg = _('%(value_name)s must be >= %(min_value)d')
raise exception.InvalidInput(
reason=(msg % {'value_name': name,
'min_value': min_value}))
if max_value is not None:
if value > max_value:
msg = _('%(value_name)s must be <= %(max_value)d')
raise exception.InvalidInput(
reason=(
msg % {'value_name': name,
'max_value': max_value})
)
return value
def validate_float(value, name, min_value=None, max_value=None):
"""Make sure that value is a valid float, potentially within range."""
try:
value = float(str(value))
except (ValueError, UnicodeEncodeError):
msg = _('%(value_name)s must be a float')
raise exception.InvalidInput(reason=(
msg % {'value_name': name}))
if min_value is not None:
if value < min_value:
msg = _('%(value_name)s must be >= %(min_value)s')
raise exception.InvalidInput(
reason=(msg % {'value_name': name,
'min_value': min_value}))
if max_value is not None:
if value > max_value:
msg = _('%(value_name)s must be <= %(max_value)s')
raise exception.InvalidInput(
reason=(
msg % {'value_name': name,
'max_value': max_value}))
return value
def is_none_string(val):
"""Check if a string represents a None value."""
if not isinstance(val, six.string_types):
return False
return val.lower() == 'none'
def check_isinstance(obj, cls):
"""Checks that obj is of type cls, and lets PyLint infer types."""
if isinstance(obj, cls):
return obj
raise Exception(_('Expected object of type: %s') % (str(cls)))
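The range check at the heart of ``validate_integer`` above can be reduced to a few lines; in this standalone sketch ``ValueError`` stands in for bilean's ``InvalidInput`` exception (an assumption here):

```python
def validate_integer(value, name, min_value=None, max_value=None):
    """Coerce value to int and enforce optional bounds."""
    try:
        value = int(str(value))
    except (ValueError, UnicodeEncodeError):
        raise ValueError('%s must be an integer' % name)
    if min_value is not None and value < min_value:
        raise ValueError('%s must be >= %d' % (name, min_value))
    if max_value is not None and value > max_value:
        raise ValueError('%s must be <= %d' % (name, max_value))
    return value


print(validate_integer('42', 'count', min_value=1, max_value=100))  # 42
```

``validate_float`` follows the same structure with ``float(str(value))`` in place of the ``int`` conversion.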

View File

View File

@ -1,63 +0,0 @@
#!/usr/bin/env python
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Bilean API Server.
An OpenStack REST API to Bilean
"""
import eventlet
eventlet.monkey_patch(os=False)
import sys
from bilean.common import config
from bilean.common.i18n import _LI
from bilean.common import messaging
from bilean.common import wsgi
from bilean import version
from oslo_config import cfg
import oslo_i18n as i18n
from oslo_log import log as logging
from oslo_service import systemd
import six
i18n.enable_lazy()
LOG = logging.getLogger('bilean.api')
def main():
try:
logging.register_options(cfg.CONF)
cfg.CONF(project='bilean', prog='bilean-api',
version=version.version_info.version_string())
logging.setup(cfg.CONF, 'bilean-api')
messaging.setup()
app = config.load_paste_app()
port = cfg.CONF.bilean_api.bind_port
host = cfg.CONF.bilean_api.bind_host
LOG.info(_LI('Starting Bilean REST API on %(host)s:%(port)s'),
{'host': host, 'port': port})
server = wsgi.Server('bilean-api', cfg.CONF.bilean_api)
server.start(app, default_port=port)
systemd.notify_once()
server.wait()
except RuntimeError as ex:
sys.exit("ERROR: %s" % six.text_type(ex))

View File

@ -1,45 +0,0 @@
#!/usr/bin/env python
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Bilean Engine Server.
"""
import eventlet
eventlet.monkey_patch()
from bilean.common import consts
from bilean.common import messaging
from oslo_config import cfg
from oslo_i18n import _lazy
from oslo_log import log as logging
from oslo_service import service
_lazy.enable_lazy()
def main():
logging.register_options(cfg.CONF)
cfg.CONF(project='bilean', prog='bilean-engine')
logging.setup(cfg.CONF, 'bilean-engine')
logging.set_defaults()
messaging.setup()
from bilean.engine import service as engine
srv = engine.EngineService(cfg.CONF.host, consts.ENGINE_TOPIC)
launcher = service.launch(cfg.CONF, srv,
workers=cfg.CONF.num_engine_workers)
launcher.wait()

View File

@ -1,91 +0,0 @@
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
CLI interface for bilean management.
"""
import sys
from oslo_config import cfg
from oslo_log import log as logging
from bilean.common.i18n import _
from bilean.db import api
from bilean.db import utils
from bilean import version
CONF = cfg.CONF
def do_db_version():
"""Print database's current migration level."""
print(api.db_version(api.get_engine()))
def do_db_sync():
"""Place a database under migration control and upgrade it,
creating it first if necessary.
"""
api.db_sync(api.get_engine(), CONF.command.version)
def purge_deleted():
"""Remove database records that have been previously soft deleted."""
utils.purge_deleted(CONF.command.age, CONF.command.granularity)
def add_command_parsers(subparsers):
parser = subparsers.add_parser('db_version')
parser.set_defaults(func=do_db_version)
parser = subparsers.add_parser('db_sync')
parser.set_defaults(func=do_db_sync)
parser.add_argument('version', nargs='?')
parser.add_argument('current_version', nargs='?')
parser = subparsers.add_parser('purge_deleted')
parser.set_defaults(func=purge_deleted)
parser.add_argument('age', nargs='?', default='90',
help=_('How long to preserve deleted data.'))
parser.add_argument(
'-g', '--granularity', default='days',
choices=['days', 'hours', 'minutes', 'seconds'],
help=_('Granularity to use for age argument, defaults to days.'))
command_opt = cfg.SubCommandOpt('command',
title='Commands',
help='Show available commands.',
handler=add_command_parsers)
def main():
logging.register_options(CONF)
logging.setup(CONF, 'bilean-manage')
CONF.register_cli_opt(command_opt)
try:
default_config_files = cfg.find_config_files('bilean', 'bilean-engine')
CONF(sys.argv[1:], project='bilean', prog='bilean-manage',
version=version.version_info.version_string(),
default_config_files=default_config_files)
except RuntimeError as e:
sys.exit("ERROR: %s" % e)
try:
CONF.command.func()
except Exception as e:
sys.exit("ERROR: %s" % e)
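The subcommand layout ``bilean-manage`` builds through oslo.config's ``SubCommandOpt`` maps directly onto stdlib argparse subparsers. A standalone sketch of the same three commands:

```python
import argparse

# Same command surface as add_command_parsers() above, built with plain
# argparse instead of oslo.config.
parser = argparse.ArgumentParser(prog='bilean-manage')
subparsers = parser.add_subparsers(dest='command')
subparsers.add_parser('db_version')
db_sync = subparsers.add_parser('db_sync')
db_sync.add_argument('version', nargs='?')
purge = subparsers.add_parser('purge_deleted')
purge.add_argument('age', nargs='?', default='90')
purge.add_argument('-g', '--granularity', default='days',
                   choices=['days', 'hours', 'minutes', 'seconds'])

args = parser.parse_args(['purge_deleted', '30', '-g', 'hours'])
print(args.command, args.age, args.granularity)  # purge_deleted 30 hours
```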

View File

@ -1,39 +0,0 @@
#!/usr/bin/env python
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import eventlet
eventlet.monkey_patch()
from oslo_config import cfg
from oslo_i18n import _lazy
from oslo_log import log as logging
from oslo_service import service
from bilean.common import messaging
_lazy.enable_lazy()
def main():
logging.register_options(cfg.CONF)
cfg.CONF(project='bilean', prog='bilean-notification')
logging.setup(cfg.CONF, 'bilean-notification')
logging.set_defaults()
messaging.setup()
from bilean.notification import notification
srv = notification.NotificationService()
launcher = service.launch(cfg.CONF, srv)
launcher.wait()

View File

@ -1,44 +0,0 @@
#!/usr/bin/env python
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Bilean Scheduler Server.
"""
import eventlet
eventlet.monkey_patch()
from bilean.common import consts
from bilean.common import messaging
from oslo_config import cfg
from oslo_i18n import _lazy
from oslo_log import log as logging
from oslo_service import service
_lazy.enable_lazy()
def main():
logging.register_options(cfg.CONF)
cfg.CONF(project='bilean', prog='bilean-scheduler')
logging.setup(cfg.CONF, 'bilean-scheduler')
logging.set_defaults()
messaging.setup()
from bilean.scheduler import service as scheduler
srv = scheduler.SchedulerService(cfg.CONF.host, consts.SCHEDULER_TOPIC)
launcher = service.launch(cfg.CONF, srv)
launcher.wait()

View File

@ -1,167 +0,0 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Routines for configuring Bilean."""
import logging as sys_logging
import os
import socket
from oslo_concurrency import processutils
from oslo_config import cfg
from oslo_log import log as logging
from bilean.common.i18n import _
from bilean.common import wsgi
paste_deploy_group = cfg.OptGroup('paste_deploy')
paste_deploy_opts = [
cfg.StrOpt('api_paste_config', default="api-paste.ini",
help=_("The API paste config file to use."))]
service_opts = [
cfg.IntOpt('periodic_interval',
default=60,
help=_('Seconds between running periodic tasks.')),
cfg.StrOpt('region_name_for_services',
help=_('Default region name used to get services endpoints.')),
cfg.IntOpt('max_response_size',
default=524288,
help=_('Maximum raw byte size of data from web response.')),
cfg.IntOpt('num_engine_workers',
default=processutils.get_worker_count(),
help=_('Number of bilean-engine processes to fork and run.')),
]
engine_opts = [
cfg.StrOpt('environment_dir',
default='/etc/bilean/environments',
help=_('The directory to search for environment files.')),
cfg.IntOpt('default_action_timeout',
default=3600,
help=_('Timeout in seconds for actions.')),
cfg.IntOpt('lock_retry_times',
default=50,
help=_('Number of times trying to grab a lock.')),
cfg.IntOpt('lock_retry_interval',
default=1,
help=_('Number of seconds between lock retries.')),
]
rpc_opts = [
cfg.StrOpt('host',
default=socket.gethostname(),
help=_('Name of the engine node. '
'This can be an opaque identifier. '
'It is not necessarily a hostname, FQDN, '
'or IP address.'))]
cloud_backend_opts = [
cfg.StrOpt('cloud_backend',
default='openstack',
help=_('Default cloud backend to use.'))]
authentication_group = cfg.OptGroup('authentication')
authentication_opts = [
cfg.StrOpt('auth_url', default='',
help=_('Complete public identity V3 API endpoint.')),
cfg.StrOpt('service_username', default='bilean',
help=_('Bilean service user name.')),
cfg.StrOpt('service_password', default='',
help=_('Password specified for the Bilean service user.')),
cfg.StrOpt('service_project_name', default='service',
help=_('Name of the service project.')),
cfg.StrOpt('service_user_domain', default='Default',
help=_('Name of the domain for the service user.')),
cfg.StrOpt('service_project_domain', default='Default',
help=_('Name of the domain for the service project.')),
]
client_http_log_debug_opts = [
cfg.BoolOpt('http_log_debug',
default=False,
help=_("Allow client's debug log output."))]
revision_group = cfg.OptGroup('revision')
revision_opts = [
cfg.StrOpt('bilean_api_revision', default='1.0',
help=_('Bilean API revision.')),
cfg.StrOpt('bilean_engine_revision', default='1.0',
help=_('Bilean engine revision.'))]
def list_opts():
yield None, rpc_opts
yield None, engine_opts
yield None, service_opts
yield None, cloud_backend_opts
yield paste_deploy_group.name, paste_deploy_opts
yield authentication_group.name, authentication_opts
yield revision_group.name, revision_opts
cfg.CONF.register_group(paste_deploy_group)
cfg.CONF.register_group(authentication_group)
cfg.CONF.register_group(revision_group)
for group, opts in list_opts():
cfg.CONF.register_opts(opts, group=group)
def _get_deployment_config_file():
"""Retrieves the deployment_config_file config item.
Item formatted as an absolute pathname.
"""
config_path = cfg.CONF.find_file(
cfg.CONF.paste_deploy['api_paste_config'])
if config_path is None:
return None
return os.path.abspath(config_path)
def load_paste_app(app_name=None):
"""Builds and returns a WSGI app from a paste config file.
We assume the last config file specified in the supplied ConfigOpts
object is the paste config file.
:param app_name: name of the application to load
:raises RuntimeError when config file cannot be located or application
cannot be loaded from config file
"""
if app_name is None:
app_name = cfg.CONF.prog
conf_file = _get_deployment_config_file()
if conf_file is None:
raise RuntimeError(_("Unable to locate config file [%s]") %
cfg.CONF.paste_deploy['api_paste_config'])
try:
app = wsgi.paste_deploy_app(conf_file, app_name, cfg.CONF)
# Log the options used when starting if we're in debug mode...
if cfg.CONF.debug:
cfg.CONF.log_opt_values(logging.getLogger(app_name),
sys_logging.DEBUG)
return app
except (LookupError, ImportError) as e:
raise RuntimeError(_("Unable to load %(app_name)s from "
"configuration file %(conf_file)s."
"\nGot: %(e)r") % {'app_name': app_name,
'conf_file': conf_file,
'e': e})

View File

@ -1,128 +0,0 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
MIN_VALUE = 1
MAX_VALUE = 100000000
MIN_RESOURCE_NUM = 1
MAX_RESOURCE_NUM = 1000
RPC_ATTRS = (
ENGINE_TOPIC,
SCHEDULER_TOPIC,
ENGINE_DISPATCHER_TOPIC,
RPC_API_VERSION,
) = (
'bilean-engine',
'bilean-scheduler',
'bilean_engine_dispatcher',
'1.0',
)
USER_STATUSES = (
USER_INIT, USER_FREE, USER_ACTIVE, USER_WARNING, USER_FREEZE,
) = (
'INIT', 'FREE', 'ACTIVE', 'WARNING', 'FREEZE',
)
ACTION_NAMES = (
USER_CREATE_RESOURCE, USER_UPDATE_RESOURCE, USER_DELETE_RESOURCE,
USER_SETTLE_ACCOUNT,
) = (
'USER_CREATE_RESOURCE', 'USER_UPDATE_RESOURCE', 'USER_DELETE_RESOURCE',
'USER_SETTLE_ACCOUNT',
)
ACTION_STATUSES = (
ACTION_INIT, ACTION_WAITING, ACTION_READY, ACTION_RUNNING,
ACTION_SUCCEEDED, ACTION_FAILED, ACTION_CANCELLED
) = (
'INIT', 'WAITING', 'READY', 'RUNNING',
'SUCCEEDED', 'FAILED', 'CANCELLED',
)
RPC_PARAMS = (
PARAM_SHOW_DELETED, PARAM_SHOW_NESTED, PARAM_LIMIT, PARAM_MARKER,
PARAM_GLOBAL_PROJECT, PARAM_SHOW_DETAILS,
PARAM_SORT_DIR, PARAM_SORT_KEYS, PARAM_SUMMARY,
) = (
'show_deleted', 'show_nested', 'limit', 'marker',
'global_project', 'show_details',
'sort_dir', 'sort_keys', 'summary',
)
USER_KEYS = (
USER_ID, USER_NAME, USER_POLICY_ID, USER_BALANCE, USER_RATE, USER_CREDIT,
USER_LAST_BILL, USER_STATUS, USER_STATUS_REASON, USER_CREATED_AT,
USER_UPDATED_AT, USER_DELETED_AT,
) = (
'id', 'name', 'policy_id', 'balance', 'rate', 'credit',
'last_bill', 'status', 'status_reason', 'created_at',
'updated_at', 'deleted_at',
)
RESOURCE_KEYS = (
RES_ID, RES_USER_ID, RES_RULE_ID, RES_RESOURCE_TYPE, RES_PROPERTIES,
RES_RATE, RES_LAST_BILL, RES_CREATED_AT, RES_UPDATED_AT, RES_DELETED_AT,
) = (
'id', 'user_id', 'rule_id', 'resource_type', 'properties',
'rate', 'last_bill', 'created_at', 'updated_at', 'deleted_at',
)
RULE_KEYS = (
RULE_ID, RULE_NAME, RULE_TYPE, RULE_SPEC, RULE_METADATA,
RULE_UPDATED_AT, RULE_CREATED_AT, RULE_DELETED_AT,
) = (
'id', 'name', 'type', 'spec', 'metadata',
'updated_at', 'created_at', 'deleted_at',
)
EVENT_KEYS = (
EVENT_ID, EVENT_TIMESTAMP, EVENT_OBJ_ID, EVENT_OBJ_TYPE, EVENT_ACTION,
EVENT_USER_ID, EVENT_LEVEL, EVENT_STATUS, EVENT_STATUS_REASON,
EVENT_METADATA,
) = (
'id', 'timestamp', 'obj_id', 'obj_type', 'action',
'user_id', 'level', 'status', 'status_reason', 'metadata',
)
POLICY_KEYS = (
POLICY_ID, POLICY_NAME, POLICY_IS_DEFAULT, POLICY_RULES, POLICY_METADATA,
POLICY_CREATED_AT, POLICY_UPDATED_AT, POLICY_DELETED_AT,
) = (
'id', 'name', 'is_default', 'rules', 'metadata',
'created_at', 'updated_at', 'deleted_at',
)
CONSUMPTION_KEYS = (
CONSUMPTION_ID, CONSUMPTION_USER_ID, CONSUMPTION_RESOURCE_ID,
CONSUMPTION_RESOURCE_TYPE, CONSUMPTION_START_TIME, CONSUMPTION_END_TIME,
CONSUMPTION_RATE, CONSUMPTION_COST, CONSUMPTION_METADATA,
) = (
'id', 'user_id', 'resource_id',
'resource_type', 'start_time', 'end_time',
'rate', 'cost', 'metadata',
)
RECHARGE_KEYS = (
RECHARGE_ID, RECHARGE_USER_ID, RECHARGE_TYPE, RECHARGE_TIMESTAMP,
RECHARGE_METADATA,
) = (
'id', 'user_id', 'type', 'timestamp', 'metadata',
)
RECHARGE_TYPES = (
SELF_RECHARGE, SYSTEM_BONUS,
) = (
'Recharge', 'System bonus',
)
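All of the constant groups in this module use the same chained-assignment idiom; a minimal sketch of what it does:

```python
# One chained assignment binds the full tuple to STATUSES *and* a named
# constant to each member, so symbolic names and membership checks come
# from a single definition.
STATUSES = (
    INIT, RUNNING, SUCCEEDED,
) = (
    'INIT', 'RUNNING', 'SUCCEEDED',
)

assert RUNNING == 'RUNNING'
assert RUNNING in STATUSES
assert STATUSES == ('INIT', 'RUNNING', 'SUCCEEDED')
```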

View File

@ -1,130 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_context import context as base_context
from oslo_utils import encodeutils
from bilean.common import policy
from bilean.db import api as db_api
from bilean.drivers import base as driver_base
class RequestContext(base_context.RequestContext):
'''Stores information about the security context.
The context encapsulates information related to the user accessing
the system, as well as additional request information.
'''
def __init__(self, auth_token=None, user=None, project=None,
domain=None, user_domain=None, project_domain=None,
is_admin=None, read_only=False, show_deleted=False,
request_id=None, auth_url=None, trusts=None,
user_name=None, project_name=None, domain_name=None,
user_domain_name=None, project_domain_name=None,
auth_token_info=None, region_name=None, roles=None,
password=None, **kwargs):
'''Initializer of request context.'''
# We still have 'tenant' param because oslo_context still use it.
super(RequestContext, self).__init__(
auth_token=auth_token, user=user, tenant=project,
domain=domain, user_domain=user_domain,
project_domain=project_domain,
read_only=read_only, show_deleted=show_deleted,
request_id=request_id)
# request_id might be a byte array
self.request_id = encodeutils.safe_decode(self.request_id)
# we save an additional 'project' internally for use
self.project = project
# Session for DB access
self._session = None
self.auth_url = auth_url
self.trusts = trusts
self.user_name = user_name
self.project_name = project_name
self.domain_name = domain_name
self.user_domain_name = user_domain_name
self.project_domain_name = project_domain_name
self.auth_token_info = auth_token_info
self.region_name = region_name
self.roles = roles or []
self.password = password
# Check user is admin or not
if is_admin is None:
self.is_admin = policy.enforce(self, 'context_is_admin',
target={'project': self.project},
do_raise=False)
else:
self.is_admin = is_admin
@property
def session(self):
if self._session is None:
self._session = db_api.get_session()
return self._session
def to_dict(self):
return {
'auth_url': self.auth_url,
'auth_token': self.auth_token,
'auth_token_info': self.auth_token_info,
'user': self.user,
'user_name': self.user_name,
'user_domain': self.user_domain,
'user_domain_name': self.user_domain_name,
'project': self.project,
'project_name': self.project_name,
'project_domain': self.project_domain,
'project_domain_name': self.project_domain_name,
'domain': self.domain,
'domain_name': self.domain_name,
'trusts': self.trusts,
'region_name': self.region_name,
'roles': self.roles,
'show_deleted': self.show_deleted,
'is_admin': self.is_admin,
'request_id': self.request_id,
'password': self.password,
}
@classmethod
def from_dict(cls, values):
return cls(**values)
def get_service_context(set_project_id=False, **kwargs):
'''An abstraction layer for getting service credential.
There could be multiple cloud backends for bilean to use. This
abstraction layer provides an indirection for bilean to get the
credentials of 'bilean' user on the specific cloud. By default,
this credential refers to the credentials built for keystone middleware
in an OpenStack cloud.
'''
identity_service = driver_base.BileanDriver().identity
service_creds = identity_service.get_service_credentials(**kwargs)
if set_project_id:
project = identity_service().conn.session.get_project_id()
service_creds.update(project=project)
return RequestContext(is_admin=True, **service_creds)
def get_admin_context(show_deleted=False):
return RequestContext(is_admin=True, show_deleted=show_deleted)
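A stripped-down sketch of the `to_dict()`/`from_dict()` round trip that `RequestContext` relies on when a context is serialized onto the RPC bus and rebuilt on the receiving side (the `MiniContext` class is hypothetical and drops the oslo dependencies):

```python
# A hypothetical, oslo-free stand-in for RequestContext showing the
# to_dict()/from_dict() round trip used to ship a context over RPC.
class MiniContext(object):
    def __init__(self, user=None, project=None, is_admin=False, roles=None):
        self.user = user
        self.project = project
        self.is_admin = is_admin
        self.roles = roles or []

    def to_dict(self):
        return {'user': self.user, 'project': self.project,
                'is_admin': self.is_admin, 'roles': self.roles}

    @classmethod
    def from_dict(cls, values):
        # Mirrors RequestContext.from_dict(): rebuild from keyword args.
        return cls(**values)


ctx = MiniContext(user='u-1', project='p-1', roles=['admin'])
clone = MiniContext.from_dict(ctx.to_dict())
assert clone.to_dict() == ctx.to_dict()
```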

View File

@ -1,234 +0,0 @@
#
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
'''
Bilean exception subclasses.
'''
import sys
from oslo_log import log as logging
import six
from bilean.common.i18n import _, _LE
_FATAL_EXCEPTION_FORMAT_ERRORS = False
LOG = logging.getLogger(__name__)
class BileanException(Exception):
'''Base Bilean Exception.
To correctly use this class, inherit from it and define
a 'msg_fmt' property. That msg_fmt will get printf'd
with the keyword arguments provided to the constructor.
'''
message = _("An unknown exception occurred.")
def __init__(self, **kwargs):
self.kwargs = kwargs
try:
self.message = self.msg_fmt % kwargs
except KeyError:
# exc_info = sys.exc_info()
# if kwargs doesn't match a variable in the message
# log the issue and the kwargs
LOG.exception(_LE('Exception in string format operation'))
for name, value in six.iteritems(kwargs):
LOG.error("%s: %s" % (name, value)) # noqa
if _FATAL_EXCEPTION_FORMAT_ERRORS:
raise
# raise exc_info[0], exc_info[1], exc_info[2]
def __str__(self):
return six.text_type(self.message)
def __unicode__(self):
return six.text_type(self.message)
def __deepcopy__(self, memo):
return self.__class__(**self.kwargs)
class SIGHUPInterrupt(BileanException):
msg_fmt = _("System SIGHUP signal received.")
class NotAuthenticated(BileanException):
msg_fmt = _("You are not authenticated.")
class Forbidden(BileanException):
msg_fmt = _("You are not authorized to complete this action.")
class BileanBadRequest(BileanException):
msg_fmt = _("The request is malformed: %(msg)s")
class MultipleChoices(BileanException):
msg_fmt = _("Multiple results found matching the query criteria %(arg)s. "
"Please be more specific.")
class InvalidInput(BileanException):
msg_fmt = _("Invalid value '%(value)s' specified for '%(name)s'")
class InvalidParameter(BileanException):
msg_fmt = _("Invalid value '%(value)s' specified for '%(name)s'")
class PluginTypeNotFound(BileanException):
msg_fmt = _("Plugin type (%(plugin_type)s) is not found.")
class RuleTypeNotFound(BileanException):
msg_fmt = _("Rule type (%(rule_type)s) is not found.")
class RuleTypeNotMatch(BileanException):
msg_fmt = _("%(message)s")
class RuleNotFound(BileanException):
msg_fmt = _("The rule (%(rule)s) could not be found.")
class RuleNotSpecified(BileanException):
msg_fmt = _("Rule not specified.")
class ActionNotFound(BileanException):
msg_fmt = _("The action (%(action)s) could not be found.")
class PolicyNotFound(BileanException):
msg_fmt = _("The policy (%(policy)s) could not be found.")
class MultipleDefaultPolicy(BileanException):
msg_fmt = _("More than one default policy was found.")
class UserNotFound(BileanException):
msg_fmt = _("The user (%(user)s) could not be found.")
class InvalidSchemaError(BileanException):
msg_fmt = _("%(message)s")
class SpecValidationFailed(BileanException):
msg_fmt = _("%(message)s")
class FeatureNotSupported(BileanException):
msg_fmt = _("%(feature)s is not supported.")
class Error(BileanException):
msg_fmt = "%(message)s"
def __init__(self, msg):
super(Error, self).__init__(message=msg)
class ResourceInUse(BileanException):
msg_fmt = _("The %(resource_type)s (%(resource_id)s) is still in use.")
class InvalidContentType(BileanException):
msg_fmt = _("Invalid content type %(content_type)s")
class RequestLimitExceeded(BileanException):
msg_fmt = _('Request limit exceeded: %(message)s')
class EventNotFound(BileanException):
msg_fmt = _("The event (%(event)s) could not be found.")
class ConsumptionNotFound(BileanException):
msg_fmt = _("The consumption (%(consumption)s) could not be found.")
class InvalidResource(BileanException):
msg_fmt = _("%(msg)s")
class InternalError(BileanException):
'''A base class for internal exceptions in bilean.
The internal exception classes which inherit from :class:`InternalError`
class should be translated to a user facing exception type if need to be
made user visible.
'''
msg_fmt = _('Error %(code)s occurred: %(message)s')
message = _('An internal error occurred.')
def __init__(self, **kwargs):
super(InternalError, self).__init__(**kwargs)
if 'code' in kwargs.keys():
self.code = kwargs.get('code', 500)
self.message = kwargs.get('message')
class ResourceBusyError(InternalError):
msg_fmt = _("The %(resource_type)s (%(resource_id)s) is busy now.")
class TrustNotFound(InternalError):
# Internal exception, not to be exposed to end user.
msg_fmt = _("The trust for trustor (%(trustor)s) could not be found.")
class ResourceDeletionFailure(InternalError):
# Used when deleting resources from other services
msg_fmt = _("Failed in deleting %(resource)s.")
class ResourceNotFound(InternalError):
msg_fmt = _("The resource (%(resource)s) could not be found.")
class ResourceStatusError(InternalError):
msg_fmt = _("The resource %(resource_id)s is in error status "
"- '%(status)s' due to '%(reason)s'.")
class InvalidPlugin(InternalError):
msg_fmt = _("%(message)s")
class InvalidSpec(InternalError):
msg_fmt = _("%(message)s")
class HTTPExceptionDisguise(Exception):
"""Disguises HTTP exceptions.
The purpose is to let them be handled by the webob fault application
in the wsgi pipeline.
"""
def __init__(self, exception):
self.exc = exception
self.tb = sys.exc_info()[2]
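The `msg_fmt` pattern that `BileanException` subclasses rely on can be sketched without any oslo machinery; the `Fake*` class names below are hypothetical:

```python
# Hypothetical Fake* classes sketching the msg_fmt pattern above:
# subclasses declare only a format string and the base constructor
# interpolates the keyword arguments into it.
class FakeBileanException(Exception):
    msg_fmt = "An unknown exception occurred."

    def __init__(self, **kwargs):
        self.kwargs = kwargs
        try:
            self.message = self.msg_fmt % kwargs
        except KeyError:
            # Mirror the base class: fall back rather than crash on a
            # format/kwargs mismatch.
            self.message = self.msg_fmt
        super(FakeBileanException, self).__init__(self.message)


class FakeRuleNotFound(FakeBileanException):
    msg_fmt = "The rule (%(rule)s) could not be found."


err = FakeRuleNotFound(rule='r-123')
assert str(err) == 'The rule (r-123) could not be found.'
```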

View File

@ -1,35 +0,0 @@
# Copyright 2014 IBM Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# It's based on oslo.i18n usage in OpenStack Keystone project and
# recommendations from http://docs.openstack.org/developer/oslo.i18n/usage.html
import oslo_i18n
_translators = oslo_i18n.TranslatorFactory(domain='bilean')
# The primary translation function using the well-known name "_"
_ = _translators.primary
# Translators for log levels.
#
# The abbreviated names are meant to reflect the usual use of a short
# name like '_'. The "L" is for "log" and the other letter comes from
# the level.
_LI = _translators.log_info
_LW = _translators.log_warning
_LE = _translators.log_error
_LC = _translators.log_critical

View File

@ -1,138 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import eventlet
from oslo_config import cfg
import oslo_messaging
from oslo_serialization import jsonutils
from bilean.common import context
NOTIFIER = None
TRANSPORTS = {}
TRANSPORT = None
DEFAULT_URL = "__default__"
class RequestContextSerializer(oslo_messaging.Serializer):
def __init__(self, base):
self._base = base
def serialize_entity(self, ctxt, entity):
if not self._base:
return entity
return self._base.serialize_entity(ctxt, entity)
def deserialize_entity(self, ctxt, entity):
if not self._base:
return entity
return self._base.deserialize_entity(ctxt, entity)
@staticmethod
def serialize_context(ctxt):
return ctxt.to_dict()
@staticmethod
def deserialize_context(ctxt):
return context.RequestContext.from_dict(ctxt)
class JsonPayloadSerializer(oslo_messaging.NoOpSerializer):
@classmethod
def serialize_entity(cls, context, entity):
return jsonutils.to_primitive(entity, convert_instances=True)
def setup(url=None, optional=False):
"""Initialise the oslo_messaging layer."""
global TRANSPORT, NOTIFIER
if url and url.startswith("fake://"):
# NOTE(sileht): oslo_messaging fake driver uses time.sleep
# for task switch, so we need to monkey_patch it
eventlet.monkey_patch(time=True)
if not TRANSPORT:
oslo_messaging.set_transport_defaults('bilean')
exmods = ['bilean.common.exception']
try:
TRANSPORT = oslo_messaging.get_transport(
cfg.CONF, url, allowed_remote_exmods=exmods)
except oslo_messaging.InvalidTransportURL as e:
TRANSPORT = None
if not optional or e.url:
# NOTE(sileht): oslo_messaging is configured but unloadable
# so reraise the exception
raise
if not NOTIFIER and TRANSPORT:
serializer = RequestContextSerializer(JsonPayloadSerializer())
NOTIFIER = oslo_messaging.Notifier(TRANSPORT, serializer=serializer)
def cleanup():
"""Cleanup the oslo_messaging layer."""
global TRANSPORT, TRANSPORTS, NOTIFIER
for url in TRANSPORTS:
TRANSPORTS[url].cleanup()
del TRANSPORTS[url]
TRANSPORT = NOTIFIER = None
def get_transport(url=None, optional=False, cache=True):
"""Initialise the oslo_messaging layer."""
global TRANSPORTS, DEFAULT_URL
cache_key = url or DEFAULT_URL
transport = TRANSPORTS.get(cache_key)
if not transport or not cache:
try:
transport = oslo_messaging.get_transport(cfg.CONF, url)
except oslo_messaging.InvalidTransportURL as e:
if not optional or e.url:
# NOTE(sileht): oslo_messaging is configured but unloadable
# so reraise the exception
raise
return None
else:
if cache:
TRANSPORTS[cache_key] = transport
return transport
def get_rpc_server(target, endpoint):
"""Return a configured oslo_messaging rpc server."""
serializer = RequestContextSerializer(JsonPayloadSerializer())
return oslo_messaging.get_rpc_server(TRANSPORT, target, [endpoint],
executor='eventlet',
serializer=serializer)
def get_rpc_client(**kwargs):
"""Return a configured oslo_messaging RPCClient."""
target = oslo_messaging.Target(**kwargs)
serializer = RequestContextSerializer(JsonPayloadSerializer())
return oslo_messaging.RPCClient(TRANSPORT, target,
serializer=serializer)
def get_notification_listener(transport, targets, endpoints,
allow_requeue=False):
"""Return a configured oslo_messaging notification listener."""
return oslo_messaging.get_notification_listener(
transport, targets, endpoints, executor='eventlet',
allow_requeue=allow_requeue)
def get_notifier(publisher_id):
"""Return a configured oslo_messaging notifier."""
return NOTIFIER.prepare(publisher_id=publisher_id)
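A dependency-free sketch of the serializer chain assembled above: the request-context serializer wraps a payload serializer and delegates entity (de)serialization to it when one is configured (the class names below are stand-ins, not the oslo.messaging API):

```python
# Stand-in classes (not the oslo.messaging API) sketching the serializer
# chain above: the context serializer delegates entity handling to a
# wrapped payload serializer when one is present.
class JsonishPayloadSerializer(object):
    def serialize_entity(self, ctxt, entity):
        # Stand-in for jsonutils.to_primitive().
        return dict(entity)


class ContextSerializer(object):
    def __init__(self, base=None):
        self._base = base

    def serialize_entity(self, ctxt, entity):
        if not self._base:
            return entity
        return self._base.serialize_entity(ctxt, entity)


chained = ContextSerializer(JsonishPayloadSerializer())
assert chained.serialize_entity({}, {'id': 'r-1'}) == {'id': 'r-1'}
# With no base serializer, entities pass through untouched.
assert ContextSerializer().serialize_entity({}, ('a',)) == ('a',)
```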

View File

@ -1,47 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Policy Engine For Bilean
"""
from oslo_config import cfg
from oslo_policy import policy
from bilean.common import exception
POLICY_ENFORCER = None
CONF = cfg.CONF
def _get_enforcer(policy_file=None, rules=None, default_rule=None):
global POLICY_ENFORCER
if POLICY_ENFORCER is None:
POLICY_ENFORCER = policy.Enforcer(CONF,
policy_file=policy_file,
rules=rules,
default_rule=default_rule)
return POLICY_ENFORCER
def enforce(context, rule, target, do_raise=True, *args, **kwargs):
enforcer = _get_enforcer()
credentials = context.to_dict()
target = target or {}
if do_raise:
kwargs.update(exc=exception.Forbidden)
return enforcer.enforce(rule, target, credentials, do_raise,
*args, **kwargs)
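`_get_enforcer()` follows the common lazy module-level singleton pattern: build the enforcer on first use, then return the cached instance. A minimal stand-in (no oslo.policy involved; `factory` is a hypothetical parameter):

```python
# A minimal stand-in for the lazy singleton in _get_enforcer(): the
# first call constructs the object, later calls return the cached one.
_ENFORCER = None


def get_enforcer(factory=dict):
    global _ENFORCER
    if _ENFORCER is None:
        _ENFORCER = factory()
    return _ENFORCER


first = get_enforcer()
second = get_enforcer()
assert first is second
```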

View File

@ -1,427 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import collections
import numbers
import six
from oslo_utils import strutils
from bilean.common import exception
from bilean.common.i18n import _
class AnyIndexDict(collections.Mapping):
'''Convenience schema for a list.'''
def __init__(self, value):
self.value = value
def __getitem__(self, key):
if key != '*' and not isinstance(key, six.integer_types):
raise KeyError(_('Invalid key %s') % str(key))
return self.value
def __iter__(self):
yield '*'
def __len__(self):
return 1
class Schema(collections.Mapping):
'''Class for validating rule specifications.'''
KEYS = (
TYPE, DESCRIPTION, DEFAULT, REQUIRED, SCHEMA, UPDATABLE,
CONSTRAINTS, READONLY,
) = (
'type', 'description', 'default', 'required', 'schema', 'updatable',
'constraints', 'readonly',
)
TYPES = (
INTEGER, STRING, NUMBER, BOOLEAN, MAP, LIST,
) = (
'Integer', 'String', 'Number', 'Boolean', 'Map', 'List',
)
def __init__(self, description=None, default=None,
required=False, schema=None, updatable=False,
readonly=False, constraints=None):
if schema is not None:
if type(self) not in (List, Map):
msg = _('Schema valid only for List or Map, not '
'"%s"') % self[self.TYPE]
raise exception.InvalidSchemaError(message=msg)
if self[self.TYPE] == self.LIST:
self.schema = AnyIndexDict(schema)
else:
self.schema = schema
self.description = description
self.default = default
self.required = required
self.updatable = updatable
self.constraints = constraints or []
self.readonly = readonly
self._len = None
def has_default(self):
return self.default is not None
def get_default(self):
return self.resolve(self.default)
def _validate_default(self, context):
if self.default is None:
return
try:
self.validate(self.default, context)
except (ValueError, TypeError) as exc:
raise exception.InvalidSchemaError(
message=_('Invalid default %(default)s (%(exc)s)') %
dict(default=self.default, exc=exc))
def validate(self, context=None):
'''Validates the schema.
This method checks if the schema itself is valid.
'''
self._validate_default(context)
# validate nested schema: List or Map
if self.schema:
if isinstance(self.schema, AnyIndexDict):
self.schema.value.validate(context)
else:
for nested_schema in self.schema.values():
nested_schema.validate(context)
def validate_constraints(self, value, context=None, skipped=None):
if not skipped:
skipped = []
try:
for constraint in self.constraints:
if type(constraint) not in skipped:
constraint.validate(value, context)
except ValueError as ex:
raise exception.SpecValidationFailed(message=six.text_type(ex))
def __getitem__(self, key):
if key == self.DESCRIPTION:
if self.description is not None:
return self.description
elif key == self.DEFAULT:
if self.default is not None:
return self.default
elif key == self.SCHEMA:
if self.schema is not None:
return dict((n, dict(s)) for n, s in self.schema.items())
elif key == self.REQUIRED:
return self.required
elif key == self.READONLY:
return self.readonly
elif key == self.CONSTRAINTS:
if self.constraints:
return [dict(c) for c in self.constraints]
raise KeyError(key)
def __iter__(self):
for k in self.KEYS:
try:
self[k]
except KeyError:
pass
else:
yield k
def __len__(self):
if self._len is None:
self._len = len(list(iter(self)))
return self._len
class Boolean(Schema):
def __getitem__(self, key):
if key == self.TYPE:
return self.BOOLEAN
else:
return super(Boolean, self).__getitem__(key)
def to_schema_type(self, value):
return strutils.bool_from_string(str(value), strict=True)
def resolve(self, value):
if str(value).lower() not in ('true', 'false'):
msg = _('The value "%s" is not a valid Boolean') % value
raise exception.SpecValidationFailed(message=msg)
return strutils.bool_from_string(value, strict=True)
def validate(self, value, context=None):
if isinstance(value, bool):
return
self.resolve(value)
class Integer(Schema):
def __getitem__(self, key):
if key == self.TYPE:
return self.INTEGER
else:
return super(Integer, self).__getitem__(key)
def to_schema_type(self, value):
if isinstance(value, six.integer_types):
return value
try:
num = int(value)
except ValueError:
raise ValueError(_('%s is not an integer.') % value)
return num
def resolve(self, value):
try:
return int(value)
except (TypeError, ValueError):
msg = _('The value "%s" cannot be converted into an '
'integer.') % value
raise exception.SpecValidationFailed(message=msg)
def validate(self, value, context=None):
if not isinstance(value, six.integer_types):
value = self.resolve(value)
self.validate_constraints(value, context)
class String(Schema):
def __getitem__(self, key):
if key == self.TYPE:
return self.STRING
else:
return super(String, self).__getitem__(key)
def to_schema_type(self, value):
return str(value)
    def resolve(self, value):
        return str(value)
def validate(self, value, context=None):
if not isinstance(value, six.string_types):
msg = _('The value "%s" cannot be converted into a '
'string.') % value
raise exception.SpecValidationFailed(message=msg)
self.resolve(value)
self.validate_constraints(value, self, context)
class Number(Schema):
def __getitem__(self, key):
if key == self.TYPE:
return self.NUMBER
else:
return super(Number, self).__getitem__(key)
def to_schema_type(self, value):
if isinstance(value, numbers.Number):
return value
try:
return int(value)
except ValueError:
return float(value)
def resolve(self, value):
if isinstance(value, numbers.Number):
return value
try:
return int(value)
except ValueError:
return float(value)
    def validate(self, value, context=None):
        if isinstance(value, numbers.Number):
            return
        self.resolve(value)
        self.validate_constraints(value, self, context)
class List(Schema):
def __getitem__(self, key):
if key == self.TYPE:
return self.LIST
else:
return super(List, self).__getitem__(key)
def _get_children(self, values, keys, context):
sub_schema = self.schema
if sub_schema is not None:
# We have a child schema specified for list elements
# Fake a dict of array elements, since we have only one schema
schema_arr = dict((k, sub_schema[k]) for k in keys)
subspec = Spec(schema_arr, dict(values))
subspec.validate()
return ((k, subspec[k]) for k in keys)
else:
return values
def get_default(self):
if not isinstance(self.default, collections.Sequence):
raise TypeError(_('"%s" is not a List') % self.default)
return self.default
def resolve(self, value, context=None):
if not isinstance(value, collections.Sequence):
raise TypeError(_('"%s" is not a List') % value)
return [v[1] for v in self._get_children(enumerate(value),
list(range(len(value))),
context)]
    def validate(self, value, context=None):
        if not isinstance(value, collections.Sequence):
            raise TypeError(_('"%s" is not a List') % value)
        # Element validation happens via resolve(), which runs each item
        # through the child schema.
        self.resolve(value, context)
class Map(Schema):
def __getitem__(self, key):
if key == self.TYPE:
return self.MAP
else:
return super(Map, self).__getitem__(key)
    def _get_children(self, values, context=None):
        # There are cases where the Map is not specified down to the most
        # detailed level; we treat those as valid specs as well.
        if self.schema is None:
            return values
        # self.schema should be a dict of key -> Schema here
        subspec = Spec(self.schema, dict(values))
        subspec.validate()
        return ((k, subspec[k]) for k in self.schema)
def get_default(self):
if not isinstance(self.default, collections.Mapping):
raise TypeError(_('"%s" is not a Map') % self.default)
return self.default
def resolve(self, value, context=None):
if not isinstance(value, collections.Mapping):
raise TypeError(_('"%s" is not a Map') % value)
return dict(self._get_children(six.iteritems(value), context))
def validate(self, value, context=None):
if not isinstance(value, collections.Mapping):
raise TypeError(_('"%s" is not a Map') % value)
for key, child in self.schema.items():
item_value = value.get(key)
child.validate(item_value, context)
class Spec(collections.Mapping):
'''A class that contains all spec items.'''
def __init__(self, schema, data):
self._schema = schema
self._data = data
def validate(self):
'''Validate the schema.'''
for (k, s) in self._schema.items():
try:
# validate through resolve
self.resolve_value(k)
except (TypeError, ValueError) as err:
msg = _('Spec validation error (%(key)s): %(err)s') % dict(
key=k, err=six.text_type(err))
raise exception.SpecValidationFailed(message=msg)
for key in self._data:
if key not in self._schema:
msg = _('Unrecognizable spec item "%s"') % key
raise exception.SpecValidationFailed(message=msg)
def resolve_value(self, key):
if key not in self:
raise KeyError(_('Invalid spec item: "%s"') % key)
schema_item = self._schema[key]
if key in self._data:
raw_value = self._data[key]
return schema_item.resolve(raw_value)
elif schema_item.has_default():
return schema_item.get_default()
elif schema_item.required:
raise ValueError(_('Required spec item "%s" not assigned') % key)
def __getitem__(self, key):
'''Lazy evaluation for spec items.'''
return self.resolve_value(key)
def __len__(self):
        '''Number of items in the spec.
        A spec always contains all keys, though some may not be specified.
        '''
return len(self._schema)
def __contains__(self, key):
return key in self._schema
def __iter__(self):
return iter(self._schema)
def get_spec_version(spec):
if not isinstance(spec, dict):
msg = _('The provided spec is not a map.')
raise exception.SpecValidationFailed(message=msg)
if 'type' not in spec:
msg = _("The 'type' key is missing from the provided spec map.")
raise exception.SpecValidationFailed(message=msg)
if 'version' not in spec:
msg = _("The 'version' key is missing from the provided spec map.")
raise exception.SpecValidationFailed(message=msg)
return (spec['type'], spec['version'])
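Spec resolves items lazily through `__getitem__` and falls back to schema defaults. The pattern can be sketched with a stripped-down, stdlib-only stand-in (`MiniSpec` and its dict-based schema entries are illustrative, not bilean's API):

```python
from collections.abc import Mapping

class MiniSpec(Mapping):
    """Stripped-down illustration of Spec's lazy resolution: values come
    from the user data first, then from schema defaults, and a required
    key that is missing raises an error."""

    def __init__(self, schema, data):
        # schema: key -> {'default': ..., 'required': bool}
        self._schema = schema
        self._data = data

    def __getitem__(self, key):
        if key not in self._schema:
            raise KeyError('Invalid spec item: "%s"' % key)
        item = self._schema[key]
        if key in self._data:
            return self._data[key]
        if 'default' in item:
            return item['default']
        if item.get('required'):
            raise ValueError('Required spec item "%s" not assigned' % key)
        return None

    def __len__(self):
        return len(self._schema)

    def __iter__(self):
        return iter(self._schema)

schema = {'flavor': {'required': True}, 'count': {'default': 1}}
spec = MiniSpec(schema, {'flavor': 'm1.small'})
```

Deriving from `Mapping` gives `__contains__`, `keys()` and friends for free, which is the same reason Spec subclasses `collections.Mapping` above.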

View File

@@ -1,41 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Utility methods for serializing responses
"""
import datetime
from oslo_log import log as logging
from oslo_serialization import jsonutils
from oslo_utils import encodeutils
import six
LOG = logging.getLogger(__name__)
class JSONResponseSerializer(object):
def to_json(self, data):
def sanitizer(obj):
if isinstance(obj, datetime.datetime):
return obj.isoformat()
return six.text_type(obj)
response = jsonutils.dumps(data, default=sanitizer, sort_keys=True)
        LOG.debug("JSON response: %s", response)
return response
def default(self, response, result):
response.content_type = 'application/json'
response.body = encodeutils.safe_encode(self.to_json(result))

View File

@@ -1,198 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
'''
Utilities module.
'''
import datetime
import decimal
import random
import six
import string
from cryptography import fernet
import requests
from requests import exceptions
from six.moves import urllib
from oslo_config import cfg
from oslo_log import log as logging
from oslo_utils import encodeutils
from oslo_utils import strutils
from oslo_utils import timeutils
from bilean.common import exception
from bilean.common.i18n import _, _LI
cfg.CONF.import_opt('max_response_size', 'bilean.common.config')
LOG = logging.getLogger(__name__)
class URLFetchError(exception.Error, IOError):
pass
def parse_int_param(name, value, allow_zero=True, allow_negative=False,
lower_limit=None, upper_limit=None):
if value is None:
return None
if value in ('0', 0):
if allow_zero:
return int(value)
raise exception.InvalidParameter(name=name, value=value)
try:
result = int(value)
except (TypeError, ValueError):
raise exception.InvalidParameter(name=name, value=value)
else:
if any([(allow_negative is False and result < 0),
(lower_limit and result < lower_limit),
(upper_limit and result > upper_limit)]):
raise exception.InvalidParameter(name=name, value=value)
return result
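The combined bounds check can be exercised standalone. In this sketch, `ValueError` stands in for bilean's `InvalidParameter` exception, and the limit comparisons use explicit `is not None` checks so that a limit of 0 is still enforced (the code above would skip a falsy `lower_limit`):

```python
def parse_int_param(name, value, allow_zero=True, allow_negative=False,
                    lower_limit=None, upper_limit=None):
    # ValueError stands in for bilean.common.exception.InvalidParameter.
    if value is None:
        return None
    if value in ('0', 0):
        if allow_zero:
            return int(value)
        raise ValueError('invalid %s: %s' % (name, value))
    try:
        result = int(value)
    except (TypeError, ValueError):
        raise ValueError('invalid %s: %s' % (name, value))
    # Explicit "is not None" so a limit of 0 is still honoured.
    if any([(allow_negative is False and result < 0),
            (lower_limit is not None and result < lower_limit),
            (upper_limit is not None and result > upper_limit)]):
        raise ValueError('invalid %s: %s' % (name, value))
    return result
```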
def parse_bool_param(name, value):
if str(value).lower() not in ('true', 'false'):
raise exception.InvalidParameter(name=name, value=str(value))
return strutils.bool_from_string(value, strict=True)
def url_fetch(url, allowed_schemes=('http', 'https')):
'''Get the data at the specified URL.
The URL must use the http: or https: schemes.
The file: scheme is also supported if you override
the allowed_schemes argument.
Raise an IOError if getting the data fails.
'''
LOG.info(_LI('Fetching data from %s'), url)
components = urllib.parse.urlparse(url)
if components.scheme not in allowed_schemes:
raise URLFetchError(_('Invalid URL scheme %s') % components.scheme)
if components.scheme == 'file':
try:
return urllib.request.urlopen(url).read()
except urllib.error.URLError as uex:
raise URLFetchError(_('Failed to retrieve data: %s') % uex)
try:
resp = requests.get(url, stream=True)
resp.raise_for_status()
        # We cannot use resp.text here because it would download the entire
        # file, and a large enough file would bring down the engine. The
        # 'Content-Length' header could be faked, so it's necessary to
        # download the content in chunks until max_response_size is reached.
        # The chunk_size we use needs to balance CPU-intensive string
        # concatenation with accuracy (e.g. it's possible to fetch 1000 bytes
        # greater than max_response_size with a chunk_size of 1000).
        reader = resp.iter_content(chunk_size=1000)
        result = b""
        for chunk in reader:
            result += chunk
            if len(result) > cfg.CONF.max_response_size:
                raise URLFetchError("Data exceeds maximum allowed size (%s"
                                    " bytes)" % cfg.CONF.max_response_size)
        return result
except exceptions.RequestException as ex:
raise URLFetchError(_('Failed to retrieve data: %s') % ex)
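The size-capped chunked read is the subtle part of url_fetch(). Isolated from requests it reduces to the following sketch (`read_limited` and the plain `IOError` are illustrative names, not bilean's API):

```python
def read_limited(chunks, max_size):
    # chunks: any iterable of byte strings, e.g. resp.iter_content(...).
    # Accumulates until the cap is exceeded, then raises; this is why a
    # faked Content-Length header cannot be used to exhaust memory.
    result = b""
    for chunk in chunks:
        result += chunk
        if len(result) > max_size:
            raise IOError("Data exceeds maximum allowed size (%s bytes)"
                          % max_size)
    return result
```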
def encrypt(msg):
'''Encrypt message with random key.
:param msg: message to be encrypted
:returns: encrypted msg and key to decrypt
'''
password = fernet.Fernet.generate_key()
f = fernet.Fernet(password)
key = f.encrypt(encodeutils.safe_encode(msg))
return encodeutils.safe_decode(password), encodeutils.safe_decode(key)
def decrypt(msg, key):
    '''Decrypt message using provided key.
    Note that the argument order mirrors the tuple returned by encrypt():
    ``msg`` is the Fernet key and ``key`` is the encrypted payload.
    :param msg: the Fernet key, i.e. the first element returned by encrypt()
    :param key: the encrypted message, the second element returned by encrypt()
    :returns: decrypted message string
    '''
f = fernet.Fernet(encodeutils.safe_encode(msg))
msg = f.decrypt(encodeutils.safe_encode(key))
return encodeutils.safe_decode(msg)
def random_name(length=8):
if length <= 0:
return ''
lead = random.choice(string.ascii_letters)
tail = ''.join(random.choice(string.ascii_letters + string.digits)
for i in range(length - 1))
return lead + tail
def format_time(value):
"""Cut microsecond and format to isoformat string."""
if isinstance(value, datetime.datetime):
value = value.replace(microsecond=0)
value = value.isoformat()
return value
def format_time_to_seconds(t):
"""Format datetime to seconds from 1970-01-01 00:00:00 UTC."""
epoch = datetime.datetime.utcfromtimestamp(0)
if isinstance(t, datetime.datetime):
return (t - epoch).total_seconds()
if isinstance(t, six.string_types):
try:
dt = timeutils.parse_strtime(t)
except ValueError:
dt = timeutils.normalize_time(timeutils.parse_isotime(t))
return (dt - epoch).total_seconds()
return t
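The datetime branch of format_time_to_seconds() is plain epoch arithmetic; a minimal stand-alone illustration (`to_epoch_seconds` is an illustrative name, and the string-parsing path via oslo.utils timeutils is omitted):

```python
import datetime

def to_epoch_seconds(t):
    # Naive-datetime case only; the real helper also parses time strings.
    epoch = datetime.datetime(1970, 1, 1)
    if isinstance(t, datetime.datetime):
        return (t - epoch).total_seconds()
    return t
```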
def make_decimal(value):
"""Format float to decimal."""
if isinstance(value, decimal.Decimal):
return value
if isinstance(value, float):
return decimal.Decimal.from_float(value)
return decimal.Decimal(str(value))
def format_decimal(value, num=8):
"""Format decimal and keep num decimals."""
if not isinstance(value, decimal.Decimal):
value = make_decimal(value)
dec = "0.%s" % ('0' * num)
return value.quantize(decimal.Decimal(dec))
def dec2str(value):
"""Decimal to str and keep 2 decimals."""
if not isinstance(value, decimal.Decimal):
value = make_decimal(value)
return str(value.quantize(decimal.Decimal('0.00')))
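The Decimal helpers route floats through `Decimal.from_float()` and everything else through `str()`, then quantize to a fixed number of places. A self-contained illustration of the same approach:

```python
import decimal

def make_decimal(value):
    # Floats go through from_float() to capture their exact binary value;
    # everything else goes through str() to avoid float artifacts.
    if isinstance(value, decimal.Decimal):
        return value
    if isinstance(value, float):
        return decimal.Decimal.from_float(value)
    return decimal.Decimal(str(value))

def dec2str(value):
    # Quantize to two places (banker's rounding, Decimal's default).
    return str(make_decimal(value).quantize(decimal.Decimal('0.00')))
```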

View File

@@ -1,920 +0,0 @@
#
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# Copyright 2013 IBM Corp.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Utility methods for working with WSGI servers
"""
import abc
import errno
import logging as std_logging
import os
import signal
import sys
import time
import eventlet
from eventlet.green import socket
from eventlet.green import ssl
import eventlet.greenio
import eventlet.wsgi
import functools
from oslo_config import cfg
import oslo_i18n
from oslo_log import log as logging
from oslo_serialization import jsonutils
from oslo_utils import importutils
from paste import deploy
import routes
import routes.middleware
import six
import webob.dec
import webob.exc
from bilean.common import exception
from bilean.common.i18n import _, _LE, _LI, _LW
from bilean.common import serializers
LOG = logging.getLogger(__name__)
URL_LENGTH_LIMIT = 50000
api_opts = [
cfg.IPOpt('bind_host', default='0.0.0.0',
help=_('Address to bind the server. Useful when '
'selecting a particular network interface.')),
cfg.PortOpt('bind_port', default=8770,
help=_('The port on which the server will listen.')),
cfg.IntOpt('backlog', default=4096,
help=_("Number of backlog requests "
"to configure the socket with.")),
cfg.StrOpt('cert_file',
help=_("Location of the SSL certificate file "
"to use for SSL mode.")),
cfg.StrOpt('key_file',
help=_("Location of the SSL key file to use "
"for enabling SSL mode.")),
cfg.IntOpt('workers', default=0,
help=_("Number of workers for Bilean service.")),
cfg.IntOpt('max_header_line', default=16384,
help=_('Maximum line size of message headers to be accepted. '
'max_header_line may need to be increased when using '
'large tokens (typically those generated by the '
'Keystone v3 API with big service catalogs).')),
cfg.IntOpt('tcp_keepidle', default=600,
help=_('The value for the socket option TCP_KEEPIDLE. This is '
'the time in seconds that the connection must be idle '
'before TCP starts sending keepalive probes.')),
]
api_group = cfg.OptGroup('bilean_api')
cfg.CONF.register_group(api_group)
cfg.CONF.register_opts(api_opts, group=api_group)
wsgi_eventlet_opts = [
cfg.BoolOpt('wsgi_keep_alive', default=True,
help=_("If false, closes the client socket explicitly.")),
cfg.IntOpt('client_socket_timeout', default=900,
help=_("Timeout for client connections' socket operations. "
"If an incoming connection is idle for this number of "
"seconds it will be closed. A value of '0' indicates "
"waiting forever.")),
]
wsgi_eventlet_group = cfg.OptGroup('eventlet_opts')
cfg.CONF.register_group(wsgi_eventlet_group)
cfg.CONF.register_opts(wsgi_eventlet_opts, group=wsgi_eventlet_group)
json_size_opt = cfg.IntOpt('max_json_body_size', default=1048576,
help=_('Maximum raw byte size of JSON request body.'
' Should be larger than max_template_size.'))
cfg.CONF.register_opt(json_size_opt)
def list_opts():
yield None, [json_size_opt]
yield 'bilean_api', api_opts
yield 'eventlet_opts', wsgi_eventlet_opts
def get_bind_addr(conf, default_port=None):
return (conf.bind_host, conf.bind_port or default_port)
def get_socket(conf, default_port):
    '''Bind a socket to the ip:port specified in conf.
    :param conf: a cfg.ConfigOpts object
    :param default_port: port to bind to if none is specified in conf
    :returns: a socket object as returned from eventlet.listen, or from
              ssl.wrap_socket if conf specifies cert_file
    '''
bind_addr = get_bind_addr(conf, default_port)
# TODO(jaypipes): eventlet's greened socket module does not actually
# support IPv6 in getaddrinfo(). We need to get around this in the
# future or monitor upstream for a fix
address_family = [addr[0] for addr in socket.getaddrinfo(bind_addr[0],
bind_addr[1], socket.AF_UNSPEC, socket.SOCK_STREAM)
if addr[0] in (socket.AF_INET, socket.AF_INET6)][0]
cert_file = conf.cert_file
key_file = conf.key_file
use_ssl = cert_file or key_file
if use_ssl and (not cert_file or not key_file):
raise RuntimeError(_("When running server in SSL mode, you must "
"specify both a cert_file and key_file "
"option value in your configuration file"))
sock = None
retry_until = time.time() + 30
while not sock and time.time() < retry_until:
try:
sock = eventlet.listen(bind_addr, backlog=conf.backlog,
family=address_family)
except socket.error as err:
if err.args[0] != errno.EADDRINUSE:
raise
eventlet.sleep(0.1)
if not sock:
        raise RuntimeError(_("Could not bind to %(bind_addr)s after trying "
                             "for 30 seconds") % {'bind_addr': bind_addr})
return sock
class WritableLogger(object):
"""A thin wrapper that responds to `write` and logs."""
def __init__(self, LOG, level=std_logging.DEBUG):
self.LOG = LOG
self.level = level
def write(self, msg):
self.LOG.log(self.level, msg.rstrip("\n"))
class Server(object):
"""Server class to manage multiple WSGI sockets and applications."""
def __init__(self, name, conf, threads=1000):
os.umask(0o27) # ensure files are created with the correct privileges
self._logger = logging.getLogger("eventlet.wsgi.server")
self._wsgi_logger = WritableLogger(self._logger)
self.name = name
self.threads = threads
self.children = set()
self.stale_children = set()
self.running = True
self.pgid = os.getpid()
self.conf = conf
try:
os.setpgid(self.pgid, self.pgid)
except OSError:
self.pgid = 0
def kill_children(self, *args):
"""Kills the entire process group."""
LOG.error(_LE('SIGTERM received'))
signal.signal(signal.SIGTERM, signal.SIG_IGN)
signal.signal(signal.SIGINT, signal.SIG_IGN)
self.running = False
os.killpg(0, signal.SIGTERM)
def hup(self, *args):
"""Reloads configuration files with zero down time."""
LOG.error(_LE('SIGHUP received'))
signal.signal(signal.SIGHUP, signal.SIG_IGN)
raise exception.SIGHUPInterrupt
def start(self, application, default_port):
"""Run a WSGI server with the given application.
:param application: The application to run in the WSGI server
:param conf: a cfg.ConfigOpts object
:param default_port: Port to bind to if none is specified in conf
"""
eventlet.wsgi.MAX_HEADER_LINE = self.conf.max_header_line
self.application = application
self.default_port = default_port
self.configure_socket()
self.start_wsgi()
def start_wsgi(self):
if self.conf.workers == 0:
# Useful for profiling, test, debug etc.
self.pool = eventlet.GreenPool(size=self.threads)
self.pool.spawn_n(self._single_run, self.application, self.sock)
return
        LOG.info(_LI("Starting %d workers"), self.conf.workers)
signal.signal(signal.SIGTERM, self.kill_children)
signal.signal(signal.SIGINT, self.kill_children)
signal.signal(signal.SIGHUP, self.hup)
while len(self.children) < self.conf.workers:
self.run_child()
def wait_on_children(self):
"""Wait on children exit."""
while self.running:
try:
pid, status = os.wait()
if os.WIFEXITED(status) or os.WIFSIGNALED(status):
self._remove_children(pid)
self._verify_and_respawn_children(pid, status)
except OSError as err:
if err.errno not in (errno.EINTR, errno.ECHILD):
raise
except KeyboardInterrupt:
LOG.info(_LI('Caught keyboard interrupt. Exiting.'))
os.killpg(0, signal.SIGTERM)
break
except exception.SIGHUPInterrupt:
self.reload()
continue
eventlet.greenio.shutdown_safe(self.sock)
self.sock.close()
LOG.debug('Exited')
def configure_socket(self, old_conf=None, has_changed=None):
"""Ensure a socket exists and is appropriately configured.
This function is called on start up, and can also be
called in the event of a configuration reload.
When called for the first time a new socket is created.
If reloading and either bind_host or bind port have been
changed the existing socket must be closed and a new
socket opened (laws of physics).
In all other cases (bind_host/bind_port have not changed)
the existing socket is reused.
        :param old_conf: Cached old configuration settings (if any)
        :param has_changed: callable to determine if a parameter has changed
"""
new_sock = (old_conf is None or (
has_changed('bind_host') or
has_changed('bind_port')))
# check https
use_ssl = not (not self.conf.cert_file or not self.conf.key_file)
# Were we using https before?
old_use_ssl = (old_conf is not None and not (
not old_conf.get('key_file') or
not old_conf.get('cert_file')))
# Do we now need to perform an SSL wrap on the socket?
wrap_sock = use_ssl is True and (old_use_ssl is False or new_sock)
# Do we now need to perform an SSL unwrap on the socket?
unwrap_sock = use_ssl is False and old_use_ssl is True
if new_sock:
self._sock = None
if old_conf is not None:
self.sock.close()
_sock = get_socket(self.conf, self.default_port)
_sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
# sockets can hang around forever without keepalive
_sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
self._sock = _sock
if wrap_sock:
self.sock = ssl.wrap_socket(self._sock,
certfile=self.conf.cert_file,
keyfile=self.conf.key_file)
if unwrap_sock:
self.sock = self._sock
if new_sock and not use_ssl:
self.sock = self._sock
# Pick up newly deployed certs
if old_conf is not None and use_ssl is True and old_use_ssl is True:
if has_changed('cert_file'):
self.sock.certfile = self.conf.cert_file
if has_changed('key_file'):
self.sock.keyfile = self.conf.key_file
if new_sock or (old_conf is not None and has_changed('tcp_keepidle')):
# This option isn't available in the OS X version of eventlet
if hasattr(socket, 'TCP_KEEPIDLE'):
self.sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE,
self.conf.tcp_keepidle)
if old_conf is not None and has_changed('backlog'):
self.sock.listen(self.conf.backlog)
def _remove_children(self, pid):
if pid in self.children:
self.children.remove(pid)
LOG.info(_LI('Removed dead child %s'), pid)
elif pid in self.stale_children:
self.stale_children.remove(pid)
LOG.info(_LI('Removed stale child %s'), pid)
else:
            LOG.warning(_LW('Unrecognised child %s'), pid)
def _verify_and_respawn_children(self, pid, status):
if len(self.stale_children) == 0:
LOG.debug('No stale children')
if os.WIFEXITED(status) and os.WEXITSTATUS(status) != 0:
LOG.error(_LE('Not respawning child %d, cannot '
'recover from termination'), pid)
if not self.children and not self.stale_children:
LOG.info(_LI('All workers have terminated. Exiting'))
self.running = False
else:
if len(self.children) < self.conf.workers:
self.run_child()
def stash_conf_values(self):
"""Make a copy of some of the current global CONF's settings.
Allows determining if any of these values have changed
when the config is reloaded.
"""
conf = {}
conf['bind_host'] = self.conf.bind_host
conf['bind_port'] = self.conf.bind_port
conf['backlog'] = self.conf.backlog
conf['key_file'] = self.conf.key_file
conf['cert_file'] = self.conf.cert_file
return conf
def reload(self):
"""Reload and re-apply configuration settings.
Existing child processes are sent a SIGHUP signal and will exit after
completing existing requests. New child processes, which will have the
updated configuration, are spawned. This allows preventing
interruption to the service.
"""
def _has_changed(old, new, param):
old = old.get(param)
new = getattr(new, param)
return (new != old)
old_conf = self.stash_conf_values()
has_changed = functools.partial(_has_changed, old_conf, self.conf)
cfg.CONF.reload_config_files()
os.killpg(self.pgid, signal.SIGHUP)
self.stale_children = self.children
self.children = set()
# Ensure any logging config changes are picked up
logging.setup(cfg.CONF, self.name)
self.configure_socket(old_conf, has_changed)
self.start_wsgi()
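stash_conf_values() plus the _has_changed() closure above implement a simple stash-and-compare scheme for detecting which options changed across a reload. Isolated from oslo.config, the pattern looks like this (`Conf` is an illustrative stand-in for a cfg.CONF option group):

```python
import functools

def _has_changed(old, new, param):
    # old: a stashed dict of previous option values;
    # new: the live configuration object.
    return getattr(new, param) != old.get(param)

class Conf(object):
    """Illustrative stand-in for a cfg.CONF option group."""
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)

old_conf = {'bind_host': '0.0.0.0', 'bind_port': 8770}
new_conf = Conf(bind_host='0.0.0.0', bind_port=8771)
has_changed = functools.partial(_has_changed, old_conf, new_conf)
```

Binding the old and new configs with `functools.partial` is what lets configure_socket() receive a one-argument `has_changed('bind_port')` callable.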
def wait(self):
"""Wait until all servers have completed running."""
try:
if self.children:
self.wait_on_children()
else:
self.pool.waitall()
except KeyboardInterrupt:
pass
def run_child(self):
def child_hup(*args):
"""Shuts down child processes, existing requests are handled."""
signal.signal(signal.SIGHUP, signal.SIG_IGN)
eventlet.wsgi.is_accepting = False
self.sock.close()
pid = os.fork()
if pid == 0:
signal.signal(signal.SIGHUP, child_hup)
signal.signal(signal.SIGTERM, signal.SIG_DFL)
# ignore the interrupt signal to avoid a race whereby
# a child worker receives the signal before the parent
# and is respawned unnecessarily as a result
signal.signal(signal.SIGINT, signal.SIG_IGN)
# The child has no need to stash the unwrapped
# socket, and the reference prevents a clean
# exit on sighup
self._sock = None
self.run_server()
LOG.info(_LI('Child %d exiting normally'), os.getpid())
# self.pool.waitall() is now called in wsgi's server so
# it's safe to exit here
sys.exit(0)
else:
LOG.info(_LI('Started child %s'), pid)
self.children.add(pid)
def run_server(self):
"""Run a WSGI server."""
eventlet.wsgi.HttpProtocol.default_request_version = "HTTP/1.0"
eventlet.hubs.use_hub('poll')
eventlet.patcher.monkey_patch(all=False, socket=True)
self.pool = eventlet.GreenPool(size=self.threads)
socket_timeout = cfg.CONF.eventlet_opts.client_socket_timeout or None
try:
eventlet.wsgi.server(
self.sock, self.application,
custom_pool=self.pool,
url_length_limit=URL_LENGTH_LIMIT,
log=self._wsgi_logger,
debug=cfg.CONF.debug,
keepalive=cfg.CONF.eventlet_opts.wsgi_keep_alive,
socket_timeout=socket_timeout)
except socket.error as err:
            if err.args[0] != errno.EINVAL:
raise
self.pool.waitall()
def _single_run(self, application, sock):
"""Start a WSGI server in a new green thread."""
LOG.info(_LI("Starting single process server"))
eventlet.wsgi.server(sock, application, custom_pool=self.pool,
url_length_limit=URL_LENGTH_LIMIT,
log=self._wsgi_logger, debug=cfg.CONF.debug)
class Middleware(object):
"""Base WSGI middleware wrapper.
These classes require an application to be initialized that will be called
next. By default the middleware will simply call its wrapped app, or you
can override __call__ to customize its behavior.
"""
def __init__(self, application):
self.application = application
def process_request(self, request):
"""Called on each request.
If this returns None, the next application down the stack will be
executed. If it returns a response then that response will be returned
and execution will stop here.
:param request: A request object to be processed.
:returns: None.
"""
return None
def process_response(self, response):
"""Customize the response."""
return response
@webob.dec.wsgify
def __call__(self, request):
response = self.process_request(request)
if response:
return response
response = request.get_response(self.application)
return self.process_response(response)
class Debug(Middleware):
"""Helper class that can be inserted into any WSGI application chain."""
@webob.dec.wsgify
def __call__(self, req):
print(("*" * 40) + " REQUEST ENVIRON")
for key, value in req.environ.items():
print(key, "=", value)
print('')
resp = req.get_response(self.application)
print(("*" * 40) + " RESPONSE HEADERS")
for (key, value) in six.iteritems(resp.headers):
print(key, "=", value)
print('')
resp.app_iter = self.print_generator(resp.app_iter)
return resp
@staticmethod
def print_generator(app_iter):
# Iterator that prints the contents of a wrapper string iterator
# when iterated.
print(("*" * 40) + " BODY")
for part in app_iter:
sys.stdout.write(part)
sys.stdout.flush()
yield part
print('')
def debug_filter(app, conf, **local_conf):
return Debug(app)
class DefaultMethodController(object):
"""A default controller for handling requests.
This controller handles the OPTIONS request method and any of the
HTTP methods that are not explicitly implemented by the application.
"""
def options(self, req, allowed_methods, *args, **kwargs):
"""Handler of the OPTIONS request method.
Return a response that includes the 'Allow' header listing the methods
that are implemented. A 204 status code is used for this response.
"""
raise webob.exc.HTTPNoContent(headers=[('Allow', allowed_methods)])
def reject(self, req, allowed_methods, *args, **kwargs):
"""Return a 405 method not allowed error.
As a convenience, the 'Allow' header with the list of implemented
methods is included in the response as well.
"""
raise webob.exc.HTTPMethodNotAllowed(
headers=[('Allow', allowed_methods)])
class Router(object):
"""WSGI middleware that maps incoming requests to WSGI apps."""
def __init__(self, mapper):
"""Create a router for the given routes.Mapper."""
self.map = mapper
self._router = routes.middleware.RoutesMiddleware(self._dispatch,
self.map)
@webob.dec.wsgify
def __call__(self, req):
"""Route the incoming request to a controller based on self.map."""
return self._router
@staticmethod
@webob.dec.wsgify
def _dispatch(req):
"""Private dispatch method.
Called by self._router() after matching the incoming request to
a route and putting the information into req.environ.
:returns: Either returns 404 or the routed WSGI app's response.
"""
match = req.environ['wsgiorg.routing_args'][1]
if not match:
return webob.exc.HTTPNotFound()
app = match['controller']
return app
class Request(webob.Request):
"""Add some OpenStack API-specific logic to the base webob.Request."""
def best_match_content_type(self):
"""Determine the requested response content-type."""
supported = ('application/json',)
bm = self.accept.best_match(supported)
return bm or 'application/json'
def get_content_type(self, allowed_content_types):
"""Determine content type of the request body."""
if "Content-Type" not in self.headers:
raise exception.InvalidContentType(content_type=None)
content_type = self.content_type
if content_type not in allowed_content_types:
raise exception.InvalidContentType(content_type=content_type)
else:
return content_type
def best_match_language(self):
"""Determines best available locale from the Accept-Language header.
:returns: the best language match or None if the 'Accept-Language'
header was not available in the request.
"""
if not self.accept_language:
return None
all_languages = oslo_i18n.get_available_languages('bilean')
return self.accept_language.best_match(all_languages)
def is_json_content_type(request):
content_type = request.content_type
if not content_type or content_type.startswith('text/plain'):
content_type = 'application/json'
if (content_type in ('JSON', 'application/json') and
request.body.startswith(b'{')):
return True
return False
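Since is_json_content_type() only reads `content_type` and `body`, its decision table can be exercised with a plain stand-in object. This sketch duplicates the logic above (`SimpleNamespace` replaces a real webob request):

```python
from types import SimpleNamespace

def is_json_content_type(request):
    # Same decision logic as the function above: empty or text/plain
    # content types are treated as JSON, and the body must look like
    # a JSON object.
    content_type = request.content_type
    if not content_type or content_type.startswith('text/plain'):
        content_type = 'application/json'
    if (content_type in ('JSON', 'application/json') and
            request.body.startswith(b'{')):
        return True
    return False

# SimpleNamespace stands in for a webob Request here.
req = SimpleNamespace(content_type='text/plain', body=b'{"volume": 1}')
```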
class JSONRequestDeserializer(object):
def has_body(self, request):
"""Returns whether a Webob.Request object will possess an entity body.
:param request: A Webob.Request object
"""
if request is None or request.content_length is None:
return False
if request.content_length > 0 and is_json_content_type(request):
return True
return False
def from_json(self, datastring):
try:
if len(datastring) > cfg.CONF.max_json_body_size:
msg = _('JSON body size (%(len)s bytes) exceeds maximum '
'allowed size (%(limit)s bytes).'
) % {'len': len(datastring),
'limit': cfg.CONF.max_json_body_size}
raise exception.RequestLimitExceeded(message=msg)
return jsonutils.loads(datastring)
except ValueError as ex:
raise webob.exc.HTTPBadRequest(six.text_type(ex))
def default(self, request):
if self.has_body(request):
return {'body': self.from_json(request.body)}
else:
return {}
class Resource(object):
"""WSGI app that handles (de)serialization and controller dispatch.
Reads routing information supplied by RoutesMiddleware and calls
the requested action method upon its deserializer, controller,
and serializer. Those three objects may implement any of the basic
controller action methods (create, update, show, index, delete)
along with any that may be specified in the api router. A 'default'
method may also be implemented to be used in place of any
non-implemented actions. Deserializer methods must accept a request
argument and return a dictionary. Controller methods must accept a
request argument. Additionally, they must also accept keyword
arguments that represent the keys returned by the Deserializer. They
may raise a webob.exc exception or return a dict, which will be
serialized by requested content type.
"""
def __init__(self, controller, deserializer, serializer=None):
"""Initializer.
:param controller: object that implement methods created by routes lib
:param deserializer: object that supports webob request deserialization
through controller-like actions
:param serializer: object that supports webob response serialization
through controller-like actions
"""
self.controller = controller
self.deserializer = deserializer
self.serializer = serializer
@webob.dec.wsgify(RequestClass=Request)
def __call__(self, request):
"""WSGI method that controls (de)serialization and method dispatch."""
action_args = self.get_action_args(request.environ)
action = action_args.pop('action', None)
status_code = action_args.pop('success', None)
try:
deserialized_request = self.dispatch(self.deserializer,
action, request)
action_args.update(deserialized_request)
            LOG.debug('Calling %(controller)s: %(action)s',
                      {'controller': self.controller, 'action': action})
action_result = self.dispatch(self.controller, action,
request, **action_args)
except TypeError as err:
            LOG.error(_LE('Exception handling resource: %s'), err)
msg = _('The server could not comply with the request since '
'it is either malformed or otherwise incorrect.')
err = webob.exc.HTTPBadRequest(msg)
http_exc = translate_exception(err, request.best_match_language())
# NOTE(luisg): We disguise HTTP exceptions, otherwise they will be
# treated by wsgi as responses ready to be sent back and they
# won't make it into the pipeline app that serializes errors
raise exception.HTTPExceptionDisguise(http_exc)
except webob.exc.HTTPException as err:
if not isinstance(err, webob.exc.HTTPError):
# Some HTTPException are actually not errors, they are
# responses ready to be sent back to the users, so we don't
# create error log, but disguise and translate them to meet
# openstacksdk's need.
http_exc = translate_exception(err,
request.best_match_language())
raise exception.HTTPExceptionDisguise(http_exc)
if isinstance(err, webob.exc.HTTPServerError):
LOG.error(
_LE("Returning %(code)s to user: %(explanation)s"),
{'code': err.code, 'explanation': err.explanation})
http_exc = translate_exception(err, request.best_match_language())
raise exception.HTTPExceptionDisguise(http_exc)
except exception.BileanException as err:
raise translate_exception(err, request.best_match_language())
except Exception as err:
log_exception(err, sys.exc_info())
raise translate_exception(err, request.best_match_language())
serializer = self.serializer or serializers.JSONResponseSerializer()
try:
response = webob.Response(request=request)
# Customize status code if default (200) should be overridden
if status_code is not None:
response.status_code = int(status_code)
# Customize 'location' header if provided
if action_result and isinstance(action_result, dict):
location = action_result.pop('location', None)
if location:
response.location = '/v1%s' % location
if not action_result:
action_result = None
self.dispatch(serializer, action, response, action_result)
return response
# return unserializable result (typically an exception)
except Exception:
return action_result
def dispatch(self, obj, action, *args, **kwargs):
"""Find action-specific method on self and call it."""
try:
method = getattr(obj, action)
except AttributeError:
method = getattr(obj, 'default')
return method(*args, **kwargs)
def get_action_args(self, request_environment):
"""Parse dictionary created by routes library."""
try:
args = request_environment['wsgiorg.routing_args'][1].copy()
except Exception:
return {}
try:
del args['controller']
except KeyError:
pass
try:
del args['format']
except KeyError:
pass
return args
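The action lookup in `Resource.dispatch` above — try the named method, fall back to `default` — can be demonstrated with a toy controller. This is an illustrative sketch, not the real Bilean controller API:

```python
class EchoController:
    # named action handled directly
    def show(self, request):
        return {'action': 'show', 'request': request}

    # fallback for any action without a dedicated method
    def default(self, request):
        return {'action': 'default', 'request': request}


def dispatch(obj, action, *args, **kwargs):
    # same lookup rule as Resource.dispatch: try the named action,
    # fall back to the object's 'default' method
    try:
        method = getattr(obj, action)
    except AttributeError:
        method = getattr(obj, 'default')
    return method(*args, **kwargs)
```

Because the deserializer, controller, and serializer all share this lookup, any of them may implement only the actions it cares about and let `default` absorb the rest.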
def log_exception(err, exc_info):
args = {'exc_info': exc_info} if cfg.CONF.verbose or cfg.CONF.debug else {}
LOG.error(_LE("Unexpected error occurred serving API: %s"), err, **args)
def translate_exception(exc, locale):
"""Translates all translatable elements of the given exception."""
if isinstance(exc, exception.BileanException):
exc.message = oslo_i18n.translate(exc.message, locale)
else:
exc.message = oslo_i18n.translate(six.text_type(exc), locale)
if isinstance(exc, webob.exc.HTTPError):
exc.explanation = oslo_i18n.translate(exc.explanation, locale)
exc.detail = oslo_i18n.translate(getattr(exc, 'detail', ''), locale)
return exc
@six.add_metaclass(abc.ABCMeta)
class BasePasteFactory(object):
"""A base class for paste app and filter factories.
Sub-classes must override the KEY class attribute and provide
a __call__ method.
"""
KEY = None
def __init__(self, conf):
self.conf = conf
@abc.abstractmethod
def __call__(self, global_conf, **local_conf):
return
def _import_factory(self, local_conf):
"""Import an app/filter class.
Lookup the KEY from the PasteDeploy local conf and import the
class named there. This class can then be used as an app or
filter factory.
"""
class_name = local_conf[self.KEY].replace(':', '.').strip()
return importutils.import_class(class_name)
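`_import_factory` above accepts both `module:Class` and `module.Class` notations from the PasteDeploy local config. A rough standalone equivalent of what `importutils.import_class` does with that string (assuming the stdlib `importlib` machinery) looks like:

```python
import importlib


def import_class(path):
    # accept both 'module:Class' and 'module.Class' notations,
    # mirroring the replace(':', '.') done in _import_factory
    module_name, cls_name = path.replace(':', '.').strip().rsplit('.', 1)
    return getattr(importlib.import_module(module_name), cls_name)


cls = import_class('collections:OrderedDict')
```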
class AppFactory(BasePasteFactory):
"""A Generic paste.deploy app factory.
The WSGI app constructor must accept a ConfigOpts object and a local
config dict as its arguments.
"""
KEY = 'bilean.app_factory'
def __call__(self, global_conf, **local_conf):
factory = self._import_factory(local_conf)
return factory(self.conf, **local_conf)
class FilterFactory(AppFactory):
"""A Generic paste.deploy filter factory.
This requires bilean.filter_factory to be set to a callable which returns
a WSGI filter when invoked. The WSGI filter constructor must accept a
WSGI app, a ConfigOpts object and a local config dict as its arguments.
"""
KEY = 'bilean.filter_factory'
def __call__(self, global_conf, **local_conf):
factory = self._import_factory(local_conf)
def filter(app):
return factory(app, self.conf, **local_conf)
return filter
def setup_paste_factories(conf):
"""Set up the generic paste app and filter factories.
The app factories are constructed at runtime to allow us to pass a
ConfigOpts object to the WSGI classes.
:param conf: a ConfigOpts object
"""
global app_factory, filter_factory
app_factory = AppFactory(conf)
filter_factory = FilterFactory(conf)
def teardown_paste_factories():
"""Reverse the effect of setup_paste_factories()."""
global app_factory, filter_factory
del app_factory
del filter_factory
def paste_deploy_app(paste_config_file, app_name, conf):
"""Load a WSGI app from a PasteDeploy configuration.
Use deploy.loadapp() to load the app from the PasteDeploy configuration,
ensuring that the supplied ConfigOpts object is passed to the app and
filter constructors.
:param paste_config_file: a PasteDeploy config file
:param app_name: the name of the app/pipeline to load from the file
:param conf: a ConfigOpts object to supply to the app and its filters
:returns: the WSGI app
"""
setup_paste_factories(conf)
try:
return deploy.loadapp("config:%s" % paste_config_file, name=app_name)
finally:
teardown_paste_factories()
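For context, a PasteDeploy config consumed by `paste_deploy_app` would reference the `bilean.app_factory` / `bilean.filter_factory` keys defined above. The fragment below is purely illustrative — the section names and dotted paths are hypothetical, not taken from the real Bilean config:

```ini
; hypothetical paste.ini fragment — names are illustrative
[pipeline:bilean-api]
pipeline = faultwrap bileanapp

[filter:faultwrap]
paste.filter_factory = bilean.api.common.wsgi:filter_factory
bilean.filter_factory = bilean.api.middleware.fault:FaultWrapper

[app:bileanapp]
paste.app_factory = bilean.api.common.wsgi:app_factory
bilean.app_factory = bilean.api.v1:API
```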

View File

@ -1,348 +0,0 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
from oslo_db import api
CONF = cfg.CONF
_BACKEND_MAPPING = {'sqlalchemy': 'bilean.db.sqlalchemy.api'}
IMPL = api.DBAPI.from_config(CONF, backend_mapping=_BACKEND_MAPPING)
def get_engine():
return IMPL.get_engine()
def get_session():
return IMPL.get_session()
def db_sync(engine, version=None):
"""Migrate the database to `version` or the most recent version."""
return IMPL.db_sync(engine, version=version)
def db_version(engine):
"""Display the current database version."""
return IMPL.db_version(engine)
# users
def user_get(context, user_id, show_deleted=False, project_safe=True):
return IMPL.user_get(context, user_id,
show_deleted=show_deleted,
project_safe=project_safe)
def user_update(context, user_id, values):
return IMPL.user_update(context, user_id, values)
def user_create(context, values):
return IMPL.user_create(context, values)
def user_delete(context, user_id):
return IMPL.user_delete(context, user_id)
def user_get_all(context, show_deleted=False, limit=None,
marker=None, sort_keys=None, sort_dir=None,
filters=None):
return IMPL.user_get_all(context, show_deleted=show_deleted,
limit=limit, marker=marker,
sort_keys=sort_keys, sort_dir=sort_dir,
filters=filters)
# rules
def rule_get(context, rule_id, show_deleted=False):
    return IMPL.rule_get(context, rule_id, show_deleted=show_deleted)
def rule_get_all(context, show_deleted=False, limit=None,
marker=None, sort_keys=None, sort_dir=None,
filters=None):
return IMPL.rule_get_all(context, show_deleted=show_deleted,
limit=limit, marker=marker,
sort_keys=sort_keys, sort_dir=sort_dir,
filters=filters)
def rule_create(context, values):
return IMPL.rule_create(context, values)
def rule_update(context, rule_id, values):
return IMPL.rule_update(context, rule_id, values)
def rule_delete(context, rule_id):
return IMPL.rule_delete(context, rule_id)
# resources
def resource_get(context, resource_id, show_deleted=False, project_safe=True):
return IMPL.resource_get(context, resource_id,
show_deleted=show_deleted,
project_safe=project_safe)
def resource_get_all(context, user_id=None, show_deleted=False,
limit=None, marker=None, sort_keys=None,
sort_dir=None, filters=None, project_safe=True):
return IMPL.resource_get_all(context, user_id=user_id,
show_deleted=show_deleted,
limit=limit, marker=marker,
sort_keys=sort_keys, sort_dir=sort_dir,
filters=filters, project_safe=project_safe)
def resource_create(context, values):
return IMPL.resource_create(context, values)
def resource_update(context, resource_id, values):
return IMPL.resource_update(context, resource_id, values)
def resource_delete(context, resource_id, soft_delete=True):
IMPL.resource_delete(context, resource_id, soft_delete=soft_delete)
# events
def event_get(context, event_id, project_safe=True):
return IMPL.event_get(context, event_id, project_safe=project_safe)
def event_get_all(context, limit=None, marker=None, sort_keys=None,
sort_dir=None, filters=None, project_safe=True):
return IMPL.event_get_all(context, limit=limit,
marker=marker,
sort_keys=sort_keys,
sort_dir=sort_dir,
filters=filters,
project_safe=project_safe)
def event_create(context, values):
return IMPL.event_create(context, values)
def event_delete(context, event_id):
return IMPL.event_delete(context, event_id)
# jobs
def job_create(context, values):
return IMPL.job_create(context, values)
def job_get_all(context, scheduler_id=None):
return IMPL.job_get_all(context, scheduler_id=scheduler_id)
def job_delete(context, job_id):
return IMPL.job_delete(context, job_id)
# policies
def policy_get(context, policy_id, show_deleted=False):
    return IMPL.policy_get(context, policy_id, show_deleted=show_deleted)
def policy_get_all(context, limit=None, marker=None, sort_keys=None,
sort_dir=None, filters=None, show_deleted=False):
return IMPL.policy_get_all(context, limit=limit, marker=marker,
sort_keys=sort_keys, sort_dir=sort_dir,
filters=filters, show_deleted=show_deleted)
def policy_create(context, values):
return IMPL.policy_create(context, values)
def policy_update(context, policy_id, values):
return IMPL.policy_update(context, policy_id, values)
def policy_delete(context, policy_id):
return IMPL.policy_delete(context, policy_id)
# locks
def user_lock_acquire(user_id, action_id):
return IMPL.user_lock_acquire(user_id, action_id)
def user_lock_release(user_id, action_id):
return IMPL.user_lock_release(user_id, action_id)
def user_lock_steal(user_id, action_id):
return IMPL.user_lock_steal(user_id, action_id)
# actions
def action_create(context, values):
return IMPL.action_create(context, values)
def action_update(context, action_id, values):
return IMPL.action_update(context, action_id, values)
def action_get(context, action_id, project_safe=True, refresh=False):
return IMPL.action_get(context, action_id, project_safe=project_safe,
refresh=refresh)
def action_get_all_by_owner(context, owner):
return IMPL.action_get_all_by_owner(context, owner)
def action_get_all(context, filters=None, limit=None, marker=None,
                   sort_keys=None, sort_dir=None):
    return IMPL.action_get_all(context, filters=filters,
                               limit=limit, marker=marker,
                               sort_keys=sort_keys, sort_dir=sort_dir)
def action_check_status(context, action_id, timestamp):
return IMPL.action_check_status(context, action_id, timestamp)
def dependency_add(context, depended, dependent):
return IMPL.dependency_add(context, depended, dependent)
def dependency_get_depended(context, action_id):
return IMPL.dependency_get_depended(context, action_id)
def dependency_get_dependents(context, action_id):
return IMPL.dependency_get_dependents(context, action_id)
def action_mark_succeeded(context, action_id, timestamp):
return IMPL.action_mark_succeeded(context, action_id, timestamp)
def action_mark_failed(context, action_id, timestamp, reason=None):
return IMPL.action_mark_failed(context, action_id, timestamp, reason)
def action_mark_cancelled(context, action_id, timestamp, reason=None):
    return IMPL.action_mark_cancelled(context, action_id, timestamp, reason)
def action_acquire(context, action_id, owner, timestamp):
return IMPL.action_acquire(context, action_id, owner, timestamp)
def action_acquire_first_ready(context, owner, timestamp):
return IMPL.action_acquire_first_ready(context, owner, timestamp)
def action_abandon(context, action_id):
return IMPL.action_abandon(context, action_id)
def action_lock_check(context, action_id, owner=None):
    '''Check whether an action has been locked (by an owner).'''
return IMPL.action_lock_check(context, action_id, owner)
def action_signal(context, action_id, value):
'''Send signal to an action via DB.'''
return IMPL.action_signal(context, action_id, value)
def action_signal_query(context, action_id):
    '''Query signal status for the specified action.'''
return IMPL.action_signal_query(context, action_id)
def action_delete(context, action_id, force=False):
return IMPL.action_delete(context, action_id, force)
# services
def service_create(context, host, binary, topic=None):
return IMPL.service_create(context, host, binary, topic=topic)
def service_update(context, service_id, values=None):
return IMPL.service_update(context, service_id, values=values)
def service_delete(context, service_id):
return IMPL.service_delete(context, service_id)
def service_get(context, service_id):
return IMPL.service_get(context, service_id)
def service_get_by_host_and_binary(context, host, binary):
return IMPL.service_get_by_host_and_binary(context, host, binary)
def service_get_all(context):
return IMPL.service_get_all(context)
# consumptions
def consumption_get(context, consumption_id, project_safe=True):
return IMPL.consumption_get(context, consumption_id,
project_safe=project_safe)
def consumption_get_all(context, user_id=None, limit=None, marker=None,
sort_keys=None, sort_dir=None, filters=None,
project_safe=True):
return IMPL.consumption_get_all(context,
user_id=user_id,
limit=limit,
marker=marker,
sort_keys=sort_keys,
sort_dir=sort_dir,
filters=filters,
project_safe=project_safe)
def consumption_create(context, values):
return IMPL.consumption_create(context, values)
# recharges
def recharge_create(context, values):
return IMPL.recharge_create(context, values)
def recharge_get(context, recharge_id, project_safe=True):
return IMPL.recharge_get(context, recharge_id, project_safe=project_safe)
def recharge_get_all(context, limit=None, marker=None, sort_keys=None,
sort_dir=None, filters=None, project_safe=True):
return IMPL.recharge_get_all(context, limit=limit,
marker=marker,
sort_keys=sort_keys,
sort_dir=sort_dir,
filters=filters,
project_safe=project_safe)

View File

@ -1,967 +0,0 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
'''Implementation of SQLAlchemy backend.'''
import six
import sys
from oslo_config import cfg
from oslo_db.sqlalchemy import session as db_session
from oslo_db.sqlalchemy import utils
from oslo_log import log as logging
from oslo_utils import timeutils
from sqlalchemy.orm.session import Session
from bilean.common import consts
from bilean.common import exception
from bilean.common.i18n import _
from bilean.common.i18n import _LW
from bilean.db.sqlalchemy import filters as db_filters
from bilean.db.sqlalchemy import migration
from bilean.db.sqlalchemy import models
LOG = logging.getLogger(__name__)
CONF = cfg.CONF
_facade = None
def get_facade():
global _facade
if not _facade:
_facade = db_session.EngineFacade.from_config(CONF)
return _facade
def get_engine():
    return get_facade().get_engine()
def get_session():
    return get_facade().get_session()
def get_backend():
"""The backend is this module itself."""
return sys.modules[__name__]
def model_query(context, *args):
session = _session(context)
query = session.query(*args)
return query
def _get_sort_keys(sort_keys, mapping):
'''Returns an array containing only whitelisted keys
:param sort_keys: an array of strings
:param mapping: a mapping from keys to DB column names
:returns: filtered list of sort keys
'''
if isinstance(sort_keys, six.string_types):
sort_keys = [sort_keys]
return [mapping[key] for key in sort_keys or [] if key in mapping]
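The whitelist behaviour of `_get_sort_keys` — accept a single key or a list, silently drop anything not in the mapping — can be exercised standalone. The mapping below is a made-up example, not one of the real `sort_key_map` tables used later in this module:

```python
def get_sort_keys(sort_keys, mapping):
    # accept a single key or a list; silently drop keys that are
    # not whitelisted in the mapping
    if isinstance(sort_keys, str):
        sort_keys = [sort_keys]
    return [mapping[key] for key in sort_keys or [] if key in mapping]


MAPPING = {'name': 'name', 'created_at': 'created_at'}
```

Dropping unknown keys (rather than raising) means a caller can never smuggle an arbitrary column name into an ORDER BY clause.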
def _paginate_query(context, query, model, limit=None, marker=None,
sort_keys=None, sort_dir=None, default_sort_keys=None):
if not sort_keys:
sort_keys = default_sort_keys or []
if not sort_dir:
sort_dir = 'asc'
model_marker = None
if marker:
model_marker = model_query(context, model).get(marker)
try:
query = utils.paginate_query(query, model, limit, sort_keys,
model_marker, sort_dir)
except utils.InvalidSortKey:
raise exception.InvalidParameter(name='sort_keys', value=sort_keys)
return query
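`_paginate_query` delegates marker/limit pagination to oslo.db's `utils.paginate_query`. A minimal in-memory sketch of that idea (sort, skip everything up to the marker, then cap at the limit), using plain dicts instead of model objects, is:

```python
def paginate(rows, limit=None, marker=None, sort_key='id'):
    # sort deterministically, resume after the marker, cap at limit
    rows = sorted(rows, key=lambda r: r[sort_key])
    if marker is not None:
        rows = [r for r in rows if r[sort_key] > marker]
    return rows[:limit] if limit else rows
```

Marker-based pagination stays stable under concurrent inserts, unlike OFFSET-based paging, which is why the DB layer threads `marker` through every `*_get_all` call.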
def soft_delete_aware_query(context, *args, **kwargs):
"""Query helper that accounts for context's `show_deleted` field.
:param show_deleted: if True, overrides context's show_deleted field.
"""
query = model_query(context, *args)
show_deleted = kwargs.get('show_deleted') or context.show_deleted
if not show_deleted:
query = query.filter_by(deleted_at=None)
return query
def _session(context):
return (context and context.session) or get_session()
def db_sync(engine, version=None):
"""Migrate the database to `version` or the most recent version."""
return migration.db_sync(engine, version=version)
def db_version(engine):
"""Display the current database version."""
return migration.db_version(engine)
# users
def user_get(context, user_id, show_deleted=False, project_safe=True):
query = model_query(context, models.User)
user = query.get(user_id)
deleted_ok = show_deleted or context.show_deleted
    if user is None or (user.deleted_at is not None and not deleted_ok):
return None
if project_safe and context.project != user.id:
return None
return user
def user_update(context, user_id, values):
user = user_get(context, user_id, project_safe=False)
if user is None:
raise exception.UserNotFound(user=user_id)
user.update(values)
user.save(_session(context))
return user
def user_create(context, values):
user_ref = models.User()
user_ref.update(values)
user_ref.save(_session(context))
return user_ref
def user_delete(context, user_id):
session = _session(context)
user = user_get(context, user_id)
if user is None:
return
# Delete all related resource records
for resource in user.resources:
session.delete(resource)
# Delete all related event records
for event in user.events:
session.delete(event)
user.soft_delete(session=session)
session.flush()
def user_get_all(context, show_deleted=False, limit=None,
marker=None, sort_keys=None, sort_dir=None,
filters=None):
query = soft_delete_aware_query(context, models.User,
show_deleted=show_deleted)
if filters is None:
filters = {}
sort_key_map = {
consts.USER_CREATED_AT: models.User.created_at.key,
consts.USER_UPDATED_AT: models.User.updated_at.key,
consts.USER_NAME: models.User.name.key,
consts.USER_BALANCE: models.User.balance.key,
consts.USER_STATUS: models.User.status.key,
}
keys = _get_sort_keys(sort_keys, sort_key_map)
query = db_filters.exact_filter(query, models.User, filters)
return _paginate_query(context, query, models.User,
limit=limit, marker=marker,
sort_keys=keys, sort_dir=sort_dir,
default_sort_keys=['id']).all()
# rules
def rule_get(context, rule_id, show_deleted=False):
query = model_query(context, models.Rule)
rule = query.filter_by(id=rule_id).first()
deleted_ok = show_deleted or context.show_deleted
    if rule is None or (rule.deleted_at is not None and not deleted_ok):
return None
return rule
def rule_get_all(context, show_deleted=False, limit=None,
marker=None, sort_keys=None, sort_dir=None,
filters=None):
query = soft_delete_aware_query(context, models.Rule,
show_deleted=show_deleted)
if filters is None:
filters = {}
sort_key_map = {
consts.RULE_NAME: models.Rule.name.key,
consts.RULE_TYPE: models.Rule.type.key,
consts.RULE_CREATED_AT: models.Rule.created_at.key,
consts.RULE_UPDATED_AT: models.Rule.updated_at.key,
}
keys = _get_sort_keys(sort_keys, sort_key_map)
query = db_filters.exact_filter(query, models.Rule, filters)
return _paginate_query(context, query, models.Rule,
limit=limit, marker=marker,
sort_keys=keys, sort_dir=sort_dir,
default_sort_keys=['id']).all()
def rule_create(context, values):
rule_ref = models.Rule()
rule_ref.update(values)
rule_ref.save(_session(context))
return rule_ref
def rule_update(context, rule_id, values):
    rule = rule_get(context, rule_id)
    if rule is None:
        raise exception.RuleNotFound(rule=rule_id)
    rule.update(values)
    rule.save(_session(context))
    return rule
def rule_delete(context, rule_id):
rule = rule_get(context, rule_id)
if rule is None:
return
session = Session.object_session(rule)
rule.soft_delete(session=session)
session.flush()
# resources
def resource_get(context, resource_id, show_deleted=False, project_safe=True):
query = model_query(context, models.Resource)
resource = query.get(resource_id)
deleted_ok = show_deleted or context.show_deleted
    if (resource is None or
            (resource.deleted_at is not None and not deleted_ok)):
return None
if project_safe and context.project != resource.user_id:
return None
return resource
def resource_get_all(context, user_id=None, show_deleted=False,
limit=None, marker=None, sort_keys=None, sort_dir=None,
filters=None, project_safe=True):
query = soft_delete_aware_query(context, models.Resource,
show_deleted=show_deleted)
if project_safe:
query = query.filter_by(user_id=context.project)
elif user_id:
query = query.filter_by(user_id=user_id)
if filters is None:
filters = {}
sort_key_map = {
consts.RES_CREATED_AT: models.Resource.created_at.key,
consts.RES_UPDATED_AT: models.Resource.updated_at.key,
consts.RES_RESOURCE_TYPE: models.Resource.resource_type.key,
consts.RES_USER_ID: models.Resource.user_id.key,
}
keys = _get_sort_keys(sort_keys, sort_key_map)
query = db_filters.exact_filter(query, models.Resource, filters)
return _paginate_query(context, query, models.Resource,
limit=limit, marker=marker,
sort_keys=keys, sort_dir=sort_dir,
default_sort_keys=['id']).all()
def resource_create(context, values):
resource_ref = models.Resource()
resource_ref.update(values)
resource_ref.save(_session(context))
return resource_ref
def resource_update(context, resource_id, values):
project_safe = True
if context.is_admin:
project_safe = False
resource = resource_get(context, resource_id, show_deleted=True,
project_safe=project_safe)
if resource is None:
raise exception.ResourceNotFound(resource=resource_id)
resource.update(values)
resource.save(_session(context))
return resource
def resource_delete(context, resource_id, soft_delete=True):
resource = resource_get(context, resource_id, project_safe=False)
if resource is None:
return
session = Session.object_session(resource)
if soft_delete:
resource.soft_delete(session=session)
else:
session.delete(resource)
session.flush()
# events
def event_get(context, event_id, project_safe=True):
query = model_query(context, models.Event)
event = query.get(event_id)
if event is None:
return None
if project_safe and context.project != event.user_id:
return None
return event
def event_get_all(context, limit=None, marker=None, sort_keys=None,
sort_dir=None, filters=None, project_safe=True):
query = model_query(context, models.Event)
if context.is_admin:
project_safe = False
if project_safe:
query = query.filter_by(user_id=context.project)
if filters is None:
filters = {}
sort_key_map = {
consts.EVENT_LEVEL: models.Event.level.key,
consts.EVENT_TIMESTAMP: models.Event.timestamp.key,
consts.EVENT_USER_ID: models.Event.user_id.key,
consts.EVENT_STATUS: models.Event.status.key,
}
keys = _get_sort_keys(sort_keys, sort_key_map)
query = db_filters.exact_filter(query, models.Event, filters)
return _paginate_query(context, query, models.Event,
limit=limit, marker=marker,
sort_keys=keys, sort_dir=sort_dir,
default_sort_keys=['id']).all()
def event_create(context, values):
event_ref = models.Event()
event_ref.update(values)
event_ref.save(_session(context))
return event_ref
# jobs
def job_create(context, values):
job_ref = models.Job()
job_ref.update(values)
job_ref.save(_session(context))
return job_ref
def job_get_all(context, scheduler_id=None):
query = model_query(context, models.Job)
if scheduler_id:
query = query.filter_by(scheduler_id=scheduler_id)
return query.all()
def job_delete(context, job_id):
job = model_query(context, models.Job).get(job_id)
if job is None:
return
session = Session.object_session(job)
session.delete(job)
session.flush()
# policies
def policy_get(context, policy_id, show_deleted=False):
query = model_query(context, models.Policy)
policy = query.get(policy_id)
deleted_ok = show_deleted or context.show_deleted
    if policy is None or (policy.deleted_at is not None and not deleted_ok):
return None
return policy
def policy_get_all(context, limit=None, marker=None,
sort_keys=None, sort_dir=None,
filters=None, show_deleted=False):
query = soft_delete_aware_query(context, models.Policy,
show_deleted=show_deleted)
if filters is None:
filters = {}
sort_key_map = {
consts.POLICY_NAME: models.Policy.name.key,
consts.POLICY_CREATED_AT: models.Policy.created_at.key,
consts.POLICY_UPDATED_AT: models.Policy.updated_at.key,
}
keys = _get_sort_keys(sort_keys, sort_key_map)
query = db_filters.exact_filter(query, models.Policy, filters)
return _paginate_query(context, query, models.Policy,
limit=limit, marker=marker,
sort_keys=keys, sort_dir=sort_dir,
default_sort_keys=['id']).all()
def policy_create(context, values):
policy_ref = models.Policy()
policy_ref.update(values)
policy_ref.save(_session(context))
return policy_ref
def policy_update(context, policy_id, values):
    policy = policy_get(context, policy_id)
    if policy is None:
        raise exception.PolicyNotFound(policy=policy_id)
    policy.update(values)
    policy.save(_session(context))
    return policy
def policy_delete(context, policy_id):
policy = policy_get(context, policy_id)
if policy is None:
return
session = Session.object_session(policy)
policy.soft_delete(session=session)
session.flush()
# locks
def user_lock_acquire(user_id, action_id):
session = get_session()
session.begin()
lock = session.query(models.UserLock).get(user_id)
if lock is None:
lock = models.UserLock(user_id=user_id, action_id=action_id)
session.add(lock)
session.commit()
return lock.action_id
def user_lock_release(user_id, action_id):
session = get_session()
session.begin()
success = False
lock = session.query(models.UserLock).get(user_id)
if lock is not None and lock.action_id == action_id:
session.delete(lock)
success = True
session.commit()
return success
def user_lock_steal(user_id, action_id):
session = get_session()
session.begin()
lock = session.query(models.UserLock).get(user_id)
if lock is not None:
lock.action_id = action_id
lock.save(session)
else:
lock = models.UserLock(user_id=user_id, action_id=action_id)
session.add(lock)
session.commit()
return lock.action_id
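The three lock operations above implement get-or-create, holder-only release, and unconditional steal against the `user_lock` table. A toy in-memory model of the same semantics (a dict standing in for the table, one lock row per user) makes the contract easy to see:

```python
# toy model: dict keyed by user_id, value is the holding action_id
locks = {}


def lock_acquire(user_id, action_id):
    # returns the holder's action id; equals action_id only on success
    return locks.setdefault(user_id, action_id)


def lock_release(user_id, action_id):
    # only the current holder may release
    if locks.get(user_id) == action_id:
        del locks[user_id]
        return True
    return False


def lock_steal(user_id, action_id):
    # unconditionally take over the lock
    locks[user_id] = action_id
    return action_id
```

Returning the current holder from acquire (rather than a boolean) lets a caller detect contention and decide whether to wait or steal.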
# actions
def action_create(context, values):
action = models.Action()
action.update(values)
action.save(_session(context))
return action
def action_update(context, action_id, values):
session = get_session()
action = session.query(models.Action).get(action_id)
if not action:
raise exception.ActionNotFound(action=action_id)
action.update(values)
action.save(session)
def action_get(context, action_id, project_safe=True, refresh=False):
session = _session(context)
action = session.query(models.Action).get(action_id)
if action is None:
return None
if not context.is_admin and project_safe:
if action.project != context.project:
return None
    if refresh:
        session.refresh(action)
return action
def action_get_all_by_owner(context, owner_id):
query = model_query(context, models.Action).\
filter_by(owner=owner_id)
return query.all()
def action_get_all(context, filters=None, limit=None, marker=None,
sort_keys=None, sort_dir=None):
    query = model_query(context, models.Action)
    if filters is None:
        filters = {}
    sort_key_map = {
        consts.ACTION_CREATED_AT: models.Action.created_at.key,
        consts.ACTION_UPDATED_AT: models.Action.updated_at.key,
        consts.ACTION_NAME: models.Action.name.key,
        consts.ACTION_STATUS: models.Action.status.key,
    }
    keys = _get_sort_keys(sort_keys, sort_key_map)
    query = db_filters.exact_filter(query, models.Action, filters)
return _paginate_query(context, query, models.Action,
limit=limit, marker=marker,
sort_keys=keys, sort_dir=sort_dir,
default_sort_keys=['id']).all()
def action_check_status(context, action_id, timestamp):
session = _session(context)
q = session.query(models.ActionDependency)
count = q.filter_by(dependent=action_id).count()
if count > 0:
return consts.ACTION_WAITING
action = session.query(models.Action).get(action_id)
if action.status == consts.ACTION_WAITING:
session.begin()
action.status = consts.ACTION_READY
action.status_reason = _('All depended actions completed.')
action.end_time = timestamp
action.save(session)
session.commit()
return action.status
def dependency_get_depended(context, action_id):
session = _session(context)
q = session.query(models.ActionDependency).filter_by(dependent=action_id)
return [d.depended for d in q.all()]
def dependency_get_dependents(context, action_id):
session = _session(context)
q = session.query(models.ActionDependency).filter_by(depended=action_id)
return [d.dependent for d in q.all()]
def dependency_add(context, depended, dependent):
if isinstance(depended, list) and isinstance(dependent, list):
        raise exception.NotSupport(
            _('Multiple dependencies between lists are not supported'))
session = _session(context)
if isinstance(depended, list):
session.begin()
for d in depended:
r = models.ActionDependency(depended=d, dependent=dependent)
session.add(r)
query = session.query(models.Action).filter_by(id=dependent)
query.update({'status': consts.ACTION_WAITING,
'status_reason': _('Waiting for depended actions.')},
synchronize_session=False)
session.commit()
return
    # Only 'dependent' may be a list at this point; normalize it to a
    # list so the fan-out case (e.g. B, C, D depend on A) is uniform.
    if not isinstance(dependent, list):
        dependents = [dependent]
else:
dependents = dependent
session.begin()
for d in dependents:
r = models.ActionDependency(depended=depended, dependent=d)
session.add(r)
q = session.query(models.Action).filter(models.Action.id.in_(dependents))
q.update({'status': consts.ACTION_WAITING,
'status_reason': _('Waiting for depended actions.')},
synchronize_session=False)
session.commit()
def action_mark_succeeded(context, action_id, timestamp):
session = _session(context)
session.begin()
query = session.query(models.Action).filter_by(id=action_id)
values = {
'owner': None,
'status': consts.ACTION_SUCCEEDED,
'status_reason': _('Action completed successfully.'),
'end_time': timestamp,
}
query.update(values, synchronize_session=False)
subquery = session.query(models.ActionDependency).filter_by(
depended=action_id)
subquery.delete(synchronize_session=False)
session.commit()
def _mark_failed(session, action_id, timestamp, reason=None):
# mark myself as failed
query = session.query(models.Action).filter_by(id=action_id)
values = {
'owner': None,
'status': consts.ACTION_FAILED,
'status_reason': (six.text_type(reason) if reason else
_('Action execution failed')),
'end_time': timestamp,
}
query.update(values, synchronize_session=False)
query = session.query(models.ActionDependency)
query = query.filter_by(depended=action_id)
dependents = [d.dependent for d in query.all()]
query.delete(synchronize_session=False)
for d in dependents:
_mark_failed(session, d, timestamp)
def action_mark_failed(context, action_id, timestamp, reason=None):
session = _session(context)
session.begin()
_mark_failed(session, action_id, timestamp, reason)
session.commit()
def _mark_cancelled(session, action_id, timestamp, reason=None):
query = session.query(models.Action).filter_by(id=action_id)
values = {
'owner': None,
'status': consts.ACTION_CANCELLED,
        'status_reason': (six.text_type(reason) if reason else
                          _('Action execution cancelled')),
'end_time': timestamp,
}
query.update(values, synchronize_session=False)
query = session.query(models.ActionDependency)
query = query.filter_by(depended=action_id)
dependents = [d.dependent for d in query.all()]
query.delete(synchronize_session=False)
for d in dependents:
_mark_cancelled(session, d, timestamp)
def action_mark_cancelled(context, action_id, timestamp, reason=None):
session = _session(context)
session.begin()
_mark_cancelled(session, action_id, timestamp, reason)
session.commit()
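The functions above implement a small state machine: a dependent action stays WAITING until every depended action completes, success removes the dependency edges, and failure or cancellation cascades down to dependents. A minimal in-memory sketch of that logic, with illustrative names that are not part of Bilean's API:

```python
# Illustrative stand-in for the dependency bookkeeping done by
# dependency_add() / action_check_status() / action_mark_*() above.

class DependencyGraph(object):
    def __init__(self):
        self.status = {}      # action_id -> status string
        self.depended = {}    # dependent -> set of depended action ids

    def add(self, depended, dependent):
        # "B depends on A": B must wait until A completes.
        self.depended.setdefault(dependent, set()).add(depended)
        self.status[dependent] = 'WAITING'
        self.status.setdefault(depended, 'READY')

    def check_status(self, action_id):
        # Mirrors action_check_status(): become READY once no
        # depended actions remain.
        if self.depended.get(action_id):
            return 'WAITING'
        if self.status.get(action_id) == 'WAITING':
            self.status[action_id] = 'READY'
        return self.status[action_id]

    def mark_succeeded(self, action_id):
        # Mirrors action_mark_succeeded(): drop the edges pointing at
        # this action so its dependents can become READY.
        self.status[action_id] = 'SUCCEEDED'
        for deps in self.depended.values():
            deps.discard(action_id)

    def mark_failed(self, action_id):
        # Mirrors _mark_failed(): failure cascades to dependents.
        self.status[action_id] = 'FAILED'
        for dep, deps in list(self.depended.items()):
            if action_id in deps:
                del self.depended[dep]
                self.mark_failed(dep)


g = DependencyGraph()
g.add('A', 'B')   # B depends on A
g.add('A', 'C')   # C depends on A
assert g.check_status('B') == 'WAITING'
g.mark_succeeded('A')
assert g.check_status('B') == 'READY'
```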
def action_acquire(context, action_id, owner, timestamp):
session = _session(context)
with session.begin():
action = session.query(models.Action).get(action_id)
if not action:
return None
if action.owner and action.owner != owner:
return None
if action.status != consts.ACTION_READY:
msg = _LW('The action is not in an executable status: '
'%s') % action.status
            LOG.warning(msg)
return None
action.owner = owner
action.start_time = timestamp
action.status = consts.ACTION_RUNNING
action.status_reason = _('The action is being processed.')
return action
def action_acquire_first_ready(context, owner, timestamp):
session = _session(context)
with session.begin():
action = session.query(models.Action).\
filter_by(status=consts.ACTION_READY).\
filter_by(owner=None).first()
if action:
action.owner = owner
action.start_time = timestamp
action.status = consts.ACTION_RUNNING
action.status_reason = _('The action is being processed.')
return action
def action_abandon(context, action_id):
    '''Abandon an action so that other workers can execute it again.

    This API is always called with the action locked by the current
    worker, so there is no chance that the action is gone or has been
    stolen by another worker.
    '''
query = model_query(context, models.Action)
action = query.get(action_id)
action.owner = None
action.start_time = None
action.status = consts.ACTION_READY
action.status_reason = _('The action was abandoned.')
action.save(query.session)
return action
def action_lock_check(context, action_id, owner=None):
action = model_query(context, models.Action).get(action_id)
if not action:
raise exception.ActionNotFound(action=action_id)
if owner:
return owner if owner == action.owner else action.owner
else:
return action.owner if action.owner else None
def action_signal(context, action_id, value):
query = model_query(context, models.Action)
action = query.get(action_id)
if not action:
return
action.control = value
action.save(query.session)
def action_signal_query(context, action_id):
action = model_query(context, models.Action).get(action_id)
if not action:
return None
return action.control
def action_delete(context, action_id, force=False):
session = _session(context)
action = session.query(models.Action).get(action_id)
if not action:
return
    if action.status in ('WAITING', 'RUNNING', 'SUSPENDED'):
raise exception.ResourceBusyError(resource_type='action',
resource_id=action_id)
    session.begin()
    session.delete(action)
    session.commit()
# services
def service_create(context, host, binary, topic=None):
time_now = timeutils.utcnow()
svc = models.Service(host=host, binary=binary,
topic=topic, created_at=time_now,
updated_at=time_now)
svc.save(_session(context))
return svc
def service_update(context, service_id, values=None):
service = service_get(context, service_id)
if not service:
return
if values is None:
values = {}
values.update({'updated_at': timeutils.utcnow()})
service.update(values)
service.save(_session(context))
return service
def service_delete(context, service_id):
session = _session(context)
session.query(models.Service).filter_by(
id=service_id).delete(synchronize_session='fetch')
def service_get(context, service_id):
return model_query(context, models.Service).get(service_id)
def service_get_by_host_and_binary(context, host, binary):
query = model_query(context, models.Service)
return query.filter_by(host=host).filter_by(binary=binary).first()
def service_get_all(context):
return model_query(context, models.Service).all()
# consumptions
def consumption_get(context, consumption_id, project_safe=True):
query = model_query(context, models.Consumption)
consumption = query.get(consumption_id)
if consumption is None:
return None
if project_safe and context.project != consumption.user_id:
return None
return consumption
def consumption_get_all(context, user_id=None, limit=None, marker=None,
sort_keys=None, sort_dir=None, filters=None,
project_safe=True):
query = model_query(context, models.Consumption)
if context.is_admin:
project_safe = False
if project_safe:
query = query.filter_by(user_id=context.project)
elif user_id:
query = query.filter_by(user_id=user_id)
if filters is None:
filters = {}
sort_key_map = {
consts.CONSUMPTION_USER_ID: models.Consumption.user_id.key,
consts.CONSUMPTION_RESOURCE_TYPE: models.Consumption.resource_type.key,
consts.CONSUMPTION_START_TIME: models.Consumption.start_time.key,
}
keys = _get_sort_keys(sort_keys, sort_key_map)
query = db_filters.exact_filter(query, models.Consumption, filters)
return _paginate_query(context, query, models.Consumption,
limit=limit, marker=marker,
sort_keys=keys, sort_dir=sort_dir,
default_sort_keys=['id']).all()
def consumption_create(context, values):
consumption_ref = models.Consumption()
consumption_ref.update(values)
consumption_ref.save(_session(context))
return consumption_ref
def consumption_delete(context, consumption_id):
session = _session(context)
session.query(models.Consumption).filter_by(
id=consumption_id).delete(synchronize_session='fetch')
# recharges
def recharge_create(context, values):
recharge_ref = models.Recharge()
recharge_ref.update(values)
recharge_ref.save(_session(context))
return recharge_ref
def recharge_get(context, recharge_id, project_safe=True):
query = model_query(context, models.Recharge)
recharge = query.get(recharge_id)
if recharge is None:
return None
if project_safe and context.project != recharge.user_id:
return None
return recharge
def recharge_get_all(context, limit=None, marker=None, sort_keys=None,
sort_dir=None, filters=None, project_safe=True):
query = model_query(context, models.Recharge)
if context.is_admin:
project_safe = False
if project_safe:
query = query.filter_by(user_id=context.project)
if filters is None:
filters = {}
sort_key_map = {
consts.RECHARGE_USER_ID: models.Recharge.user_id.key,
consts.RECHARGE_TYPE: models.Recharge.type.key,
consts.RECHARGE_TIMESTAMP: models.Recharge.timestamp.key,
}
keys = _get_sort_keys(sort_keys, sort_key_map)
query = db_filters.exact_filter(query, models.Recharge, filters)
return _paginate_query(context, query, models.Recharge,
limit=limit, marker=marker,
sort_keys=keys, sort_dir=sort_dir,
default_sort_keys=['id']).all()
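consumption_get_all() and recharge_get_all() both delegate to _paginate_query, which is defined earlier in this module and not shown in this hunk. Its limit/marker semantics can be sketched in plain Python, assuming the usual oslo.db behavior of sorting first and then returning up to `limit` rows strictly after the marker row:

```python
# Pure-Python sketch of limit/marker pagination; the helper name and
# row representation (plain dicts) are illustrative only.

def paginate(rows, limit=None, marker=None, sort_key='id'):
    rows = sorted(rows, key=lambda r: r[sort_key])
    if marker is not None:
        ids = [r['id'] for r in rows]
        # Resume strictly after the marker row.
        rows = rows[ids.index(marker) + 1:]
    if limit is not None:
        rows = rows[:limit]
    return rows


data = [{'id': i} for i in (3, 1, 2, 4)]
page1 = paginate(data, limit=2)                          # ids 1, 2
page2 = paginate(data, limit=2, marker=page1[-1]['id'])  # ids 3, 4
```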

@@ -1,45 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import six
def exact_filter(query, model, filters):
"""Applies exact match filtering to a query.
    Returns the updated query. The filters argument itself is not
    modified.
:param query: query to apply filters to
:param model: model object the query applies to, for IN-style
filtering
:param filters: dictionary of filters; values that are lists,
tuples, sets, or frozensets cause an 'IN' test to
be performed, while exact matching ('==' operator)
is used for other values
"""
filter_dict = {}
if filters is None:
filters = {}
for key, value in six.iteritems(filters):
if isinstance(value, (list, tuple, set, frozenset)):
column_attr = getattr(model, key)
query = query.filter(column_attr.in_(value))
else:
filter_dict[key] = value
if filter_dict:
query = query.filter_by(**filter_dict)
return query
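As the docstring notes, list-like filter values become IN tests while all other values are exact matches. A pure-Python analogue of that split, applied to plain dicts rather than a SQLAlchemy query (the helper name is hypothetical):

```python
# Illustrative analogue of exact_filter() above: list/tuple/set values
# become membership ('IN') tests, everything else an equality test.

def matches(row, filters):
    for key, value in filters.items():
        if isinstance(value, (list, tuple, set, frozenset)):
            if row.get(key) not in value:
                return False
        elif row.get(key) != value:
            return False
    return True


rows = [{'type': 'os.nova.server', 'user_id': 'u1'},
        {'type': 'os.cinder.volume', 'user_id': 'u2'}]
hits = [r for r in rows
        if matches(r, {'type': ['os.nova.server'], 'user_id': 'u1'})]
```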

@@ -1,4 +0,0 @@
This is a database migration repository.
More information at
http://code.google.com/p/sqlalchemy-migrate/

@@ -1,5 +0,0 @@
#!/usr/bin/env python
from migrate.versioning.shell import main
if __name__ == '__main__':
main(debug='False')

@@ -1,25 +0,0 @@
[db_settings]
# Used to identify which repository this database is versioned under.
# You can use the name of your project.
repository_id=bilean
# The name of the database table used to track the schema version.
# This name shouldn't already be used by your project.
# If this is changed once a database is under version control, you'll need to
# change the table name in each database too.
version_table=migrate_version
# When committing a change script, Migrate will attempt to generate the
# sql for all supported databases; normally, if one of them fails - probably
# because you don't have that database installed - it is ignored and the
# commit continues, perhaps ending successfully.
# Databases in this list MUST compile successfully during a commit, or the
# entire commit will fail. List the databases your application will actually
# be using to ensure your updates to that database work properly.
# This must be a list; example: ['postgres','sqlite']
required_dbs=[]
# When creating new change scripts, Migrate will stamp the new script with
# a version number. By default this is latest_version + 1. You can set this
# to 'true' to tell Migrate to use the UTC timestamp instead.
use_timestamp_numbering=False

@@ -1,233 +0,0 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import sqlalchemy
from bilean.db.sqlalchemy import types
def upgrade(migrate_engine):
meta = sqlalchemy.MetaData()
meta.bind = migrate_engine
user = sqlalchemy.Table(
'user', meta,
sqlalchemy.Column('id', sqlalchemy.String(36), primary_key=True,
nullable=False),
sqlalchemy.Column('name', sqlalchemy.String(255)),
sqlalchemy.Column('policy_id',
sqlalchemy.String(36),
sqlalchemy.ForeignKey('policy.id'),
nullable=True),
sqlalchemy.Column('balance', sqlalchemy.Numeric(20, 8)),
sqlalchemy.Column('rate', sqlalchemy.Numeric(20, 8)),
sqlalchemy.Column('credit', sqlalchemy.Integer),
sqlalchemy.Column('last_bill', sqlalchemy.Numeric(24, 8)),
sqlalchemy.Column('status', sqlalchemy.String(255)),
sqlalchemy.Column('status_reason', sqlalchemy.Text),
sqlalchemy.Column('created_at', sqlalchemy.DateTime),
sqlalchemy.Column('updated_at', sqlalchemy.DateTime),
sqlalchemy.Column('deleted_at', sqlalchemy.DateTime),
mysql_engine='InnoDB',
mysql_charset='utf8'
)
rule = sqlalchemy.Table(
'rule', meta,
sqlalchemy.Column('id', sqlalchemy.String(36), primary_key=True,
nullable=False),
sqlalchemy.Column('name', sqlalchemy.String(255)),
sqlalchemy.Column('type', sqlalchemy.String(255)),
sqlalchemy.Column('spec', types.Dict),
sqlalchemy.Column('meta_data', types.Dict),
sqlalchemy.Column('created_at', sqlalchemy.DateTime),
sqlalchemy.Column('updated_at', sqlalchemy.DateTime),
sqlalchemy.Column('deleted_at', sqlalchemy.DateTime),
mysql_engine='InnoDB',
mysql_charset='utf8'
)
policy = sqlalchemy.Table(
'policy', meta,
sqlalchemy.Column('id', sqlalchemy.String(36), primary_key=True,
nullable=False),
sqlalchemy.Column('name', sqlalchemy.String(255)),
sqlalchemy.Column('rules', types.List),
sqlalchemy.Column('is_default', sqlalchemy.Boolean),
sqlalchemy.Column('meta_data', types.Dict),
sqlalchemy.Column('created_at', sqlalchemy.DateTime),
sqlalchemy.Column('updated_at', sqlalchemy.DateTime),
sqlalchemy.Column('deleted_at', sqlalchemy.DateTime),
mysql_engine='InnoDB',
mysql_charset='utf8'
)
resource = sqlalchemy.Table(
'resource', meta,
sqlalchemy.Column('id', sqlalchemy.String(36), primary_key=True,
nullable=False),
sqlalchemy.Column('user_id',
sqlalchemy.String(36),
sqlalchemy.ForeignKey('user.id'),
nullable=False),
sqlalchemy.Column('rule_id', sqlalchemy.String(36), nullable=False),
sqlalchemy.Column('resource_type', sqlalchemy.String(36),
nullable=False),
sqlalchemy.Column('last_bill', sqlalchemy.Numeric(24, 8)),
sqlalchemy.Column('properties', types.Dict),
sqlalchemy.Column('rate', sqlalchemy.Numeric(20, 8), nullable=False),
sqlalchemy.Column('created_at', sqlalchemy.DateTime),
sqlalchemy.Column('updated_at', sqlalchemy.DateTime),
sqlalchemy.Column('deleted_at', sqlalchemy.DateTime),
mysql_engine='InnoDB',
mysql_charset='utf8'
)
event = sqlalchemy.Table(
'event', meta,
sqlalchemy.Column('id', sqlalchemy.String(36),
primary_key=True, nullable=False),
sqlalchemy.Column('timestamp', sqlalchemy.DateTime),
sqlalchemy.Column('obj_id', sqlalchemy.String(36)),
sqlalchemy.Column('obj_type', sqlalchemy.String(36)),
sqlalchemy.Column('obj_name', sqlalchemy.String(255)),
sqlalchemy.Column('action', sqlalchemy.String(36)),
sqlalchemy.Column('user_id', sqlalchemy.String(36)),
sqlalchemy.Column('level', sqlalchemy.Integer),
sqlalchemy.Column('status', sqlalchemy.String(255)),
sqlalchemy.Column('status_reason', sqlalchemy.Text),
sqlalchemy.Column('meta_data', types.Dict),
mysql_engine='InnoDB',
mysql_charset='utf8'
)
consumption = sqlalchemy.Table(
'consumption', meta,
sqlalchemy.Column('id', sqlalchemy.String(36),
primary_key=True, nullable=False),
sqlalchemy.Column('user_id', sqlalchemy.String(36)),
sqlalchemy.Column('resource_id', sqlalchemy.String(36)),
sqlalchemy.Column('resource_type', sqlalchemy.String(255)),
sqlalchemy.Column('start_time', sqlalchemy.Numeric(24, 8)),
sqlalchemy.Column('end_time', sqlalchemy.Numeric(24, 8)),
sqlalchemy.Column('rate', sqlalchemy.Numeric(20, 8)),
sqlalchemy.Column('cost', sqlalchemy.Numeric(20, 8)),
sqlalchemy.Column('meta_data', types.Dict),
mysql_engine='InnoDB',
mysql_charset='utf8'
)
recharge = sqlalchemy.Table(
'recharge', meta,
sqlalchemy.Column('id', sqlalchemy.String(36),
primary_key=True, nullable=False),
sqlalchemy.Column('user_id', sqlalchemy.String(36)),
sqlalchemy.Column('type', sqlalchemy.String(255)),
sqlalchemy.Column('timestamp', sqlalchemy.DateTime),
sqlalchemy.Column('value', sqlalchemy.Numeric(20, 8)),
sqlalchemy.Column('meta_data', types.Dict),
mysql_engine='InnoDB',
mysql_charset='utf8'
)
action = sqlalchemy.Table(
'action', meta,
sqlalchemy.Column('id', sqlalchemy.String(36),
primary_key=True, nullable=False),
sqlalchemy.Column('name', sqlalchemy.String(63)),
sqlalchemy.Column('context', types.Dict),
sqlalchemy.Column('target', sqlalchemy.String(36)),
sqlalchemy.Column('action', sqlalchemy.String(255)),
sqlalchemy.Column('cause', sqlalchemy.String(255)),
sqlalchemy.Column('owner', sqlalchemy.String(36)),
sqlalchemy.Column('start_time', sqlalchemy.Numeric(24, 8)),
sqlalchemy.Column('end_time', sqlalchemy.Numeric(24, 8)),
sqlalchemy.Column('timeout', sqlalchemy.Integer),
sqlalchemy.Column('inputs', types.Dict),
sqlalchemy.Column('outputs', types.Dict),
sqlalchemy.Column('data', types.Dict),
sqlalchemy.Column('status', sqlalchemy.String(255)),
sqlalchemy.Column('status_reason', sqlalchemy.Text),
sqlalchemy.Column('created_at', sqlalchemy.DateTime),
sqlalchemy.Column('updated_at', sqlalchemy.DateTime),
mysql_engine='InnoDB',
mysql_charset='utf8'
)
dependency = sqlalchemy.Table(
'dependency', meta,
sqlalchemy.Column('id', sqlalchemy.String(36),
primary_key=True, nullable=False),
sqlalchemy.Column('depended',
sqlalchemy.String(36),
sqlalchemy.ForeignKey('action.id'),
nullable=False),
sqlalchemy.Column('dependent',
sqlalchemy.String(36),
sqlalchemy.ForeignKey('action.id'),
nullable=False),
mysql_engine='InnoDB',
mysql_charset='utf8'
)
user_lock = sqlalchemy.Table(
'user_lock', meta,
sqlalchemy.Column('user_id', sqlalchemy.String(36),
primary_key=True, nullable=False),
sqlalchemy.Column('action_id', sqlalchemy.String(36)),
mysql_engine='InnoDB',
mysql_charset='utf8'
)
service = sqlalchemy.Table(
'service', meta,
sqlalchemy.Column('id', sqlalchemy.String(36),
primary_key=True, nullable=False),
sqlalchemy.Column('host', sqlalchemy.String(255)),
sqlalchemy.Column('binary', sqlalchemy.String(255)),
sqlalchemy.Column('topic', sqlalchemy.String(255)),
sqlalchemy.Column('disabled', sqlalchemy.Boolean),
sqlalchemy.Column('disabled_reason', sqlalchemy.String(255)),
sqlalchemy.Column('created_at', sqlalchemy.DateTime),
sqlalchemy.Column('updated_at', sqlalchemy.DateTime),
mysql_engine='InnoDB',
mysql_charset='utf8'
)
tables = (
policy,
user,
rule,
resource,
event,
consumption,
recharge,
action,
dependency,
user_lock,
service,
)
for index, table in enumerate(tables):
try:
table.create()
except Exception:
# If an error occurs, drop all tables created so far to return
# to the previously existing state.
meta.drop_all(tables=tables[:index])
raise
def downgrade(migrate_engine):
raise NotImplementedError('Database downgrade not supported - '
'would drop all tables')
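The create-or-roll-back loop at the end of upgrade() can be illustrated with the stdlib sqlite3 module in place of sqlalchemy-migrate; the two-table DDL below is a toy schema, not Bilean's real one:

```python
# Sketch of "create tables in order; on failure, drop everything
# created so far and re-raise", mirroring the
# meta.drop_all(tables=tables[:index]) call in upgrade() above.
import sqlite3

ddl = {
    'policy': 'CREATE TABLE policy (id TEXT PRIMARY KEY)',
    'user': 'CREATE TABLE user (id TEXT PRIMARY KEY, '
            'policy_id TEXT REFERENCES policy(id))',
}

conn = sqlite3.connect(':memory:')
created = []
try:
    for name, stmt in ddl.items():
        conn.execute(stmt)
        created.append(name)
except sqlite3.Error:
    # Restore the previously existing state before re-raising.
    for name in reversed(created):
        conn.execute('DROP TABLE %s' % name)
    raise
```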

@@ -1,57 +0,0 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import sqlalchemy
from bilean.db.sqlalchemy import types
from oslo_log import log as logging
LOG = logging.getLogger(__name__)
def upgrade(migrate_engine):
meta = sqlalchemy.MetaData()
meta.bind = migrate_engine
job = sqlalchemy.Table(
'job', meta,
sqlalchemy.Column('id', sqlalchemy.String(50),
primary_key=True, nullable=False),
sqlalchemy.Column('scheduler_id', sqlalchemy.String(36),
nullable=False),
sqlalchemy.Column('job_type', sqlalchemy.String(10),
nullable=False),
sqlalchemy.Column('parameters', types.Dict),
sqlalchemy.Column('created_at', sqlalchemy.DateTime),
sqlalchemy.Column('updated_at', sqlalchemy.DateTime),
mysql_engine='InnoDB',
mysql_charset='utf8'
)
try:
job.create()
except Exception:
LOG.error("Table |%s| not created!", repr(job))
raise
def downgrade(migrate_engine):
meta = sqlalchemy.MetaData()
meta.bind = migrate_engine
job = sqlalchemy.Table('job', meta, autoload=True)
try:
job.drop()
except Exception:
LOG.error("Job table not dropped")
raise

@@ -1,38 +0,0 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
from oslo_db.sqlalchemy import migration as oslo_migration
INIT_VERSION = 0
def db_sync(engine, version=None):
path = os.path.join(os.path.abspath(os.path.dirname(__file__)),
'migrate_repo')
return oslo_migration.db_sync(engine, path, version,
init_version=INIT_VERSION)
def db_version(engine):
path = os.path.join(os.path.abspath(os.path.dirname(__file__)),
'migrate_repo')
return oslo_migration.db_version(engine, path, INIT_VERSION)
def db_version_control(engine, version=None):
path = os.path.join(os.path.abspath(os.path.dirname(__file__)),
'migrate_repo')
return oslo_migration.db_version_control(engine, path, version)

@@ -1,266 +0,0 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
SQLAlchemy models for Bilean data.
"""
import six
from bilean.db.sqlalchemy import types
from oslo_db.sqlalchemy import models
from oslo_utils import timeutils
from oslo_utils import uuidutils
import sqlalchemy
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import backref
from sqlalchemy.orm import relationship
from sqlalchemy.orm.session import Session
BASE = declarative_base()
UUID4 = uuidutils.generate_uuid
def get_session():
from bilean.db.sqlalchemy import api as db_api
return db_api.get_session()
class BileanBase(models.ModelBase):
"""Base class for Heat Models."""
__table_args__ = {'mysql_engine': 'InnoDB'}
def expire(self, session=None, attrs=None):
"""Expire this object ()."""
if not session:
session = Session.object_session(self)
if not session:
session = get_session()
session.expire(self, attrs)
def refresh(self, session=None, attrs=None):
"""Refresh this object."""
if not session:
session = Session.object_session(self)
if not session:
session = get_session()
session.refresh(self, attrs)
def delete(self, session=None):
"""Delete this object."""
if not session:
session = Session.object_session(self)
if not session:
session = get_session()
session.delete(self)
session.flush()
def update_and_save(self, values, session=None):
if not session:
session = Session.object_session(self)
if not session:
session = get_session()
session.begin()
for k, v in six.iteritems(values):
setattr(self, k, v)
session.commit()
class SoftDelete(object):
deleted_at = sqlalchemy.Column(sqlalchemy.DateTime)
def soft_delete(self, session=None):
"""Mark this object as deleted."""
self.update_and_save({'deleted_at': timeutils.utcnow()},
session=session)
class StateAware(object):
status = sqlalchemy.Column('status', sqlalchemy.String(255))
status_reason = sqlalchemy.Column('status_reason', sqlalchemy.Text)
class User(BASE, BileanBase, SoftDelete, StateAware, models.TimestampMixin):
"""Represents a user to record account"""
__tablename__ = 'user'
id = sqlalchemy.Column(sqlalchemy.String(36), primary_key=True)
name = sqlalchemy.Column(sqlalchemy.String(255))
policy_id = sqlalchemy.Column(
sqlalchemy.String(36),
sqlalchemy.ForeignKey('policy.id'),
nullable=True)
balance = sqlalchemy.Column(sqlalchemy.Numeric(20, 8), default=0.0)
rate = sqlalchemy.Column(sqlalchemy.Numeric(20, 8), default=0.0)
credit = sqlalchemy.Column(sqlalchemy.Integer, default=0)
last_bill = sqlalchemy.Column(sqlalchemy.Numeric(24, 8))
class Policy(BASE, BileanBase, SoftDelete, models.TimestampMixin):
"""Represents a policy to collect rules."""
__tablename__ = 'policy'
id = sqlalchemy.Column(sqlalchemy.String(36), primary_key=True,
default=lambda: UUID4())
name = sqlalchemy.Column(sqlalchemy.String(255))
rules = sqlalchemy.Column(types.List)
is_default = sqlalchemy.Column(sqlalchemy.Boolean, default=False)
meta_data = sqlalchemy.Column(types.Dict)
class Rule(BASE, BileanBase, SoftDelete, models.TimestampMixin):
"""Represents a rule created to bill someone resource"""
__tablename__ = 'rule'
id = sqlalchemy.Column(sqlalchemy.String(36), primary_key=True,
default=lambda: UUID4())
name = sqlalchemy.Column(sqlalchemy.String(255))
type = sqlalchemy.Column(sqlalchemy.String(255))
spec = sqlalchemy.Column(types.Dict)
meta_data = sqlalchemy.Column(types.Dict)
class Resource(BASE, BileanBase, SoftDelete, models.TimestampMixin):
"""Represents a meta resource with rate"""
__tablename__ = 'resource'
id = sqlalchemy.Column(sqlalchemy.String(36), primary_key=True)
user_id = sqlalchemy.Column(
sqlalchemy.String(36),
sqlalchemy.ForeignKey('user.id'),
nullable=False)
rule_id = sqlalchemy.Column(sqlalchemy.String(36), nullable=True)
user = relationship(User, backref=backref('resources'))
resource_type = sqlalchemy.Column(sqlalchemy.String(36), nullable=False)
rate = sqlalchemy.Column(sqlalchemy.Numeric(20, 8), nullable=False)
last_bill = sqlalchemy.Column(sqlalchemy.Numeric(24, 8))
properties = sqlalchemy.Column(types.Dict)
class Action(BASE, BileanBase, StateAware, models.TimestampMixin):
"""Action objects."""
__tablename__ = 'action'
id = sqlalchemy.Column(sqlalchemy.String(36), primary_key=True,
default=lambda: UUID4())
name = sqlalchemy.Column(sqlalchemy.String(63))
context = sqlalchemy.Column(types.Dict)
target = sqlalchemy.Column(sqlalchemy.String(36))
action = sqlalchemy.Column(sqlalchemy.String(255))
cause = sqlalchemy.Column(sqlalchemy.String(255))
owner = sqlalchemy.Column(sqlalchemy.String(36))
start_time = sqlalchemy.Column(sqlalchemy.Numeric(24, 8))
end_time = sqlalchemy.Column(sqlalchemy.Numeric(24, 8))
timeout = sqlalchemy.Column(sqlalchemy.Integer)
inputs = sqlalchemy.Column(types.Dict)
outputs = sqlalchemy.Column(types.Dict)
data = sqlalchemy.Column(types.Dict)
class ActionDependency(BASE, BileanBase):
"""Action dependencies."""
__tablename__ = 'dependency'
id = sqlalchemy.Column(sqlalchemy.String(36), primary_key=True,
default=lambda: UUID4())
depended = sqlalchemy.Column(sqlalchemy.String(36),
sqlalchemy.ForeignKey('action.id'),
nullable=False)
dependent = sqlalchemy.Column(sqlalchemy.String(36),
sqlalchemy.ForeignKey('action.id'),
nullable=False)
class Event(BASE, BileanBase, StateAware):
"""Represents an event generated by the bilean engine."""
__tablename__ = 'event'
id = sqlalchemy.Column(sqlalchemy.String(36), primary_key=True,
default=lambda: UUID4())
timestamp = sqlalchemy.Column(sqlalchemy.DateTime)
obj_id = sqlalchemy.Column(sqlalchemy.String(36))
obj_type = sqlalchemy.Column(sqlalchemy.String(36))
obj_name = sqlalchemy.Column(sqlalchemy.String(255))
action = sqlalchemy.Column(sqlalchemy.String(36))
user_id = sqlalchemy.Column(sqlalchemy.String(36))
level = sqlalchemy.Column(sqlalchemy.Integer)
meta_data = sqlalchemy.Column(types.Dict)
class Consumption(BASE, BileanBase):
"""Consumption objects."""
__tablename__ = 'consumption'
id = sqlalchemy.Column(sqlalchemy.String(36), primary_key=True,
default=lambda: UUID4())
user_id = sqlalchemy.Column(sqlalchemy.String(36))
resource_id = sqlalchemy.Column(sqlalchemy.String(36))
resource_type = sqlalchemy.Column(sqlalchemy.String(255))
start_time = sqlalchemy.Column(sqlalchemy.Numeric(24, 8))
end_time = sqlalchemy.Column(sqlalchemy.Numeric(24, 8))
rate = sqlalchemy.Column(sqlalchemy.Numeric(20, 8))
cost = sqlalchemy.Column(sqlalchemy.Numeric(20, 8))
meta_data = sqlalchemy.Column(types.Dict)
class Recharge(BASE, BileanBase):
"""Recharge history."""
__tablename__ = 'recharge'
id = sqlalchemy.Column(sqlalchemy.String(36), primary_key=True,
default=lambda: UUID4())
user_id = sqlalchemy.Column(sqlalchemy.String(36))
type = sqlalchemy.Column(sqlalchemy.String(255))
timestamp = sqlalchemy.Column(sqlalchemy.DateTime)
value = sqlalchemy.Column(sqlalchemy.Numeric(20, 8))
meta_data = sqlalchemy.Column(types.Dict)
class Job(BASE, BileanBase):
"""Represents a job for per user"""
__tablename__ = 'job'
id = sqlalchemy.Column(sqlalchemy.String(50), primary_key=True)
scheduler_id = sqlalchemy.Column(sqlalchemy.String(36))
job_type = sqlalchemy.Column(sqlalchemy.String(10))
parameters = sqlalchemy.Column(types.Dict)
class UserLock(BASE, BileanBase):
"""User locks for engines."""
__tablename__ = 'user_lock'
user_id = sqlalchemy.Column(sqlalchemy.String(36), primary_key=True)
action_id = sqlalchemy.Column(sqlalchemy.String(36))
class Service(BASE, BileanBase, models.TimestampMixin):
"""Service registry."""
__tablename__ = 'service'
id = sqlalchemy.Column(sqlalchemy.String(36), primary_key=True,
default=lambda: UUID4())
host = sqlalchemy.Column(sqlalchemy.String(255))
binary = sqlalchemy.Column(sqlalchemy.String(255))
topic = sqlalchemy.Column(sqlalchemy.String(255))
disabled = sqlalchemy.Column(sqlalchemy.Boolean, default=False)
disabled_reason = sqlalchemy.Column(sqlalchemy.String(255))

@@ -1,112 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import json
from sqlalchemy.dialects import mysql
from sqlalchemy.ext import mutable
from sqlalchemy import types
class MutableList(mutable.Mutable, list):
@classmethod
def coerce(cls, key, value):
if not isinstance(value, MutableList):
if isinstance(value, list):
return MutableList(value)
return mutable.Mutable.coerce(key, value)
else:
return value
def __init__(self, initval=None):
list.__init__(self, initval or [])
def __getitem__(self, key):
value = list.__getitem__(self, key)
for obj, key in self._parents.items():
value._parents[obj] = key
return value
def __setitem__(self, key, value):
list.__setitem__(self, key, value)
self.changed()
def __getstate__(self):
return list(self)
def __setstate__(self, state):
self[:] = state
def append(self, value):
list.append(self, value)
self.changed()
def extend(self, iterable):
list.extend(self, iterable)
self.changed()
def insert(self, index, item):
list.insert(self, index, item)
self.changed()
def __setslice__(self, i, j, other):
list.__setslice__(self, i, j, other)
self.changed()
def pop(self, index=-1):
item = list.pop(self, index)
self.changed()
return item
def remove(self, value):
list.remove(self, value)
self.changed()
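The point of MutableList is that every mutating method ends with a changed() call, which tells SQLAlchemy's mutation-tracking extension that the column value is dirty. A standalone sketch of the same idea, counting notifications instead of flagging an ORM session:

```python
# Illustrative stand-in for MutableList above: mutations trigger a
# notification hook. Not Bilean code; changed() here just counts.

class TrackedList(list):
    def __init__(self, *args):
        super(TrackedList, self).__init__(*args)
        self.changes = 0

    def changed(self):
        # In MutableList this notifies SQLAlchemy; here we count.
        self.changes += 1

    def append(self, value):
        super(TrackedList, self).append(value)
        self.changed()

    def __setitem__(self, key, value):
        super(TrackedList, self).__setitem__(key, value)
        self.changed()


t = TrackedList([1, 2])
t.append(3)
t[0] = 9
```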
class Dict(types.TypeDecorator):
impl = types.Text
def load_dialect_impl(self, dialect):
if dialect.name == 'mysql':
return dialect.type_descriptor(mysql.LONGTEXT())
else:
return self.impl
def process_bind_param(self, value, dialect):
return json.dumps(value)
def process_result_value(self, value, dialect):
if value is None:
return None
return json.loads(value)
class List(types.TypeDecorator):
impl = types.Text
def load_dialect_impl(self, dialect):
if dialect.name == 'mysql':
return dialect.type_descriptor(mysql.LONGTEXT())
else:
return self.impl
def process_bind_param(self, value, dialect):
return json.dumps(value)
def process_result_value(self, value, dialect):
if value is None:
return None
return json.loads(value)
mutable.MutableDict.associate_with(Dict)
MutableList.associate_with(List)
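The point of `MutableList` is that every in-place mutation calls `changed()` so SQLAlchemy knows the column is dirty. The notification pattern can be sketched without SQLAlchemy at all; `TrackedList` below is an illustrative stand-in, not part of the original module:

```python
class TrackedList(list):
    """Pure-Python sketch of the change-notification pattern used by
    MutableList above: every in-place mutation flips a dirty flag, the
    way MutableList calls self.changed() to notify the ORM."""

    def __init__(self, initval=None):
        super(TrackedList, self).__init__(initval or [])
        self.dirty = False

    def changed(self):
        self.dirty = True

    def append(self, value):
        list.append(self, value)
        self.changed()

    def __setitem__(self, key, value):
        list.__setitem__(self, key, value)
        self.changed()


tl = TrackedList([1, 2])
tl.append(3)  # mutation marks the list dirty
```

In the real class the `changed()` call propagates to the parent mapped object via `mutable.Mutable`, which is why plain `list` cannot be used directly as a column type.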


@ -1,47 +0,0 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
class LazyPluggable(object):
"""A pluggable backend loaded lazily based on some value."""
def __init__(self, pivot, **backends):
self.__backends = backends
self.__pivot = pivot
self.__backend = None
def __get_backend(self):
if not self.__backend:
backend_name = 'sqlalchemy'
backend = self.__backends[backend_name]
if isinstance(backend, tuple):
name = backend[0]
fromlist = backend[1]
else:
name = backend
fromlist = backend
self.__backend = __import__(name, None, None, fromlist)
return self.__backend
def __getattr__(self, key):
backend = self.__get_backend()
return getattr(backend, key)
IMPL = LazyPluggable('backend',
sqlalchemy='bilean.db.sqlalchemy.api')
def purge_deleted(age, granularity='days'):
IMPL.purge_deleted(age, granularity)
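`LazyPluggable` defers the backend import until the first attribute access on `IMPL`. The same pattern can be exercised in isolation with a stdlib stand-in backend (`json` here is purely illustrative; the production code resolves the SQLAlchemy API module instead):

```python
class LazyPluggable(object):
    """Same pattern as above: the backend module is imported only on
    first attribute access, not at definition time."""

    def __init__(self, pivot, **backends):
        self.__backends = backends
        self.__pivot = pivot
        self.__backend = None

    def __get_backend(self):
        if not self.__backend:
            # Hard-coded pivot value for the sketch; the real class
            # would look the pivot up in configuration.
            backend = self.__backends['json']
            self.__backend = __import__(backend, None, None, [backend])
        return self.__backend

    def __getattr__(self, key):
        return getattr(self.__get_backend(), key)


IMPL = LazyPluggable('backend', json='json')
payload = IMPL.dumps({'age': 7})  # resolves to json.dumps lazily
```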


@ -1,54 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import copy
from oslo_config import cfg
from bilean.engine import environment
CONF = cfg.CONF
CONF.import_opt('cloud_backend', 'bilean.common.config')
class DriverBase(object):
'''Base class for all drivers.'''
def __init__(self, params=None):
if params is None:
params = {
'auth_url': CONF.authentication.auth_url,
'username': CONF.authentication.service_username,
'password': CONF.authentication.service_password,
'project_name': CONF.authentication.service_project_name,
'user_domain_name':
cfg.CONF.authentication.service_user_domain,
'project_domain_name':
cfg.CONF.authentication.service_project_domain,
}
self.conn_params = copy.deepcopy(params)
class BileanDriver(object):
'''Generic driver class'''
def __init__(self, backend_name=None):
if backend_name is None:
backend_name = cfg.CONF.cloud_backend
backend = environment.global_env().get_driver(backend_name)
self.compute = backend.compute
self.network = backend.network
self.identity = backend.identity
self.block_store = backend.block_store


@ -1,22 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from bilean.drivers.openstack import cinder_v2
from bilean.drivers.openstack import keystone_v3
from bilean.drivers.openstack import neutron_v2
from bilean.drivers.openstack import nova_v2
compute = nova_v2.NovaClient
identity = keystone_v3.KeystoneClient
network = neutron_v2.NeutronClient
block_store = cinder_v2.CinderClient


@ -1,33 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from bilean.drivers import base
from bilean.drivers.openstack import sdk
class CinderClient(base.DriverBase):
'''Cinder V2 driver.'''
def __init__(self, params=None):
super(CinderClient, self).__init__(params)
self.conn = sdk.create_connection(self.conn_params)
@sdk.translate_exception
def volume_get(self, volume):
'''Get a single volume.'''
return self.conn.block_store.get_volume(volume)
@sdk.translate_exception
def volume_delete(self, volume, ignore_missing=True):
'''Delete a volume.'''
self.conn.block_store.delete_volume(volume,
ignore_missing=ignore_missing)


@ -1,67 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
from bilean.drivers import base
from bilean.drivers.openstack import sdk
CONF = cfg.CONF
class KeystoneClient(base.DriverBase):
'''Keystone V3 driver.'''
def __init__(self, params=None):
super(KeystoneClient, self).__init__(params)
self.conn = sdk.create_connection(self.conn_params)
@sdk.translate_exception
def project_find(self, name_or_id, ignore_missing=True):
'''Find a single project
:param name_or_id: The name or ID of a project.
:param bool ignore_missing: When set to ``False``
:class:`~openstack.exceptions.ResourceNotFound` will be
raised when the resource does not exist.
When set to ``True``, None will be returned when
attempting to find a nonexistent resource.
:returns: One :class:`~openstack.identity.v3.project.Project` or None
'''
project = self.conn.identity.find_project(
name_or_id, ignore_missing=ignore_missing)
return project
@sdk.translate_exception
def project_list(self, **queries):
'''Function to get project list.'''
return self.conn.identity.projects(**queries)
@classmethod
def get_service_credentials(cls, **kwargs):
'''Get Bilean service credentials for use with Keystone.
:param kwargs: An additional keyword argument list that can be used
for customizing the default settings.
'''
creds = {
'auth_url': CONF.authentication.auth_url,
'username': CONF.authentication.service_username,
'password': CONF.authentication.service_password,
'project_name': CONF.authentication.service_project_name,
'user_domain_name': cfg.CONF.authentication.service_user_domain,
'project_domain_name':
cfg.CONF.authentication.service_project_domain,
}
creds.update(**kwargs)
return creds


@ -1,125 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from bilean.drivers import base
from bilean.drivers.openstack import sdk
class NeutronClient(base.DriverBase):
'''Neutron V2 driver.'''
def __init__(self, params=None):
super(NeutronClient, self).__init__(params)
self.conn = sdk.create_connection(self.conn_params)
@sdk.translate_exception
def network_get(self, name_or_id):
network = self.conn.network.find_network(name_or_id)
return network
@sdk.translate_exception
def network_delete(self, network, ignore_missing=True):
self.conn.network.delete_network(
network, ignore_missing=ignore_missing)
return
@sdk.translate_exception
def subnet_get(self, name_or_id):
subnet = self.conn.network.find_subnet(name_or_id)
return subnet
@sdk.translate_exception
def subnet_delete(self, subnet, ignore_missing=True):
self.conn.network.delete_subnet(
subnet, ignore_missing=ignore_missing)
return
@sdk.translate_exception
def loadbalancer_get(self, name_or_id):
lb = self.conn.network.find_load_balancer(name_or_id)
return lb
@sdk.translate_exception
def loadbalancer_list(self):
lbs = [lb for lb in self.conn.network.load_balancers()]
return lbs
@sdk.translate_exception
def loadbalancer_delete(self, lb_id, ignore_missing=True):
self.conn.network.delete_load_balancer(
lb_id, ignore_missing=ignore_missing)
return
@sdk.translate_exception
def listener_get(self, name_or_id):
listener = self.conn.network.find_listener(name_or_id)
return listener
@sdk.translate_exception
def listener_list(self):
listeners = [i for i in self.conn.network.listeners()]
return listeners
@sdk.translate_exception
def listener_delete(self, listener_id, ignore_missing=True):
self.conn.network.delete_listener(listener_id,
ignore_missing=ignore_missing)
return
@sdk.translate_exception
def pool_get(self, name_or_id):
pool = self.conn.network.find_pool(name_or_id)
return pool
@sdk.translate_exception
def pool_list(self):
pools = [p for p in self.conn.network.pools()]
return pools
@sdk.translate_exception
def pool_delete(self, pool_id, ignore_missing=True):
self.conn.network.delete_pool(pool_id,
ignore_missing=ignore_missing)
return
@sdk.translate_exception
def pool_member_get(self, pool_id, name_or_id):
member = self.conn.network.find_pool_member(name_or_id,
pool_id)
return member
@sdk.translate_exception
def pool_member_list(self, pool_id):
members = [m for m in self.conn.network.pool_members(pool_id)]
return members
@sdk.translate_exception
def pool_member_delete(self, pool_id, member_id, ignore_missing=True):
self.conn.network.delete_pool_member(
member_id, pool_id, ignore_missing=ignore_missing)
return
@sdk.translate_exception
def healthmonitor_get(self, name_or_id):
hm = self.conn.network.find_health_monitor(name_or_id)
return hm
@sdk.translate_exception
def healthmonitor_list(self):
hms = [hm for hm in self.conn.network.health_monitors()]
return hms
@sdk.translate_exception
def healthmonitor_delete(self, hm_id, ignore_missing=True):
self.conn.network.delete_health_monitor(
hm_id, ignore_missing=ignore_missing)
return


@ -1,72 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
from bilean.drivers import base
from bilean.drivers.openstack import sdk
class NovaClient(base.DriverBase):
'''Nova V2 driver.'''
def __init__(self, params=None):
super(NovaClient, self).__init__(params)
self.conn = sdk.create_connection(self.conn_params)
@sdk.translate_exception
def flavor_find(self, name_or_id, ignore_missing=False):
return self.conn.compute.find_flavor(name_or_id, ignore_missing)
@sdk.translate_exception
def flavor_list(self, details=True, **query):
return self.conn.compute.flavors(details, **query)
@sdk.translate_exception
def image_find(self, name_or_id, ignore_missing=False):
return self.conn.compute.find_image(name_or_id, ignore_missing)
@sdk.translate_exception
def image_list(self, details=True, **query):
return self.conn.compute.images(details, **query)
@sdk.translate_exception
def image_delete(self, value, ignore_missing=True):
return self.conn.compute.delete_image(value, ignore_missing)
@sdk.translate_exception
def server_get(self, value):
return self.conn.compute.get_server(value)
@sdk.translate_exception
def server_list(self, details=True, **query):
return self.conn.compute.servers(details, **query)
@sdk.translate_exception
def server_update(self, value, **attrs):
return self.conn.compute.update_server(value, **attrs)
@sdk.translate_exception
def server_delete(self, value, ignore_missing=True):
return self.conn.compute.delete_server(value, ignore_missing)
@sdk.translate_exception
def wait_for_server_delete(self, value, timeout=None):
'''Wait for server deletion to complete.'''
if timeout is None:
timeout = cfg.CONF.default_action_timeout
server_obj = self.conn.compute.find_server(value, True)
if server_obj:
self.conn.compute.wait_for_delete(server_obj, wait=timeout)
return


@ -1,114 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
'''
SDK Client
'''
import functools
from oslo_log import log as logging
import six
from openstack import connection
from openstack import exceptions as sdk_exc
from openstack import profile
from oslo_serialization import jsonutils
from requests import exceptions as req_exc
from bilean.common import exception as bilean_exc
USER_AGENT = 'bilean'
exc = sdk_exc
LOG = logging.getLogger(__name__)
def parse_exception(ex):
'''Parse exception code and yield useful information.'''
code = 500
if isinstance(ex, sdk_exc.HttpException):
# some exceptions don't contain status_code
if ex.http_status is not None:
code = ex.http_status
message = ex.message
data = {}
try:
data = jsonutils.loads(ex.details)
except Exception:
# Some exceptions don't have details record or
# are not in JSON format
pass
# try dig more into the exception record
# usually 'data' has two types of format :
# type1: {"forbidden": {"message": "error message", "code": 403}
# type2: {"code": 404, "error": { "message": "not found"}}
if data:
code = data.get('code', code)
message = data.get('message', message)
error = data.get('error')
if error:
code = data.get('code', code)
message = data['error'].get('message', message)
else:
for value in data.values():
code = value.get('code', code)
message = value.get('message', message)
elif isinstance(ex, sdk_exc.SDKException):
# Besides HttpException there are some other exceptions like
# ResourceTimeout can be raised from SDK, handle them here.
message = ex.message
elif isinstance(ex, req_exc.RequestException):
# Exceptions that are not captured by SDK
code = ex.errno
message = six.text_type(ex)
elif isinstance(ex, Exception):
message = six.text_type(ex)
raise bilean_exc.InternalError(code=code, message=message)
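`parse_exception` has to cope with the two JSON error shapes noted in its comments. The extraction logic alone can be sketched and exercised in isolation; `extract_error` and the isinstance guard are illustrative additions, not SDK code:

```python
import json


def extract_error(details, default_code=500, default_message=''):
    """Pull (code, message) out of either error payload shape:
    type1: {"forbidden": {"message": "error message", "code": 403}}
    type2: {"code": 404, "error": {"message": "not found"}}"""
    code, message = default_code, default_message
    try:
        data = json.loads(details)
    except (TypeError, ValueError):
        # Not every exception carries a JSON details record.
        return code, message
    code = data.get('code', code)
    message = data.get('message', message)
    error = data.get('error')
    if error:
        message = error.get('message', message)
    else:
        for value in data.values():
            if isinstance(value, dict):
                code = value.get('code', code)
                message = value.get('message', message)
    return code, message
```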
def translate_exception(func):
"""Decorator for exception translation."""
@functools.wraps(func)
def invoke_with_catch(driver, *args, **kwargs):
try:
return func(driver, *args, **kwargs)
except Exception as ex:
LOG.exception(ex)
raise parse_exception(ex)
return invoke_with_catch
def create_connection(params=None):
if params is None:
params = {}
if params.get('token'):
auth_plugin = 'token'
else:
auth_plugin = 'password'
prof = profile.Profile()
prof.set_version('identity', 'v3')
if 'region_name' in params:
prof.set_region(prof.ALL, params['region_name'])
params.pop('region_name')
try:
conn = connection.Connection(profile=prof, user_agent=USER_AGENT,
auth_plugin=auth_plugin, **params)
except Exception as ex:
raise parse_exception(ex)
return conn


@ -1,407 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import six
import time
from oslo_config import cfg
from oslo_log import log as logging
from oslo_utils import timeutils
from bilean.common import context as req_context
from bilean.common import exception
from bilean.common.i18n import _, _LE
from bilean.common import utils
from bilean.db import api as db_api
from bilean.engine import event as EVENT
wallclock = time.time
LOG = logging.getLogger(__name__)
# Action causes
CAUSES = (
CAUSE_RPC, CAUSE_DERIVED,
) = (
'RPC Request',
'Derived Action',
)
class Action(object):
'''An action can be performed on a user, rule or policy.'''
RETURNS = (
RES_OK, RES_ERROR, RES_RETRY, RES_CANCEL, RES_TIMEOUT,
) = (
'OK', 'ERROR', 'RETRY', 'CANCEL', 'TIMEOUT',
)
# Action status definitions:
# INIT: Not ready to be executed because fields are being modified,
# or dependencies on other actions are being analyzed.
# READY: Initialized and ready to be executed by a worker.
# RUNNING: Being executed by a worker thread.
# SUCCEEDED: Completed with success.
# FAILED: Completed with failure.
# CANCELLED: Action cancelled because worker thread was cancelled.
STATUSES = (
INIT, WAITING, READY, RUNNING, SUSPENDED,
SUCCEEDED, FAILED, CANCELLED
) = (
'INIT', 'WAITING', 'READY', 'RUNNING', 'SUSPENDED',
'SUCCEEDED', 'FAILED', 'CANCELLED',
)
# Signal commands
COMMANDS = (
SIG_CANCEL, SIG_SUSPEND, SIG_RESUME,
) = (
'CANCEL', 'SUSPEND', 'RESUME',
)
def __new__(cls, target, action, context, **kwargs):
if (cls != Action):
return super(Action, cls).__new__(cls)
target_type = action.split('_')[0]
if target_type == 'USER':
from bilean.engine.actions import user_action
ActionClass = user_action.UserAction
# elif target_type == 'RULE':
# from bilean.engine.actions import rule_action
# ActionClass = rule_action.RuleAction
# elif target_type == 'POLICY':
# from bilean.engine.actions import policy_action
# ActionClass = policy_action.PolicyAction
return super(Action, cls).__new__(ActionClass)
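The `__new__` override turns the base class into a factory: the prefix of the action name selects the concrete subclass. The dispatch pattern in isolation (class and action names below are illustrative):

```python
class Action(object):
    """Factory-style __new__, as in the engine code above: the
    action-name prefix picks the concrete subclass to instantiate."""

    def __new__(cls, target, action):
        if cls is not Action:
            # A subclass was instantiated directly; no dispatch needed.
            return super(Action, cls).__new__(cls)
        if action.split('_')[0] == 'USER':
            return super(Action, cls).__new__(UserAction)
        return super(Action, cls).__new__(cls)

    def __init__(self, target, action):
        self.target = target
        self.action = action


class UserAction(Action):
    pass


a = Action('user-1', 'USER_CREATE_RESOURCE')  # dispatched to UserAction
b = Action('rule-1', 'RULE_UPDATE')           # stays a plain Action
```

Because `UserAction` is a subclass of `Action`, Python still calls `__init__` on the object returned by `__new__`, so construction arguments flow through unchanged.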
def __init__(self, target, action, context, **kwargs):
# context will be persisted into database so that any worker thread
# can pick the action up and execute it on behalf of the initiator
self.id = kwargs.get('id', None)
self.name = kwargs.get('name', '')
self.context = context
self.action = action
self.target = target
# Why this action is fired, it can be a UUID of another action
self.cause = kwargs.get('cause', '')
# Owner can be a UUID-format ID for the worker that is currently
# working on the action. It also serves as a lock.
self.owner = kwargs.get('owner', None)
self.start_time = utils.make_decimal(kwargs.get('start_time', 0))
self.end_time = utils.make_decimal(kwargs.get('end_time', 0))
# Timeout is a placeholder in case some actions may linger too long
self.timeout = kwargs.get('timeout', cfg.CONF.default_action_timeout)
# Return code, useful when action is not automatically deleted
# after execution
self.status = kwargs.get('status', self.INIT)
self.status_reason = kwargs.get('status_reason', '')
# All parameters are passed in using keyword arguments which is
# a dictionary stored as JSON in DB
self.inputs = kwargs.get('inputs', {})
self.outputs = kwargs.get('outputs', {})
self.created_at = kwargs.get('created_at', None)
self.updated_at = kwargs.get('updated_at', None)
self.data = kwargs.get('data', {})
def store(self, context):
"""Store the action record into database table.
:param context: An instance of the request context.
:return: The ID of the stored object.
"""
timestamp = timeutils.utcnow()
values = {
'name': self.name,
'context': self.context.to_dict(),
'target': self.target,
'action': self.action,
'cause': self.cause,
'owner': self.owner,
'start_time': utils.format_decimal(self.start_time),
'end_time': utils.format_decimal(self.end_time),
'timeout': self.timeout,
'status': self.status,
'status_reason': self.status_reason,
'inputs': self.inputs,
'outputs': self.outputs,
'created_at': self.created_at,
'updated_at': self.updated_at,
'data': self.data,
}
if self.id:
self.updated_at = timestamp
values['updated_at'] = timestamp
db_api.action_update(context, self.id, values)
else:
self.created_at = timestamp
values['created_at'] = timestamp
action = db_api.action_create(context, values)
self.id = action.id
return self.id
@classmethod
def _from_db_record(cls, record):
"""Construct a action object from database record.
:param context: the context used for DB operations;
:param record: a DB action object that contains all fields.
:return: An `Action` object deserialized from the DB action object.
"""
context = req_context.RequestContext.from_dict(record.context)
kwargs = {
'id': record.id,
'name': record.name,
'cause': record.cause,
'owner': record.owner,
'start_time': record.start_time,
'end_time': record.end_time,
'timeout': record.timeout,
'status': record.status,
'status_reason': record.status_reason,
'inputs': record.inputs or {},
'outputs': record.outputs or {},
'created_at': record.created_at,
'updated_at': record.updated_at,
'data': record.data,
}
return cls(record.target, record.action, context, **kwargs)
@classmethod
def load(cls, context, action_id=None, db_action=None):
"""Retrieve an action from database.
:param context: Instance of request context.
:param action_id: A UUID of the action to deserialize.
:param db_action: A DB action object for the action to deserialize.
:return: An `Action` object instance.
"""
if db_action is None:
db_action = db_api.action_get(context, action_id)
if db_action is None:
raise exception.ActionNotFound(action=action_id)
return cls._from_db_record(db_action)
@classmethod
def load_all(cls, context, filters=None, limit=None, marker=None,
sort_keys=None, sort_dir=None):
"""Retrieve all actions from database."""
records = db_api.action_get_all(context, filters=filters,
limit=limit, marker=marker,
sort_keys=sort_keys,
sort_dir=sort_dir)
for record in records:
yield cls._from_db_record(record)
@classmethod
def create(cls, context, target, action, **kwargs):
"""Create an action object.
:param context: The requesting context.
:param target: The ID of the target.
:param action: Name of the action.
:param dict kwargs: Other keyword arguments for the action.
:return: ID of the action created.
"""
params = {
'user': context.user,
'project': context.project,
'domain': context.domain,
'is_admin': context.is_admin,
'request_id': context.request_id,
'trusts': context.trusts,
}
ctx = req_context.RequestContext.from_dict(params)
obj = cls(target, action, ctx, **kwargs)
return obj.store(context)
@classmethod
def delete(cls, context, action_id):
"""Delete an action from database."""
db_api.action_delete(context, action_id)
def signal(self, cmd):
'''Send a signal to the action.'''
if cmd not in self.COMMANDS:
return
if cmd == self.SIG_CANCEL:
expected_statuses = (self.INIT, self.WAITING, self.READY,
self.RUNNING)
elif cmd == self.SIG_SUSPEND:
expected_statuses = (self.RUNNING,)
else:  # SIG_RESUME
expected_statuses = (self.SUSPENDED,)
if self.status not in expected_statuses:
reason = _("Action (%(action)s) is in unexpected status "
"(%(actual)s) while expected status should be one of "
"(%(expected)s).") % dict(action=self.id,
expected=expected_statuses,
actual=self.status)
EVENT.error(self.context, self, cmd, status_reason=reason)
return
db_api.action_signal(self.context, self.id, cmd)
def execute(self, **kwargs):
'''Execute the action.
In theory, the action encapsulates all information needed for
execution. 'kwargs' may specify additional parameters.
:param kwargs: additional parameters that may override the default
properties stored in the action record.
'''
return NotImplemented
def set_status(self, result, reason=None):
"""Set action status based on return value from execute."""
timestamp = wallclock()
if result == self.RES_OK:
status = self.SUCCEEDED
db_api.action_mark_succeeded(self.context, self.id, timestamp)
elif result == self.RES_ERROR:
status = self.FAILED
db_api.action_mark_failed(self.context, self.id, timestamp,
reason=reason or 'ERROR')
elif result == self.RES_TIMEOUT:
status = self.FAILED
db_api.action_mark_failed(self.context, self.id, timestamp,
reason=reason or 'TIMEOUT')
elif result == self.RES_CANCEL:
status = self.CANCELLED
db_api.action_mark_cancelled(self.context, self.id, timestamp)
else: # result == self.RES_RETRY:
status = self.READY
# Action failed at the moment, but can be retried
# We abandon it and then notify other dispatchers to execute it
db_api.action_abandon(self.context, self.id)
if status == self.SUCCEEDED:
EVENT.info(self.context, self, self.action, status, reason)
elif status == self.READY:
EVENT.warning(self.context, self, self.action, status, reason)
else:
EVENT.error(self.context, self, self.action, status, reason)
self.status = status
self.status_reason = reason
def get_status(self):
timestamp = wallclock()
status = db_api.action_check_status(self.context, self.id, timestamp)
self.status = status
return status
def is_timeout(self):
time_lapse = wallclock() - self.start_time
return time_lapse > self.timeout
def _check_signal(self):
# Check timeout first, if true, return timeout message
if self.timeout is not None and self.is_timeout():
EVENT.debug(self.context, self, self.action, 'TIMEOUT')
return self.RES_TIMEOUT
result = db_api.action_signal_query(self.context, self.id)
return result
def is_cancelled(self):
return self._check_signal() == self.SIG_CANCEL
def is_suspended(self):
return self._check_signal() == self.SIG_SUSPEND
def is_resumed(self):
return self._check_signal() == self.SIG_RESUME
def to_dict(self):
if self.id:
dep_on = db_api.dependency_get_depended(self.context, self.id)
dep_by = db_api.dependency_get_dependents(self.context, self.id)
else:
dep_on = []
dep_by = []
action_dict = {
'id': self.id,
'name': self.name,
'action': self.action,
'target': self.target,
'cause': self.cause,
'owner': self.owner,
'start_time': utils.dec2str(self.start_time),
'end_time': utils.dec2str(self.end_time),
'timeout': self.timeout,
'status': self.status,
'status_reason': self.status_reason,
'inputs': self.inputs,
'outputs': self.outputs,
'depends_on': dep_on,
'depended_by': dep_by,
'created_at': self.created_at,
'updated_at': self.updated_at,
'data': self.data,
}
return action_dict
def ActionProc(context, action_id):
'''Action process.'''
action = Action.load(context, action_id=action_id)
if action is None:
LOG.error(_LE('Action "%s" could not be found.'), action_id)
return False
reason = 'Action completed'
success = True
try:
result, reason = action.execute()
except Exception as ex:
result = action.RES_ERROR
reason = six.text_type(ex)
LOG.exception(_('Unexpected exception occurred during action '
'%(action)s (%(id)s) execution: %(reason)s'),
{'action': action.action, 'id': action.id,
'reason': reason})
success = False
finally:
# NOTE: locks on action is eventually released here by status update
action.set_status(result, reason)
return success


@ -1,169 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import six
from bilean.common import exception
from bilean.common.i18n import _, _LE, _LI
from bilean.engine.actions import base
from bilean.engine import event as EVENT
from bilean.engine.flows import flow as bilean_flow
from bilean.engine import lock as bilean_lock
from bilean.engine import user as user_mod
from bilean.plugins import base as plugin_base
from oslo_log import log as logging
LOG = logging.getLogger(__name__)
class UserAction(base.Action):
"""An action that can be performed on a user."""
ACTIONS = (
USER_CREATE_RESOURCE, USER_UPDATE_RESOURCE, USER_DELETE_RESOURCE,
USER_SETTLE_ACCOUNT,
) = (
'USER_CREATE_RESOURCE', 'USER_UPDATE_RESOURCE', 'USER_DELETE_RESOURCE',
'USER_SETTLE_ACCOUNT',
)
def __init__(self, target, action, context, **kwargs):
"""Constructor for a user action object.
:param target: ID of the target user object on which the action is to
be executed.
:param action: The name of the action to be executed.
:param context: The context used for accessing the DB layer.
:param dict kwargs: Additional parameters that can be passed to the
action.
"""
super(UserAction, self).__init__(target, action, context, **kwargs)
try:
self.user = user_mod.User.load(self.context, user_id=self.target)
except Exception:
self.user = None
def do_create_resource(self):
resource = plugin_base.Resource.from_dict(self.inputs)
try:
flow_engine = bilean_flow.get_create_resource_flow(
self.context, self.target, resource)
with bilean_flow.DynamicLogListener(flow_engine, logger=LOG):
flow_engine.run()
except Exception as ex:
LOG.error(_LE("Faied to execute action(%(action_id)s), error: "
"%(error_msg)s"), {"action_id": self.id,
"error_msg": six.text_type(ex)})
return self.RES_ERROR, _('Resource creation failed.')
return self.RES_OK, _('Resource creation successfully.')
def do_update_resource(self):
try:
values = self.inputs
resource_id = values.pop('id', None)
resource = plugin_base.Resource.load(
self.context, resource_id=resource_id)
except exception.ResourceNotFound:
LOG.error(_LE('The resource (%s) to update was not found.'),
resource_id)
return self.RES_ERROR, _('Resource not found.')
try:
flow_engine = bilean_flow.get_update_resource_flow(
self.context, self.target, resource, values)
with bilean_flow.DynamicLogListener(flow_engine, logger=LOG):
flow_engine.run()
except Exception as ex:
LOG.error(_LE("Faied to execute action(%(action_id)s), error: "
"%(error_msg)s"), {"action_id": self.id,
"error_msg": six.text_type(ex)})
return self.RES_ERROR, _('Resource update failed.')
LOG.info(_LI('Successfully updated resource: %s'), resource.id)
return self.RES_OK, _('Resource updated successfully.')
def do_delete_resource(self):
try:
resource_id = self.inputs.get('resource_id')
resource = plugin_base.Resource.load(
self.context, resource_id=resource_id)
except exception.ResourceNotFound:
LOG.error(_LE('The resource (%s) to delete was not found.'),
resource_id)
return self.RES_ERROR, _('Resource not found.')
try:
flow_engine = bilean_flow.get_delete_resource_flow(
self.context, self.target, resource)
with bilean_flow.DynamicLogListener(flow_engine, logger=LOG):
flow_engine.run()
except Exception as ex:
LOG.error(_LE("Faied to execute action(%(action_id)s), error: "
"%(error_msg)s"), {"action_id": self.id,
"error_msg": six.text_type(ex)})
return self.RES_ERROR, _('Resource deletion failed.')
LOG.info(_LI('Successfully deleted resource: %s'), resource.id)
return self.RES_OK, _('Resource deleted successfully.')
def do_settle_account(self):
try:
flow_engine = bilean_flow.get_settle_account_flow(
self.context, self.target, task=self.inputs.get('task'))
with bilean_flow.DynamicLogListener(flow_engine, logger=LOG):
flow_engine.run()
except Exception as ex:
            LOG.error(_LE("Failed to execute action (%(action_id)s), error: "
                          "%(error_msg)s"), {"action_id": self.id,
                                             "error_msg": six.text_type(ex)})
            return self.RES_ERROR, _('Failed to settle account.')
        return self.RES_OK, _('Account settled successfully.')
def _execute(self):
        """Private function that finds the right handler and executes it."""
action_name = self.action.lower()
method_name = action_name.replace('user', 'do')
method = getattr(self, method_name, None)
if method is None:
reason = _('Unsupported action: %s') % self.action
EVENT.error(self.context, self.user, self.action, 'Failed', reason)
return self.RES_ERROR, reason
return method()
def execute(self, **kwargs):
"""Interface function for action execution.
:param dict kwargs: Parameters provided to the action, if any.
:returns: A tuple containing the result and the related reason.
"""
try:
res = bilean_lock.user_lock_acquire(self.context, self.target,
self.id, self.owner)
if not res:
LOG.error(_LE('Failed grabbing the lock for user: %s'),
self.target)
res = self.RES_ERROR
reason = _('Failed in locking user')
else:
res, reason = self._execute()
finally:
bilean_lock.user_lock_release(self.target, self.id)
return res, reason
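The `execute()` method above follows a lock-wrap pattern: acquire the user lock, dispatch to the handler, and release the lock in a `finally` block so it is freed even when acquisition or execution fails. A minimal stand-alone sketch of that pattern (the function and its parameters are illustrative, not Bilean APIs):

```python
def run_with_lock(acquire, release, handler):
    # Acquire the lock, run the handler, and always release the lock
    # in a finally block, mirroring Action.execute() above.
    try:
        if not acquire():
            return 'ERROR', 'Failed in locking user'
        return handler()
    finally:
        release()
```

Note that, as in `Action.execute()`, the release callback runs even when the lock was never acquired, so the underlying release operation must tolerate that case.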


@ -1,117 +0,0 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from bilean.common import exception
from bilean.common import utils
from bilean.db import api as db_api
class Consumption(object):
"""Class reference to consumption record."""
def __init__(self, user_id, **kwargs):
self.id = kwargs.get('id')
self.user_id = user_id
self.resource_id = kwargs.get('resource_id')
self.resource_type = kwargs.get('resource_type')
self.start_time = utils.make_decimal(kwargs.get('start_time', 0))
self.end_time = utils.make_decimal(kwargs.get('end_time', 0))
self.rate = utils.make_decimal(kwargs.get('rate', 0))
self.cost = utils.make_decimal(kwargs.get('cost', 0))
self.metadata = kwargs.get('metadata')
@classmethod
def from_db_record(cls, record):
'''Construct a consumption object from a database record.'''
kwargs = {
'id': record.id,
'resource_id': record.resource_id,
'resource_type': record.resource_type,
'start_time': record.start_time,
'end_time': record.end_time,
'rate': record.rate,
'cost': record.cost,
'metadata': record.meta_data,
}
return cls(record.user_id, **kwargs)
@classmethod
def load(cls, context, db_consumption=None, consumption_id=None,
project_safe=True):
'''Retrieve a consumption record from database.'''
if db_consumption is not None:
return cls.from_db_record(db_consumption)
record = db_api.consumption_get(context, consumption_id,
project_safe=project_safe)
if record is None:
raise exception.ConsumptionNotFound(consumption=consumption_id)
return cls.from_db_record(record)
@classmethod
def load_all(cls, context, user_id=None, limit=None, marker=None,
sort_keys=None, sort_dir=None, filters=None,
project_safe=True):
'''Retrieve all consumptions from database.'''
records = db_api.consumption_get_all(context,
user_id=user_id,
limit=limit,
marker=marker,
filters=filters,
sort_keys=sort_keys,
sort_dir=sort_dir,
project_safe=project_safe)
for record in records:
yield cls.from_db_record(record)
def store(self, context):
'''Store the consumption into database and return its ID.'''
values = {
'user_id': self.user_id,
'resource_id': self.resource_id,
'resource_type': self.resource_type,
'start_time': utils.format_decimal(self.start_time),
'end_time': utils.format_decimal(self.end_time),
'rate': utils.format_decimal(self.rate),
'cost': utils.format_decimal(self.cost),
'meta_data': self.metadata,
}
consumption = db_api.consumption_create(context, values)
self.id = consumption.id
return self.id
def delete(self, context):
'''Delete consumption from database.'''
db_api.consumption_delete(context, self.id)
def to_dict(self):
consumption = {
'id': self.id,
'user_id': self.user_id,
'resource_id': self.resource_id,
'resource_type': self.resource_type,
'start_time': utils.dec2str(self.start_time),
'end_time': utils.dec2str(self.end_time),
'rate': utils.dec2str(self.rate),
'cost': utils.dec2str(self.cost),
'metadata': self.metadata,
}
return consumption
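The `Consumption` fields above round-trip through `utils.make_decimal()` on load and `utils.format_decimal()` on store. Assuming those helpers wrap `decimal.Decimal` and `str()` (their implementation is not shown here), the round-trip looks like:

```python
from decimal import Decimal


def make_decimal(value):
    # Convert via str() so that float inputs don't carry binary
    # rounding noise into the Decimal.
    return Decimal(str(value))


def format_decimal(value):
    # Store decimals as strings to keep them exact in the database.
    return str(value)
```

Converting through `str()` matters: `Decimal(0.1)` would capture the full binary float expansion, while `Decimal(str(0.1))` yields the exact value `0.1`.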


@ -1,112 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_context import context as oslo_context
from oslo_log import log as logging
import oslo_messaging
from oslo_service import service
from bilean.common import consts
from bilean.common.i18n import _LI
from bilean.common import messaging as rpc_messaging
LOG = logging.getLogger(__name__)
OPERATIONS = (
START_ACTION, CANCEL_ACTION, STOP
) = (
'start_action', 'cancel_action', 'stop'
)
class Dispatcher(service.Service):
'''Listen on an AMQP queue named for the engine.
Receive notification from engine services and schedule actions.
'''
def __init__(self, engine_service, topic, version, thread_group_mgr):
super(Dispatcher, self).__init__()
self.TG = thread_group_mgr
self.engine_id = engine_service.engine_id
self.topic = topic
self.version = version
def start(self):
super(Dispatcher, self).start()
self.target = oslo_messaging.Target(server=self.engine_id,
topic=self.topic,
version=self.version)
server = rpc_messaging.get_rpc_server(self.target, self)
server.start()
def listening(self, ctxt):
'''Respond affirmatively to confirm that engine is still alive.'''
return True
def start_action(self, ctxt, action_id=None):
self.TG.start_action(self.engine_id, action_id)
def cancel_action(self, ctxt, action_id):
'''Cancel an action.'''
self.TG.cancel_action(action_id)
def suspend_action(self, ctxt, action_id):
'''Suspend an action.'''
self.TG.suspend_action(action_id)
def resume_action(self, ctxt, action_id):
'''Resume an action.'''
self.TG.resume_action(action_id)
def stop(self):
super(Dispatcher, self).stop()
# Wait for all action threads to be finished
LOG.info(_LI("Stopping all action threads of engine %s"),
self.engine_id)
# Stop ThreadGroup gracefully
self.TG.stop(True)
LOG.info(_LI("All action threads have been finished"))
def notify(method, engine_id=None, **kwargs):
'''Send notification to dispatcher
:param method: remote method to call
:param engine_id: dispatcher to notify; None implies broadcast
'''
client = rpc_messaging.get_rpc_client(version=consts.RPC_API_VERSION)
if engine_id:
# Notify specific dispatcher identified by engine_id
call_context = client.prepare(
version=consts.RPC_API_VERSION,
topic=consts.ENGINE_DISPATCHER_TOPIC,
server=engine_id)
else:
        # Broadcast to all dispatchers
call_context = client.prepare(
version=consts.RPC_API_VERSION,
topic=consts.ENGINE_DISPATCHER_TOPIC)
try:
        # We don't actually use the context parameter when reporting
        # action progress, but RPCClient.call() requires it, so we
        # pass the current oslo context here.
call_context.call(oslo_context.get_current(), method, **kwargs)
return True
except oslo_messaging.MessagingTimeout:
return False
def start_action(engine_id=None, **kwargs):
return notify(START_ACTION, engine_id, **kwargs)
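The `notify()` helper above targets a single dispatcher when `engine_id` is given and broadcasts otherwise. Stripped of the oslo.messaging plumbing, the dispatch logic can be sketched with a plain in-process registry (the `subscribers` mapping and method names here are illustrative):

```python
def notify(subscribers, method, engine_id=None, **kwargs):
    # Target one engine if engine_id is given, otherwise broadcast
    # to every known dispatcher, mirroring the RPC notify() above.
    if engine_id:
        targets = [subscribers[engine_id]]
    else:
        targets = list(subscribers.values())
    for handler in targets:
        getattr(handler, method)(**kwargs)
    return True
```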


@ -1,189 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import glob
import os.path
import six
from stevedore import extension
from oslo_config import cfg
from oslo_log import log as logging
from bilean.common import exception
from bilean.common.i18n import _, _LE, _LI
from bilean.engine import parser
from bilean.engine import registry
LOG = logging.getLogger(__name__)
_environment = None
def global_env():
global _environment
if _environment is None:
initialize()
return _environment
class Environment(object):
'''An object that contains all plugins, drivers and customizations.'''
SECTIONS = (
PARAMETERS, CUSTOM_PLUGINS,
) = (
'parameters', 'custom_plugins'
)
def __init__(self, env=None, is_global=False):
'''Create an Environment from a dict.
:param env: the json environment
        :param is_global: boolean indicating whether this is the global
                          environment.
'''
self.params = {}
if is_global:
self.plugin_registry = registry.Registry('plugins')
self.driver_registry = registry.Registry('drivers')
else:
self.plugin_registry = registry.Registry(
'plugins', global_env().plugin_registry)
self.driver_registry = registry.Registry(
'drivers', global_env().driver_registry)
if env is not None:
# Merge user specified keys with current environment
self.params = env.get(self.PARAMETERS, {})
custom_plugins = env.get(self.CUSTOM_PLUGINS, {})
self.plugin_registry.load(custom_plugins)
def parse(self, env_str):
'''Parse a string format environment file into a dictionary.'''
if env_str is None:
return {}
env = parser.simple_parse(env_str)
# Check unknown sections
for sect in env:
if sect not in self.SECTIONS:
msg = _('environment has unknown section "%s"') % sect
raise ValueError(msg)
# Fill in default values for missing sections
for sect in self.SECTIONS:
if sect not in env:
env[sect] = {}
return env
def load(self, env_dict):
'''Load environment from the given dictionary.'''
self.params.update(env_dict.get(self.PARAMETERS, {}))
self.plugin_registry.load(env_dict.get(self.CUSTOM_PLUGINS, {}))
def _check_plugin_name(self, plugin_type, name):
if name is None or name == "":
msg = _('%s type name not specified') % plugin_type
raise exception.InvalidPlugin(message=msg)
elif not isinstance(name, six.string_types):
msg = _('%s type name is not a string') % plugin_type
raise exception.InvalidPlugin(message=msg)
def register_plugin(self, name, plugin):
self._check_plugin_name('Plugin', name)
self.plugin_registry.register_plugin(name, plugin)
def get_plugin(self, name):
self._check_plugin_name('Plugin', name)
plugin = self.plugin_registry.get_plugin(name)
if plugin is None:
raise exception.PluginTypeNotFound(plugin_type=name)
return plugin
def get_plugins(self):
return self.plugin_registry.get_plugins()
def get_plugin_types(self):
return self.plugin_registry.get_types()
def register_driver(self, name, plugin):
self._check_plugin_name('Driver', name)
self.driver_registry.register_plugin(name, plugin)
def get_driver(self, name):
self._check_plugin_name('Driver', name)
plugin = self.driver_registry.get_plugin(name)
if plugin is None:
msg = _('Driver plugin %(name)s is not found.') % {'name': name}
raise exception.InvalidPlugin(message=msg)
return plugin
def get_driver_types(self):
return self.driver_registry.get_types()
def read_global_environment(self):
'''Read and parse global environment files.'''
cfg.CONF.import_opt('environment_dir', 'bilean.common.config')
env_dir = cfg.CONF.environment_dir
try:
files = glob.glob(os.path.join(env_dir, '*'))
except OSError as ex:
LOG.error(_LE('Failed to read %s'), env_dir)
LOG.exception(ex)
return
for fname in files:
try:
with open(fname) as f:
LOG.info(_LI('Loading environment from %s'), fname)
self.load(self.parse(f.read()))
except ValueError as vex:
LOG.error(_LE('Failed to parse %s'), fname)
LOG.exception(six.text_type(vex))
except IOError as ioex:
LOG.error(_LE('Failed to read %s'), fname)
LOG.exception(six.text_type(ioex))
def _get_mapping(namespace):
mgr = extension.ExtensionManager(
namespace=namespace,
invoke_on_load=False)
return [[name, mgr[name].plugin] for name in mgr.names()]
def initialize():
global _environment
if _environment is not None:
return
env = Environment(is_global=True)
# Register global plugins when initialized
entries = _get_mapping('bilean.plugins')
for name, plugin in entries:
env.register_plugin(name, plugin)
entries = _get_mapping('bilean.drivers')
for name, plugin in entries:
env.register_driver(name, plugin)
env.read_global_environment()
_environment = env
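The `Environment` above keeps per-request plugin and driver registries that fall back to the global ones. A minimal sketch of that parent-fallback lookup (this `Registry` is illustrative; the real implementation lives in `bilean.engine.registry` and is not shown here):

```python
class Registry(object):
    """Plugin registry with optional fallback to a parent registry."""

    def __init__(self, parent=None):
        self._plugins = {}
        self._parent = parent

    def register_plugin(self, name, plugin):
        self._plugins[name] = plugin

    def get_plugin(self, name):
        # Local registrations shadow the parent (global) ones; misses
        # are delegated upward.
        plugin = self._plugins.get(name)
        if plugin is None and self._parent is not None:
            return self._parent.get_plugin(name)
        return plugin
```

This is the same shadowing behavior `Environment.__init__` relies on: a non-global environment sees everything registered globally, plus anything loaded from its own `custom_plugins` section.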


@ -1,209 +0,0 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import logging
from bilean.common import exception
from bilean.common.i18n import _, _LC, _LE, _LI, _LW
from bilean.common import utils
from bilean.db import api as db_api
from oslo_log import log
from oslo_utils import reflection
from oslo_utils import timeutils
LOG = log.getLogger(__name__)
class Event(object):
    '''An object capturing an interesting happening in Bilean.'''
def __init__(self, timestamp, level, entity=None, **kwargs):
self.timestamp = timestamp
self.level = level
self.id = kwargs.get('id')
self.user_id = kwargs.get('user_id')
self.action = kwargs.get('action')
self.status = kwargs.get('status')
self.status_reason = kwargs.get('status_reason')
self.obj_id = kwargs.get('obj_id')
self.obj_type = kwargs.get('obj_type')
self.obj_name = kwargs.get('obj_name')
self.metadata = kwargs.get('metadata')
cntx = kwargs.get('context')
if cntx is not None:
self.user_id = cntx.project
if entity is not None:
self.obj_id = entity.id
self.obj_name = entity.name
e_type = reflection.get_class_name(entity, fully_qualified=False)
self.obj_type = e_type.upper()
@classmethod
def from_db_record(cls, record):
'''Construct an event object from a database record.'''
kwargs = {
'id': record.id,
'user_id': record.user_id,
'action': record.action,
'status': record.status,
'status_reason': record.status_reason,
'obj_id': record.obj_id,
'obj_type': record.obj_type,
'obj_name': record.obj_name,
'metadata': record.meta_data,
}
return cls(record.timestamp, record.level, **kwargs)
@classmethod
def load(cls, context, db_event=None, event_id=None, project_safe=True):
'''Retrieve an event record from database.'''
if db_event is not None:
return cls.from_db_record(db_event)
record = db_api.event_get(context, event_id, project_safe=project_safe)
if record is None:
raise exception.EventNotFound(event=event_id)
return cls.from_db_record(record)
@classmethod
def load_all(cls, context, limit=None, marker=None, sort_keys=None,
sort_dir=None, filters=None, project_safe=True):
'''Retrieve all events from database.'''
records = db_api.event_get_all(context, limit=limit,
marker=marker,
filters=filters,
sort_keys=sort_keys,
sort_dir=sort_dir,
project_safe=project_safe)
for record in records:
yield cls.from_db_record(record)
def store(self, context):
'''Store the event into database and return its ID.'''
values = {
'timestamp': self.timestamp,
'level': self.level,
'user_id': self.user_id,
'action': self.action,
'status': self.status,
'status_reason': self.status_reason,
'obj_id': self.obj_id,
'obj_type': self.obj_type,
'obj_name': self.obj_name,
'meta_data': self.metadata,
}
event = db_api.event_create(context, values)
self.id = event.id
return self.id
def to_dict(self):
evt = {
'id': self.id,
'level': self.level,
'user_id': self.user_id,
'action': self.action,
'status': self.status,
'status_reason': self.status_reason,
'obj_id': self.obj_id,
'obj_type': self.obj_type,
'obj_name': self.obj_name,
'timestamp': utils.format_time(self.timestamp),
'metadata': self.metadata,
}
return evt
def critical(context, entity, action, status=None, status_reason=None,
timestamp=None):
timestamp = timestamp or timeutils.utcnow()
event = Event(timestamp, logging.CRITICAL, entity,
action=action, status=status, status_reason=status_reason,
user_id=context.project)
event.store(context)
LOG.critical(_LC('%(name)s [%(id)s] - %(status)s: %(reason)s'),
{'name': event.obj_name,
'id': event.obj_id and event.obj_id[:8],
'status': status,
'reason': status_reason})
def error(context, entity, action, status=None, status_reason=None,
timestamp=None):
timestamp = timestamp or timeutils.utcnow()
event = Event(timestamp, logging.ERROR, entity,
action=action, status=status, status_reason=status_reason,
user_id=context.project)
event.store(context)
LOG.error(_LE('%(name)s [%(id)s] %(action)s - %(status)s: %(reason)s'),
{'name': event.obj_name,
'id': event.obj_id and event.obj_id[:8],
'action': action,
'status': status,
'reason': status_reason})
def warning(context, entity, action, status=None, status_reason=None,
timestamp=None):
timestamp = timestamp or timeutils.utcnow()
event = Event(timestamp, logging.WARNING, entity,
action=action, status=status, status_reason=status_reason,
user_id=context.project)
event.store(context)
    LOG.warning(_LW('%(name)s [%(id)s] %(action)s - %(status)s: %(reason)s'),
                {'name': event.obj_name,
                 'id': event.obj_id and event.obj_id[:8],
                 'action': action,
                 'status': status,
                 'reason': status_reason})
def info(context, entity, action, status=None, status_reason=None,
timestamp=None):
timestamp = timestamp or timeutils.utcnow()
event = Event(timestamp, logging.INFO, entity,
action=action, status=status, status_reason=status_reason,
user_id=context.project)
event.store(context)
LOG.info(_LI('%(name)s [%(id)s] %(action)s - %(status)s: %(reason)s'),
{'name': event.obj_name,
'id': event.obj_id and event.obj_id[:8],
'action': action,
'status': status,
'reason': status_reason})
def debug(context, entity, action, status=None, status_reason=None,
timestamp=None):
timestamp = timestamp or timeutils.utcnow()
event = Event(timestamp, logging.DEBUG, entity,
action=action, status=status, status_reason=status_reason,
user_id=context.project)
event.store(context)
LOG.debug(_('%(name)s [%(id)s] %(action)s - %(status)s: %(reason)s'),
{'name': event.obj_name,
'id': event.obj_id and event.obj_id[:8],
'action': action,
'status': status,
'reason': status_reason})
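The five severity helpers above (`critical` through `debug`) are identical except for the event level and the logger method used; the repetition could be collapsed into a single factory. A sketch of that idea (illustrative only, not the module's actual API):

```python
import logging


def make_event_helper(level, log_fn):
    # Build one severity helper; the real critical/error/warning/info/
    # debug functions above differ only in level and logger method.
    def helper(name, obj_id, action, status, reason):
        log_fn('%(name)s [%(id)s] %(action)s - %(status)s: %(reason)s',
               {'name': name, 'id': obj_id and obj_id[:8],
                'action': action, 'status': status, 'reason': reason})
        return level
    return helper
```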


@ -1,294 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
from oslo_log import log as logging
import taskflow.engines
from taskflow.listeners import base
from taskflow.listeners import logging as logging_listener
from taskflow.patterns import linear_flow
from taskflow import task
from taskflow.types import failure as ft
from bilean.common import exception
from bilean.common.i18n import _LE
from bilean.common import utils
from bilean.engine import policy as policy_mod
from bilean.engine import user as user_mod
from bilean.plugins import base as plugin_base
from bilean import scheduler as bilean_scheduler
LOG = logging.getLogger(__name__)
class DynamicLogListener(logging_listener.DynamicLoggingListener):
"""This is used to attach to taskflow engines while they are running.
    It exposes the actions happening inside a taskflow engine, which is
    useful to developers for debugging and to operators for monitoring
    and tracking resource actions.
    """
    #: This exception is an expected case; don't include a traceback in
    #: the log when it occurs. Note the trailing comma: ``fail.check()``
    #: unpacks this value, so it must be a tuple.
    _NO_TRACE_EXCEPTIONS = (exception.InvalidInput,)
def __init__(self, engine,
task_listen_for=base.DEFAULT_LISTEN_FOR,
flow_listen_for=base.DEFAULT_LISTEN_FOR,
retry_listen_for=base.DEFAULT_LISTEN_FOR,
logger=LOG):
super(DynamicLogListener, self).__init__(
engine,
task_listen_for=task_listen_for,
flow_listen_for=flow_listen_for,
retry_listen_for=retry_listen_for,
log=logger)
def _format_failure(self, fail):
if fail.check(*self._NO_TRACE_EXCEPTIONS) is not None:
exc_info = None
exc_details = '%s%s' % (os.linesep, fail.pformat(traceback=False))
return (exc_info, exc_details)
else:
return super(DynamicLogListener, self)._format_failure(fail)
class CreateResourceTask(task.Task):
"""Create resource and store to db."""
def execute(self, context, resource, **kwargs):
user = user_mod.User.load(context, user_id=resource.user_id)
pid = user.policy_id
try:
if pid:
policy = policy_mod.Policy.load(context, policy_id=pid)
else:
policy = policy_mod.Policy.load_default(context)
        except exception.PolicyNotFound as e:
            LOG.error(_LE("Error when finding policy: %s"), e)
            # Ensure ``policy`` is bound for the check below.
            policy = None
if policy is not None:
rule = policy.find_rule(context, resource.resource_type)
# Update resource with rule_id and rate
resource.rule_id = rule.id
resource.rate = utils.make_decimal(rule.get_price(resource))
resource.store(context)
def revert(self, context, resource, result, **kwargs):
if isinstance(result, ft.Failure):
LOG.error(_LE("Error when creating resource: %s"),
resource.to_dict())
return
resource.delete(context, soft_delete=False)
class UpdateResourceTask(task.Task):
"""Update resource."""
def execute(self, context, resource, values, resource_bak, **kwargs):
old_rate = resource.rate
resource.properties = values.get('properties')
rule = plugin_base.Rule.load(context, rule_id=resource.rule_id)
resource.rate = utils.make_decimal(rule.get_price(resource))
resource.delta_rate = resource.rate - old_rate
resource.store(context)
def revert(self, context, resource, resource_bak, result, **kwargs):
if isinstance(result, ft.Failure):
LOG.error(_LE("Error when updating resource: %s"), resource.id)
return
# restore resource
res = plugin_base.Resource.from_dict(resource_bak)
res.store(context)
class DeleteResourceTask(task.Task):
"""Delete resource from db."""
def execute(self, context, resource, **kwargs):
resource.delete(context)
def revert(self, context, resource, result, **kwargs):
if isinstance(result, ft.Failure):
LOG.error(_LE("Error when deleting resource: %s"), resource.id)
return
resource.deleted_at = None
resource.store(context)
class CreateConsumptionTask(task.Task):
"""Generate consumption record and store to db."""
def execute(self, context, resource, *args, **kwargs):
consumption = resource.consumption
if consumption is not None:
consumption.store(context)
def revert(self, context, resource, result, *args, **kwargs):
if isinstance(result, ft.Failure):
LOG.error(_LE("Error when storing consumption of resource: %s"),
resource.id)
return
consumption = resource.consumption
if consumption is not None:
consumption.delete(context)
class LoadUserTask(task.Task):
"""Load user from db."""
default_provides = set(['user_bak', 'user_obj'])
def execute(self, context, user_id, **kwargs):
user_obj = user_mod.User.load(context, user_id=user_id)
return {
'user_bak': user_obj.to_dict(),
'user_obj': user_obj,
}
class SettleAccountTask(task.Task):
def execute(self, context, user_obj, user_bak, task, **kwargs):
user_obj.settle_account(context, task=task)
def revert(self, context, user_bak, result, **kwargs):
if isinstance(result, ft.Failure):
LOG.error(_LE("Error when settling account for user: %s"),
user_bak.get('id'))
return
# Restore user
user = user_mod.User.from_dict(user_bak)
user.store(context)
class UpdateUserRateTask(task.Task):
    """Update a user's rate."""
def execute(self, context, user_obj, user_bak, resource, *args, **kwargs):
user_obj.update_rate(context, resource.delta_rate,
timestamp=resource.last_bill)
def revert(self, context, user_obj, user_bak, resource, result,
*args, **kwargs):
if isinstance(result, ft.Failure):
LOG.error(_LE("Error when updating user: %s"), user_obj.id)
return
# Restore user
user = user_mod.User.from_dict(user_bak)
user.store(context)
class UpdateUserJobsTask(task.Task):
"""Update user jobs."""
def execute(self, user_obj, **kwargs):
res = bilean_scheduler.notify(bilean_scheduler.UPDATE_JOBS,
user=user_obj.to_dict())
        if not res:
            LOG.error(_LE("Error when updating user jobs: %s"), user_obj.id)
            # A bare ``raise`` here would fail (there is no active
            # exception), so raise an explicit error instead.
            raise RuntimeError('Failed to update jobs for user %s'
                               % user_obj.id)
def get_settle_account_flow(context, user_id, task=None):
"""Constructs and returns settle account task flow."""
flow_name = user_id + '_settle_account'
flow = linear_flow.Flow(flow_name)
kwargs = {
'context': context,
'user_id': user_id,
'task': task,
}
flow.add(LoadUserTask(),
SettleAccountTask())
if task != 'freeze':
flow.add(UpdateUserJobsTask())
return taskflow.engines.load(flow, store=kwargs)
def get_create_resource_flow(context, user_id, resource):
"""Constructs and returns user task flow.
:param context: The request context.
:param user_id: The ID of user.
:param resource: Object resource to create.
"""
flow_name = user_id + '_create_resource'
flow = linear_flow.Flow(flow_name)
kwargs = {
'context': context,
'user_id': user_id,
'resource': resource,
}
flow.add(CreateResourceTask(),
LoadUserTask(),
UpdateUserRateTask(),
UpdateUserJobsTask())
return taskflow.engines.load(flow, store=kwargs)
def get_delete_resource_flow(context, user_id, resource):
"""Constructs and returns user task flow.
:param context: The request context.
:param user_id: The ID of user.
:param resource: Object resource to delete.
"""
flow_name = user_id + '_delete_resource'
flow = linear_flow.Flow(flow_name)
kwargs = {
'context': context,
'user_id': user_id,
'resource': resource,
}
flow.add(DeleteResourceTask(),
CreateConsumptionTask(),
LoadUserTask(),
UpdateUserRateTask(),
UpdateUserJobsTask())
return taskflow.engines.load(flow, store=kwargs)
def get_update_resource_flow(context, user_id, resource, values):
"""Constructs and returns user task flow.
:param context: The request context.
:param user_id: The ID of user.
:param resource: Object resource to update.
:param values: The values to update.
"""
flow_name = user_id + '_update_resource'
flow = linear_flow.Flow(flow_name)
kwargs = {
'context': context,
'user_id': user_id,
'resource': resource,
'resource_bak': resource.to_dict(),
'values': values,
}
flow.add(UpdateResourceTask(),
CreateConsumptionTask(),
LoadUserTask(),
UpdateUserRateTask(),
UpdateUserJobsTask())
return taskflow.engines.load(flow, store=kwargs)
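The flows above rely on TaskFlow's execute/revert contract: when a later task fails, previously completed tasks are reverted in reverse order. A simplified pure-Python sketch of that compensation pattern (illustrative only, not the TaskFlow engine itself, which also reverts the failing task):

```python
def run_linear_flow(tasks, ctx):
    # Run tasks in order; on failure, revert the tasks that completed,
    # in reverse order, then re-raise the original exception.
    done = []
    try:
        for t in tasks:
            t.execute(ctx)
            done.append(t)
    except Exception:
        for t in reversed(done):
            t.revert(ctx)
        raise
```

This is why tasks like `CreateResourceTask` pair their `execute()` with a `revert()` that undoes the database write: a failure in `UpdateUserRateTask` rolls the whole flow back.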


@ -1,108 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import eventlet
from oslo_config import cfg
from oslo_log import log as logging
from oslo_utils import timeutils
import time
from bilean.common.i18n import _, _LE, _LI
from bilean.db import api as db_api
CONF = cfg.CONF
CONF.import_opt('lock_retry_times', 'bilean.common.config')
CONF.import_opt('lock_retry_interval', 'bilean.common.config')
LOG = logging.getLogger(__name__)
def is_engine_dead(ctx, engine_id, period_time=None):
    # If the engine hasn't reported its status within period_time,
    # consider it a dead engine.
if period_time is None:
period_time = 2 * CONF.periodic_interval
engine = db_api.service_get(ctx, engine_id)
if not engine:
return True
if (timeutils.utcnow() - engine.updated_at).total_seconds() > period_time:
return True
return False
def sleep(sleep_time):
'''Interface for sleeping.'''
eventlet.sleep(sleep_time)
def user_lock_acquire(context, user_id, action_id, engine=None,
forced=False):
"""Try to lock the specified user.
:param context: the context used for DB operations;
:param user_id: ID of the user to be locked.
:param action_id: ID of the action that attempts to lock the user.
:param engine: ID of the engine that attempts to lock the user.
:param forced: set to True to cancel current action that owns the lock,
if any.
:returns: True if lock is acquired, or False otherwise.
"""
owner = db_api.user_lock_acquire(user_id, action_id)
if action_id == owner:
return True
retries = cfg.CONF.lock_retry_times
retry_interval = cfg.CONF.lock_retry_interval
while retries > 0:
sleep(retry_interval)
LOG.debug(_('Acquire lock for user %s again'), user_id)
owner = db_api.user_lock_acquire(user_id, action_id)
if action_id == owner:
return True
retries = retries - 1
if forced:
owner = db_api.user_lock_steal(user_id, action_id)
return action_id == owner
action = db_api.action_get(context, owner)
if (action and action.owner and action.owner != engine and
is_engine_dead(context, action.owner)):
LOG.info(_LI('The user %(u)s is locked by dead action %(a)s, '
'try to steal the lock.'), {
'u': user_id,
'a': owner
})
reason = _('Engine died when executing this action.')
db_api.action_mark_failed(context, action.id, time.time(),
reason=reason)
db_api.user_lock_steal(user_id, action_id)
return True
LOG.error(_LE('User is already locked by action %(old)s, '
'action %(new)s failed grabbing the lock'),
{'old': owner, 'new': action_id})
return False
def user_lock_release(user_id, action_id):
"""Release the lock on the specified user.
:param user_id: ID of the user to be released.
:param action_id: ID of the action which locked the user.
"""
return db_api.user_lock_release(user_id, action_id)
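`user_lock_acquire()` above tries once, then retries a fixed number of times with a sleep between attempts before giving up or stealing the lock. The core retry loop can be sketched generically (the names here are illustrative, not Bilean APIs):

```python
import time


def acquire_with_retries(try_acquire, retries=3, interval=0.01):
    # Try once immediately, then retry up to ``retries`` more times,
    # sleeping between attempts, as in user_lock_acquire() above.
    if try_acquire():
        return True
    while retries > 0:
        time.sleep(interval)
        if try_acquire():
            return True
        retries -= 1
    return False
```

In the real code the sleep goes through `eventlet.sleep()` so that a waiting greenthread yields instead of blocking the whole process.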


@ -1,82 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
from oslo_serialization import jsonutils
import six
from six.moves import urllib
import yaml
from bilean.common.i18n import _
# Try LibYAML if available
if hasattr(yaml, 'CSafeLoader'):
Loader = yaml.CSafeLoader
else:
Loader = yaml.SafeLoader
if hasattr(yaml, 'CSafeDumper'):
Dumper = yaml.CSafeDumper
else:
Dumper = yaml.SafeDumper
class YamlLoader(Loader):
def normalise_file_path_to_url(self, path):
if urllib.parse.urlparse(path).scheme:
return path
path = os.path.abspath(path)
return urllib.parse.urljoin('file:',
urllib.request.pathname2url(path))
def include(self, node):
try:
url = self.normalise_file_path_to_url(self.construct_scalar(node))
tmpl = urllib.request.urlopen(url).read()
return yaml.load(tmpl, Loader)
except urllib.error.URLError as ex:
raise IOError('Failed retrieving file %s: %s' %
(url, six.text_type(ex)))
def process_unicode(self, node):
# Override the default string handling function to always return
# unicode objects
return self.construct_scalar(node)
YamlLoader.add_constructor('!include', YamlLoader.include)
YamlLoader.add_constructor(u'tag:yaml.org,2002:str',
YamlLoader.process_unicode)
YamlLoader.add_constructor(u'tag:yaml.org,2002:timestamp',
YamlLoader.process_unicode)
def simple_parse(in_str):
try:
out_dict = jsonutils.loads(in_str)
except ValueError:
try:
out_dict = yaml.load(in_str, Loader=YamlLoader)
except yaml.YAMLError as yea:
yea = six.text_type(yea)
msg = _('Error parsing input: %s') % yea
raise ValueError(msg)
else:
if out_dict is None:
out_dict = {}
if not isinstance(out_dict, dict):
msg = _('The input is not a JSON object or YAML mapping.')
raise ValueError(msg)
return out_dict

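The `normalise_file_path_to_url` helper above drives `!include` resolution: URLs pass through untouched, while bare filesystem paths become absolute `file://` URLs. A minimal standalone sketch of the same logic, using Python 3's `urllib` directly instead of the `six.moves` compatibility layer (the sample paths are made up):

```python
import os
import urllib.parse
import urllib.request


def normalise_file_path_to_url(path):
    """Leave real URLs untouched; turn bare paths into file:// URLs."""
    if urllib.parse.urlparse(path).scheme:
        return path  # already has a scheme (http:, file:, ...)
    path = os.path.abspath(path)
    return urllib.parse.urljoin('file:',
                                urllib.request.pathname2url(path))


print(normalise_file_path_to_url('http://example.com/tmpl.yaml'))
print(normalise_file_path_to_url('rules/server.yaml'))
```

Note that `pathname2url` is platform-specific; on POSIX systems `/tmp/t.yaml` becomes `file:///tmp/t.yaml`.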
View File

@ -1,137 +0,0 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from bilean.common import exception
from bilean.common import utils
from bilean.db import api as db_api
from bilean.plugins import base as plugin_base
class Policy(object):
"""Policy object contains all policy operations"""
def __init__(self, name, **kwargs):
self.name = name
self.id = kwargs.get('id')
self.is_default = kwargs.get('is_default', False)
# rules schema like [{'id': 'xxx', 'type': 'os.nova.server'}]
self.rules = kwargs.get('rules', [])
self.metadata = kwargs.get('metadata')
self.created_at = kwargs.get('created_at')
self.updated_at = kwargs.get('updated_at')
self.deleted_at = kwargs.get('deleted_at')
def store(self, context):
"""Store the policy record into database table."""
values = {
'name': self.name,
'rules': self.rules,
'is_default': self.is_default,
'meta_data': self.metadata,
'created_at': self.created_at,
'updated_at': self.updated_at,
'deleted_at': self.deleted_at,
}
if self.id:
db_api.policy_update(context, self.id, values)
else:
policy = db_api.policy_create(context, values)
self.id = policy.id
return self.id
@classmethod
def _from_db_record(cls, record):
'''Construct a policy object from database record.
:param record: a DB policy object that contains all fields;
'''
kwargs = {
'id': record.id,
'rules': record.rules,
'is_default': record.is_default,
'metadata': record.meta_data,
'created_at': record.created_at,
'updated_at': record.updated_at,
'deleted_at': record.deleted_at,
}
return cls(record.name, **kwargs)
@classmethod
def load(cls, context, policy_id=None, policy=None, show_deleted=False):
'''Retrieve a policy from database.'''
if policy is None:
policy = db_api.policy_get(context, policy_id,
show_deleted=show_deleted)
if policy is None:
raise exception.PolicyNotFound(policy=policy_id)
return cls._from_db_record(policy)
@classmethod
def load_default(cls, context, show_deleted=False):
'''Retrieve default policy from database.'''
filters = {'is_default': True}
policies = cls.load_all(context, filters=filters,
show_deleted=show_deleted)
if len(policies) > 1:
raise exception.MultipleDefaultPolicy()
policy = None if len(policies) < 1 else policies[0]
return policy
@classmethod
def load_all(cls, context, limit=None, marker=None,
sort_keys=None, sort_dir=None,
filters=None, show_deleted=False):
'''Retrieve all policies from database.'''
records = db_api.policy_get_all(context,
limit=limit, marker=marker,
sort_keys=sort_keys,
sort_dir=sort_dir,
filters=filters,
show_deleted=show_deleted)
return [cls._from_db_record(record) for record in records]
def to_dict(self):
policy_dict = {
'id': self.id,
'name': self.name,
'rules': self.rules,
'is_default': self.is_default,
'metadata': self.metadata,
'created_at': utils.format_time(self.created_at),
'updated_at': utils.format_time(self.updated_at),
'deleted_at': utils.format_time(self.deleted_at),
}
return policy_dict
def do_delete(self, context):
db_api.policy_delete(context, self.id)
return True
def find_rule(self, context, rtype):
'''Find the exact rule from self.rules by rtype'''
for rule in self.rules:
if rtype == rule['type'].split('-')[0]:
return plugin_base.Rule.load(context, rule_id=rule['id'])
raise exception.RuleNotFound(rule_type=rtype)

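`Policy.find_rule` matches a resource type against the portion of each rule's type before the first hyphen (the versionless type name). A standalone sketch of that matching, with hypothetical rule data and a plain `LookupError` in place of `RuleNotFound`:

```python
def find_rule(rules, rtype):
    """Return the first rule whose type prefix (before '-') equals rtype."""
    for rule in rules:
        if rtype == rule['type'].split('-')[0]:
            return rule
    raise LookupError('rule not found for type: %s' % rtype)


# Hypothetical rules list, mirroring the schema noted in Policy.__init__
rules = [{'id': 'r-nova', 'type': 'os.nova.server-1.0'},
         {'id': 'r-cinder', 'type': 'os.cinder.volume-1.0'}]
print(find_rule(rules, 'os.nova.server')['id'])  # r-nova
```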
View File

@ -1,138 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import itertools
import six
from oslo_log import log as logging
from bilean.common.i18n import _LI, _LW
LOG = logging.getLogger(__name__)
class PluginInfo(object):
'''Base mapping of plugin type to implementation.'''
def __new__(cls, registry, name, plugin, **kwargs):
'''Create a new PluginInfo of the appropriate class.
Placeholder for class hierarchy extensibility
'''
return super(PluginInfo, cls).__new__(cls)
def __init__(self, registry, name, plugin):
self.registry = registry
self.name = name
self.plugin = plugin
self.user_provided = True
def __eq__(self, other):
if other is None:
return False
return (self.name == other.name and
self.plugin == other.plugin and
self.user_provided == other.user_provided)
def __ne__(self, other):
return not self.__eq__(other)
def __lt__(self, other):
if self.user_provided != other.user_provided:
# user provided ones must be sorted above system ones.
return self.user_provided > other.user_provided
if len(self.name) != len(other.name):
# more specific (longer) name must be sorted above system ones.
return len(self.name) > len(other.name)
return self.name < other.name
def __gt__(self, other):
return other.__lt__(self)
def __str__(self):
return '[Plugin](User:%s) %s -> %s' % (self.user_provided,
self.name, str(self.plugin))
class Registry(object):
'''A registry for managing rule classes.'''
def __init__(self, registry_name, global_registry=None):
self.registry_name = registry_name
self._registry = {}
self.is_global = False if global_registry else True
self.global_registry = global_registry
def _register_info(self, name, info):
'''Place the new info in the correct location in the registry.
:param name: name of the plugin.
:param info: reference to a PluginInfo data structure, deregister a
PluginInfo if specified as None.
'''
registry = self._registry
if info is None:
# delete this entry.
LOG.warn(_LW('Removing %(item)s from registry'), {'item': name})
registry.pop(name, None)
return
if name in registry and isinstance(registry[name], PluginInfo):
if registry[name] == info:
return
details = {
'name': name,
'old': str(registry[name].plugin),
'new': str(info.plugin)
}
LOG.warn(_LW('Changing %(name)s from %(old)s to %(new)s'), details)
else:
LOG.info(_LI('Registering %(name)s -> %(value)s'), {
'name': name, 'value': str(info.plugin)})
info.user_provided = not self.is_global
registry[name] = info
def register_plugin(self, name, plugin):
pi = PluginInfo(self, name, plugin)
self._register_info(name, pi)
def load(self, json_snippet):
for k, v in iter(json_snippet.items()):
if v is None:
self._register_info(k, None)
else:
self.register_plugin(k, v)
def iterable_by(self, name):
plugin = self._registry.get(name)
if plugin:
yield plugin
def get_plugin(self, name):
giter = []
if not self.is_global:
giter = self.global_registry.iterable_by(name)
matches = itertools.chain(self.iterable_by(name), giter)
infoes = sorted(matches)
return infoes[0].plugin if infoes else None
def get_plugins(self):
return [p.plugin for p in six.itervalues(self._registry)]
def as_dict(self):
return dict((k, v.plugin) for k, v in self._registry.items())
def get_types(self):
'''Return a list of valid plugin types.'''
return [{'name': name} for name in six.iterkeys(self._registry)]

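The registry's `get_plugin` resolution depends entirely on `PluginInfo` ordering: user-provided entries sort above global ones, and longer (more specific) names sort above shorter ones, so `infoes[0]` is the best match. A condensed sketch of that ordering with made-up plugin names:

```python
import functools


@functools.total_ordering
class PluginInfo:
    """Sortable plugin record: user-provided entries rank first,
    then longer (more specific) names, then lexical order."""

    def __init__(self, name, plugin, user_provided=True):
        self.name = name
        self.plugin = plugin
        self.user_provided = user_provided

    def __eq__(self, other):
        return ((self.name, self.plugin, self.user_provided) ==
                (other.name, other.plugin, other.user_provided))

    def __lt__(self, other):
        if self.user_provided != other.user_provided:
            # user-provided ones must sort above system ones
            return self.user_provided > other.user_provided
        if len(self.name) != len(other.name):
            return len(self.name) > len(other.name)
        return self.name < other.name


infos = sorted([PluginInfo('os.nova', 'GlobalPlugin', user_provided=False),
                PluginInfo('os.nova', 'UserPlugin', user_provided=True)])
print(infos[0].plugin)  # the user-provided plugin wins
```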
View File

@ -1,785 +0,0 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import functools
import six
import time
import eventlet
from oslo_config import cfg
from oslo_log import log as logging
import oslo_messaging
from oslo_service import service
from oslo_service import threadgroup
from oslo_utils import timeutils
from bilean.common import consts
from bilean.common import context as bilean_context
from bilean.common import exception
from bilean.common.i18n import _, _LE, _LI, _LW
from bilean.common import messaging as rpc_messaging
from bilean.common import schema
from bilean.common import utils
from bilean.db import api as db_api
from bilean.engine.actions import base as action_mod
from bilean.engine import consumption as cons_mod
from bilean.engine import dispatcher
from bilean.engine import environment
from bilean.engine import event as event_mod
from bilean.engine import policy as policy_mod
from bilean.engine import user as user_mod
from bilean.plugins import base as plugin_base
from bilean import scheduler as bilean_scheduler
LOG = logging.getLogger(__name__)
def request_context(func):
@functools.wraps(func)
def wrapped(self, ctx, *args, **kwargs):
if ctx is not None and not isinstance(ctx,
bilean_context.RequestContext):
ctx = bilean_context.RequestContext.from_dict(ctx.to_dict())
try:
return func(self, ctx, *args, **kwargs)
except exception.BileanException:
raise oslo_messaging.rpc.dispatcher.ExpectedException()
return wrapped
class ThreadGroupManager(object):
'''Thread group manager.'''
def __init__(self):
super(ThreadGroupManager, self).__init__()
self.workers = {}
self.group = threadgroup.ThreadGroup()
# Create dummy service task, because when there is nothing queued
# on self.tg the process exits
self.add_timer(cfg.CONF.periodic_interval, self._service_task)
self.db_session = bilean_context.get_admin_context()
def _service_task(self):
'''Dummy task which gets queued on the service.Service threadgroup.
Without this, service.Service sees nothing running, i.e. has nothing
to wait() on, so the process exits.
'''
pass
def start(self, func, *args, **kwargs):
'''Run the given method in a thread.'''
return self.group.add_thread(func, *args, **kwargs)
def start_action(self, worker_id, action_id=None):
'''Run the given action in a sub-thread.
The action lock is released when the thread finishes.
:param worker_id: ID of the worker thread; we fake workers using
bilean engines at the moment.
:param action_id: ID of the action to be executed. None means the
1st ready action will be scheduled to run.
'''
def release(thread, action_id):
'''Callback function that will be passed to GreenThread.link().'''
# Remove action thread from thread list
self.workers.pop(action_id)
timestamp = time.time()
if action_id is not None:
action = db_api.action_acquire(self.db_session, action_id,
worker_id, timestamp)
else:
action = db_api.action_acquire_first_ready(self.db_session,
worker_id,
timestamp)
if not action:
return
th = self.start(action_mod.ActionProc, self.db_session, action.id)
self.workers[action.id] = th
th.link(release, action.id)
return th
def cancel_action(self, action_id):
'''Cancel an action execution progress.'''
action = action_mod.Action.load(self.db_session, action_id)
action.signal(action.SIG_CANCEL)
def suspend_action(self, action_id):
'''Suspend an action execution progress.'''
action = action_mod.Action.load(self.db_session, action_id)
action.signal(action.SIG_SUSPEND)
def resume_action(self, action_id):
'''Resume an action execution progress.'''
action = action_mod.Action.load(self.db_session, action_id)
action.signal(action.SIG_RESUME)
def add_timer(self, interval, func, *args, **kwargs):
'''Define a periodic task to be run in the thread group.
The task will be executed in a separate green thread.
Interval is from cfg.CONF.periodic_interval
'''
self.group.add_timer(interval, func, *args, **kwargs)
def stop_timers(self):
self.group.stop_timers()
def stop(self, graceful=False):
'''Stop any active threads belonging to this thread group.'''
# Try to stop all threads gracefully
self.group.stop(graceful)
self.group.wait()
# Wait for link()ed functions (i.e. lock release)
threads = self.group.threads[:]
links_done = dict((th, False) for th in threads)
def mark_done(gt, th):
links_done[th] = True
for th in threads:
th.link(mark_done, th)
while not all(links_done.values()):
eventlet.sleep()
class EngineService(service.Service):
"""Manages the running instances from creation to destruction.
All the methods in here are called from the RPC backend. This is
all done dynamically so if a call is made via RPC that does not
have a corresponding method here, an exception will be thrown when
it attempts to call into this class. Arguments to these methods
are also dynamically added and will be named as keyword arguments
by the RPC caller.
"""
def __init__(self, host, topic, manager=None, context=None):
super(EngineService, self).__init__()
self.host = host
self.topic = topic
self.dispatcher_topic = consts.ENGINE_DISPATCHER_TOPIC
self.engine_id = None
self.TG = None
self.target = None
self._rpc_server = None
def _init_service(self):
admin_context = bilean_context.get_admin_context()
srv = db_api.service_get_by_host_and_binary(admin_context,
self.host,
'bilean-engine')
if srv is None:
srv = db_api.service_create(admin_context,
host=self.host,
binary='bilean-engine',
topic=self.topic)
self.engine_id = srv.id
def start(self):
self._init_service()
self.TG = ThreadGroupManager()
# create a dispatcher RPC service for this engine.
self.dispatcher = dispatcher.Dispatcher(self,
self.dispatcher_topic,
consts.RPC_API_VERSION,
self.TG)
LOG.info(_LI("Starting dispatcher for engine %s"), self.engine_id)
self.dispatcher.start()
LOG.info(_LI("Starting rpc server for engine: %s"), self.engine_id)
target = oslo_messaging.Target(version=consts.RPC_API_VERSION,
server=self.host,
topic=self.topic)
self.target = target
self._rpc_server = rpc_messaging.get_rpc_server(target, self)
self._rpc_server.start()
self.TG.add_timer(cfg.CONF.periodic_interval,
self.service_manage_report)
super(EngineService, self).start()
def _stop_rpc_server(self):
# Stop RPC connection to prevent new requests
LOG.info(_LI("Stopping engine service..."))
try:
self._rpc_server.stop()
self._rpc_server.wait()
LOG.info(_LI('Engine service stopped successfully'))
except Exception as ex:
LOG.error(_LE('Failed to stop engine service: %s'),
six.text_type(ex))
def stop(self):
self._stop_rpc_server()
# Notify dispatcher to stop all action threads it started.
LOG.info(_LI("Stopping dispatcher for engine %s"), self.engine_id)
self.dispatcher.stop()
self.TG.stop()
super(EngineService, self).stop()
def service_manage_report(self):
admin_context = bilean_context.get_admin_context()
try:
db_api.service_update(admin_context, self.engine_id)
except Exception as ex:
LOG.error(_LE('Service %(id)s update failed: %(error)s'),
{'id': self.engine_id, 'error': six.text_type(ex)})
@request_context
def user_list(self, cnxt, show_deleted=False, limit=None,
marker=None, sort_keys=None, sort_dir=None,
filters=None):
limit = utils.parse_int_param('limit', limit)
show_deleted = utils.parse_bool_param('show_deleted', show_deleted)
users = user_mod.User.load_all(cnxt, show_deleted=show_deleted,
limit=limit, marker=marker,
sort_keys=sort_keys, sort_dir=sort_dir,
filters=filters)
return [user.to_dict() for user in users]
def user_create(self, cnxt, user_id, balance=None, credit=None,
status=None):
"""Create a new user from notification."""
if status is None:
status = consts.USER_INIT
user = user_mod.User(user_id, balance=balance, credit=credit,
status=status)
user.store(cnxt)
return user.to_dict()
@request_context
def user_get(self, cnxt, user_id):
"""Show detailed info about a specify user.
Realtime balance would be return.
"""
user = user_mod.User.load(cnxt, user_id=user_id, realtime=True)
return user.to_dict()
@request_context
def user_recharge(self, cnxt, user_id, value, recharge_type=None,
timestamp=None, metadata=None):
"""Do recharge for specify user."""
try:
user = user_mod.User.load(cnxt, user_id=user_id)
except exception.UserNotFound as ex:
raise exception.BileanBadRequest(msg=six.text_type(ex))
recharge_type = recharge_type or consts.SELF_RECHARGE
timestamp = timestamp or timeutils.utcnow()
metadata = metadata or {}
user.do_recharge(cnxt, value, recharge_type=recharge_type,
timestamp=timestamp, metadata=metadata)
# As user has been updated, the billing job for the user
# should be updated too.
bilean_scheduler.notify(bilean_scheduler.UPDATE_JOBS,
user=user.to_dict())
return user.to_dict()
def user_delete(self, cnxt, user_id):
"""Delete a specify user according to the notification."""
LOG.info(_LI('Deleting user: %s'), user_id)
user = user_mod.User.load(cnxt, user_id=user_id)
if user.status in [user.ACTIVE, user.WARNING]:
LOG.error(_LE("User (%s) is in use, can not delete."), user_id)
return
user_mod.User.delete(cnxt, user_id=user_id)
bilean_scheduler.notify(bilean_scheduler.DELETE_JOBS,
user=user.to_dict())
@request_context
def user_attach_policy(self, cnxt, user_id, policy_id):
"""Attach specified policy to user."""
LOG.info(_LI("Attaching policy %(policy)s to user %(user)s."),
{'policy': policy_id, 'user': user_id})
user = user_mod.User.load(cnxt, user_id=user_id)
if user.policy_id is not None:
msg = _("User %(user)s is using policy %(now_policy)s, can not "
"attach %(policy)s.") % {'user': user_id,
'now_policy': user.policy_id,
'policy': policy_id}
raise exception.BileanBadRequest(msg=msg)
user.policy_id = policy_id
user.store(cnxt)
return user.to_dict()
@request_context
def rule_create(self, cnxt, name, spec, metadata=None):
if len(plugin_base.Rule.load_all(cnxt, filters={'name': name})) > 0:
msg = _("The rule (%(name)s) already exists."
) % {"name": name}
raise exception.BileanBadRequest(msg=msg)
type_name, version = schema.get_spec_version(spec)
try:
plugin = environment.global_env().get_plugin(type_name)
except exception.RuleTypeNotFound:
msg = _("The specified rule type (%(type)s) is not supported."
) % {"type": type_name}
raise exception.BileanBadRequest(msg=msg)
LOG.info(_LI("Creating rule type: %(type)s, name: %(name)s."),
{'type': type_name, 'name': name})
rule = plugin.RuleClass(name, spec, metadata=metadata)
try:
rule.validate()
except exception.InvalidSpec as ex:
msg = six.text_type(ex)
LOG.error(_LE("Failed in creating rule: %s"), msg)
raise exception.BileanBadRequest(msg=msg)
rule.store(cnxt)
LOG.info(_LI("Rule %(name)s is created: %(id)s."),
{'name': name, 'id': rule.id})
return rule.to_dict()
@request_context
def rule_list(self, cnxt, limit=None, marker=None, sort_keys=None,
sort_dir=None, filters=None, show_deleted=False):
if limit is not None:
limit = utils.parse_int_param('limit', limit)
if show_deleted is not None:
show_deleted = utils.parse_bool_param('show_deleted',
show_deleted)
rules = plugin_base.Rule.load_all(cnxt, limit=limit,
marker=marker,
sort_keys=sort_keys,
sort_dir=sort_dir,
filters=filters,
show_deleted=show_deleted)
return [rule.to_dict() for rule in rules]
@request_context
def rule_get(self, cnxt, rule_id):
rule = plugin_base.Rule.load(cnxt, rule_id=rule_id)
return rule.to_dict()
@request_context
def rule_update(self, cnxt, rule_id, values):
return NotImplemented
@request_context
def rule_delete(self, cnxt, rule_id):
LOG.info(_LI("Deleting rule: '%s'."), rule_id)
plugin_base.Rule.delete(cnxt, rule_id)
@request_context
def validate_creation(self, cnxt, resources):
"""Validate resources creation.
If the user's balance is not enough to keep the resources for
1 hour, validation will fail.
"""
user = user_mod.User.load(cnxt, user_id=cnxt.project)
policy = policy_mod.Policy.load(cnxt, policy_id=user.policy_id)
count = resources.get('count', 1)
total_rate = 0
for resource in resources['resources']:
rule = policy.find_rule(cnxt, resource['resource_type'])
res = plugin_base.Resource('FAKE_ID', user.id,
resource['resource_type'],
resource['properties'])
total_rate += rule.get_price(res)
if count > 1:
total_rate = total_rate * count
# Bill for 1 hour of resource usage in advance
pre_bill = total_rate * 3600
if pre_bill > user.balance:
return dict(validation=False)
return dict(validation=True)
def resource_create(self, cnxt, resource_id, user_id, resource_type,
properties):
"""Create resource by given data."""
resource = plugin_base.Resource(resource_id, user_id, resource_type,
properties)
params = {
'name': 'create_resource_%s' % resource_id,
'cause': action_mod.CAUSE_RPC,
'status': action_mod.Action.READY,
'inputs': resource.to_dict(),
}
action_id = action_mod.Action.create(cnxt, user_id,
consts.USER_CREATE_RESOURCE,
**params)
dispatcher.start_action(action_id=action_id)
LOG.info(_LI('Resource create action queued: %s'), action_id)
@request_context
def resource_list(self, cnxt, user_id=None, limit=None, marker=None,
sort_keys=None, sort_dir=None, filters=None,
project_safe=True, show_deleted=False):
if limit is not None:
limit = utils.parse_int_param('limit', limit)
if show_deleted is not None:
show_deleted = utils.parse_bool_param('show_deleted',
show_deleted)
resources = plugin_base.Resource.load_all(cnxt, user_id=user_id,
limit=limit, marker=marker,
sort_keys=sort_keys,
sort_dir=sort_dir,
filters=filters,
project_safe=project_safe,
show_deleted=show_deleted)
return [r.to_dict() for r in resources]
@request_context
def resource_get(self, cnxt, resource_id):
resource = plugin_base.Resource.load(cnxt, resource_id=resource_id)
return resource.to_dict()
def resource_update(self, cnxt, user_id, resource):
"""Do resource update."""
params = {
'name': 'update_resource_%s' % resource.get('id'),
'cause': action_mod.CAUSE_RPC,
'status': action_mod.Action.READY,
'inputs': resource,
}
action_id = action_mod.Action.create(cnxt, user_id,
consts.USER_UPDATE_RESOURCE,
**params)
dispatcher.start_action(action_id=action_id)
LOG.info(_LI('Resource update action queued: %s'), action_id)
def resource_delete(self, cnxt, user_id, resource_id):
"""Delete a specific resource"""
try:
plugin_base.Resource.load(cnxt, resource_id=resource_id)
except exception.ResourceNotFound:
LOG.warn(_LW('The resource (%s) to be deleted was not found.'),
resource_id)
return
params = {
'name': 'delete_resource_%s' % resource_id,
'cause': action_mod.CAUSE_RPC,
'status': action_mod.Action.READY,
'inputs': {'resource_id': resource_id},
}
action_id = action_mod.Action.create(cnxt, user_id,
consts.USER_DELETE_RESOURCE,
**params)
dispatcher.start_action(action_id=action_id)
LOG.info(_LI('Resource delete action queued: %s'), action_id)
@request_context
def event_list(self, cnxt, user_id=None, limit=None, marker=None,
sort_keys=None, sort_dir=None, filters=None,
start_time=None, end_time=None, project_safe=True,
show_deleted=False):
if limit is not None:
limit = utils.parse_int_param('limit', limit)
if show_deleted is not None:
show_deleted = utils.parse_bool_param('show_deleted',
show_deleted)
events = event_mod.Event.load_all(cnxt, user_id=user_id,
limit=limit, marker=marker,
sort_keys=sort_keys,
sort_dir=sort_dir,
filters=filters,
start_time=start_time,
end_time=end_time,
project_safe=project_safe,
show_deleted=show_deleted)
return [e.to_dict() for e in events]
@request_context
def policy_create(self, cnxt, name, rule_ids=None, metadata=None):
"""Create a new policy."""
if len(policy_mod.Policy.load_all(cnxt, filters={'name': name})) > 0:
msg = _("The policy (%(name)s) already exists."
) % {"name": name}
raise exception.BileanBadRequest(msg=msg)
rules = []
if rule_ids is not None:
type_cache = []
for rule_id in rule_ids:
try:
rule = plugin_base.Rule.load(cnxt, rule_id=rule_id)
if rule.type not in type_cache:
rules.append({'id': rule_id, 'type': rule.type})
type_cache.append(rule.type)
else:
msg = _("More than one rule of type '%s' is "
"not allowed.") % rule.type
raise exception.BileanBadRequest(msg=msg)
except exception.RuleNotFound as ex:
raise exception.BileanBadRequest(msg=six.text_type(ex))
kwargs = {
'rules': rules,
'metadata': metadata,
}
policy = policy_mod.Policy(name, **kwargs)
if not policy.is_default:
default_policy = policy_mod.Policy.load_default(cnxt)
if default_policy is None:
policy.is_default = True
policy.store(cnxt)
LOG.info(_LI("Successfully create policy (%s)."), policy.id)
return policy.to_dict()
@request_context
def policy_list(self, cnxt, limit=None, marker=None, sort_keys=None,
sort_dir=None, filters=None, show_deleted=False):
if limit is not None:
limit = utils.parse_int_param('limit', limit)
if show_deleted is not None:
show_deleted = utils.parse_bool_param('show_deleted',
show_deleted)
policies = policy_mod.Policy.load_all(cnxt, limit=limit,
marker=marker,
sort_keys=sort_keys,
sort_dir=sort_dir,
filters=filters,
show_deleted=show_deleted)
return [policy.to_dict() for policy in policies]
@request_context
def policy_get(self, cnxt, policy_id):
policy = policy_mod.Policy.load(cnxt, policy_id=policy_id)
return policy.to_dict()
@request_context
def policy_update(self, cnxt, policy_id, name=None, metadata=None,
is_default=None):
LOG.info(_LI("Updating policy: '%(id)s'"), {'id': policy_id})
policy = policy_mod.Policy.load(cnxt, policy_id=policy_id)
changed = False
if name is not None and name != policy.name:
policies = policy_mod.Policy.load_all(cnxt, filters={'name': name})
if len(policies) > 0:
msg = _("The policy (%(name)s) already exists."
) % {"name": name}
raise exception.BileanBadRequest(msg=msg)
policy.name = name
changed = True
if metadata is not None and metadata != policy.metadata:
policy.metadata = metadata
changed = True
if is_default is not None and is_default != policy.is_default:
is_default = utils.parse_bool_param('is_default', is_default)
if is_default:
# Setting a policy as default should unset the old default policy.
policies = policy_mod.Policy.load_all(cnxt,
filters={'is_default': True})
if len(policies) == 1:
default_policy = policies[0]
default_policy.is_default = False
default_policy.store(cnxt)
policy.is_default = is_default
changed = True
if changed:
policy.store(cnxt)
LOG.info(_LI("Policy '%(id)s' is updated."), {'id': policy_id})
return policy.to_dict()
@request_context
def policy_add_rules(self, cnxt, policy_id, rules):
LOG.info(_LI("Adding rules '%(rules)s' to policy '%(policy)s'."),
{'policy': policy_id, 'rules': rules})
policy = policy_mod.Policy.load(cnxt, policy_id=policy_id)
exist_types = [r['type'] for r in policy.rules]
error_rules = []
ok_rules = []
not_found = []
for rule in rules:
try:
db_rule = plugin_base.Rule.load(cnxt, rule_id=rule)
append_data = {'id': db_rule.id, 'type': db_rule.type}
if db_rule.type in exist_types:
error_rules.append(append_data)
else:
ok_rules.append(append_data)
except exception.RuleNotFound:
not_found.append(rule)
error = None
if len(error_rules) > 0:
error = _("The types of rules %(rules)s already exist in policy "
"%(policy)s.") % {'rules': error_rules,
'policy': policy_id}
if len(not_found) > 0:
error = _("Rules not found: %s") % not_found
if error is not None:
LOG.error(error)
raise exception.BileanBadRequest(msg=error)
policy.rules += ok_rules
policy.store(cnxt)
return policy.to_dict()
@request_context
def policy_remove_rule(self, cnxt, policy_id, rule_ids):
return NotImplemented
@request_context
def policy_delete(self, cnxt, policy_id):
LOG.info(_LI("Deleting policy: '%s'."), policy_id)
policy_mod.Policy.delete(cnxt, policy_id)
def settle_account(self, cnxt, user_id, task=None):
params = {
'name': 'settle_account_%s' % user_id,
'cause': action_mod.CAUSE_RPC,
'status': action_mod.Action.READY,
'inputs': {'task': task},
}
action_id = action_mod.Action.create(cnxt, user_id,
consts.USER_SETTLE_ACCOUNT,
**params)
self.TG.start_action(self.engine_id, action_id=action_id)
LOG.info(_LI('User settle_account action queued: %s'), action_id)
@request_context
def consumption_list(self, cnxt, user_id=None, limit=None,
marker=None, sort_keys=None, sort_dir=None,
filters=None, project_safe=True):
user_id = user_id or cnxt.project
if limit is not None:
limit = utils.parse_int_param('limit', limit)
consumptions = cons_mod.Consumption.load_all(cnxt,
user_id=user_id,
limit=limit,
marker=marker,
sort_keys=sort_keys,
sort_dir=sort_dir,
filters=filters,
project_safe=project_safe)
return [c.to_dict() for c in consumptions]
@request_context
def consumption_statistics(self, cnxt, user_id=None, filters=None,
start_time=None, end_time=None, summary=False,
project_safe=True):
user_id = user_id or cnxt.project
result = {}
if start_time is None:
start_time = 0
else:
start_time = utils.format_time_to_seconds(start_time)
start_time = utils.make_decimal(start_time)
now_time = utils.format_time_to_seconds(timeutils.utcnow())
now_time = utils.make_decimal(now_time)
if end_time is None:
end_time = now_time
else:
end_time = utils.format_time_to_seconds(end_time)
end_time = utils.make_decimal(end_time)
consumptions = cons_mod.Consumption.load_all(cnxt, user_id=user_id,
filters=filters,
project_safe=project_safe)
for cons in consumptions:
if cons.start_time > end_time or cons.end_time < start_time:
continue
et = min(cons.end_time, end_time)
st = max(cons.start_time, start_time)
seconds = et - st
cost = cons.rate * seconds
if summary:
if cons.resource_type not in result:
result[cons.resource_type] = cost
else:
result[cons.resource_type] += cost
else:
if cons.resource_id not in result:
tmp = {'resource_type': cons.resource_type,
'cost': cost}
result[cons.resource_id] = tmp
else:
result[cons.resource_id]['cost'] += cost
resources = plugin_base.Resource.load_all(cnxt, user_id=user_id,
filters=filters,
project_safe=project_safe)
for res in resources:
if res.last_bill > end_time or now_time < start_time:
continue
et = min(now_time, end_time)
st = max(res.last_bill, start_time)
seconds = et - st
cost = res.rate * seconds
if summary:
if res.resource_type not in result:
result[res.resource_type] = cost
else:
result[res.resource_type] += cost
else:
if res.id not in result:
tmp = {'resource_type': res.resource_type,
'cost': cost}
result[res.id] = tmp
else:
result[res.id]['cost'] += cost
if summary:
for key in six.iterkeys(result):
result[key] = utils.dec2str(result[key])
return result
else:
consumptions = []
for key in six.iterkeys(result):
consumption = cons_mod.Consumption(
user_id,
resource_id=key,
resource_type=result[key]['resource_type'],
cost=result[key]['cost'],
start_time=start_time,
end_time=end_time)
consumptions.append(consumption.to_dict())
return consumptions

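The core arithmetic of `consumption_statistics` above is interval clipping: each consumption record is charged only for the part of its lifetime that overlaps the query window, i.e. `rate * (min(ends) - max(starts))`. A standalone sketch of that calculation; the rates and timestamps are made-up figures:

```python
from decimal import Decimal


def overlap_cost(rate, cons_start, cons_end, start, end):
    """Cost of a record clipped to the [start, end] query window."""
    if cons_start > end or cons_end < start:
        return Decimal(0)  # no overlap, nothing to charge
    seconds = min(cons_end, end) - max(cons_start, start)
    return rate * seconds


# A resource billed at 2 units/second, alive from t=100s to t=400s,
# queried over the window [200s, 300s] -> 100s of overlap.
print(overlap_cost(Decimal(2), Decimal(100), Decimal(400),
                   Decimal(200), Decimal(300)))  # 200
```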
View File

@ -1,336 +0,0 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import six
import time
from bilean.common import exception
from bilean.common.i18n import _, _LI
from bilean.common import utils
from bilean.db import api as db_api
from bilean.drivers import base as driver_base
from bilean import notifier as bilean_notifier
from bilean.plugins import base as plugin_base
from oslo_config import cfg
from oslo_log import log as logging
wallclock = time.time
LOG = logging.getLogger(__name__)
class User(object):
"""User object contains all user operations"""
statuses = (
INIT, FREE, ACTIVE, WARNING, FREEZE,
) = (
'INIT', 'FREE', 'ACTIVE', 'WARNING', 'FREEZE',
)
def __init__(self, user_id, **kwargs):
self.id = user_id
self.name = kwargs.get('name')
self.policy_id = kwargs.get('policy_id')
self.balance = utils.make_decimal(kwargs.get('balance', 0))
self.rate = utils.make_decimal(kwargs.get('rate', 0))
self.credit = kwargs.get('credit', 0)
self.last_bill = utils.make_decimal(kwargs.get('last_bill', 0))
self.status = kwargs.get('status', self.INIT)
self.status_reason = kwargs.get('status_reason', 'Init user')
self.created_at = kwargs.get('created_at')
self.updated_at = kwargs.get('updated_at')
self.deleted_at = kwargs.get('deleted_at')
if self.name is None:
self.name = self._retrieve_name(self.id)
def store(self, context):
"""Store the user record into database table."""
values = {
'name': self.name,
'policy_id': self.policy_id,
'balance': utils.format_decimal(self.balance),
'rate': utils.format_decimal(self.rate),
'credit': self.credit,
'last_bill': utils.format_decimal(self.last_bill),
'status': self.status,
'status_reason': self.status_reason,
'created_at': self.created_at,
'updated_at': self.updated_at,
'deleted_at': self.deleted_at,
}
if self.created_at:
db_api.user_update(context, self.id, values)
else:
values.update(id=self.id)
user = db_api.user_create(context, values)
self.created_at = user.created_at
return self.id
@classmethod
def init_users(cls, context):
"""Init users from keystone."""
keystoneclient = driver_base.BileanDriver().identity()
try:
projects = keystoneclient.project_list()
except exception.InternalError as ex:
LOG.exception(_('Failed in retrieving project list: %s'),
six.text_type(ex))
return False
users = cls.load_all(context)
user_ids = [user.id for user in users]
for project in projects:
if project.id not in user_ids:
user = cls(project.id, name=project.name, status=cls.INIT,
status_reason='Init from keystone')
user.store(context)
users.append(user)
return users
@classmethod
def _retrieve_name(cls, user_id):
'''Get user name from keystone.'''
keystoneclient = driver_base.BileanDriver().identity()
try:
project = keystoneclient.project_find(user_id)
except exception.InternalError as ex:
LOG.exception(_('Failed in retrieving project: %s'),
six.text_type(ex))
return None
return project.name
@classmethod
def _from_db_record(cls, record):
'''Construct a user object from database record.
:param record: a DB user object that contains all fields;
'''
kwargs = {
'name': record.name,
'policy_id': record.policy_id,
'balance': record.balance,
'rate': record.rate,
'credit': record.credit,
'last_bill': record.last_bill,
'status': record.status,
'status_reason': record.status_reason,
'created_at': record.created_at,
'updated_at': record.updated_at,
'deleted_at': record.deleted_at,
}
return cls(record.id, **kwargs)
@classmethod
def load(cls, context, user_id=None, user=None, realtime=False,
show_deleted=False, project_safe=True):
'''Retrieve a user from database.'''
if context.is_admin:
project_safe = False
if user is None:
user = db_api.user_get(context, user_id,
show_deleted=show_deleted,
project_safe=project_safe)
if user is None:
raise exception.UserNotFound(user=user_id)
u = cls._from_db_record(user)
if not realtime:
return u
if u.rate > 0 and u.status != u.FREEZE:
seconds = utils.make_decimal(wallclock()) - u.last_bill
u.balance -= u.rate * seconds
return u
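The real-time branch in `load` is linear extrapolation from the last settlement: charge `rate` per elapsed second. A minimal sketch with plain floats instead of the `Decimal` wrappers used above:

```python
def estimate_balance(balance, rate, last_bill, now):
    # Charge rate * elapsed-seconds since the last settlement.
    return balance - rate * (now - last_bill)
```

As in `load`, the engine only applies this when the rate is positive and the user is not frozen; otherwise the stored balance is already current.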
@classmethod
def load_all(cls, context, show_deleted=False, limit=None,
marker=None, sort_keys=None, sort_dir=None,
filters=None):
'''Retrieve all users from database.'''
records = db_api.user_get_all(context, show_deleted=show_deleted,
limit=limit, marker=marker,
sort_keys=sort_keys, sort_dir=sort_dir,
filters=filters)
return [cls._from_db_record(record) for record in records]
@classmethod
def delete(cls, context, user_id=None, user=None):
'''Delete a user from database.'''
if user is not None:
db_api.user_delete(context, user_id=user.id)
return True
elif user_id is not None:
db_api.user_delete(context, user_id=user_id)
return True
return False
@classmethod
def from_dict(cls, values):
id = values.pop('id', None)
return cls(id, **values)
def to_dict(self):
user_dict = {
'id': self.id,
'name': self.name,
'policy_id': self.policy_id,
'balance': utils.dec2str(self.balance),
'rate': utils.dec2str(self.rate),
'credit': self.credit,
'last_bill': utils.dec2str(self.last_bill),
'status': self.status,
'status_reason': self.status_reason,
'created_at': utils.format_time(self.created_at),
'updated_at': utils.format_time(self.updated_at),
'deleted_at': utils.format_time(self.deleted_at),
}
return user_dict
def set_status(self, context, status, reason=None):
'''Set status of the user.'''
self.status = status
if reason:
self.status_reason = reason
self.store(context)
def update_rate(self, context, delta_rate, timestamp=None):
"""Update user's rate and update user status.
:param context: The request context.
:param delta_rate: Delta rate to change.
:param timestamp: The time that resource action occurs.
:param delayed_cost: User's action may be delayed by some reason,
adjust balance by delayed_cost.
"""
# Settle account before update rate
self._settle_account(context, timestamp=timestamp)
old_rate = self.rate
new_rate = old_rate + delta_rate
if old_rate == 0 and new_rate > 0:
# Set last_bill when status change to 'ACTIVE' from 'FREE'
self.last_bill = timestamp or wallclock()
reason = _("Status change to 'ACTIVE' cause resource creation.")
self.status = self.ACTIVE
self.status_reason = reason
elif delta_rate < 0:
if new_rate == 0 and self.balance >= 0:
reason = _("Status change to 'FREE' because of resource "
"deletion.")
self.status = self.FREE
self.status_reason = reason
elif self.status == self.WARNING and not self._notify_or_not():
reason = _("Status change from 'WARNING' to 'ACTIVE' "
"because of resource deletion.")
self.status = self.ACTIVE
self.status_reason = reason
self.rate = new_rate
self.store(context)
def do_recharge(self, context, value, recharge_type=None, timestamp=None,
metadata=None):
"""Recharge for user and update status.
param context: The request context.
param value: Recharge value.
param recharge_type: Rechage type, 'Recharge'|'System bonus'.
param timestamp: Record when recharge action occurs.
param metadata: Some other keyword.
"""
self.balance += utils.make_decimal(value)
if self.status == self.INIT and self.balance > 0:
self.status = self.FREE
self.status_reason = "Recharged"
elif self.status == self.FREEZE and self.balance > 0:
reason = _("Status change from 'FREEZE' to 'FREE' because "
"of recharge.")
self.status = self.FREE
self.status_reason = reason
elif self.status == self.WARNING:
if not self._notify_or_not():
reason = _("Status change from 'WARNING' to 'ACTIVE' because "
"of recharge.")
self.status = self.ACTIVE
self.status_reason = reason
self.store(context)
# Create recharge record
values = {'user_id': self.id,
'value': value,
'type': recharge_type,
'timestamp': timestamp,
'metadata': metadata}
db_api.recharge_create(context, values)
def _notify_or_not(self):
'''Check if user should be notified.'''
cfg.CONF.import_opt('prior_notify_time',
'bilean.scheduler.cron_scheduler',
group='scheduler')
prior_notify_time = cfg.CONF.scheduler.prior_notify_time * 3600
rest_usage = utils.make_decimal(prior_notify_time) * self.rate
return self.balance < rest_usage
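`_notify_or_not` compares the balance against the projected spend over the warning window; sketched with plain numbers, where `prior_notify_hours` is a hypothetical name standing in for the `scheduler.prior_notify_time` option:

```python
def should_notify(balance, rate, prior_notify_hours):
    # Projected spend over the warning window; rate is per second.
    rest_usage = prior_notify_hours * 3600 * rate
    return balance < rest_usage
```

A user is warned as soon as the remaining balance could not cover the configured number of hours at the current burn rate.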
def do_delete(self, context):
db_api.user_delete(context, self.id)
return True
def _settle_account(self, context, timestamp=None):
if self.rate == 0:
LOG.info(_LI("Ignore settlement action because user is in '%s' "
"status."), self.status)
return
now = timestamp or utils.make_decimal(wallclock())
usage_seconds = now - self.last_bill
cost = self.rate * usage_seconds
self.balance -= cost
self.last_bill = now
def settle_account(self, context, task=None):
'''Settle account for user.'''
notifier = bilean_notifier.Notifier()
timestamp = utils.make_decimal(wallclock())
self._settle_account(context, timestamp=timestamp)
if task == 'notify' and self._notify_or_not():
self.status_reason = "The balance is almost used up"
self.status = self.WARNING
# Notify user
msg = {'user': self.id, 'notification': self.status_reason}
notifier.info('billing.notify', msg)
elif task == 'freeze' and self.balance <= 0:
reason = _("Balance overdraft")
LOG.info(_LI("Freeze user %(user_id)s, reason: %(reason)s"),
{'user_id': self.id, 'reason': reason})
resources = plugin_base.Resource.load_all(
context, user_id=self.id, project_safe=False)
for resource in resources:
resource.do_delete(context, timestamp=timestamp)
self.rate = 0
self.status = self.FREEZE
self.status_reason = reason
# Notify user
msg = {'user': self.id, 'notification': self.status_reason}
notifier.info('billing.notify', msg)
self.store(context)

View File

@ -1,108 +0,0 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from bilean.common import exception
from bilean.common.i18n import _, _LE
from bilean.rpc import client as rpc_client
from oslo_log import log as logging
LOG = logging.getLogger(__name__)
class Action(object):
def __init__(self, cnxt, action, data):
self.rpc_client = rpc_client.EngineClient()
self.cnxt = cnxt
self.action = action
self.data = data
def execute(self):
"""Wrapper of action execution."""
action_name = self.action.lower()
method_name = "do_" + action_name
method = getattr(self, method_name, None)
if method is None:
LOG.error(_LE('Unsupported action: %s.') % self.action)
return None
return method()
def do_create(self):
return NotImplemented
def do_update(self):
return NotImplemented
def do_delete(self):
return NotImplemented
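The `do_<action>` dispatch in `execute` can be exercised in isolation; `BaseAction` and `EchoAction` here are hypothetical, cut down to show just the `getattr` lookup:

```python
class BaseAction(object):
    """Minimal re-creation of the getattr-based dispatch above."""

    def __init__(self, action):
        self.action = action

    def execute(self):
        # Map an action name like 'CREATE' to a do_create() method.
        method = getattr(self, 'do_' + self.action.lower(), None)
        if method is None:
            return None  # unsupported action
        return method()

class EchoAction(BaseAction):
    def do_create(self):
        return 'created'
```

Unsupported actions simply return `None`, matching the error path logged by `execute` above.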
class ResourceAction(Action):
"""Notification controller for Resources."""
def __init__(self, cnxt, action, data):
super(ResourceAction, self).__init__(cnxt, action, data)
self.id = data.get('resource_ref')
self.user_id = data.get('user_id')
self.resource_type = data.get('resource_type')
self.properties = {}
self._parse_and_validate()
def _parse_and_validate(self):
for key in self.data.keys():
if key not in ['resource_ref', 'user_id', 'resource_type']:
self.properties[key] = self.data[key]
if not self.id:
msg = _("Id of resource can not be None")
raise exception.InvalidResource(msg=msg)
if not self.user_id:
msg = _("User_id of resource can not be None")
raise exception.InvalidResource(msg=msg)
if not self.resource_type:
msg = _("Resource_type of resource can not be None")
raise exception.InvalidResource(msg=msg)
if not self.properties:
msg = _("Properties of resource can not be empty")
raise exception.InvalidResource(msg=msg)
def do_create(self):
"""Create new resource"""
return self.rpc_client.resource_create(self.cnxt, self.id,
self.user_id,
self.resource_type,
self.properties)
def do_update(self):
"""Update a resource"""
return self.rpc_client.resource_update(self.cnxt,
self.data.pop('user_id'),
self.data)
def do_delete(self):
"""Delete a resource"""
return self.rpc_client.resource_delete(self.cnxt,
self.user_id,
self.id)
class UserAction(Action):
"""Notification controller for Users."""
def do_create(self):
"""Create a new user"""
return self.rpc_client.user_create(self.cnxt, user_id=self.data)
def do_delete(self):
"""Delete a user"""
return self.rpc_client.delete_user(self.cnxt, user_id=self.data)

View File

@ -1,281 +0,0 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import fnmatch
import jsonpath_rw
import os
import six
import yaml
from bilean.common.i18n import _
from bilean.common.i18n import _LI
from oslo_config import cfg
from oslo_log import log as logging
from oslo_utils import timeutils
resource_definition_opts = [
cfg.StrOpt('definitions_cfg_file',
default="resource_definitions.yaml",
help="Configuration file for resource definitions."
),
cfg.BoolOpt('drop_unmatched_notifications',
default=True,
help='Drop notifications if no resource definition matches. '
'(Otherwise, we convert them with just the default traits)'),
]
resource_group = cfg.OptGroup('resource_definition')
cfg.CONF.register_group(resource_group)
cfg.CONF.register_opts(resource_definition_opts, group=resource_group)
LOG = logging.getLogger(__name__)
def get_config_file():
config_file = cfg.CONF.resource_definition.definitions_cfg_file
if not os.path.exists(config_file):
config_file = cfg.CONF.find_file(config_file)
return config_file
def setup_resources():
"""Setup the resource definitions from yaml config file."""
config_file = get_config_file()
if config_file is not None:
LOG.debug(_("Resource Definitions configuration file: %s") %
config_file)
with open(config_file) as cf:
config = cf.read()
try:
resources_config = yaml.safe_load(config)
except yaml.YAMLError as err:
if hasattr(err, 'problem_mark'):
mark = err.problem_mark
errmg = (_("Invalid YAML syntax in Resource Definitions "
"file %(file)s at line: %(line)s, column: "
"%(column)s.") % dict(file=config_file,
line=mark.line + 1,
column=mark.column + 1))
else:
errmg = (_("YAML error reading Resource Definitions file "
"%(file)s") % dict(file=config_file))
LOG.error(errmg)
raise
else:
LOG.debug(_("No Resource Definitions configuration file found!"
" Using default config."))
resources_config = []
LOG.info(_LI("Resource Definitions: %s"), resources_config)
allow_drop = cfg.CONF.resource_definition.drop_unmatched_notifications
return NotificationResourcesConverter(resources_config,
add_catchall=not allow_drop)
class NotificationResourcesConverter(object):
"""Notification Resource Converter."""
def __init__(self, resources_config, add_catchall=True):
self.definitions = [
EventDefinition(event_def)
for event_def in reversed(resources_config)]
if add_catchall and not any(d.is_catchall for d in self.definitions):
event_def = dict(event_type='*', resources={})
self.definitions.append(EventDefinition(event_def))
def to_resources(self, notification_body):
event_type = notification_body['event_type']
edef = None
for d in self.definitions:
if d.match_type(event_type):
edef = d
break
if edef is None:
msg = (_('Dropping Notification %(type)s')
% dict(type=event_type))
if cfg.CONF.resource_definition.drop_unmatched_notifications:
LOG.debug(msg)
else:
# If drop_unmatched_notifications is False, this should
# never happen. (mdragon)
LOG.error(msg)
return None
return edef.to_resources(notification_body)
class EventDefinition(object):
def __init__(self, definition_cfg):
self._included_types = []
self._excluded_types = []
self.cfg = definition_cfg
try:
event_type = definition_cfg['event_type']
self.resources = [ResourceDefinition(resource_def)
for resource_def in definition_cfg['resources']]
except KeyError as err:
raise EventDefinitionException(
_("Required field %s not specified") % err.args[0], self.cfg)
if isinstance(event_type, six.string_types):
event_type = [event_type]
for t in event_type:
if t.startswith('!'):
self._excluded_types.append(t[1:])
else:
self._included_types.append(t)
if self._excluded_types and not self._included_types:
self._included_types.append('*')
def included_type(self, event_type):
for t in self._included_types:
if fnmatch.fnmatch(event_type, t):
return True
return False
def excluded_type(self, event_type):
for t in self._excluded_types:
if fnmatch.fnmatch(event_type, t):
return True
return False
def match_type(self, event_type):
return (self.included_type(event_type) and
not self.excluded_type(event_type))
@property
def is_catchall(self):
return '*' in self._included_types and not self._excluded_types
def to_resources(self, notification_body):
resources = []
for resource in self.resources:
resources.append(resource.to_resource(notification_body))
return resources
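The include/exclude matching above is plain `fnmatch` wildcarding; a self-contained sketch of `match_type` with the type lists passed in explicitly:

```python
import fnmatch

def match_type(event_type, included, excluded):
    """Wildcard match mirroring EventDefinition.match_type."""
    return (any(fnmatch.fnmatch(event_type, t) for t in included)
            and not any(fnmatch.fnmatch(event_type, t) for t in excluded))
```

An exclusion pattern (written with a leading `!` in the config) always wins over an inclusion match, as in `match_type` above.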
class ResourceDefinition(object):
DEFAULT_TRAITS = dict(
user_id=dict(type='string', fields='payload.tenant_id'),
)
def __init__(self, definition_cfg):
self.traits = dict()
try:
self.resource_type = definition_cfg['resource_type']
traits = definition_cfg['traits']
except KeyError as err:
raise EventDefinitionException(
_("Required field %s not specified") % err.args[0], self.cfg)
for trait_name in self.DEFAULT_TRAITS:
self.traits[trait_name] = TraitDefinition(
trait_name,
self.DEFAULT_TRAITS[trait_name])
for trait_name in traits:
self.traits[trait_name] = TraitDefinition(
trait_name,
traits[trait_name])
def to_resource(self, notification_body):
traits = (self.traits[t].to_trait(notification_body)
for t in self.traits)
# Only accept non-None value traits ...
traits = [trait for trait in traits if trait is not None]
resource = {"resource_type": self.resource_type}
for trait in traits:
resource.update(trait)
if 'created_at' not in resource:
resource['created_at'] = timeutils.utcnow()
return resource
class TraitDefinition(object):
def __init__(self, name, trait_cfg):
self.cfg = trait_cfg
self.name = name
type_name = trait_cfg.get('type', 'string')
if 'fields' not in trait_cfg:
raise EventDefinitionException(
_("Required field in trait definition not specified: "
"'%s'") % 'fields',
self.cfg)
fields = trait_cfg['fields']
if not isinstance(fields, six.string_types):
# NOTE(mdragon): if not a string, we assume a list.
if len(fields) == 1:
fields = fields[0]
else:
fields = '|'.join('(%s)' % path for path in fields)
try:
self.fields = jsonpath_rw.parse(fields)
except Exception as e:
raise EventDefinitionException(
_("Parse error in JSONPath specification "
"'%(jsonpath)s' for %(trait)s: %(err)s")
% dict(jsonpath=fields, trait=name, err=e), self.cfg)
self.trait_type = type_name
if self.trait_type is None:
raise EventDefinitionException(
_("Invalid trait type '%(type)s' for trait %(trait)s")
% dict(type=type_name, trait=name), self.cfg)
def to_trait(self, notification_body):
values = [match for match in self.fields.find(notification_body)
if match.value is not None]
value = values[0].value if values else None
if value is None:
return None
if self.trait_type != 'string' and value == '':
return None
if self.trait_type == "int":
value = int(value)
elif self.trait_type == "float":
value = float(value)
elif self.trait_type == "string":
value = str(value)
return {self.name: value}
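`TraitDefinition` pulls values out of the notification body with JSONPath; a simplified dotted-path stand-in (no `jsonpath_rw` dependency, supports only plain `a.b.c` paths) shows the shape of the lookup:

```python
def find_field(body, path):
    """Walk a dotted path through nested dicts; None if any hop is missing."""
    value = body
    for part in path.split('.'):
        if not isinstance(value, dict) or part not in value:
            return None
        value = value[part]
    return value
```

A missing field yields `None`, which `to_trait` above treats as "drop this trait" rather than an error.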
class EventDefinitionException(Exception):
def __init__(self, message, definition_cfg):
super(EventDefinitionException, self).__init__(message)
self.definition_cfg = definition_cfg
def __str__(self):
return '%s %s: %s' % (self.__class__.__name__,
self.definition_cfg, self.message)
def list_opts():
yield resource_group.name, resource_definition_opts
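A `resource_definitions.yaml` consumed by this converter pairs an event-type pattern with resource definitions and their traits; a hypothetical fragment (the `flavor` trait and `payload.*` field names are illustrative, only `user_id` is a built-in default):

```yaml
- event_type: compute.instance.*
  resources:
    - resource_type: instance
      traits:
        user_id:
          fields: payload.tenant_id
        flavor:
          fields: payload.instance_type
```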

View File

@ -1,93 +0,0 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from bilean.common import context
from bilean.common.i18n import _, _LE, _LI
from bilean.notification import action as notify_action
from bilean.notification import converter
from oslo_log import log as logging
import oslo_messaging
LOG = logging.getLogger(__name__)
KEYSTONE_EVENTS = ['identity.project.created',
'identity.project.deleted']
class EventsNotificationEndpoint(object):
def __init__(self):
self.resource_converter = converter.setup_resources()
self.cnxt = context.get_service_context(set_project_id=True)
super(EventsNotificationEndpoint, self).__init__()
def info(self, ctxt, publisher_id, event_type, payload, metadata):
"""Convert message to Billing Event.
:param ctxt: oslo_messaging context
:param publisher_id: publisher of the notification
:param event_type: type of notification
:param payload: notification payload
:param metadata: metadata about the notification
"""
notification = dict(event_type=event_type,
payload=payload,
metadata=metadata)
LOG.debug(_("Receive notification: %s") % notification)
if event_type in KEYSTONE_EVENTS:
return self.process_identity_notification(notification)
return self.process_resource_notification(notification)
def process_identity_notification(self, notification):
"""Convert notification to user."""
user_id = notification['payload'].get('resource_info')
if not user_id:
LOG.error(_LE("Cannot retrieve user_id from notification: %s"),
notification)
return oslo_messaging.NotificationResult.HANDLED
action = self._get_action(notification['event_type'])
if action:
act = notify_action.UserAction(self.cnxt, action, user_id)
LOG.info(_LI("Notify engine to %(action)s user: %(user)s") %
{'action': action, 'user': user_id})
act.execute()
return oslo_messaging.NotificationResult.HANDLED
def process_resource_notification(self, notification):
"""Convert notification to resources."""
resources = self.resource_converter.to_resources(notification)
if not resources:
LOG.info('Ignoring notification because no matching '
'resources were found.')
return oslo_messaging.NotificationResult.HANDLED
action = self._get_action(notification['event_type'])
if action:
for resource in resources:
act = notify_action.ResourceAction(
self.cnxt, action, resource)
LOG.info(_LI("Notify engine to %(action)s resource: "
"%(resource)s") % {'action': action,
'resource': resource})
act.execute()
return oslo_messaging.NotificationResult.HANDLED
def _get_action(self, event_type):
available_actions = ['create', 'delete', 'update']
for action in available_actions:
if action in event_type:
return action
LOG.info(_LI("Can not get action info in event_type: %s") % event_type)
return None
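`_get_action` is a plain substring scan over the event type, first match wins; a standalone sketch:

```python
def get_action(event_type, available=('create', 'delete', 'update')):
    # First available action name appearing anywhere in the event type wins.
    for action in available:
        if action in event_type:
            return action
    return None
```

Note the substring match is loose by design: `'create'` matches `identity.project.created` as well as `compute.instance.create.end`.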

View File

@ -1,91 +0,0 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
from oslo_log import log as logging
import oslo_messaging
from oslo_service import service
from bilean.common.i18n import _LE
from bilean.common import messaging as bilean_messaging
from bilean.engine import environment
from bilean.notification import endpoint
LOG = logging.getLogger(__name__)
listener_opts = [
cfg.IntOpt('workers',
default=1,
min=1,
help='Number of workers for notification service. A single '
'notification agent is enabled by default.'),
cfg.StrOpt('notifications_pool',
default='bilean-listener',
help='Use an oslo.messaging pool, which can be an alternative '
'to multiple topics. ')
]
CONF = cfg.CONF
CONF.register_opts(listener_opts, group="listener")
class NotificationService(service.Service):
def __init__(self, *args, **kwargs):
super(NotificationService, self).__init__(*args, **kwargs)
self.listeners = []
self.topics_exchanges_set = self.topics_and_exchanges()
def topics_and_exchanges(self):
topics_exchanges = set()
plugins = environment.global_env().get_plugins()
for plugin in plugins:
try:
topic_exchanges = plugin.get_notification_topics_exchanges()
for plugin_topic in topic_exchanges:
if isinstance(plugin_topic, str):
raise Exception(
_LE("Plugin %s should return a list of topic "
"exchange pairs") % plugin.__class__.__name__)
topics_exchanges.add(plugin_topic)
except Exception as e:
LOG.error(_LE("Failed to retrieve notification topic(s) "
"and exchanges from bilean plugin "
"%(ext)s: %(e)s") %
{'ext': plugin.__name__, 'e': e})
return topics_exchanges
def start(self):
super(NotificationService, self).start()
transport = bilean_messaging.get_transport()
targets = [
oslo_messaging.Target(topic=tp, exchange=eg)
for tp, eg in self.topics_exchanges_set
]
endpoints = [endpoint.EventsNotificationEndpoint()]
listener = oslo_messaging.get_notification_listener(
transport,
targets,
endpoints,
pool=CONF.listener.notifications_pool)
listener.start()
self.listeners.append(listener)
# Add a dummy thread to have wait() working
self.tg.add_timer(604800, lambda: None)
def stop(self):
for listener in self.listeners:
listener.stop()
super(NotificationService, self).stop()

View File

@ -1,46 +0,0 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
import oslo_messaging
notifier_opts = [
cfg.StrOpt('default_publisher_id', default="billing.localhost",
help='Default publisher_id for outgoing notifications.'),
]
CONF = cfg.CONF
CONF.register_opts(notifier_opts)
def get_transport():
return oslo_messaging.get_transport(CONF)
class Notifier(object):
"""Uses a notification strategy to send out messages about events."""
def __init__(self):
publisher_id = CONF.default_publisher_id
self._transport = get_transport()
self._notifier = oslo_messaging.Notifier(
self._transport, publisher_id=publisher_id)
def warn(self, event_type, payload):
self._notifier.warn({}, event_type, payload)
def info(self, event_type, payload):
self._notifier.info({}, event_type, payload)
def error(self, event_type, payload):
self._notifier.error({}, event_type, payload)

View File

@ -1,453 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import time
from oslo_config import cfg
from oslo_utils import timeutils
from bilean.common import exception
from bilean.common.i18n import _
from bilean.common import schema
from bilean.common import utils
from bilean.db import api as db_api
from bilean.engine import consumption as consumption_mod
from bilean.engine import environment
wallclock = time.time
resource_opts = [
cfg.StrOpt('notifications_topic', default="notifications",
help="The default messaging notifications topic"),
]
CONF = cfg.CONF
CONF.register_opts(resource_opts, group='resource_plugin')
class Plugin(object):
'''Base class for plugins.'''
RuleClass = None
ResourceClass = None
notification_exchanges = []
@classmethod
def get_notification_topics_exchanges(cls):
"""Returns a list of (topic,exchange), (topic,exchange)..)."""
return [(CONF.resource_plugin.notifications_topic, exchange)
for exchange in cls.notification_exchanges]
class Rule(object):
'''Base class for rules.'''
KEYS = (
TYPE, VERSION, PROPERTIES,
) = (
'type', 'version', 'properties',
)
spec_schema = {
TYPE: schema.String(
_('Name of the rule type.'),
required=True,
),
VERSION: schema.String(
_('Version number of the rule type.'),
required=True,
),
PROPERTIES: schema.Map(
_('Properties for the rule.'),
required=True,
)
}
properties_schema = {}
def __new__(cls, name, spec, **kwargs):
"""Create a new rule of the appropriate class.
:param name: The name for the rule.
:param spec: A dictionary containing the spec for the rule.
:param kwargs: Keyword arguments for rule creation.
:returns: An instance of a specific sub-class of Rule.
"""
type_name, version = schema.get_spec_version(spec)
if cls != Rule:
RuleClass = cls
else:
PluginClass = environment.global_env().get_plugin(type_name)
RuleClass = PluginClass.RuleClass
return super(Rule, cls).__new__(RuleClass)
def __init__(self, name, spec, **kwargs):
"""Initialize a rule instance.
:param name: A string that specifies the name for the rule.
:param spec: A dictionary containing the detailed rule spec.
:param kwargs: Keyword arguments for initializing the rule.
:returns: An instance of a specific sub-class of Rule.
"""
type_name, version = schema.get_spec_version(spec)
self.name = name
self.spec = spec
self.id = kwargs.get('id')
self.type = kwargs.get('type', '%s-%s' % (type_name, version))
self.metadata = kwargs.get('metadata', {})
self.created_at = kwargs.get('created_at')
self.updated_at = kwargs.get('updated_at')
self.deleted_at = kwargs.get('deleted_at')
self.spec_data = schema.Spec(self.spec_schema, self.spec)
self.properties = schema.Spec(self.properties_schema,
self.spec.get(self.PROPERTIES, {}))
@classmethod
def from_db_record(cls, record):
'''Construct a rule object from database record.
:param record: a DB Rule object that contains all required fields.
'''
kwargs = {
'id': record.id,
'type': record.type,
'metadata': record.meta_data,
'created_at': record.created_at,
'updated_at': record.updated_at,
'deleted_at': record.deleted_at,
}
return cls(record.name, record.spec, **kwargs)
@classmethod
def load(cls, context, rule_id=None, rule=None, show_deleted=False):
'''Retrieve a rule object from database.'''
if rule is None:
rule = db_api.rule_get(context, rule_id,
show_deleted=show_deleted)
if rule is None:
raise exception.RuleNotFound(rule=rule_id)
return cls.from_db_record(rule)
@classmethod
def load_all(cls, context, limit=None, marker=None, sort_keys=None,
sort_dir=None, filters=None, show_deleted=False):
'''Retrieve all rules from database.'''
records = db_api.rule_get_all(context, limit=limit,
marker=marker,
sort_keys=sort_keys,
sort_dir=sort_dir,
filters=filters,
show_deleted=show_deleted)
return [cls.from_db_record(record) for record in records]
@classmethod
def delete(cls, context, rule_id):
db_api.rule_delete(context, rule_id)
def store(self, context):
'''Store the rule into database and return its ID.'''
timestamp = timeutils.utcnow()
values = {
'name': self.name,
'type': self.type,
'spec': self.spec,
'meta_data': self.metadata,
}
if self.id:
self.updated_at = timestamp
values['updated_at'] = timestamp
db_api.rule_update(context, self.id, values)
else:
self.created_at = timestamp
values['created_at'] = timestamp
rule = db_api.rule_create(context, values)
self.id = rule.id
return self.id
def validate(self):
'''Validate the schema and the data provided.'''
# general validation
self.spec_data.validate()
self.properties.validate()
@classmethod
def get_schema(cls):
return dict((name, dict(schema))
for name, schema in cls.properties_schema.items())
def get_price(self, resource):
'''For subclasses to override.'''
raise NotImplementedError
def to_dict(self):
rule_dict = {
'id': self.id,
'name': self.name,
'type': self.type,
'spec': self.spec,
'metadata': self.metadata,
'created_at': utils.format_time(self.created_at),
'updated_at': utils.format_time(self.updated_at),
'deleted_at': utils.format_time(self.deleted_at),
}
return rule_dict
@classmethod
def from_dict(cls, **kwargs):
name = kwargs.pop('name')
spec = kwargs.pop('spec')
return cls(name, spec, **kwargs)
class Resource(object):
"""A resource is an object that refers to a physical resource.
The resource comes from other openstack component such as nova,
cinder, neutron and so on, it can be an instance or volume or
something else.
"""
def __new__(cls, id, user_id, res_type, properties, **kwargs):
"""Create a new resource of the appropriate class.
        :param id: The resource ID, the same as the ID of the real resource.
:param user_id: The user ID the resource belongs to.
:param properties: The properties of resource.
:param dict kwargs: Other keyword arguments for the resource.
"""
if cls != Resource:
ResourceClass = cls
else:
PluginClass = environment.global_env().get_plugin(res_type)
ResourceClass = PluginClass.ResourceClass
return super(Resource, cls).__new__(ResourceClass)
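The `__new__` override above implements a plugin factory: constructing the base class dispatches to a type-specific subclass looked up in the environment. A minimal standalone sketch of the pattern, with a plain dict standing in for `environment.global_env()` (all names here are illustrative, not Bilean's API):

```python
_REGISTRY = {}  # resource_type -> subclass; stands in for the plugin environment


class Res(object):
    def __new__(cls, res_type, **kwargs):
        if cls is not Res:
            klass = cls                   # direct subclass instantiation
        else:
            klass = _REGISTRY[res_type]   # factory dispatch on resource type
        return super(Res, cls).__new__(klass)

    def __init__(self, res_type, rate=0):
        self.res_type = res_type
        self.rate = rate


class ServerRes(Res):
    pass


_REGISTRY['os.nova.server'] = ServerRes

r = Res('os.nova.server', rate=5)  # actually builds a ServerRes
```

Because the object returned from `__new__` is still an instance of the base class, `__init__` runs on it with the original arguments.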
def __init__(self, id, user_id, resource_type, properties, **kwargs):
self.id = id
self.user_id = user_id
self.resource_type = resource_type
self.properties = properties
self.rule_id = kwargs.get('rule_id')
self.rate = utils.make_decimal(kwargs.get('rate', 0))
self.last_bill = utils.make_decimal(kwargs.get('last_bill', 0))
self.created_at = kwargs.get('created_at')
self.updated_at = kwargs.get('updated_at')
self.deleted_at = kwargs.get('deleted_at')
        # Properties passed to the user to help settle the account; not stored in db
self.delta_rate = 0
self.consumption = None
def store(self, context):
"""Store the resource record into database table."""
values = {
'user_id': self.user_id,
'resource_type': self.resource_type,
'properties': self.properties,
'rule_id': self.rule_id,
'rate': utils.format_decimal(self.rate),
'last_bill': utils.format_decimal(self.last_bill),
'created_at': self.created_at,
'updated_at': self.updated_at,
'deleted_at': self.deleted_at,
}
if self.created_at:
self._update(context, values)
else:
values.update(id=self.id)
self._create(context, values)
return self.id
def delete(self, context, timestamp=None, soft_delete=True):
'''Delete resource from db.'''
self._delete(context, timestamp=timestamp, soft_delete=soft_delete)
def _create(self, context, values):
self.delta_rate = self.rate
if self.delta_rate == 0:
resource = db_api.resource_create(context, values)
self.created_at = resource.created_at
return
self.last_bill = utils.make_decimal(wallclock())
create_time = self.properties.get('created_at')
if create_time is not None:
sec = utils.format_time_to_seconds(create_time)
self.last_bill = utils.make_decimal(sec)
values.update(last_bill=utils.format_decimal(self.last_bill))
resource = db_api.resource_create(context, values)
self.created_at = resource.created_at
def _update(self, context, values):
if self.delta_rate == 0:
db_api.resource_update(context, self.id, values)
return
update_time = self.properties.get('updated_at')
updated_at = utils.make_decimal(wallclock())
if update_time is not None:
sec = utils.format_time_to_seconds(update_time)
updated_at = utils.make_decimal(sec)
        # Generate consumption between last bill and update time
old_rate = self.rate - self.delta_rate
cost = (updated_at - self.last_bill) * old_rate
params = {'resource_id': self.id,
'resource_type': self.resource_type,
'start_time': self.last_bill,
'end_time': updated_at,
'rate': old_rate,
'cost': cost,
'metadata': {'cause': 'Resource update'}}
self.consumption = consumption_mod.Consumption(self.user_id, **params)
self.last_bill = updated_at
values.update(last_bill=utils.format_decimal(updated_at))
db_api.resource_update(context, self.id, values)
def _delete(self, context, timestamp=None, soft_delete=True):
self.delta_rate = - self.rate
if self.delta_rate == 0:
db_api.resource_delete(context, self.id, soft_delete=soft_delete)
return
deleted_at = timestamp or utils.make_decimal(wallclock())
delete_time = self.properties.get('deleted_at')
if delete_time is not None:
sec = utils.format_time_to_seconds(delete_time)
deleted_at = utils.make_decimal(sec)
        # Generate consumption between last bill and delete time
cost = (deleted_at - self.last_bill) * self.rate
params = {'resource_id': self.id,
'resource_type': self.resource_type,
'start_time': self.last_bill,
'end_time': deleted_at,
'rate': self.rate,
'cost': cost,
'metadata': {'cause': 'Resource deletion'}}
self.consumption = consumption_mod.Consumption(self.user_id, **params)
self.last_bill = deleted_at
db_api.resource_delete(context, self.id, soft_delete=soft_delete)
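Both `_update` and `_delete` above charge for the window between the last bill and the event time as `(end - start) * rate`. A standalone sketch of that arithmetic with `decimal.Decimal` (which the `utils.make_decimal`/`format_decimal` helpers presumably wrap to avoid float drift):

```python
from decimal import Decimal


def billing_cost(last_bill, now, rate):
    """Cost accrued between two timestamps (in seconds) at a per-second rate."""
    return (Decimal(str(now)) - Decimal(str(last_bill))) * Decimal(str(rate))


# Two hours at 0.001 units per second.
cost = billing_cost(0, 7200, '0.001')
```

Converting through `str` keeps float inputs from smuggling binary rounding error into the `Decimal` arithmetic.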
@classmethod
def _from_db_record(cls, record):
        '''Construct a resource object from a database record.

        :param record: a DB resource record that contains all fields.
        '''
kwargs = {
'rule_id': record.rule_id,
'rate': record.rate,
'last_bill': record.last_bill,
'created_at': record.created_at,
'updated_at': record.updated_at,
'deleted_at': record.deleted_at,
}
return cls(record.id, record.user_id, record.resource_type,
record.properties, **kwargs)
@classmethod
def load(cls, context, resource_id=None, resource=None,
show_deleted=False, project_safe=True):
'''Retrieve a resource from database.'''
if context.is_admin:
project_safe = False
if resource is None:
resource = db_api.resource_get(context, resource_id,
show_deleted=show_deleted,
project_safe=project_safe)
if resource is None:
raise exception.ResourceNotFound(resource=resource_id)
return cls._from_db_record(resource)
@classmethod
def load_all(cls, context, user_id=None, show_deleted=False,
limit=None, marker=None, sort_keys=None, sort_dir=None,
filters=None, project_safe=True):
        '''Retrieve all resources from database.'''
records = db_api.resource_get_all(context, user_id=user_id,
show_deleted=show_deleted,
limit=limit, marker=marker,
sort_keys=sort_keys,
sort_dir=sort_dir,
filters=filters,
project_safe=project_safe)
return [cls._from_db_record(record) for record in records]
@classmethod
def from_dict(cls, values):
id = values.pop('id', None)
user_id = values.pop('user_id', None)
resource_type = values.pop('resource_type', None)
properties = values.pop('properties', {})
return cls(id, user_id, resource_type, properties, **values)
def to_dict(self):
resource_dict = {
'id': self.id,
'user_id': self.user_id,
'resource_type': self.resource_type,
'properties': self.properties,
'rule_id': self.rule_id,
'rate': utils.dec2str(self.rate),
'last_bill': utils.dec2str(self.last_bill),
'created_at': utils.format_time(self.created_at),
'updated_at': utils.format_time(self.updated_at),
'deleted_at': utils.format_time(self.deleted_at),
}
return resource_dict
@classmethod
def do_check(cls, context, user):
'''Communicate with other services and check user's resources.
        This would be a periodic job for the user to check whether any
        actions are missing, and then make corrections.
'''
return NotImplemented
def do_delete(self, context, ignore_missing=True, timeout=None):
'''Delete resource from other services.'''
return NotImplemented


@ -1,131 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import six
from oslo_log import log as logging
from bilean.common import exception
from bilean.common.i18n import _, _LE
from bilean.common import schema
from bilean.drivers import base as driver_base
from bilean.plugins import base
LOG = logging.getLogger(__name__)
class VolumeRule(base.Rule):
'''Rule for an OpenStack Cinder volume.'''
KEYS = (
PRICE_MAPPING, UNIT,
) = (
'price_mapping', 'unit',
)
PM_KEYS = (
START, END, PRICE,
) = (
'start', 'end', 'price',
)
AVAILABLE_UNIT = (
PER_HOUR, PER_SEC,
) = (
'per_hour', 'per_sec',
)
properties_schema = {
PRICE_MAPPING: schema.List(
_('A list specifying the prices.'),
schema=schema.Map(
                _('A map specifying the price of each volume capacity '
                  'interval.'),
schema={
START: schema.Integer(
_('Start volume capacity.'),
),
END: schema.Integer(
_('End volume capacity.'),
),
PRICE: schema.Integer(
_('Price of this interval.'),
),
}
),
required=True,
updatable=True,
),
UNIT: schema.String(
_('Unit of price, per_hour or per_sec.'),
default='per_hour',
),
}
def get_price(self, resource):
        '''Get the price of the resource in seconds.

        If no exact price is found, 0 will be returned.

        :param resource: Resource object to find the price for.
        '''
        size = resource.properties.get('size')
        if not size:
            raise exception.Error(msg='Size of volume should be provided to '
                                      'get price.')
        size = int(size)
        p_mapping = self.properties.get(self.PRICE_MAPPING)
        price = 0
        for pm in p_mapping:
            if pm.get(self.START) <= size <= pm.get(self.END):
                price = pm.get(self.PRICE)
                break
        if self.PER_HOUR == self.properties.get(self.UNIT) and price > 0:
            price = price * 1.0 / 3600
        return price
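The interval lookup above can be sketched standalone (plain dicts here are illustrative, not Bilean's schema objects); a per-hour price is converted to a per-second one, matching the division by 3600:

```python
def volume_price(size, price_mapping, unit='per_hour'):
    """Look up the price for a volume size in a list of capacity intervals."""
    price = 0
    for pm in price_mapping:
        if pm['start'] <= size <= pm['end']:
            price = pm['price']
            break
    # Prices are given per hour by default, but billing works in seconds.
    if unit == 'per_hour' and price > 0:
        price = price * 1.0 / 3600
    return price


mapping = [{'start': 1, 'end': 100, 'price': 3600},
           {'start': 101, 'end': 1000, 'price': 7200}]
```

A size that matches no interval simply yields a price of 0, so an unpriced resource accrues no cost rather than raising.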
class VolumeResource(base.Resource):
'''Resource for an OpenStack Cinder volume.'''
@classmethod
    def do_check(cls, context, user):
'''Communicate with other services and check user's resources.
        This would be a periodic job for the user to check whether any
        actions are missing, and then make corrections.
'''
# TODO(ldb)
return NotImplemented
def do_delete(self, context, timestamp=None, ignore_missing=True,
timeout=None):
'''Delete resource from other services.'''
# Delete resource from db and generate consumption
self.delete(context, timestamp=timestamp)
self.consumption.store(context)
# Delete resource from cinder
cinderclient = driver_base.BileanDriver().block_store()
try:
cinderclient.volume_delete(self.id, ignore_missing=ignore_missing)
except Exception as ex:
LOG.error(_LE('Error: %s'), six.text_type(ex))
return False
return True
class VolumePlugin(base.Plugin):
    '''Plugin for an OpenStack Cinder volume.'''
RuleClass = VolumeRule
ResourceClass = VolumeResource
notification_exchanges = ['openstack']


@ -1,131 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import six
from oslo_log import log as logging
from bilean.common import exception
from bilean.common.i18n import _, _LE
from bilean.common import schema
from bilean.drivers import base as driver_base
from bilean.plugins import base
LOG = logging.getLogger(__name__)
class ServerRule(base.Rule):
'''Rule for an OpenStack Nova server.'''
KEYS = (
PRICE_MAPPING, UNIT,
) = (
'price_mapping', 'unit',
)
PM_KEYS = (
FLAVOR, PRICE,
) = (
'flavor', 'price',
)
AVAILABLE_UNIT = (
PER_HOUR, PER_SEC,
) = (
'per_hour', 'per_sec',
)
properties_schema = {
PRICE_MAPPING: schema.List(
            _('A list specifying the price of each flavor.'),
            schema=schema.Map(
                _('A map specifying the price of a flavor.'),
schema={
FLAVOR: schema.String(
_('Flavor id to set price.'),
),
PRICE: schema.Integer(
_('Price of this flavor.'),
),
}
),
required=True,
updatable=True,
),
UNIT: schema.String(
_('Unit of price, per_hour or per_sec.'),
default='per_hour',
),
}
def get_price(self, resource):
        '''Get the price of the resource in seconds.

        If no exact price is found, no rule has been set for the server's
        flavor; 0 will be returned as the price, prompting the admin to
        set one.

        :param resource: Resource object to find the price for.
        '''
flavor = resource.properties.get('flavor')
if not flavor:
raise exception.Error(msg='Flavor should be provided to get '
'the price of server.')
p_mapping = self.properties.get(self.PRICE_MAPPING)
price = 0
for pm in p_mapping:
if flavor == pm.get(self.FLAVOR):
price = pm.get(self.PRICE)
if self.PER_HOUR == self.properties.get(self.UNIT) and price > 0:
price = price * 1.0 / 3600
return price
class ServerResource(base.Resource):
'''Resource for an OpenStack Nova server.'''
@classmethod
    def do_check(cls, context, user):
'''Communicate with other services and check user's resources.
        This would be a periodic job for the user to check whether any
        actions are missing, and then make corrections.
'''
# TODO(ldb)
return NotImplemented
def do_delete(self, context, timestamp=None, ignore_missing=True,
timeout=None):
'''Delete resource from other services.'''
# Delete resource from db and generate consumption
self.delete(context, timestamp=timestamp)
self.consumption.store(context)
# Delete resource from nova
novaclient = driver_base.BileanDriver().compute()
try:
novaclient.server_delete(self.id, ignore_missing=ignore_missing)
novaclient.wait_for_server_delete(self.id, timeout=timeout)
except Exception as ex:
LOG.error(_LE('Error: %s'), six.text_type(ex))
return False
return True
class ServerPlugin(base.Plugin):
    '''Plugin for an OpenStack Nova server.'''
RuleClass = ServerRule
ResourceClass = ServerResource
notification_exchanges = ['nova']



@ -1,235 +0,0 @@
#
# Copyright 2012, Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Client side of the bilean engine RPC API.
"""
from bilean.common import consts
from bilean.common import messaging
from oslo_config import cfg
class EngineClient(object):
'''Client side of the bilean engine rpc API.'''
BASE_RPC_API_VERSION = '1.0'
def __init__(self):
cfg.CONF.import_opt('host', 'bilean.common.config')
self._client = messaging.get_rpc_client(
topic=consts.ENGINE_TOPIC,
server=cfg.CONF.host,
version=self.BASE_RPC_API_VERSION)
@staticmethod
def make_msg(method, **kwargs):
return method, kwargs
def call(self, ctxt, msg, version=None):
method, kwargs = msg
if version is not None:
client = self._client.prepare(version=version)
else:
client = self._client
return client.call(ctxt, method, **kwargs)
def cast(self, ctxt, msg, version=None):
method, kwargs = msg
if version is not None:
client = self._client.prepare(version=version)
else:
client = self._client
return client.cast(ctxt, method, **kwargs)
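The `make_msg`/`call` split above simply packs a remote method name with its keyword arguments into a tuple that is unpacked again before dispatch. Sketched here without oslo.messaging (`FakeClient` is a stand-in, not the real RPC client):

```python
def make_msg(method, **kwargs):
    # Pack the remote method name together with its arguments.
    return method, kwargs


class FakeClient(object):
    """Stand-in that records what would be sent over RPC."""

    def call(self, ctxt, msg):
        method, kwargs = msg
        return {'method': method, 'args': kwargs}


client = FakeClient()
reply = client.call({}, make_msg('user_get', user_id='u-1'))
```

Keeping the message as `(method, kwargs)` lets every wrapper method below stay a one-liner while the transport details live in `call`/`cast`.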
# users
def user_list(self, ctxt, show_deleted=False, limit=None,
marker=None, sort_keys=None, sort_dir=None,
filters=None):
return self.call(ctxt,
self.make_msg('user_list',
show_deleted=show_deleted,
limit=limit, marker=marker,
sort_keys=sort_keys, sort_dir=sort_dir,
filters=filters))
def user_get(self, ctxt, user_id):
return self.call(ctxt, self.make_msg('user_get',
user_id=user_id))
def user_create(self, ctxt, user_id, balance=None, credit=None,
status=None):
return self.call(ctxt, self.make_msg('user_create', user_id=user_id,
balance=balance, credit=credit,
status=status))
def user_recharge(self, ctxt, user_id, value):
return self.call(ctxt, self.make_msg('user_recharge',
user_id=user_id,
value=value))
def user_delete(self, ctxt, user_id):
return self.call(ctxt, self.make_msg('user_delete',
user_id=user_id))
def user_attach_policy(self, ctxt, user_id, policy_id):
return self.call(ctxt, self.make_msg('user_attach_policy',
user_id=user_id,
policy_id=policy_id))
# rules
def rule_list(self, ctxt, limit=None, marker=None, sort_keys=None,
sort_dir=None, filters=None, show_deleted=False):
return self.call(ctxt, self.make_msg('rule_list', limit=limit,
marker=marker,
sort_keys=sort_keys,
sort_dir=sort_dir,
filters=filters,
show_deleted=show_deleted))
def rule_get(self, ctxt, rule_id):
return self.call(ctxt, self.make_msg('rule_get',
rule_id=rule_id))
def rule_create(self, ctxt, name, spec, metadata):
return self.call(ctxt, self.make_msg('rule_create', name=name,
spec=spec, metadata=metadata))
def rule_update(self, ctxt, values):
return self.call(ctxt, self.make_msg('rule_update',
values=values))
def rule_delete(self, ctxt, rule_id):
return self.call(ctxt, self.make_msg('rule_delete',
rule_id=rule_id))
# resources
def resource_list(self, ctxt, user_id=None, limit=None, marker=None,
sort_keys=None, sort_dir=None, filters=None,
project_safe=True, show_deleted=False):
return self.call(ctxt, self.make_msg('resource_list', user_id=user_id,
limit=limit, marker=marker,
sort_keys=sort_keys,
sort_dir=sort_dir,
filters=filters,
project_safe=project_safe,
show_deleted=show_deleted))
def resource_get(self, ctxt, resource_id):
return self.call(ctxt, self.make_msg('resource_get',
resource_id=resource_id))
def resource_create(self, ctxt, resource_id, user_id,
resource_type, properties):
return self.call(ctxt, self.make_msg('resource_create',
resource_id=resource_id,
user_id=user_id,
resource_type=resource_type,
properties=properties))
def resource_update(self, ctxt, user_id, resource):
return self.call(ctxt, self.make_msg('resource_update',
user_id=user_id,
resource=resource))
def resource_delete(self, ctxt, user_id, resource_id):
return self.call(ctxt, self.make_msg('resource_delete',
user_id=user_id,
resource_id=resource_id))
# events
def event_list(self, ctxt, user_id=None, limit=None, marker=None,
sort_keys=None, sort_dir=None, filters=None,
start_time=None, end_time=None, project_safe=True,
show_deleted=False):
return self.call(ctxt, self.make_msg('event_list', user_id=user_id,
limit=limit, marker=marker,
sort_keys=sort_keys,
sort_dir=sort_dir,
filters=filters,
start_time=start_time,
end_time=end_time,
project_safe=project_safe,
show_deleted=show_deleted))
def validate_creation(self, cnxt, resources):
return self.call(cnxt, self.make_msg('validate_creation',
resources=resources))
# policies
def policy_list(self, ctxt, limit=None, marker=None, sort_keys=None,
sort_dir=None, filters=None, show_deleted=False):
return self.call(ctxt, self.make_msg('policy_list', limit=limit,
marker=marker,
sort_keys=sort_keys,
sort_dir=sort_dir,
filters=filters,
show_deleted=show_deleted))
def policy_get(self, ctxt, policy_id):
return self.call(ctxt, self.make_msg('policy_get',
policy_id=policy_id))
def policy_create(self, ctxt, name, rules=None, metadata=None):
return self.call(ctxt, self.make_msg('policy_create',
name=name,
rule_ids=rules,
metadata=metadata))
def policy_update(self, ctxt, policy_id, name=None, metadata=None,
is_default=None):
return self.call(ctxt, self.make_msg('policy_update',
policy_id=policy_id,
name=name,
metadata=metadata,
is_default=is_default))
def policy_delete(self, ctxt, policy_id):
return self.call(ctxt, self.make_msg('policy_delete',
policy_id=policy_id))
def policy_add_rules(self, ctxt, policy_id, rules):
return self.call(ctxt, self.make_msg('policy_add_rules',
policy_id=policy_id,
rules=rules))
def settle_account(self, ctxt, user_id, task=None):
return self.call(ctxt, self.make_msg('settle_account',
user_id=user_id,
task=task))
# consumptions
def consumption_list(self, ctxt, user_id=None, limit=None, marker=None,
sort_keys=None, sort_dir=None, filters=None,
project_safe=True):
return self.call(ctxt, self.make_msg('consumption_list',
user_id=user_id,
limit=limit, marker=marker,
sort_keys=sort_keys,
sort_dir=sort_dir,
filters=filters,
project_safe=project_safe))
def consumption_statistics(self, ctxt, user_id=None, filters=None,
start_time=None, end_time=None, summary=False,
project_safe=True):
return self.call(ctxt, self.make_msg('consumption_statistics',
user_id=user_id,
filters=filters,
start_time=start_time,
end_time=end_time,
summary=summary,
project_safe=project_safe))


@ -1,51 +0,0 @@
#
# Copyright 2012, Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from bilean.common import consts
from bilean.common import messaging
from oslo_context import context as oslo_context
import oslo_messaging
supported_actions = (
UPDATE_JOBS, DELETE_JOBS,
) = (
'update_jobs', 'delete_jobs',
)
def notify(method, scheduler_id=None, **kwargs):
    '''Send notification to scheduler.

    :param method: remote method to call
    :param scheduler_id: specify scheduler to notify; None implies broadcast
    '''
if scheduler_id:
# Notify specific scheduler identified by scheduler_id
client = messaging.get_rpc_client(
topic=consts.SCHEDULER_TOPIC,
server=scheduler_id,
version=consts.RPC_API_VERSION)
else:
# Broadcast to all schedulers
client = messaging.get_rpc_client(
topic=consts.SCHEDULER_TOPIC,
version=consts.RPC_API_VERSION)
try:
client.call(oslo_context.get_current(), method, **kwargs)
return True
except oslo_messaging.MessagingTimeout:
return False


@ -1,263 +0,0 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from bilean.common import context as bilean_context
from bilean.common import exception
from bilean.common.i18n import _, _LI, _LW
from bilean.common import utils
from bilean.db import api as db_api
from bilean.engine import user as user_mod
from bilean.rpc import client as rpc_client
from oslo_config import cfg
from oslo_log import log as logging
from oslo_utils import timeutils
from apscheduler.schedulers import background
from datetime import timedelta
import random
import six
scheduler_opts = [
cfg.StrOpt('time_zone',
default='utc',
               help=_('The time zone of jobs, default is utc')),
cfg.IntOpt('prior_notify_time',
default=3,
help=_('Time in hours before notify user when the balance of '
'user is almost used up.')),
cfg.IntOpt('misfire_grace_time',
default=3600,
help=_('Seconds after the designated run time that the job is '
'still allowed to be run.')),
cfg.BoolOpt('store_ap_job',
default=False,
help=_('Allow bilean to store apscheduler job.')),
cfg.StrOpt('backend',
default='sqlalchemy',
help='The backend to use for db'),
cfg.StrOpt('connection',
help='The SQLAlchemy connection string used to connect to the '
'database')
]
scheduler_group = cfg.OptGroup('scheduler')
cfg.CONF.register_group(scheduler_group)
cfg.CONF.register_opts(scheduler_opts, group=scheduler_group)
LOG = logging.getLogger(__name__)
class CronScheduler(object):
"""Cron scheduler based on apscheduler"""
job_types = (
NOTIFY, DAILY, FREEZE,
) = (
'notify', 'daily', 'freeze',
)
trigger_types = (DATE, CRON) = ('date', 'cron')
def __init__(self, **kwargs):
super(CronScheduler, self).__init__()
self._scheduler = background.BackgroundScheduler()
self.scheduler_id = kwargs.get('scheduler_id')
self.rpc_client = rpc_client.EngineClient()
if cfg.CONF.scheduler.store_ap_job:
self._scheduler.add_jobstore(cfg.CONF.scheduler.backend,
url=cfg.CONF.scheduler.connection)
def start(self):
LOG.info(_('Starting Cron scheduler'))
self._scheduler.start()
def stop(self):
LOG.info(_('Stopping Cron scheduler'))
self._scheduler.shutdown()
def init_scheduler(self):
"""Init all jobs related to the engine from db."""
admin_context = bilean_context.get_admin_context()
        jobs = db_api.job_get_all(admin_context,
                                  scheduler_id=self.scheduler_id) or []
for job in jobs:
if self._is_exist(job.id):
continue
LOG.info(_LI("Add job '%(job_id)s' to scheduler '%(id)s'."),
{'job_id': job.id, 'id': self.scheduler_id})
self._add_job(job.id, job.job_type, **job.parameters)
LOG.info(_LI("Initialise users from keystone."))
users = user_mod.User.init_users(admin_context)
# Init daily job for all users
if users:
for user in users:
job_id = self._generate_job_id(user.id, self.DAILY)
if self._is_exist(job_id):
continue
self._add_daily_job(user)
def _add_job(self, job_id, task_type, **kwargs):
"""Add a job to scheduler by given data.
:param str|unicode user_id: used as job_id
:param datetime alarm_time: when to first run the job
"""
mg_time = cfg.CONF.scheduler.misfire_grace_time
job_time_zone = cfg.CONF.scheduler.time_zone
        # Job ids are '<job_type>-<user_id>'; user ids may contain hyphens,
        # so split only on the first one.
        user_id = job_id.split('-', 1)[1]
trigger_type = self.CRON if task_type == self.DAILY else self.DATE
if trigger_type == self.DATE:
run_date = kwargs.get('run_date')
if run_date is None:
msg = "Param run_date cannot be None for trigger type 'date'."
raise exception.InvalidInput(reason=msg)
self._scheduler.add_job(self._task, 'date',
timezone=job_time_zone,
run_date=run_date,
args=[user_id, task_type],
id=job_id,
misfire_grace_time=mg_time)
return
# Add a cron type job
hour = kwargs.get('hour')
minute = kwargs.get('minute')
        if hour is None or minute is None:
hour, minute = self._generate_timer()
self._scheduler.add_job(self._task, 'cron',
timezone=job_time_zone,
hour=hour,
minute=minute,
args=[user_id, task_type],
id=job_id,
misfire_grace_time=mg_time)
def _remove_job(self, job_id):
"""Removes a job, preventing it from being run any more.
:param str|unicode job_id: the identifier of the job
"""
self._scheduler.remove_job(job_id)
def _is_exist(self, job_id):
"""Returns if the Job exists that matches the given ``job_id``.
:param str|unicode job_id: the identifier of the job
:return: True|False
"""
job = self._scheduler.get_job(job_id)
return job is not None
def _task(self, user_id, task_type):
admin_context = bilean_context.get_admin_context()
self.rpc_client.settle_account(
admin_context, user_id, task=task_type)
if task_type != self.DAILY:
try:
db_api.job_delete(
admin_context, self._generate_job_id(user_id, task_type))
except exception.NotFound as e:
LOG.warn(_LW("Failed in deleting job: %s") % six.text_type(e))
def _add_notify_job(self, user):
if user.rate == 0:
return False
total_seconds = float(user.balance / user.rate)
prior_notify_time = cfg.CONF.scheduler.prior_notify_time * 3600
notify_seconds = total_seconds - prior_notify_time
notify_seconds = notify_seconds if notify_seconds > 0 else 0
run_date = timeutils.utcnow() + timedelta(seconds=notify_seconds)
job_params = {'run_date': run_date}
job_id = self._generate_job_id(user.id, self.NOTIFY)
self._add_job(job_id, self.NOTIFY, **job_params)
# Save jobs to database
job = {'id': job_id,
'job_type': self.NOTIFY,
'scheduler_id': self.scheduler_id,
'parameters': {'run_date': utils.format_time(run_date)}}
admin_context = bilean_context.get_admin_context()
db_api.job_create(admin_context, job)
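The run date above is derived from how long the balance lasts at the current rate, minus the configured `prior_notify_time` head start, clamped at zero. A sketch of that arithmetic with illustrative values:

```python
from datetime import datetime, timedelta


def notify_run_date(now, balance, rate, prior_notify_hours=3):
    """When to warn a user: shortly before the balance runs out."""
    total_seconds = float(balance) / rate
    notify_seconds = max(total_seconds - prior_notify_hours * 3600, 0)
    return now + timedelta(seconds=notify_seconds)


now = datetime(2016, 1, 1)
# Balance covers 4 hours at this rate; warning 3 hours early -> 1 hour from now.
run = notify_run_date(now, balance=14400, rate=1.0)
```

Clamping at zero means an already nearly exhausted balance triggers the notification immediately rather than scheduling it in the past.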
def _add_freeze_job(self, user):
if user.rate == 0:
return False
total_seconds = float(user.balance / user.rate)
run_date = timeutils.utcnow() + timedelta(seconds=total_seconds)
job_params = {'run_date': run_date}
job_id = self._generate_job_id(user.id, self.FREEZE)
self._add_job(job_id, self.FREEZE, **job_params)
# Save jobs to database
job = {'id': job_id,
'job_type': self.FREEZE,
'scheduler_id': self.scheduler_id,
'parameters': {'run_date': utils.format_time(run_date)}}
admin_context = bilean_context.get_admin_context()
db_api.job_create(admin_context, job)
return True
def _add_daily_job(self, user):
job_id = self._generate_job_id(user.id, self.DAILY)
job_params = {'hour': random.randint(0, 23),
'minute': random.randint(0, 59)}
self._add_job(job_id, self.DAILY, **job_params)
return True
def _generate_timer(self):
"""Generate a random timer include hour and minute."""
hour = random.randint(0, 23)
minute = random.randint(0, 59)
return hour, minute
def _generate_job_id(self, user_id, job_type):
"""Generate job id by given user_id and job type"""
return "%s-%s" % (job_type, user_id)
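Job ids take the form `<job_type>-<user_id>`. Since Keystone user ids can themselves contain hyphens (UUIDs do), parsing the user id back out needs a single split rather than a plain `split('-')`; a sketch:

```python
def generate_job_id(user_id, job_type):
    """Build a job id of the form '<job_type>-<user_id>'."""
    return "%s-%s" % (job_type, user_id)


def parse_job_id(job_id):
    """Split a job id back into (job_type, user_id), keeping hyphens intact."""
    job_type, user_id = job_id.split('-', 1)
    return job_type, user_id


jid = generate_job_id('3f7a-22bc', 'daily')
```

The `maxsplit=1` argument stops splitting after the job-type prefix, so the remainder of the string is returned as the whole user id.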
def update_jobs(self, user):
"""Update user's billing job"""
# Delete all jobs except daily job
admin_context = bilean_context.get_admin_context()
for job_type in self.NOTIFY, self.FREEZE:
job_id = self._generate_job_id(user.id, job_type)
try:
if self._is_exist(job_id):
self._remove_job(job_id)
db_api.job_delete(admin_context, job_id)
except Exception as e:
LOG.warn(_LW("Failed in deleting job: %s") % six.text_type(e))
if user.status == user.ACTIVE:
self._add_notify_job(user)
elif user.status == user.WARNING:
self._add_freeze_job(user)
def delete_jobs(self, user):
"""Delete all jobs related the specific user."""
admin_context = bilean_context.get_admin_context()
for job_type in self.job_types:
job_id = self._generate_job_id(user.id, job_type)
try:
if self._is_exist(job_id):
self._remove_job(job_id)
db_api.job_delete(admin_context, job_id)
except Exception as e:
LOG.warn(_LW("Failed in deleting job: %s") % six.text_type(e))
def list_opts():
yield scheduler_group.name, scheduler_opts


@ -1,85 +0,0 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import six
import socket
from oslo_log import log as logging
import oslo_messaging
from oslo_service import service
from bilean.common import consts
from bilean.common.i18n import _LE, _LI
from bilean.common import messaging as rpc_messaging
from bilean.engine import user as user_mod
from bilean.scheduler import cron_scheduler
LOG = logging.getLogger(__name__)
class SchedulerService(service.Service):
def __init__(self, host, topic, manager=None, context=None):
super(SchedulerService, self).__init__()
self.host = host
self.topic = topic
self.scheduler_id = None
self.scheduler = None
self.target = None
self._rpc_server = None
def start(self):
self.scheduler_id = socket.gethostname()
self.scheduler = cron_scheduler.CronScheduler(
scheduler_id=self.scheduler_id)
LOG.info(_LI("Starting billing scheduler"))
self.scheduler.init_scheduler()
self.scheduler.start()
LOG.info(_LI("Starting rpc server for bilean scheduler service"))
self.target = oslo_messaging.Target(version=consts.RPC_API_VERSION,
server=self.scheduler_id,
topic=self.topic)
self._rpc_server = rpc_messaging.get_rpc_server(self.target, self)
self._rpc_server.start()
super(SchedulerService, self).start()
def _stop_rpc_server(self):
# Stop RPC connection to prevent new requests
LOG.info(_LI("Stopping scheduler service..."))
try:
self._rpc_server.stop()
self._rpc_server.wait()
LOG.info(_LI('Scheduler service stopped successfully'))
except Exception as ex:
LOG.error(_LE('Failed to stop scheduler service: %s'),
six.text_type(ex))
def stop(self):
self._stop_rpc_server()
LOG.info(_LI("Stopping billing scheduler"))
self.scheduler.stop()
super(SchedulerService, self).stop()
def update_jobs(self, ctxt, user):
user_obj = user_mod.User.from_dict(user)
self.scheduler.update_jobs(user_obj)
def delete_jobs(self, ctxt, user):
user_obj = user_mod.User.from_dict(user)
self.scheduler.delete_jobs(user_obj)

Some files were not shown because too many files have changed in this diff.