First opensource commit
Commit 27abb148bd
.gitignore (vendored): new file, 10 lines

.idea
.tox
*.iml
*.pyc
*.coverage
*.egg*
.vagrant
.DS_Store
AUTHORS
ChangeLog
.travis.yml: new file, 14 lines

language: python
python:
- '2.7'
install:
- pip install tox
script: tox -r
deploy:
  provider: pypi
  user: internaphosting
  on:
    tags: true
    repo: internap/almanach
  password:
    secure: l8iby1dwEHsWl4Utas393CncC7dpVJeM9XUK8ruexdTXIkrOKfYmIQCYmbAeucD3AVoJj1YKHCMeyS9X48aV6u6X0J1lMze7DiDvGu0/mIGIRlW8vkX9oLzWY5U6KA5u4P7ENLBp4I7o3evob+4f1SW0XUjThOTpavRTPh4NcQ+tgTqOY6P+RKfdxXXeSlWgIQeYCyfvT50gKkf3M+VOryKl8ZeW4mBkstI3+MZQo2PT4xOhBjUHw0i/Exff3+dnQCZTYRGqN0UQAn1aqOxgtZ+PwxwDCRWMoSdmbJjUNrvCmnH/fKkpuQsax946PPOkfGvc8khE6fEZ/fER60AVHhbooNsSr8aOIXBeLxVAvdHOO53/QB5JRcHauTSeegBpThWtZ2tdJxeHyv8/07uEE8VdIQWMbqdA7wDEWUeYrjZ0jKC3pYjtIV4ztgC2U/DKL14OOK3NUzyQkCAeYgB5nefjBR18uasjyss/R7s6YUwP8EVGrZqjWRq42nlPSsD54TzI+9svcFpLS8uwWAX5+TVZaUZWA1YDfOFbp9B3NbPhr0af8cpwdqGVx+AI/EtWye2bCVht1RKiHYOEHBz8iZP5aE0vZt7XNz4DEVhvArWgZBhUOmRDz5HbBpx+3th+cmWC3VbvaSFqE1Cm0yZXfWlTFteYbDi3LBPDTdk3rF8=
LICENSE: new file, 176 lines

Apache License, Version 2.0, January 2004 (http://www.apache.org/licenses/). The file contains the standard "Terms and Conditions for Use, Reproduction, and Distribution", sections 1 (Definitions) through 9 (Accepting Warranty or Additional Liability).
README.md: new file, 17 lines

Almanach
========

[![Build Status](https://travis-ci.org/internap/almanach.svg?branch=master)](https://travis-ci.org/internap/almanach)
[![PyPI version](https://badge.fury.io/py/almanach.svg)](https://badge.fury.io/py/almanach)

Almanach stores the utilization of OpenStack resources (instances and volumes) for each tenant.

What is Almanach?
-----------------

The main purpose of this software is to bill customers based on their usage of the cloud infrastructure.

Almanach is composed of two parts:

- **Collector**: listens for OpenStack events and stores the relevant information in the database.
- **REST API**: exposes the collected information to external systems.
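A quick way to see the two parts working together is to query the REST API once the collector has stored some data. The sketch below is not part of this commit; it uses only the Python 2 standard library, and the host, port, token and project id are placeholder assumptions. The endpoint, the X-Auth-Token header and the start/end query parameters come from almanach/adapters/api_route_v1.py further down.

    import json
    import urllib
    import urllib2

    # Placeholders: adjust to your deployment.
    base_url = "http://localhost:8000"
    auth_token = "secret"            # must match the ALMANACH/auth_token config option
    project_id = "my-tenant-id"

    # get_period() parses these with "%Y-%m-%d %H:%M:%S.%f"; "end" is optional.
    params = urllib.urlencode({"start": "2016-01-01 00:00:00.000000"})
    request = urllib2.Request(
        "{0}/project/{1}/instances?{2}".format(base_url, project_id, params),
        headers={"X-Auth-Token": auth_token})
    print json.loads(urllib2.urlopen(request).read())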
almanach/__init__.py: new empty file

almanach/adapters/__init__.py: new empty file
almanach/adapters/api_route_v1.py: new file, 301 lines

# Copyright 2016 Internap.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import logging
import json
import jsonpickle

from datetime import datetime
from functools import wraps
from flask import Blueprint, Response, request
from werkzeug.wrappers import BaseResponse

from almanach import config
from almanach.common.DateFormatException import DateFormatException

api = Blueprint("api", __name__)
controller = None


def to_json(api_call):
    def encode(data):
        return jsonpickle.encode(data, unpicklable=False)

    @wraps(api_call)
    def decorator(*args, **kwargs):
        try:
            result = api_call(*args, **kwargs)
            return result if isinstance(result, BaseResponse) \
                else Response(encode(result), 200, {"Content-Type": "application/json"})
        except DateFormatException as e:
            logging.warning(e.message)
            return Response(encode({"error": e.message}), 400, {"Content-Type": "application/json"})
        except KeyError as e:
            message = "The '{param}' param is mandatory for the request you have made.".format(param=e.message)
            logging.warning(message)
            return encode({"error": message}), 400, {"Content-Type": "application/json"}
        except TypeError:
            message = "The request you have made must have data. None was given."
            logging.warning(message)
            return encode({"error": message}), 400, {"Content-Type": "application/json"}
        except Exception as e:
            logging.exception(e)
            return Response(encode({"error": e.message}), 500, {"Content-Type": "application/json"})
    return decorator


def authenticated(api_call):
    @wraps(api_call)
    def decorator(*args, **kwargs):
        auth_token = request.headers.get('X-Auth-Token')
        if auth_token == config.api_auth_token():
            return api_call(*args, **kwargs)
        else:
            return Response('Unauthorized', 401)

    return decorator


@api.route("/info", methods=["GET"])
@to_json
def get_info():
    logging.info("Get application info")
    return controller.get_application_info()


@api.route("/project/<project_id>/instance", methods=["POST"])
@authenticated
@to_json
def create_instance(project_id):
    instance = json.loads(request.data)
    logging.info("Creating instance for tenant %s with data %s", project_id, instance)
    controller.create_instance(
        tenant_id=project_id,
        instance_id=instance['id'],
        create_date=instance['created_at'],
        flavor=instance['flavor'],
        os_type=instance['os_type'],
        distro=instance['os_distro'],
        version=instance['os_version'],
        name=instance['name'],
        metadata={}
    )

    return Response(status=201)


@api.route("/instance/<instance_id>", methods=["DELETE"])
@authenticated
@to_json
def delete_instance(instance_id):
    data = json.loads(request.data)
    logging.info("Deleting instance with id %s with data %s", instance_id, data)
    controller.delete_instance(
        instance_id=instance_id,
        delete_date=data['date']
    )

    return Response(status=202)


@api.route("/instance/<instance_id>/resize", methods=["PUT"])
@authenticated
@to_json
def resize_instance(instance_id):
    instance = json.loads(request.data)
    logging.info("Resizing instance with id %s with data %s", instance_id, instance)
    controller.resize_instance(
        instance_id=instance_id,
        resize_date=instance['date'],
        flavor=instance['flavor']
    )

    return Response(status=200)


@api.route("/instance/<instance_id>/rebuild", methods=["PUT"])
@authenticated
@to_json
def rebuild_instance(instance_id):
    instance = json.loads(request.data)
    logging.info("Rebuilding instance with id %s with data %s", instance_id, instance)
    controller.rebuild_instance(
        instance_id=instance_id,
        distro=instance['distro'],
        version=instance['version'],
        rebuild_date=instance['rebuild_date'],
    )

    return Response(status=200)


@api.route("/project/<project_id>/instances", methods=["GET"])
@authenticated
@to_json
def list_instances(project_id):
    start, end = get_period()
    logging.info("Listing instances between %s and %s", start, end)
    return controller.list_instances(project_id, start, end)


@api.route("/project/<project_id>/volume", methods=["POST"])
@authenticated
@to_json
def create_volume(project_id):
    volume = json.loads(request.data)
    logging.info("Creating volume for tenant %s with data %s", project_id, volume)
    controller.create_volume(
        project_id=project_id,
        volume_id=volume['volume_id'],
        start=volume['start'],
        volume_type=volume['volume_type'],
        size=volume['size'],
        volume_name=volume['volume_name'],
        attached_to=volume['attached_to']
    )

    return Response(status=201)


@api.route("/volume/<volume_id>", methods=["DELETE"])
@authenticated
@to_json
def delete_volume(volume_id):
    data = json.loads(request.data)
    logging.info("Deleting volume with id %s with data %s", volume_id, data)
    controller.delete_volume(
        volume_id=volume_id,
        delete_date=data['date']
    )

    return Response(status=202)


@api.route("/volume/<volume_id>/resize", methods=["PUT"])
@authenticated
@to_json
def resize_volume(volume_id):
    volume = json.loads(request.data)
    logging.info("Resizing volume with id %s with data %s", volume_id, volume)
    controller.resize_volume(
        volume_id=volume_id,
        size=volume['size'],
        update_date=volume['date']
    )

    return Response(status=200)


@api.route("/volume/<volume_id>/attach", methods=["PUT"])
@authenticated
@to_json
def attach_volume(volume_id):
    volume = json.loads(request.data)
    logging.info("Attaching volume with id %s with data %s", volume_id, volume)
    controller.attach_volume(
        volume_id=volume_id,
        date=volume['date'],
        attachments=volume['attachments']
    )

    return Response(status=200)


@api.route("/volume/<volume_id>/detach", methods=["PUT"])
@authenticated
@to_json
def detach_volume(volume_id):
    volume = json.loads(request.data)
    logging.info("Detaching volume with id %s with data %s", volume_id, volume)
    controller.detach_volume(
        volume_id=volume_id,
        date=volume['date'],
        attachments=volume['attachments']
    )

    return Response(status=200)


@api.route("/project/<project_id>/volumes", methods=["GET"])
@authenticated
@to_json
def list_volumes(project_id):
    start, end = get_period()
    logging.info("Listing volumes between %s and %s", start, end)
    return controller.list_volumes(project_id, start, end)


@api.route("/project/<project_id>/entities", methods=["GET"])
@authenticated
@to_json
def list_entity(project_id):
    start, end = get_period()
    logging.info("Listing entities between %s and %s", start, end)
    return controller.list_entities(project_id, start, end)


# Temporary for AgileV1 migration
@api.route("/instance/<instance_id>/create_date/<create_date>", methods=["PUT"])
@authenticated
@to_json
def update_instance_create_date(instance_id, create_date):
    logging.info("Update create date for instance %s to %s", instance_id, create_date)
    return controller.update_instance_create_date(instance_id, create_date)


@api.route("/volume_types", methods=["GET"])
@authenticated
@to_json
def list_volume_types():
    logging.info("Listing volumes types")
    return controller.list_volume_types()


@api.route("/volume_type/<type_id>", methods=["GET"])
@authenticated
@to_json
def get_volume_type(type_id):
    logging.info("Get volumes type for id %s", type_id)
    return controller.get_volume_type(type_id)


@api.route("/volume_type", methods=["POST"])
@authenticated
@to_json
def create_volume_type():
    volume_type = json.loads(request.data)
    logging.info("Creating volume type with data '%s'", volume_type)
    controller.create_volume_type(
        volume_type_id=volume_type['type_id'],
        volume_type_name=volume_type['type_name']
    )
    return Response(status=201)


@api.route("/volume_type/<type_id>", methods=["DELETE"])
@authenticated
@to_json
def delete_volume_type(type_id):
    logging.info("Deleting volume type with id '%s'", type_id)
    controller.delete_volume_type(type_id)
    return Response(status=202)


def get_period():
    start = datetime.strptime(request.args["start"], "%Y-%m-%d %H:%M:%S.%f")
    if "end" not in request.args:
        end = datetime.now()
    else:
        end = datetime.strptime(request.args["end"], "%Y-%m-%d %H:%M:%S.%f")
    return start, end
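The write endpoints above expect a JSON body whose keys match what the route reads before delegating to the controller. The sketch below is not part of the commit: the field names are exactly the ones create_instance() reads, while the URL, token, UUID and other values are placeholder assumptions; the date follows the format documented in DateFormatException.

    import json
    import urllib2

    payload = {
        "id": "8f2b72c8-0000-4db6-0000-aaa000bbb111",   # hypothetical instance UUID
        "created_at": "2016-01-01T08:30:00.000Z",
        "flavor": "A1.1",
        "os_type": "linux",
        "os_distro": "ubuntu",
        "os_version": "14.04",
        "name": "web-01",
    }
    request = urllib2.Request(
        "http://localhost:8000/project/my-tenant-id/instance",
        data=json.dumps(payload),
        headers={"X-Auth-Token": "secret", "Content-Type": "application/json"})
    print urllib2.urlopen(request).getcode()   # expect 201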
almanach/adapters/bus_adapter.py: new file, 187 lines

# Copyright 2016 Internap.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import json
import logging
import kombu

from kombu.mixins import ConsumerMixin
from almanach import config


class BusAdapter(ConsumerMixin):

    def __init__(self, controller, connection, retry_adapter):
        super(BusAdapter, self).__init__()
        self.controller = controller
        self.connection = connection
        self.retry_adapter = retry_adapter

    def on_message(self, notification, message):
        try:
            self._process_notification(notification)
        except Exception as e:
            logging.warning("Sending notification to retry letter exchange {0}".format(json.dumps(notification)))
            logging.exception(e.message)
            self.retry_adapter.publish_to_dead_letter(message)
        message.ack()

    def _process_notification(self, notification):
        if isinstance(notification, basestring):
            notification = json.loads(notification)

        event_type = notification.get("event_type")
        logging.info(event_type)

        if event_type == "compute.instance.create.end":
            self._instance_created(notification)
        elif event_type == "compute.instance.delete.end":
            self._instance_deleted(notification)
        elif event_type == "compute.instance.resize.confirm.end":
            self._instance_resized(notification)
        elif event_type == "compute.instance.rebuild.end":
            self._instance_rebuilt(notification)
        elif event_type == "volume.create.end":
            self._volume_created(notification)
        elif event_type == "volume.delete.end":
            self._volume_deleted(notification)
        elif event_type == "volume.resize.end":
            self._volume_resized(notification)
        elif event_type == "volume.attach.end":
            self._volume_attached(notification)
        elif event_type == "volume.detach.end":
            self._volume_detached(notification)
        elif event_type == "volume.update.end":
            self._volume_renamed(notification)
        elif event_type == "volume.exists":
            self._volume_renamed(notification)
        elif event_type == "volume_type.create":
            self._volume_type_create(notification)

    def get_consumers(self, consumer, channel):
        queue = kombu.Queue(config.rabbitmq_queue(), routing_key=config.rabbitmq_routing_key())
        return [consumer(
            [queue],
            callbacks=[self.on_message],
            auto_declare=False)]

    def run(self, _tokens=1):
        try:
            super(BusAdapter, self).run(_tokens)
        except KeyboardInterrupt:
            pass

    def _instance_created(self, notification):
        payload = notification.get("payload")
        project_id = payload.get("tenant_id")
        date = payload.get("created_at")
        instance_id = payload.get("instance_id")
        flavor = payload.get("instance_type")
        os_type = payload.get("image_meta").get("os_type")
        distro = payload.get("image_meta").get("distro")
        version = payload.get("image_meta").get("version")
        name = payload.get("hostname")
        metadata = payload.get("metadata")
        if isinstance(metadata, list):
            metadata = {}
        self.controller.create_instance(
            instance_id,
            project_id,
            date,
            flavor,
            os_type,
            distro,
            version,
            name,
            metadata
        )

    def _instance_deleted(self, notification):
        payload = notification.get("payload")
        date = payload.get("terminated_at")
        instance_id = payload.get("instance_id")
        self.controller.delete_instance(instance_id, date)

    def _instance_resized(self, notification):
        payload = notification.get("payload")
        date = notification.get("timestamp")
        flavor = payload.get("instance_type")
        instance_id = payload.get("instance_id")
        self.controller.resize_instance(instance_id, flavor, date)

    def _volume_created(self, notification):
        payload = notification.get("payload")
        date = payload.get("created_at")
        project_id = payload.get("tenant_id")
        volume_id = payload.get("volume_id")
        volume_name = payload.get("display_name")
        volume_type = payload.get("volume_type")
        volume_size = payload.get("size")
        self.controller.create_volume(volume_id, project_id, date, volume_type, volume_size, volume_name)

    def _volume_deleted(self, notification):
        payload = notification.get("payload")
        volume_id = payload.get("volume_id")
        end_date = notification.get("timestamp")
        self.controller.delete_volume(volume_id, end_date)

    def _volume_renamed(self, notification):
        payload = notification.get("payload")
        volume_id = payload.get("volume_id")
        volume_name = payload.get("display_name")
        self.controller.rename_volume(volume_id, volume_name)

    def _volume_resized(self, notification):
        payload = notification.get("payload")
        date = notification.get("timestamp")
        volume_id = payload.get("volume_id")
        volume_size = payload.get("size")
        self.controller.resize_volume(volume_id, volume_size, date)

    def _volume_attached(self, notification):
        payload = notification.get("payload")
        volume_id = payload.get("volume_id")
        event_date = notification.get("timestamp")
        self.controller.attach_volume(volume_id, event_date, self._get_attached_instances(payload))

    def _volume_detached(self, notification):
        payload = notification.get("payload")
        volume_id = payload.get("volume_id")
        event_date = notification.get("timestamp")
        self.controller.detach_volume(volume_id, event_date, self._get_attached_instances(payload))

    @staticmethod
    def _get_attached_instances(payload):
        instances_ids = []
        if "volume_attachment" in payload:
            for instance in payload["volume_attachment"]:
                instances_ids.append(instance.get("instance_uuid"))
        elif payload.get("instance_uuid") is not None:
            instances_ids.append(payload.get("instance_uuid"))

        return instances_ids

    def _instance_rebuilt(self, notification):
        payload = notification.get("payload")
        date = notification.get("timestamp")
        instance_id = payload.get("instance_id")
        distro = payload.get("image_meta").get("distro")
        version = payload.get("image_meta").get("version")
        self.controller.rebuild_instance(instance_id, distro, version, date)

    def _volume_type_create(self, notification):
        volume_types = notification.get("payload").get("volume_types")
        volume_type_id = volume_types.get("id")
        volume_type_name = volume_types.get("name")
        self.controller.create_volume_type(volume_type_id, volume_type_name)
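The collector never polls OpenStack; it only reacts to notifications pulled from RabbitMQ. A pared-down example of what a compute.instance.create.end notification might look like is sketched below, with hypothetical values; the keys are the ones _instance_created() reads from the payload.

    # Hypothetical notification body, for illustration only.
    notification = {
        "event_type": "compute.instance.create.end",
        "timestamp": "2016-01-01 08:30:00.000000",
        "payload": {
            "tenant_id": "my-tenant-id",
            "instance_id": "8f2b72c8-0000-4db6-0000-aaa000bbb111",
            "created_at": "2016-01-01T08:30:00.000Z",
            "instance_type": "A1.1",
            "image_meta": {"os_type": "linux", "distro": "ubuntu", "version": "14.04"},
            "hostname": "web-01",
            "metadata": {},
        },
    }
    # Fed to BusAdapter._process_notification(), this dict would be dispatched to
    # controller.create_instance(...) based on its event_type.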
almanach/adapters/database_adapter.py: new file, 149 lines

# Copyright 2016 Internap.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import logging

import pymongo

from pymongo.errors import ConfigurationError
from almanach import config
from almanach.common.AlmanachException import AlmanachException
from almanach.common.VolumeTypeNotFoundException import VolumeTypeNotFoundException
from almanach.core.model import build_entity_from_dict, VolumeType
from pymongomodem.utils import decode_output, encode_input


def database(function):
    def _connection(self, *args, **kwargs):
        try:
            if not self.db:
                connection = pymongo.MongoClient(config.mongodb_url(), tz_aware=True)
                self.db = connection[config.mongodb_database()]
                ensureindex(self.db)
            return function(self, *args, **kwargs)
        except KeyError as e:
            raise e
        except VolumeTypeNotFoundException as e:
            raise e
        except NotImplementedError as e:
            raise e
        except ConfigurationError as e:
            logging.exception("DB Connection, make sure username and password doesn't contain the following :+&/ "
                              "character")
            raise e
        except Exception as e:
            logging.exception(e)
            raise e

    return _connection


def ensureindex(db):
    db.entity.ensure_index(
        [(index, pymongo.ASCENDING)
         for index in config.mongodb_indexes()])


class DatabaseAdapter(object):
    def __init__(self):
        self.db = None

    @database
    def get_active_entity(self, entity_id):
        entity = self._get_one_entity_from_db({"entity_id": entity_id, "end": None})
        if not entity:
            raise KeyError("Unable to find entity id %s" % entity_id)
        return build_entity_from_dict(entity)

    @database
    def count_entities(self):
        return self.db.entity.count()

    @database
    def count_active_entities(self):
        return self.db.entity.find({"end": None}).count()

    @database
    def count_entity_entries(self, entity_id):
        return self.db.entity.find({"entity_id": entity_id}).count()

    @database
    def list_entities(self, project_id, start, end, entity_type=None):
        args = {"project_id": project_id, "start": {"$lte": end}, "$or": [{"end": None}, {"end": {"$gte": start}}]}
        if entity_type:
            args["entity_type"] = entity_type
        entities = self._get_entities_from_db(args)
        return [build_entity_from_dict(entity) for entity in entities]

    @database
    def insert_entity(self, entity):
        self._insert_entity(entity.as_dict())

    @database
    def insert_volume_type(self, volume_type):
        self.db.volume_type.insert(volume_type.__dict__)

    @database
    def get_volume_type(self, volume_type_id):
        volume_type = self.db.volume_type.find_one({"volume_type_id": volume_type_id})
        if not volume_type:
            logging.error("Trying to get a volume type not in the database.")
            raise VolumeTypeNotFoundException(volume_type_id=volume_type_id)

        return VolumeType(volume_type_id=volume_type["volume_type_id"],
                          volume_type_name=volume_type["volume_type_name"])

    @database
    def delete_volume_type(self, volume_type_id):
        if volume_type_id is None:
            error = "Trying to delete all volume types which is not permitted."
            logging.error(error)
            raise AlmanachException(error)
        returned_value = self.db.volume_type.remove({"volume_type_id": volume_type_id})
        if returned_value['n'] == 1:
            logging.info("Deleted volume type with id '%s' successfully." % volume_type_id)
        else:
            error = "Volume type with id '%s' doesn't exist in the database." % volume_type_id
            logging.error(error)
            raise AlmanachException(error)

    @database
    def list_volume_types(self):
        volume_types = self.db.volume_type.find()
        return [VolumeType(volume_type_id=volume_type["volume_type_id"],
                           volume_type_name=volume_type["volume_type_name"]) for volume_type in volume_types]

    @database
    def close_active_entity(self, entity_id, end):
        self.db.entity.update({"entity_id": entity_id, "end": None}, {"$set": {"end": end, "last_event": end}})

    @database
    def update_active_entity(self, entity):
        self.db.entity.update({"entity_id": entity.entity_id, "end": None}, {"$set": entity.as_dict()})

    @database
    def delete_active_entity(self, entity_id):
        self.db.entity.remove({"entity_id": entity_id, "end": None})

    @encode_input
    def _insert_entity(self, entity):
        self.db.entity.insert(entity)

    @decode_output
    def _get_entities_from_db(self, args):
        return list(self.db.entity.find(args, {"_id": 0}))

    @decode_output
    def _get_one_entity_from_db(self, args):
        return self.db.entity.find_one(args, {"_id": 0})
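The storage convention is that each billing period of an instance or volume is one document in the entity collection, and an open period simply has "end" set to None; that is what get_active_entity() and count_active_entities() rely on. A minimal usage sketch, assuming a reachable MongoDB and a configuration file at a hypothetical path:

    from almanach import config
    from almanach.adapters.database_adapter import DatabaseAdapter

    config.read(["config_file=/etc/almanach/almanach.cfg"])   # hypothetical path

    adapter = DatabaseAdapter()            # the @database decorator connects lazily on first use
    print adapter.count_entities()         # all periods, open and closed
    print adapter.count_active_entities()  # documents whose "end" field is None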
119
almanach/adapters/retry_adapter.py
Normal file
119
almanach/adapters/retry_adapter.py
Normal file
@ -0,0 +1,119 @@
|
||||
# Copyright 2016 Internap.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
import json
|
||||
import logging
|
||||
from kombu import Exchange, Queue, Producer
|
||||
from almanach import config
|
||||
|
||||
|
||||
class RetryAdapter:
|
||||
def __init__(self, connection):
|
||||
self.connection = connection
|
||||
retry_exchange = self._configure_retry_exchanges(self.connection)
|
||||
dead_exchange = self._configure_dead_exchange(self.connection)
|
||||
self._retry_producer = Producer(self.connection, exchange=retry_exchange)
|
||||
self._dead_producer = Producer(self.connection, exchange=dead_exchange)
|
||||
|
||||
def publish_to_dead_letter(self, message):
|
||||
death_count = self._rejected_count(message)
|
||||
logging.info("Message has been dead {0} times".format(death_count))
|
||||
if death_count < config.rabbitmq_retry():
|
||||
logging.info("Publishing to retry queue")
|
||||
self._publish_message(self._retry_producer, message)
|
||||
logging.info("Published to retry queue")
|
||||
else:
|
||||
logging.info("Publishing to dead letter queue")
|
||||
self._publish_message(self._dead_producer, message)
|
||||
logging.info("Publishing notification to dead letter queue: {0}".format(json.dumps(message.body)))
|
||||
|
||||
def _configure_retry_exchanges(self, connection):
|
||||
def declare_queues():
|
||||
channel = connection.channel()
|
||||
almanach_exchange = Exchange(name=config.rabbitmq_retry_return_exchange(),
|
||||
type='direct',
|
||||
channel=channel)
|
||||
retry_exchange = Exchange(name=config.rabbitmq_retry_exchange(),
|
||||
type='direct',
|
||||
channel=channel)
|
||||
retry_queue = Queue(name=config.rabbitmq_retry_queue(),
|
||||
exchange=retry_exchange,
|
||||
routing_key=config.rabbitmq_routing_key(),
|
||||
queue_arguments=self._get_queue_arguments(),
|
||||
channel=channel)
|
||||
almanach_queue = Queue(name=config.rabbitmq_queue(),
|
||||
exchange=almanach_exchange,
|
||||
durable=False,
|
||||
routing_key=config.rabbitmq_routing_key(),
|
||||
channel=channel)
|
||||
|
||||
retry_queue.declare()
|
||||
almanach_queue.declare()
|
||||
|
||||
return retry_exchange
|
||||
|
||||
def error_callback(exception, interval):
|
||||
logging.error('Failed to declare queues and exchanges, retrying in %d seconds. %r' % (interval, exception))
|
||||
|
||||
declare_queues = connection.ensure(connection, declare_queues, errback=error_callback,
|
||||
interval_start=0, interval_step=5, interval_max=30)
|
||||
return declare_queues()
|
||||
|
||||
def _configure_dead_exchange(self, connection):
|
||||
def declare_dead_queue():
|
||||
channel = connection.channel()
|
||||
dead_exchange = Exchange(name=config.rabbitmq_dead_exchange(),
|
||||
type='direct',
|
||||
channel=channel)
|
||||
dead_queue = Queue(name=config.rabbitmq_dead_queue(),
|
||||
routing_key=config.rabbitmq_routing_key(),
|
||||
exchange=dead_exchange,
|
||||
channel=channel)
|
||||
|
||||
dead_queue.declare()
|
||||
|
||||
return dead_exchange
|
||||
|
||||
def error_callback(exception, interval):
|
||||
logging.error('Failed to declare dead queue and exchange, retrying in %d seconds. %r' % (interval, exception))
|
||||
|
||||
declare_dead_queue = connection.ensure(connection, declare_dead_queue, errback=error_callback,
|
||||
interval_start=0, interval_step=5, interval_max=30)
|
||||
return declare_dead_queue()
|
||||
|
||||
def _get_queue_arguments(self):
|
||||
return {"x-message-ttl": self._get_time_to_live_in_seconds(),
|
||||
"x-dead-letter-exchange": config.rabbitmq_retry_return_exchange(),
|
||||
"x-dead-letter-routing-key": config.rabbitmq_routing_key()}
|
||||
|
||||
def _get_time_to_live_in_seconds(self):
|
||||
return config.rabbitmq_time_to_live() * 1000
|
||||
|
||||
def _rejected_count(self, message):
|
||||
if 'x-death' in message.headers:
|
||||
return len(message.headers['x-death'])
|
||||
return 0
|
||||
|
||||
def _publish_message(self, producer, message):
|
||||
publish = self.connection.ensure(producer, producer.publish, errback=self._error_callback,
|
||||
interval_start=0, interval_step=5, interval_max=30)
|
||||
publish(message.body,
|
||||
routing_key=message.delivery_info['routing_key'],
|
||||
headers=message.headers,
|
||||
content_type=message.content_type,
|
||||
content_encoding=message.content_encoding)
|
||||
|
||||
def _error_callback(self, exception, interval):
|
||||
logging.error('Failed to publish message to dead letter queue, retrying in %d seconds. %r'
|
||||
% (interval, exception))
|
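Two RabbitMQ details make this adapter work: the broker appends an entry to the x-death header every time a message is dead-lettered, which is how _rejected_count() knows how many attempts were made, and the x-message-ttl queue argument is expressed in milliseconds, hence the multiplication by 1000 of the configured value. A small sketch, not part of the commit, of the decision publish_to_dead_letter() makes, with hypothetical header and config values:

    # Hypothetical: a message that has already expired twice in the retry queue.
    headers = {"x-death": [{"reason": "expired"}, {"reason": "expired"}]}
    death_count = len(headers.get("x-death", []))   # what _rejected_count() computes
    max_retries = 3                                 # hypothetical RABBITMQ/retry.maximum value
    if death_count < max_retries:
        print "publish to retry queue"              # comes back to the main queue after the TTL
    else:
        print "publish to dead letter queue"        # parked for manual inspection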
almanach/api.py: new file, 51 lines

# Copyright 2016 Internap.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import logging
from flask import Flask
from gunicorn.app.base import Application

from almanach import config
from almanach.adapters import api_route_v1 as api_route
from almanach import log_bootstrap
from almanach.adapters.database_adapter import DatabaseAdapter
from almanach.core.controller import Controller


class AlmanachApi(Application):
    def __init__(self):
        super(AlmanachApi, self).__init__()

    def init(self, parser, opts, args):
        log_bootstrap.configure()
        config.read(args)
        self._controller = Controller(DatabaseAdapter())

    def load(self):
        logging.info("starting flask worker")
        api_route.controller = self._controller

        app = Flask("almanach")
        app.register_blueprint(api_route.api)

        return app


def run():
    almanach_api = AlmanachApi()
    almanach_api.run()


if __name__ == "__main__":
    run()
almanach/collector.py: new file, 49 lines

# Copyright 2016 Internap.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import logging
import sys
from kombu import Connection

from almanach import log_bootstrap
from almanach import config
from almanach.adapters.bus_adapter import BusAdapter
from almanach.adapters.database_adapter import DatabaseAdapter
from almanach.adapters.retry_adapter import RetryAdapter
from almanach.core.controller import Controller


class AlmanachCollector(object):
    def __init__(self):
        log_bootstrap.configure()
        config.read(sys.argv)
        self._controller = Controller(DatabaseAdapter())
        _connection = Connection(config.rabbitmq_url(), heartbeat=540)
        retry_adapter = RetryAdapter(_connection)

        self._busAdapter = BusAdapter(self._controller, _connection, retry_adapter)

    def run(self):
        logging.info("starting bus adapter")
        self._busAdapter.run()
        logging.info("shutting down")


def run():
    almanach_collector = AlmanachCollector()
    almanach_collector.run()


if __name__ == "__main__":
    run()
almanach/common/AlmanachException.py: new file, 16 lines

# Copyright 2016 Internap.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

class AlmanachException(Exception):
    pass
almanach/common/DateFormatException.py: new file, 21 lines

# Copyright 2016 Internap.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

class DateFormatException(Exception):
    def __init__(self, message=None):
        if not message:
            message = "The provided date has an invalid format. Format should be of yyyy-mm-ddThh:mm:ss.msZ, " \
                      "ex: 2015-01-31T18:24:34.1523Z"

        super(DateFormatException, self).__init__(message)
almanach/common/VolumeTypeNotFoundException.py: new file, 20 lines

# Copyright 2016 Internap.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

class VolumeTypeNotFoundException(Exception):
    def __init__(self, volume_type_id, message=None):
        if not message:
            message = "Unable to find volume_type id '{volume_type_id}'".format(volume_type_id=volume_type_id)

        super(VolumeTypeNotFoundException, self).__init__(message)
almanach/common/__init__.py: new file, 1 blank line
almanach/config.py: new file, 115 lines

# Copyright 2016 Internap.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import ConfigParser
import pkg_resources
import os.path as Path

from almanach.common.AlmanachException import AlmanachException

configuration = ConfigParser.RawConfigParser()


def read(args=[], config_file="resources/config/almanach.cfg"):
    filename = pkg_resources.resource_filename("almanach", config_file)

    for param in args:
        if param.startswith("config_file="):
            filename = param.split("=")[-1]
            break

    if not Path.isfile(filename):
        raise AlmanachException("config file '{0}' not found".format(filename))

    print "loading configuration file {0}".format(filename)
    configuration.read(filename)


def get(section, option, default=None):
    try:
        return configuration.get(section, option)
    except:
        return default


def volume_existence_threshold():
    return int(get("ALMANACH", "volume_existence_threshold"))


def api_auth_token():
    return get("ALMANACH", "auth_token")


def device_metadata_whitelist():
    return get("ALMANACH", "device_metadata_whitelist").split(',')


def mongodb_url():
    return get("MONGODB", "url", default=None)


def mongodb_database():
    return get("MONGODB", "database", default="almanach")


def mongodb_indexes():
    return get('MONGODB', 'indexes').split(',')


def rabbitmq_url():
    return get("RABBITMQ", "url", default=None)


def rabbitmq_queue():
    return get("RABBITMQ", "queue", default=None)


def rabbitmq_exchange():
    return get("RABBITMQ", "exchange", default=None)


def rabbitmq_routing_key():
    return get("RABBITMQ", "routing.key", default=None)


def rabbitmq_retry():
    return int(get("RABBITMQ", "retry.maximum", default=None))


def rabbitmq_retry_exchange():
    return get("RABBITMQ", "retry.exchange", default=None)


def rabbitmq_retry_return_exchange():
    return get("RABBITMQ", "retry.return.exchange", default=None)


def rabbitmq_retry_queue():
    return get("RABBITMQ", "retry.queue", default=None)


def rabbitmq_dead_queue():
    return get("RABBITMQ", "dead.queue", default=None)


def rabbitmq_dead_exchange():
    return get("RABBITMQ", "dead.exchange", default=None)


def rabbitmq_time_to_live():
    return int(get("RABBITMQ", "retry.time.to.live", default=None))


def _read_file(filename):
    file = open(filename, "r")
    content = file.read()
    file.close()
    return content
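Configuration is a plain INI file read with RawConfigParser; the sections and options used above are ALMANACH (auth_token, volume_existence_threshold, device_metadata_whitelist), MONGODB (url, database, indexes) and RABBITMQ (url, queue, exchange, routing.key, the retry.* options and the dead.* options). A short sketch, not part of the commit, of pointing the module at a custom file; the path is a hypothetical example:

    from almanach import config

    # Any argv-style token of the form config_file=<path> overrides the
    # packaged resources/config/almanach.cfg.
    config.read(["config_file=/etc/almanach/almanach.cfg"])   # hypothetical path
    print config.mongodb_database()      # falls back to "almanach" when the option is unset
    print config.rabbitmq_routing_key()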
almanach/core/__init__.py: new empty file
almanach/core/controller.py: new file, 262 lines (listing truncated)

# Copyright 2016 Internap.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import logging
import pytz

from datetime import datetime
from datetime import timedelta
from dateutil import parser as date_parser
from pkg_resources import get_distribution

from almanach.common.DateFormatException import DateFormatException
from almanach.core.model import Instance, Volume, VolumeType
from almanach import config


class Controller(object):
    def __init__(self, database_adapter):
        self.database_adapter = database_adapter
        self.metadata_whitelist = config.device_metadata_whitelist()

        self.volume_existence_threshold = timedelta(0, config.volume_existence_threshold())

    def get_application_info(self):
        return {
            "info": {"version": get_distribution("almanach").version},
            "database": {"all_entities": self.database_adapter.count_entities(),
                         "active_entities": self.database_adapter.count_active_entities()}
        }

    def _fresher_entity_exists(self, entity_id, date):
        try:
            entity = self.database_adapter.get_active_entity(entity_id)
            if entity and entity.last_event > date:
                return True
        except KeyError:
            pass
        except NotImplementedError:
            pass
        return False

    def create_instance(self, instance_id, tenant_id, create_date, flavor, os_type, distro, version, name, metadata):
        create_date = self._validate_and_parse_date(create_date)
        logging.info("instance %s created in project %s (flavor %s; distro %s %s %s) on %s" % (
            instance_id, tenant_id, flavor, os_type, distro, version, create_date))
        if self._fresher_entity_exists(instance_id, create_date):
            logging.warning("instance %s already exists with a more recent entry", instance_id)
            return

        filtered_metadata = self._filter_metadata_with_whitelist(metadata)

        entity = Instance(instance_id, tenant_id, create_date, None, flavor, {"os_type": os_type, "distro": distro,
                                                                              "version": version},
                          create_date, name, filtered_metadata)
        self.database_adapter.insert_entity(entity)

    def delete_instance(self, instance_id, delete_date):
        delete_date = self._validate_and_parse_date(delete_date)
        logging.info("instance %s deleted on %s" % (instance_id, delete_date))
        self.database_adapter.close_active_entity(instance_id, delete_date)

    def resize_instance(self, instance_id, flavor, resize_date):
        resize_date = self._validate_and_parse_date(resize_date)
        logging.info("instance %s resized to flavor %s on %s" % (instance_id, flavor, resize_date))
        try:
            instance = self.database_adapter.get_active_entity(instance_id)
            if flavor != instance.flavor:
                self.database_adapter.close_active_entity(instance_id, resize_date)
                instance.flavor = flavor
                instance.start = resize_date
                instance.end = None
                instance.last_event = resize_date
                self.database_adapter.insert_entity(instance)
        except KeyError as e:
            logging.error("Trying to resize an instance with id '%s' not in the database yet." % instance_id)
            raise e

    def rebuild_instance(self, instance_id, distro, version, rebuild_date):
        rebuild_date = self._validate_and_parse_date(rebuild_date)
        instance = self.database_adapter.get_active_entity(instance_id)
        logging.info("instance %s rebuilded in project %s to os %s %s on %s" % (instance_id, instance.project_id,
                                                                                distro, version, rebuild_date))
        if instance.os.distro != distro or instance.os.version != version:
            self.database_adapter.close_active_entity(instance_id, rebuild_date)

            instance.os.distro = distro
            instance.os.version = version
            instance.start = rebuild_date
            instance.end = None
            instance.last_event = rebuild_date
            self.database_adapter.insert_entity(instance)

    def update_instance_create_date(self, instance_id, create_date):
        logging.info("instance %s create date updated for %s" % (instance_id, create_date))
        try:
            instance = self.database_adapter.get_active_entity(instance_id)
            instance.start = datetime.strptime(create_date[0:19], "%Y-%m-%d %H:%M:%S")
            self.database_adapter.update_active_entity(instance)
            return True
        except KeyError as e:
            logging.error("Trying to update an instance with id '%s' not in the database yet." % instance_id)
            raise e

    def create_volume(self, volume_id, project_id, start, volume_type, size, volume_name, attached_to=None):
        start = self._validate_and_parse_date(start)
        logging.info("volume %s created in project %s to size %s on %s" % (volume_id, project_id, size, start))
        if self._fresher_entity_exists(volume_id, start):
            return

        volume_type_name = self._get_volume_type_name(volume_type)

        entity = Volume(volume_id, project_id, start, None, volume_type_name, size, start, volume_name, attached_to)
        self.database_adapter.insert_entity(entity)

    def _get_volume_type_name(self, volume_type_id):
        if volume_type_id is None:
            return None

        volume_type = self.database_adapter.get_volume_type(volume_type_id)
        return volume_type.volume_type_name

    def attach_volume(self, volume_id, date, attachments):
        date = self._validate_and_parse_date(date)
        logging.info("volume %s attached to %s on %s" % (volume_id, attachments, date))
        try:
            self._volume_attach_instance(volume_id, date, attachments)
        except KeyError as e:
            logging.error("Trying to attach a volume with id '%s' not in the database yet." % volume_id)
            raise e

    def detach_volume(self, volume_id, date, attachments):
        date = self._validate_and_parse_date(date)
        logging.info("volume %s detached on %s" % (volume_id, date))
        try:
            self._volume_detach_instance(volume_id, date, attachments)
        except KeyError as e:
            logging.error("Trying to detach a volume with id '%s' not in the database yet." % volume_id)
            raise e

    def _volume_attach_instance(self, volume_id, date, attachments):
        volume = self.database_adapter.get_active_entity(volume_id)
        date = self._localize_date(date)
        volume.last_event = date
        existing_attachments = volume.attached_to
        volume.attached_to = attachments

        if existing_attachments or self._is_within_threshold(date, volume):
            self.database_adapter.update_active_entity(volume)
        else:
            self._close_volume(volume_id, volume, date)

    def _volume_detach_instance(self, volume_id, date, attachments):
        volume = self.database_adapter.get_active_entity(volume_id)
        date = self._localize_date(date)
        volume.last_event = date
        volume.attached_to = attachments

        if attachments or self._is_within_threshold(date, volume):
            self.database_adapter.update_active_entity(volume)
        else:
            self._close_volume(volume_id, volume, date)

    def _is_within_threshold(self, date, volume):
        return date - volume.start < self.volume_existence_threshold

    def _close_volume(self, volume_id, volume, date):
        self.database_adapter.close_active_entity(volume_id, date)
        volume.start = date
        volume.end = None
        self.database_adapter.insert_entity(volume)

    def rename_volume(self, volume_id, volume_name):
        try:
            volume = self.database_adapter.get_active_entity(volume_id)
            if volume and volume.name != volume_name
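The attach/detach handlers above use volume_existence_threshold, a number of seconds from the configuration wrapped in a timedelta, to decide whether to amend the open volume record in place or close it and open a new one. A small sketch of that comparison, not part of the commit, with a hypothetical one-hour threshold:

    from datetime import datetime, timedelta

    volume_existence_threshold = timedelta(0, 3600)   # hypothetical: 3600 s configured
    volume_start = datetime(2016, 1, 1, 8, 0, 0)
    event_date = datetime(2016, 1, 1, 8, 20, 0)

    # Same test as Controller._is_within_threshold(): a recent volume is updated
    # in place, an older one is closed and re-inserted with a new start date.
    print event_date - volume_start < volume_existence_threshold   # True, so update in place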