First opensource commit
Commit 27abb148bd
.gitignore (new file, 10 lines)
@@ -0,0 +1,10 @@
.idea
.tox
*.iml
*.pyc
*.coverage
*.egg*
.vagrant
.DS_Store
AUTHORS
ChangeLog
.travis.yml (new file, 14 lines)
@@ -0,0 +1,14 @@
language: python
python:
  - '2.7'
install:
  - pip install tox
script: tox -r
deploy:
  provider: pypi
  user: internaphosting
  on:
    tags: true
    repo: internap/almanach
  password:
    secure: l8iby1dwEHsWl4Utas393CncC7dpVJeM9XUK8ruexdTXIkrOKfYmIQCYmbAeucD3AVoJj1YKHCMeyS9X48aV6u6X0J1lMze7DiDvGu0/mIGIRlW8vkX9oLzWY5U6KA5u4P7ENLBp4I7o3evob+4f1SW0XUjThOTpavRTPh4NcQ+tgTqOY6P+RKfdxXXeSlWgIQeYCyfvT50gKkf3M+VOryKl8ZeW4mBkstI3+MZQo2PT4xOhBjUHw0i/Exff3+dnQCZTYRGqN0UQAn1aqOxgtZ+PwxwDCRWMoSdmbJjUNrvCmnH/fKkpuQsax946PPOkfGvc8khE6fEZ/fER60AVHhbooNsSr8aOIXBeLxVAvdHOO53/QB5JRcHauTSeegBpThWtZ2tdJxeHyv8/07uEE8VdIQWMbqdA7wDEWUeYrjZ0jKC3pYjtIV4ztgC2U/DKL14OOK3NUzyQkCAeYgB5nefjBR18uasjyss/R7s6YUwP8EVGrZqjWRq42nlPSsD54TzI+9svcFpLS8uwWAX5+TVZaUZWA1YDfOFbp9B3NbPhr0af8cpwdqGVx+AI/EtWye2bCVht1RKiHYOEHBz8iZP5aE0vZt7XNz4DEVhvArWgZBhUOmRDz5HbBpx+3th+cmWC3VbvaSFqE1Cm0yZXfWlTFteYbDi3LBPDTdk3rF8=
LICENSE (new file, 176 lines)
@@ -0,0 +1,176 @@

                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.
README.md (new file, 17 lines)
@@ -0,0 +1,17 @@
Almanach
========

[![Build Status](https://travis-ci.org/internap/almanach.svg?branch=master)](https://travis-ci.org/internap/almanach)
[![PyPI version](https://badge.fury.io/py/almanach.svg)](https://badge.fury.io/py/almanach)

Almanach stores the utilization of OpenStack resources (instances and volumes) for each tenant.

What is Almanach?
-----------------

The main purpose of this software is to bill customers based on their usage of the cloud infrastructure.

Almanach is composed of two parts:

- **Collector**: listens for OpenStack events and stores the relevant information in the database.
- **REST API**: exposes the collected information to external systems.
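As a quick illustration of how an external system might query the REST API defined later in this commit (a sketch only: host, port, tenant id and token are hypothetical, and the requests library is not a dependency of this commit):

import requests

# List a tenant's instances over January 2016. The X-Auth-Token header must
# match the auth_token option of the ALMANACH config section, and the dates
# must use the "%Y-%m-%d %H:%M:%S.%f" format the API expects.
response = requests.get(
    "http://localhost:8000/project/my-tenant-id/instances",
    headers={"X-Auth-Token": "secret"},
    params={"start": "2016-01-01 00:00:00.000",
            "end": "2016-02-01 00:00:00.000"},
)
print(response.json())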
almanach/__init__.py (new file, empty)
almanach/adapters/__init__.py (new file, empty)
almanach/adapters/api_route_v1.py (new file, 301 lines)
@@ -0,0 +1,301 @@
# Copyright 2016 Internap.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import logging
import json
import jsonpickle

from datetime import datetime
from functools import wraps
from flask import Blueprint, Response, request
from werkzeug.wrappers import BaseResponse

from almanach import config
from almanach.common.DateFormatException import DateFormatException

api = Blueprint("api", __name__)
controller = None


def to_json(api_call):
    def encode(data):
        return jsonpickle.encode(data, unpicklable=False)

    @wraps(api_call)
    def decorator(*args, **kwargs):
        try:
            result = api_call(*args, **kwargs)
            return result if isinstance(result, BaseResponse) \
                else Response(encode(result), 200, {"Content-Type": "application/json"})
        except DateFormatException as e:
            logging.warning(e.message)
            return Response(encode({"error": e.message}), 400, {"Content-Type": "application/json"})
        except KeyError as e:
            message = "The '{param}' param is mandatory for the request you have made.".format(param=e.message)
            logging.warning(message)
            return encode({"error": message}), 400, {"Content-Type": "application/json"}
        except TypeError:
            message = "The request you have made must have data. None was given."
            logging.warning(message)
            return encode({"error": message}), 400, {"Content-Type": "application/json"}
        except Exception as e:
            logging.exception(e)
            return Response(encode({"error": e.message}), 500, {"Content-Type": "application/json"})
    return decorator


def authenticated(api_call):
    @wraps(api_call)
    def decorator(*args, **kwargs):
        auth_token = request.headers.get('X-Auth-Token')
        if auth_token == config.api_auth_token():
            return api_call(*args, **kwargs)
        else:
            return Response('Unauthorized', 401)

    return decorator

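# Note on decorator order for the authenticated routes below: @api.route is
# applied outermost, then @authenticated, then @to_json, so the X-Auth-Token
# check runs before the handler and a failed check returns a plain 401 without
# going through the JSON encoding path.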

@api.route("/info", methods=["GET"])
@to_json
def get_info():
    logging.info("Get application info")
    return controller.get_application_info()


@api.route("/project/<project_id>/instance", methods=["POST"])
@authenticated
@to_json
def create_instance(project_id):
    instance = json.loads(request.data)
    logging.info("Creating instance for tenant %s with data %s", project_id, instance)
    controller.create_instance(
        tenant_id=project_id,
        instance_id=instance['id'],
        create_date=instance['created_at'],
        flavor=instance['flavor'],
        os_type=instance['os_type'],
        distro=instance['os_distro'],
        version=instance['os_version'],
        name=instance['name'],
        metadata={}
    )

    return Response(status=201)


@api.route("/instance/<instance_id>", methods=["DELETE"])
@authenticated
@to_json
def delete_instance(instance_id):
    data = json.loads(request.data)
    logging.info("Deleting instance with id %s with data %s", instance_id, data)
    controller.delete_instance(
        instance_id=instance_id,
        delete_date=data['date']
    )

    return Response(status=202)


@api.route("/instance/<instance_id>/resize", methods=["PUT"])
@authenticated
@to_json
def resize_instance(instance_id):
    instance = json.loads(request.data)
    logging.info("Resizing instance with id %s with data %s", instance_id, instance)
    controller.resize_instance(
        instance_id=instance_id,
        resize_date=instance['date'],
        flavor=instance['flavor']
    )

    return Response(status=200)


@api.route("/instance/<instance_id>/rebuild", methods=["PUT"])
@authenticated
@to_json
def rebuild_instance(instance_id):
    instance = json.loads(request.data)
    logging.info("Rebuilding instance with id %s with data %s", instance_id, instance)
    controller.rebuild_instance(
        instance_id=instance_id,
        distro=instance['distro'],
        version=instance['version'],
        rebuild_date=instance['rebuild_date'],
    )

    return Response(status=200)


@api.route("/project/<project_id>/instances", methods=["GET"])
@authenticated
@to_json
def list_instances(project_id):
    start, end = get_period()
    logging.info("Listing instances between %s and %s", start, end)
    return controller.list_instances(project_id, start, end)


@api.route("/project/<project_id>/volume", methods=["POST"])
@authenticated
@to_json
def create_volume(project_id):
    volume = json.loads(request.data)
    logging.info("Creating volume for tenant %s with data %s", project_id, volume)
    controller.create_volume(
        project_id=project_id,
        volume_id=volume['volume_id'],
        start=volume['start'],
        volume_type=volume['volume_type'],
        size=volume['size'],
        volume_name=volume['volume_name'],
        attached_to=volume['attached_to']
    )

    return Response(status=201)


@api.route("/volume/<volume_id>", methods=["DELETE"])
@authenticated
@to_json
def delete_volume(volume_id):
    data = json.loads(request.data)
    logging.info("Deleting volume with id %s with data %s", volume_id, data)
    controller.delete_volume(
        volume_id=volume_id,
        delete_date=data['date']
    )

    return Response(status=202)


@api.route("/volume/<volume_id>/resize", methods=["PUT"])
@authenticated
@to_json
def resize_volume(volume_id):
    volume = json.loads(request.data)
    logging.info("Resizing volume with id %s with data %s", volume_id, volume)
    controller.resize_volume(
        volume_id=volume_id,
        size=volume['size'],
        update_date=volume['date']
    )

    return Response(status=200)


@api.route("/volume/<volume_id>/attach", methods=["PUT"])
@authenticated
@to_json
def attach_volume(volume_id):
    volume = json.loads(request.data)
    logging.info("Attaching volume with id %s with data %s", volume_id, volume)
    controller.attach_volume(
        volume_id=volume_id,
        date=volume['date'],
        attachments=volume['attachments']
    )

    return Response(status=200)


@api.route("/volume/<volume_id>/detach", methods=["PUT"])
@authenticated
@to_json
def detach_volume(volume_id):
    volume = json.loads(request.data)
    logging.info("Detaching volume with id %s with data %s", volume_id, volume)
    controller.detach_volume(
        volume_id=volume_id,
        date=volume['date'],
        attachments=volume['attachments']
    )

    return Response(status=200)


@api.route("/project/<project_id>/volumes", methods=["GET"])
@authenticated
@to_json
def list_volumes(project_id):
    start, end = get_period()
    logging.info("Listing volumes between %s and %s", start, end)
    return controller.list_volumes(project_id, start, end)


@api.route("/project/<project_id>/entities", methods=["GET"])
@authenticated
@to_json
def list_entity(project_id):
    start, end = get_period()
    logging.info("Listing entities between %s and %s", start, end)
    return controller.list_entities(project_id, start, end)


# Temporary for AgileV1 migration
@api.route("/instance/<instance_id>/create_date/<create_date>", methods=["PUT"])
@authenticated
@to_json
def update_instance_create_date(instance_id, create_date):
    logging.info("Update create date for instance %s to %s", instance_id, create_date)
    return controller.update_instance_create_date(instance_id, create_date)


@api.route("/volume_types", methods=["GET"])
@authenticated
@to_json
def list_volume_types():
    logging.info("Listing volume types")
    return controller.list_volume_types()


@api.route("/volume_type/<type_id>", methods=["GET"])
@authenticated
@to_json
def get_volume_type(type_id):
    logging.info("Get volume type for id %s", type_id)
    return controller.get_volume_type(type_id)


@api.route("/volume_type", methods=["POST"])
@authenticated
@to_json
def create_volume_type():
    volume_type = json.loads(request.data)
    logging.info("Creating volume type with data '%s'", volume_type)
    controller.create_volume_type(
        volume_type_id=volume_type['type_id'],
        volume_type_name=volume_type['type_name']
    )
    return Response(status=201)


@api.route("/volume_type/<type_id>", methods=["DELETE"])
@authenticated
@to_json
def delete_volume_type(type_id):
    logging.info("Deleting volume type with id '%s'", type_id)
    controller.delete_volume_type(type_id)
    return Response(status=202)


def get_period():
    start = datetime.strptime(request.args["start"], "%Y-%m-%d %H:%M:%S.%f")
    if "end" not in request.args:
        end = datetime.now()
    else:
        end = datetime.strptime(request.args["end"], "%Y-%m-%d %H:%M:%S.%f")
    return start, end
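For reference, get_period above requires a start query parameter (and optionally end) with a fractional-seconds component; a minimal sketch of producing a compatible value:

from datetime import datetime

# Prints "2016-01-31 18:24:34.152300", which matches "%Y-%m-%d %H:%M:%S.%f".
print(datetime(2016, 1, 31, 18, 24, 34, 152300).strftime("%Y-%m-%d %H:%M:%S.%f"))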
almanach/adapters/bus_adapter.py (new file, 187 lines)
@@ -0,0 +1,187 @@
# Copyright 2016 Internap.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import json
import logging
import kombu

from kombu.mixins import ConsumerMixin
from almanach import config


class BusAdapter(ConsumerMixin):

    def __init__(self, controller, connection, retry_adapter):
        super(BusAdapter, self).__init__()
        self.controller = controller
        self.connection = connection
        self.retry_adapter = retry_adapter

    def on_message(self, notification, message):
        try:
            self._process_notification(notification)
        except Exception as e:
            logging.warning("Sending notification to retry letter exchange {0}".format(json.dumps(notification)))
            logging.exception(e.message)
            self.retry_adapter.publish_to_dead_letter(message)
        message.ack()

    def _process_notification(self, notification):
        if isinstance(notification, basestring):
            notification = json.loads(notification)

        event_type = notification.get("event_type")
        logging.info(event_type)

        if event_type == "compute.instance.create.end":
            self._instance_created(notification)
        elif event_type == "compute.instance.delete.end":
            self._instance_deleted(notification)
        elif event_type == "compute.instance.resize.confirm.end":
            self._instance_resized(notification)
        elif event_type == "compute.instance.rebuild.end":
            self._instance_rebuilt(notification)
        elif event_type == "volume.create.end":
            self._volume_created(notification)
        elif event_type == "volume.delete.end":
            self._volume_deleted(notification)
        elif event_type == "volume.resize.end":
            self._volume_resized(notification)
        elif event_type == "volume.attach.end":
            self._volume_attached(notification)
        elif event_type == "volume.detach.end":
            self._volume_detached(notification)
        elif event_type == "volume.update.end":
            self._volume_renamed(notification)
        elif event_type == "volume.exists":
            self._volume_renamed(notification)
        elif event_type == "volume_type.create":
            self._volume_type_create(notification)
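        # Any other event_type falls through this dispatch untouched; the
        # message is still acked by on_message, so unknown notifications are
        # simply dropped.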

    def get_consumers(self, consumer, channel):
        queue = kombu.Queue(config.rabbitmq_queue(), routing_key=config.rabbitmq_routing_key())
        return [consumer(
            [queue],
            callbacks=[self.on_message],
            auto_declare=False)]

    def run(self, _tokens=1):
        try:
            super(BusAdapter, self).run(_tokens)
        except KeyboardInterrupt:
            pass

    def _instance_created(self, notification):
        payload = notification.get("payload")
        project_id = payload.get("tenant_id")
        date = payload.get("created_at")
        instance_id = payload.get("instance_id")
        flavor = payload.get("instance_type")
        os_type = payload.get("image_meta").get("os_type")
        distro = payload.get("image_meta").get("distro")
        version = payload.get("image_meta").get("version")
        name = payload.get("hostname")
        metadata = payload.get("metadata")
        if isinstance(metadata, list):
            metadata = {}
        self.controller.create_instance(
            instance_id,
            project_id,
            date,
            flavor,
            os_type,
            distro,
            version,
            name,
            metadata
        )

    def _instance_deleted(self, notification):
        payload = notification.get("payload")
        date = payload.get("terminated_at")
        instance_id = payload.get("instance_id")
        self.controller.delete_instance(instance_id, date)

    def _instance_resized(self, notification):
        payload = notification.get("payload")
        date = notification.get("timestamp")
        flavor = payload.get("instance_type")
        instance_id = payload.get("instance_id")
        self.controller.resize_instance(instance_id, flavor, date)

    def _volume_created(self, notification):
        payload = notification.get("payload")
        date = payload.get("created_at")
        project_id = payload.get("tenant_id")
        volume_id = payload.get("volume_id")
        volume_name = payload.get("display_name")
        volume_type = payload.get("volume_type")
        volume_size = payload.get("size")
        self.controller.create_volume(volume_id, project_id, date, volume_type, volume_size, volume_name)

    def _volume_deleted(self, notification):
        payload = notification.get("payload")
        volume_id = payload.get("volume_id")
        end_date = notification.get("timestamp")
        self.controller.delete_volume(volume_id, end_date)

    def _volume_renamed(self, notification):
        payload = notification.get("payload")
        volume_id = payload.get("volume_id")
        volume_name = payload.get("display_name")
        self.controller.rename_volume(volume_id, volume_name)

    def _volume_resized(self, notification):
        payload = notification.get("payload")
        date = notification.get("timestamp")
        volume_id = payload.get("volume_id")
        volume_size = payload.get("size")
        self.controller.resize_volume(volume_id, volume_size, date)

    def _volume_attached(self, notification):
        payload = notification.get("payload")
        volume_id = payload.get("volume_id")
        event_date = notification.get("timestamp")
        self.controller.attach_volume(volume_id, event_date, self._get_attached_instances(payload))

    def _volume_detached(self, notification):
        payload = notification.get("payload")
        volume_id = payload.get("volume_id")
        event_date = notification.get("timestamp")
        self.controller.detach_volume(volume_id, event_date, self._get_attached_instances(payload))

    @staticmethod
    def _get_attached_instances(payload):
        instances_ids = []
        if "volume_attachment" in payload:
            for instance in payload["volume_attachment"]:
                instances_ids.append(instance.get("instance_uuid"))
        elif payload.get("instance_uuid") is not None:
            instances_ids.append(payload.get("instance_uuid"))

        return instances_ids

    def _instance_rebuilt(self, notification):
        payload = notification.get("payload")
        date = notification.get("timestamp")
        instance_id = payload.get("instance_id")
        distro = payload.get("image_meta").get("distro")
        version = payload.get("image_meta").get("version")
        self.controller.rebuild_instance(instance_id, distro, version, date)

    def _volume_type_create(self, notification):
        volume_types = notification.get("payload").get("volume_types")
        volume_type_id = volume_types.get("id")
        volume_type_name = volume_types.get("name")
        self.controller.create_volume_type(volume_type_id, volume_type_name)
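To make the dispatch concrete, here is an abbreviated, hypothetical compute.instance.create.end notification containing the fields _instance_created reads (all values invented for illustration):

notification = {
    "event_type": "compute.instance.create.end",
    "timestamp": "2016-01-01 12:00:00.000000",
    "payload": {
        "tenant_id": "my-tenant-id",
        "instance_id": "an-instance-uuid",
        "created_at": "2016-01-01T12:00:00Z",
        "instance_type": "A1.1",
        "image_meta": {"os_type": "linux", "distro": "ubuntu", "version": "14.04"},
        "hostname": "my-hostname",
        "metadata": {},
    },
}
# _process_notification(notification) would route this through
# _instance_created and ultimately call controller.create_instance(...).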
almanach/adapters/database_adapter.py (new file, 149 lines)
@@ -0,0 +1,149 @@
# Copyright 2016 Internap.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import logging

import pymongo

from pymongo.errors import ConfigurationError
from almanach import config
from almanach.common.AlmanachException import AlmanachException
from almanach.common.VolumeTypeNotFoundException import VolumeTypeNotFoundException
from almanach.core.model import build_entity_from_dict, VolumeType
from pymongomodem.utils import decode_output, encode_input


def database(function):
    def _connection(self, *args, **kwargs):
        try:
            if not self.db:
                connection = pymongo.MongoClient(config.mongodb_url(), tz_aware=True)
                self.db = connection[config.mongodb_database()]
                ensureindex(self.db)
            return function(self, *args, **kwargs)
        except KeyError as e:
            raise e
        except VolumeTypeNotFoundException as e:
            raise e
        except NotImplementedError as e:
            raise e
        except ConfigurationError as e:
            logging.exception("DB Connection failed, make sure the username and password "
                              "don't contain any of these characters: :+&/")
            raise e
        except Exception as e:
            logging.exception(e)
            raise e

    return _connection


def ensureindex(db):
    db.entity.ensure_index(
        [(index, pymongo.ASCENDING)
         for index in config.mongodb_indexes()])


class DatabaseAdapter(object):
    def __init__(self):
        self.db = None

    @database
    def get_active_entity(self, entity_id):
        entity = self._get_one_entity_from_db({"entity_id": entity_id, "end": None})
        if not entity:
            raise KeyError("Unable to find entity id %s" % entity_id)
        return build_entity_from_dict(entity)

    @database
    def count_entities(self):
        return self.db.entity.count()

    @database
    def count_active_entities(self):
        return self.db.entity.find({"end": None}).count()

    @database
    def count_entity_entries(self, entity_id):
        return self.db.entity.find({"entity_id": entity_id}).count()

    @database
    def list_entities(self, project_id, start, end, entity_type=None):
        args = {"project_id": project_id, "start": {"$lte": end}, "$or": [{"end": None}, {"end": {"$gte": start}}]}
        if entity_type:
            args["entity_type"] = entity_type
        entities = self._get_entities_from_db(args)
        return [build_entity_from_dict(entity) for entity in entities]

    @database
    def insert_entity(self, entity):
        self._insert_entity(entity.as_dict())

    @database
    def insert_volume_type(self, volume_type):
        self.db.volume_type.insert(volume_type.__dict__)

    @database
    def get_volume_type(self, volume_type_id):
        volume_type = self.db.volume_type.find_one({"volume_type_id": volume_type_id})
        if not volume_type:
            logging.error("Trying to get a volume type not in the database.")
            raise VolumeTypeNotFoundException(volume_type_id=volume_type_id)

        return VolumeType(volume_type_id=volume_type["volume_type_id"],
                          volume_type_name=volume_type["volume_type_name"])

    @database
    def delete_volume_type(self, volume_type_id):
        if volume_type_id is None:
            error = "Trying to delete all volume types, which is not permitted."
            logging.error(error)
            raise AlmanachException(error)
        returned_value = self.db.volume_type.remove({"volume_type_id": volume_type_id})
        if returned_value['n'] == 1:
            logging.info("Deleted volume type with id '%s' successfully." % volume_type_id)
        else:
            error = "Volume type with id '%s' doesn't exist in the database." % volume_type_id
            logging.error(error)
            raise AlmanachException(error)

    @database
    def list_volume_types(self):
        volume_types = self.db.volume_type.find()
        return [VolumeType(volume_type_id=volume_type["volume_type_id"],
                           volume_type_name=volume_type["volume_type_name"]) for volume_type in volume_types]

    @database
    def close_active_entity(self, entity_id, end):
        self.db.entity.update({"entity_id": entity_id, "end": None}, {"$set": {"end": end, "last_event": end}})

    @database
    def update_active_entity(self, entity):
        self.db.entity.update({"entity_id": entity.entity_id, "end": None}, {"$set": entity.as_dict()})

    @database
    def delete_active_entity(self, entity_id):
        self.db.entity.remove({"entity_id": entity_id, "end": None})

    @encode_input
    def _insert_entity(self, entity):
        self.db.entity.insert(entity)

    @decode_output
    def _get_entities_from_db(self, args):
        return list(self.db.entity.find(args, {"_id": 0}))

    @decode_output
    def _get_one_entity_from_db(self, args):
        return self.db.entity.find_one(args, {"_id": 0})
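The query in list_entities selects entities whose lifetime overlaps the requested period; an equivalent predicate in plain Python, for clarity:

def overlaps(entity, start, end):
    # Same logic as the MongoDB query in list_entities: the entity began
    # before the period ended, and it is either still open (end is None)
    # or it ended after the period started.
    return entity["start"] <= end and (entity["end"] is None or entity["end"] >= start)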
almanach/adapters/retry_adapter.py (new file, 119 lines)
@@ -0,0 +1,119 @@
# Copyright 2016 Internap.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import json
import logging
from kombu import Exchange, Queue, Producer
from almanach import config


class RetryAdapter:
    def __init__(self, connection):
        self.connection = connection
        retry_exchange = self._configure_retry_exchanges(self.connection)
        dead_exchange = self._configure_dead_exchange(self.connection)
        self._retry_producer = Producer(self.connection, exchange=retry_exchange)
        self._dead_producer = Producer(self.connection, exchange=dead_exchange)

    def publish_to_dead_letter(self, message):
        death_count = self._rejected_count(message)
        logging.info("Message has been dead {0} times".format(death_count))
        if death_count < config.rabbitmq_retry():
            logging.info("Publishing to retry queue")
            self._publish_message(self._retry_producer, message)
            logging.info("Published to retry queue")
        else:
            logging.info("Publishing to dead letter queue")
            self._publish_message(self._dead_producer, message)
            logging.info("Publishing notification to dead letter queue: {0}".format(json.dumps(message.body)))

    def _configure_retry_exchanges(self, connection):
        def declare_queues():
            channel = connection.channel()
            almanach_exchange = Exchange(name=config.rabbitmq_retry_return_exchange(),
                                         type='direct',
                                         channel=channel)
            retry_exchange = Exchange(name=config.rabbitmq_retry_exchange(),
                                      type='direct',
                                      channel=channel)
            retry_queue = Queue(name=config.rabbitmq_retry_queue(),
                                exchange=retry_exchange,
                                routing_key=config.rabbitmq_routing_key(),
                                queue_arguments=self._get_queue_arguments(),
                                channel=channel)
            almanach_queue = Queue(name=config.rabbitmq_queue(),
                                   exchange=almanach_exchange,
                                   durable=False,
                                   routing_key=config.rabbitmq_routing_key(),
                                   channel=channel)

            retry_queue.declare()
            almanach_queue.declare()

            return retry_exchange

        def error_callback(exception, interval):
            logging.error('Failed to declare queues and exchanges, retrying in %d seconds. %r' % (interval, exception))

        declare_queues = connection.ensure(connection, declare_queues, errback=error_callback,
                                           interval_start=0, interval_step=5, interval_max=30)
        return declare_queues()

    def _configure_dead_exchange(self, connection):
        def declare_dead_queue():
            channel = connection.channel()
            dead_exchange = Exchange(name=config.rabbitmq_dead_exchange(),
                                     type='direct',
                                     channel=channel)
            dead_queue = Queue(name=config.rabbitmq_dead_queue(),
                               routing_key=config.rabbitmq_routing_key(),
                               exchange=dead_exchange,
                               channel=channel)

            dead_queue.declare()

            return dead_exchange

        def error_callback(exception, interval):
            logging.error('Failed to declare dead queue and exchange, retrying in %d seconds. %r' % (interval, exception))

        declare_dead_queue = connection.ensure(connection, declare_dead_queue, errback=error_callback,
                                               interval_start=0, interval_step=5, interval_max=30)
        return declare_dead_queue()

    def _get_queue_arguments(self):
        return {"x-message-ttl": self._get_time_to_live_in_seconds(),
                "x-dead-letter-exchange": config.rabbitmq_retry_return_exchange(),
                "x-dead-letter-routing-key": config.rabbitmq_routing_key()}

    def _get_time_to_live_in_seconds(self):
        return config.rabbitmq_time_to_live() * 1000

    def _rejected_count(self, message):
        if 'x-death' in message.headers:
            return len(message.headers['x-death'])
        return 0

    def _publish_message(self, producer, message):
        publish = self.connection.ensure(producer, producer.publish, errback=self._error_callback,
                                         interval_start=0, interval_step=5, interval_max=30)
        publish(message.body,
                routing_key=message.delivery_info['routing_key'],
                headers=message.headers,
                content_type=message.content_type,
                content_encoding=message.content_encoding)

    def _error_callback(self, exception, interval):
        logging.error('Failed to publish message to dead letter queue, retrying in %d seconds. %r'
                      % (interval, exception))
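A note on the retry accounting: _rejected_count assumes the broker appends one entry to the x-death header each time a message is dead-lettered. A sketch with hypothetical header contents:

# After two passes through the retry queue (queue name hypothetical):
headers = {"x-death": [{"queue": "almanach.retry"}, {"queue": "almanach.retry"}]}
death_count = len(headers.get("x-death", []))  # 2, still below a retry.maximum of 3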
almanach/api.py (new file, 51 lines)
@@ -0,0 +1,51 @@
# Copyright 2016 Internap.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import logging
from flask import Flask
from gunicorn.app.base import Application

from almanach import config
from almanach.adapters import api_route_v1 as api_route
from almanach import log_bootstrap
from almanach.adapters.database_adapter import DatabaseAdapter
from almanach.core.controller import Controller


class AlmanachApi(Application):
    def __init__(self):
        super(AlmanachApi, self).__init__()

    def init(self, parser, opts, args):
        log_bootstrap.configure()
        config.read(args)
        self._controller = Controller(DatabaseAdapter())

    def load(self):
        logging.info("starting flask worker")
        api_route.controller = self._controller

        app = Flask("almanach")
        app.register_blueprint(api_route.api)

        return app


def run():
    almanach_api = AlmanachApi()
    almanach_api.run()


if __name__ == "__main__":
    run()
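Assuming setup.py wires run() to a console script (the entry point name below is hypothetical), the API would be launched with gunicorn handling its usual flags while the leftover positional argument reaches init() and config.read():

# almanach_api config_file=/etc/almanach/almanach.cfg --bind 0.0.0.0:8000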
almanach/collector.py (new file, 49 lines)
@@ -0,0 +1,49 @@
# Copyright 2016 Internap.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import logging
import sys
from kombu import Connection

from almanach import log_bootstrap
from almanach import config
from almanach.adapters.bus_adapter import BusAdapter
from almanach.adapters.database_adapter import DatabaseAdapter
from almanach.adapters.retry_adapter import RetryAdapter
from almanach.core.controller import Controller


class AlmanachCollector(object):
    def __init__(self):
        log_bootstrap.configure()
        config.read(sys.argv)
        self._controller = Controller(DatabaseAdapter())
        _connection = Connection(config.rabbitmq_url(), heartbeat=540)
        retry_adapter = RetryAdapter(_connection)

        self._busAdapter = BusAdapter(self._controller, _connection, retry_adapter)

    def run(self):
        logging.info("starting bus adapter")
        self._busAdapter.run()
        logging.info("shutting down")


def run():
    almanach_collector = AlmanachCollector()
    almanach_collector.run()


if __name__ == "__main__":
    run()
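Since the module guards run() with __main__ and config.read(sys.argv) scans the arguments for a config_file= override, a usage sketch (the config path is hypothetical):

# python -m almanach.collector config_file=/etc/almanach/almanach.cfg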
almanach/common/AlmanachException.py (new file, 16 lines)
@@ -0,0 +1,16 @@
# Copyright 2016 Internap.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


class AlmanachException(Exception):
    pass
almanach/common/DateFormatException.py (new file, 21 lines)
@@ -0,0 +1,21 @@
# Copyright 2016 Internap.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


class DateFormatException(Exception):
    def __init__(self, message=None):
        if not message:
            message = "The provided date has an invalid format. Format should be yyyy-mm-ddThh:mm:ss.msZ, " \
                      "e.g. 2015-01-31T18:24:34.1523Z"

        super(DateFormatException, self).__init__(message)
almanach/common/VolumeTypeNotFoundException.py (new file, 20 lines)
@@ -0,0 +1,20 @@
# Copyright 2016 Internap.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


class VolumeTypeNotFoundException(Exception):
    def __init__(self, volume_type_id, message=None):
        if not message:
            message = "Unable to find volume_type id '{volume_type_id}'".format(volume_type_id=volume_type_id)

        super(VolumeTypeNotFoundException, self).__init__(message)
almanach/common/__init__.py (new file, 1 blank line)
almanach/config.py (new file, 115 lines)
@@ -0,0 +1,115 @@
# Copyright 2016 Internap.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import ConfigParser
import pkg_resources
import os.path as Path

from almanach.common.AlmanachException import AlmanachException

configuration = ConfigParser.RawConfigParser()


def read(args=[], config_file="resources/config/almanach.cfg"):
    filename = pkg_resources.resource_filename("almanach", config_file)

    for param in args:
        if param.startswith("config_file="):
            filename = param.split("=")[-1]
            break

    if not Path.isfile(filename):
        raise AlmanachException("config file '{0}' not found".format(filename))

    print "loading configuration file {0}".format(filename)
    configuration.read(filename)


def get(section, option, default=None):
    try:
        return configuration.get(section, option)
    except:
        return default


def volume_existence_threshold():
    return int(get("ALMANACH", "volume_existence_threshold"))


def api_auth_token():
    return get("ALMANACH", "auth_token")


def device_metadata_whitelist():
    return get("ALMANACH", "device_metadata_whitelist").split(',')


def mongodb_url():
    return get("MONGODB", "url", default=None)


def mongodb_database():
    return get("MONGODB", "database", default="almanach")


def mongodb_indexes():
    return get('MONGODB', 'indexes').split(',')


def rabbitmq_url():
    return get("RABBITMQ", "url", default=None)


def rabbitmq_queue():
    return get("RABBITMQ", "queue", default=None)


def rabbitmq_exchange():
    return get("RABBITMQ", "exchange", default=None)


def rabbitmq_routing_key():
    return get("RABBITMQ", "routing.key", default=None)


def rabbitmq_retry():
    return int(get("RABBITMQ", "retry.maximum", default=None))


def rabbitmq_retry_exchange():
    return get("RABBITMQ", "retry.exchange", default=None)


def rabbitmq_retry_return_exchange():
    return get("RABBITMQ", "retry.return.exchange", default=None)


def rabbitmq_retry_queue():
    return get("RABBITMQ", "retry.queue", default=None)


def rabbitmq_dead_queue():
    return get("RABBITMQ", "dead.queue", default=None)


def rabbitmq_dead_exchange():
    return get("RABBITMQ", "dead.exchange", default=None)


def rabbitmq_time_to_live():
    return int(get("RABBITMQ", "retry.time.to.live", default=None))


def _read_file(filename):
    file = open(filename, "r")
    content = file.read()
    file.close()
    return content
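For reference, a minimal almanach.cfg sketch covering the options read above; the section and option names come from the getters, every value is hypothetical, and a config_file=/path argument on the command line overrides the packaged default:

[ALMANACH]
auth_token=secret
volume_existence_threshold=60
device_metadata_whitelist=metering.billing,metering.project

[MONGODB]
url=mongodb://almanach:almanach@localhost:27017/almanach
database=almanach
indexes=project_id,start,end

[RABBITMQ]
url=amqp://guest:guest@localhost:5672
queue=almanach.info
exchange=almanach.info
routing.key=almanach.info
retry.time.to.live=10
retry.exchange=almanach.retry
retry.maximum=3
retry.queue=almanach.retry
retry.return.exchange=almanach
dead.queue=almanach.dead
dead.exchange=almanach.dead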
almanach/core/__init__.py (new file, empty)
almanach/core/controller.py (new file, 262 lines)
@@ -0,0 +1,262 @@
# Copyright 2016 Internap.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import logging
import pytz

from datetime import datetime
from datetime import timedelta
from dateutil import parser as date_parser
from pkg_resources import get_distribution

from almanach.common.DateFormatException import DateFormatException
from almanach.core.model import Instance, Volume, VolumeType
from almanach import config


class Controller(object):
    def __init__(self, database_adapter):
        self.database_adapter = database_adapter
        self.metadata_whitelist = config.device_metadata_whitelist()

        self.volume_existence_threshold = timedelta(0, config.volume_existence_threshold())

    def get_application_info(self):
        return {
            "info": {"version": get_distribution("almanach").version},
            "database": {"all_entities": self.database_adapter.count_entities(),
                         "active_entities": self.database_adapter.count_active_entities()}
        }

    def _fresher_entity_exists(self, entity_id, date):
        try:
            entity = self.database_adapter.get_active_entity(entity_id)
            if entity and entity.last_event > date:
                return True
        except KeyError:
            pass
        except NotImplementedError:
            pass
        return False

    def create_instance(self, instance_id, tenant_id, create_date, flavor, os_type, distro, version, name, metadata):
        create_date = self._validate_and_parse_date(create_date)
        logging.info("instance %s created in project %s (flavor %s; distro %s %s %s) on %s" % (
            instance_id, tenant_id, flavor, os_type, distro, version, create_date))
        if self._fresher_entity_exists(instance_id, create_date):
            logging.warning("instance %s already exists with a more recent entry", instance_id)
            return

        filtered_metadata = self._filter_metadata_with_whitelist(metadata)

        entity = Instance(instance_id, tenant_id, create_date, None, flavor, {"os_type": os_type, "distro": distro,
                                                                              "version": version},
                          create_date, name, filtered_metadata)
        self.database_adapter.insert_entity(entity)

    def delete_instance(self, instance_id, delete_date):
        delete_date = self._validate_and_parse_date(delete_date)
        logging.info("instance %s deleted on %s" % (instance_id, delete_date))
        self.database_adapter.close_active_entity(instance_id, delete_date)

    def resize_instance(self, instance_id, flavor, resize_date):
        resize_date = self._validate_and_parse_date(resize_date)
        logging.info("instance %s resized to flavor %s on %s" % (instance_id, flavor, resize_date))
        try:
            instance = self.database_adapter.get_active_entity(instance_id)
            if flavor != instance.flavor:
                self.database_adapter.close_active_entity(instance_id, resize_date)
                instance.flavor = flavor
                instance.start = resize_date
                instance.end = None
                instance.last_event = resize_date
                self.database_adapter.insert_entity(instance)
        except KeyError as e:
            logging.error("Trying to resize an instance with id '%s' not in the database yet." % instance_id)
            raise e

    def rebuild_instance(self, instance_id, distro, version, rebuild_date):
        rebuild_date = self._validate_and_parse_date(rebuild_date)
        instance = self.database_adapter.get_active_entity(instance_id)
        logging.info("instance %s rebuilt in project %s to os %s %s on %s" % (instance_id, instance.project_id,
                                                                              distro, version, rebuild_date))
        if instance.os.distro != distro or instance.os.version != version:
            self.database_adapter.close_active_entity(instance_id, rebuild_date)

            instance.os.distro = distro
            instance.os.version = version
            instance.start = rebuild_date
            instance.end = None
            instance.last_event = rebuild_date
            self.database_adapter.insert_entity(instance)

    def update_instance_create_date(self, instance_id, create_date):
        logging.info("instance %s create date updated to %s" % (instance_id, create_date))
        try:
            instance = self.database_adapter.get_active_entity(instance_id)
            instance.start = datetime.strptime(create_date[0:19], "%Y-%m-%d %H:%M:%S")
            self.database_adapter.update_active_entity(instance)
            return True
        except KeyError as e:
            logging.error("Trying to update an instance with id '%s' not in the database yet." % instance_id)
            raise e

    def create_volume(self, volume_id, project_id, start, volume_type, size, volume_name, attached_to=None):
        start = self._validate_and_parse_date(start)
        logging.info("volume %s created in project %s with size %s on %s" % (volume_id, project_id, size, start))
        if self._fresher_entity_exists(volume_id, start):
            return

        volume_type_name = self._get_volume_type_name(volume_type)

        entity = Volume(volume_id, project_id, start, None, volume_type_name, size, start, volume_name, attached_to)
        self.database_adapter.insert_entity(entity)

    def _get_volume_type_name(self, volume_type_id):
        if volume_type_id is None:
            return None

        volume_type = self.database_adapter.get_volume_type(volume_type_id)
        return volume_type.volume_type_name

    def attach_volume(self, volume_id, date, attachments):
        date = self._validate_and_parse_date(date)
        logging.info("volume %s attached to %s on %s" % (volume_id, attachments, date))
        try:
            self._volume_attach_instance(volume_id, date, attachments)
        except KeyError as e:
            logging.error("Trying to attach a volume with id '%s' not in the database yet." % volume_id)
            raise e

    def detach_volume(self, volume_id, date, attachments):
        date = self._validate_and_parse_date(date)
        logging.info("volume %s detached on %s" % (volume_id, date))
        try:
            self._volume_detach_instance(volume_id, date, attachments)
        except KeyError as e:
            logging.error("Trying to detach a volume with id '%s' not in the database yet." % volume_id)
            raise e

    def _volume_attach_instance(self, volume_id, date, attachments):
        volume = self.database_adapter.get_active_entity(volume_id)
        date = self._localize_date(date)
        volume.last_event = date
        existing_attachments = volume.attached_to
        volume.attached_to = attachments

        if existing_attachments or self._is_within_threshold(date, volume):
            self.database_adapter.update_active_entity(volume)
        else:
            self._close_volume(volume_id, volume, date)

    def _volume_detach_instance(self, volume_id, date, attachments):
        volume = self.database_adapter.get_active_entity(volume_id)
        date = self._localize_date(date)
        volume.last_event = date
        volume.attached_to = attachments

        if attachments or self._is_within_threshold(date, volume):
            self.database_adapter.update_active_entity(volume)
        else:
            self._close_volume(volume_id, volume, date)

    def _is_within_threshold(self, date, volume):
        return date - volume.start < self.volume_existence_threshold
|
||||
|
||||
def _close_volume(self, volume_id, volume, date):
|
||||
self.database_adapter.close_active_entity(volume_id, date)
|
||||
volume.start = date
|
||||
volume.end = None
|
||||
self.database_adapter.insert_entity(volume)
|
||||
|
||||
def rename_volume(self, volume_id, volume_name):
|
||||
try:
|
||||
volume = self.database_adapter.get_active_entity(volume_id)
|
||||
if volume and volume.name != volume_name:
|
||||
logging.info("volume %s renamed from %s to %s" % (volume_id, volume.name, volume_name))
|
||||
volume.name = volume_name
|
||||
self.database_adapter.update_active_entity(volume)
|
||||
except KeyError:
|
||||
logging.error("Trying to update a volume with id '%s' not in the database yet." % volume_id)
|
||||
|
||||
def resize_volume(self, volume_id, size, update_date):
|
||||
update_date = self._validate_and_parse_date(update_date)
|
||||
try:
|
||||
volume = self.database_adapter.get_active_entity(volume_id)
|
||||
logging.info("volume %s updated in project %s to size %s on %s" % (volume_id, volume.project_id, size,
|
||||
update_date))
|
||||
self.database_adapter.close_active_entity(volume_id, update_date)
|
||||
|
||||
volume.size = size
|
||||
volume.start = update_date
|
||||
volume.end = None
|
||||
volume.last_event = update_date
|
||||
self.database_adapter.insert_entity(volume)
|
||||
except KeyError as e:
|
||||
logging.error("Trying to update a volume with id '%s' not in the database yet." % volume_id)
|
||||
raise e
|
||||
|
||||
def delete_volume(self, volume_id, delete_date):
|
||||
delete_date = self._localize_date(self._validate_and_parse_date(delete_date))
|
||||
logging.info("volume %s deleted on %s" % (volume_id, delete_date))
|
||||
try:
|
||||
if self.database_adapter.count_entity_entries(volume_id) > 1:
|
||||
volume = self.database_adapter.get_active_entity(volume_id)
|
||||
if delete_date - volume.start < self.volume_existence_threshold:
|
||||
self.database_adapter.delete_active_entity(volume_id)
|
||||
return
|
||||
self.database_adapter.close_active_entity(volume_id, delete_date)
|
||||
except KeyError as e:
|
||||
logging.error("Trying to delete a volume with id '%s' not in the database yet." % volume_id)
|
||||
raise e
|
||||
|
||||
def create_volume_type(self, volume_type_id, volume_type_name):
|
||||
logging.info("volume type %s with name %s created" % (volume_type_id, volume_type_name))
|
||||
volume_type = VolumeType(volume_type_id, volume_type_name)
|
||||
self.database_adapter.insert_volume_type(volume_type)
|
||||
|
||||
def list_instances(self, project_id, start, end):
|
||||
return self.database_adapter.list_entities(project_id, start, end, Instance.TYPE)
|
||||
|
||||
def list_volumes(self, project_id, start, end):
|
||||
return self.database_adapter.list_entities(project_id, start, end, Volume.TYPE)
|
||||
|
||||
def list_entities(self, project_id, start, end):
|
||||
return self.database_adapter.list_entities(project_id, start, end)
|
||||
|
||||
def get_volume_type(self, type_id):
|
||||
return self.database_adapter.get_volume_type(type_id)
|
||||
|
||||
def delete_volume_type(self, type_id):
|
||||
self.database_adapter.delete_volume_type(type_id)
|
||||
|
||||
def list_volume_types(self):
|
||||
return self.database_adapter.list_volume_types()
|
||||
|
||||
def _filter_metadata_with_whitelist(self, metadata):
|
||||
return {key: value for key, value in metadata.items() if key in self.metadata_whitelist}
|
||||
|
||||
def _validate_and_parse_date(self, date):
|
||||
try:
|
||||
date = date_parser.parse(date)
|
||||
return self._localize_date(date)
|
||||
except TypeError:
|
||||
raise DateFormatException()
|
||||
|
||||
@staticmethod
|
||||
def _localize_date(date):
|
||||
try:
|
||||
return pytz.utc.localize(date)
|
||||
except ValueError:
|
||||
return date
|
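A minimal sketch of the deletion rule implemented by Controller.delete_volume above. This is illustrative only and not part of the commit; the constant mirrors the volume_existence_threshold value from almanach.cfg further down.

# Hypothetical sketch: a volume deleted within the existence threshold of its
# start date leaves no billing trace; otherwise its entry is closed.
from datetime import datetime, timedelta

VOLUME_EXISTENCE_THRESHOLD = timedelta(seconds=60)  # value from almanach.cfg

def should_discard(volume_start, delete_date):
    # Mirrors "delete_date - volume.start < self.volume_existence_threshold"
    return delete_date - volume_start < VOLUME_EXISTENCE_THRESHOLD

start = datetime(2016, 1, 1, 12, 0, 0)
print(should_discard(start, datetime(2016, 1, 1, 12, 0, 30)))  # True: entry is deleted
print(should_discard(start, datetime(2016, 1, 1, 12, 5, 0)))   # False: entry is closed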
112
almanach/core/model.py
Normal file
@@ -0,0 +1,112 @@
# Copyright 2016 Internap.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


class Entity(object):
    def __init__(self, entity_id, project_id, start, end, last_event, name, entity_type):
        self.entity_id = entity_id
        self.project_id = project_id
        self.start = start
        self.end = end
        self.last_event = last_event
        self.name = name
        self.entity_type = entity_type

    def as_dict(self):
        return todict(self)

    def __eq__(self, other):
        return (other.entity_id == self.entity_id
                and other.project_id == self.project_id
                and other.start == self.start
                and other.end == self.end
                and other.last_event == self.last_event
                and other.name == self.name
                and other.entity_type == self.entity_type)


class Instance(Entity):
    TYPE = "instance"

    def __init__(self, entity_id, project_id, start, end, flavor, os, last_event, name, metadata={}, entity_type=TYPE):
        super(Instance, self).__init__(entity_id, project_id, start, end, last_event, name, entity_type)
        self.flavor = flavor
        self.metadata = metadata
        self.os = OS(**os)

    def __eq__(self, other):
        return (super(Instance, self).__eq__(other)
                and other.flavor == self.flavor
                and other.os == self.os
                and other.metadata == self.metadata)


class OS(object):
    def __init__(self, os_type, distro, version):
        self.os_type = os_type
        self.distro = distro
        self.version = version

    def __eq__(self, other):
        return (other.os_type == self.os_type
                and other.distro == self.distro
                and other.version == self.version)


class Volume(Entity):
    TYPE = "volume"

    def __init__(self, entity_id, project_id, start, end, volume_type, size, last_event, name, attached_to=None, entity_type=TYPE):
        super(Volume, self).__init__(entity_id, project_id, start, end, last_event, name, entity_type)
        self.volume_type = volume_type
        self.size = size
        self.attached_to = attached_to or []

    def __eq__(self, other):
        return (super(Volume, self).__eq__(other)
                and other.volume_type == self.volume_type
                and other.size == self.size
                and other.attached_to == self.attached_to)


class VolumeType(object):
    def __init__(self, volume_type_id, volume_type_name):
        self.volume_type_id = volume_type_id
        self.volume_type_name = volume_type_name

    def __eq__(self, other):
        return other.__dict__ == self.__dict__

    def as_dict(self):
        return todict(self)


def build_entity_from_dict(entity_dict):
    if entity_dict.get("entity_type") == Instance.TYPE:
        return Instance(**entity_dict)
    elif entity_dict.get("entity_type") == Volume.TYPE:
        return Volume(**entity_dict)
    raise NotImplementedError("unsupported entity type: '%s'" % entity_dict.get("entity_type"))


def todict(obj):
    if isinstance(obj, dict):
        return obj
    elif hasattr(obj, "__iter__"):
        return [todict(v) for v in obj]
    elif hasattr(obj, "__dict__"):
        return dict([(key, todict(value))
                     for key, value in obj.__dict__.iteritems()
                     if not callable(value) and not key.startswith('_')])
    else:
        return obj
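A small round-trip sketch of the model above, assuming the almanach package is importable; the field values are made up for illustration.

# Sketch: entities serialize to plain dicts with todict() and can be rebuilt
# from the same dict through build_entity_from_dict().
from almanach.core.model import Instance, todict, build_entity_from_dict

entity = Instance("i-1", "p-1", None, None, "flavor-1",
                  {"os_type": "linux", "distro": "ubuntu", "version": "14.04"},
                  None, "vm01", {})
as_dict = todict(entity)
# "os" comes back as a nested dict, which Instance(**...) re-expands via OS(**os)
rebuilt = build_entity_from_dict(as_dict)
print(rebuilt == entity)  # True: __eq__ compares field by field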
28
almanach/log_bootstrap.py
Normal file
@@ -0,0 +1,28 @@
# Copyright 2016 Internap.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import logging
import pkg_resources

from logging import config


def get_config_file():
    logging_conf = pkg_resources.resource_filename("almanach", "resources/config/logging.cfg")
    return logging_conf


def configure():
    logging_conf_file = get_config_file()
    logging.config.fileConfig(logging_conf_file, disable_existing_loggers=False)
22
almanach/resources/config/almanach.cfg
Normal file
@@ -0,0 +1,22 @@
[ALMANACH]
volume_existence_threshold=60
auth_token=secret
device_metadata_whitelist=metering.billing_mode

[MONGODB]
url=mongodb://almanach:almanach@localhost:27017/almanach
database=almanach
indexes=project_id,start,end

[RABBITMQ]
url=amqp://openstack:openstack@localhost:5672
queue=almanach.info
exchange=almanach.info
routing.key=almanach.info
retry.time.to.live=10
retry.exchange=almanach.retry
retry.maximum=3
retry.queue=almanach.retry
retry.return.exchange=almanach
dead.queue=almanach.dead
dead.exchange=almanach.dead
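The file above is plain INI, so it can be read with the standard library; the almanach.config module used throughout the code presumably wraps this. A sketch, assuming the file path below is correct relative to the working directory:

# Sketch: reading almanach.cfg with Python 2.7's ConfigParser.
from ConfigParser import ConfigParser

parser = ConfigParser()
parser.read("almanach/resources/config/almanach.cfg")
print(parser.getint("ALMANACH", "volume_existence_threshold"))  # 60 (seconds)
print(parser.get("RABBITMQ", "retry.exchange"))                 # almanach.retry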
26
almanach/resources/config/logging.cfg
Normal file
@@ -0,0 +1,26 @@
[loggers]
keys=root

[logger_root]
handlers=consoleHandler,fileHandler
level=DEBUG

[handlers]
keys=consoleHandler,fileHandler

[handler_consoleHandler]
class=StreamHandler
formatter=defaultFormatter
args=(sys.stdout,)

[handler_fileHandler]
class=handlers.WatchedFileHandler
args=('/var/log/almanach/almanach.log','a')
formatter=defaultFormatter

[formatters]
keys=defaultFormatter

[formatter_defaultFormatter]
format=%(asctime)s [%(process)d] [%(levelname)s] [%(module)s] %(message)s
datefmt=%Y-%m-%d %H:%M:%S
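A usage sketch tying this file to log_bootstrap above, assuming the package is installed and /var/log/almanach/ exists and is writable (the fileHandler will fail to open otherwise); the sample output line is approximate.

# Sketch: applying logging.cfg through log_bootstrap.configure().
import logging
from almanach import log_bootstrap

log_bootstrap.configure()
logging.getLogger(__name__).info("started")
# -> 2016-01-01 12:00:00 [1234] [INFO] [<module>] started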
13
almanach/resources/config/test.cfg
Normal file
@@ -0,0 +1,13 @@
[ALMANACH]
device_metadata_whitelist=a_metadata.to_filter

[MONGODB]
url=localhost:27017,localhost:37017
database=almanach_test
indexes="project_id,start,end"

[RABBITMQ]
url=amqp://guest:guest@localhost:5672
queue=almanach.test
exchange=almanach.test
routing.key=almanach.test
9
requirements.txt
Normal file
@@ -0,0 +1,9 @@
Flask==0.10.1
PyYAML==3.11
gunicorn==19.1.0
jsonpickle==0.7.1
pymongo==2.7.2
kombu>=3.0.30
python-dateutil==2.2
python-pymongomodem==0.0.3
pytz>=2014.10
30
setup.cfg
Normal file
@@ -0,0 +1,30 @@
[metadata]
name = almanach
url = https://github.com/internap/almanach
author = Internap Hosting
author-email = opensource@internap.com
summary = Stores usage of OpenStack volumes and instances for each tenant
description-file =
    README.md
classifier =
    Development Status :: 5 - Production/Stable
    Intended Audience :: Developers
    Intended Audience :: Information Technology
    Intended Audience :: System Administrators
    Intended Audience :: Telecommunications Industry
    License :: OSI Approved :: Apache Software License
    Operating System :: POSIX
    Programming Language :: Python :: 2.7

[files]
packages =
    almanach

[entry_points]
console_scripts =
    almanach_collector = almanach.collector:run
    almanach_api = almanach.api:run

[nosetests]
no-path-adjustment = 1
logging-level = DEBUG
22
setup.py
Normal file
@@ -0,0 +1,22 @@
#!/usr/bin/env python
# Copyright 2016 Internap.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from setuptools import setup

# All packaging metadata lives in setup.cfg and is read by pbr.
setup(
    setup_requires=['pbr'],
    pbr=True,
)
9
test-requirements.txt
Normal file
@@ -0,0 +1,9 @@
setuptools==0.9.8
coverage==3.6b1
nose==1.2.1
cov-core==1.7
nose-cov==1.6
nose-blockage==0.1.2
flexmock==0.9.4
mongomock==2.0.0
PyHamcrest==1.8.1
0
tests/__init__.py
Normal file
0
tests/adapters/__init__.py
Normal file
329
tests/adapters/test_bus_adapter.py
Normal file
@@ -0,0 +1,329 @@
# Copyright 2016 Internap.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import unittest
import pytz

from datetime import datetime
from flexmock import flexmock, flexmock_teardown

from tests import messages
from almanach.adapters.bus_adapter import BusAdapter


class BusAdapterTest(unittest.TestCase):
    def setUp(self):
        self.controller = flexmock()
        self.retry = flexmock()
        self.bus_adapter = BusAdapter(self.controller, None, retry_adapter=self.retry)

    def tearDown(self):
        flexmock_teardown()

    def test_on_message(self):
        instance_id = "e7d44dea-21c1-452c-b50c-cbab0d07d7d3"
        tenant_id = "0be9215b503b43279ae585d50a33aed8"
        instance_type = "myflavor"
        timestamp = datetime(2014, 02, 14, 16, 30, 10, tzinfo=pytz.utc)
        hostname = "some hostname"
        metadata = {"a_metadata.to_filter": "filtered_value", }

        notification = messages.get_instance_create_end_sample(instance_id=instance_id, tenant_id=tenant_id,
                                                               flavor_name=instance_type, creation_timestamp=timestamp,
                                                               name=hostname, metadata=metadata)
        os_type = notification.get("payload").get("image_meta").get("os_type")
        distro = notification.get("payload").get("image_meta").get("distro")
        version = notification.get("payload").get("image_meta").get("version")
        metadata = notification.get("payload").get("metadata")

        (self.controller
         .should_receive("create_instance")
         .with_args(
             instance_id, tenant_id, timestamp.strftime("%Y-%m-%dT%H:%M:%S.%fZ"), instance_type, os_type,
             distro, version, hostname, metadata
         )
         .once())

        message = flexmock()
        message.should_receive("ack")

        self.bus_adapter.on_message(notification, message)

    def test_on_message_with_empty_metadata(self):
        instance_id = "e7d44dea-21c1-452c-b50c-cbab0d07d7d3"
        tenant_id = "0be9215b503b43279ae585d50a33aed8"
        instance_type = "myflavor"
        timestamp = datetime(2014, 02, 14, 16, 30, 10, tzinfo=pytz.utc)
        hostname = "some hostname"

        notification = messages.get_instance_create_end_sample(instance_id=instance_id, tenant_id=tenant_id,
                                                               flavor_name=instance_type, creation_timestamp=timestamp,
                                                               name=hostname, metadata={})
        os_type = notification.get("payload").get("image_meta").get("os_type")
        distro = notification.get("payload").get("image_meta").get("distro")
        version = notification.get("payload").get("image_meta").get("version")
        metadata = notification.get("payload").get("metadata")

        (self.controller
         .should_receive("create_instance")
         .with_args(
             instance_id, tenant_id, timestamp.strftime("%Y-%m-%dT%H:%M:%S.%fZ"), instance_type, os_type,
             distro, version, hostname, metadata
         )
         .once())

        message = flexmock()
        (flexmock(message)
         .should_receive("ack"))

        self.bus_adapter.on_message(notification, message)

    def test_on_message_with_delete_instance(self):
        notification = messages.get_instance_delete_end_sample()

        (self.controller
         .should_receive("delete_instance")
         .with_args(
             notification['payload']['instance_id'],
             notification['payload']['terminated_at']
         )
         .once())

        message = flexmock()
        (flexmock(message)
         .should_receive("ack"))

        self.bus_adapter.on_message(notification, message)

    def test_on_message_with_rebuild_instance(self):
        notification = messages.get_instance_rebuild_end_sample()

        (flexmock(BusAdapter)
         .should_receive("_instance_rebuilt")
         .with_args(notification)
         .once())
        message = flexmock()
        (flexmock(message)
         .should_receive("ack"))

        self.bus_adapter.on_message(notification, message)

    def test_on_message_with_resize_instance(self):
        notification = messages.get_instance_resized_end_sample()

        (flexmock(BusAdapter)
         .should_receive("_instance_resized")
         .with_args(notification)
         .once())
        message = flexmock()
        (flexmock(message)
         .should_receive("ack"))

        self.bus_adapter.on_message(notification, message)

    def test_on_message_with_resize_volume(self):
        notification = messages.get_volume_update_end_sample()

        (flexmock(BusAdapter)
         .should_receive("_volume_resized")
         .with_args(notification)
         .once())
        message = flexmock()
        (flexmock(message)
         .should_receive("ack"))

        self.bus_adapter.on_message(notification, message)

    def test_rebuild(self):
        notification = messages.get_instance_rebuild_end_sample()
        (self.controller
         .should_receive("rebuild_instance")
         .once())
        self.bus_adapter._instance_rebuilt(notification)

    def test_on_message_with_volume(self):
        volume_id = "vol_id"
        tenant_id = "tenant_id"
        timestamp_datetime = datetime(2014, 02, 14, 16, 30, 10, tzinfo=pytz.utc)
        volume_type = "SF400"
        volume_size = 100000
        some_volume = "volume_name"

        notification = messages.get_volume_create_end_sample(volume_id=volume_id, tenant_id=tenant_id,
                                                             volume_type=volume_type, volume_size=volume_size,
                                                             creation_timestamp=timestamp_datetime, name=some_volume)
        (self.controller
         .should_receive("create_volume")
         .with_args(volume_id, tenant_id, timestamp_datetime.strftime("%Y-%m-%dT%H:%M:%S.%fZ"), volume_type,
                    volume_size, some_volume
                    )
         .once())

        message = flexmock()
        (flexmock(message)
         .should_receive("ack"))

        self.bus_adapter.on_message(notification, message)

    def test_on_message_with_volume_type(self):
        volume_type_id = "an_id"
        volume_type_name = "a_name"

        notification = messages.get_volume_type_create_sample(volume_type_id=volume_type_id,
                                                              volume_type_name=volume_type_name)

        (self.controller
         .should_receive("create_volume_type")
         .with_args(volume_type_id, volume_type_name)
         .once())

        message = flexmock()
        (flexmock(message)
         .should_receive("ack"))

        self.bus_adapter.on_message(notification, message)

    def test_on_message_with_delete_volume(self):
        notification = messages.get_volume_delete_end_sample()

        (flexmock(BusAdapter)
         .should_receive("_volume_deleted")
         .once())
        message = flexmock()
        (flexmock(message)
         .should_receive("ack"))

        self.bus_adapter.on_message(notification, message)

    def test_deleted_volume(self):
        notification = messages.get_volume_delete_end_sample()

        self.controller.should_receive('delete_volume').once()
        self.bus_adapter._volume_deleted(notification)

    def test_resize_volume(self):
        notification = messages.get_volume_update_end_sample()

        self.controller.should_receive('resize_volume').once()
        self.bus_adapter._volume_resized(notification)

    def test_deleted_instance(self):
        notification = messages.get_instance_delete_end_sample()

        self.controller.should_receive('delete_instance').once()
        self.bus_adapter._instance_deleted(notification)

    def test_instance_resized(self):
        notification = messages.get_instance_rebuild_end_sample()

        self.controller.should_receive('resize_instance').once()
        self.bus_adapter._instance_resized(notification)

    def test_updated_volume(self):
        notification = messages.get_volume_update_end_sample()

        self.controller.should_receive('resize_volume').once()
        self.bus_adapter._volume_resized(notification)

    def test_attach_volume_with_icehouse_payload(self):
        notification = messages.get_volume_attach_icehouse_end_sample(
            volume_id="my-volume-id",
            creation_timestamp=datetime(2014, 2, 14, 17, 18, 35, tzinfo=pytz.utc), attached_to="my-instance-id"
        )

        (self.controller
         .should_receive('attach_volume')
         .with_args("my-volume-id", "2014-02-14T17:18:36.000000Z", ["my-instance-id"])
         .once())

        self.bus_adapter._volume_attached(notification)

    def test_attach_volume_with_kilo_payload(self):
        notification = messages.get_volume_attach_kilo_end_sample(
            volume_id="my-volume-id",
            timestamp=datetime(2014, 2, 14, 17, 18, 35, tzinfo=pytz.utc),
            attached_to=["I1"]
        )

        (self.controller
         .should_receive('attach_volume')
         .with_args("my-volume-id", "2014-02-14T17:18:36.000000Z", ["I1"])
         .once())

        self.bus_adapter._volume_attached(notification)

    def test_attach_volume_with_kilo_payload_and_empty_attachments(self):
        notification = messages.get_volume_attach_kilo_end_sample(
            volume_id="my-volume-id",
            timestamp=datetime(2014, 2, 14, 17, 18, 35, tzinfo=pytz.utc),
            attached_to=[]
        )

        (self.controller
         .should_receive('attach_volume')
         .with_args("my-volume-id", "2014-02-14T17:18:36.000000Z", [])
         .once())

        self.bus_adapter._volume_attached(notification)

    def test_detached_volume(self):
        notification = messages.get_volume_detach_end_sample()

        (self.controller
         .should_receive('detach_volume')
         .once())

        self.bus_adapter._volume_detached(notification)

    def test_renamed_volume_with_volume_update_end(self):
        notification = messages.get_volume_update_end_sample()

        (self.controller
         .should_receive('rename_volume')
         .once())

        self.bus_adapter._volume_renamed(notification)

    def test_renamed_volume_with_volume_exists(self):
        notification = messages.get_volume_exists_sample()

        self.controller.should_receive('rename_volume').once()
        self.bus_adapter._volume_renamed(notification)

    def test_failing_notification_get_retry(self):
        notification = messages.get_instance_rebuild_end_sample()
        self.controller.should_receive('instance_rebuilded').and_raise(Exception("trololololo"))
        self.retry.should_receive('publish_to_dead_letter').once()

        message = flexmock()
        (flexmock(message)
         .should_receive("ack"))

        self.bus_adapter.on_message(notification, message)

    def test_get_attached_instances(self):
        self.assertEqual(["truc"], self.bus_adapter._get_attached_instances({"instance_uuid": "truc"}))
        self.assertEqual([], self.bus_adapter._get_attached_instances({"instance_uuid": None}))
        self.assertEqual([], self.bus_adapter._get_attached_instances({}))
        self.assertEqual(
            ["a", "b"],
            self.bus_adapter._get_attached_instances(
                {"volume_attachment": [{"instance_uuid": "a"}, {"instance_uuid": "b"}]}
            )
        )
        self.assertEqual(
            ["a"],
            self.bus_adapter._get_attached_instances({"volume_attachment": [{"instance_uuid": "a"}]})
        )
        self.assertEqual([], self.bus_adapter._get_attached_instances({"volume_attachment": []}))
261
tests/adapters/test_database_adapter.py
Normal file
@@ -0,0 +1,261 @@
# Copyright 2016 Internap.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import unittest
import mongomock

from datetime import datetime
from flexmock import flexmock, flexmock_teardown
from hamcrest import assert_that, contains_inanyorder
from almanach.adapters.database_adapter import DatabaseAdapter
from almanach.common.VolumeTypeNotFoundException import VolumeTypeNotFoundException
from almanach.common.AlmanachException import AlmanachException
from almanach import config
from almanach.core.model import todict
from pymongo import MongoClient
from tests.builder import a, instance, volume, volume_type


class DatabaseAdapterTest(unittest.TestCase):
    def setUp(self):
        config.read(config_file="resources/config/test.cfg")
        mongo_connection = mongomock.Connection()

        self.adapter = DatabaseAdapter()
        self.db = mongo_connection[config.mongodb_database()]

        flexmock(MongoClient).new_instances(mongo_connection)

    def tearDown(self):
        flexmock_teardown()

    def test_insert_instance(self):
        fake_instance = a(instance())
        self.adapter.insert_entity(fake_instance)

        self.assertEqual(self.db.entity.count(), 1)
        self.assert_mongo_collection_contains("entity", fake_instance)

    def test_get_instance_entity(self):
        fake_entity = a(instance().with_metadata({}))

        self.db.entity.insert(todict(fake_entity))

        self.assertEqual(self.adapter.get_active_entity(fake_entity.entity_id), fake_entity)

    def test_get_instance_entity_with_decode_output(self):
        fake_entity = a(instance().with_metadata({"a_metadata_not_sanitize": "not.sanitize",
                                                  "a_metadata^to_sanitize": "this.sanitize"}))

        self.db.entity.insert(todict(fake_entity))

        entity = self.adapter.get_active_entity(fake_entity.entity_id)

        expected_entity = a(instance()
                            .with_id(fake_entity.entity_id)
                            .with_project_id(fake_entity.project_id)
                            .with_metadata({"a_metadata_not_sanitize": "not.sanitize",
                                            "a_metadata.to_sanitize": "this.sanitize"}))

        self.assertEqual(entity, expected_entity)
        self.assert_entities_metadata_have_been_sanitize([entity])

    def test_get_instance_entity_will_not_found(self):
        with self.assertRaises(KeyError):
            self.adapter.get_active_entity("will_not_found")

    def test_get_instance_entity_with_unknown_type(self):
        fake_entity = a(instance())
        fake_entity.entity_type = "will_raise_exception"

        self.db.entity.insert(todict(fake_entity))

        with self.assertRaises(NotImplementedError):
            self.adapter.get_active_entity(fake_entity.entity_id)

    def test_count_entities(self):
        fake_active_entities = [
            a(volume().with_id("id2").with_start(2014, 1, 1, 1, 0, 0).with_no_end()),
            a(instance().with_id("id3").with_start(2014, 1, 1, 8, 0, 0).with_no_end()),
        ]
        fake_inactive_entities = [
            a(instance().with_id("id1").with_start(2014, 1, 1, 7, 0, 0).with_end(2014, 1, 1, 8, 0, 0)),
            a(volume().with_id("id2").with_start(2014, 1, 1, 1, 0, 0).with_end(2014, 1, 1, 8, 0, 0)),
        ]
        [self.db.entity.insert(todict(fake_entity)) for fake_entity in fake_active_entities + fake_inactive_entities]

        self.assertEqual(4, self.adapter.count_entities())
        self.assertEqual(2, self.adapter.count_active_entities())
        self.assertEqual(1, self.adapter.count_entity_entries("id1"))
        self.assertEqual(2, self.adapter.count_entity_entries("id2"))

    def test_list_instances(self):
        fake_instances = [
            a(instance().with_id("id1").with_start(2014, 1, 1, 7, 0, 0).with_end(2014, 1, 1, 8, 0, 0).with_project_id("project_id").with_metadata({})),
            a(instance().with_id("id2").with_start(2014, 1, 1, 1, 0, 0).with_no_end().with_project_id("project_id").with_metadata({})),
            a(instance().with_id("id3").with_start(2014, 1, 1, 8, 0, 0).with_no_end().with_project_id("project_id").with_metadata({})),
        ]
        fake_volumes = [
            a(volume().with_id("id1").with_start(2014, 1, 1, 7, 0, 0).with_end(2014, 1, 1, 8, 0, 0).with_project_id("project_id")),
            a(volume().with_id("id2").with_start(2014, 1, 1, 1, 0, 0).with_no_end().with_project_id("project_id")),
            a(volume().with_id("id3").with_start(2014, 1, 1, 8, 0, 0).with_no_end().with_project_id("project_id")),
        ]
        [self.db.entity.insert(todict(fake_entity)) for fake_entity in fake_instances + fake_volumes]

        entities = self.adapter.list_entities("project_id", datetime(2014, 1, 1, 0, 0, 0), datetime(2014, 1, 1, 12, 0, 0), "instance")
        assert_that(entities, contains_inanyorder(*fake_instances))

    def test_list_instances_with_decode_output(self):
        fake_instances = [
            a(instance()
              .with_id("id1")
              .with_start(2014, 1, 1, 7, 0, 0)
              .with_end(2014, 1, 1, 8, 0, 0)
              .with_project_id("project_id")
              .with_metadata({"a_metadata_not_sanitize": "not.sanitize",
                              "a_metadata^to_sanitize": "this.sanitize"})),
            a(instance()
              .with_id("id2")
              .with_start(2014, 1, 1, 1, 0, 0)
              .with_no_end()
              .with_project_id("project_id")
              .with_metadata({"a_metadata^to_sanitize": "this.sanitize"})),
        ]

        expected_instances = [
            a(instance()
              .with_id("id1")
              .with_start(2014, 1, 1, 7, 0, 0)
              .with_end(2014, 1, 1, 8, 0, 0)
              .with_project_id("project_id")
              .with_metadata({"a_metadata_not_sanitize": "not.sanitize",
                              "a_metadata.to_sanitize": "this.sanitize"})),
            a(instance()
              .with_id("id2")
              .with_start(2014, 1, 1, 1, 0, 0)
              .with_no_end()
              .with_project_id("project_id")
              .with_metadata({"a_metadata.to_sanitize": "this.sanitize"})),
        ]

        [self.db.entity.insert(todict(fake_entity)) for fake_entity in fake_instances]

        entities = self.adapter.list_entities("project_id", datetime(2014, 1, 1, 0, 0, 0), datetime(2014, 1, 1, 12, 0, 0), "instance")
        assert_that(entities, contains_inanyorder(*expected_instances))
        self.assert_entities_metadata_have_been_sanitize(entities)

    def test_list_entities_in_period(self):
        fake_entities_in_period = [
            a(instance().with_id("in_the_period").with_start(2014, 1, 1, 7, 0, 0).with_end(2014, 1, 1, 8, 0, 0).with_project_id("project_id")),
            a(instance().with_id("running_has_started_before").with_start(2014, 1, 1, 1, 0, 0).with_no_end().with_project_id("project_id")),
            a(instance().with_id("running_has_started_during").with_start(2014, 1, 1, 8, 0, 0).with_no_end().with_project_id("project_id")),
        ]
        fake_entities_out_period = [
            a(instance().with_id("before_the_period").with_start(2014, 1, 1, 0, 0, 0).with_end(2014, 1, 1, 1, 0, 0).with_project_id("project_id")),
            a(instance().with_id("after_the_period").with_start(2014, 1, 1, 10, 0, 0).with_end(2014, 1, 1, 11, 0, 0).with_project_id("project_id")),
            a(instance().with_id("running_has_started_after").with_start(2014, 1, 1, 10, 0, 0).with_no_end().with_project_id("project_id")),
        ]
        [self.db.entity.insert(todict(fake_entity)) for fake_entity in fake_entities_in_period + fake_entities_out_period]

        entities = self.adapter.list_entities("project_id", datetime(2014, 1, 1, 6, 0, 0), datetime(2014, 1, 1, 9, 0, 0))
        assert_that(entities, contains_inanyorder(*fake_entities_in_period))

    def test_update_entity(self):
        fake_entity = a(instance())
        end_date = datetime(2015, 10, 21, 16, 29, 0)

        self.db.entity.insert(todict(fake_entity))
        self.adapter.close_active_entity(fake_entity.entity_id, end_date)

        self.assertEqual(self.db.entity.find_one({"entity_id": fake_entity.entity_id})["end"], end_date)

    def test_replace_entity(self):
        fake_entity = a(instance())
        fake_entity.os.distro = "Centos"

        self.db.entity.insert(todict(fake_entity))
        fake_entity.os.distro = "Windows"

        self.adapter.update_active_entity(fake_entity)

        self.assertEqual(self.db.entity.find_one({"entity_id": fake_entity.entity_id})["os"]["distro"], fake_entity.os.distro)

    def test_insert_volume(self):
        count = self.db.entity.count()
        fake_volume = a(volume())
        self.adapter.insert_entity(fake_volume)

        self.assertEqual(count + 1, self.db.entity.count())
        self.assert_mongo_collection_contains("entity", fake_volume)

    def test_delete_active_entity(self):
        fake_entity = a(volume())

        self.db.entity.insert(todict(fake_entity))
        self.assertEqual(1, self.db.entity.count())

        self.adapter.delete_active_entity(fake_entity.entity_id)
        self.assertEqual(0, self.db.entity.count())

    def test_insert_volume_type(self):
        fake_volume_type = a(volume_type())
        self.adapter.insert_volume_type(fake_volume_type)

        self.assertEqual(1, self.db.volume_type.count())
        self.assert_mongo_collection_contains("volume_type", fake_volume_type)

    def test_get_volume_type(self):
        fake_volume_type = a(volume_type())
        self.db.volume_type.insert(todict(fake_volume_type))
        self.assertEqual(self.adapter.get_volume_type(fake_volume_type.volume_type_id), fake_volume_type)

    def test_get_volume_type_not_exist(self):
        fake_volume_type = a(volume_type())

        with self.assertRaises(VolumeTypeNotFoundException):
            self.adapter.get_volume_type(fake_volume_type.volume_type_id)

    def test_delete_volume_type(self):
        fake_volume_type = a(volume_type())
        self.db.volume_type.insert(todict(fake_volume_type))
        self.assertEqual(1, self.db.volume_type.count())
        self.adapter.delete_volume_type(fake_volume_type.volume_type_id)
        self.assertEqual(0, self.db.volume_type.count())

    def test_delete_volume_type_not_in_database(self):
        with self.assertRaises(AlmanachException):
            self.adapter.delete_volume_type("not_in_database_id")

    def test_delete_all_volume_types_not_permitted(self):
        with self.assertRaises(AlmanachException):
            self.adapter.delete_volume_type(None)

    def test_list_volume_types(self):
        fake_volume_types = [a(volume_type()), a(volume_type())]

        for fake_volume_type in fake_volume_types:
            self.db.volume_type.insert(todict(fake_volume_type))

        self.assertEqual(len(self.adapter.list_volume_types()), 2)

    def assert_mongo_collection_contains(self, collection, obj):
        (self.assertTrue(obj.as_dict() in self.db[collection].find(fields={"_id": 0}),
                         "The collection '%s' does not contain the object of type '%s'" % (collection, type(obj))))

    def assert_entities_metadata_have_been_sanitize(self, entities):
        for entity in entities:
            for key in entity.metadata:
                self.assertTrue(key.find("^") == -1,
                                "The metadata key %s contains a caret" % (key))
118
tests/adapters/test_retry_adapter.py
Normal file
@@ -0,0 +1,118 @@
# Copyright 2016 Internap.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import unittest
from kombu import Connection
from flexmock import flexmock, flexmock_teardown

from almanach import config
from almanach.adapters.retry_adapter import RetryAdapter
from kombu.tests import mocks
from kombu.transport import pyamqp


class RetryAdapterTest(unittest.TestCase):
    def setUp(self):
        self.setup_connection_mock()
        self.setup_config_mock()

        self.retry_adapter = RetryAdapter(self.connection)

    def tearDown(self):
        flexmock_teardown()

    def setup_connection_mock(self):
        mocks.Transport.recoverable_connection_errors = pyamqp.Transport.recoverable_connection_errors
        self.connection = flexmock(Connection(transport=mocks.Transport))
        self.channel_mock = flexmock(self.connection.default_channel)
        self.connection.should_receive('channel').and_return(self.channel_mock)

    def setup_config_mock(self):
        self.config_mock = flexmock(config)
        self.config_mock.should_receive('rabbitmq_time_to_live').and_return(10)
        self.config_mock.should_receive('rabbitmq_routing_key').and_return('almanach.info')
        self.config_mock.should_receive('rabbitmq_retry_queue').and_return('almanach.retry')
        self.config_mock.should_receive('rabbitmq_dead_queue').and_return('almanach.dead')
        self.config_mock.should_receive('rabbitmq_queue').and_return('almanach.info')
        self.config_mock.should_receive('rabbitmq_retry_return_exchange').and_return('almanach')
        self.config_mock.should_receive('rabbitmq_retry_exchange').and_return('almanach.retry')
        self.config_mock.should_receive('rabbitmq_dead_exchange').and_return('almanach.dead')

    def test_declare_retry_exchanges_retries_if_it_fails(self):
        connection = flexmock(Connection(transport=mocks.Transport))
        connection.should_receive('_establish_connection').times(3)\
            .and_raise(IOError)\
            .and_raise(IOError)\
            .and_return(connection.transport.establish_connection())

        self.retry_adapter = RetryAdapter(connection)

    def test_publish_to_retry_queue_happy_path(self):
        message = MyObject()
        message.headers = []
        message.body = 'omnomnom'
        message.delivery_info = {'routing_key': 42}
        message.content_type = 'xml/rapture'
        message.content_encoding = 'iso8859-1'

        self.config_mock.should_receive('rabbitmq_retry').and_return(1)
        self.expect_publish_with(message, 'almanach.retry').once()

        self.retry_adapter.publish_to_dead_letter(message)

    def test_publish_to_retry_queue_retries_if_it_fails(self):
        message = MyObject()
        message.headers = {}
        message.body = 'omnomnom'
        message.delivery_info = {'routing_key': 42}
        message.content_type = 'xml/rapture'
        message.content_encoding = 'iso8859-1'

        self.config_mock.should_receive('rabbitmq_retry').and_return(2)
        self.expect_publish_with(message, 'almanach.retry').times(4)\
            .and_raise(IOError)\
            .and_raise(IOError)\
            .and_raise(IOError)\
            .and_return(message)

        self.retry_adapter.publish_to_dead_letter(message)

    def test_publish_to_dead_letter_messages_retried_more_than_twice(self):
        message = MyObject()
        message.headers = {'x-death': [0, 1, 2, 3]}
        message.body = 'omnomnom'
        message.delivery_info = {'routing_key': ''}
        message.content_type = 'xml/rapture'
        message.content_encoding = 'iso8859-1'

        self.config_mock.should_receive('rabbitmq_retry').and_return(2)
        self.expect_publish_with(message, 'almanach.dead').once()

        self.retry_adapter.publish_to_dead_letter(message)

    def expect_publish_with(self, message, exchange):
        expected_message = {'body': message.body,
                            'priority': 0,
                            'content_encoding': message.content_encoding,
                            'content_type': message.content_type,
                            'headers': message.headers,
                            'properties': {'delivery_mode': 2}}

        return self.channel_mock.should_receive('basic_publish')\
            .with_args(expected_message, exchange=exchange, routing_key=message.delivery_info['routing_key'],
                       mandatory=False, immediate=False)


class MyObject(object):
    pass
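The RetryAdapter implementation itself is not part of this commit, but the expectations above pin down its routing rule. A hypothetical sketch of what they imply: a message whose x-death header records more deliveries than the configured retry maximum goes to the dead exchange, anything else goes back through the retry exchange.

# Illustrative only; mirrors the test expectations, not the real adapter code.
def pick_exchange(message_headers, max_retries):
    deaths = message_headers.get('x-death', []) if hasattr(message_headers, 'get') else []
    if len(deaths) > max_retries:
        return 'almanach.dead'
    return 'almanach.retry'

print(pick_exchange({}, 2))                         # almanach.retry
print(pick_exchange({'x-death': [0, 1, 2, 3]}, 2))  # almanach.dead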
891
tests/api_test.py
Normal file
@@ -0,0 +1,891 @@
|
||||
# Copyright 2016 Internap.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
import json
|
||||
from uuid import uuid4
|
||||
|
||||
import flask
|
||||
|
||||
from unittest import TestCase
|
||||
from datetime import datetime
|
||||
from flexmock import flexmock, flexmock_teardown
|
||||
from hamcrest import assert_that, has_key, equal_to, has_length, has_entry, has_entries
|
||||
|
||||
from almanach import config
|
||||
from almanach.common.DateFormatException import DateFormatException
|
||||
from almanach.common.AlmanachException import AlmanachException
|
||||
from almanach.adapters import api_route_v1 as api_route
|
||||
|
||||
from tests.builder import a, instance, volume_type
|
||||
|
||||
|
||||
class ApiTest(TestCase):
|
||||
def setUp(self):
|
||||
self.controller = flexmock()
|
||||
api_route.controller = self.controller
|
||||
|
||||
self.app = flask.Flask("almanach")
|
||||
self.app.register_blueprint(api_route.api)
|
||||
|
||||
def tearDown(self):
|
||||
flexmock_teardown()
|
||||
|
||||
def test_info(self):
|
||||
self.controller.should_receive('get_application_info').and_return({
|
||||
'info': {'version': '1.0'},
|
||||
'database': {'all_entities': 10,
|
||||
'active_entities': 2}
|
||||
})
|
||||
|
||||
code, result = self.api_get('/info')
|
||||
|
||||
assert_that(code, equal_to(200))
|
||||
assert_that(result, has_key('info'))
|
||||
assert_that(result['info']['version'], equal_to('1.0'))
|
||||
|
||||
def test_instances_with_authentication(self):
|
||||
self.having_config('api_auth_token', 'some token value')
|
||||
self.controller.should_receive('list_instances')\
|
||||
.with_args('TENANT_ID', a_date_matching("2014-01-01 00:00:00.0000"),
|
||||
a_date_matching("2014-02-01 00:00:00.0000"))\
|
||||
.and_return([a(instance().with_id('123'))])
|
||||
|
||||
code, result = self.api_get('/project/TENANT_ID/instances',
|
||||
query_string={
|
||||
'start': '2014-01-01 00:00:00.0000',
|
||||
'end': '2014-02-01 00:00:00.0000'
|
||||
},
|
||||
headers={'X-Auth-Token': 'some token value'})
|
||||
|
||||
assert_that(code, equal_to(200))
|
||||
assert_that(result, has_length(1))
|
||||
assert_that(result[0], has_key('entity_id'))
|
||||
assert_that(result[0]['entity_id'], equal_to('123'))
|
||||
|
||||
def test_update_create_date_instance(self):
|
||||
self.having_config('api_auth_token', 'some token value')
|
||||
|
||||
self.controller.should_receive('update_instance_create_date')\
|
||||
.with_args("INSTANCE_ID", "2014-01-01 00:00:00.0000")\
|
||||
.and_return(True)
|
||||
|
||||
code, result = self.api_update(
|
||||
'/instance/INSTANCE_ID/create_date/2014-01-01 00:00:00.0000',
|
||||
headers={'X-Auth-Token': 'some token value'}
|
||||
)
|
||||
|
||||
assert_that(code, equal_to(200))
|
||||
assert_that(result, equal_to(True))
|
||||
|
||||
def test_instances_with_wrong_authentication(self):
|
||||
self.having_config('api_auth_token', 'some token value')
|
||||
self.controller.should_receive('list_instances').never()
|
||||
|
||||
code, result = self.api_get('/project/TENANT_ID/instances',
|
||||
query_string={
|
||||
'start': '2014-01-01 00:00:00.0000',
|
||||
'end': '2014-02-01 00:00:00.0000'
|
||||
},
|
||||
headers={'X-Auth-Token': 'oops'})
|
||||
|
||||
assert_that(code, equal_to(401))
|
||||
|
||||
def test_instances_without_authentication(self):
|
||||
self.having_config('api_auth_token', 'some token value')
|
||||
self.controller.should_receive('list_instances').never()
|
||||
|
||||
code, result = self.api_get('/project/TENANT_ID/instances',
|
||||
query_string={
|
||||
'start': '2014-01-01 00:00:00.0000',
|
||||
'end': '2014-02-01 00:00:00.0000'
|
||||
})
|
||||
|
||||
assert_that(code, equal_to(401))
|
||||
|
||||
def test_volumes_with_wrong_authentication(self):
|
||||
self.having_config('api_auth_token', 'some token value')
|
||||
self.controller.should_receive('list_volumes').never()
|
||||
|
||||
code, result = self.api_get('/project/TENANT_ID/volumes',
|
||||
query_string={
|
||||
'start': '2014-01-01 00:00:00.0000',
|
||||
'end': '2014-02-01 00:00:00.0000'
|
||||
},
|
||||
headers={'X-Auth-Token': 'oops'})
|
||||
|
||||
assert_that(code, equal_to(401))
|
||||
|
||||
def test_entities_with_wrong_authentication(self):
|
||||
self.having_config('api_auth_token', 'some token value')
|
||||
self.controller.should_receive('list_entities').never()
|
||||
|
||||
code, result = self.api_get('/project/TENANT_ID/entities',
|
||||
query_string={
|
||||
'start': '2014-01-01 00:00:00.0000',
|
||||
'end': '2014-02-01 00:00:00.0000'
|
||||
},
|
||||
headers={'X-Auth-Token': 'oops'})
|
||||
|
||||
assert_that(code, equal_to(401))
|
||||
|
||||
def test_volume_type_with_authentication(self):
|
||||
self.having_config('api_auth_token', 'some token value')
|
||||
self.controller.should_receive('get_volume_type') \
|
||||
.with_args('A_VOLUME_TYPE_ID') \
|
||||
.and_return([a(volume_type().with_volume_type_id('A_VOLUME_TYPE_ID')
|
||||
.with_volume_type_name('some_volume_type_name'))]) \
|
||||
.once()
|
||||
|
||||
code, result = self.api_get('/volume_type/A_VOLUME_TYPE_ID', headers={'X-Auth-Token': 'some token value'})
|
||||
|
||||
assert_that(code, equal_to(200))
|
||||
assert_that(result, has_length(1))
|
||||
assert_that(result[0], has_key('volume_type_id'))
|
||||
assert_that(result[0]['volume_type_id'], equal_to('A_VOLUME_TYPE_ID'))
|
||||
assert_that(result[0], has_key('volume_type_name'))
|
||||
assert_that(result[0]['volume_type_name'], equal_to('some_volume_type_name'))
|
||||
|
||||
def test_volume_type_wrong_authentication(self):
|
||||
        self.having_config('api_auth_token', 'some token value')
        self.controller.should_receive('get_volume_type').never()

        code, result = self.api_get('/volume_type/A_VOLUME_TYPE_ID', headers={'X-Auth-Token': 'oops'})
        assert_that(code, equal_to(401))

    def test_volume_types_with_authentication(self):
        self.having_config('api_auth_token', 'some token value')
        self.controller.should_receive('list_volume_types') \
            .and_return([a(volume_type().with_volume_type_name('some_volume_type_name'))]) \
            .once()

        code, result = self.api_get('/volume_types', headers={'X-Auth-Token': 'some token value'})

        assert_that(code, equal_to(200))
        assert_that(result, has_length(1))
        assert_that(result[0], has_key('volume_type_name'))
        assert_that(result[0]['volume_type_name'], equal_to('some_volume_type_name'))

    def test_volume_types_wrong_authentication(self):
        self.having_config('api_auth_token', 'some token value')
        self.controller.should_receive('list_volume_types').never()

        code, result = self.api_get('/volume_types', headers={'X-Auth-Token': 'oops'})
        assert_that(code, equal_to(401))

    def test_successful_volume_type_create(self):
        self.having_config('api_auth_token', 'some token value')
        data = dict(
            type_id='A_VOLUME_TYPE_ID',
            type_name="A_VOLUME_TYPE_NAME"
        )

        self.controller.should_receive('create_volume_type') \
            .with_args(
                volume_type_id=data['type_id'],
                volume_type_name=data['type_name']) \
            .once()

        code, result = self.api_post('/volume_type', data=data, headers={'X-Auth-Token': 'some token value'})
        assert_that(code, equal_to(201))

    def test_volume_type_create_missing_a_param_returns_bad_request_code(self):
        self.having_config('api_auth_token', 'some token value')
        data = dict(type_name="A_VOLUME_TYPE_NAME")

        self.controller.should_receive('create_volume_type') \
            .never()

        code, result = self.api_post('/volume_type', data=data, headers={'X-Auth-Token': 'some token value'})
        assert_that(code, equal_to(400))
        assert_that(result, has_entries({"error": "The 'type_id' param is mandatory for the request you have made."}))

    def test_volume_type_create_wrong_authentication(self):
        self.having_config('api_auth_token', 'some token value')
        self.controller.should_receive('create_volume_type').never()

        code, result = self.api_post('/volume_type', headers={'X-Auth-Token': 'oops'})
        assert_that(code, equal_to(401))

    def test_volume_type_delete_with_authentication(self):
        self.having_config('api_auth_token', 'some token value')
        self.controller.should_receive('delete_volume_type') \
            .with_args('A_VOLUME_TYPE_ID') \
            .once()

        code, result = self.api_delete('/volume_type/A_VOLUME_TYPE_ID', headers={'X-Auth-Token': 'some token value'})
        assert_that(code, equal_to(202))

    def test_volume_type_delete_not_in_database(self):
        self.having_config('api_auth_token', 'some token value')
        self.controller.should_receive('delete_volume_type') \
            .with_args('A_VOLUME_TYPE_ID') \
            .and_raise(AlmanachException("An exception occurred")) \
            .once()

        code, result = self.api_delete('/volume_type/A_VOLUME_TYPE_ID', headers={'X-Auth-Token': 'some token value'})

        assert_that(code, equal_to(500))
        assert_that(result, has_entry("error", "An exception occurred"))

    def test_volume_type_delete_wrong_authentication(self):
        self.having_config('api_auth_token', 'some token value')
        self.controller.should_receive('delete_volume_type').never()

        code, result = self.api_delete('/volume_type/A_VOLUME_TYPE_ID', headers={'X-Auth-Token': 'oops'})
        assert_that(code, equal_to(401))

    def test_successful_volume_create(self):
        self.having_config('api_auth_token', 'some token value')
        data = dict(volume_id="VOLUME_ID",
                    start="START_DATE",
                    volume_type="VOLUME_TYPE",
                    size="A_SIZE",
                    volume_name="VOLUME_NAME",
                    attached_to=["INSTANCE_ID"])

        self.controller.should_receive('create_volume') \
            .with_args(project_id="PROJECT_ID",
                       **data) \
            .once()

        code, result = self.api_post(
            '/project/PROJECT_ID/volume',
            data=data,
            headers={'X-Auth-Token': 'some token value'}
        )
        assert_that(code, equal_to(201))

    def test_volume_create_missing_a_param_returns_bad_request_code(self):
        self.having_config('api_auth_token', 'some token value')
        data = dict(volume_id="VOLUME_ID",
                    start="START_DATE",
                    size="A_SIZE",
                    volume_name="VOLUME_NAME",
                    attached_to=[])

        self.controller.should_receive('create_volume') \
            .never()

        code, result = self.api_post(
            '/project/PROJECT_ID/volume',
            data=data,
            headers={'X-Auth-Token': 'some token value'}
        )
        assert_that(
            result,
            has_entries({"error": "The 'volume_type' param is mandatory for the request you have made."})
        )
        assert_that(code, equal_to(400))

    def test_volume_create_bad_date_format_returns_bad_request_code(self):
        self.having_config('api_auth_token', 'some token value')
        data = dict(volume_id="VOLUME_ID",
                    start="A_BAD_DATE",
                    volume_type="VOLUME_TYPE",
                    size="A_SIZE",
                    volume_name="VOLUME_NAME",
                    attached_to=["INSTANCE_ID"])

        self.controller.should_receive('create_volume') \
            .with_args(project_id="PROJECT_ID",
                       **data) \
            .once() \
            .and_raise(DateFormatException)

        code, result = self.api_post(
            '/project/PROJECT_ID/volume',
            data=data,
            headers={'X-Auth-Token': 'some token value'}
        )
        assert_that(result, has_entries(
            {
                "error": "The provided date has an invalid format. "
                         "Format should be of yyyy-mm-ddThh:mm:ss.msZ, ex: 2015-01-31T18:24:34.1523Z"
            }
        ))
        assert_that(code, equal_to(400))

    def test_volume_create_wrong_authentication(self):
        self.having_config('api_auth_token', 'some token value')
        self.controller.should_receive('create_volume').never()

        code, result = self.api_post('/project/PROJECT_ID/volume', headers={'X-Auth-Token': 'oops'})
        assert_that(code, equal_to(401))

    def test_successful_volume_delete(self):
        self.having_config('api_auth_token', 'some token value')
        data = dict(date="DELETE_DATE")

        self.controller.should_receive('delete_volume') \
            .with_args(volume_id="VOLUME_ID",
                       delete_date=data['date']) \
            .once()

        code, result = self.api_delete('/volume/VOLUME_ID', data=data, headers={'X-Auth-Token': 'some token value'})
        assert_that(code, equal_to(202))

    def test_volume_delete_missing_a_param_returns_bad_request_code(self):
        self.having_config('api_auth_token', 'some token value')

        self.controller.should_receive('delete_volume') \
            .never()

        code, result = self.api_delete('/volume/VOLUME_ID', data=dict(), headers={'X-Auth-Token': 'some token value'})
        assert_that(result, has_entries({"error": "The 'date' param is mandatory for the request you have made."}))
        assert_that(code, equal_to(400))

    def test_volume_delete_no_data_bad_request_code(self):
        self.having_config('api_auth_token', 'some token value')

        self.controller.should_receive('delete_volume') \
            .never()

        code, result = self.api_delete('/volume/VOLUME_ID', headers={'X-Auth-Token': 'some token value'})
        assert_that(result, has_entries({"error": "The request you have made must have data. None was given."}))
        assert_that(code, equal_to(400))

    def test_volume_delete_bad_date_format_returns_bad_request_code(self):
        self.having_config('api_auth_token', 'some token value')
        data = dict(date="A_BAD_DATE")

        self.controller.should_receive('delete_volume') \
            .with_args(volume_id="VOLUME_ID",
                       delete_date=data['date']) \
            .once() \
            .and_raise(DateFormatException)

        code, result = self.api_delete('/volume/VOLUME_ID', data=data, headers={'X-Auth-Token': 'some token value'})
        assert_that(result, has_entries(
            {
                "error": "The provided date has an invalid format. "
                         "Format should be of yyyy-mm-ddThh:mm:ss.msZ, ex: 2015-01-31T18:24:34.1523Z"
            }
        ))
        assert_that(code, equal_to(400))

    def test_volume_delete_wrong_authentication(self):
        self.having_config('api_auth_token', 'some token value')
        self.controller.should_receive('delete_volume').never()

        code, result = self.api_delete('/volume/VOLUME_ID', headers={'X-Auth-Token': 'oops'})
        assert_that(code, equal_to(401))

    def test_successful_volume_resize(self):
        self.having_config('api_auth_token', 'some token value')
        data = dict(date="UPDATED_AT",
                    size="NEW_SIZE")

        self.controller.should_receive('resize_volume') \
            .with_args(volume_id="VOLUME_ID",
                       size=data['size'],
                       update_date=data['date']) \
            .once()

        code, result = self.api_put('/volume/VOLUME_ID/resize', data=data, headers={'X-Auth-Token': 'some token value'})
        assert_that(code, equal_to(200))

    def test_volume_resize_missing_a_param_returns_bad_request_code(self):
        self.having_config('api_auth_token', 'some token value')
        data = dict(date="A_DATE")

        self.controller.should_receive('resize_volume') \
            .never()

        code, result = self.api_put('/volume/VOLUME_ID/resize', data=data, headers={'X-Auth-Token': 'some token value'})
        assert_that(result, has_entries({"error": "The 'size' param is mandatory for the request you have made."}))
        assert_that(code, equal_to(400))

    def test_volume_resize_bad_date_format_returns_bad_request_code(self):
        self.having_config('api_auth_token', 'some token value')
        data = dict(date="BAD_DATE",
                    size="NEW_SIZE")

        self.controller.should_receive('resize_volume') \
            .with_args(volume_id="VOLUME_ID",
                       size=data['size'],
                       update_date=data['date']) \
            .once() \
            .and_raise(DateFormatException)

        code, result = self.api_put('/volume/VOLUME_ID/resize', data=data, headers={'X-Auth-Token': 'some token value'})
        assert_that(result, has_entries(
            {
                "error": "The provided date has an invalid format. "
                         "Format should be of yyyy-mm-ddThh:mm:ss.msZ, ex: 2015-01-31T18:24:34.1523Z"
            }
        ))
        assert_that(code, equal_to(400))

    def test_volume_resize_wrong_authentication(self):
        self.having_config('api_auth_token', 'some token value')
        self.controller.should_receive('resize_volume').never()

        code, result = self.api_put('/volume/INSTANCE_ID/resize', headers={'X-Auth-Token': 'oops'})
        assert_that(code, equal_to(401))

    def test_successful_volume_attach(self):
        self.having_config('api_auth_token', 'some token value')
        data = dict(date="UPDATED_AT",
                    attachments=[str(uuid4())])

        self.controller.should_receive('attach_volume') \
            .with_args(volume_id="VOLUME_ID",
                       attachments=data['attachments'],
                       date=data['date']) \
            .once()

        code, result = self.api_put('/volume/VOLUME_ID/attach', data=data, headers={'X-Auth-Token': 'some token value'})
        assert_that(code, equal_to(200))

    def test_volume_attach_missing_a_param_returns_bad_request_code(self):
        self.having_config('api_auth_token', 'some token value')
        data = dict(date="A_DATE")

        self.controller.should_receive('attach_volume') \
            .never()

        code, result = self.api_put(
            '/volume/VOLUME_ID/attach',
            data=data,
            headers={'X-Auth-Token': 'some token value'}
        )
        assert_that(
            result,
            has_entries({"error": "The 'attachments' param is mandatory for the request you have made."})
        )
        assert_that(code, equal_to(400))

    def test_volume_attach_bad_date_format_returns_bad_request_code(self):
        self.having_config('api_auth_token', 'some token value')
        data = dict(date="A_BAD_DATE",
                    attachments=[str(uuid4())])

        self.controller.should_receive('attach_volume') \
            .with_args(volume_id="VOLUME_ID",
                       attachments=data['attachments'],
                       date=data['date']) \
            .once() \
            .and_raise(DateFormatException)

        code, result = self.api_put('/volume/VOLUME_ID/attach', data=data, headers={'X-Auth-Token': 'some token value'})
        assert_that(result, has_entries(
            {
                "error": "The provided date has an invalid format. "
                         "Format should be of yyyy-mm-ddThh:mm:ss.msZ, ex: 2015-01-31T18:24:34.1523Z"
            }
        ))
        assert_that(code, equal_to(400))

    def test_volume_attach_wrong_authentication(self):
        self.having_config('api_auth_token', 'some token value')
        self.controller.should_receive('attach_volume').never()

        code, result = self.api_put('/volume/INSTANCE_ID/attach', headers={'X-Auth-Token': 'oops'})
        assert_that(code, equal_to(401))

    def test_successful_volume_detach(self):
        self.having_config('api_auth_token', 'some token value')
        data = dict(date="UPDATED_AT",
                    attachments=[str(uuid4())])

        self.controller.should_receive('detach_volume') \
            .with_args(volume_id="VOLUME_ID",
                       attachments=data['attachments'],
                       date=data['date']) \
            .once()

        code, result = self.api_put('/volume/VOLUME_ID/detach', data=data, headers={'X-Auth-Token': 'some token value'})
        assert_that(code, equal_to(200))

    def test_volume_detach_missing_a_param_returns_bad_request_code(self):
        self.having_config('api_auth_token', 'some token value')
        data = dict(date="A_DATE")

        self.controller.should_receive('detach_volume') \
            .never()

        code, result = self.api_put('/volume/VOLUME_ID/detach', data=data, headers={'X-Auth-Token': 'some token value'})
        assert_that(
            result,
            has_entries({"error": "The 'attachments' param is mandatory for the request you have made."})
        )
        assert_that(code, equal_to(400))

    def test_volume_detach_bad_date_format_returns_bad_request_code(self):
        self.having_config('api_auth_token', 'some token value')
        data = dict(date="A_BAD_DATE",
                    attachments=[str(uuid4())])

        self.controller.should_receive('detach_volume') \
            .with_args(volume_id="VOLUME_ID",
                       attachments=data['attachments'],
                       date=data['date']) \
            .once() \
            .and_raise(DateFormatException)

        code, result = self.api_put('/volume/VOLUME_ID/detach', data=data, headers={'X-Auth-Token': 'some token value'})
        assert_that(result, has_entries(
            {
                "error": "The provided date has an invalid format. "
                         "Format should be of yyyy-mm-ddThh:mm:ss.msZ, ex: 2015-01-31T18:24:34.1523Z"
            }
        ))
        assert_that(code, equal_to(400))

    def test_volume_detach_wrong_authentication(self):
        self.having_config('api_auth_token', 'some token value')
        self.controller.should_receive('detach_volume').never()

        code, result = self.api_put('/volume/INSTANCE_ID/detach', headers={'X-Auth-Token': 'oops'})
        assert_that(code, equal_to(401))

    def test_successful_instance_create(self):
        self.having_config('api_auth_token', 'some token value')
        data = dict(id="INSTANCE_ID",
                    created_at="CREATED_AT",
                    name="INSTANCE_NAME",
                    flavor="A_FLAVOR",
                    os_type="AN_OS_TYPE",
                    os_distro="A_DISTRIBUTION",
                    os_version="AN_OS_VERSION")

        self.controller.should_receive('create_instance') \
            .with_args(tenant_id="PROJECT_ID",
                       instance_id=data["id"],
                       create_date=data["created_at"],
                       flavor=data['flavor'],
                       os_type=data['os_type'],
                       distro=data['os_distro'],
                       version=data['os_version'],
                       name=data['name'],
                       metadata={}) \
            .once()

        code, result = self.api_post(
            '/project/PROJECT_ID/instance',
            data=data,
            headers={'X-Auth-Token': 'some token value'}
        )
        assert_that(code, equal_to(201))

    def test_instance_create_missing_a_param_returns_bad_request_code(self):
        self.having_config('api_auth_token', 'some token value')
        data = dict(id="INSTANCE_ID",
                    created_at="CREATED_AT",
                    name="INSTANCE_NAME",
                    flavor="A_FLAVOR",
                    os_type="AN_OS_TYPE",
                    os_version="AN_OS_VERSION")

        self.controller.should_receive('create_instance') \
            .never()

        code, result = self.api_post(
            '/project/PROJECT_ID/instance',
            data=data,
            headers={'X-Auth-Token': 'some token value'}
        )
        assert_that(result, has_entries({"error": "The 'os_distro' param is mandatory for the request you have made."}))
        assert_that(code, equal_to(400))

    def test_instance_create_bad_date_format_returns_bad_request_code(self):
        self.having_config('api_auth_token', 'some token value')
        data = dict(id="INSTANCE_ID",
                    created_at="A_BAD_DATE",
                    name="INSTANCE_NAME",
                    flavor="A_FLAVOR",
                    os_type="AN_OS_TYPE",
                    os_distro="A_DISTRIBUTION",
                    os_version="AN_OS_VERSION")

        self.controller.should_receive('create_instance') \
            .with_args(tenant_id="PROJECT_ID",
                       instance_id=data["id"],
                       create_date=data["created_at"],
                       flavor=data['flavor'],
                       os_type=data['os_type'],
                       distro=data['os_distro'],
                       version=data['os_version'],
                       name=data['name'],
                       metadata={}) \
            .once() \
            .and_raise(DateFormatException)

        code, result = self.api_post(
            '/project/PROJECT_ID/instance',
            data=data,
            headers={'X-Auth-Token': 'some token value'}
        )
        assert_that(result, has_entries(
            {
                "error": "The provided date has an invalid format. "
                         "Format should be of yyyy-mm-ddThh:mm:ss.msZ, ex: 2015-01-31T18:24:34.1523Z"
            }
        ))
        assert_that(code, equal_to(400))

    def test_instance_create_wrong_authentication(self):
        self.having_config('api_auth_token', 'some token value')
        self.controller.should_receive('create_instance').never()

        code, result = self.api_post('/project/PROJECT_ID/instance', headers={'X-Auth-Token': 'oops'})

        assert_that(code, equal_to(401))

    def test_successful_instance_resize(self):
        self.having_config('api_auth_token', 'some token value')
        data = dict(date="UPDATED_AT",
                    flavor="A_FLAVOR")

        self.controller.should_receive('resize_instance') \
            .with_args(instance_id="INSTANCE_ID",
                       flavor=data['flavor'],
                       resize_date=data['date']) \
            .once()

        code, result = self.api_put(
            '/instance/INSTANCE_ID/resize',
            data=data,
            headers={'X-Auth-Token': 'some token value'}
        )
        assert_that(code, equal_to(200))

    def test_successful_instance_delete(self):
        self.having_config('api_auth_token', 'some token value')
        data = dict(date="DELETE_DATE")

        self.controller.should_receive('delete_instance') \
            .with_args(instance_id="INSTANCE_ID",
                       delete_date=data['date']) \
            .once()

        code, result = self.api_delete('/instance/INSTANCE_ID', data=data, headers={'X-Auth-Token': 'some token value'})
        assert_that(code, equal_to(202))

    def test_instance_delete_missing_a_param_returns_bad_request_code(self):
        self.having_config('api_auth_token', 'some token value')

        self.controller.should_receive('delete_instance') \
            .never()

        code, result = self.api_delete(
            '/instance/INSTANCE_ID',
            data=dict(),
            headers={'X-Auth-Token': 'some token value'}
        )
        assert_that(result, has_entries({"error": "The 'date' param is mandatory for the request you have made."}))
        assert_that(code, equal_to(400))

    def test_instance_delete_no_data_bad_request_code(self):
        self.having_config('api_auth_token', 'some token value')

        self.controller.should_receive('delete_instance') \
            .never()

        code, result = self.api_delete('/instance/INSTANCE_ID', headers={'X-Auth-Token': 'some token value'})
        assert_that(result, has_entries({"error": "The request you have made must have data. None was given."}))
        assert_that(code, equal_to(400))

    def test_instance_delete_bad_date_format_returns_bad_request_code(self):
        self.having_config('api_auth_token', 'some token value')
        data = dict(date="A_BAD_DATE")

        self.controller.should_receive('delete_instance') \
            .with_args(instance_id="INSTANCE_ID",
                       delete_date=data['date']) \
            .once() \
            .and_raise(DateFormatException)

        code, result = self.api_delete('/instance/INSTANCE_ID', data=data, headers={'X-Auth-Token': 'some token value'})
        assert_that(result, has_entries(
            {
                "error": "The provided date has an invalid format. "
                         "Format should be of yyyy-mm-ddThh:mm:ss.msZ, ex: 2015-01-31T18:24:34.1523Z"
            }
        ))
        assert_that(code, equal_to(400))

    def test_instance_delete_wrong_authentication(self):
        self.having_config('api_auth_token', 'some token value')
        self.controller.should_receive('delete_instance').never()

        code, result = self.api_delete('/instance/INSTANCE_ID', headers={'X-Auth-Token': 'oops'})
        assert_that(code, equal_to(401))

    def test_instance_resize_missing_a_param_returns_bad_request_code(self):
        self.having_config('api_auth_token', 'some token value')
        data = dict(date="UPDATED_AT")

        self.controller.should_receive('resize_instance') \
            .never()

        code, result = self.api_put(
            '/instance/INSTANCE_ID/resize',
            data=data,
            headers={'X-Auth-Token': 'some token value'}
        )
        assert_that(result, has_entries({"error": "The 'flavor' param is mandatory for the request you have made."}))
        assert_that(code, equal_to(400))

    def test_instance_resize_bad_date_format_returns_bad_request_code(self):
        self.having_config('api_auth_token', 'some token value')
        data = dict(date="A_BAD_DATE",
                    flavor="A_FLAVOR")

        self.controller.should_receive('resize_instance') \
            .with_args(instance_id="INSTANCE_ID",
                       flavor=data['flavor'],
                       resize_date=data['date']) \
            .once() \
            .and_raise(DateFormatException)

        code, result = self.api_put(
            '/instance/INSTANCE_ID/resize',
            data=data,
            headers={'X-Auth-Token': 'some token value'}
        )
        assert_that(result, has_entries(
            {
                "error": "The provided date has an invalid format. "
                         "Format should be of yyyy-mm-ddThh:mm:ss.msZ, ex: 2015-01-31T18:24:34.1523Z"
            }
        ))
        assert_that(code, equal_to(400))

    def test_instance_resize_wrong_authentication(self):
        self.having_config('api_auth_token', 'some token value')
        self.controller.should_receive('resize_instance').never()

        code, result = self.api_put('/instance/INSTANCE_ID/resize', headers={'X-Auth-Token': 'oops'})
        assert_that(code, equal_to(401))

    def test_rebuild_instance(self):
        self.having_config('api_auth_token', 'some token value')
        instance_id = 'INSTANCE_ID'
        data = {
            'distro': 'A_DISTRIBUTION',
            'version': 'A_VERSION',
            'rebuild_date': 'UPDATE_DATE',
        }
        self.controller.should_receive('rebuild_instance') \
            .with_args(
                instance_id=instance_id,
                distro=data.get('distro'),
                version=data.get('version'),
                rebuild_date=data.get('rebuild_date')) \
            .once()

        code, result = self.api_put(
            '/instance/INSTANCE_ID/rebuild',
            data=data,
            headers={'X-Auth-Token': 'some token value'}
        )

        assert_that(code, equal_to(200))

    def test_rebuild_instance_missing_a_param_returns_bad_request_code(self):
        self.having_config('api_auth_token', 'some token value')
        data = {
            'distro': 'A_DISTRIBUTION',
            'rebuild_date': 'UPDATE_DATE',
        }

        self.controller.should_receive('rebuild_instance') \
            .never()

        code, result = self.api_put(
            '/instance/INSTANCE_ID/rebuild',
            data=data,
            headers={'X-Auth-Token': 'some token value'}
        )
        assert_that(result, has_entries({"error": "The 'version' param is mandatory for the request you have made."}))
        assert_that(code, equal_to(400))

    def test_rebuild_instance_bad_date_format_returns_bad_request_code(self):
        self.having_config('api_auth_token', 'some token value')
        instance_id = 'INSTANCE_ID'
        data = {
            'distro': 'A_DISTRIBUTION',
            'version': 'A_VERSION',
            'rebuild_date': 'A_BAD_UPDATE_DATE',
        }

        self.controller.should_receive('rebuild_instance') \
            .with_args(instance_id=instance_id, **data) \
            .once() \
            .and_raise(DateFormatException)

        code, result = self.api_put(
            '/instance/INSTANCE_ID/rebuild',
            data=data,
            headers={'X-Auth-Token': 'some token value'}
        )
        assert_that(result, has_entries(
            {
                "error": "The provided date has an invalid format. "
                         "Format should be of yyyy-mm-ddThh:mm:ss.msZ, ex: 2015-01-31T18:24:34.1523Z"
            }
        ))
        assert_that(code, equal_to(400))

    def test_rebuild_instance_wrong_authentication(self):
        self.having_config('api_auth_token', 'some token value')
        self.controller.should_receive('rebuild_instance').never()

        code, result = self.api_put('/instance/INSTANCE_ID/rebuild', headers={'X-Auth-Token': 'oops'})
        assert_that(code, equal_to(401))

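    # Thin HTTP helpers: each verb-specific method below delegates to
    # _api_call, which issues the request through Flask's test client and
    # returns a (status_code, decoded_body) pair; bodies are JSON-decoded
    # only when the response declares a JSON content type.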
    def api_get(self, url, query_string=None, headers=None, accept='application/json'):
        return self._api_call(url, "get", None, query_string, headers, accept)

    def api_post(self, url, data=None, query_string=None, headers=None, accept='application/json'):
        return self._api_call(url, "post", data, query_string, headers, accept)

    def api_put(self, url, data=None, query_string=None, headers=None, accept='application/json'):
        return self._api_call(url, "put", data, query_string, headers, accept)

    def api_delete(self, url, query_string=None, data=None, headers=None, accept='application/json'):
        return self._api_call(url, "delete", data, query_string, headers, accept)

    def api_update(self, url, data=None, query_string=None, headers=None, accept='application/json'):
        return self._api_call(url, "put", data, query_string, headers, accept)

    def _api_call(self, url, method, data=None, query_string=None, headers=None, accept='application/json'):
        with self.app.test_client() as http_client:
            if not headers:
                headers = {}
            headers['Accept'] = accept
            result = getattr(http_client, method)(url, data=json.dumps(data), query_string=query_string, headers=headers)
            return_data = json.loads(result.data) \
                if result.headers.get('Content-Type') == 'application/json' \
                else result.data
            return result.status_code, return_data

    @staticmethod
    def having_config(key, value):
        (flexmock(config)
            .should_receive(key)
            .and_return(value))


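# flexmock compares recorded call arguments with ==, so wrapping an expected
# datetime in a DateMatcher (via a_date_matching below) lets a test assert
# that a collaborator received the parsed equivalent of a date string.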
class DateMatcher(object):
    def __init__(self, date):
        self.date = date

    def __eq__(self, other):
        return other == self.date


def a_date_matching(date_string):
    return DateMatcher(datetime.strptime(date_string, "%Y-%m-%d %H:%M:%S.%f"))
151
tests/builder.py
Normal file
@ -0,0 +1,151 @@
# Copyright 2016 Internap.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from copy import copy

from datetime import datetime
from uuid import uuid4

from almanach.core.model import build_entity_from_dict, Instance, Volume, VolumeType
import pytz


class Builder(object):
    def __init__(self, dict_object):
        self.dict_object = dict_object


class EntityBuilder(Builder):
    def build(self):
        return build_entity_from_dict(self.dict_object)

    def with_id(self, entity_id):
        self.dict_object["entity_id"] = entity_id
        return self

    def with_project_id(self, project_id):
        self.dict_object["project_id"] = project_id
        return self

    def with_last_event(self, last_event):
        self.dict_object["last_event"] = last_event
        return self

    def with_start(self, year, month, day, hour, minute, second):
        self.with_datetime_start(datetime(year, month, day, hour, minute, second))
        return self

    def with_datetime_start(self, date):
        self.dict_object["start"] = date
        return self

    def with_end(self, year, month, day, hour, minute, second):
        self.dict_object["end"] = datetime(year, month, day, hour, minute, second)
        return self

    def with_no_end(self):
        self.dict_object["end"] = None
        return self

    def with_metadata(self, metadata):
        self.dict_object['metadata'] = metadata
        return self

    def build_from(self, other):
        self.dict_object = copy(other.__dict__)
        return self

    def with_all_dates_in_string(self):
        self.dict_object['start'] = self.dict_object['start'].strftime("%Y-%m-%dT%H:%M:%S.%fZ")
        self.dict_object['last_event'] = self.dict_object['last_event'].strftime("%Y-%m-%dT%H:%M:%S.%fZ")
        return self


class VolumeBuilder(EntityBuilder):
    def with_attached_to(self, attached_to):
        self.dict_object["attached_to"] = attached_to
        return self

    def with_no_attachment(self):
        self.dict_object["attached_to"] = []
        return self

    def with_display_name(self, display_name):
        self.dict_object["name"] = display_name
        return self

    def with_volume_type(self, volume_type):
        self.dict_object["volume_type"] = volume_type
        return self


class VolumeTypeBuilder(Builder):
    def build(self):
        return VolumeType(**self.dict_object)

    def with_volume_type_id(self, volume_type_id):
        self.dict_object["volume_type_id"] = volume_type_id
        return self

    def with_volume_type_name(self, volume_type_name):
        self.dict_object["volume_type_name"] = volume_type_name
        return self


def instance():
    return EntityBuilder({
        "entity_id": str(uuid4()),
        "project_id": str(uuid4()),
        "start": datetime(2014, 1, 1, 0, 0, 0, 0, pytz.utc),
        "end": None,
        "last_event": datetime(2014, 1, 1, 0, 0, 0, 0, pytz.utc),
        "flavor": "A1.1",
        "os": {
            "os_type": "windows",
            "distro": "windows",
            "version": "2012r2"
        },
        "entity_type": Instance.TYPE,
        "name": "some-instance",
        "metadata": {
            "a_metadata.to_filter": "include.this",
            "a_metadata.to_exclude": "exclude.this"
        }
    })


def volume():
    return VolumeBuilder({
        "entity_id": str(uuid4()),
        "project_id": str(uuid4()),
        "start": datetime(2014, 1, 1, 0, 0, 0, 0, pytz.utc),
        "end": None,
        "last_event": datetime(2014, 1, 1, 0, 0, 0, 0, pytz.utc),
        "volume_type": "SF400",
        "size": 1000000,
        "entity_type": Volume.TYPE,
        "name": "some-volume",
        "attached_to": None,
    })


def volume_type():
    return VolumeTypeBuilder({
        "volume_type_id": str(uuid4()),
        "volume_type_name": "a_type_name"
    })


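# `a` unwraps any builder into the entity it describes, so test setup reads
# as prose, e.g. a(volume().with_no_attachment()).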
def a(builder):
    return builder.build()
0
tests/core/__init__.py
Normal file
619
tests/core/test_controller.py
Normal file
@ -0,0 +1,619 @@
# Copyright 2016 Internap.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import unittest
from datetime import datetime, timedelta

import pytz
from dateutil import parser as date_parser
from flexmock import flexmock, flexmock_teardown
from nose.tools import assert_raises

from almanach import config
from almanach.common.DateFormatException import DateFormatException
from almanach.core.controller import Controller
from almanach.core.model import Instance, Volume
from tests.builder import a, instance, volume, volume_type


class ControllerTest(unittest.TestCase):

    def setUp(self):
        self.database_adapter = flexmock()

        (flexmock(config)
            .should_receive("volume_existence_threshold")
            .and_return(10))
        (flexmock(config)
            .should_receive("device_metadata_whitelist")
            .and_return(["a_metadata.to_filter"]))

        self.controller = Controller(self.database_adapter)

    def tearDown(self):
        flexmock_teardown()

    def test_instance_created(self):
        fake_instance = a(instance().with_all_dates_in_string())

        (flexmock(self.database_adapter)
            .should_receive("get_active_entity")
            .with_args(fake_instance.entity_id)
            .and_raise(KeyError)
            .once())

        expected_instance = a(instance()
                              .with_id(fake_instance.entity_id)
                              .with_project_id(fake_instance.project_id)
                              .with_metadata({"a_metadata.to_filter": "include.this"}))

        (flexmock(self.database_adapter)
            .should_receive("insert_entity")
            .with_args(expected_instance)
            .once())

        self.controller.create_instance(fake_instance.entity_id, fake_instance.project_id, fake_instance.start,
                                        fake_instance.flavor, fake_instance.os.os_type, fake_instance.os.distro,
                                        fake_instance.os.version, fake_instance.name, fake_instance.metadata)

    def test_resize_instance(self):
        fake_instance = a(instance())

        dates_str = "2015-10-21T16:25:00.000000Z"

        (flexmock(self.database_adapter)
            .should_receive("get_active_entity")
            .with_args(fake_instance.entity_id)
            .and_return(fake_instance)
            .once())
        (flexmock(self.database_adapter)
            .should_receive("close_active_entity")
            .with_args(fake_instance.entity_id, date_parser.parse(dates_str))
            .once())
        fake_instance.start = dates_str
        fake_instance.end = None
        fake_instance.last_event = dates_str
        (flexmock(self.database_adapter)
            .should_receive("insert_entity")
            .with_args(fake_instance)
            .once())

        self.controller.resize_instance(fake_instance.entity_id, "newly_flavor", dates_str)

    def test_instance_create_date_updated(self):
        fake_instance1 = a(instance())
        fake_instance2 = fake_instance1
        fake_instance2.start = "2015-10-05 12:04:00.0000Z"

        (flexmock(self.database_adapter)
            .should_receive("get_active_entity")
            .with_args(fake_instance1.entity_id)
            .and_return(fake_instance1)
            .once())

        (flexmock(self.database_adapter)
            .should_receive("update_active_entity")
            .with_args(fake_instance2)
            .once())

        self.controller.update_instance_create_date(fake_instance1.entity_id, "2015-10-05 12:04:00.0000Z")

    def test_instance_created_but_its_an_old_event(self):
        fake_instance = a(instance()
                          .with_last_event(pytz.utc.localize(datetime(2015, 10, 21, 16, 29, 0))))

        (flexmock(self.database_adapter)
            .should_receive("get_active_entity")
            .with_args(fake_instance.entity_id)
            .and_return(fake_instance)
            .once())

        self.controller.create_instance(fake_instance.entity_id, fake_instance.project_id,
                                        '2015-10-21T16:25:00.000000Z',
                                        fake_instance.flavor, fake_instance.os.os_type, fake_instance.os.distro,
                                        fake_instance.os.version, fake_instance.name, fake_instance.metadata)

    def test_instance_created_but_find_garbage(self):
        fake_instance = a(instance().with_all_dates_in_string())

        (flexmock(self.database_adapter)
            .should_receive("get_active_entity")
            .with_args(fake_instance.entity_id)
            .and_raise(NotImplementedError)  # The db adapter found garbage in the database, we will ignore this entry
            .once())

        expected_instance = a(instance()
                              .with_id(fake_instance.entity_id)
                              .with_project_id(fake_instance.project_id)
                              .with_metadata({"a_metadata.to_filter": "include.this"}))

        (flexmock(self.database_adapter)
            .should_receive("insert_entity")
            .with_args(expected_instance)
            .once())

        self.controller.create_instance(fake_instance.entity_id, fake_instance.project_id, fake_instance.start,
                                        fake_instance.flavor, fake_instance.os.os_type, fake_instance.os.distro,
                                        fake_instance.os.version, fake_instance.name, fake_instance.metadata)

    def test_instance_deleted(self):
        (flexmock(self.database_adapter)
            .should_receive("close_active_entity")
            .with_args("id1", date_parser.parse("2015-10-21T16:25:00.000000Z"))
            .once())

        self.controller.delete_instance("id1", "2015-10-21T16:25:00.000000Z")

    def test_volume_deleted(self):
        fake_volume = a(volume())

        date = datetime(fake_volume.start.year, fake_volume.start.month, fake_volume.start.day, fake_volume.start.hour,
                        fake_volume.start.minute, fake_volume.start.second, fake_volume.start.microsecond)
        date = date + timedelta(1)
        expected_date = pytz.utc.localize(date)

        (flexmock(self.database_adapter)
            .should_receive("count_entity_entries")
            .with_args(fake_volume.entity_id)
            .and_return(1))

        (flexmock(self.database_adapter)
            .should_receive("get_active_entity")
            .with_args(fake_volume.entity_id)
            .and_return(fake_volume))

        (flexmock(self.database_adapter)
            .should_receive("close_active_entity")
            .with_args(fake_volume.entity_id, expected_date)
            .once())

        self.controller.delete_volume(fake_volume.entity_id, date.strftime("%Y-%m-%dT%H:%M:%S.%fZ"))

    def test_volume_deleted_within_volume_existence_threshold(self):
        fake_volume = a(volume())

        date = datetime(fake_volume.start.year, fake_volume.start.month, fake_volume.start.day, fake_volume.start.hour,
                        fake_volume.start.minute, fake_volume.start.second, fake_volume.start.microsecond)
        date = date + timedelta(0, 5)

        (flexmock(self.database_adapter)
            .should_receive("count_entity_entries")
            .with_args(fake_volume.entity_id)
            .and_return(2))

        (flexmock(self.database_adapter)
            .should_receive("get_active_entity")
            .with_args(fake_volume.entity_id)
            .and_return(fake_volume))

        (flexmock(self.database_adapter)
            .should_receive("delete_active_entity")
            .with_args(fake_volume.entity_id)
            .once())

        self.controller.delete_volume(fake_volume.entity_id, date.strftime("%Y-%m-%dT%H:%M:%S.%fZ"))

    def test_volume_deleted_within_volume_existence_threshold_but_with_only_one_entry(self):
        fake_volume = a(volume())

        date = datetime(fake_volume.start.year, fake_volume.start.month, fake_volume.start.day, fake_volume.start.hour,
                        fake_volume.start.minute, fake_volume.start.second, fake_volume.start.microsecond)
        date = date + timedelta(0, 5)
        expected_date = pytz.utc.localize(date)

        (flexmock(self.database_adapter)
            .should_receive("count_entity_entries")
            .with_args(fake_volume.entity_id)
            .and_return(1))

        (flexmock(self.database_adapter)
            .should_receive("close_active_entity")
            .with_args(fake_volume.entity_id, expected_date)
            .once())

        self.controller.delete_volume(fake_volume.entity_id, date.strftime("%Y-%m-%dT%H:%M:%S.%fZ"))

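    # Taken together, the three volume-deletion tests above pin down the
    # semantics of the volume_existence_threshold mocked to 10 in setUp():
    # a volume deleted shortly after creation that already has more than one
    # entry is treated as transient noise and removed outright, while a volume
    # with a single entry, or one deleted after the threshold, is closed
    # normally with an end date.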
    def test_list_instances(self):
        (flexmock(self.database_adapter)
            .should_receive("list_entities")
            .with_args("project_id", "start", "end", Instance.TYPE)
            .and_return(["instance1", "instance2"])
            .once())

        self.assertEqual(self.controller.list_instances("project_id", "start", "end"), ["instance1", "instance2"])

    def test_list_volumes(self):
        (flexmock(self.database_adapter)
            .should_receive("list_entities")
            .with_args("project_id", "start", "end", Volume.TYPE)
            .and_return(["volume2", "volume3"]))

        self.assertEqual(self.controller.list_volumes("project_id", "start", "end"), ["volume2", "volume3"])

    def test_list_entities(self):
        (flexmock(self.database_adapter)
            .should_receive("list_entities")
            .with_args("project_id", "start", "end")
            .and_return(["volume2", "volume3", "instance1"]))

        self.assertEqual(self.controller.list_entities("project_id", "start", "end"), ["volume2", "volume3", "instance1"])

    def test_create_volume(self):
        some_volume_type = a(volume_type().with_volume_type_name("some_volume_type_name"))
        (flexmock(self.database_adapter)
            .should_receive("get_volume_type")
            .with_args(some_volume_type.volume_type_id)
            .and_return(some_volume_type)
            .once())

        some_volume = a(volume()
                        .with_volume_type(some_volume_type.volume_type_name)
                        .with_all_dates_in_string())

        expected_volume = a(volume()
                            .with_volume_type(some_volume_type.volume_type_name)
                            .with_project_id(some_volume.project_id)
                            .with_id(some_volume.entity_id))

        (flexmock(self.database_adapter)
            .should_receive("insert_entity")
            .with_args(expected_volume)
            .once())
        (flexmock(self.database_adapter)
            .should_receive("get_active_entity")
            .with_args(some_volume.entity_id)
            .and_return(None)
            .once())

        self.controller.create_volume(some_volume.entity_id, some_volume.project_id, some_volume.start,
                                      some_volume_type.volume_type_id, some_volume.size, some_volume.name,
                                      some_volume.attached_to)

    def test_create_volume_raises_bad_date_format(self):
        some_volume = a(volume())

        assert_raises(
            DateFormatException,
            self.controller.create_volume,
            some_volume.entity_id,
            some_volume.project_id,
            'bad_date_format',
            some_volume.volume_type,
            some_volume.size,
            some_volume.name,
            some_volume.attached_to
        )

    def test_create_volume_insert_none_volume_type_as_type(self):
        some_volume_type = a(volume_type().with_volume_type_id(None).with_volume_type_name(None))
        (flexmock(self.database_adapter)
            .should_receive("get_volume_type")
            .never())

        some_volume = a(volume()
                        .with_volume_type(some_volume_type.volume_type_name)
                        .with_all_dates_in_string())

        expected_volume = a(volume()
                            .with_volume_type(some_volume_type.volume_type_name)
                            .with_project_id(some_volume.project_id)
                            .with_id(some_volume.entity_id))

        (flexmock(self.database_adapter)
            .should_receive("insert_entity")
            .with_args(expected_volume)
            .once())
        (flexmock(self.database_adapter)
            .should_receive("get_active_entity")
            .with_args(some_volume.entity_id)
            .and_return(None)
            .once())

        self.controller.create_volume(some_volume.entity_id, some_volume.project_id, some_volume.start,
                                      some_volume_type.volume_type_id, some_volume.size, some_volume.name,
                                      some_volume.attached_to)

    def test_create_volume_with_invalid_volume_type(self):
        some_volume_type = a(volume_type())
        (flexmock(self.database_adapter)
            .should_receive("get_volume_type")
            .with_args(some_volume_type.volume_type_id)
            .and_raise(KeyError)
            .once())

        some_volume = a(volume()
                        .with_volume_type(some_volume_type.volume_type_name)
                        .with_all_dates_in_string())

        (flexmock(self.database_adapter)
            .should_receive("insert_entity")
            .never())
        (flexmock(self.database_adapter)
            .should_receive("get_active_entity")
            .with_args(some_volume.entity_id)
            .and_return(None)
            .once())

        with self.assertRaises(KeyError):
            self.controller.create_volume(some_volume.entity_id, some_volume.project_id, some_volume.start,
                                          some_volume_type.volume_type_id, some_volume.size, some_volume.name,
                                          some_volume.attached_to)

    def test_create_volume_but_its_an_old_event(self):
        some_volume = a(volume().with_last_event(pytz.utc.localize(datetime(2015, 10, 21, 16, 29, 0))))
        (flexmock(self.database_adapter)
            .should_receive("get_active_entity")
            .with_args(some_volume.entity_id)
            .and_return(some_volume)
            .once())

        self.controller.create_volume(some_volume.entity_id, some_volume.project_id, '2015-10-21T16:25:00.000000Z',
                                      some_volume.volume_type, some_volume.size, some_volume.name, some_volume.attached_to)

    def test_volume_updated(self):
        fake_volume = a(volume())
        dates_str = "2015-10-21T16:25:00.000000Z"

        (flexmock(self.database_adapter)
            .should_receive("get_active_entity")
            .with_args(fake_volume.entity_id)
            .and_return(fake_volume)
            .once())
        (flexmock(self.database_adapter)
            .should_receive("close_active_entity")
            .with_args(fake_volume.entity_id, date_parser.parse(dates_str))
            .once())
        fake_volume.size = "new_size"
        fake_volume.start = dates_str
        fake_volume.end = None
        fake_volume.last_event = dates_str
        (flexmock(self.database_adapter)
            .should_receive("insert_entity")
            .with_args(fake_volume)
            .once())

        self.controller.resize_volume(fake_volume.entity_id, "new_size", dates_str)

    def test_volume_attach_with_no_existing_attachment(self):
        fake_volume = a(volume()
                        .with_no_attachment())

        date = datetime(fake_volume.start.year, fake_volume.start.month, fake_volume.start.day, fake_volume.start.hour,
                        fake_volume.start.minute, fake_volume.start.second, fake_volume.start.microsecond)

        (flexmock(self.database_adapter)
            .should_receive("get_active_entity")
            .with_args(fake_volume.entity_id)
            .and_return(fake_volume)
            .once())

        (flexmock(self.database_adapter)
            .should_receive("update_active_entity")
            .with_args(fake_volume))

        self.controller.attach_volume(
            fake_volume.entity_id,
            date.strftime("%Y-%m-%dT%H:%M:%S.%f"),
            ["new_attached_to"]
        )
        self.assertEqual(fake_volume.attached_to, ["new_attached_to"])

    def test_volume_attach_with_existing_attachments(self):
        fake_volume = a(volume()
                        .with_attached_to(["existing_attached_to"]))

        date = datetime(fake_volume.start.year, fake_volume.start.month, fake_volume.start.day, fake_volume.start.hour,
                        fake_volume.start.minute, fake_volume.start.second, fake_volume.start.microsecond)

        (flexmock(self.database_adapter)
            .should_receive("get_active_entity")
            .with_args(fake_volume.entity_id)
            .and_return(fake_volume)
            .once())

        (flexmock(self.database_adapter)
            .should_receive("update_active_entity")
            .with_args(fake_volume))

        self.controller.attach_volume(
            fake_volume.entity_id,
            date.strftime("%Y-%m-%dT%H:%M:%S.%f"),
            ["existing_attached_to", "new_attached_to"]
        )
        self.assertEqual(fake_volume.attached_to, ["existing_attached_to", "new_attached_to"])

    def test_volume_attach_after_threshold(self):
        fake_volume = a(volume())

        date = datetime(fake_volume.start.year, fake_volume.start.month, fake_volume.start.day, fake_volume.start.hour,
                        fake_volume.start.minute, fake_volume.start.second, fake_volume.start.microsecond)
        date = date + timedelta(0, 120)
        expected_date = pytz.utc.localize(date)

        (flexmock(self.database_adapter)
            .should_receive("get_active_entity")
            .with_args(fake_volume.entity_id)
            .and_return(fake_volume)
            .once())

        (flexmock(self.database_adapter)
            .should_receive("close_active_entity")
            .with_args(fake_volume.entity_id, expected_date)
            .once())

        new_volume = a(volume()
                       .build_from(fake_volume)
                       .with_datetime_start(expected_date)
                       .with_no_end()
                       .with_last_event(expected_date)
                       .with_attached_to(["new_attached_to"]))

        (flexmock(self.database_adapter)
            .should_receive("insert_entity")
            .with_args(new_volume)
            .once())

        self.controller.attach_volume(
            fake_volume.entity_id,
            date.strftime("%Y-%m-%dT%H:%M:%S.%f"),
            ["new_attached_to"]
        )

    def test_volume_detach_with_two_attachments(self):
        fake_volume = a(volume().with_attached_to(["I1", "I2"]))

        date = datetime(fake_volume.start.year, fake_volume.start.month, fake_volume.start.day, fake_volume.start.hour,
                        fake_volume.start.minute, fake_volume.start.second, fake_volume.start.microsecond)

        (flexmock(self.database_adapter)
            .should_receive("get_active_entity")
            .with_args(fake_volume.entity_id)
            .and_return(fake_volume)
            .once())

        (flexmock(self.database_adapter)
            .should_receive("update_active_entity")
            .with_args(fake_volume))

        self.controller.detach_volume(fake_volume.entity_id, date.strftime("%Y-%m-%dT%H:%M:%S.%fZ"), ["I2"])
        self.assertEqual(fake_volume.attached_to, ["I2"])

    def test_volume_detach_with_one_attachment(self):
        fake_volume = a(volume().with_attached_to(["I1"]))

        date = datetime(fake_volume.start.year, fake_volume.start.month, fake_volume.start.day, fake_volume.start.hour,
                        fake_volume.start.minute, fake_volume.start.second, fake_volume.start.microsecond)

        (flexmock(self.database_adapter)
            .should_receive("get_active_entity")
            .with_args(fake_volume.entity_id)
            .and_return(fake_volume)
            .once())

        (flexmock(self.database_adapter)
            .should_receive("update_active_entity")
            .with_args(fake_volume))

        self.controller.detach_volume(fake_volume.entity_id, date.strftime("%Y-%m-%dT%H:%M:%S.%f"), [])
        self.assertEqual(fake_volume.attached_to, [])

    def test_volume_detach_last_attachment_after_threshold(self):
        fake_volume = a(volume().with_attached_to(["I1"]))

        date = datetime(fake_volume.start.year, fake_volume.start.month, fake_volume.start.day, fake_volume.start.hour,
                        fake_volume.start.minute, fake_volume.start.second, fake_volume.start.microsecond)
        date = date + timedelta(0, 120)
        expected_date = pytz.utc.localize(date)

        (flexmock(self.database_adapter)
            .should_receive("get_active_entity")
            .with_args(fake_volume.entity_id)
            .and_return(fake_volume)
            .once())

        (flexmock(self.database_adapter)
            .should_receive("close_active_entity")
            .with_args(fake_volume.entity_id, expected_date)
            .once())

        new_volume = a(volume()
                       .build_from(fake_volume)
                       .with_datetime_start(expected_date)
                       .with_no_end()
                       .with_last_event(expected_date)
                       .with_no_attachment())

        (flexmock(self.database_adapter)
            .should_receive("insert_entity")
            .with_args(new_volume)
            .once())

        self.controller.detach_volume(fake_volume.entity_id, date.strftime("%Y-%m-%dT%H:%M:%S.%f"), [])
        self.assertEqual(fake_volume.attached_to, [])

    def test_instance_rebuilt(self):
        i = a(instance())

        (flexmock(self.database_adapter)
            .should_receive("get_active_entity")
            .and_return(i)
            .twice())
        (flexmock(self.database_adapter)
            .should_receive("close_active_entity")
            .once())
        (flexmock(self.database_adapter)
            .should_receive("insert_entity")
            .once())

        self.controller.rebuild_instance("an_instance_id", "some_distro", "some_version", "2015-10-21T16:25:00.000000Z")
        self.controller.rebuild_instance("an_instance_id", i.os.distro, i.os.version, "2015-10-21T16:25:00.000000Z")

    def test_rename_volume(self):
        fake_volume = a(volume().with_display_name('old_volume_name'))

        volume_name = 'new_volume_name'

        (flexmock(self.database_adapter)
            .should_receive("get_active_entity")
            .with_args(fake_volume.entity_id)
            .and_return(fake_volume)
            .once())

        new_volume = a(volume().build_from(fake_volume).with_display_name(volume_name))

        (flexmock(self.database_adapter)
            .should_receive("update_active_entity")
            .with_args(new_volume)
            .once())

        self.controller.rename_volume(fake_volume.entity_id, volume_name)

    def test_volume_type_created(self):
        fake_volume_type = a(volume_type())

        (flexmock(self.database_adapter)
            .should_receive("insert_volume_type")
            .with_args(fake_volume_type)
            .once())

        self.controller.create_volume_type(fake_volume_type.volume_type_id, fake_volume_type.volume_type_name)

    def test_get_volume_type(self):
        some_volume = a(volume_type())
        (flexmock(self.database_adapter)
            .should_receive("get_volume_type")
            .and_return(some_volume)
            .once())

        returned_volume_type = self.controller.get_volume_type(some_volume.volume_type_id)

        self.assertEqual(some_volume, returned_volume_type)

    def test_delete_volume_type(self):
        some_volume = a(volume_type())
        (flexmock(self.database_adapter)
            .should_receive("delete_volume_type")
            .once())

        self.controller.delete_volume_type(some_volume.volume_type_id)

    def test_list_volume_types(self):
        some_volumes = [a(volume_type()), a(volume_type())]
        (flexmock(self.database_adapter)
            .should_receive("list_volume_types")
            .and_return(some_volumes)
            .once())

        self.assertEqual(len(self.controller.list_volume_types()), 2)
408
tests/messages.py
Normal file
@ -0,0 +1,408 @@
# Copyright 2016 Internap.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from datetime import datetime, timedelta
import dateutil.parser
import pytz

DEFAULT_VOLUME_TYPE = "5dadd67f-e21e-4c13-b278-c07b73b21250"


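# Each factory below builds a sample OpenStack notification payload (for
# events such as "compute.instance.create.end" or "volume.delete.end") with
# sensible defaults, so a test can override only the fields it cares about.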
def get_instance_create_end_sample(instance_id=None, tenant_id=None, flavor_name=None,
|
||||
creation_timestamp=None, name=None, os_distro=None, os_version=None, metadata={}):
|
||||
kwargs = {
|
||||
"instance_id": instance_id or "e7d44dea-21c1-452c-b50c-cbab0d07d7d3",
|
||||
"tenant_id": tenant_id or "0be9215b503b43279ae585d50a33aed8",
|
||||
"hostname": name or "to.to",
|
||||
"display_name": name or "to.to",
|
||||
"instance_type": flavor_name or "myflavor",
|
||||
"os_distro": os_distro or "CentOS",
|
||||
"os_version": os_version or "6.4",
|
||||
"created_at": creation_timestamp if creation_timestamp else datetime(2014, 2, 14, 16, 29, 58, tzinfo=pytz.utc),
|
||||
"launched_at": creation_timestamp + timedelta(seconds=1) if creation_timestamp else datetime(2014, 2, 14, 16, 30, 02, tzinfo=pytz.utc),
|
||||
"terminated_at": None,
|
||||
"deleted_at": None,
|
||||
"state": "active",
|
||||
"metadata": metadata
|
||||
}
|
||||
kwargs["timestamp"] = kwargs["launched_at"] + timedelta(microseconds=200000)
|
||||
return _get_instance_payload("compute.instance.create.end", **kwargs)
def get_instance_delete_end_sample(instance_id=None, tenant_id=None, flavor_name=None, os_distro=None,
                                   os_version=None, creation_timestamp=None, deletion_timestamp=None, name=None):
    kwargs = {
        "instance_id": instance_id,
        "tenant_id": tenant_id,
        "hostname": name,
        "display_name": name,
        "instance_type": flavor_name,
        "os_distro": os_distro or "centos",
        "os_version": os_version or "6.4",
        "created_at": creation_timestamp if creation_timestamp else datetime(2014, 2, 14, 16, 29, 58, tzinfo=pytz.utc),
        "launched_at": creation_timestamp + timedelta(seconds=1) if creation_timestamp else datetime(2014, 2, 14, 16, 30, 2, tzinfo=pytz.utc),
        "terminated_at": deletion_timestamp if deletion_timestamp else datetime(2014, 2, 18, 12, 5, 23, tzinfo=pytz.utc),
        "deleted_at": deletion_timestamp if deletion_timestamp else datetime(2014, 2, 18, 12, 5, 23, tzinfo=pytz.utc),
        "state": "deleted"
    }
    kwargs["timestamp"] = kwargs["terminated_at"] + timedelta(microseconds=200000)
    return _get_instance_payload("compute.instance.delete.end", **kwargs)
def get_volume_create_end_sample(volume_id=None, tenant_id=None, volume_type=None, volume_size=None,
                                 creation_timestamp=None, name=None):
    kwargs = {
        "volume_id": volume_id or "64a0ca7f-5f5a-4dc5-a1e1-e04e89eb95ed",
        "tenant_id": tenant_id or "46eeb8e44298460899cf4b3554bfe11f",
        "display_name": name or "mytenant-0001-myvolume",
        "volume_type": volume_type or DEFAULT_VOLUME_TYPE,
        "volume_size": volume_size or 50,
        "created_at": creation_timestamp if creation_timestamp else datetime(2014, 2, 14, 17, 18, 35, tzinfo=pytz.utc),
        "launched_at": creation_timestamp + timedelta(seconds=1) if creation_timestamp else datetime(2014, 2, 14, 17, 18, 40, tzinfo=pytz.utc),
        "status": "available"
    }
    kwargs["timestamp"] = kwargs["launched_at"] + timedelta(microseconds=200000)
    return _get_volume_icehouse_payload("volume.create.end", **kwargs)
def get_volume_delete_end_sample(volume_id=None, tenant_id=None, volume_type=None, volume_size=None,
                                 creation_timestamp=None, deletion_timestamp=None, name=None):
    kwargs = {
        "volume_id": volume_id or "64a0ca7f-5f5a-4dc5-a1e1-e04e89eb95ed",
        "tenant_id": tenant_id or "46eeb8e44298460899cf4b3554bfe11f",
        "display_name": name or "mytenant-0001-myvolume",
        "volume_type": volume_type or DEFAULT_VOLUME_TYPE,
        "volume_size": volume_size or 50,
        "created_at": creation_timestamp if creation_timestamp else datetime(2014, 2, 14, 17, 18, 35, tzinfo=pytz.utc),
        "launched_at": deletion_timestamp if deletion_timestamp else datetime(2014, 2, 14, 17, 18, 40, tzinfo=pytz.utc),
        "timestamp": deletion_timestamp if deletion_timestamp else datetime(2014, 2, 23, 8, 1, 58, tzinfo=pytz.utc),
        "status": "deleting"
    }
    return _get_volume_icehouse_payload("volume.delete.end", **kwargs)
def get_volume_attach_icehouse_end_sample(volume_id=None, tenant_id=None, volume_type=None, volume_size=None,
                                          creation_timestamp=None, name=None, attached_to=None):
    kwargs = {
        "volume_id": volume_id or "64a0ca7f-5f5a-4dc5-a1e1-e04e89eb95ed",
        "tenant_id": tenant_id or "46eeb8e44298460899cf4b3554bfe11f",
        "display_name": name or "mytenant-0001-myvolume",
        "volume_type": volume_type or DEFAULT_VOLUME_TYPE,
        "volume_size": volume_size or 50,
        "attached_to": attached_to or "e7d44dea-21c1-452c-b50c-cbab0d07d7d3",
        "created_at": creation_timestamp if creation_timestamp else datetime(2014, 2, 14, 17, 18, 35, tzinfo=pytz.utc),
        "launched_at": creation_timestamp + timedelta(seconds=1) if creation_timestamp else datetime(2014, 2, 14, 17, 18, 40, tzinfo=pytz.utc),
        "timestamp": creation_timestamp + timedelta(seconds=1) if creation_timestamp else datetime(2014, 2, 14, 17, 18, 40, tzinfo=pytz.utc),
    }
    return _get_volume_icehouse_payload("volume.attach.end", **kwargs)
def get_volume_attach_kilo_end_sample(volume_id=None, tenant_id=None, volume_type=None, volume_size=None,
                                      timestamp=None, name=None, attached_to=None):
    kwargs = {
        "volume_id": volume_id or "64a0ca7f-5f5a-4dc5-a1e1-e04e89eb95ed",
        "tenant_id": tenant_id or "46eeb8e44298460899cf4b3554bfe11f",
        "display_name": name or "mytenant-0001-myvolume",
        "volume_type": volume_type or DEFAULT_VOLUME_TYPE,
        "volume_size": volume_size or 50,
        "attached_to": attached_to,
        "timestamp": timestamp + timedelta(seconds=1) if timestamp else datetime(2014, 2, 14, 17, 18, 40, tzinfo=pytz.utc),
    }
    return _get_volume_kilo_payload("volume.attach.end", **kwargs)
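# Note the version difference: the Icehouse-style samples carry a single
# instance UUID (rendered as "instance_uuid" in the payload), while the
# Kilo-style samples expect "attached_to" to be a list of instance IDs,
# rendered as a "volume_attachment" list by _get_volume_kilo_payload below.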
def get_volume_detach_kilo_end_sample(volume_id=None, tenant_id=None, volume_type=None, volume_size=None,
                                      timestamp=None, name=None, attached_to=None):
    kwargs = {
        "volume_id": volume_id or "64a0ca7f-5f5a-4dc5-a1e1-e04e89eb95ed",
        "tenant_id": tenant_id or "46eeb8e44298460899cf4b3554bfe11f",
        "display_name": name or "mytenant-0001-myvolume",
        "volume_type": volume_type or DEFAULT_VOLUME_TYPE,
        "volume_size": volume_size or 50,
        "attached_to": attached_to,
        "timestamp": timestamp + timedelta(seconds=1) if timestamp else datetime(2014, 2, 14, 17, 18, 40, tzinfo=pytz.utc),
    }
    return _get_volume_kilo_payload("volume.detach.end", **kwargs)
def get_volume_detach_end_sample(volume_id=None, tenant_id=None, volume_type=None, volume_size=None,
                                 creation_timestamp=None, deletion_timestamp=None, name=None):
    kwargs = {
        "volume_id": volume_id or "64a0ca7f-5f5a-4dc5-a1e1-e04e89eb95ed",
        "tenant_id": tenant_id or "46eeb8e44298460899cf4b3554bfe11f",
        "display_name": name or "mytenant-0001-myvolume",
        "volume_type": volume_type or DEFAULT_VOLUME_TYPE,
        "volume_size": volume_size or 50,
        "attached_to": None,
        "created_at": creation_timestamp if creation_timestamp else datetime(2014, 2, 14, 17, 18, 35, tzinfo=pytz.utc),
        "launched_at": creation_timestamp + timedelta(seconds=1) if creation_timestamp else datetime(2014, 2, 14, 17, 18, 40, tzinfo=pytz.utc),
        "timestamp": deletion_timestamp if deletion_timestamp else datetime(2014, 2, 23, 8, 1, 58, tzinfo=pytz.utc),
        "status": "detach"
    }
    return _get_volume_icehouse_payload("volume.detach.end", **kwargs)
def get_volume_rename_end_sample(volume_id=None, tenant_id=None, volume_type=None, volume_size=None,
                                 creation_timestamp=None, deletion_timestamp=None, name=None):
    kwargs = {
        "volume_id": volume_id or "64a0ca7f-5f5a-4dc5-a1e1-e04e89eb95ed",
        "tenant_id": tenant_id or "46eeb8e44298460899cf4b3554bfe11f",
        "display_name": name or "mytenant-0001-mysnapshot01",
        "volume_type": volume_type or DEFAULT_VOLUME_TYPE,
        "volume_size": volume_size or 50,
        "attached_to": None,
        "created_at": creation_timestamp if creation_timestamp else datetime(2014, 2, 14, 17, 18, 35, tzinfo=pytz.utc),
        "launched_at": creation_timestamp + timedelta(seconds=1) if creation_timestamp else datetime(2014, 2, 14, 17, 18, 40, tzinfo=pytz.utc),
        "timestamp": deletion_timestamp if deletion_timestamp else datetime(2014, 2, 23, 8, 1, 58, tzinfo=pytz.utc),
        "status": "detach"
    }
    return _get_volume_icehouse_payload("volume.update.end", **kwargs)
def get_volume_exists_sample(volume_id=None, tenant_id=None, volume_type=None, volume_size=None,
                             creation_timestamp=None, deletion_timestamp=None, name=None):
    kwargs = {
        "volume_id": volume_id or "64a0ca7f-5f5a-4dc5-a1e1-e04e89eb95ed",
        "tenant_id": tenant_id or "46eeb8e44298460899cf4b3554bfe11f",
        "display_name": name or "mytenant-0001-mysnapshot",
        "volume_type": volume_type or DEFAULT_VOLUME_TYPE,
        "volume_size": volume_size or 50,
        "attached_to": None,
        "created_at": creation_timestamp if creation_timestamp else datetime(2014, 2, 14, 17, 18, 35, tzinfo=pytz.utc),
        "launched_at": creation_timestamp + timedelta(seconds=1) if creation_timestamp else datetime(2014, 2, 14, 17, 18, 40, tzinfo=pytz.utc),
        "timestamp": deletion_timestamp if deletion_timestamp else datetime(2014, 2, 23, 8, 1, 58, tzinfo=pytz.utc),
        "status": "detach"
    }
    return _get_volume_icehouse_payload("volume.exists", **kwargs)
def _format_date(datetime_obj):
    return datetime_obj.strftime("%Y-%m-%dT%H:%M:%S.%fZ")
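# A quick sanity check of the format produced above:
#     _format_date(datetime(2014, 2, 14, 16, 29, 58)) == "2014-02-14T16:29:58.000000Z"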
def _get_instance_payload(event_type, instance_id=None, tenant_id=None, hostname=None, display_name=None,
                          instance_type=None, instance_flavor_id=None, timestamp=None, created_at=None,
                          launched_at=None, deleted_at=None, terminated_at=None, state=None, os_type=None,
                          os_distro=None, os_version=None, metadata=None):
    instance_id = instance_id or "e7d44dea-21c1-452c-b50c-cbab0d07d7d3"
    os_type = os_type or "linux"
    os_distro = os_distro or "centos"
    os_version = os_version or "6.4"
    hostname = hostname or "to.to"
    display_name = display_name or "to.to"
    tenant_id = tenant_id or "0be9215b503b43279ae585d50a33aed8"
    instance_type = instance_type or "myflavor"
    instance_flavor_id = instance_flavor_id or "201"
    timestamp = timestamp if timestamp else "2014-02-14T16:30:10.453532Z"
    created_at = _format_date(created_at) if created_at else "2014-02-14T16:29:58.000000Z"
    launched_at = _format_date(launched_at) if launched_at else "2014-02-14T16:30:10.221171Z"
    deleted_at = _format_date(deleted_at) if deleted_at else ""
    terminated_at = _format_date(terminated_at) if terminated_at else ""
    state = state or "active"
    metadata = metadata or {}

    if not isinstance(timestamp, datetime):
        timestamp = dateutil.parser.parse(timestamp)

    return {
        "event_type": event_type,
        "payload": {
            "state_description": "",
            "availability_zone": None,
            "terminated_at": terminated_at,
            "ephemeral_gb": 0,
            "instance_type_id": 12,
            "message": "Success",
            "deleted_at": deleted_at,
            "memory_mb": 1024,
            "user_id": "2525317304464dc3a03f2a63e99200c8",
            "reservation_id": "r-7e68nhfk",
            "hostname": hostname,
            "state": state,
            "launched_at": launched_at,
            "node": "mynode.domain.tld",
            "ramdisk_id": "",
            "access_ip_v6": None,
            "disk_gb": 50,
            "access_ip_v4": None,
            "kernel_id": "",
            "image_name": "CentOS 6.4 x86_64",
            "host": "node02",
            "display_name": display_name,
            "root_gb": 50,
            "tenant_id": tenant_id,
            "created_at": created_at,
            "instance_id": instance_id,
            "instance_type": instance_type,
            "vcpus": 1,
            "image_meta": {
                "min_disk": "50",
                "container_format": "bare",
                "min_ram": "256",
                "disk_format": "qcow2",
                "build_version": "68",
                "version": os_version,
                "architecture": "x86_64",
                "auto_disk_config": "True",
                "os_type": os_type,
                "base_image_ref": "ea0d5e26-a272-462a-9333-1e38813bac7b",
                "distro": os_distro
            },
            "architecture": "x86_64",
            "os_type": "linux",
            "instance_flavor_id": instance_flavor_id,
            "metadata": metadata
        },
        "timestamp": timestamp.strftime("%Y-%m-%dT%H:%M:%S.%fZ"),
        "updated_at": _format_date(timestamp - timedelta(seconds=10)),
    }
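# _get_instance_payload builds the full notification envelope: event
# metadata at the top level and the instance details under "payload",
# modeled on what Nova publishes on the message bus. Callers override
# only the fields a given test cares about; everything else keeps the
# fixed defaults above.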
def _get_volume_icehouse_payload(event_type, volume_id=None, tenant_id=None, display_name=None, volume_type=None,
                                 volume_size=None, timestamp=None, created_at=None, launched_at=None, status=None,
                                 attached_to=None):
    volume_id = volume_id or "64a0ca7f-5f5a-4dc5-a1e1-e04e89eb95ed"
    tenant_id = tenant_id or "46eeb8e44298460899cf4b3554bfe11f"
    display_name = display_name or "mytenant-0001-myvolume"
    volume_type = volume_type or DEFAULT_VOLUME_TYPE
    volume_size = volume_size or 50
    timestamp = timestamp if timestamp else "2014-02-14T17:18:40.888401Z"
    created_at = _format_date(created_at) if created_at else "2014-02-14T17:18:35.000000Z"
    launched_at = _format_date(launched_at) if launched_at else "2014-02-14T17:18:40.765844Z"
    status = status or "available"
    attached_to = attached_to or "e7d44dea-21c1-452c-b50c-cbab0d07d7d3"

    if not isinstance(timestamp, datetime):
        timestamp = dateutil.parser.parse(timestamp)

    return {
        "event_type": event_type,
        "timestamp": launched_at,
        "publisher_id": "volume.cinder01",
        "payload": {
            "instance_uuid": attached_to,
            "status": status,
            "display_name": display_name,
            "availability_zone": "nova",
            "tenant_id": tenant_id,
            "created_at": created_at,
            "snapshot_id": None,
            "volume_type": volume_type,
            "volume_id": volume_id,
            "user_id": "ebc0d5a5ecf3417ca0d4f8c90d682f6e",
            "launched_at": launched_at,
            "size": volume_size,
        },
        "priority": "INFO",
        "updated_at": _format_date(timestamp - timedelta(seconds=10)),
    }
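# Note: the Icehouse envelope stamps its top-level "timestamp" with the
# launched_at value; the "timestamp" argument itself only drives "updated_at".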
def _get_volume_kilo_payload(event_type, volume_id=None, tenant_id=None, display_name=None, volume_type=None,
                             timestamp=None, attached_to=None, volume_size=1):
    volume_id = volume_id or "64a0ca7f-5f5a-4dc5-a1e1-e04e89eb95ed"
    tenant_id = tenant_id or "46eeb8e44298460899cf4b3554bfe11f"
    display_name = display_name or "mytenant-0001-myvolume"
    volume_type = volume_type or DEFAULT_VOLUME_TYPE
    timestamp = timestamp if timestamp else "2014-02-14T17:18:40.888401Z"
    # Tolerate attached_to=None so detach samples don't fail on iteration.
    attached_to = attached_to or []
    volume_attachment = []

    if not isinstance(timestamp, datetime):
        timestamp = dateutil.parser.parse(timestamp)

    for instance_id in attached_to:
        volume_attachment.append({
            "instance_uuid": instance_id,
            "attach_time": _format_date(timestamp - timedelta(seconds=10)),
            "deleted": False,
            "attach_mode": "ro",
            "created_at": _format_date(timestamp - timedelta(seconds=10)),
            "attached_host": "",
            "updated_at": _format_date(timestamp - timedelta(seconds=10)),
            "attach_status": "available",
            "detach_time": "",
            "volume_id": volume_id,
            "mountpoint": "/dev/vdd",
            "deleted_at": "",
            "id": "228345ee-0520-4d45-86fa-1e4c9f8d057d"
        })

    return {
        "event_type": event_type,
        "timestamp": _format_date(timestamp),
        "publisher_id": "volume.cinder01",
        "payload": {
            "status": "in-use",
            "display_name": display_name,
            "volume_attachment": volume_attachment,
            "availability_zone": "nova",
            "tenant_id": tenant_id,
            "created_at": "2015-07-27T16:11:07Z",
            "volume_id": volume_id,
            "volume_type": volume_type,
            "host": "web@lvmdriver-1#lvmdriver-1",
            "replication_status": "disabled",
            "user_id": "aa518ac79d4c4d61b806e64600fcad21",
            "metadata": [],
            "launched_at": "2015-07-27T16:11:08Z",
            "size": volume_size
        },
        "priority": "INFO",
        "updated_at": _format_date(timestamp - timedelta(seconds=10)),
    }
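# A detached Kilo sample can therefore be built with an empty attachment list:
#     get_volume_detach_kilo_end_sample(attached_to=[])
# and an attach sample with a single instance:
#     get_volume_attach_kilo_end_sample(attached_to=["e7d44dea-21c1-452c-b50c-cbab0d07d7d3"])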
def get_instance_rebuild_end_sample():
    return _get_instance_payload("compute.instance.rebuild.end")


def get_instance_resized_end_sample():
    return _get_instance_payload("compute.instance.resize.confirm.end")
def get_volume_update_end_sample(volume_id=None, tenant_id=None, volume_type=None, volume_size=None,
                                 creation_timestamp=None, deletion_timestamp=None, name=None):
    kwargs = {
        "volume_id": volume_id or "64a0ca7f-5f5a-4dc5-a1e1-e04e89eb95ed",
        "tenant_id": tenant_id or "46eeb8e44298460899cf4b3554bfe11f",
        "display_name": name or "mytenant-0001-myvolume",
        "volume_type": volume_type or DEFAULT_VOLUME_TYPE,
        "volume_size": volume_size or 50,
        "created_at": creation_timestamp if creation_timestamp else datetime(2014, 2, 14, 17, 18, 35, tzinfo=pytz.utc),
        "launched_at": deletion_timestamp if deletion_timestamp else datetime(2014, 2, 23, 8, 1, 58, tzinfo=pytz.utc),
        "timestamp": deletion_timestamp if deletion_timestamp else datetime(2014, 2, 23, 8, 1, 58, tzinfo=pytz.utc),
        "status": "deleting"
    }
    return _get_volume_icehouse_payload("volume.resize.end", **kwargs)
def get_volume_type_create_sample(volume_type_id, volume_type_name):
    return {
        "event_type": "volume_type.create",
        "publisher_id": "volume.cinder01",
        "payload": {
            "volume_types": {
                "name": volume_type_name,
                "qos_specs_id": None,
                "deleted": False,
                "created_at": "2014-02-14T17:18:35.036186Z",
                "extra_specs": {},
                "deleted_at": None,
                "id": volume_type_id,
            }
        },
        "updated_at": "2014-02-14T17:18:35.036186Z",
    }
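# Note the quirk in the "volume_type.create" payload above: the key is the
# plural "volume_types" even though it carries a single volume-type object.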