seeding the project

Change-Id: I4235f6e837c85a062fda3fad188cd9339f15243d
Tong Li 2015-08-06 21:01:26 -04:00
parent 4f5481d09f
commit 163c121232
116 changed files with 14333 additions and 0 deletions

.coveragerc Executable file

@ -0,0 +1,7 @@
[run]
branch = True
source = kiloeyes
omit = kiloeyes/tests/*,kiloeyes/openstack/*
[report]
ignore-errors = True

.gitignore vendored Executable file

@ -0,0 +1,58 @@
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
# C extensions
*.so
# Distribution / packaging
.Python
.testrepository/
.venv/
env/
bin/
build/
develop-eggs/
dist/
eggs/
lib/
lib64/
parts/
sdist/
var/
covhtml/
cover/
*.egg-info/
.installed.cfg
*.egg
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.cache
nosetests.xml
coverage.xml
# Translations
*.mo
# Mr Developer
.mr.developer.cfg
.project
.pydevproject
# Rope
.ropeproject
# Django stuff:
*.log
*.pot
# Sphinx documentation
docs/_build/

.testr.conf Executable file

@ -0,0 +1,8 @@
[DEFAULT]
test_command=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-60} \
${PYTHON:-python} -m subunit.run discover kiloeyes $LISTOPT $IDOPTION
test_id_option=--load-list $IDFILE
test_list_option=--list

AUTHORS Executable file

@ -0,0 +1,12 @@
Maintainer
----------
OpenStack Foundation
IRC: #openstack-kiloeyes on irc.freenode.net
Original Authors
----------------
Tong Li (litong01@us.ibm.com)
Contributors
------------
Tong Li (litong01@us.ibm.com)

CHANGELOG Executable file

@ -0,0 +1,7 @@
kiloeyes (1.0)
* Initial project setup
Chose a framework stack of wsgiref, PasteDeploy and Falcon.
The server will be a wsgiref server, like other OpenStack servers.
Use PasteDeploy to allow WSGI pipelines.
Use the Falcon framework to implement RESTful API services.

CONTRIBUTING.md Executable file

@ -0,0 +1,13 @@
If you would like to contribute to the development of OpenStack,
you must follow the steps in the "If you're a developer, start here"
section of this page: [http://wiki.openstack.org/HowToContribute](http://wiki.openstack.org/HowToContribute#If_you.27re_a_developer.2C_start_here:)
Once those steps have been completed, changes to OpenStack
should be submitted for review via the Gerrit tool, following
the workflow documented at [http://wiki.openstack.org/GerritWorkflow](http://wiki.openstack.org/GerritWorkflow).
Gerrit is the review system used in the OpenStack projects. We're sorry, but
we won't be able to respond to pull requests submitted through GitHub.
Bugs should be filed [on Launchpad](https://bugs.launchpad.net/kiloeyes),
not in GitHub's issue tracker.

LICENSE Executable file

@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

README.md Executable file

@ -0,0 +1,136 @@
Kiloeyes
========
Kiloeyes is monitoring software that collects data from any compute
system.
Install Prerequisites
=====================
The Kiloeyes install process installs Kiloeyes and most of its dependencies
automatically. However, some components cannot be installed automatically by
the Python setup tools and have to be installed manually. These components
are Python setup tools, python-dev, python-pip and gunicorn. Follow the
steps below to install the dependencies:
The typical process for installing setup tools is to download the tar.gz
file, extract it with tar -xvf, and run python setup.py install; you can
also reference this page:
https://pypi.python.org/pypi/setuptools
To install python-dev and pip, run the following command:
sudo apt-get install python-dev python-pip
To install gunicorn, run the following command:
sudo pip install gunicorn==19.1.0
Kiloeyes depends on Kafka and ElasticSearch, both of which require Java. If
you do not already have Java, Kafka and ElasticSearch running, you will have
to install them. Please refer to the respective documentation on how to
install Java, Kafka and ElasticSearch::
http://www.java.com
http://kafka.apache.org/documentation.html#introduction
https://www.elastic.co/products/elasticsearch
Install Kiloeyes
================
Get the source code::
git clone https://github.com/openstack/kiloeyes.git
Go to the root directory of the project and run the following command:
sudo python setup.py install
If Kiloeyes installs successfully, you can then make changes to the following
two files to reflect your system settings, such as Kafka server locations::
/etc/kiloeyes/kiloeyes.ini
/etc/kiloeyes/kiloeyes.conf
Once the configurations are modified to match your environment, you can start
up various services by following these instructions.
To start the API server, run the following command:
Running the server in foreground mode:
gunicorn -k eventlet --worker-connections=2000 --backlog=1000 \
  --paste /etc/kiloeyes/kiloeyes.ini
Running the server as a daemon:
gunicorn -k eventlet --worker-connections=2000 --backlog=1000 \
  --paste /etc/kiloeyes/kiloeyes.ini -D
To start a Kiloeyes microservice server, run the following command:
kiloeyes-service --config-file /etc/kiloeyes/xxxx.conf
where xxxx.conf should be a micro service specific configuration file.
Currently the following services are available:
Metrics persister service:
kiloeyes-service --config-file /etc/kiloeyes/metrics-persister.conf
Alarm persister service:
kiloeyes-service --config-file /etc/kiloeyes/alarms-persister.conf
Notification service:
kiloeyes-service --config-file /etc/kiloeyes/kiloeyes-notification-engine.conf
Threshold service:
kiloeyes-service --config-file /etc/kiloeyes/kiloeyes-threshold-engine.conf
In the future, there might be other services such as anomaly detection.
All these services take a service-specific configuration file to launch.
Here are the examples:
kiloeyes-service --config-file /etc/kiloeyes/kiloeyes-anomaly.conf
If you are developing kiloeyes and would like to run all the services in one
screen session with default configurations, you can simply run the following
command from the kiloeyes root directory:
screen -c kiloeyes
Kiloeyes Integration with OpenStack Horizon
===========================================
To integrate with Horizon, two projects (monasca-ui and python-monascaclient)
have to be installed. The steps on how to install these two projects can be
found here::
https://github.com/stackforge/monasca-ui
https://github.com/stackforge/python-monascaclient
Once both projects are installed, some configuration is needed:
Copy _60_monitoring.py to the Horizon openstack_dashboard/local/enabled
directory.
Run the following command to create the service and endpoint:
setup_horizon.sh
Kiloeyes Development
====================
To check if the code follows the Python coding style, run the following
command from the root directory of this project:
./run_tests.sh -p
To run all the unit test cases, run the following command from the root
directory of this project:
./run_tests.sh
To see the unit test case coverage, run the following command from the root
directory of the project:
./run_tests.sh -c
If the command runs successfully, a set of files will be created in a
directory named covhtml under the root directory. Open index.html in a
browser to see the summary and details of the unit test coverage.

_60_monitoring.py Executable file

@ -0,0 +1,40 @@
#!/usr/bin/env python
#
# Copyright 2013 IBM Corp
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# The name of the dashboard to be added to HORIZON['dashboards']. Required.
DASHBOARD = "monitoring"
# A list of applications to be added to INSTALLED_APPS.
ADD_INSTALLED_APPS = ['monitoring']
# A list of angular modules to be added as dependencies to horizon app.
ADD_ANGULAR_MODULES = ['monitoringApp']
# A list of javascript files to be included for all pages
ADD_JS_FILES = ['monitoring/js/app.js',
'monitoring/js/controllers.js',
'monitoring/js/ng-tags-input.js']
from monascaclient import exc
# A dictionary of exception classes to be added to HORIZON['exceptions'].
ADD_EXCEPTIONS = {
'recoverable': (exc.HTTPUnProcessable, exc.HTTPConflict,
exc.HTTPException),
'not_found': (exc.HTTPNotFound,),
'unauthorized': (exc.HTTPUnauthorized,),
}
# If set to True, this dashboard will not be added to the settings.
DISABLED = False

babel.cfg Executable file

@ -0,0 +1,2 @@
[python: **.py]

etc/alarms-persister.conf Executable file

@ -0,0 +1,56 @@
[DEFAULT]
#logging, make sure that the user under whom the server runs has permission
#to write to the directory.
log_file=alarm_persister.log
log_dir=/var/log/kiloeyes/
log_level=DEBUG
default_log_levels = kiloeyes=DEBUG
service = es_persister
threads = 3
[es_persister]
topic = alarms
doc_type = alarms
index_strategy = timed
index_prefix = data_
[timed_strategy]
time_unit = m
frequency = 1
start_date = 2015-01-01
[kafka_opts]
#The endpoint to the kafka server, you can have multiple servers listed here
#for example:
#uri = 10.100.41.114:9092,10.100.41.115:9092,10.100.41.116:9092
uri = 192.168.1.191:9092
#consumer group name
group = datapoints_group
#how many times to try when error occurs
max_retry = 1
#wait time between tries when kafka goes down
wait_time = 1
#use a synchronous or asynchronous connection to kafka
async = False
#send messages in bulk or send messages one by one.
compact = False
#The partitions this connection should listen on when reading from kafka.
#To listen on multiple partitions, for example partitions 1 and 3, the
#configuration should look like the following:
# partitions = 1
# partitions = 3
#defaults to listening on partition 0.
partitions = 0
[es_conn]
uri = http://192.168.1.191:9200
id_field = id
drop_data = False

etc/dashboard.ini Executable file

@ -0,0 +1,17 @@
[DEFAULT]
monasca_root = /opt/kiloeyes
[composite:main]
use = egg:Paste#urlmap
/ = home
[app:home]
use = egg:Paste#static
document_root = %(monasca_root)s/ui
[server:main]
use = egg:gunicorn#main
host = 0.0.0.0
port = 9400
workers = 3
proc_name = kiloeyes-ui

etc/default-mapping.json Executable file

@ -0,0 +1,20 @@
{"_default_":
{"dynamic_templates":
[{"date_template":
{"match": "timestamp",
"match_mapping_type": "date",
"mapping":
{"type": "date"}
}
},
{"string_template":
{"match": "*",
"match_mapping_type": "string",
"mapping":
{"type": "string",
"index": "not_analyzed"
}
}
}]
}
}
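
As an illustration of how this default mapping can be used, here is a minimal sketch that creates an index with it over the ElasticSearch REST API; the server address matches the sample es_conn configuration, and the index name data_20150101 is a made-up example:

import json
import requests

# Load the default mapping shipped in etc/ (run from the project root).
with open('etc/default-mapping.json') as f:
    mapping = json.load(f)

# Create an index with the _default_ mapping so "timestamp" fields are
# typed as dates and all other strings are stored not_analyzed.
body = json.dumps({'mappings': mapping})
res = requests.put('http://192.168.1.191:9200/data_20150101', data=body)
print(res.status_code)  # 200 on success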

etc/kiloeyes-notification-engine.conf Executable file

@ -0,0 +1,62 @@
[DEFAULT]
#logging, make sure that the user under whom the server runs has permission
#to write to the directory.
log_file=notification.log
log_dir=/var/log/kiloeyes/
log_level=DEBUG
default_log_levels = kiloeyes=DEBUG
service = notification_engine
threads = 3
[notificationengine]
topic = alarms
doc_type = notificationmethods
index_strategy = fixed
index_prefix = admin
processor = notification_processor
[fixed_strategy]
index_name =
[mailsender]
username = kiloeyes.notification@gmail.com
password = notification
smtp_host = smtp.gmail.com
port = 25
use_tls = true
[kafka_opts]
#The endpoint to the kafka server, you can have multiple servers listed here
#for example:
#uri = 10.100.41.114:9092,10.100.41.115:9092,10.100.41.116:9092
uri = 192.168.1.191:9092
#consumer group name
group = datapoints_group
#how many times to try when error occurs
max_retry = 1
#wait time between tries when kafka goes down
wait_time = 1
#use a synchronous or asynchronous connection to kafka
async = False
#send messages in bulk or send messages one by one.
compact = False
#The partitions this connection should listen on when reading from kafka.
#To listen on multiple partitions, for example partitions 1 and 3, the
#configuration should look like the following:
# partitions = 1
# partitions = 3
#defaults to listening on partition 0.
partitions = 0
[es_conn]
uri = http://192.168.1.191:9200
time_id =
drop_data = False

etc/kiloeyes-threshold-engine.conf Executable file

@ -0,0 +1,74 @@
[DEFAULT]
#logging, make sure that the user under whom the server runs has permission
#to write to the directory.
log_file=threshold.log
log_dir=/var/log/kiloeyes/
log_level=DEBUG
default_log_levels = kiloeyes=DEBUG
service = threshold_engine
threads = 3
[thresholdengine]
metrics_topic = metrics
alarm_topic = alarms
processor = threshold_processor
check_alarm_interval = 60
[alarmdefinitions]
doc_type = alarmdefinitions
index_strategy = fixed
index_prefix = admin
size = 10000
#query params include name and dimensions
#GET /v2.0/alarm-definitions?name=CPU percent greater than 10&dimensions=hostname:devstack,os=linux
#the alarm definition name filter:
#for example, name = cpu starts the threshold engine only with alarm
#definitions that are about cpu;
#name = (empty) sets no filter
name =
#dimension key/value pairs filter:
#for example, dimensions = hostname:devstack,os=linux starts the engine
#only with alarm definitions that have the related dimensions
dimensions =
#the time interval to retrieve the latest alarm definitions
check_alarm_def_interval = 120
[kafka_opts]
#The endpoint to the kafka server, you can have multiple servers listed here
#for example:
#uri = 10.100.41.114:9092,10.100.41.115:9092,10.100.41.116:9092
uri = 192.168.1.191:9092
#consumer group name
group = datapoints_group
#how many times to try when error occurs
max_retry = 1
#wait time between tries when kafka goes down
wait_time = 1
#use a synchronous or asynchronous connection to kafka
async = False
#send messages in bulk or send messages one by one.
compact = False
#The partitions this connection should listen on when reading from kafka.
#To listen on multiple partitions, for example partitions 1 and 3, the
#configuration should look like the following:
# partitions = 1
# partitions = 3
#defaults to listening on partition 0.
partitions = 0
[es_conn]
uri = http://192.168.1.191:9200
time_id = timestamp
drop_data = False

etc/kiloeyes.conf Executable file

@ -0,0 +1,83 @@
[DEFAULT]
#logging, make sure that the user under whom the server runs has permission
#to write to the directory.
log_file=api.log
log_dir=/var/log/kiloeyes/
log_level=DEBUG
default_log_levels = kiloeyes=DEBUG
dispatcher = metrics
dispatcher = versions
dispatcher = alarmdefinitions
dispatcher = notificationmethods
dispatcher = alarms
[metrics]
topic = metrics
doc_type = metrics
index_strategy = timed
index_prefix = data_
size = 10000
[alarmdefinitions]
doc_type = alarmdefinitions
index_strategy = fixed
index_prefix = admin
size = 10000
[notificationmethods]
doc_type = notificationmethods
index_strategy = fixed
index_prefix = admin
size = 10000
[alarms]
topic = alarms
doc_type = alarms
index_strategy = timed
index_prefix = data_
size = 10000
[fixed_strategy]
index_name =
[timed_strategy]
time_unit = m
frequency = 1
start_date = 2015-01-01
[kafka_opts]
#The endpoint to the kafka server, you can have multiple servers listed here
#for example:
#uri = 10.100.41.114:9092,10.100.41.115:9092,10.100.41.116:9092
uri = 192.168.1.191:9092
#consumer group name
group = datapoints_group
#how many times to try when error occurs
max_retry = 1
#wait time between tries when kafka goes down
wait_time = 1
#use a synchronous or asynchronous connection to kafka
async = False
#send messages in bulk or send messages one by one.
compact = False
#The partitions this connection should listen on when reading from kafka.
#To listen on multiple partitions, for example partitions 1 and 3, the
#configuration should look like the following:
# partitions = 1
# partitions = 3
#defaults to listening on partition 0.
partitions = 0
[es_conn]
uri = http://192.168.1.191:9200
time_id = timestamp
drop_data = False

etc/kiloeyes.ini Executable file

@ -0,0 +1,25 @@
[DEFAULT]
name = kiloeyes
[pipeline:main]
# Add validator to the pipeline so the metrics messages can be validated,
# for example: pipeline = validator api
pipeline = api
[app:api]
paste.app_factory = kiloeyes.api.server:api_app
[filter:login]
use = egg: kiloeyes#login
[filter:inspector]
use = egg: kiloeyes#inspector
[filter:validator]
use = egg: kiloeyes#metric_validator
[server:main]
use = egg:gunicorn#main
host = 0.0.0.0
port = 9090
workers = 1
proc_name = kiloeyes-api

etc/metrics-persister.conf Executable file

@ -0,0 +1,57 @@
[DEFAULT]
#logging, make sure that the user under whom the server runs has permission
#to write to the directory.
log_file=metric_persister.log
log_dir=/var/log/kiloeyes/
log_level=DEBUG
default_log_levels = kiloeyes=DEBUG
service = es_persister
threads = 3
[es_persister]
topic = metrics
doc_type = metrics
index_strategy = timed
index_prefix = data_
processor = metrics_msg_fixer
[timed_strategy]
time_unit = m
frequency = 1
start_date = 2015-01-01
[kafka_opts]
#The endpoint to the kafka server, you can have multiple servers listed here
#for example:
#uri = 10.100.41.114:9092,10.100.41.115:9092,10.100.41.116:9092
uri = 192.168.1.191:9092
#consumer group name
group = datapoints_group
#how many times to try when error occurs
max_retry = 1
#wait time between tries when kafka goes down
wait_time = 1
#use a synchronous or asynchronous connection to kafka
async = False
#send messages in bulk or send messages one by one.
compact = False
#The partitions this connection should listen on when reading from kafka.
#To listen on multiple partitions, for example partitions 1 and 3, the
#configuration should look like the following:
# partitions = 1
# partitions = 3
#defaults to listening on partition 0.
partitions = 0
[es_conn]
uri = http://192.168.1.191:9200
time_id = timestamp
drop_data = False

kiloeyes/__init__.py Normal file

kiloeyes/api/__init__.py Normal file

kiloeyes/api/monasca_api_v2.py Executable file

@ -0,0 +1,138 @@
# Copyright 2013 IBM Corp
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from kiloeyes.common import resource_api
from kiloeyes.openstack.common import log
LOG = log.getLogger(__name__)
class V2API(object):
def __init__(self, global_conf):
LOG.debug('initializing V2API!')
self.global_conf = global_conf
@resource_api.Restify('/', method='get')
def do_get_versions(self, req, res):
res.status = '501 Not Implemented'
@resource_api.Restify('/{version_id}', method='get')
def do_get_version_by_id(self, req, res, version_id):
res.status = '501 Not Implemented'
@resource_api.Restify('/v2.0/metrics/', method='post')
def do_post_metrics(self, req, res):
res.status = '501 Not Implemented'
# This is an extension to kiloeyes spec.
@resource_api.Restify('/v2.0/metrics/{id}', method='get')
def do_get_metrics_by_id(self, req, res, id):
res.status = '501 Not Implemented'
@resource_api.Restify('/v2.0/metrics/', method='get')
def do_get_metrics(self, req, res):
res.status = '501 Not Implemented'
@resource_api.Restify('/v2.0/metrics/measurements', method='get')
def do_get_measurements(self, req, res):
res.status = '501 Not Implemented'
@resource_api.Restify('/v2.0/metrics/names', method='get')
def do_get_metrics_names(self, req, res):
res.status = '501 Not Implemented'
@resource_api.Restify('/v2.0/metrics/statistics')
def do_get_statistics(self, req, res):
res.status = '501 Not Implemented'
# Notification-method APIs
@resource_api.Restify('/v2.0/notification-methods', method='post')
def do_post_notification_methods(self, req, res):
res.status = '501 Not Implemented'
@resource_api.Restify('/v2.0/notification-methods/', method='get')
def do_get_notification_methods(self, req, res):
res.status = '501 Not Implemented'
@resource_api.Restify('/v2.0/notification-methods/{id}', method='get')
def do_get_notification_method_by_id(self, req, res, id):
res.status = '501 Not Implemented'
@resource_api.Restify('/v2.0/notification-methods/{id}', method='put')
def do_put_notification_methods(self, req, res, id):
res.status = '501 Not Implemented'
@resource_api.Restify('/v2.0/notification-methods/{id}', method='delete')
def do_delete_notification_methods(self, req, res, id):
res.status = '501 Not Implemented'
# Alarm-definition APIs
@resource_api.Restify('/v2.0/alarm-definitions/', method='post')
def do_post_alarm_definitions(self, req, res):
res.status = '501 Not Implemented'
@resource_api.Restify('/v2.0/alarm-definitions/', method='get')
def do_get_alarm_definitions(self, req, res):
res.status = '501 Not Implemented'
@resource_api.Restify('/v2.0/alarm-definitions/{id}', method='get')
def do_get_alarm_definition_by_id(self, req, res, id):
res.status = '501 Not Implemented'
@resource_api.Restify('/v2.0/alarm-definitions/{id}', method='put')
def do_put_alarm_definition_by_id(self, req, res, id):
res.status = '501 Not Implemented'
@resource_api.Restify('/v2.0/alarm-definitions/{id}', method='patch')
def do_patch_alarm_definition_by_id(self, req, res, id):
res.status = '501 Not Implemented'
@resource_api.Restify('/v2.0/alarm-definitions/{id}', method='delete')
def do_delete_alarm_definition_by_id(self, req, res, id):
res.status = '501 Not Implemented'
# Alarm APIs
@resource_api.Restify('/v2.0/alarms/', method='get')
def do_get_alarms(self, req, res):
res.status = '501 Not Implemented'
@resource_api.Restify('/v2.0/alarms/state-history', method='get')
def do_get_alarms_state_history(self, req, res):
res.status = '501 Not Implemented'
@resource_api.Restify('/v2.0/alarms/{alarm_id}', method='get')
def do_get_alarm_by_id(self, req, res, alarm_id):
res.status = '501 Not Implemented'
@resource_api.Restify('/v2.0/alarms/{alarm_id}', method='put')
def do_put_alarms(self, req, res, alarm_id):
res.status = '501 Not Implemented'
@resource_api.Restify('/v2.0/alarms/{alarm_id}', method='patch')
def do_patch_alarms(self, req, res, alarm_id):
res.status = '501 Not Implemented'
@resource_api.Restify('/v2.0/alarms/{alarm_id}', method='delete')
def do_delete_alarms(self, req, res, alarm_id):
res.status = '501 Not Implemented'
# This is an extension to the API spec.
@resource_api.Restify('/v2.0/alarms', method='post')
def do_post_alarms(self, req, res):
res.status = '501 Not Implemented'
@resource_api.Restify('/v2.0/alarms/{alarm_id}/state-history')
def do_get_alarm_state_history(self, req, res, alarm_id):
res.status = '501 Not Implemented'
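
A concrete dispatcher implements these endpoints by subclassing V2API and overriding the handlers it supports; kiloeyes/api/server.py below loads such classes through stevedore. A minimal sketch, where SampleDispatcher and its 204 response are hypothetical:

from kiloeyes.api import monasca_api_v2
from kiloeyes.common import resource_api


class SampleDispatcher(monasca_api_v2.V2API):
    """A hypothetical dispatcher overriding a single endpoint."""

    def __init__(self, global_conf):
        super(SampleDispatcher, self).__init__(global_conf)

    @resource_api.Restify('/v2.0/metrics/', method='post')
    def do_post_metrics(self, req, res):
        # Accept the payload instead of answering 501.
        res.status = '204 No Content'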

kiloeyes/api/server.py Executable file

@ -0,0 +1,71 @@
# Copyright 2014 IBM Corp
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
from wsgiref import simple_server
from oslo_config import cfg
import paste.deploy
from stevedore import named
from kiloeyes.common import namespace
from kiloeyes.common import resource_api
from kiloeyes.openstack.common import log
OPTS = [
cfg.MultiStrOpt('dispatcher',
default=[],
help='Dispatchers to process data.'),
]
cfg.CONF.register_opts(OPTS)
LOG = log.getLogger(__name__)
def api_app(conf):
cfg.CONF(args=[], project='kiloeyes')
log_levels = (cfg.CONF.default_log_levels)
cfg.set_defaults(log.log_opts, default_log_levels=log_levels)
log.setup('kiloeyes')
dispatcher_manager = named.NamedExtensionManager(
namespace=namespace.DISPATCHER_NS,
names=cfg.CONF.dispatcher,
invoke_on_load=True,
invoke_args=[cfg.CONF])
if not list(dispatcher_manager):
LOG.error('Failed to load any dispatchers for %s' %
namespace.DISPATCHER_NS)
return None
# Create the application
app = resource_api.ResourceAPI()
# add each dispatcher to the application to serve requests offered by
# each dispatcher
for driver in dispatcher_manager:
app.add_route(None, driver.obj)
LOG.debug('Dispatcher drivers have been added to the routes!')
return app
if __name__ == '__main__':
wsgi_app = (
paste.deploy.loadapp('config:etc/kiloeyes.ini',
relative_to=os.getcwd()))
httpd = simple_server.make_server('127.0.0.1', 9000, wsgi_app)
httpd.serve_forever()
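
The dispatcher names configured in etc/kiloeyes.conf (metrics, versions, alarms, ...) are resolved by stevedore against Python entry points registered under namespace.DISPATCHER_NS. A hypothetical sketch of that wiring in setup.py; the namespace string and the module path are assumptions, not the project's actual layout:

from setuptools import setup

setup(
    name='kiloeyes',
    # other arguments omitted
    entry_points={
        # assumed to match namespace.DISPATCHER_NS
        'kiloeyes.dispatcher': [
            'metrics = kiloeyes.v2.elasticsearch.metrics:MetricDispatcher',
        ],
    },
)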

kiloeyes/common/__init__.py Normal file

kiloeyes/common/alarm_expr_calculator.py Executable file

@ -0,0 +1,91 @@
# Copyright 2015 Carnegie Mellon University
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
agg_ops = {'SUM': sum,
'AVG': lambda x: sum(x) / len(x),
'MAX': max,
'MIN': min,
'COUNT': len}
# Each comparison op maps to its negation: compare_thresh below uses
# these to detect values that do NOT breach the threshold.
comp_ops = {'GT': lambda x, y: x <= y,
'LT': lambda x, y: x >= y,
'LTE': lambda x, y: x > y,
'GTE': lambda x, y: x < y}
STATE_OK = 'OK'
STATE_ALARM = 'ALARM'
STATE_UNDETERMINED = 'UNDETERMINED'
def calc_value(func, data_list):
"""Calc float values according to 5 functions."""
if (func not in agg_ops or
(len(data_list) == 0 and func != 'COUNT')):
return None
else:
return agg_ops[func](data_list)
def compare_thresh(values, op, thresh):
"""Check if value from metrics exceeds thresh.
Only the value in each period meet thresh, the alarm state can be 'ALARM'.
For example, the alarm definition defines 3 periods, values = [a,b,c].
If the value in any period doesn't meet thresh,
then alarm state must be 'OK';
If some values are None (means no metrics in that period)
but all other values meet thresh,
we still don't know if the alarm can be triggered,
so it's 'UNDETERMINED';
otherwise, the state can be 'ALARM'
"""
for value in values:
if value is not None and comp_ops[op](value, thresh):
return STATE_OK
state = STATE_ALARM
for value in values:
if value is None:
state = STATE_UNDETERMINED
return state
def calc_logic(logic_operator, subs):
"""Calc overall state of an alarm expression.
'OK' means False;
'ALARM' means True;
'UNDETERMINED' means either True or False.
"""
if logic_operator == 'AND':
state = 'ALARM'
for o in subs:
if o == 'OK':
return 'OK'
elif o == 'UNDETERMINED':
state = 'UNDETERMINED'
return state
elif logic_operator == 'OR':
state = 'OK'
for o in subs:
if o == 'ALARM':
return 'ALARM'
elif o == 'UNDETERMINED':
state = 'UNDETERMINED'
return state
else:
return 'UNDETERMINED'
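
A short usage sketch of the three helpers above; the module path assumes this file is kiloeyes/common/alarm_expr_calculator.py:

from kiloeyes.common import alarm_expr_calculator as calc

# Aggregate one period's data points.
print(calc.calc_value('AVG', [1.0, 2.0, 3.0]))  # 2.0

# Three periods with operator GT and threshold 10.
print(calc.compare_thresh([12, 15, 11], 'GT', 10))    # ALARM
print(calc.compare_thresh([12, None, 11], 'GT', 10))  # UNDETERMINED
print(calc.compare_thresh([12, 15, 9], 'GT', 10))     # OK

# Combine sub-alarm states with the logical operator.
print(calc.calc_logic('AND', ['ALARM', 'UNDETERMINED']))  # UNDETERMINED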

kiloeyes/common/alarm_expr_parser.py Executable file

@ -0,0 +1,320 @@
# -*- coding: utf-8 -*-
# Copyright 2014 Hewlett-Packard
# Copyright 2015 Carnegie Mellon University
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import itertools
import pyparsing
class SubExpr(object):
def __init__(self, tokens):
self._sub_expr = tokens
self._func = tokens.func
self._metric_name = tokens.metric_name
self._dimensions = tokens.dimensions.dimensions_list
self._operator = tokens.relational_op
self._threshold = tokens.threshold
self._period = tokens.period
self._periods = tokens.periods
self._id = None
@property
def sub_expr_str(self):
"""Get the entire sub expression as a string with no spaces."""
return "".join(list(itertools.chain(*self._sub_expr)))
@property
def fmtd_sub_expr_str(self):
"""Get the entire sub expressions as a string with spaces."""
result = "{}({}".format(self._func.encode('utf8'),
self._metric_name.encode('utf8'))
if self._dimensions:
result += "{{{}}}".format(self._dimensions.encode('utf8'))
if self._period:
result += ", {}".format(self._period.encode('utf8'))
result += ")"
result += " {} {}".format(self._operator.encode('utf8'),
self._threshold.encode('utf8'))
if self._periods:
result += " times {}".format(self._periods.encode('utf8'))
return result.decode('utf8')
@property
def dimensions_str(self):
"""Get all the dimensions as a single comma delimited string."""
return self._dimensions
@property
def operands_list(self):
"""Get this sub expression as a list."""
return [self]
@property
def logic_operator(self):
return None
@property
def sub_expr_list(self):
return []
@property
def func(self):
"""Get the function as it appears in the orig expression."""
return self._func
@property
def normalized_func(self):
"""Get the function upper-cased."""
return self._func.upper()
@property
def metric_name(self):
"""Get the metric name as it appears in the orig expression."""
return self._metric_name
@property
def normalized_metric_name(self):
"""Get the metric name lower-cased."""
return self._metric_name.lower()
@property
def dimensions_as_list(self):
"""Get the dimensions as a list."""
if self._dimensions:
return self._dimensions.split(",")
else:
return []
@property
def dimensions_as_dict(self):
"""Get the dimensions as a dict."""
dimension_dict = {}
for di in self.dimensions_as_list:
temp = di.split("=")
dimension_dict[temp[0]] = temp[1]
return dimension_dict
@property
def operator(self):
"""Get the operator."""
return self._operator
@property
def threshold(self):
"""Get the threshold value."""
return self._threshold
@property
def period(self):
"""Get the period. Default is 60 seconds."""
if self._period:
return self._period
else:
return u'60'
@property
def periods(self):
"""Get the periods. Default is 1."""
if self._periods:
return self._periods
else:
return u'1'
@property
def normalized_operator(self):
"""Get the operator as one of LT, GT, LTE, or GTE."""
if self._operator.lower() == "lt" or self._operator == "<":
return u"LT"
elif self._operator.lower() == "gt" or self._operator == ">":
return u"GT"
elif self._operator.lower() == "lte" or self._operator == "<=":
return u"LTE"
elif self._operator.lower() == "gte" or self._operator == ">=":
return u"GTE"
@property
def id(self):
"""Get the id used to identify this sub expression in the repo."""
return self._id
@id.setter
def id(self, id):
"""Set the d used to identify this sub expression in the repo."""
self._id = id
class BinaryOp(object):
def __init__(self, tokens):
self.op = tokens[0][1]
self.operands = tokens[0][0::2]
if self.op == u'&&' or self.op == u'and':
self.op = u'AND'
if self.op == u'||' or self.op == u'or':
self.op = u'OR'
@property
def operands_list(self):
return ([sub_operand for operand in self.operands for sub_operand in
operand.operands_list])
@property
def logic_operator(self):
return self.op
@property
def sub_expr_list(self):
if self.op:
return self.operands
else:
return []
class AndSubExpr(BinaryOp):
"""Expand later as needed."""
pass
class OrSubExpr(BinaryOp):
"""Expand later as needed."""
pass
COMMA = pyparsing.Literal(",")
LPAREN = pyparsing.Literal("(")
RPAREN = pyparsing.Literal(")")
EQUAL = pyparsing.Literal("=")
LBRACE = pyparsing.Literal("{")
RBRACE = pyparsing.Literal("}")
# Initialize non-ascii unicode code points in the Basic Multilingual Plane.
unicode_printables = u''.join(
unichr(c) for c in xrange(128, 65536) if not unichr(c).isspace())
# Does not like comma. No Literals from above allowed.
valid_identifier_chars = (
(unicode_printables + pyparsing.alphanums + ".-_#!$%&'*+/:;?@[\\]^`|~"))
metric_name = (
pyparsing.Word(valid_identifier_chars, min=1, max=255)("metric_name"))
dimension_name = pyparsing.Word(valid_identifier_chars, min=1, max=255)
dimension_value = pyparsing.Word(valid_identifier_chars, min=1, max=255)
integer_number = pyparsing.Word(pyparsing.nums)
decimal_number = pyparsing.Word(pyparsing.nums + ".")
max = pyparsing.CaselessLiteral("max")
min = pyparsing.CaselessLiteral("min")
avg = pyparsing.CaselessLiteral("avg")
count = pyparsing.CaselessLiteral("count")
sum = pyparsing.CaselessLiteral("sum")
func = (max | min | avg | count | sum)("func")
less_than_op = (
(pyparsing.CaselessLiteral("<") | pyparsing.CaselessLiteral("lt")))
less_than_eq_op = (
(pyparsing.CaselessLiteral("<=") | pyparsing.CaselessLiteral("lte")))
greater_than_op = (
(pyparsing.CaselessLiteral(">") | pyparsing.CaselessLiteral("gt")))
greater_than_eq_op = (
(pyparsing.CaselessLiteral(">=") | pyparsing.CaselessLiteral("gte")))
# Order is important. Put longer prefix first.
relational_op = (
less_than_eq_op | less_than_op | greater_than_eq_op | greater_than_op)(
"relational_op")
AND = pyparsing.CaselessLiteral("and") | pyparsing.CaselessLiteral("&&")
OR = pyparsing.CaselessLiteral("or") | pyparsing.CaselessLiteral("||")
logical_op = (AND | OR)("logical_op")
times = pyparsing.CaselessLiteral("times")
dimension = pyparsing.Group(dimension_name + EQUAL + dimension_value)
# Cannot have any whitespace after the comma delimiter.
dimension_list = pyparsing.Group(pyparsing.Optional(
LBRACE + pyparsing.delimitedList(dimension, delim=',', combine=True)(
"dimensions_list") + RBRACE))
metric = metric_name + dimension_list("dimensions")
period = integer_number("period")
threshold = decimal_number("threshold")
periods = integer_number("periods")
expression = pyparsing.Forward()
sub_expression = (func + LPAREN + metric + pyparsing.Optional(
COMMA + period) + RPAREN + relational_op + threshold + pyparsing.Optional(
times + periods) | LPAREN + expression + RPAREN)
sub_expression.setParseAction(SubExpr)
expression = (
pyparsing.operatorPrecedence(sub_expression,
[(AND, 2, pyparsing.opAssoc.LEFT, AndSubExpr),
(OR, 2, pyparsing.opAssoc.LEFT, OrSubExpr)]))
class AlarmExprParser(object):
def __init__(self, expr):
self._expr = expr
try:
self.parseResult = (expression + pyparsing.stringEnd).parseString(
self._expr.replace(' ', ''))[0]
except Exception:
self.parseResult = None
@property
def parse_result(self):
return self.parseResult
@property
def sub_expr_list(self):
if self.parseResult:
return self.parseResult.operands_list
else:
return None
@property
def related_metrics(self):
"""Get a list of all the metrics related with this expression."""
related_metrics = []
for expr in self.sub_expr_list:
related_metrics.append({
'name': expr.metric_name,
'dimensions': expr.dimensions_as_dict
})
return related_metrics
@property
def sub_alarm_expressions(self):
"""Get a list of all the sub expr parsed information."""
sub_alarm_expr = {}
for expr in self.sub_expr_list:
sub_alarm_expr[expr.fmtd_sub_expr_str] = {
'function': expr.normalized_func,
'metric_name': expr.normalized_metric_name,
'dimensions': expr.dimensions_as_dict,
'operator': expr.normalized_operator,
'threshold': expr.threshold,
'period': expr.period,
'periods': expr.periods
}
return sub_alarm_expr
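
A brief sketch of parsing an alarm expression with this module; the sample expression is illustrative:

from kiloeyes.common import alarm_expr_parser

expr = 'avg(cpu.user_perc{hostname=devstack}, 120) > 10 times 3'
parsed = alarm_expr_parser.AlarmExprParser(expr)
for sub in parsed.sub_expr_list:
    # avg(cpu.user_perc{hostname=devstack}, 120) > 10 times 3
    print(sub.fmtd_sub_expr_str)
    print(sub.normalized_operator)  # GT
    print(sub.dimensions_as_dict)   # {'hostname': 'devstack'}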

kiloeyes/common/alarm_expr_validator.py Executable file

@ -0,0 +1,76 @@
# Copyright 2015 Carnegie Mellon University
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import json
from kiloeyes.common import alarm_expr_parser as parser
key_set = ['expression',
'alarm_actions',
'ok_actions',
'undetermined_actions',
'match_by',
'name',
'description']
def is_valid_alarm_definition(alarm_def_json):
alarm_definition = json.loads(alarm_def_json)
for key in key_set:
if key not in alarm_definition:
return False
expression = alarm_definition['expression']
alarm_parser = parser.AlarmExprParser(expression)
if not alarm_parser.parse_result:
return False
return True
def is_valid_update_alarm_definition(ori_alarm_def_json, new_alarm_def_json):
# both should be valid alarm definition
if (not (is_valid_alarm_definition(ori_alarm_def_json)
and is_valid_alarm_definition(new_alarm_def_json))):
return False
ori_alarm_definition = json.loads(ori_alarm_def_json)
new_alarm_definition = json.loads(new_alarm_def_json)
# match_by should not change
if ori_alarm_definition['match_by'] != new_alarm_definition['match_by']:
return False
ori_expression = ori_alarm_definition['expression']
ori_alarm_parser = parser.AlarmExprParser(ori_expression)
ori_sub_expr_list = ori_alarm_parser.sub_expr_list
new_expression = new_alarm_definition['expression']
new_alarm_parser = parser.AlarmExprParser(new_expression)
new_sub_expr_list = new_alarm_parser.sub_expr_list
# should have same number of sub alarm exprs
l = len(ori_sub_expr_list)
if not new_sub_expr_list or l != len(new_sub_expr_list):
return False
for i in range(l):
sub_expr_ori = ori_sub_expr_list[i]
sub_expr_new = new_sub_expr_list[i]
# each metrics in alarm expr should remain the same
if (sub_expr_ori.normalized_metric_name
!= sub_expr_new.normalized_metric_name):
return False
if (sub_expr_ori.dimensions_as_dict
!= sub_expr_new.dimensions_as_dict):
return False
return True
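
A quick sketch of validating an alarm definition with the helper above; the definition values are illustrative:

import json

from kiloeyes.common import alarm_expr_validator

alarm_def = json.dumps({
    'name': 'CPU percent greater than 10',
    'description': 'Warn when CPU usage is high',
    'expression': 'avg(cpu.user_perc{hostname=devstack}, 120) > 10',
    'match_by': ['hostname'],
    'alarm_actions': [],
    'ok_actions': [],
    'undetermined_actions': []})
print(alarm_expr_validator.is_valid_alarm_definition(alarm_def))  # True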

kiloeyes/common/email_sender.py Executable file

@ -0,0 +1,87 @@
# Copyright 2015 Carnegie Mellon University
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import email.mime.text
from oslo_config import cfg
import smtplib
from kiloeyes.openstack.common import log
MAILSENDER_OPTS = [
cfg.StrOpt('username',
default='kiloeyes.notification@gmail.com',
help='The email account user name.'),
cfg.StrOpt('password',
default='password',
help='The email account user password.'),
cfg.StrOpt('smtp_host', default='smtp.gmail.com',
help='The email service host.'),
cfg.IntOpt('port', default=25,
help='The email service port.'),
cfg.BoolOpt('use_tls', default=True,
help='Set to True if the service uses TLS.'),
]
cfg.CONF.register_opts(MAILSENDER_OPTS, group="mailsender")
LOG = log.getLogger(__name__)
class EmailSender(object):
def __init__(self):
self.username = cfg.CONF.mailsender.username
self.password = cfg.CONF.mailsender.password
self.smtp_host = cfg.CONF.mailsender.smtp_host
self.port = cfg.CONF.mailsender.port
self.use_tls = cfg.CONF.mailsender.use_tls
self.from_addr = self.username
self.smtp = smtplib.SMTP()
LOG.debug('connecting ...')
# connect
try:
self.smtp.connect(self.smtp_host, self.port)
except Exception:
LOG.debug('SMTP Connection error.')
if self.use_tls:
self.smtp.starttls()
# login with username & password
try:
LOG.debug('Login ...')
self.smtp.login(self.username, self.password)
except Exception:
LOG.debug('Login exception.')
def reset(self):
self.__init__()
def send_emails(self, to_addrs, subject, content):
# fill content with MIMEText's object
msg = email.mime.text.MIMEText(content)
msg['From'] = self.from_addr
msg['To'] = ';'.join(to_addrs)
msg['Subject'] = subject
try:
self.smtp.sendmail(self.from_addr, to_addrs, msg.as_string())
LOG.debug('Mail sent to: %s' % str(to_addrs))
return True
except Exception as e:
LOG.debug('Mail sent Exception: %s, reset the sender.' % str(e))
self.reset()
return False
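
A minimal usage sketch, assuming the [mailsender] options have already been loaded from a configuration file; the recipient address is illustrative:

from kiloeyes.common import email_sender

sender = email_sender.EmailSender()
ok = sender.send_emails(['ops@example.com'],
                        'kiloeyes alarm',
                        'cpu.user_perc exceeded its threshold')
print(ok)  # True if the SMTP send succeeded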

kiloeyes/common/es_conn.py Executable file

@ -0,0 +1,142 @@
# Copyright 2012-2013 eNovance <licensing@enovance.com>
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
import requests
import ujson as json
from kiloeyes.openstack.common import log
ES_OPTS = [
cfg.StrOpt('uri',
help='Address of the ElasticSearch server. For example: '
'uri=http://192.168.1.191:9200/'),
cfg.StrOpt('id_field',
default='',
help='The field name for _id.'),
cfg.BoolOpt('drop_data',
default=False,
help=('Specify if received data should be simply dropped. '
'This parameter is only for testing purposes.')),
]
cfg.CONF.register_opts(ES_OPTS, group="es_conn")
LOG = log.getLogger(__name__)
class ESConnection(object):
def __init__(self, doc_type, index_strategy, index_prefix):
if not cfg.CONF.es_conn.uri:
raise Exception('ElasticSearch is not configured correctly! '
'Use configuration file to specify ElasticSearch '
'uri, for example: '
'uri=192.168.1.191:9200')
self.uri = cfg.CONF.es_conn.uri
if self.uri.strip()[-1] != '/':
self.uri += '/'
self.doc_type = doc_type
self.index_strategy = index_strategy
self.index_prefix = index_prefix
self.id_field = cfg.CONF.es_conn.id_field
self.drop_data = cfg.CONF.es_conn.drop_data
self.search_path = '%s%s*/%s/_search' % (self.uri,
self.index_prefix,
self.doc_type)
LOG.debug('ElasticSearch Connection initialized successfully!')
def send_messages(self, msg):
LOG.debug('Prepare to send messages.')
if self.drop_data:
return
else:
# figure out id situation
_id = ''
if self.id_field:
obj = json.loads(msg)
_id = obj.get(self.id_field)
if not _id:
LOG.error('Msg does not have required id field %s' %
self.id_field)
return 400
# index may change over the time, it has to be called for each
# request
index = self.index_strategy.get_index()
path = '%s%s%s/%s/%s' % (self.uri, self.index_prefix,
index, self.doc_type, _id)
res = requests.post(path, data=msg)
LOG.debug('Msg post target=%s' % path)
LOG.debug('Msg posted with response code: %s' % res.status_code)
return res.status_code
def get_messages(self, cond, q_string=""):
LOG.debug('Prepare to get messages.')
if cond:
data = json.dumps(cond)
else:
data = {}
return requests.post(self.search_path + "?" + q_string, data=data)
def get_message_by_id(self, id):
LOG.debug('Prepare to get messages by id.')
path = self.search_path + '?q=_id:' + id
LOG.debug('Search path:' + path)
res = requests.get(path)
LOG.debug('Msg get with response code: %s' % res.status_code)
return res
def post_messages(self, msg, id):
LOG.debug('Prepare to post messages.')
if self.drop_data:
return 204
else:
index = self.index_strategy.get_index()
path = '%s%s%s/%s/' % (self.uri, self.index_prefix,
index, self.doc_type)
res = requests.post(path + id, data=msg)
LOG.debug('Msg post with response code: %s' % res.status_code)
return res.status_code
def put_messages(self, msg, id):
LOG.debug('Prepare to put messages.')
if self.drop_data:
return 204
else:
index = self.index_strategy.get_index()
path = '%s%s%s/%s/' % (self.uri, self.index_prefix,
index, self.doc_type)
res = requests.put(path + id, data=msg)
LOG.debug('Msg put with response code: %s' % res.status_code)
return res.status_code
def del_messages(self, id):
LOG.debug('Prepare to delete messages.')
if self.drop_data:
return 204
else:
index = self.index_strategy.get_index()
path = '%s%s%s/%s/' % (self.uri, self.index_prefix,
index, self.doc_type)
res = requests.delete(path + id)
LOG.debug('Msg delete with response code: %s' % res.status_code)
return res.status_code
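
A usage sketch for ESConnection; the stub strategy only needs to expose get_index(), and the [es_conn] options (uri and friends) must already be loaded, for example via cfg.CONF(...) as in kiloeyes/api/server.py:

from kiloeyes.common import es_conn


class StubStrategy(object):
    """Stand-in for a real index strategy; only get_index() is required."""
    def get_index(self):
        return '20150101'


# doc_type and index_prefix mirror the [metrics] section of etc/kiloeyes.conf.
conn = es_conn.ESConnection('metrics', StubStrategy(), 'data_')
res = conn.get_messages({'query': {'match_all': {}}})
print(res.status_code)  # 200 when the search succeeds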

kiloeyes/common/kafka_conn.py Executable file

@ -0,0 +1,212 @@
# Copyright 2013 IBM Corp
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import time
from kafka import client
from kafka import common
from kafka import consumer
from kafka import producer
from oslo_config import cfg
from oslo_config import types
try:
import ujson as json
except ImportError:
import json
from kiloeyes.openstack.common import log
KAFKA_OPTS = [
cfg.StrOpt('uri', help='Address to kafka server. For example: '
'uri=192.168.1.191:9092'),
cfg.StrOpt('group', default='api',
help='The group name that this service belongs to.'),
cfg.IntOpt('wait_time', default=1,
help='The wait time when no messages on kafka queue.'),
cfg.IntOpt('ack_time', default=20,
help='The ack time back to kafka.'),
cfg.IntOpt('max_retry', default=3,
help='The number of retry when there is a connection error.'),
cfg.BoolOpt('auto_commit', default=True,
help='Whether to automatically commit when consuming messages.'),
cfg.BoolOpt('async', default=True,
help='Whether to post messages asynchronously.'),
cfg.BoolOpt('compact', default=True,
help=('Specify if the message received should be parsed. '
'If True, message will not be parsed, otherwise '
'messages will be parsed.')),
cfg.MultiOpt('partitions', item_type=types.Integer(),
default=[0],
help='The partitions this connection listens on when reading '
'from kafka.'),
cfg.BoolOpt('drop_data', default=False,
help=('Specify if received data should be simply dropped. '
'This parameter is only for testing purposes.')),
]
cfg.CONF.register_opts(KAFKA_OPTS, group="kafka_opts")
LOG = log.getLogger(__name__)
class KafkaConnection(object):
def __init__(self, topic):
if not cfg.CONF.kafka_opts.uri:
raise Exception('Kafka is not configured correctly! '
'Use configuration file to specify Kafka '
'uri, for example: '
'uri=192.168.1.191:9092')
self.uri = cfg.CONF.kafka_opts.uri
self.topic = topic
self.group = cfg.CONF.kafka_opts.group
self.wait_time = cfg.CONF.kafka_opts.wait_time
self.async = cfg.CONF.kafka_opts.async
self.ack_time = cfg.CONF.kafka_opts.ack_time
self.max_retry = cfg.CONF.kafka_opts.max_retry
self.auto_commit = cfg.CONF.kafka_opts.auto_commit
self.compact = cfg.CONF.kafka_opts.compact
self.partitions = cfg.CONF.kafka_opts.partitions
self.drop_data = cfg.CONF.kafka_opts.drop_data
self._client = None
self._consumer = None
self._producer = None
LOG.debug('Kafka Connection initialized successfully!')
def _init_client(self, wait_time=None):
for i in range(self.max_retry):
try:
# if there is a client instance, but _init_client is called
# again, most likely the connection has gone stale, close that
# connection and reconnect.
if self._client:
self._client.close()
if not wait_time:
wait_time = self.wait_time
time.sleep(wait_time)
self._client = client.KafkaClient(self.uri)
# when a client is re-initialized, existing consumer should be
# reset as well.
self._consumer = None
self._producer = None
LOG.debug("Successfully connected to Kafka server at topic: "
"\"%s\" partitions %s" % (self.topic,
self.partitions))
break
except common.KafkaUnavailableError:
LOG.error('Kafka server at %s is down.' % self.uri)
except common.LeaderNotAvailableError:
LOG.error('Kafka at %s has no leader available.' % self.uri)
except Exception:
LOG.error('Kafka at %s initialization failed.' % self.uri)
# Wait a bit and try again to get a client
time.sleep(self.wait_time)
def _init_consumer(self):
try:
if not self._client:
self._init_client()
self._consumer = consumer.SimpleConsumer(
self._client, self.group, self.topic,
auto_commit=self.auto_commit,
partitions=self.partitions)
LOG.debug('Consumer was created successfully.')
except Exception:
self._consumer = None
LOG.exception('Kafka (%s) consumer can not be created.' %
self.uri)
def _init_producer(self):
try:
if not self._client:
self._init_client()
self._producer = producer.SimpleProducer(
self._client, async=self.async, ack_timeout=self.ack_time)
LOG.debug('Producer was created successfully.')
except Exception:
self._producer = None
LOG.exception('Kafka (%s) producer can not be created.' %
self.uri)
def commit(self):
if self._consumer and self.auto_commit:
self._consumer.commit()
def close(self):
if self._client:
self._consumer = None
self._producer = None
self._client.close()
def get_messages(self):
try:
if not self._consumer:
self._init_consumer()
for msg in self._consumer:
if msg.message:
LOG.debug(msg.message.value)
yield msg
except common.OffsetOutOfRangeError:
self._consumer.seek(0, 0)
LOG.error('Seems consumer has been down for a long time.')
yield None
except Exception as ex:
LOG.exception(ex)
self._consumer = None
yield None
def send_messages(self, messages):
LOG.debug('Prepare to send messages.')
if not messages or self.drop_data:
return 204
code = 400
try:
if not self._producer:
self._init_producer()
LOG.debug('Start sending messages to kafka.')
if self.compact:
self._producer.send_messages(self.topic, messages)
else:
data = json.loads(messages)
LOG.debug('Msg parsed successfully.')
if isinstance(data, list):
for item in data:
self._producer.send_messages(
self.topic, json.dumps(item))
else:
self._producer.send_messages(self.topic, messages)
LOG.debug('Message posted successfully.')
code = 204
except (common.KafkaUnavailableError,
common.LeaderNotAvailableError):
self._client = None
code = 503
LOG.exception('Error occurred while posting data to Kafka.')
except ValueError:
code = 406
LOG.exception('Message %s is not valid json.' % messages)
except Exception:
code = 500
LOG.exception('Unknown error.')
return code
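For orientation, a minimal sketch of driving the class above, assuming a [kafka_opts] section with at least uri set:

from kiloeyes.common import kafka_conn

conn = kafka_conn.KafkaConnection('metrics')
# producer side: send_messages returns an http-style code, 204 on success
code = conn.send_messages('{"name": "cpu", "value": 1.0}')
# consumer side: get_messages is a generator yielding kafka messages
for msg in conn.get_messages():
    if msg and msg.message:
        print(msg.message.value)
    conn.commit()
conn.close()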

18
kiloeyes/common/namespace.py Executable file
View File

@ -0,0 +1,18 @@
# Copyright 2013 IBM Corp
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
DISPATCHER_NS = 'kiloeyes.dispatcher'
PROCESSOR_NS = 'kiloeyes.message.processor'
STRATEGY_NS = 'kiloeyes.index.strategy'
MICROSERVICE_NS = 'kiloeyes.microservice'
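These constants name the stevedore entry-point namespaces used by the services below; a minimal sketch of loading a strategy through them, assuming an entry point named 'fixed' is registered under kiloeyes.index.strategy:

from stevedore import driver

from kiloeyes.common import namespace

index_strategy = driver.DriverManager(namespace.STRATEGY_NS, 'fixed',
                                      invoke_on_load=True,
                                      invoke_kwds={}).driver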

115
kiloeyes/common/resource_api.py Executable file
View File

@ -0,0 +1,115 @@
# Copyright 2013 IBM Corp
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import falcon
from falcon import api_helpers
from kiloeyes.openstack.common import log
RESOURCE_METHOD_FLAG = 'fab05a04-b861-4651-bd0c-9cb3eb9a6088'
LOG = log.getLogger(__name__)
class Restify(object):
def __init__(self, path='', method='GET'):
if not path:
raise Exception('Path has to be specified.')
if method.upper() not in falcon.HTTP_METHODS:
raise Exception('Invalid request method.')
self.path = path
self.method = method.upper()
def __call__(self, func):
setattr(func, RESOURCE_METHOD_FLAG, self)
return func
class ResourceAPI(falcon.API):
def add_route(self, uri_template, resource):
"""Associates uri patterns with resource methods.
A resource is an instance of a class that defines various methods
to handle http requests.
Use this class to create applications which serve a standards
compliant ReSTful API. For example, you may have an API which manages
monitoring data, and there can be multiple implementations of the API
using different technologies: one can use Mongodb, the other can use
Cassandra. To make the configuration of the application very easy,
each implementation provides a class with a set of methods decorated
by class Restify; the application can then use a single entry in its
configuration to load different implementations.
For example::
class ExampleResource(object):
@Restify(path='/path1/', method='post')
def func1(self, req, res):
pass
@Restify(path='/path2/{id}/key/', method='get')
def func2(self, req, res, id):
pass
def func3(self, req, res, id):
pass
With the above class, the following code will add the class methods
func1 and func2 to handle post and get requests respectively; method
func3 won't be added to the routes.::
app.add_route(None, ExampleResource())
Args:
uri_template (url pattern): the url pattern which a client will
post a request against. If none, ResourceAPI will
automatically look up the decorated methods.
resource (instance): Object which represents an HTTP/REST
"resource". Falcon will pass requests to various decorated
methods to handle http requests.
"""
if not resource:
raise Exception('Not a valid resource')
path_maps = {}
try:
if uri_template:
super(ResourceAPI, self).add_route(uri_template, resource)
else:
for attr in dir(resource):
method = getattr(resource, attr)
if callable(method) and hasattr(method,
RESOURCE_METHOD_FLAG):
flag = getattr(method, RESOURCE_METHOD_FLAG)
map = path_maps.get(flag.path)
if not map:
uri_fields, template = (
api_helpers.compile_uri_template(flag.path))
map = (template, {})
path_maps[flag.path] = map
new_method = api_helpers._wrap_with_hooks(
self._before, self._after, method)
map[1][flag.method] = new_method
for item in path_maps:
self._routes.insert(0, (path_maps[item][0],
path_maps[item][1]))
except Exception:
LOG.exception('Error occurred while adding the resource')
LOG.debug(self._routes)
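Putting the docstring example to work looks roughly like the following sketch; the host and port are placeholders, and wsgiref simply stands in for whatever WSGI server fronts the app:

from wsgiref import simple_server

from kiloeyes.common import resource_api

app = resource_api.ResourceAPI()
app.add_route(None, ExampleResource())
httpd = simple_server.make_server('127.0.0.1', 9090, app)
httpd.serve_forever()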

27
kiloeyes/dispatcher/__init__.py Executable file
View File

@ -0,0 +1,27 @@
# Copyright 2013 IBM Corp
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import abc
import six
@six.add_metaclass(abc.ABCMeta)
class Base(object):
def __init__(self, conf):
self.conf = conf
@abc.abstractmethod
def define_routes(self, app):
"""Define the routes this dispatcher serves on the given app."""

View File

@ -0,0 +1,79 @@
# Copyright 2013 IBM Corp
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import falcon
from oslo_config import cfg
from kiloeyes.api import monasca_api_v2
from kiloeyes.common import resource_api
from kiloeyes.openstack.common import log
OPTS = [
cfg.MultiStrOpt('id',
default=['sample'],
help='Multiple String configuration.'),
cfg.StrOpt('prefix',
default='monasca_',
help='String configuration sample.'),
]
cfg.CONF.register_opts(OPTS, group='sample_dispatcher')
LOG = log.getLogger(__name__)
class SampleDispatcher(monasca_api_v2.V2API):
"""kiloeyes dispatcher sample class
This class shows how to develop a dispatcher and how the configuration
parameters should be defined and how these configuration parameters
should be set in kiloeyes.conf file.
This class uses configuration parameters that appear in the
sample_dispatcher section, such as the following:
[sample_dispatcher]
id = 101
id = 105
id = 180
prefix = sample__
If the above section appears in file kiloeyes.conf, these values will be
loaded to cfg.CONF after the dispatcher gets loaded. The cfg.CONF should
have the following values:
cfg.CONF.sample_dispatcher.id = [101, 105, 180]
cfg.CONF.sample_dispatcher.prefix = "sample__"
"""
def __init__(self, global_conf):
LOG.debug('initializing SampleDispatcher!')
super(SampleDispatcher, self).__init__(global_conf)
LOG.debug('SampleDispatcher conf entries: prefix')
LOG.debug(global_conf.sample_dispatcher.prefix)
LOG.debug('SampleDispatcher conf entries: id')
LOG.debug(global_conf.sample_dispatcher.id)
@resource_api.Restify('/v2.0/datapoints/', method='post')
def do_post_metrics(self, req, res):
LOG.debug('Getting the call at endpoint datapoints.')
msg = req.stream.read()
LOG.debug('The msg: %s', msg)
res.status = getattr(falcon, 'HTTP_201')
@resource_api.Restify('/v2.0/demopoints/', method='get')
def do_get_metrics(self, req, res):
LOG.debug('Getting the call at endpoint demopoints.')
res.body = 'demo response'
res.status = getattr(falcon, 'HTTP_200')
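Once this dispatcher is served, the two endpoints can be checked with a sketch like the following (localhost:9090 is a placeholder):

import requests

res = requests.post('http://localhost:9090/v2.0/datapoints/',
                    data='{"name": "biz"}')
assert res.status_code == 201
res = requests.get('http://localhost:9090/v2.0/demopoints/')
assert res.status_code == 200 and res.text == 'demo response'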

View File

@ -0,0 +1,109 @@
#
# Copyright 2013 IBM Corp
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
from stevedore import driver
from kiloeyes.common import es_conn
from kiloeyes.common import kafka_conn
from kiloeyes.common import namespace
from kiloeyes.openstack.common import log
from kiloeyes.openstack.common import service as os_service
OPTS = [
cfg.StrOpt('topic', default='metrics',
help=('The topic that messages will be retrieved from. '
'This also will be used as a doc type when saved '
'to ElasticSearch.')),
cfg.StrOpt('doc_type', default='',
help=('The document type which defines what document '
'type the messages will be save into. If not '
'specified, then the topic will be used.')),
cfg.StrOpt('index_strategy', default='fixed',
help='The index strategy used to create index name.'),
cfg.StrOpt('index_prefix', default='data_',
help='The index prefix where metrics were saved to.'),
cfg.StrOpt('processor', default='',
help=('The message processor to load to process the message. '
'If the message does not need to be processed, '
'leave the default.')),
]
cfg.CONF.register_opts(OPTS, group="es_persister")
LOG = log.getLogger(__name__)
class ESPersister(os_service.Service):
def __init__(self, threads=1000):
super(ESPersister, self).__init__(threads)
self._kafka_conn = kafka_conn.KafkaConnection(
cfg.CONF.es_persister.topic)
# load index strategy
if cfg.CONF.es_persister.index_strategy:
self.index_strategy = driver.DriverManager(
namespace.STRATEGY_NS,
cfg.CONF.es_persister.index_strategy,
invoke_on_load=True,
invoke_kwds={}).driver
LOG.debug(dir(self.index_strategy))
else:
self.index_strategy = None
self.index_prefix = cfg.CONF.es_persister.index_prefix
# Use doc_type if it is defined.
if cfg.CONF.es_persister.doc_type:
self.doc_type = cfg.CONF.es_persister.doc_type
else:
self.doc_type = cfg.CONF.es_persister.topic
# create connection to ElasticSearch
self._es_conn = es_conn.ESConnection(
self.doc_type, self.index_strategy, self.index_prefix)
# load message processor
if cfg.CONF.es_persister.processor:
self.msg_processor = driver.DriverManager(
namespace.PROCESSOR_NS,
cfg.CONF.es_persister.processor,
invoke_on_load=True,
invoke_kwds={}).driver
LOG.debug(dir(self.msg_processor))
else:
self.msg_processor = None
def start(self):
while True:
try:
for msg in self._kafka_conn.get_messages():
if msg and msg.message:
LOG.debug(msg.message.value)
if self.msg_processor:
value = self.msg_processor.process_msg(
msg.message.value)
else:
value = msg.message.value
if value:
self._es_conn.send_messages(value)
# if autocommit is set, this will be a no-op call.
self._kafka_conn.commit()
except Exception:
LOG.exception('Error occurred while handling kafka messages.')
def stop(self):
self._kafka_conn.close()
super(ESPersister, self).stop()
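A kiloeyes.conf fragment matching the options registered above might look like the following sketch; the strategy and processor names must match entry points registered under kiloeyes.index.strategy and kiloeyes.message.processor, and the names used here are assumptions:

[es_persister]
topic = metrics
doc_type = metrics
index_strategy = timed
index_prefix = data_
processor = metricsfixer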

View File

@ -0,0 +1,39 @@
#
# Copyright 2012-2013 eNovance <licensing@enovance.com>
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
from kiloeyes.openstack.common import log
LOG = log.getLogger(__name__)
OPTS = [
cfg.StrOpt('index_name',
default='',
help='The pre-configured index name.'),
]
cfg.CONF.register_opts(OPTS, group="fixed_strategy")
class FixedStrategy(object):
"""This strategy returns a pre-configured index name, empty by default."""
def __init__(self):
self.index_name = cfg.CONF.fixed_strategy.index_name
LOG.debug('FixedStrategy initialized successfully!')
def get_index(self):
return self.index_name

View File

@ -0,0 +1,58 @@
# Copyright 2013 IBM Corp
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import hashlib
import json
import time
from kiloeyes.openstack.common import log
LOG = log.getLogger(__name__)
class MetricsFixer(object):
def __init__(self):
LOG.debug('initializing MetricsFixer!')
super(MetricsFixer, self).__init__()
@staticmethod
def _add_hash(message):
# If there is no timestamp, we need to fix that up
if not message.get('timestamp'):
message['timestamp'] = time.time()
# fixup the dimensions_hash
if not message.get('dimensions_hash') and message.get('dimensions'):
key_str = json.dumps(message['dimensions'],
sort_keys=True, indent=None,
separators=(',', ':'))
message['dimensions_hash'] = hashlib.md5(key_str).hexdigest()
return json.dumps(message, sort_keys=False, indent=None,
separators=(',', ':'))
def process_msg(self, msg):
try:
data = json.loads(msg)
if not isinstance(data, list):
data = [data]
result = ''
for item in data:
result += '{"index":{}}\n' + MetricsFixer._add_hash(item)
result += '\n'
return result
except Exception:
LOG.exception('')
return ''
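The string built by process_msg is the Elasticsearch bulk-API format, one action line plus one document line per metric; roughly, as a sketch:

fixer = MetricsFixer()
out = fixer.process_msg(
    '{"name": "cpu", "value": 1.0, "dimensions": {"host": "a"}}')
# out now holds newline-delimited pairs along the lines of:
# {"index":{}}
# {"name":"cpu","value":1.0,"dimensions":{"host":"a"},
#  "timestamp":1438894886.1,"dimensions_hash":"..."}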

View File

@ -0,0 +1,96 @@
# Copyright 2015 Carnegie Mellon University
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
from stevedore import driver
from kiloeyes.common import es_conn
from kiloeyes.common import kafka_conn
from kiloeyes.common import namespace
from kiloeyes.openstack.common import log
from kiloeyes.openstack.common import service as os_service
NOTIFICATION_ENGINE_OPTS = [
cfg.StrOpt('topic',
default='alarms',
help='The topic that messages will be retrieved from.'),
cfg.StrOpt('doc_type',
default='notificationmethods',
help=('The document type which notification methods were '
'saved into.')),
cfg.StrOpt('index_strategy', default='fixed',
help='The index strategy used to create index name.'),
cfg.StrOpt('index_prefix', default='',
help='The index prefix where metrics were saved to.'),
cfg.StrOpt('processor',
default='',
help=('The message processor to load to process the message. '
'If the message does not need to be processed, '
'leave the default.')),
]
cfg.CONF.register_opts(NOTIFICATION_ENGINE_OPTS, group="notificationengine")
LOG = log.getLogger(__name__)
class NotificationEngine(os_service.Service):
def __init__(self, threads=1000):
super(NotificationEngine, self).__init__(threads)
self._kafka_conn = kafka_conn.KafkaConnection(
cfg.CONF.notificationengine.topic)
self.doc_type = cfg.CONF.notificationengine.doc_type
# load index strategy
if cfg.CONF.notificationengine.index_strategy:
self.index_strategy = driver.DriverManager(
namespace.STRATEGY_NS,
cfg.CONF.notificationengine.index_strategy,
invoke_on_load=True,
invoke_kwds={}).driver
LOG.debug(dir(self.index_strategy))
else:
self.index_strategy = None
self.index_prefix = cfg.CONF.notificationengine.index_prefix
self._es_conn = es_conn.ESConnection(
self.doc_type, self.index_strategy, self.index_prefix)
if cfg.CONF.notificationengine.processor:
self.notification_processor = driver.DriverManager(
namespace.PROCESSOR_NS,
cfg.CONF.notificationengine.processor,
invoke_on_load=True,
invoke_kwds={}).driver
LOG.debug(dir(self.notification_processor))
else:
self.notification_processor = None
def start(self):
while True:
try:
for msg in self._kafka_conn.get_messages():
(self.notification_processor.
handle_alarm_msg(self._es_conn, msg))
# if autocommit is set, this will be a no-op call.
self._kafka_conn.commit()
except Exception:
LOG.exception('Error occurred while handling kafka messages.')
def stop(self):
self._kafka_conn.close()
super(NotificationEngine, self).stop()

View File

@ -0,0 +1,132 @@
# Copyright 2015 Carnegie Mellon University
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import json
from kiloeyes.common import email_sender
from kiloeyes.openstack.common import log
LOG = log.getLogger(__name__)
ACTIONS = {'ALARM': 'alarm_actions',
'OK': 'ok_actions',
'UNDETERMINED': 'undetermined_actions'}
class NotificationProcessor(object):
def __init__(self):
LOG.debug('initializing NotificationProcessor!')
super(NotificationProcessor, self).__init__()
self._email_sender = email_sender.EmailSender()
self.email_addresses = []
def _get_notification_method_response(self, res):
if res and res.status_code == 200:
obj = res.json()
if obj:
return obj.get('hits')
return None
else:
return None
def handle_alarm_msg(self, _es_conn, msg):
if msg and msg.message:
LOG.debug("Message received for alarm: " + msg.message.value)
value = msg.message.value
if value:
# value's format is:
# {
# "metrics": {
# "timestamp": 1432672915.409,
# "name": "biz",
# "value": 1500,
# "dimensions": {
# "key2": "value2",
# "key1": "value1"
# }
# },
# "state_updated_timestamp": 1432672915,
# "state": "ALARM",
# "alarm_definition": {
# "alarm_actions": [
# "c60ec47e-5038-4bf1-9f95-4046c6e9a759"
# ],
# "undetermined_actions": [
# "c60ec47e-5038-4bf1-9f95-4046c6e9a759"
# ],
# "name": "Average CPU percent greater than 10",
# "match_by": [
# "hostname"
# ],
# "description": "The average CPU percent is greater than 10",
# "ok_actions": [
# "c60ec47e-5038-4bf1-9f95-4046c6e9a759"
# ],
# "expression": "max(foo{hostname=mini-mon,mu=na}, 120) > 1100
# and max(bar { asd = asd} )>1200 or avg(biz)>1300",
# "id": "c60ec47e-5038-4bf1-9f95-4046c6e91111",
# "severity": "LOW"
# }
# }
# convert to dict, and get state to determine
# the actions(notification method id) needed.
# the method id can be used to match the
# notification method in elasticSearch
# Then an email will be sent
json_msg = json.loads(value)
state = json_msg["state"]
if state not in ["ALARM", "OK", "UNDETERMINED"]:
LOG.error("state of alarm is not defined as expected")
return
actions = []
if state in ACTIONS.keys():
actions = json_msg["alarm_definition"][ACTIONS[state]]
addresses = []
types = []
# the action_id is an id of notification method
# there can be multiple ids in one alarm message with different
# types
try:
for action_id in actions:
es_res = _es_conn.get_message_by_id(action_id)
es_res = self._get_notification_method_response(es_res)
LOG.debug('Query to ElasticSearch returned: %s'
% es_res)
if es_res is None or es_res["hits"] is None:
LOG.error("The action is not defined as expected")
return
name = es_res["hits"][0]["_source"]["name"]
type = es_res["hits"][0]["_source"]["type"]
address = es_res["hits"][0]["_source"]["address"]
types.append(type)
addresses.append(address)
for i in range(len(types)):
if types[i] == "EMAIL":
self.email_addresses.append(addresses[i])
self._email_sender.send_emails(
self.email_addresses,
"Alarm from kiloeyes:" + name + "-" +
json_msg["alarm_definition"]["description"],
str(json_msg))
except Exception:
LOG.exception('Exception performing alarm action')

62
kiloeyes/microservice/server.py Executable file
View File

@ -0,0 +1,62 @@
#
# Copyright 2013 IBM Corp
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
from stevedore import driver
from kiloeyes.common import namespace
from kiloeyes.openstack.common import log
from kiloeyes.openstack.common import service as os_service
from kiloeyes import service
OPTS = [
cfg.StrOpt('service',
help='kiloeyes micro services to process data.'),
cfg.IntOpt('threads', default=1,
help='The number of threads for the service.'),
]
cfg.CONF.register_opts(OPTS)
LOG = log.getLogger(__name__)
def main():
service.prepare_service()
if not cfg.CONF.service:
LOG.error('No micro service is configured, please specify service '
'in the configuration file.')
return None
launcher = os_service.ServiceLauncher()
# Now load the micro service
service_driver = driver.DriverManager(
namespace.MICROSERVICE_NS,
cfg.CONF.service,
invoke_on_load=True,
invoke_kwds={'threads': cfg.CONF.threads})
if not service_driver.driver:
LOG.error('Failed loading micro service under name space %s.%s' %
(namespace.MICROSERVICE_NS, cfg.CONF.service))
return None
LOG.debug("Micro service %s is now loaded." %
service_driver.driver.__class__.__name__)
# now launch the service
launcher.launch_service(service_driver.driver)
launcher.wait()
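With the above, the micro service to run is selected purely through configuration, for example:

[DEFAULT]
service = es_persister
threads = 3

The service is then launched via this module's main(); the console script name it is wired to is not part of this excerpt.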

View File

@ -0,0 +1,346 @@
# Copyright 2015 Carnegie Mellon University
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from kiloeyes.common import es_conn
from kiloeyes.common import kafka_conn
from kiloeyes.common import namespace
from kiloeyes.openstack.common import log
from kiloeyes.openstack.common import service as os_service
from oslo_config import cfg
from stevedore import driver
import threading
import time
lock = threading.RLock()
THRESHOLD_ENGINE_OPTS = [
cfg.StrOpt('metrics_topic',
default='metrics',
help='The topic to read metrics from.'),
cfg.StrOpt('alarm_topic',
default='alarm',
help='The topic to send alarms to.'),
cfg.StrOpt('processor',
default='thresholding_processor',
help='The threshold processor to load.'),
cfg.IntOpt('check_alarm_interval',
default=60,
help='The interval in seconds to publish produced alarms.')
]
ALARM_DEFINITION_OPTS = [
cfg.StrOpt('doc_type', default='alarmdefinitions',
help='The doc_type that alarm definitions will be saved to.'),
cfg.StrOpt('index_strategy', default='fixed',
help='The index strategy used to create index name.'),
cfg.StrOpt('index_prefix', default='data_',
help='The index prefix where metrics were saved to.'),
cfg.IntOpt('size', default=1000,
help=('The query result limit. Any result set more than '
'the limit will be discarded.')),
cfg.StrOpt('name', default='',
help='The name used to query alarm definitions.'),
cfg.StrOpt('dimensions', default='',
help='The dimensions used to query alarm definitions.'),
cfg.IntOpt('check_alarm_def_interval',
default=120,
help='The interval in seconds to re-query alarm definitions.')
]
cfg.CONF.register_opts(THRESHOLD_ENGINE_OPTS, group="thresholdengine")
cfg.CONF.register_opts(ALARM_DEFINITION_OPTS, group="alarmdefinitions")
LOG = log.getLogger(__name__)
class AlarmPublisher(threading.Thread):
"""The thread to publish alarm messages.
This class will periodically call processors to get alarms produced,
and send them into kafka
"""
def __init__(self, t_name, tp):
threading.Thread.__init__(self, name=t_name)
# init kafka connection to alarm topic
self._publish_kafka_conn = None
topic = cfg.CONF.thresholdengine.alarm_topic
self._publish_kafka_conn = (
kafka_conn.KafkaConnection(topic))
# set time interval for calling processors to refresh alarms
self.interval = cfg.CONF.thresholdengine.check_alarm_interval
self.threshold_processors = tp
def send_alarm(self):
if self._publish_kafka_conn:
if lock.acquire():
for aid in self.threshold_processors:
# get alarms produced by each processor
for alarm in (self.threshold_processors
[aid]['processor'].process_alarms()):
LOG.debug(alarm)
self._publish_kafka_conn.send_messages(alarm)
lock.release()
def run(self):
while True:
try:
self.send_alarm()
time.sleep(self.interval)
except Exception:
LOG.exception(
'Error occurred while publishing alarm messages.')
def stop(self):
self._publish_kafka_conn.close()
class MetricsConsumer(threading.Thread):
"""The thread to read metrics.
This class will get metrics messages from kafka,
and deliver them to processors.
"""
def __init__(self, t_name, tp):
threading.Thread.__init__(self, name=t_name)
# init kafka connection to metrics topic
self._consume_kafka_conn = None
topic = cfg.CONF.thresholdengine.metrics_topic
self._consume_kafka_conn = kafka_conn.KafkaConnection(topic)
self.threshold_processors = tp
def read_metrics(self):
def consume_metrics():
if lock.acquire():
# send metrics to each processor
for aid in self.threshold_processors:
processor = self.threshold_processors[aid]['processor']
processor.process_metrics(msg.message.value)
lock.release()
if self._consume_kafka_conn:
for msg in self._consume_kafka_conn.get_messages():
if msg and msg.message:
LOG.debug(msg.message.value)
consume_metrics()
self._consume_kafka_conn.commit()
def run(self):
while True:
try:
self.read_metrics()
except Exception:
LOG.exception('Error occurred while reading metrics messages.')
def stop(self):
self._consume_kafka_conn.close()
class AlarmDefinitionConsumer(threading.Thread):
"""The thread to process alarm definitions.
This class will get alarm definition messages from kafka,
Then init new processor, update existing processor or delete processor
according to the request.
"""
def __init__(self, t_name, tp):
threading.Thread.__init__(self, name=t_name)
self.doc_type = cfg.CONF.alarmdefinitions.doc_type
self.size = cfg.CONF.alarmdefinitions.size
# load index strategy
if cfg.CONF.alarmdefinitions.index_strategy:
self.index_strategy = driver.DriverManager(
namespace.STRATEGY_NS,
cfg.CONF.alarmdefinitions.index_strategy,
invoke_on_load=True,
invoke_kwds={}).driver
LOG.debug(self.index_strategy)
else:
self.index_strategy = None
self.index_prefix = cfg.CONF.alarmdefinitions.index_prefix
self._es_conn = es_conn.ESConnection(
self.doc_type, self.index_strategy, self.index_prefix)
# get the dict where all processors are indexed
self.threshold_processors = tp
# get the time interval to query es
self.interval = cfg.CONF.alarmdefinitions.check_alarm_def_interval
# set the flag, which is used to determine if a processor is expired.
# each processor has its own flag; if it equals self.flag, the
# processor is still valid
self.flag = 0
# setup query params
self.params = self._build_alarm_definitions_query(
cfg.CONF.alarmdefinitions.name,
cfg.CONF.alarmdefinitions.dimensions)
def _build_alarm_definitions_query(self, name, dimensions):
query = {}
queries = []
field_string = 'alarmdefinitions.expression_data.dimensions.'
if dimensions:
# add dimensions to query params
current_dimension_split = (
dimensions.split(','))
for current_dimension in current_dimension_split:
current_dimen_data = current_dimension.split(':')
queries.append({'query_string': {
'default_field': (field_string +
current_dimen_data[0]),
'query': current_dimen_data[1]}})
if name:
# add name to query params
queries.append({'query_string': {
'default_field': 'name', 'query': name}})
# finally build the query
query = {'query': {'bool': {'must': queries}}}
return query
def _get_alarm_definitions_response(self, res):
if res and res.status_code == 200:
obj = res.json()
if obj:
return obj.get('hits')
return None
def get_alarm_definitions(self):
"""Get alarm definitions from es."""
es_res = self._es_conn.get_messages(self.params)
es_res = self._get_alarm_definitions_response(es_res)
if es_res is None:
return None
LOG.debug('Query to ElasticSearch returned: %s' % es_res)
if es_res["hits"]:
return es_res["hits"]
else:
return []
def refresh_alarm_processors(self):
def create_alarm_processor():
# make sure received a new alarm definition
if aid in self.threshold_processors:
LOG.debug('alarm definition already exists')
return
# init a processor for this alarm definition
temp_processor = (
driver.DriverManager(
namespace.PROCESSOR_NS,
cfg.CONF.thresholdengine.processor,
invoke_on_load=True,
invoke_args=(alarm_def,)).driver)
# register this new processor
self.threshold_processors[aid] = {}
self.threshold_processors[aid]['processor'] = (
temp_processor)
self.threshold_processors[aid]['flag'] = self.flag
self.threshold_processors[aid]['json'] = alarm_def
def update_alarm_processor():
# update the processor when alarm definition is changed
updated = False
if aid in self.threshold_processors:
updated = (self.threshold_processors
[aid]['processor']
.update_thresh_processor(alarm_def))
self.threshold_processors[aid]['json'] = alarm_def
if updated:
LOG.debug('alarm definition updated successfully!')
else:
LOG.debug('alarm definition update failed!')
def delete_alarm_processor():
# delete related processor when an alarm definition is deleted
if aid in self.threshold_processors:
self.threshold_processors.pop(aid)
self.flag = 1 - self.flag
# get all alarm definitions from es to update those in the engine
alarm_definitions = self.get_alarm_definitions()
# http request fails, do nothing
if alarm_definitions is None:
return
if lock.acquire():
for alarm_definition in alarm_definitions:
alarm_def = alarm_definition['_source']
aid = alarm_def['id']
if aid in self.threshold_processors:
# alarm definition is updated
if alarm_def != self.threshold_processors[aid]['json']:
update_alarm_processor()
self.threshold_processors[aid]['flag'] = self.flag
else:
# comes a new alarm definition
create_alarm_processor()
for aid in self.threshold_processors.keys():
if self.threshold_processors[aid]['flag'] != self.flag:
# the alarm definition is expired
delete_alarm_processor()
lock.release()
def run(self):
while True:
try:
self.refresh_alarm_processors()
time.sleep(self.interval)
except Exception:
LOG.exception('Error occurred '
'while reading alarm definitions.')
def stop(self):
pass
class ThresholdEngine(os_service.Service):
def __init__(self, threads=1000):
super(ThresholdEngine, self).__init__(threads)
# dict to index all the processors,
# key = alarm def id; value = processor
self.threshold_processors = {}
# init threads for processing metrics, alarm definition and alarm
try:
self.thread_alarm = AlarmPublisher(
'alarm_publisher',
self.threshold_processors)
except Exception:
self.thread_alarm = None
try:
self.thread_alarm_def = AlarmDefinitionConsumer(
'alarm_def_consumer',
self.threshold_processors)
except Exception:
self.thread_alarm_def = None
try:
self.thread_metrics = MetricsConsumer(
'metrics_consumer',
self.threshold_processors)
except Exception:
self.thread_metrics = None
def start(self):
try:
self.thread_alarm.start()
self.thread_alarm_def.start()
self.thread_metrics.start()
LOG.info('Starting thresh processing succeeded!')
except Exception:
LOG.debug('Starting thresh processing failed!')
def stop(self):
try:
if self.thread_alarm:
self.thread_alarm.stop()
if self.thread_alarm_def:
self.thread_alarm_def.stop()
if self.thread_metrics:
self.thread_metrics.stop()
except Exception:
LOG.debug('Error terminating thresh processing threads')
super(ThresholdEngine, self).stop()
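For reference, a configuration fragment matching the two option groups registered above, shown with their defaults:

[thresholdengine]
metrics_topic = metrics
alarm_topic = alarm
processor = thresholding_processor
check_alarm_interval = 60

[alarmdefinitions]
doc_type = alarmdefinitions
index_strategy = fixed
index_prefix = data_
check_alarm_def_interval = 120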

View File

@ -0,0 +1,328 @@
# Copyright 2015 Carnegie Mellon University
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import collections
import copy
import json
from kiloeyes.common import alarm_expr_calculator as calculator
from kiloeyes.common import alarm_expr_parser as parser
from kiloeyes.openstack.common import log
from kiloeyes.openstack.common import timeutils as tu
import uuid
LOG = log.getLogger(__name__)
reasons = {'ALARM': 'The alarm threshold(s) have '
'been exceeded for the sub-alarms',
'OK': 'The alarm threshold(s) have '
'not been exceeded for the sub-alarms',
'UNDETERMINED': 'Unable to determine the alarm state'}
class ThresholdProcessor(object):
"""Thresh processor.
This processor is for alarm definitions with short period.
It will store the metrics value/timestamp in memory using dict.
The basic data structure is:
ALL_DATA = {#match_by name#: ALARM_DATA}
For example, an alarm def has "match_by": ["hostname", "os"]
Metrics come in:
Metrics_A -> 'dimensions': {'hostname': 'A', 'os': 'windows'}
Metrics_B -> 'dimensions': {'hostname': 'B', 'os': 'unix'}
Then, ALL_DATA = {'A,windows,': ALARM_DATA, 'B,unix,': ALARM_DATA}
(keys are the match_by dimension values joined by commas)
ALARM_DATA = {'state': #alarm state#,
'timestamp': #timestamp#, data: SUB_ALARM_DATA, ...}
It will hold the overall info of the alarm,
like state, timestamps, and metrics data.
SUB_ALARM_DATA = {#sub alarm expr#: METRICS}
For example, alarm expr is 'max(cpu)>10 and avg(memory)<10'
SUB_ALARM_DATA = {'max(cpu)>10': METRICS, 'avg(memory)<10': METRICS}
METRICS = {'value': [X, ...], 'timestamp': [T, ...], 'sub_state': S}
Other key/values in a metrics will not be stored here.
The state here is the state of this sub_alarm.
"""
def __init__(self, alarm_def):
"""One processor instance hold one alarm definition."""
LOG.debug('initializing ThresholdProcessor!')
super(ThresholdProcessor, self).__init__()
self.alarm_definition = alarm_def
self.expression = self.alarm_definition['expression']
self.match_by = self.alarm_definition['match_by']
self.expr_data_queue = {}
self.related_metrics = {}
if len(self.match_by) == 0:
self.match_by = None
alarm_parser = parser.AlarmExprParser(self.expression)
self.parse_result = alarm_parser.parse_result
self.sub_expr_list = alarm_parser.sub_expr_list
self.related_metrics[None] = alarm_parser.related_metrics
self.sub_alarm_expr = alarm_parser.sub_alarm_expressions
LOG.debug('successfully initialized ThresholdProcessor!')
def update_thresh_processor(self, alarm_def):
"""Update the processor with updated alarm definition."""
def update_data():
# inherit previous stored metrics values
for name in self.expr_data_queue:
ts = tu.utcnow_ts()
new_expr_data_queue[name] = {
'data': {},
'state': 'UNDETERMINED',
'create_timestamp':
self.expr_data_queue[name]['create_timestamp'],
'update_timestamp': ts,
'state_update_timestamp':
self.expr_data_queue[name]['state_update_timestamp']
}
for i in range(0, len(new_sub_expr_list), 1):
expr_old = self.sub_expr_list[i].fmtd_sub_expr_str
expr_new = new_sub_expr_list[i].fmtd_sub_expr_str
new_expr_data_queue[name]['data'][expr_new] = {
'state': 'UNDETERMINED',
'metrics':
(self.expr_data_queue[name]
['data'][expr_old]['metrics']),
'values': []}
LOG.debug('updating ThresholdProcessor!')
new_alarm_definition = alarm_def
new_expression = new_alarm_definition['expression']
alarm_parser = parser.AlarmExprParser(new_expression)
new_sub_expr_list = alarm_parser.sub_expr_list
new_expr_data_queue = {}
update_data()
self.expr_data_queue = new_expr_data_queue
self.sub_expr_list = new_sub_expr_list
self.sub_alarm_expr = alarm_parser.sub_alarm_expressions
self.parse_result = alarm_parser.parse_result
self.alarm_definition = new_alarm_definition
self.expression = new_expression
self.match_by = self.alarm_definition['match_by']
if '' in self.match_by:
self.match_by.remove('')
if len(self.match_by) == 0:
self.match_by = None
LOG.debug('successfully updated ThresholdProcessor!')
return True
def process_metrics(self, metrics):
"""Add new metrics to matched expr."""
try:
data = json.loads(metrics)
self.add_expr_metrics(data)
except Exception:
LOG.exception('Received metrics in a wrong format')
def process_alarms(self):
"""Called to produce alarms."""
try:
alarm_list = []
for m in self.expr_data_queue.keys():
is_updated = self.update_state(self.expr_data_queue[m])
if is_updated:
alarm_list.append(self.build_alarm(m))
return alarm_list
except Exception:
LOG.exception('Error occurred while processing alarms.')
return []
def update_state(self, expr_data):
"""Update the state of each alarm under this alarm definition."""
def _calc_state(operand):
if operand.logic_operator:
subs = []
for o in operand.sub_expr_list:
subs.append(_calc_state(o))
return calculator.calc_logic(operand.logic_operator, subs)
else:
return expr_data['data'][operand.fmtd_sub_expr_str]['state']
for sub_expr in self.sub_expr_list:
self.update_sub_expr_state(sub_expr, expr_data)
state_new = _calc_state(self.parse_result)
if state_new != expr_data['state']:
expr_data['state_update_timestamp'] = tu.utcnow_ts()
expr_data['update_timestamp'] = tu.utcnow_ts()
expr_data['state'] = state_new
return True
else:
return False
def update_sub_expr_state(self, expr, expr_data):
def _update_metrics():
"""Delete metrics not in period."""
data_list = expr_data['data'][expr.fmtd_sub_expr_str]['metrics']
start_time = t_now - (float(expr.period)) * int(expr.periods)
while (len(data_list) != 0
and data_list[0]['timestamp'] < start_time):
data_list.popleft()
def _update_state():
"""Update state of a sub expr."""
data_sub = expr_data['data'][expr.fmtd_sub_expr_str]
data_list = data_sub['metrics']
period = float(expr.period)
periods = int(expr.periods)
right = t_now
left = right - period
temp_data = []
value_in_periods = []
i = len(data_list) - 1
while i >= 0:
if data_list[i]['timestamp'] >= left:
temp_data.append(float(data_list[i]['value']))
else:
value = calculator.calc_value(
expr.normalized_func, temp_data)
value_in_periods.append(value)
right = left
left = right - period
temp_data = []
# cancel the decrement below so this same data point is
# re-examined against the newly shifted window
i += 1
i -= 1
value = calculator.calc_value(
expr.normalized_func, temp_data)
value_in_periods.append(value)
for i in range(len(value_in_periods), periods, 1):
value_in_periods.append(
calculator.calc_value(expr.normalized_func, []))
expr_data['data'][expr.fmtd_sub_expr_str]['values'] = (
value_in_periods)
expr_data['data'][expr.fmtd_sub_expr_str]['state'] = (
calculator.compare_thresh(
value_in_periods,
expr.normalized_operator,
float(expr.threshold)))
t_now = tu.utcnow_ts()
_update_metrics()
_update_state()
def add_expr_metrics(self, data):
"""Add new metrics to matched place."""
for sub_expr in self.sub_expr_list:
self.add_sub_expr_metrics(sub_expr, data)
def add_sub_expr_metrics(self, expr, data):
"""Add new metrics to sub expr place."""
def _has_match_expr():
if (data['name'].lower() != expr.normalized_metric_name):
return False
metrics_dimensions = {}
if 'dimensions' in data:
metrics_dimensions = data['dimensions']
def_dimensions = expr.dimensions_as_dict
for dimension_key in def_dimensions.keys():
if dimension_key in metrics_dimensions:
if (metrics_dimensions[dimension_key].lower()
!= def_dimensions[dimension_key].lower()):
return False
else:
return False
return True
def _add_metrics():
temp = None
if self.match_by:
q_name = self.get_matched_data_queue_name(data)
if q_name:
temp = self.expr_data_queue[q_name]
else:
if None not in self.expr_data_queue:
self.create_data_item(None)
temp = self.expr_data_queue[None]
if temp:
data_list = temp['data'][expr.fmtd_sub_expr_str]
data_list['metrics'].append(
{'value': float(data['value']),
'timestamp': tu.utcnow_ts()})
return True
else:
return False
if _has_match_expr() and _add_metrics():
LOG.debug("Alarm def: %s consumes the metrics!"
% self.alarm_definition['name'])
else:
LOG.debug("Alarm def: %s don't need the metrics!"
% self.alarm_definition['name'])
def create_data_item(self, name):
"""If new match_up tuple, create new entry to store metrics value."""
ts = tu.utcnow_ts()
self.expr_data_queue[name] = {
'data': {},
'state': 'UNDETERMINED',
'create_timestamp': ts,
'update_timestamp': ts,
'state_update_timestamp': ts}
for expr in self.sub_expr_list:
self.expr_data_queue[name]['data'][expr.fmtd_sub_expr_str] = {
'state': 'UNDETERMINED',
'metrics': collections.deque(),
'values': []}
def get_matched_data_queue_name(self, data):
"""Use dimensions in match_up to generate a name."""
name = ''
for m in self.match_by:
if m in data['dimensions']:
name = name + data['dimensions'][m] + ','
else:
return None
if name in self.expr_data_queue:
return name
else:
self.related_metrics[name] = []
for m in self.related_metrics[None]:
temp = copy.deepcopy(m)
for match in self.match_by:
temp['dimensions'][match] = data['dimensions'][match]
self.related_metrics[name].append(temp)
self.create_data_item(name)
return name
def build_alarm(self, name):
"""Build alarm json."""
alarm = {}
id = str(uuid.uuid4())
alarm['id'] = id
alarm['alarm_definition'] = self.alarm_definition
alarm['metrics'] = self.related_metrics[name]
alarm['state'] = self.expr_data_queue[name]['state']
alarm['reason'] = reasons[alarm['state']]
alarm['reason_data'] = {}
sub_alarms = []
dt = self.expr_data_queue[name]['data']
for expr in self.sub_expr_list:
sub_alarms.append({
'sub_alarm_expression':
self.sub_alarm_expr[expr.fmtd_sub_expr_str],
'sub_alarm_state': dt[expr.fmtd_sub_expr_str]['state'],
'current_values': dt[expr.fmtd_sub_expr_str]['values']
})
alarm['sub_alarms'] = sub_alarms
ct = self.expr_data_queue[name]['create_timestamp']
st = self.expr_data_queue[name]['state_update_timestamp']
t = self.expr_data_queue[name]['update_timestamp']
alarm['state_updated_timestamp'] = tu.iso8601_from_timestamp(st)
alarm['updated_timestamp'] = tu.iso8601_from_timestamp(t)
alarm['created_timestamp'] = tu.iso8601_from_timestamp(ct)
return json.dumps(alarm)

View File

@ -0,0 +1,110 @@
#
# Copyright 2013 IBM Corp
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import datetime
import dateutil.parser as dparser
from oslo_config import cfg
import time
from kiloeyes.openstack.common import log
"""
The following strategy is to define how the a given date can be break up into
ranges which will be used to create new Elastic Search indices. For example:
[strategy]
pattern=m
frequency=2
start_date=2014-01-01
The above strategy will create a index every 2 months starting Jan 1, 2014
[strategy]
pattern=d
frequency=5
start_date=2014-01-01
The above strategy will create a index every 5 days starting Jan 1, 2014
"""
OPTS = [
cfg.StrOpt('time_unit',
default='m',
help=('The time unit to create an index for a given date. '
'The valid values are h, d, w, m, y. Each means hour, '
'day, week, month and year respectively.')),
cfg.IntOpt('frequency',
default=1,
help='The frequency of the pattern to make a date range.'),
cfg.StrOpt('start_date',
default='2014-01-01',
help='The start date.'),
]
cfg.CONF.register_opts(OPTS, group="timed_strategy")
LOG = log.getLogger(__name__)
class TimedStrategy(object):
def __init__(self):
self.time_unit = cfg.CONF.timed_strategy.time_unit
self.frequency = cfg.CONF.timed_strategy.frequency
self.start_date = dparser.parse(cfg.CONF.timed_strategy.start_date,
fuzzy=True)
self.now = None
LOG.debug('TimedStrategy initialized successfully!')
def set_time(self, a_date):
self.now = a_date
def get_index(self):
# Right now, only support frequency of 1.
# To support any frequency greater than 1, we need more work.
if self.now:
a_date = self.now
else:
a_date = datetime.datetime.now()
if isinstance(a_date, long) or isinstance(a_date, int):
try:
a_date = datetime.datetime.fromtimestamp(a_date)
except Exception:
return
elif isinstance(a_date, datetime.datetime):
pass
else:
try:
a_date = dparser.parse(a_date, fuzzy=True)
except Exception:
return
# compare with ==, not 'is': identity checks on strings are unreliable
if self.time_unit == 'y':
return "%04i0101000000" % a_date.year
if self.time_unit == 'm':
return "%04i%02i01000000" % (a_date.year, a_date.month)
if self.time_unit == 'd':
return "%04i%02i%02i000000" % (a_date.year, a_date.month,
a_date.day)
if self.time_unit == 'h':
return "%04i%02i%02i%02i0000" % (a_date.year, a_date.month,
a_date.day, a_date.hour)
if self.time_unit == 'w':
year, week, day = a_date.isocalendar()
if day == 7:
day_str = "%04i %i 0" % (year, week)
else:
day_str = "%04i %i 0" % (year, week - 1)
day = time.strptime(day_str, '%Y %U %w')
return "%04i%02i%02i000000" % (day.tm_year, day.tm_mon,
day.tm_mday)
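A worked example of the index naming, assuming time_unit = m in the [timed_strategy] section:

import datetime

strategy = TimedStrategy()
strategy.set_time(datetime.datetime(2014, 3, 15, 10, 0))
print(strategy.get_index())   # -> '20140301000000'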

View File

@ -0,0 +1,49 @@
# Copyright 2013 IBM Corp
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
class Inspector(object):
"""The middleware that logs the request body and header
This is a middleware for debug purposes. To enable this middleware, add
the following lines into the configuration file, for example:
[pipeline:main]
pipeline = inspector api
[filter:inspector]
use = egg: monasca_api_server#inspector
"""
def __init__(self, app, conf):
"""Inspect each request
:param app: the wsgi application this middleware wraps
:param conf: the local configuration for this filter
"""
self.app = app
self.conf = conf
print('Inspect config:', self.conf)
def __call__(self, environ, start_response):
print('environ is ', environ)
return self.app(environ, start_response)
def filter_factory(global_conf, **local_conf):
def login_filter(app):
return Inspector(app, local_conf)
return login_filter

45
kiloeyes/middleware/login.py Executable file
View File

@ -0,0 +1,45 @@
# Copyright 2013 IBM Corp
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
class SimpleLogin(object):
"""Example middleware that demonstrates how a login middleware should work.
In this example, the middleware checks if a request path starts with
string '/datapoints/'; if it does, the request gets past this
middleware, otherwise, a 401 response is returned.
"""
def __init__(self, app, conf):
self.app = app
self.conf = conf
def __call__(self, environ, start_response):
# if request starts with /datapoints/, then let it go on.
# this login middle
if environ.get('PATH_INFO', '').startswith('/datapoints/'):
return self.app(environ, start_response)
# otherwise, send something else, request stops here.
else:
status = "401 Unauthorized"
response_headers = [("content-type", "text/plain")]
start_response(status, response_headers, "please login first")
return ['Please log in!']
def filter_factory(global_conf, **local_conf):
def login_filter(app):
return SimpleLogin(app, local_conf)
return login_filter
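To enable this middleware, a paste pipeline along the lines of the inspector example could be used; the filter name and egg entry point below are assumptions:

[pipeline:main]
pipeline = login api

[filter:login]
use = egg: monasca_api_server#login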

View File

@ -0,0 +1,123 @@
# Copyright 2013 IBM Corp
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import datetime
import StringIO
try:
import ujson as json
except ImportError:
import json
class MetricValidator(object):
"""middleware that validates the metric input stream.
This middleware checks if the input stream actually follows the metric
spec and all the messages in the request have valid metric data. If the
body is valid json and compliant with the spec, the middleware forwards
the request to the next application in the pipeline; otherwise, it
rejects the request with a response code of 400 or 406.
"""
def __init__(self, app, conf):
self.app = app
self.conf = conf
def _is_valid_metric(self, metric):
"""Validate a message
The external message format is
{
"name":"name1",
"dimensions":{
"key1":"value1",
"key2":"value2"
},
"timestamp":1405630174,
"value":1.0
}
Once this is validated, the message needs to be transformed into
the following internal format:
{
"metric": {"something": "The metric as a JSON object"},
"meta": {
"tenantId": "the tenant ID acquired",
"region": "the region that the metric was submitted under",
},
"creation_time": "the time when the API received the metric",
}
"""
# membership tests keep legitimate falsy values (e.g. value 0) valid
if ('name' in metric and 'dimensions' in metric and
'timestamp' in metric and 'value' in metric):
return True
else:
return False
def __call__(self, env, start_response):
# if request starts with /datapoints/, then let it go on.
# this login middle
if (env.get('PATH_INFO', '').startswith('/v2.0/metrics') and
env.get('REQUEST_METHOD', '') == 'POST'):
# We only check the requests which are posting against metrics
# endpoint
try:
body = env['wsgi.input'].read()
metrics = json.loads(body)
# Do business logic validation here.
is_valid = True
if isinstance(metrics, list):
for metric in metrics:
if not self._is_valid_metric(metric):
is_valid = False
break
else:
is_valid = self._is_valid_metric(metrics)
if is_valid:
# If the message is valid, then wrap it into this internal
# format. The tenantId should be available from the
# request since this should have been authenticated.
# ideally this transformation should be done somewhere
# else. For the sake of simplicity, do the simple one
# here to make the life a bit easier.
# TODO(HP) Add logic to get region id from request header
# HTTP_X_SERVICE_CATALOG, then find endpoints, then region
region_id = None
msg = {'metric': metrics,
'meta': {'tenantId': env.get('HTTP_X_PROJECT_ID'),
'region': region_id},
'creation_time': datetime.datetime.now()}
env['wsgi.input'] = StringIO.StringIO(json.dumps(msg))
return self.app(env, start_response)
except Exception:
pass
# It is either invalid or raised an exception while parsing json;
# we will send the request back with 400.
start_response("400 Bad Request", [])
return []
else:
# not a metric post request, move on.
return self.app(env, start_response)
def filter_factory(global_conf, **local_conf):
def validator_filter(app):
return MetricValidator(app, local_conf)
return validator_filter

View File

@ -0,0 +1,17 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import six
six.add_move(six.MovedModule('mox', 'mox', 'mox3.mox'))

View File

@ -0,0 +1,145 @@
# Copyright (c) 2012 OpenStack Foundation.
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from __future__ import print_function
import errno
import gc
import os
import pprint
import socket
import sys
import traceback
import eventlet
import eventlet.backdoor
import greenlet
from oslo_config import cfg
from kiloeyes.openstack.common.gettextutils import _LI
from kiloeyes.openstack.common import log as logging
help_for_backdoor_port = (
"Acceptable values are 0, <port>, and <start>:<end>, where 0 results "
"in listening on a random tcp port number; <port> results in listening "
"on the specified port number (and not enabling backdoor if that port "
"is in use); and <start>:<end> results in listening on the smallest "
"unused port number within the specified range of port numbers. The "
"chosen port is displayed in the service's log file.")
eventlet_backdoor_opts = [
cfg.StrOpt('backdoor_port',
help="Enable eventlet backdoor. %s" % help_for_backdoor_port)
]
CONF = cfg.CONF
CONF.register_opts(eventlet_backdoor_opts)
LOG = logging.getLogger(__name__)
class EventletBackdoorConfigValueError(Exception):
def __init__(self, port_range, help_msg, ex):
msg = ('Invalid backdoor_port configuration %(range)s: %(ex)s. '
'%(help)s' %
{'range': port_range, 'ex': ex, 'help': help_msg})
super(EventletBackdoorConfigValueError, self).__init__(msg)
self.port_range = port_range
def _dont_use_this():
print("Don't use this, just disconnect instead")
def _find_objects(t):
return [o for o in gc.get_objects() if isinstance(o, t)]
def _print_greenthreads():
for i, gt in enumerate(_find_objects(greenlet.greenlet)):
print(i, gt)
traceback.print_stack(gt.gr_frame)
print()
def _print_nativethreads():
for threadId, stack in sys._current_frames().items():
print(threadId)
traceback.print_stack(stack)
print()
def _parse_port_range(port_range):
if ':' not in port_range:
start, end = port_range, port_range
else:
start, end = port_range.split(':', 1)
try:
start, end = int(start), int(end)
if end < start:
raise ValueError
return start, end
except ValueError as ex:
raise EventletBackdoorConfigValueError(port_range, ex,
help_for_backdoor_port)
def _listen(host, start_port, end_port, listen_func):
try_port = start_port
while True:
try:
return listen_func((host, try_port))
except socket.error as exc:
if (exc.errno != errno.EADDRINUSE or
try_port >= end_port):
raise
try_port += 1
def initialize_if_enabled():
backdoor_locals = {
'exit': _dont_use_this, # So we don't exit the entire process
'quit': _dont_use_this, # So we don't exit the entire process
'fo': _find_objects,
'pgt': _print_greenthreads,
'pnt': _print_nativethreads,
}
if CONF.backdoor_port is None:
return None
start_port, end_port = _parse_port_range(str(CONF.backdoor_port))
# NOTE(johannes): The standard sys.displayhook will print the value of
# the last expression and set it to __builtin__._, which overwrites
# the __builtin__._ that gettext sets. Let's switch to using pprint
# since it won't interact poorly with gettext, and it's easier to
# read the output too.
def displayhook(val):
if val is not None:
pprint.pprint(val)
sys.displayhook = displayhook
sock = _listen('localhost', start_port, end_port, eventlet.listen)
# In the case of backdoor port being zero, a port number is assigned by
# listen(). In any case, pull the port number out here.
port = sock.getsockname()[1]
LOG.info(
_LI('Eventlet backdoor listening on %(port)s for process %(pid)d') %
{'port': port, 'pid': os.getpid()}
)
eventlet.spawn_n(eventlet.backdoor.backdoor_server, sock,
locals=backdoor_locals)
return port
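# A quick sketch of how the three accepted backdoor_port forms parse;
# the port numbers are illustrative.
assert _parse_port_range('0') == (0, 0)                # random free port
assert _parse_port_range('8080') == (8080, 8080)       # exactly this port
assert _parse_port_range('8000:9000') == (8000, 9000)  # first free in range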

View File

@ -0,0 +1,113 @@
# Copyright 2011 OpenStack Foundation.
# Copyright 2012, Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Exception related utilities.
"""
import logging
import sys
import time
import traceback
import six
from kiloeyes.openstack.common.gettextutils import _LE
class save_and_reraise_exception(object):
"""Save current exception, run some code and then re-raise.
In some cases the exception context can be cleared, resulting in an
attempt to re-raise None after an exception handler is run. This
can happen when eventlet switches greenthreads, or when code running
inside an exception handler raises and catches another exception. In both
cases the exception context will be cleared.
To work around this, we save the exception state, run handler code, and
then re-raise the original exception. If another exception occurs, the
saved exception is logged and the new exception is re-raised.
In some cases the caller may not want to re-raise the exception, and
for those circumstances this context provides a reraise flag that
can be used to suppress the exception. For example::
except Exception:
with save_and_reraise_exception() as ctxt:
decide_if_need_reraise()
if not should_be_reraised:
ctxt.reraise = False
If another exception occurs and the reraise flag is False,
the saved exception will not be logged.
If the caller wants to raise a new exception during exception handling,
reraise can be set to False initially, with the ability to set it back to
True if needed::
except Exception:
with save_and_reraise_exception(reraise=False) as ctxt:
[if statements to determine whether to raise a new exception]
# Not raising a new exception, so reraise
ctxt.reraise = True
"""
def __init__(self, reraise=True):
self.reraise = reraise
def __enter__(self):
self.type_, self.value, self.tb, = sys.exc_info()
return self
def __exit__(self, exc_type, exc_val, exc_tb):
if exc_type is not None:
if self.reraise:
logging.error(_LE('Original exception being dropped: %s'),
traceback.format_exception(self.type_,
self.value,
self.tb))
return False
if self.reraise:
six.reraise(self.type_, self.value, self.tb)
def forever_retry_uncaught_exceptions(infunc):
def inner_func(*args, **kwargs):
last_log_time = 0
last_exc_message = None
exc_count = 0
while True:
try:
return infunc(*args, **kwargs)
except Exception as exc:
this_exc_message = six.u(str(exc))
if this_exc_message == last_exc_message:
exc_count += 1
else:
exc_count = 1
# Do not log any more frequently than once a minute unless
# the exception message changes
cur_time = int(time.time())
if (cur_time - last_log_time > 60 or
this_exc_message != last_exc_message):
logging.exception(
_LE('Unexpected exception occurred %d time(s)... '
'retrying.') % exc_count)
last_log_time = cur_time
last_exc_message = this_exc_message
exc_count = 0
# This should be a very rare event. In case it isn't, do
# a sleep.
time.sleep(1)
return inner_func
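# A short sketch of save_and_reraise_exception: the handler body runs,
# then the original exception (and its traceback) is re-raised. The
# names below are illustrative.
import logging

def cleanup():
    logging.warning('rolling back')

try:
    try:
        raise ValueError('boom')
    except Exception:
        with save_and_reraise_exception():
            cleanup()                  # runs before the re-raise
except ValueError as exc:
    assert str(exc) == 'boom'          # original exception was preserved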

View File

@ -0,0 +1,146 @@
# Copyright 2011 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import contextlib
import errno
import os
import tempfile
from kiloeyes.openstack.common import excutils
from kiloeyes.openstack.common import log as logging
LOG = logging.getLogger(__name__)
_FILE_CACHE = {}
def ensure_tree(path):
"""Create a directory (and any ancestor directories required)
:param path: Directory to create
"""
try:
os.makedirs(path)
except OSError as exc:
if exc.errno == errno.EEXIST:
if not os.path.isdir(path):
raise
else:
raise
def read_cached_file(filename, force_reload=False):
"""Read from a file if it has been modified.
:param force_reload: Whether to reload the file.
:returns: A tuple of a boolean specifying whether the data was
reloaded, and the contents of the file.
"""
global _FILE_CACHE
if force_reload:
delete_cached_file(filename)
reloaded = False
mtime = os.path.getmtime(filename)
cache_info = _FILE_CACHE.setdefault(filename, {})
if not cache_info or mtime > cache_info.get('mtime', 0):
LOG.debug("Reloading cached file %s" % filename)
with open(filename) as fap:
cache_info['data'] = fap.read()
cache_info['mtime'] = mtime
reloaded = True
return (reloaded, cache_info['data'])
def delete_cached_file(filename):
"""Delete cached file if present.
:param filename: filename to delete
"""
global _FILE_CACHE
if filename in _FILE_CACHE:
del _FILE_CACHE[filename]
def delete_if_exists(path, remove=os.unlink):
"""Delete a file, but ignore file not found error.
:param path: File to delete
:param remove: Optional function to remove passed path
"""
try:
remove(path)
except OSError as e:
if e.errno != errno.ENOENT:
raise
@contextlib.contextmanager
def remove_path_on_error(path, remove=delete_if_exists):
"""Protect code that wants to operate on PATH atomically.
Any exception will cause PATH to be removed.
:param path: File to work with
:param remove: Optional function to remove passed path
"""
try:
yield
except Exception:
with excutils.save_and_reraise_exception():
remove(path)
def file_open(*args, **kwargs):
"""Open file
see built-in open() documentation for more details
Note: The reason this is kept in a separate module is to easily
be able to provide a stub module that doesn't alter system
state at all (for unit tests)
"""
return open(*args, **kwargs)
def write_to_tempfile(content, path=None, suffix='', prefix='tmp'):
"""Create temporary file or use existing file.
This util is needed for creating temporary file with
specified content, suffix and prefix. If path is not None,
it will be used for writing content. If the path doesn't
exist it'll be created.
:param content: content for temporary file.
:param path: same as parameter 'dir' for mkstemp
:param suffix: same as parameter 'suffix' for mkstemp
:param prefix: same as parameter 'prefix' for mkstemp
For example: it can be used in database tests for creating
configuration files.
"""
if path:
ensure_tree(path)
(fd, path) = tempfile.mkstemp(suffix=suffix, dir=path, prefix=prefix)
try:
os.write(fd, content)
finally:
os.close(fd)
return path
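# A small sketch combining write_to_tempfile() and read_cached_file();
# the file content is illustrative.
path = write_to_tempfile(b'key = value\n', suffix='.conf')
reloaded, data = read_cached_file(path)
assert reloaded and data == 'key = value\n'
reloaded, data = read_cached_file(path)   # mtime unchanged: cache hit
assert not reloaded
delete_if_exists(path)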

View File

@ -0,0 +1,85 @@
#
# Copyright 2013 Mirantis, Inc.
# Copyright 2013 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import fixtures
from oslo_config import cfg
import six
class Config(fixtures.Fixture):
"""Allows overriding configuration settings for the test.
`conf` will be reset on cleanup.
"""
def __init__(self, conf=cfg.CONF):
self.conf = conf
def setUp(self):
super(Config, self).setUp()
# NOTE(morganfainberg): unregister must be added to cleanup before
# reset is because cleanup works in reverse order of registered items,
# and a reset must occur before unregistering options can occur.
self.addCleanup(self._unregister_config_opts)
self.addCleanup(self.conf.reset)
self._registered_config_opts = {}
def config(self, **kw):
"""Override configuration values.
The keyword arguments are the names of configuration options to
override and their values.
If a `group` argument is supplied, the overrides are applied to
the specified configuration option group, otherwise the overrides
are applied to the ``default`` group.
"""
group = kw.pop('group', None)
for k, v in six.iteritems(kw):
self.conf.set_override(k, v, group)
def _unregister_config_opts(self):
for group in self._registered_config_opts:
self.conf.unregister_opts(self._registered_config_opts[group],
group=group)
def register_opt(self, opt, group=None):
"""Register a single option for the test run.
Options registered in this manner will automatically be unregistered
during cleanup.
If a `group` argument is supplied, it will register the new option
to that group, otherwise the option is registered to the ``default``
group.
"""
self.conf.register_opt(opt, group=group)
self._registered_config_opts.setdefault(group, set()).add(opt)
def register_opts(self, opts, group=None):
"""Register multiple options for the test run.
This works in the same manner as register_opt() but takes a list of
options as the first argument. All arguments will be registered to the
same group if the ``group`` argument is supplied, otherwise all options
will be registered to the ``default`` group.
"""
for opt in opts:
self.register_opt(opt, group=group)
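# A minimal sketch of the fixture inside a testtools test case; the
# 'host' option is illustrative, not one kiloeyes registers.
import testtools
from oslo_config import cfg

class ConfigExampleTest(testtools.TestCase):
    def test_override(self):
        fixture = self.useFixture(Config())
        fixture.register_opt(cfg.StrOpt('host', default='localhost'))
        fixture.config(host='10.0.0.1')
        self.assertEqual('10.0.0.1', cfg.CONF.host)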

View File

@ -0,0 +1,51 @@
# Copyright 2011 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import fixtures
from kiloeyes.openstack.common import lockutils
class LockFixture(fixtures.Fixture):
"""External locking fixture.
This fixture is basically an alternative to the synchronized decorator with
the external flag so that tearDowns and addCleanups will be included in
the lock context for locking between tests. The fixture is recommended to
be the first line in a test method, like so::
def test_method(self):
self.useFixture(LockFixture('lock_name'))
...
or the first line in setUp if all the test methods in the class are
required to be serialized. Something like::
class TestCase(testtools.TestCase):
def setUp(self):
self.useFixture(LockFixture('lock_name'))
super(TestCase, self).setUp()
...
This is because addCleanups are put on a LIFO queue that gets run after the
test method exits (either by completing or by raising an exception).
"""
def __init__(self, name, lock_file_prefix=None):
self.mgr = lockutils.lock(name, lock_file_prefix, True)
def setUp(self):
super(LockFixture, self).setUp()
self.addCleanup(self.mgr.__exit__, None, None, None)
self.lock = self.mgr.__enter__()
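# A minimal sketch of serializing every test in a class; assumes
# CONF.lock_path is configured so the external lock can be acquired.
# The lock name is illustrative.
import testtools

class SerializedTests(testtools.TestCase):
    def setUp(self):
        super(SerializedTests, self).setUp()
        self.useFixture(LockFixture('shared-resource'))

    def test_uses_shared_resource(self):
        pass  # runs while the external lock is held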

View File

@ -0,0 +1,34 @@
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import fixtures
def get_logging_handle_error_fixture():
"""returns a fixture to make logging raise formatting exceptions.
Usage:
self.useFixture(logging.get_logging_handle_error_fixture())
"""
return fixtures.MonkeyPatch('logging.Handler.handleError',
_handleError)
def _handleError(self, record):
"""Monkey patch for logging.Handler.handleError.
The default handleError just logs the error to stderr but we want
the option of actually raising an exception.
"""
raise

View File

@ -0,0 +1,62 @@
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# Copyright 2013 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
##############################################################################
##############################################################################
#
# DO NOT MODIFY THIS FILE
#
# This file is being graduated to the oslotest library. Please make all
# changes there, and only backport critical fixes here. - dhellmann
#
##############################################################################
##############################################################################
import fixtures
import mock
class PatchObject(fixtures.Fixture):
"""Deal with code around mock."""
def __init__(self, obj, attr, new=mock.DEFAULT, **kwargs):
self.obj = obj
self.attr = attr
self.kwargs = kwargs
self.new = new
def setUp(self):
super(PatchObject, self).setUp()
_p = mock.patch.object(self.obj, self.attr, self.new, **self.kwargs)
self.mock = _p.start()
self.addCleanup(_p.stop)
class Patch(fixtures.Fixture):
"""Deal with code around mock.patch."""
def __init__(self, obj, new=mock.DEFAULT, **kwargs):
self.obj = obj
self.kwargs = kwargs
self.new = new
def setUp(self):
super(Patch, self).setUp()
_p = mock.patch(self.obj, self.new, **self.kwargs)
self.mock = _p.start()
self.addCleanup(_p.stop)
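# A short sketch of PatchObject in a testtools test; time.sleep is
# just a convenient illustrative target.
import time
import testtools

class PatchExampleTest(testtools.TestCase):
    def test_sleep_is_stubbed(self):
        fixture = self.useFixture(PatchObject(time, 'sleep'))
        time.sleep(60)                 # returns immediately, call recorded
        fixture.mock.assert_called_once_with(60)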

View File

@ -0,0 +1,43 @@
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# Copyright 2013 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
##############################################################################
##############################################################################
#
# DO NOT MODIFY THIS FILE
#
# This file is being graduated to the oslotest library. Please make all
# changes there, and only backport critical fixes here. - dhellmann
#
##############################################################################
##############################################################################
import fixtures
from six.moves import mox
class MoxStubout(fixtures.Fixture):
"""Deal with code around mox and stubout as a fixture."""
def setUp(self):
super(MoxStubout, self).setUp()
# emulate some of the mox stuff, we can't use the metaclass
# because it screws with our generators
self.mox = mox.Mox()
self.stubs = self.mox.stubs
self.addCleanup(self.mox.UnsetStubs)
self.addCleanup(self.mox.VerifyAll)

View File

@ -0,0 +1,479 @@
# Copyright 2012 Red Hat, Inc.
# Copyright 2013 IBM Corp.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
gettext for openstack-common modules.
Usual usage in an openstack.common module:
from kiloeyes.openstack.common.gettextutils import _
"""
import copy
import gettext
import locale
from logging import handlers
import os
from babel import localedata
import six
_AVAILABLE_LANGUAGES = {}
# FIXME(dhellmann): Remove this when moving to oslo.i18n.
USE_LAZY = False
class TranslatorFactory(object):
"""Create translator functions
"""
def __init__(self, domain, localedir=None):
"""Establish a set of translation functions for the domain.
:param domain: Name of translation domain,
specifying a message catalog.
:type domain: str
:param localedir: Directory with translation catalogs.
:type localedir: str
"""
self.domain = domain
if localedir is None:
localedir = os.environ.get(domain.upper() + '_LOCALEDIR')
self.localedir = localedir
def _make_translation_func(self, domain=None):
"""Return a new translation function ready for use.
Takes into account whether or not lazy translation is being
done.
The domain can be specified to override the default from the
factory, but the localedir from the factory is always used
because we assume the log-level translation catalogs are
installed in the same directory as the main application
catalog.
"""
if domain is None:
domain = self.domain
t = gettext.translation(domain,
localedir=self.localedir,
fallback=True)
# Use the appropriate method of the translation object based
# on the python version.
m = t.gettext if six.PY3 else t.ugettext
def f(msg):
"""oslo.i18n.gettextutils translation function."""
if USE_LAZY:
return Message(msg, domain=domain)
return m(msg)
return f
@property
def primary(self):
"The default translation function."
return self._make_translation_func()
def _make_log_translation_func(self, level):
return self._make_translation_func(self.domain + '-log-' + level)
@property
def log_info(self):
"Translate info-level log messages."
return self._make_log_translation_func('info')
@property
def log_warning(self):
"Translate warning-level log messages."
return self._make_log_translation_func('warning')
@property
def log_error(self):
"Translate error-level log messages."
return self._make_log_translation_func('error')
@property
def log_critical(self):
"Translate critical-level log messages."
return self._make_log_translation_func('critical')
# NOTE(dhellmann): When this module moves out of the incubator into
# oslo.i18n, these global variables can be moved to an integration
# module within each application.
# Create the global translation functions.
_translators = TranslatorFactory('kiloeyes')
# The primary translation function using the well-known name "_"
_ = _translators.primary
# Translators for log levels.
#
# The abbreviated names are meant to reflect the usual use of a short
# name like '_'. The "L" is for "log" and the other letter comes from
# the level.
_LI = _translators.log_info
_LW = _translators.log_warning
_LE = _translators.log_error
_LC = _translators.log_critical
# NOTE(dhellmann): End of globals that will move to the application's
# integration module.
def enable_lazy():
"""Convenience function for configuring _() to use lazy gettext
Call this at the start of execution to enable the gettextutils._
function to use lazy gettext functionality. This is useful if
your project is importing _ directly instead of using the
gettextutils.install() way of importing the _ function.
"""
global USE_LAZY
USE_LAZY = True
def install(domain):
"""Install a _() function using the given translation domain.
Given a translation domain, install a _() function using gettext's
install() function.
The main difference from gettext.install() is that we allow
overriding the default localedir (e.g. /usr/share/locale) using
a translation-domain-specific environment variable (e.g.
NOVA_LOCALEDIR).
Note that to enable lazy translation, enable_lazy must be
called.
:param domain: the translation domain
"""
from six import moves
tf = TranslatorFactory(domain)
moves.builtins.__dict__['_'] = tf.primary
class Message(six.text_type):
"""A Message object is a unicode object that can be translated.
Translation of Message is done explicitly using the translate() method.
For all non-translation intents and purposes, a Message is simply unicode,
and can be treated as such.
"""
def __new__(cls, msgid, msgtext=None, params=None,
domain='kiloeyes', *args):
"""Create a new Message object.
In order for translation to work gettext requires a message ID, this
msgid will be used as the base unicode text. It is also possible
for the msgid and the base unicode text to be different by passing
the msgtext parameter.
"""
# If the base msgtext is not given, we use the default translation
# of the msgid (which is in English) just in case the system locale is
# not English, so that the base text will be in that locale by default.
if not msgtext:
msgtext = Message._translate_msgid(msgid, domain)
# We want to initialize the parent unicode with the actual object that
# would have been plain unicode if 'Message' was not enabled.
msg = super(Message, cls).__new__(cls, msgtext)
msg.msgid = msgid
msg.domain = domain
msg.params = params
return msg
def translate(self, desired_locale=None):
"""Translate this message to the desired locale.
:param desired_locale: The desired locale to translate the message to,
if no locale is provided the message will be
translated to the system's default locale.
:returns: the translated message in unicode
"""
translated_message = Message._translate_msgid(self.msgid,
self.domain,
desired_locale)
if self.params is None:
# No need for more translation
return translated_message
# This Message object may have been formatted with one or more
# Message objects as substitution arguments, given either as a single
# argument, part of a tuple, or as one or more values in a dictionary.
# When translating this Message we need to translate those Messages too
translated_params = _translate_args(self.params, desired_locale)
translated_message = translated_message % translated_params
return translated_message
@staticmethod
def _translate_msgid(msgid, domain, desired_locale=None):
if not desired_locale:
system_locale = locale.getdefaultlocale()
# If the system locale is not available to the runtime use English
if not system_locale[0]:
desired_locale = 'en_US'
else:
desired_locale = system_locale[0]
locale_dir = os.environ.get(domain.upper() + '_LOCALEDIR')
lang = gettext.translation(domain,
localedir=locale_dir,
languages=[desired_locale],
fallback=True)
if six.PY3:
translator = lang.gettext
else:
translator = lang.ugettext
translated_message = translator(msgid)
return translated_message
def __mod__(self, other):
# When we mod a Message we want the actual operation to be performed
# by the parent class (i.e. unicode()), the only thing we do here is
# save the original msgid and the parameters in case of a translation
params = self._sanitize_mod_params(other)
unicode_mod = super(Message, self).__mod__(params)
modded = Message(self.msgid,
msgtext=unicode_mod,
params=params,
domain=self.domain)
return modded
def _sanitize_mod_params(self, other):
"""Sanitize the object being modded with this Message.
- Add support for modding 'None' so translation supports it
- Trim the modded object, which can be a large dictionary, to only
those keys that would actually be used in a translation
- Snapshot the object being modded, in case the message is
translated, it will be used as it was when the Message was created
"""
if other is None:
params = (other,)
elif isinstance(other, dict):
# Merge the dictionaries
# Copy each item in case one does not support deep copy.
params = {}
if isinstance(self.params, dict):
for key, val in self.params.items():
params[key] = self._copy_param(val)
for key, val in other.items():
params[key] = self._copy_param(val)
else:
params = self._copy_param(other)
return params
def _copy_param(self, param):
try:
return copy.deepcopy(param)
except Exception:
# Fallback to casting to unicode this will handle the
# python code-like objects that can't be deep-copied
return six.text_type(param)
def __add__(self, other):
msg = _('Message objects do not support addition.')
raise TypeError(msg)
def __radd__(self, other):
return self.__add__(other)
if six.PY2:
def __str__(self):
# NOTE(luisg): Logging in python 2.6 tries to str() log records,
# and it expects specifically a UnicodeError in order to proceed.
msg = _('Message objects do not support str() because they may '
'contain non-ascii characters. '
'Please use unicode() or translate() instead.')
raise UnicodeError(msg)
def get_available_languages(domain):
"""Lists the available languages for the given translation domain.
:param domain: the domain to get languages for
"""
if domain in _AVAILABLE_LANGUAGES:
return copy.copy(_AVAILABLE_LANGUAGES[domain])
localedir = '%s_LOCALEDIR' % domain.upper()
find = lambda x: gettext.find(domain,
localedir=os.environ.get(localedir),
languages=[x])
# NOTE(mrodden): en_US should always be available (and first in case
# order matters) since our in-line message strings are en_US
language_list = ['en_US']
# NOTE(luisg): Babel <1.0 used a function called list(), which was
# renamed to locale_identifiers() in >=1.0, the requirements master list
# requires >=0.9.6, uncapped, so defensively work with both. We can remove
# this check when the master list updates to >=1.0, and update all projects
list_identifiers = (getattr(localedata, 'list', None) or
getattr(localedata, 'locale_identifiers'))
locale_identifiers = list_identifiers()
for i in locale_identifiers:
if find(i) is not None:
language_list.append(i)
# NOTE(luisg): Babel>=1.0,<1.3 has a bug where some OpenStack supported
# locales (e.g. 'zh_CN', and 'zh_TW') aren't supported even though they
# are perfectly legitimate locales:
# https://github.com/mitsuhiko/babel/issues/37
# In Babel 1.3 they fixed the bug and they support these locales, but
# they are still not explicitly "listed" by locale_identifiers().
# That is why we add the locales here explicitly if necessary so that
# they are listed as supported.
aliases = {'zh': 'zh_CN',
'zh_Hant_HK': 'zh_HK',
'zh_Hant': 'zh_TW',
'fil': 'tl_PH'}
for (locale_, alias) in six.iteritems(aliases):
if locale_ in language_list and alias not in language_list:
language_list.append(alias)
_AVAILABLE_LANGUAGES[domain] = language_list
return copy.copy(language_list)
def translate(obj, desired_locale=None):
"""Gets the translated unicode representation of the given object.
If the object is not translatable it is returned as-is.
If the locale is None the object is translated to the system locale.
:param obj: the object to translate
:param desired_locale: the locale to translate the message to, if None the
default system locale will be used
:returns: the translated object in unicode, or the original object if
it could not be translated
"""
message = obj
if not isinstance(message, Message):
# If the object to translate is not already translatable,
# let's first get its unicode representation
message = six.text_type(obj)
if isinstance(message, Message):
# Even after unicoding() we still need to check if we are
# running with translatable unicode before translating
return message.translate(desired_locale)
return obj
def _translate_args(args, desired_locale=None):
"""Translates all the translatable elements of the given arguments object.
This method is used for translating the translatable values in method
arguments which include values of tuples or dictionaries.
If the object is not a tuple or a dictionary the object itself is
translated if it is translatable.
If the locale is None the object is translated to the system locale.
:param args: the args to translate
:param desired_locale: the locale to translate the args to, if None the
default system locale will be used
:returns: a new args object with the translated contents of the original
"""
if isinstance(args, tuple):
return tuple(translate(v, desired_locale) for v in args)
if isinstance(args, dict):
translated_dict = {}
for (k, v) in six.iteritems(args):
translated_v = translate(v, desired_locale)
translated_dict[k] = translated_v
return translated_dict
return translate(args, desired_locale)
class TranslationHandler(handlers.MemoryHandler):
"""Handler that translates records before logging them.
The TranslationHandler takes a locale and a target logging.Handler object
to forward LogRecord objects to after translating them. This handler
depends on Message objects being logged, instead of regular strings.
The handler can be configured declaratively in the logging.conf as follows:
[handlers]
keys = translatedlog, translator
[handler_translatedlog]
class = handlers.WatchedFileHandler
args = ('/var/log/api-localized.log',)
formatter = context
[handler_translator]
class = openstack.common.log.TranslationHandler
target = translatedlog
args = ('zh_CN',)
If the specified locale is not available in the system, the handler will
log in the default locale.
"""
def __init__(self, locale=None, target=None):
"""Initialize a TranslationHandler
:param locale: locale to use for translating messages
:param target: logging.Handler object to forward
LogRecord objects to after translation
"""
# NOTE(luisg): In order to allow this handler to be a wrapper for
# other handlers, such as a FileHandler, and still be able to
# configure it using logging.conf, this handler has to extend
# MemoryHandler because only the MemoryHandlers' logging.conf
# parsing is implemented such that it accepts a target handler.
handlers.MemoryHandler.__init__(self, capacity=0, target=target)
self.locale = locale
def setFormatter(self, fmt):
self.target.setFormatter(fmt)
def emit(self, record):
# We save the message from the original record to restore it
# after translation, so other handlers are not affected by this
original_msg = record.msg
original_args = record.args
try:
self._translate_and_log_record(record)
finally:
record.msg = original_msg
record.args = original_args
def _translate_and_log_record(self, record):
record.msg = translate(record.msg, self.locale)
# In addition to translating the message, we also need to translate
# arguments that were passed to the log method that were not part
# of the main message e.g., log.info(_('Some message %s'), this_one))
record.args = _translate_args(record.args, self.locale)
self.target.emit(record)
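# A minimal sketch of lazy translation; assuming no message catalogs
# are installed, gettext falls back to the English msgid.
enable_lazy()
msg = _('Found %(count)d metrics') % {'count': 3}
assert isinstance(msg, Message)                  # not plain unicode
assert msg.translate('en_US') == u'Found 3 metrics'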

View File

@ -0,0 +1,73 @@
# Copyright 2011 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Import related utilities and helper functions.
"""
import sys
import traceback
def import_class(import_str):
"""Returns a class from a string including module and class."""
mod_str, _sep, class_str = import_str.rpartition('.')
__import__(mod_str)
try:
return getattr(sys.modules[mod_str], class_str)
except AttributeError:
raise ImportError('Class %s cannot be found (%s)' %
(class_str,
traceback.format_exception(*sys.exc_info())))
def import_object(import_str, *args, **kwargs):
"""Import a class and return an instance of it."""
return import_class(import_str)(*args, **kwargs)
def import_object_ns(name_space, import_str, *args, **kwargs):
"""Tries to import object from default namespace.
Imports a class and return an instance of it, first by trying
to find the class in a default namespace, then falling back to
a full path if not found in the default namespace.
"""
import_value = "%s.%s" % (name_space, import_str)
try:
return import_class(import_value)(*args, **kwargs)
except ImportError:
return import_class(import_str)(*args, **kwargs)
def import_module(import_str):
"""Import a module."""
__import__(import_str)
return sys.modules[import_str]
def import_versioned_module(version, submodule=None):
module = 'kiloeyes.v%s' % version
if submodule:
module = '.'.join((module, submodule))
return import_module(module)
def try_import(import_str, default=None):
"""Try to import a module and if it fails return default."""
try:
return import_module(import_str)
except ImportError:
return default
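# A small sketch of the import helpers; collections.OrderedDict is
# used only as a well-known importable class.
cls = import_class('collections.OrderedDict')
obj = import_object('collections.OrderedDict', a=1)
assert cls is type(obj) and obj['a'] == 1
assert try_import('no.such.module') is None   # falls back to the default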

View File

@ -0,0 +1,202 @@
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# Copyright 2011 Justin Santa Barbara
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
'''
JSON related utilities.
This module provides a few things:
1) A handy function for getting an object down to something that can be
JSON serialized. See to_primitive().
2) Wrappers around loads() and dumps(). The dumps() wrapper will
automatically use to_primitive() for you if needed.
3) This sets up anyjson to use the loads() and dumps() wrappers if anyjson
is available.
'''
import codecs
import datetime
import functools
import inspect
import itertools
import sys
is_simplejson = False
if sys.version_info < (2, 7):
# On Python <= 2.6, json module is not C boosted, so try to use
# simplejson module if available
try:
import simplejson as json
# NOTE(mriedem): Make sure we have a new enough version of simplejson
# to support the namedtuple_as_object argument. This can be removed
# in the Kilo release when python 2.6 support is dropped.
if 'namedtuple_as_object' in inspect.getargspec(json.dumps).args:
is_simplejson = True
else:
import json
except ImportError:
import json
else:
import json
import six
import six.moves.xmlrpc_client as xmlrpclib
from kiloeyes.openstack.common import gettextutils
from kiloeyes.openstack.common import importutils
from kiloeyes.openstack.common import strutils
from kiloeyes.openstack.common import timeutils
netaddr = importutils.try_import("netaddr")
_nasty_type_tests = [inspect.ismodule, inspect.isclass, inspect.ismethod,
inspect.isfunction, inspect.isgeneratorfunction,
inspect.isgenerator, inspect.istraceback, inspect.isframe,
inspect.iscode, inspect.isbuiltin, inspect.isroutine,
inspect.isabstract]
_simple_types = (six.string_types + six.integer_types
+ (type(None), bool, float))
def to_primitive(value, convert_instances=False, convert_datetime=True,
level=0, max_depth=3):
"""Convert a complex object into primitives.
Handy for JSON serialization. We can optionally handle instances,
but since this is a recursive function, we could have cyclical
data structures.
To handle cyclical data structures we could track the actual objects
visited in a set, but not all objects are hashable. Instead we just
track the depth of the object inspections and don't go too deep.
Therefore, convert_instances=True is lossy ... be aware.
"""
# handle obvious types first - order of basic types determined by running
# full tests on nova project, resulting in the following counts:
# 572754 <type 'NoneType'>
# 460353 <type 'int'>
# 379632 <type 'unicode'>
# 274610 <type 'str'>
# 199918 <type 'dict'>
# 114200 <type 'datetime.datetime'>
# 51817 <type 'bool'>
# 26164 <type 'list'>
# 6491 <type 'float'>
# 283 <type 'tuple'>
# 19 <type 'long'>
if isinstance(value, _simple_types):
return value
if isinstance(value, datetime.datetime):
if convert_datetime:
return timeutils.strtime(value)
else:
return value
# value of itertools.count doesn't get caught by nasty_type_tests
# and results in infinite loop when list(value) is called.
if type(value) == itertools.count:
return six.text_type(value)
# FIXME(vish): Workaround for LP bug 852095. Without this workaround,
# tests that raise an exception in a mocked method that
# has a @wrap_exception with a notifier will fail. If
# we up the dependency to 0.5.4 (when it is released) we
# can remove this workaround.
if getattr(value, '__module__', None) == 'mox':
return 'mock'
if level > max_depth:
return '?'
# The try block may not be necessary after the class check above,
# but just in case ...
try:
recursive = functools.partial(to_primitive,
convert_instances=convert_instances,
convert_datetime=convert_datetime,
level=level,
max_depth=max_depth)
if isinstance(value, dict):
return dict((k, recursive(v)) for k, v in six.iteritems(value))
elif isinstance(value, (list, tuple)):
return [recursive(lv) for lv in value]
# It's not clear why xmlrpclib created their own DateTime type, but
# for our purposes, make it a datetime type which is explicitly
# handled
if isinstance(value, xmlrpclib.DateTime):
value = datetime.datetime(*tuple(value.timetuple())[:6])
if convert_datetime and isinstance(value, datetime.datetime):
return timeutils.strtime(value)
elif isinstance(value, gettextutils.Message):
return value.data
elif hasattr(value, 'iteritems'):
return recursive(dict(value.iteritems()), level=level + 1)
elif hasattr(value, '__iter__'):
return recursive(list(value))
elif convert_instances and hasattr(value, '__dict__'):
# Likely an instance of something. Watch for cycles.
# Ignore class member vars.
return recursive(value.__dict__, level=level + 1)
elif netaddr and isinstance(value, netaddr.IPAddress):
return six.text_type(value)
else:
if any(test(value) for test in _nasty_type_tests):
return six.text_type(value)
return value
except TypeError:
# Class objects are tricky since they may define something like
# __iter__ defined but it isn't callable as list().
return six.text_type(value)
def dumps(value, default=to_primitive, **kwargs):
if is_simplejson:
kwargs['namedtuple_as_object'] = False
return json.dumps(value, default=default, **kwargs)
def dump(obj, fp, *args, **kwargs):
if is_simplejson:
kwargs['namedtuple_as_object'] = False
return json.dump(obj, fp, *args, **kwargs)
def loads(s, encoding='utf-8', **kwargs):
return json.loads(strutils.safe_decode(s, encoding), **kwargs)
def load(fp, encoding='utf-8', **kwargs):
return json.load(codecs.getreader(encoding)(fp), **kwargs)
try:
import anyjson
except ImportError:
pass
else:
anyjson._modules.append((__name__, 'dumps', TypeError,
'loads', ValueError, 'load'))
anyjson.force_implementation(__name__)
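# A minimal sketch of dumps() routing values the stock json module
# rejects through to_primitive(); assumes timeutils' default
# "%Y-%m-%dT%H:%M:%S.%f" format for datetimes.
import datetime

record = {'when': datetime.datetime(2015, 8, 6, 21, 1, 26),
          'tags': set(['cpu', 'host'])}
body = dumps(record)   # the set becomes a list, the datetime a string
assert loads(body)['when'] == '2015-08-06T21:01:26.000000'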

View File

@ -0,0 +1,45 @@
# Copyright 2011 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Local storage of variables using weak references"""
import threading
import weakref
class WeakLocal(threading.local):
def __getattribute__(self, attr):
rval = super(WeakLocal, self).__getattribute__(attr)
if rval:
# NOTE(mikal): this bit is confusing. What is stored is a weak
# reference, not the value itself. We therefore need to lookup
# the weak reference and return the inner value here.
rval = rval()
return rval
def __setattr__(self, attr, value):
value = weakref.ref(value)
return super(WeakLocal, self).__setattr__(attr, value)
# NOTE(mikal): the name "store" should be deprecated in the future
store = WeakLocal()
# A "weak" store uses weak references and allows an object to fall out of scope
# when it falls out of scope in the code that uses the thread local storage. A
# "strong" store will hold a reference to the object so that it never falls out
# of scope.
weak_store = WeakLocal()
strong_store = threading.local()
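# A tiny sketch of the weak/strong store difference (CPython
# refcounting assumed, so collection is immediate).
class Session(object):
    pass

obj = Session()
weak_store.session = obj
assert weak_store.session is obj    # alive while obj holds a reference
del obj
assert weak_store.session is None   # weak slot reads None once collected
strong_store.session = Session()
assert strong_store.session is not None   # strong store keeps it alive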

View File

@ -0,0 +1,322 @@
# Copyright 2011 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import contextlib
import errno
import functools
import logging
import os
import shutil
import subprocess
import sys
import tempfile
import threading
import time
import weakref
from oslo_config import cfg
from kiloeyes.openstack.common import fileutils
from kiloeyes.openstack.common.gettextutils import _, _LE, _LI
LOG = logging.getLogger(__name__)
util_opts = [
cfg.BoolOpt('disable_process_locking', default=False,
help='Enables or disables inter-process locks.'),
cfg.StrOpt('lock_path',
default=os.environ.get("MONASCA_LOCK_PATH"),
help='Directory to use for lock files.')
]
CONF = cfg.CONF
CONF.register_opts(util_opts)
def set_defaults(lock_path):
cfg.set_defaults(util_opts, lock_path=lock_path)
class _FileLock(object):
"""Lock implementation which allows multiple locks, working around
issues like bugs.debian.org/cgi-bin/bugreport.cgi?bug=632857 and does
not require any cleanup. Since the lock is always held on a file
descriptor rather than outside of the process, the lock gets dropped
automatically if the process crashes, even if __exit__ is not executed.
There are no guarantees regarding usage by multiple green threads in a
single process here. This lock works only between processes. Exclusive
access between local threads should be achieved using the semaphores
in the @synchronized decorator.
Note these locks are released when the descriptor is closed, so it's not
safe to close the file descriptor while another green thread holds the
lock. Just opening and closing the lock file can break synchronisation,
so lock files must be accessed only using this abstraction.
"""
def __init__(self, name):
self.lockfile = None
self.fname = name
def acquire(self):
basedir = os.path.dirname(self.fname)
if not os.path.exists(basedir):
fileutils.ensure_tree(basedir)
LOG.info(_LI('Created lock path: %s'), basedir)
self.lockfile = open(self.fname, 'w')
while True:
try:
# Using non-blocking locks since green threads are not
# patched to deal with blocking locking calls.
# Also upon reading the MSDN docs for locking(), it seems
# to have a laughable 10 attempts "blocking" mechanism.
self.trylock()
LOG.debug('Got file lock "%s"', self.fname)
return True
except IOError as e:
if e.errno in (errno.EACCES, errno.EAGAIN):
# external locks synchronise things like iptables
# updates - give it some time to prevent busy spinning
time.sleep(0.01)
else:
raise threading.ThreadError(_("Unable to acquire lock on"
" `%(filename)s` due to"
" %(exception)s") %
{'filename': self.fname,
'exception': e})
def __enter__(self):
self.acquire()
return self
def release(self):
try:
self.unlock()
self.lockfile.close()
LOG.debug('Released file lock "%s"', self.fname)
except IOError:
LOG.exception(_LE("Could not release the acquired lock `%s`"),
self.fname)
def __exit__(self, exc_type, exc_val, exc_tb):
self.release()
def exists(self):
return os.path.exists(self.fname)
def trylock(self):
raise NotImplementedError()
def unlock(self):
raise NotImplementedError()
class _WindowsLock(_FileLock):
def trylock(self):
msvcrt.locking(self.lockfile.fileno(), msvcrt.LK_NBLCK, 1)
def unlock(self):
msvcrt.locking(self.lockfile.fileno(), msvcrt.LK_UNLCK, 1)
class _FcntlLock(_FileLock):
def trylock(self):
fcntl.lockf(self.lockfile, fcntl.LOCK_EX | fcntl.LOCK_NB)
def unlock(self):
fcntl.lockf(self.lockfile, fcntl.LOCK_UN)
if os.name == 'nt':
import msvcrt
InterProcessLock = _WindowsLock
else:
import fcntl
InterProcessLock = _FcntlLock
_semaphores = weakref.WeakValueDictionary()
_semaphores_lock = threading.Lock()
def _get_lock_path(name, lock_file_prefix, lock_path=None):
# NOTE(mikal): the lock name cannot contain directory
# separators
name = name.replace(os.sep, '_')
if lock_file_prefix:
sep = '' if lock_file_prefix.endswith('-') else '-'
name = '%s%s%s' % (lock_file_prefix, sep, name)
local_lock_path = lock_path or CONF.lock_path
if not local_lock_path:
raise cfg.RequiredOptError('lock_path')
return os.path.join(local_lock_path, name)
def external_lock(name, lock_file_prefix=None, lock_path=None):
LOG.debug('Attempting to grab external lock "%(lock)s"',
{'lock': name})
lock_file_path = _get_lock_path(name, lock_file_prefix, lock_path)
return InterProcessLock(lock_file_path)
def remove_external_lock_file(name, lock_file_prefix=None):
"""Remove an external lock file when it's not used anymore
This will be helpful when we have a lot of lock files
"""
with internal_lock(name):
lock_file_path = _get_lock_path(name, lock_file_prefix)
try:
os.remove(lock_file_path)
except OSError:
LOG.info(_LI('Failed to remove file %(file)s'),
{'file': lock_file_path})
def internal_lock(name):
with _semaphores_lock:
try:
sem = _semaphores[name]
except KeyError:
sem = threading.Semaphore()
_semaphores[name] = sem
LOG.debug('Got semaphore "%(lock)s"', {'lock': name})
return sem
@contextlib.contextmanager
def lock(name, lock_file_prefix=None, external=False, lock_path=None):
"""Context based lock
This function yields a `threading.Semaphore` instance (if we don't use
eventlet.monkey_patch(), else `semaphore.Semaphore`) unless external is
True, in which case, it'll yield an InterProcessLock instance.
:param lock_file_prefix: The lock_file_prefix argument is used to provide
lock files on disk with a meaningful prefix.
:param external: The external keyword argument denotes whether this lock
should work across multiple processes. This means that if two different
workers both run a method decorated with @synchronized('mylock',
external=True), only one of them will execute at a time.
"""
int_lock = internal_lock(name)
with int_lock:
if external and not CONF.disable_process_locking:
ext_lock = external_lock(name, lock_file_prefix, lock_path)
with ext_lock:
yield ext_lock
else:
yield int_lock
LOG.debug('Released semaphore "%(lock)s"', {'lock': name})
def synchronized(name, lock_file_prefix=None, external=False, lock_path=None):
"""Synchronization decorator.
Decorating a method like so::
@synchronized('mylock')
def foo(self, *args):
...
ensures that only one thread will execute the foo method at a time.
Different methods can share the same lock::
@synchronized('mylock')
def foo(self, *args):
...
@synchronized('mylock')
def bar(self, *args):
...
This way only one of either foo or bar can be executing at a time.
"""
def wrap(f):
@functools.wraps(f)
def inner(*args, **kwargs):
try:
with lock(name, lock_file_prefix, external, lock_path):
LOG.debug('Got semaphore / lock "%(function)s"',
{'function': f.__name__})
return f(*args, **kwargs)
finally:
LOG.debug('Semaphore / lock released "%(function)s"',
{'function': f.__name__})
return inner
return wrap
def synchronized_with_prefix(lock_file_prefix):
"""Partial object generator for the synchronization decorator.
Redefine @synchronized in each project like so::
(in nova/utils.py)
from nova.openstack.common import lockutils
synchronized = lockutils.synchronized_with_prefix('nova-')
(in nova/foo.py)
from nova import utils
@utils.synchronized('mylock')
def bar(self, *args):
...
The lock_file_prefix argument is used to provide lock files on disk with a
meaningful prefix.
"""
return functools.partial(synchronized, lock_file_prefix=lock_file_prefix)
def main(argv):
"""Create a dir for locks and pass it to command from arguments
If you run this:
python -m openstack.common.lockutils python setup.py testr <etc>
a temporary directory will be created for all your locks and passed to all
your tests in an environment variable. The temporary dir will be deleted
afterwards and the return value will be preserved.
"""
lock_dir = tempfile.mkdtemp()
os.environ["MONASCA_LOCK_PATH"] = lock_dir
try:
ret_val = subprocess.call(argv[1:])
finally:
shutil.rmtree(lock_dir, ignore_errors=True)
return ret_val
if __name__ == '__main__':
sys.exit(main(sys.argv))
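# A minimal sketch of the synchronized decorator; the lock name is
# illustrative. With external=False only threads in this process are
# serialized; pass external=True (and set lock_path) to serialize
# across processes.
@synchronized('metrics-flush')
def flush_metrics():
    pass   # only one thread executes this body at a time

flush_metrics()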

713
kiloeyes/openstack/common/log.py Executable file
View File

@ -0,0 +1,713 @@
# Copyright 2011 OpenStack Foundation.
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""OpenStack logging handler.
This module adds to logging functionality by adding the option to specify
a context object when calling the various log methods. If the context object
is not specified, default formatting is used. Additionally, an instance uuid
may be passed as part of the log message, which is intended to make it easier
for admins to find messages related to a specific instance.
It also allows setting of formatting information through conf.
"""
import inspect
import itertools
import logging
import logging.config
import logging.handlers
import os
import socket
import sys
import traceback
from oslo_config import cfg
import six
from six import moves
_PY26 = sys.version_info[0:2] == (2, 6)
from kiloeyes.openstack.common.gettextutils import _
from kiloeyes.openstack.common import importutils
from kiloeyes.openstack.common import jsonutils
from kiloeyes.openstack.common import local
# NOTE(flaper87): Pls, remove when graduating this module
# from the incubator.
from kiloeyes.openstack.common.strutils import mask_password # noqa
_DEFAULT_LOG_DATE_FORMAT = "%Y-%m-%d %H:%M:%S"
common_cli_opts = [
cfg.BoolOpt('debug',
short='d',
default=False,
help='Print debugging output (set logging level to '
'DEBUG instead of default WARNING level).'),
cfg.BoolOpt('verbose',
short='v',
default=False,
help='Print more verbose output (set logging level to '
'INFO instead of default WARNING level).'),
]
logging_cli_opts = [
cfg.StrOpt('log-config-append',
metavar='PATH',
deprecated_name='log-config',
help='The name of a logging configuration file. This file '
'is appended to any existing logging configuration '
'files. For details about logging configuration files, '
'see the Python logging module documentation.'),
cfg.StrOpt('log-format',
metavar='FORMAT',
help='DEPRECATED. '
'A logging.Formatter log message format string which may '
'use any of the available logging.LogRecord attributes. '
'This option is deprecated. Please use '
'logging_context_format_string and '
'logging_default_format_string instead.'),
cfg.StrOpt('log-date-format',
default=_DEFAULT_LOG_DATE_FORMAT,
metavar='DATE_FORMAT',
help='Format string for %%(asctime)s in log records. '
'Default: %(default)s .'),
cfg.StrOpt('log-file',
metavar='PATH',
deprecated_name='logfile',
help='(Optional) Name of log file to output to. '
'If no default is set, logging will go to stdout.'),
cfg.StrOpt('log-dir',
deprecated_name='logdir',
help='(Optional) The base directory used for relative '
'--log-file paths.'),
cfg.BoolOpt('use-syslog',
default=False,
help='Use syslog for logging. '
'Existing syslog format is DEPRECATED during I, '
'and will change in J to honor RFC5424.'),
cfg.BoolOpt('use-syslog-rfc-format',
# TODO(bogdando) remove or use True after existing
# syslog format deprecation in J
default=False,
help='(Optional) Enables or disables syslog rfc5424 format '
'for logging. If enabled, prefixes the MSG part of the '
'syslog message with APP-NAME (RFC5424). The '
'format without the APP-NAME is deprecated in I, '
'and will be removed in J.'),
cfg.StrOpt('syslog-log-facility',
default='LOG_USER',
help='Syslog facility to receive log lines.')
]
generic_log_opts = [
cfg.BoolOpt('use_stderr',
default=True,
help='Log output to standard error.')
]
DEFAULT_LOG_LEVELS = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN',
'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO',
'oslo.messaging=INFO', 'iso8601=WARN',
'requests.packages.urllib3.connectionpool=WARN',
'urllib3.connectionpool=WARN', 'websocket=WARN',
"keystonemiddleware=WARN", "routes.middleware=WARN",
"stevedore=WARN"]
log_opts = [
cfg.StrOpt('logging_context_format_string',
default='%(asctime)s.%(msecs)03d %(process)d %(levelname)s '
'%(name)s [%(request_id)s %(user_identity)s] '
'%(instance)s%(message)s',
help='Format string to use for log messages with context.'),
cfg.StrOpt('logging_default_format_string',
default='%(asctime)s.%(msecs)03d %(process)d %(levelname)s '
'%(name)s [-] %(instance)s%(message)s',
help='Format string to use for log messages without context.'),
cfg.StrOpt('logging_debug_format_suffix',
default='%(funcName)s %(pathname)s:%(lineno)d',
help='Data to append to log format when level is DEBUG.'),
cfg.StrOpt('logging_exception_prefix',
default='%(asctime)s.%(msecs)03d %(process)d TRACE %(name)s '
'%(instance)s',
help='Prefix each line of exception output with this format.'),
cfg.ListOpt('default_log_levels',
default=DEFAULT_LOG_LEVELS,
help='List of logger=LEVEL pairs.'),
cfg.BoolOpt('publish_errors',
default=False,
help='Enables or disables publication of error events.'),
cfg.BoolOpt('fatal_deprecations',
default=False,
help='Enables or disables fatal status of deprecations.'),
# NOTE(mikal): there are two options here because sometimes we are handed
# a full instance (and could include more information), and other times we
# are just handed a UUID for the instance.
cfg.StrOpt('instance_format',
default='[instance: %(uuid)s] ',
help='The format for an instance that is passed with the log '
'message.'),
cfg.StrOpt('instance_uuid_format',
default='[instance: %(uuid)s] ',
help='The format for an instance UUID that is passed with the '
'log message.'),
]
CONF = cfg.CONF
CONF.register_cli_opts(common_cli_opts)
CONF.register_cli_opts(logging_cli_opts)
CONF.register_opts(generic_log_opts)
CONF.register_opts(log_opts)
# our new audit level
# NOTE(jkoelker) Since we synthesized an audit level, make the logging
# module aware of it so it acts like other levels.
logging.AUDIT = logging.INFO + 1
logging.addLevelName(logging.AUDIT, 'AUDIT')
try:
NullHandler = logging.NullHandler
except AttributeError: # NOTE(jkoelker) NullHandler added in Python 2.7
class NullHandler(logging.Handler):
def handle(self, record):
pass
def emit(self, record):
pass
def createLock(self):
self.lock = None
def _dictify_context(context):
if context is None:
return None
if not isinstance(context, dict) and getattr(context, 'to_dict', None):
context = context.to_dict()
return context
def _get_binary_name():
return os.path.basename(inspect.stack()[-1][1])
def _get_log_file_path(binary=None):
logfile = CONF.log_file
logdir = CONF.log_dir
if logfile and not logdir:
return logfile
if logfile and logdir:
return os.path.join(logdir, logfile)
if logdir:
binary = binary or _get_binary_name()
return '%s.log' % (os.path.join(logdir, binary),)
return None
class BaseLoggerAdapter(logging.LoggerAdapter):
def audit(self, msg, *args, **kwargs):
self.log(logging.AUDIT, msg, *args, **kwargs)
def isEnabledFor(self, level):
if _PY26:
# This method was added in Python 2.7, so fall back to the wrapped
# logger's implementation to give Python 2.6 the same capability.
return self.logger.isEnabledFor(level)
else:
return super(BaseLoggerAdapter, self).isEnabledFor(level)
class LazyAdapter(BaseLoggerAdapter):
def __init__(self, name='unknown', version='unknown'):
self._logger = None
self.extra = {}
self.name = name
self.version = version
@property
def logger(self):
if not self._logger:
self._logger = getLogger(self.name, self.version)
if six.PY3:
# In Python 3, the code fails because the 'manager' attribute
# cannot be found when using a LoggerAdapter as the
# underlying logger. Work around this issue.
self._logger.manager = self._logger.logger.manager
return self._logger
class ContextAdapter(BaseLoggerAdapter):
warn = logging.LoggerAdapter.warning
def __init__(self, logger, project_name, version_string):
self.logger = logger
self.project = project_name
self.version = version_string
self._deprecated_messages_sent = dict()
@property
def handlers(self):
return self.logger.handlers
def deprecated(self, msg, *args, **kwargs):
"""Call this method when a deprecated feature is used.
If the system is configured for fatal deprecations then the message
is logged at the 'critical' level and :class:`DeprecatedConfig` will
be raised.
Otherwise, the message will be logged (once) at the 'warn' level.
:raises: :class:`DeprecatedConfig` if the system is configured for
fatal deprecations.
"""
stdmsg = _("Deprecated: %s") % msg
if CONF.fatal_deprecations:
self.critical(stdmsg, *args, **kwargs)
raise DeprecatedConfig(msg=stdmsg)
# Using a list because a tuple with dict can't be stored in a set.
sent_args = self._deprecated_messages_sent.setdefault(msg, list())
if args in sent_args:
# Already logged this message, so don't log it again.
return
sent_args.append(args)
self.warn(stdmsg, *args, **kwargs)
def process(self, msg, kwargs):
# NOTE(jecarey): If msg is not unicode, coerce it into unicode
# before it can get to the python logging and
# possibly cause string encoding trouble
if not isinstance(msg, six.text_type):
msg = six.text_type(msg)
if 'extra' not in kwargs:
kwargs['extra'] = {}
extra = kwargs['extra']
context = kwargs.pop('context', None)
if not context:
context = getattr(local.store, 'context', None)
if context:
extra.update(_dictify_context(context))
instance = kwargs.pop('instance', None)
instance_uuid = (extra.get('instance_uuid') or
kwargs.pop('instance_uuid', None))
instance_extra = ''
if instance:
instance_extra = CONF.instance_format % instance
elif instance_uuid:
instance_extra = (CONF.instance_uuid_format
% {'uuid': instance_uuid})
extra['instance'] = instance_extra
extra.setdefault('user_identity', kwargs.pop('user_identity', None))
extra['project'] = self.project
extra['version'] = self.version
extra['extra'] = extra.copy()
return msg, kwargs
class JSONFormatter(logging.Formatter):
def __init__(self, fmt=None, datefmt=None):
# NOTE(jkoelker) we ignore the fmt argument, but it's still there
# since logging.config.fileConfig passes it.
self.datefmt = datefmt
def formatException(self, ei, strip_newlines=True):
lines = traceback.format_exception(*ei)
if strip_newlines:
lines = [moves.filter(
lambda x: x,
line.rstrip().splitlines()) for line in lines]
lines = list(itertools.chain(*lines))
return lines
def format(self, record):
message = {'message': record.getMessage(),
'asctime': self.formatTime(record, self.datefmt),
'name': record.name,
'msg': record.msg,
'args': record.args,
'levelname': record.levelname,
'levelno': record.levelno,
'pathname': record.pathname,
'filename': record.filename,
'module': record.module,
'lineno': record.lineno,
'funcname': record.funcName,
'created': record.created,
'msecs': record.msecs,
'relative_created': record.relativeCreated,
'thread': record.thread,
'thread_name': record.threadName,
'process_name': record.processName,
'process': record.process,
'traceback': None}
if hasattr(record, 'extra'):
message['extra'] = record.extra
if record.exc_info:
message['traceback'] = self.formatException(record.exc_info)
return jsonutils.dumps(message)
def _create_logging_excepthook(product_name):
def logging_excepthook(exc_type, value, tb):
extra = {'exc_info': (exc_type, value, tb)}
getLogger(product_name).critical(
"".join(traceback.format_exception_only(exc_type, value)),
**extra)
return logging_excepthook
class LogConfigError(Exception):
message = _('Error loading logging config %(log_config)s: %(err_msg)s')
def __init__(self, log_config, err_msg):
self.log_config = log_config
self.err_msg = err_msg
def __str__(self):
return self.message % dict(log_config=self.log_config,
err_msg=self.err_msg)
def _load_log_config(log_config_append):
try:
logging.config.fileConfig(log_config_append,
disable_existing_loggers=False)
except (moves.configparser.Error, KeyError) as exc:
raise LogConfigError(log_config_append, six.text_type(exc))
def setup(product_name, version='unknown'):
"""Setup logging."""
if CONF.log_config_append:
_load_log_config(CONF.log_config_append)
else:
_setup_logging_from_conf(product_name, version)
sys.excepthook = _create_logging_excepthook(product_name)
def set_defaults(logging_context_format_string=None,
default_log_levels=None):
# Just in case the caller is not setting the
# default_log_level. This is insurance because
# we introduced the default_log_level parameter
# later in a backwards-incompatible change
if default_log_levels is not None:
cfg.set_defaults(
log_opts,
default_log_levels=default_log_levels)
if logging_context_format_string is not None:
cfg.set_defaults(
log_opts,
logging_context_format_string=logging_context_format_string)
def _find_facility_from_conf():
facility_names = logging.handlers.SysLogHandler.facility_names
facility = getattr(logging.handlers.SysLogHandler,
CONF.syslog_log_facility,
None)
if facility is None and CONF.syslog_log_facility in facility_names:
facility = facility_names.get(CONF.syslog_log_facility)
if facility is None:
valid_facilities = facility_names.keys()
consts = ['LOG_AUTH', 'LOG_AUTHPRIV', 'LOG_CRON', 'LOG_DAEMON',
'LOG_FTP', 'LOG_KERN', 'LOG_LPR', 'LOG_MAIL', 'LOG_NEWS',
'LOG_SYSLOG', 'LOG_USER', 'LOG_UUCP',
'LOG_LOCAL0', 'LOG_LOCAL1', 'LOG_LOCAL2', 'LOG_LOCAL3',
'LOG_LOCAL4', 'LOG_LOCAL5', 'LOG_LOCAL6', 'LOG_LOCAL7']
valid_facilities.extend(consts)
raise TypeError(_('syslog facility must be one of: %s') %
', '.join("'%s'" % fac
for fac in valid_facilities))
return facility
class RFCSysLogHandler(logging.handlers.SysLogHandler):
def __init__(self, *args, **kwargs):
self.binary_name = _get_binary_name()
# Do not use super() unless type(logging.handlers.SysLogHandler)
# is 'type' (Python 2.7).
# Use old style calls, if the type is 'classobj' (Python 2.6)
logging.handlers.SysLogHandler.__init__(self, *args, **kwargs)
def format(self, record):
# Do not use super() unless type(logging.handlers.SysLogHandler)
# is 'type' (Python 2.7).
# Use old style calls, if the type is 'classobj' (Python 2.6)
msg = logging.handlers.SysLogHandler.format(self, record)
msg = self.binary_name + ' ' + msg
return msg
def _setup_logging_from_conf(project, version):
log_root = getLogger(None).logger
for handler in log_root.handlers:
log_root.removeHandler(handler)
logpath = _get_log_file_path()
if logpath:
filelog = logging.handlers.WatchedFileHandler(logpath)
log_root.addHandler(filelog)
if CONF.use_stderr:
streamlog = ColorHandler()
log_root.addHandler(streamlog)
elif not logpath:
# pass sys.stdout as a positional argument
# python2.6 calls the argument strm, in 2.7 it's stream
streamlog = logging.StreamHandler(sys.stdout)
log_root.addHandler(streamlog)
if CONF.publish_errors:
try:
handler = importutils.import_object(
"kiloeyes.openstack.common.log_handler.PublishErrorsHandler",
logging.ERROR)
except ImportError:
handler = importutils.import_object(
"oslo.messaging.notify.log_handler.PublishErrorsHandler",
logging.ERROR)
log_root.addHandler(handler)
datefmt = CONF.log_date_format
for handler in log_root.handlers:
# NOTE(alaski): CONF.log_format overrides everything currently. This
# should be deprecated in favor of context aware formatting.
if CONF.log_format:
handler.setFormatter(logging.Formatter(fmt=CONF.log_format,
datefmt=datefmt))
log_root.info('Deprecated: log_format is now deprecated and will '
'be removed in the next release')
else:
handler.setFormatter(ContextFormatter(project=project,
version=version,
datefmt=datefmt))
if CONF.debug:
log_root.setLevel(logging.DEBUG)
elif CONF.verbose:
log_root.setLevel(logging.INFO)
else:
log_root.setLevel(logging.WARNING)
for pair in CONF.default_log_levels:
mod, _sep, level_name = pair.partition('=')
logger = logging.getLogger(mod)
# NOTE(AAzza) in python2.6 Logger.setLevel doesn't convert string name
# to integer code.
if sys.version_info < (2, 7):
level = logging.getLevelName(level_name)
logger.setLevel(level)
else:
logger.setLevel(level_name)
if CONF.use_syslog:
try:
facility = _find_facility_from_conf()
# TODO(bogdando) use the format provided by RFCSysLogHandler
# after existing syslog format deprecation in J
if CONF.use_syslog_rfc_format:
syslog = RFCSysLogHandler(facility=facility)
else:
syslog = logging.handlers.SysLogHandler(facility=facility)
log_root.addHandler(syslog)
except socket.error:
log_root.error('Unable to add syslog handler. Verify that '
'syslog is running.')
_loggers = {}
def getLogger(name='unknown', version='unknown'):
if name not in _loggers:
_loggers[name] = ContextAdapter(logging.getLogger(name),
name,
version)
return _loggers[name]
def getLazyLogger(name='unknown', version='unknown'):
"""Returns lazy logger.
Creates a pass-through logger that does not create the real logger
until it is really needed and delegates all calls to the real logger
once it is created.
"""
return LazyAdapter(name, version)
class WritableLogger(object):
"""A thin wrapper that responds to `write` and logs."""
def __init__(self, logger, level=logging.INFO):
self.logger = logger
self.level = level
def write(self, msg):
self.logger.log(self.level, msg.rstrip())
class ContextFormatter(logging.Formatter):
"""A context.RequestContext aware formatter configured through flags.
The flags used to set format strings are: logging_context_format_string
and logging_default_format_string. You can also specify
logging_debug_format_suffix to append extra formatting if the log level is
debug.
For information about what variables are available for the formatter see:
http://docs.python.org/library/logging.html#formatter
If available, uses the context value stored in TLS - local.store.context
"""
def __init__(self, *args, **kwargs):
"""Initialize ContextFormatter instance
Takes additional keyword arguments which can be used in the message
format string.
:keyword project: project name
:type project: string
:keyword version: project version
:type version: string
"""
self.project = kwargs.pop('project', 'unknown')
self.version = kwargs.pop('version', 'unknown')
logging.Formatter.__init__(self, *args, **kwargs)
def format(self, record):
"""Uses contextstring if request_id is set, otherwise default."""
# NOTE(jecarey): If msg is not unicode, coerce it into unicode
# before it can get to the python logging and
# possibly cause string encoding trouble
if not isinstance(record.msg, six.text_type):
record.msg = six.text_type(record.msg)
# store project info
record.project = self.project
record.version = self.version
# store request info
context = getattr(local.store, 'context', None)
if context:
d = _dictify_context(context)
for k, v in d.items():
setattr(record, k, v)
# NOTE(sdague): default the fancier formatting params
# to an empty string so we don't throw an exception if
# they get used
for key in ('instance', 'color', 'user_identity'):
if key not in record.__dict__:
record.__dict__[key] = ''
if record.__dict__.get('request_id'):
fmt = CONF.logging_context_format_string
else:
fmt = CONF.logging_default_format_string
if (record.levelno == logging.DEBUG and
CONF.logging_debug_format_suffix):
fmt += " " + CONF.logging_debug_format_suffix
if sys.version_info < (3, 2):
self._fmt = fmt
else:
self._style = logging.PercentStyle(fmt)
self._fmt = self._style._fmt
# Cache this on the record, Logger will respect our formatted copy
if record.exc_info:
record.exc_text = self.formatException(record.exc_info, record)
return logging.Formatter.format(self, record)
def formatException(self, exc_info, record=None):
"""Format exception output with CONF.logging_exception_prefix."""
if not record:
return logging.Formatter.formatException(self, exc_info)
stringbuffer = moves.StringIO()
traceback.print_exception(exc_info[0], exc_info[1], exc_info[2],
None, stringbuffer)
lines = stringbuffer.getvalue().split('\n')
stringbuffer.close()
if CONF.logging_exception_prefix.find('%(asctime)') != -1:
record.asctime = self.formatTime(record, self.datefmt)
formatted_lines = []
for line in lines:
pl = CONF.logging_exception_prefix % record.__dict__
fl = '%s%s' % (pl, line)
formatted_lines.append(fl)
return '\n'.join(formatted_lines)
class ColorHandler(logging.StreamHandler):
LEVEL_COLORS = {
logging.DEBUG: '\033[00;32m', # GREEN
logging.INFO: '\033[00;36m', # CYAN
logging.AUDIT: '\033[01;36m', # BOLD CYAN
logging.WARN: '\033[01;33m', # BOLD YELLOW
logging.ERROR: '\033[01;31m', # BOLD RED
logging.CRITICAL: '\033[01;31m', # BOLD RED
}
def format(self, record):
record.color = self.LEVEL_COLORS[record.levelno]
return logging.StreamHandler.format(self, record)
class DeprecatedConfig(Exception):
message = _("Fatal call to deprecated config: %(msg)s")
def __init__(self, msg):
super(Exception, self).__init__(self.message % dict(msg=msg))
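
A minimal usage sketch may help tie the pieces of this module together; it is not part of the commit, the service name is invented, and setup() assumes cfg.CONF has already been parsed:

from kiloeyes.openstack.common import log

log.setup('example-service')      # wire handlers/formatters from CONF
LOG = log.getLogger(__name__)     # cached ContextAdapter instance
LOG.audit('service initialized')  # AUDIT sits between INFO and WARNING
LOG.deprecated('old_option is going away')  # warned once per unique args

getLogger() caches adapters in _loggers, so repeated calls with the same name share one ContextAdapter.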

View File

@ -0,0 +1,147 @@
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# Copyright 2011 Justin Santa Barbara
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import sys
import time
from eventlet import event
from eventlet import greenthread
from kiloeyes.openstack.common.gettextutils import _LE, _LW
from kiloeyes.openstack.common import log as logging
LOG = logging.getLogger(__name__)
# NOTE(zyluo): This lambda function was declared to avoid mocking collisions
# with time.time() called in the standard logging module
# during unittests.
_ts = lambda: time.time()
class LoopingCallDone(Exception):
"""Exception to break out and stop a LoopingCallBase.
The poll-function passed to LoopingCallBase can raise this exception to
break out of the loop normally. This is somewhat analogous to
StopIteration.
An optional return-value can be included as the argument to the exception;
this return-value will be returned by LoopingCallBase.wait()
"""
def __init__(self, retvalue=True):
""":param retvalue: Value that LoopingCallBase.wait() should return."""
self.retvalue = retvalue
class LoopingCallBase(object):
def __init__(self, f=None, *args, **kw):
self.args = args
self.kw = kw
self.f = f
self._running = False
self.done = None
def stop(self):
self._running = False
def wait(self):
return self.done.wait()
class FixedIntervalLoopingCall(LoopingCallBase):
"""A fixed interval looping call."""
def start(self, interval, initial_delay=None):
self._running = True
done = event.Event()
def _inner():
if initial_delay:
greenthread.sleep(initial_delay)
try:
while self._running:
start = _ts()
self.f(*self.args, **self.kw)
end = _ts()
if not self._running:
break
delay = end - start - interval
if delay > 0:
LOG.warn(_LW('task %(func_name)s run outlasted '
'interval by %(delay).2f sec'),
{'func_name': repr(self.f), 'delay': delay})
greenthread.sleep(-delay if delay < 0 else 0)
except LoopingCallDone as e:
self.stop()
done.send(e.retvalue)
except Exception:
LOG.exception(_LE('in fixed duration looping call'))
done.send_exception(*sys.exc_info())
return
else:
done.send(True)
self.done = done
greenthread.spawn_n(_inner)
return self.done
class DynamicLoopingCall(LoopingCallBase):
"""A looping call which sleeps until the next known event.
The function called should return how long to sleep for before being
called again.
"""
def start(self, initial_delay=None, periodic_interval_max=None):
self._running = True
done = event.Event()
def _inner():
if initial_delay:
greenthread.sleep(initial_delay)
try:
while self._running:
idle = self.f(*self.args, **self.kw)
if not self._running:
break
if periodic_interval_max is not None:
idle = min(idle, periodic_interval_max)
LOG.debug('Dynamic looping call %(func_name)s sleeping '
'for %(idle).02f seconds',
{'func_name': repr(self.f), 'idle': idle})
greenthread.sleep(idle)
except LoopingCallDone as e:
self.stop()
done.send(e.retvalue)
except Exception:
LOG.exception(_LE('in dynamic looping call'))
done.send_exception(*sys.exc_info())
return
else:
done.send(True)
self.done = done
greenthread.spawn(_inner)
return self.done
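
As the LoopingCallDone docstring above describes, the polled function ends the loop by raising the exception; a short, hedged sketch (the counter and interval are arbitrary):

from kiloeyes.openstack.common import loopingcall

state = {'ticks': 0}

def _poll():
    state['ticks'] += 1
    if state['ticks'] >= 3:
        # Raising LoopingCallDone stops the loop; wait() returns retvalue.
        raise loopingcall.LoopingCallDone(retvalue=state['ticks'])

timer = loopingcall.FixedIntervalLoopingCall(_poll)
timer.start(interval=0.1)
print(timer.wait())  # -> 3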

View File

@ -0,0 +1,512 @@
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# Copyright 2011 Justin Santa Barbara
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Generic Node base class for all workers that run on hosts."""
import errno
import logging as std_logging
import os
import random
import signal
import sys
import time
try:
# Importing just the symbol here because the io module does not
# exist in Python 2.6.
from io import UnsupportedOperation # noqa
except ImportError:
# Python 2.6
UnsupportedOperation = None
import eventlet
from eventlet import event
from oslo_config import cfg
from kiloeyes.openstack.common import eventlet_backdoor
from kiloeyes.openstack.common.gettextutils import _LE, _LI, _LW
from kiloeyes.openstack.common import importutils
from kiloeyes.openstack.common import log as logging
from kiloeyes.openstack.common import systemd
from kiloeyes.openstack.common import threadgroup
rpc = importutils.try_import('kiloeyes.openstack.common.rpc')
CONF = cfg.CONF
LOG = logging.getLogger(__name__)
def _sighup_supported():
return hasattr(signal, 'SIGHUP')
def _is_daemon():
# The process group for a foreground process will match the
# process group of the controlling terminal. If those values do
# not match, or ioctl() fails on the stdout file handle, we assume
# the process is running in the background as a daemon.
# http://www.gnu.org/software/bash/manual/bashref.html#Job-Control-Basics
try:
is_daemon = os.getpgrp() != os.tcgetpgrp(sys.stdout.fileno())
except OSError as err:
if err.errno == errno.ENOTTY:
# Assume we are a daemon because there is no terminal.
is_daemon = True
else:
raise
except UnsupportedOperation:
# Could not get the fileno for stdout, so we must be a daemon.
is_daemon = True
return is_daemon
def _is_sighup_and_daemon(signo):
if not (_sighup_supported() and signo == signal.SIGHUP):
# Avoid checking if we are a daemon, because the signal isn't
# SIGHUP.
return False
return _is_daemon()
def _signo_to_signame(signo):
signals = {signal.SIGTERM: 'SIGTERM',
signal.SIGINT: 'SIGINT'}
if _sighup_supported():
signals[signal.SIGHUP] = 'SIGHUP'
return signals[signo]
def _set_signals_handler(handler):
signal.signal(signal.SIGTERM, handler)
signal.signal(signal.SIGINT, handler)
if _sighup_supported():
signal.signal(signal.SIGHUP, handler)
class Launcher(object):
"""Launch one or more services and wait for them to complete."""
def __init__(self):
"""Initialize the service launcher.
:returns: None
"""
self.services = Services()
self.backdoor_port = eventlet_backdoor.initialize_if_enabled()
def launch_service(self, service):
"""Load and start the given service.
:param service: The service you would like to start.
:returns: None
"""
service.backdoor_port = self.backdoor_port
self.services.add(service)
def stop(self):
"""Stop all services which are currently running.
:returns: None
"""
self.services.stop()
def wait(self):
"""Waits until all services have been stopped, and then returns.
:returns: None
"""
self.services.wait()
def restart(self):
"""Reload config files and restart service.
:returns: None
"""
cfg.CONF.reload_config_files()
self.services.restart()
class SignalExit(SystemExit):
def __init__(self, signo, exccode=1):
super(SignalExit, self).__init__(exccode)
self.signo = signo
class ServiceLauncher(Launcher):
def _handle_signal(self, signo, frame):
# Allow the process to be killed again and die from natural causes
_set_signals_handler(signal.SIG_DFL)
raise SignalExit(signo)
def handle_signal(self):
_set_signals_handler(self._handle_signal)
def _wait_for_exit_or_signal(self, ready_callback=None):
status = None
signo = 0
LOG.debug('Full set of CONF:')
CONF.log_opt_values(LOG, std_logging.DEBUG)
try:
if ready_callback:
ready_callback()
super(ServiceLauncher, self).wait()
except SignalExit as exc:
signame = _signo_to_signame(exc.signo)
LOG.info(_LI('Caught %s, exiting'), signame)
status = exc.code
signo = exc.signo
except SystemExit as exc:
status = exc.code
finally:
self.stop()
if rpc:
try:
rpc.cleanup()
except Exception:
# We're shutting down, so it doesn't matter at this point.
LOG.exception(_LE('Exception during rpc cleanup.'))
return status, signo
def wait(self, ready_callback=None):
systemd.notify_once()
while True:
self.handle_signal()
status, signo = self._wait_for_exit_or_signal(ready_callback)
if not _is_sighup_and_daemon(signo):
return status
self.restart()
class ServiceWrapper(object):
def __init__(self, service, workers):
self.service = service
self.workers = workers
self.children = set()
self.forktimes = []
class ProcessLauncher(object):
def __init__(self, wait_interval=0.01):
"""Constructor.
:param wait_interval: The interval to sleep for between checks
of child process exit.
"""
self.children = {}
self.sigcaught = None
self.running = True
self.wait_interval = wait_interval
rfd, self.writepipe = os.pipe()
self.readpipe = eventlet.greenio.GreenPipe(rfd, 'r')
self.handle_signal()
def handle_signal(self):
_set_signals_handler(self._handle_signal)
def _handle_signal(self, signo, frame):
self.sigcaught = signo
self.running = False
# Allow the process to be killed again and die from natural causes
_set_signals_handler(signal.SIG_DFL)
def _pipe_watcher(self):
# This will block until the write end is closed when the parent
# dies unexpectedly
self.readpipe.read()
LOG.info(_LI('Parent process has died unexpectedly, exiting'))
sys.exit(1)
def _child_process_handle_signal(self):
# Setup child signal handlers differently
def _sigterm(*args):
signal.signal(signal.SIGTERM, signal.SIG_DFL)
raise SignalExit(signal.SIGTERM)
def _sighup(*args):
signal.signal(signal.SIGHUP, signal.SIG_DFL)
raise SignalExit(signal.SIGHUP)
signal.signal(signal.SIGTERM, _sigterm)
if _sighup_supported():
signal.signal(signal.SIGHUP, _sighup)
# Block SIGINT and let the parent send us a SIGTERM
signal.signal(signal.SIGINT, signal.SIG_IGN)
def _child_wait_for_exit_or_signal(self, launcher):
status = 0
signo = 0
# NOTE(johannes): All exceptions are caught to ensure this
# doesn't fallback into the loop spawning children. It would
# be bad for a child to spawn more children.
try:
launcher.wait()
except SignalExit as exc:
signame = _signo_to_signame(exc.signo)
LOG.info(_LI('Child caught %s, exiting'), signame)
status = exc.code
signo = exc.signo
except SystemExit as exc:
status = exc.code
except BaseException:
LOG.exception(_LE('Unhandled exception'))
status = 2
finally:
launcher.stop()
return status, signo
def _child_process(self, service):
self._child_process_handle_signal()
# Reopen the eventlet hub to make sure we don't share an epoll
# fd with parent and/or siblings, which would be bad
eventlet.hubs.use_hub()
# Close write to ensure only parent has it open
os.close(self.writepipe)
# Create greenthread to watch for parent to close pipe
eventlet.spawn_n(self._pipe_watcher)
# Reseed random number generator
random.seed()
launcher = Launcher()
launcher.launch_service(service)
return launcher
def _start_child(self, wrap):
if len(wrap.forktimes) > wrap.workers:
# Limit ourselves to one process a second (over the period of
# number of workers * 1 second). This will allow workers to
# start up quickly but ensure we don't fork off children that
# die instantly and get respawned too quickly.
if time.time() - wrap.forktimes[0] < wrap.workers:
LOG.info(_LI('Forking too fast, sleeping'))
time.sleep(1)
wrap.forktimes.pop(0)
wrap.forktimes.append(time.time())
pid = os.fork()
if pid == 0:
launcher = self._child_process(wrap.service)
while True:
self._child_process_handle_signal()
status, signo = self._child_wait_for_exit_or_signal(launcher)
if not _is_sighup_and_daemon(signo):
break
launcher.restart()
os._exit(status)
LOG.info(_LI('Started child %d'), pid)
wrap.children.add(pid)
self.children[pid] = wrap
return pid
def launch_service(self, service, workers=1):
wrap = ServiceWrapper(service, workers)
LOG.info(_LI('Starting %d workers'), wrap.workers)
while self.running and len(wrap.children) < wrap.workers:
self._start_child(wrap)
def _wait_child(self):
try:
# Don't block if no child processes have exited
pid, status = os.waitpid(0, os.WNOHANG)
if not pid:
return None
except OSError as exc:
if exc.errno not in (errno.EINTR, errno.ECHILD):
raise
return None
if os.WIFSIGNALED(status):
sig = os.WTERMSIG(status)
LOG.info(_LI('Child %(pid)d killed by signal %(sig)d'),
dict(pid=pid, sig=sig))
else:
code = os.WEXITSTATUS(status)
LOG.info(_LI('Child %(pid)s exited with status %(code)d'),
dict(pid=pid, code=code))
if pid not in self.children:
LOG.warning(_LW('pid %d not in child list'), pid)
return None
wrap = self.children.pop(pid)
wrap.children.remove(pid)
return wrap
def _respawn_children(self):
while self.running:
wrap = self._wait_child()
if not wrap:
# Yield to other threads if no children have exited
# Sleep for a short time to avoid excessive CPU usage
# (see bug #1095346)
eventlet.greenthread.sleep(self.wait_interval)
continue
while self.running and len(wrap.children) < wrap.workers:
self._start_child(wrap)
def wait(self):
"""Loop waiting on children to die and respawning as necessary."""
systemd.notify_once()
LOG.debug('Full set of CONF:')
CONF.log_opt_values(LOG, std_logging.DEBUG)
try:
while True:
self.handle_signal()
self._respawn_children()
# No signal means that stop was called. Don't clean up here.
if not self.sigcaught:
return
signame = _signo_to_signame(self.sigcaught)
LOG.info(_LI('Caught %s, stopping children'), signame)
if not _is_sighup_and_daemon(self.sigcaught):
break
for pid in self.children:
os.kill(pid, signal.SIGHUP)
self.running = True
self.sigcaught = None
except eventlet.greenlet.GreenletExit:
LOG.info(_LI("Wait called after thread killed. Cleaning up."))
self.stop()
def stop(self):
"""Terminate child processes and wait on each."""
self.running = False
for pid in self.children:
try:
os.kill(pid, signal.SIGTERM)
except OSError as exc:
if exc.errno != errno.ESRCH:
raise
# Wait for children to die
if self.children:
LOG.info(_LI('Waiting on %d children to exit'), len(self.children))
while self.children:
self._wait_child()
class Service(object):
"""Service object for binaries running on hosts."""
def __init__(self, threads=1000):
self.tg = threadgroup.ThreadGroup(threads)
# signal that the service is done shutting itself down:
self._done = event.Event()
def reset(self):
# NOTE(Fengqian): docs for Event.reset() recommend against using it
self._done = event.Event()
def start(self):
pass
def stop(self):
self.tg.stop()
self.tg.wait()
# Signal that service cleanup is done:
if not self._done.ready():
self._done.send()
def wait(self):
self._done.wait()
class Services(object):
def __init__(self):
self.services = []
self.tg = threadgroup.ThreadGroup()
self.done = event.Event()
def add(self, service):
self.services.append(service)
self.tg.add_thread(self.run_service, service, self.done)
def stop(self):
# wait for graceful shutdown of services:
for service in self.services:
service.stop()
service.wait()
# Each service has performed cleanup, now signal that the run_service
# wrapper threads can now die:
if not self.done.ready():
self.done.send()
# reap threads:
self.tg.stop()
def wait(self):
self.tg.wait()
def restart(self):
self.stop()
self.done = event.Event()
for restart_service in self.services:
restart_service.reset()
self.tg.add_thread(self.run_service, restart_service, self.done)
@staticmethod
def run_service(service, done):
"""Service start wrapper.
:param service: service to run
:param done: event to wait on until a shutdown is triggered
:returns: None
"""
service.start()
done.wait()
def launch(service, workers=1):
if workers is None or workers == 1:
launcher = ServiceLauncher()
launcher.launch_service(service)
else:
launcher = ProcessLauncher()
launcher.launch_service(service, workers=workers)
return launcher
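
A hedged sketch of driving the launchers above; HeartbeatService and the worker count are illustrative, not part of the commit:

from kiloeyes.openstack.common import service

class HeartbeatService(service.Service):
    def start(self):
        # Periodic no-op on the inherited thread group, every 30 seconds.
        self.tg.add_timer(30, lambda: None)

# workers > 1 selects ProcessLauncher and forks children; workers == 1
# (or None) runs the service in-process under ServiceLauncher.
launcher = service.launch(HeartbeatService(), workers=2)
launcher.wait()  # blocks until signalled or all children exit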

View File

@ -0,0 +1,311 @@
# Copyright 2011 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
System-level utilities and helper functions.
"""
import math
import re
import sys
import unicodedata
import six
from kiloeyes.openstack.common.gettextutils import _
UNIT_PREFIX_EXPONENT = {
'k': 1,
'K': 1,
'Ki': 1,
'M': 2,
'Mi': 2,
'G': 3,
'Gi': 3,
'T': 4,
'Ti': 4,
}
UNIT_SYSTEM_INFO = {
'IEC': (1024, re.compile(r'(^[-+]?\d*\.?\d+)([KMGT]i?)?(b|bit|B)$')),
'SI': (1000, re.compile(r'(^[-+]?\d*\.?\d+)([kMGT])?(b|bit|B)$')),
}
TRUE_STRINGS = ('1', 't', 'true', 'on', 'y', 'yes')
FALSE_STRINGS = ('0', 'f', 'false', 'off', 'n', 'no')
SLUGIFY_STRIP_RE = re.compile(r"[^\w\s-]")
SLUGIFY_HYPHENATE_RE = re.compile(r"[-\s]+")
# NOTE(flaper87): The following globals are used by `mask_password`
_SANITIZE_KEYS = ['adminPass', 'admin_pass', 'password', 'admin_password']
# NOTE(ldbragst): Let's build a list of regex objects using the list of
# _SANITIZE_KEYS we already have. This way, we only have to add the new key
# to the list of _SANITIZE_KEYS and we can generate regular expressions
# for XML and JSON automatically.
_SANITIZE_PATTERNS_2 = []
_SANITIZE_PATTERNS_1 = []
# NOTE(amrith): Some regular expressions have only one parameter, some
# have two parameters. Use different lists of patterns here.
_FORMAT_PATTERNS_1 = [r'(%(key)s\s*[=]\s*)[^\s^\'^\"]+']
_FORMAT_PATTERNS_2 = [r'(%(key)s\s*[=]\s*[\"\']).*?([\"\'])',
r'(%(key)s\s+[\"\']).*?([\"\'])',
r'([-]{2}%(key)s\s+)[^\'^\"^=^\s]+([\s]*)',
r'(<%(key)s>).*?(</%(key)s>)',
r'([\"\']%(key)s[\"\']\s*:\s*[\"\']).*?([\"\'])',
r'([\'"].*?%(key)s[\'"]\s*:\s*u?[\'"]).*?([\'"])',
r'([\'"].*?%(key)s[\'"]\s*,\s*\'--?[A-z]+\'\s*,\s*u?'
'[\'"]).*?([\'"])',
r'(%(key)s\s*--?[A-z]+\s*)\S+(\s*)']
for key in _SANITIZE_KEYS:
for pattern in _FORMAT_PATTERNS_2:
reg_ex = re.compile(pattern % {'key': key}, re.DOTALL)
_SANITIZE_PATTERNS_2.append(reg_ex)
for pattern in _FORMAT_PATTERNS_1:
reg_ex = re.compile(pattern % {'key': key}, re.DOTALL)
_SANITIZE_PATTERNS_1.append(reg_ex)
def int_from_bool_as_string(subject):
"""Interpret a string as a boolean and return either 1 or 0.
Any string value in TRUE_STRINGS ('1', 't', 'true', 'on', 'y', 'yes'),
matched case-insensitively, is interpreted as a boolean True.
Useful for JSON-decoded stuff and config file parsing
"""
return 1 if bool_from_string(subject) else 0
def bool_from_string(subject, strict=False, default=False):
"""Interpret a string as a boolean.
A case-insensitive match is performed such that strings matching 't',
'true', 'on', 'y', 'yes', or '1' are considered True and, when
`strict=False`, anything else returns the value specified by 'default'.
Useful for JSON-decoded stuff and config file parsing.
If `strict=True`, unrecognized values, including None, will raise a
ValueError which is useful when parsing values passed in from an API call.
Strings yielding False are 'f', 'false', 'off', 'n', 'no', or '0'.
"""
if not isinstance(subject, six.string_types):
subject = six.text_type(subject)
lowered = subject.strip().lower()
if lowered in TRUE_STRINGS:
return True
elif lowered in FALSE_STRINGS:
return False
elif strict:
acceptable = ', '.join(
"'%s'" % s for s in sorted(TRUE_STRINGS + FALSE_STRINGS))
msg = _("Unrecognized value '%(val)s', acceptable values are:"
" %(acceptable)s") % {'val': subject,
'acceptable': acceptable}
raise ValueError(msg)
else:
return default
def safe_decode(text, incoming=None, errors='strict'):
"""Decodes incoming text/bytes string using `incoming` if they're not
already unicode.
:param incoming: Text's current encoding
:param errors: Errors handling policy. See here for valid
values http://docs.python.org/2/library/codecs.html
:returns: text or a unicode `incoming` encoded
representation of it.
:raises TypeError: If text is not an instance of str
"""
if not isinstance(text, (six.string_types, six.binary_type)):
raise TypeError("%s can't be decoded" % type(text))
if isinstance(text, six.text_type):
return text
if not incoming:
incoming = (sys.stdin.encoding or
sys.getdefaultencoding())
try:
return text.decode(incoming, errors)
except UnicodeDecodeError:
# Note(flaper87) If we get here, it means that
# sys.stdin.encoding / sys.getdefaultencoding
# didn't return a suitable encoding to decode
# text. This happens mostly when global LANG
# var is not set correctly and there's no
# default encoding. In this case, most likely
# python will use ASCII or ANSI encoders as
# default encodings but they won't be capable
# of decoding non-ASCII characters.
#
# Also, UTF-8 is being used since it's an ASCII
# extension.
return text.decode('utf-8', errors)
def safe_encode(text, incoming=None,
encoding='utf-8', errors='strict'):
"""Encodes incoming text/bytes string using `encoding`.
If incoming is not specified, text is expected to be encoded with
current python's default encoding. (`sys.getdefaultencoding`)
:param incoming: Text's current encoding
:param encoding: Expected encoding for text (Default UTF-8)
:param errors: Errors handling policy. See here for valid
values http://docs.python.org/2/library/codecs.html
:returns: text or a bytestring `encoding` encoded
representation of it.
:raises TypeError: If text is not an instance of str
"""
if not isinstance(text, (six.string_types, six.binary_type)):
raise TypeError("%s can't be encoded" % type(text))
if not incoming:
incoming = (sys.stdin.encoding or
sys.getdefaultencoding())
if isinstance(text, six.text_type):
return text.encode(encoding, errors)
elif text and encoding != incoming:
# Decode text before encoding it with `encoding`
text = safe_decode(text, incoming, errors)
return text.encode(encoding, errors)
else:
return text
def string_to_bytes(text, unit_system='IEC', return_int=False):
"""Converts a string into an float representation of bytes.
The units supported for IEC ::
Kb(it), Kib(it), Mb(it), Mib(it), Gb(it), Gib(it), Tb(it), Tib(it)
KB, KiB, MB, MiB, GB, GiB, TB, TiB
The units supported for SI ::
kb(it), Mb(it), Gb(it), Tb(it)
kB, MB, GB, TB
Note that the SI unit system does not support capital letter 'K'
:param text: String input for bytes size conversion.
:param unit_system: Unit system for byte size conversion.
:param return_int: If True, returns integer representation of text
in bytes. (default: decimal)
:returns: Numerical representation of text in bytes.
:raises ValueError: If text has an invalid value.
"""
try:
base, reg_ex = UNIT_SYSTEM_INFO[unit_system]
except KeyError:
msg = _('Invalid unit system: "%s"') % unit_system
raise ValueError(msg)
match = reg_ex.match(text)
if match:
magnitude = float(match.group(1))
unit_prefix = match.group(2)
if match.group(3) in ['b', 'bit']:
magnitude /= 8
else:
msg = _('Invalid string format: %s') % text
raise ValueError(msg)
if not unit_prefix:
res = magnitude
else:
res = magnitude * pow(base, UNIT_PREFIX_EXPONENT[unit_prefix])
if return_int:
return int(math.ceil(res))
return res
def to_slug(value, incoming=None, errors="strict"):
"""Normalize string.
Convert to lowercase, remove non-word characters, and convert spaces
to hyphens.
Inspired by Django's `slugify` filter.
:param value: Text to slugify
:param incoming: Text's current encoding
:param errors: Errors handling policy. See here for valid
values http://docs.python.org/2/library/codecs.html
:returns: slugified unicode representation of `value`
:raises TypeError: If text is not an instance of str
"""
value = safe_decode(value, incoming, errors)
# NOTE(aababilov): no need to use safe_(encode|decode) here:
# encodings are always "ascii", error handling is always "ignore"
# and types are always known (first: unicode; second: str)
value = unicodedata.normalize("NFKD", value).encode(
"ascii", "ignore").decode("ascii")
value = SLUGIFY_STRIP_RE.sub("", value).strip().lower()
return SLUGIFY_HYPHENATE_RE.sub("-", value)
def mask_password(message, secret="***"):
"""Replace password with 'secret' in message.
:param message: The string which includes security information.
:param secret: value with which to replace passwords.
:returns: The unicode value of message with the password fields masked.
For example:
>>> mask_password("'adminPass' : 'aaaaa'")
"'adminPass' : '***'"
>>> mask_password("'admin_pass' : 'aaaaa'")
"'admin_pass' : '***'"
>>> mask_password('"password" : "aaaaa"')
'"password" : "***"'
>>> mask_password("'original_password' : 'aaaaa'")
"'original_password' : '***'"
>>> mask_password("u'original_password' : u'aaaaa'")
"u'original_password' : u'***'"
"""
message = six.text_type(message)
# NOTE(ldbragst): Check to see if anything in message contains any key
# specified in _SANITIZE_KEYS, if not then just return the message since
# we don't have to mask any passwords.
if not any(key in message for key in _SANITIZE_KEYS):
return message
substitute = r'\g<1>' + secret + r'\g<2>'
for pattern in _SANITIZE_PATTERNS_2:
message = re.sub(pattern, substitute, message)
substitute = r'\g<1>' + secret
for pattern in _SANITIZE_PATTERNS_1:
message = re.sub(pattern, substitute, message)
return message
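
A few illustrative calls against the helpers above (input values are arbitrary):

from kiloeyes.openstack.common import strutils

strutils.bool_from_string('YES')                   # True
strutils.bool_from_string('maybe', default=True)   # True (non-strict)
strutils.string_to_bytes('1KiB')                   # 1024.0 (IEC, base 1024)
strutils.string_to_bytes('1kb', unit_system='SI')  # 125.0 (kilobit -> bytes)
strutils.mask_password("login --password s3cret")  # 'login --password ***'
strutils.to_slug(u'Hello, World!')                 # u'hello-world'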

View File

@ -0,0 +1,106 @@
# Copyright 2012-2014 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Helper module for systemd service readiness notification.
"""
import os
import socket
import sys
from kiloeyes.openstack.common import log as logging
LOG = logging.getLogger(__name__)
def _abstractify(socket_name):
if socket_name.startswith('@'):
# abstract namespace socket
socket_name = '\0%s' % socket_name[1:]
return socket_name
def _sd_notify(unset_env, msg):
notify_socket = os.getenv('NOTIFY_SOCKET')
if notify_socket:
sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
try:
sock.connect(_abstractify(notify_socket))
sock.sendall(msg)
if unset_env:
del os.environ['NOTIFY_SOCKET']
except EnvironmentError:
LOG.debug("Systemd notification failed", exc_info=True)
finally:
sock.close()
def notify():
"""Send notification to Systemd that service is ready.
For details see
http://www.freedesktop.org/software/systemd/man/sd_notify.html
"""
_sd_notify(False, 'READY=1')
def notify_once():
"""Send notification once to Systemd that service is ready.
Systemd sets NOTIFY_SOCKET environment variable with the name of the
socket listening for notifications from services.
This method removes the NOTIFY_SOCKET environment variable to ensure
notification is sent only once.
"""
_sd_notify(True, 'READY=1')
def onready(notify_socket, timeout):
"""Wait for systemd style notification on the socket.
:param notify_socket: local socket address
:type notify_socket: string
:param timeout: socket timeout
:type timeout: float
:returns: 0 service ready
1 service not ready
2 timeout occurred
"""
sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
sock.settimeout(timeout)
sock.bind(_abstractify(notify_socket))
try:
msg = sock.recv(512)
except socket.timeout:
return 2
finally:
sock.close()
if 'READY=1' in msg:
return 0
else:
return 1
if __name__ == '__main__':
# simple CLI for testing
if len(sys.argv) == 1:
notify()
elif len(sys.argv) >= 2:
timeout = float(sys.argv[1])
notify_socket = os.getenv('NOTIFY_SOCKET')
if notify_socket:
retval = onready(notify_socket, timeout)
sys.exit(retval)
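
A hedged way to exercise the wait side above without a real systemd; the abstract socket name is invented:

from kiloeyes.openstack.common import systemd

# onready() binds the socket and blocks up to the timeout; with no
# service calling notify() it returns 2 (timed out) per the docstring.
status = systemd.onready('@example_notify', timeout=0.5)
print(status)  # -> 2 unless READY=1 arrived in time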

View File

@ -0,0 +1,147 @@
# Copyright 2012 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import threading
import eventlet
from eventlet import greenpool
from kiloeyes.openstack.common import log as logging
from kiloeyes.openstack.common import loopingcall
LOG = logging.getLogger(__name__)
def _thread_done(gt, *args, **kwargs):
"""Callback function to be passed to GreenThread.link() when we spawn()
Calls the :class:`ThreadGroup` to notify it that the thread has finished.
"""
kwargs['group'].thread_done(kwargs['thread'])
class Thread(object):
"""Wrapper around a greenthread, that holds a reference to the
:class:`ThreadGroup`. The Thread will notify the :class:`ThreadGroup`
when it has finished so that it can be removed from the threads list.
"""
def __init__(self, thread, group):
self.thread = thread
self.thread.link(_thread_done, group=group, thread=self)
def stop(self):
self.thread.kill()
def wait(self):
return self.thread.wait()
def link(self, func, *args, **kwargs):
self.thread.link(func, *args, **kwargs)
class ThreadGroup(object):
"""The point of the ThreadGroup class is to:
* keep track of timers and greenthreads (making it easier to stop them
when need be).
* provide an easy API to add timers.
"""
def __init__(self, thread_pool_size=10):
self.pool = greenpool.GreenPool(thread_pool_size)
self.threads = []
self.timers = []
def add_dynamic_timer(self, callback, initial_delay=None,
periodic_interval_max=None, *args, **kwargs):
timer = loopingcall.DynamicLoopingCall(callback, *args, **kwargs)
timer.start(initial_delay=initial_delay,
periodic_interval_max=periodic_interval_max)
self.timers.append(timer)
def add_timer(self, interval, callback, initial_delay=None,
*args, **kwargs):
pulse = loopingcall.FixedIntervalLoopingCall(callback, *args, **kwargs)
pulse.start(interval=interval,
initial_delay=initial_delay)
self.timers.append(pulse)
def add_thread(self, callback, *args, **kwargs):
gt = self.pool.spawn(callback, *args, **kwargs)
th = Thread(gt, self)
self.threads.append(th)
return th
def thread_done(self, thread):
self.threads.remove(thread)
def _stop_threads(self):
current = threading.current_thread()
# Iterate over a copy of self.threads so thread_done doesn't
# modify the list while we're iterating
for x in self.threads[:]:
if x is current:
# don't kill the current thread.
continue
try:
x.stop()
except Exception as ex:
LOG.exception(ex)
def stop_timers(self):
for x in self.timers:
try:
x.stop()
except Exception as ex:
LOG.exception(ex)
self.timers = []
def stop(self, graceful=False):
"""stop function has the option of graceful=True/False.
* In case of graceful=True, wait for all threads to be finished.
Never kill threads.
* In case of graceful=False, kill threads immediately.
"""
self.stop_timers()
if graceful:
# In case of graceful=True, wait for all threads to be
# finished, never kill threads
self.wait()
else:
# In case of graceful=False(Default), kill threads
# immediately
self._stop_threads()
def wait(self):
for x in self.timers:
try:
x.wait()
except eventlet.greenlet.GreenletExit:
pass
except Exception as ex:
LOG.exception(ex)
current = threading.current_thread()
# Iterate over a copy of self.threads so thread_done doesn't
# modify the list while we're iterating
for x in self.threads[:]:
if x is current:
continue
try:
x.wait()
except eventlet.greenlet.GreenletExit:
pass
except Exception as ex:
LOG.exception(ex)
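
An illustrative exercise of the ThreadGroup API above (the interval and callables are arbitrary):

from kiloeyes.openstack.common import threadgroup

tg = threadgroup.ThreadGroup(thread_pool_size=5)
tg.add_timer(10, lambda: None)           # fixed-interval timer, 10s period
th = tg.add_thread(lambda x: x * 2, 21)  # one-shot greenthread
print(th.wait())                         # -> 42
tg.stop()                                # stop timers, kill leftover threads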

View File

@ -0,0 +1,210 @@
# Copyright 2011 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Time related utilities and helper functions.
"""
import calendar
import datetime
import time
import iso8601
import six
# ISO 8601 extended time format with microseconds
_ISO8601_TIME_FORMAT_SUBSECOND = '%Y-%m-%dT%H:%M:%S.%f'
_ISO8601_TIME_FORMAT = '%Y-%m-%dT%H:%M:%S'
PERFECT_TIME_FORMAT = _ISO8601_TIME_FORMAT_SUBSECOND
def isotime(at=None, subsecond=False):
"""Stringify time in ISO 8601 format."""
if not at:
at = utcnow()
st = at.strftime(_ISO8601_TIME_FORMAT
if not subsecond
else _ISO8601_TIME_FORMAT_SUBSECOND)
tz = at.tzinfo.tzname(None) if at.tzinfo else 'UTC'
st += ('Z' if tz == 'UTC' else tz)
return st
def parse_isotime(timestr):
"""Parse time from ISO 8601 format."""
try:
return iso8601.parse_date(timestr)
except iso8601.ParseError as e:
raise ValueError(six.text_type(e))
except TypeError as e:
raise ValueError(six.text_type(e))
def strtime(at=None, fmt=PERFECT_TIME_FORMAT):
"""Returns formatted utcnow."""
if not at:
at = utcnow()
return at.strftime(fmt)
def parse_strtime(timestr, fmt=PERFECT_TIME_FORMAT):
"""Turn a formatted time back into a datetime."""
return datetime.datetime.strptime(timestr, fmt)
def normalize_time(timestamp):
"""Normalize time in arbitrary timezone to UTC naive object."""
offset = timestamp.utcoffset()
if offset is None:
return timestamp
return timestamp.replace(tzinfo=None) - offset
def is_older_than(before, seconds):
"""Return True if before is older than seconds."""
if isinstance(before, six.string_types):
before = parse_strtime(before).replace(tzinfo=None)
else:
before = before.replace(tzinfo=None)
return utcnow() - before > datetime.timedelta(seconds=seconds)
def is_newer_than(after, seconds):
"""Return True if after is newer than seconds."""
if isinstance(after, six.string_types):
after = parse_strtime(after).replace(tzinfo=None)
else:
after = after.replace(tzinfo=None)
return after - utcnow() > datetime.timedelta(seconds=seconds)
def utcnow_ts():
"""Timestamp version of our utcnow function."""
if utcnow.override_time is None:
# NOTE(kgriffs): This is several times faster
# than going through calendar.timegm(...)
return int(time.time())
return calendar.timegm(utcnow().timetuple())
def utcnow():
"""Overridable version of utils.utcnow."""
if utcnow.override_time:
try:
return utcnow.override_time.pop(0)
except AttributeError:
return utcnow.override_time
return datetime.datetime.utcnow()
def iso8601_from_timestamp(timestamp):
"""Returns an iso8601 formatted date from timestamp."""
return isotime(datetime.datetime.utcfromtimestamp(timestamp))
utcnow.override_time = None
def set_time_override(override_time=None):
"""Overrides utils.utcnow.
Make it return a constant time or a list thereof, one at a time.
:param override_time: datetime instance or list thereof. If not
given, defaults to the current UTC time.
"""
utcnow.override_time = override_time or datetime.datetime.utcnow()
def advance_time_delta(timedelta):
"""Advance overridden time using a datetime.timedelta."""
assert utcnow.override_time is not None
try:
for dt in utcnow.override_time:
dt += timedelta
except TypeError:
utcnow.override_time += timedelta
def advance_time_seconds(seconds):
"""Advance overridden time by seconds."""
advance_time_delta(datetime.timedelta(0, seconds))
def clear_time_override():
"""Remove the overridden time."""
utcnow.override_time = None
def marshall_now(now=None):
"""Make an rpc-safe datetime with microseconds.
Note: tzinfo is stripped, but not required for relative times.
"""
if not now:
now = utcnow()
return dict(day=now.day, month=now.month, year=now.year, hour=now.hour,
minute=now.minute, second=now.second,
microsecond=now.microsecond)
def unmarshall_time(tyme):
"""Unmarshall a datetime dict."""
return datetime.datetime(day=tyme['day'],
month=tyme['month'],
year=tyme['year'],
hour=tyme['hour'],
minute=tyme['minute'],
second=tyme['second'],
microsecond=tyme['microsecond'])
def delta_seconds(before, after):
"""Return the difference between two timing objects.
Compute the difference in seconds between two date, time, or
datetime objects (as a float, to microsecond resolution).
"""
delta = after - before
return total_seconds(delta)
def total_seconds(delta):
"""Return the total seconds of datetime.timedelta object.
Compute total seconds of datetime.timedelta, datetime.timedelta
doesn't have method total_seconds in Python2.6, calculate it manually.
"""
try:
return delta.total_seconds()
except AttributeError:
return ((delta.days * 24 * 3600) + delta.seconds +
float(delta.microseconds) / (10 ** 6))
def is_soon(dt, window):
"""Determines if time is going to happen in the next window seconds.
:param dt: the time
:param window: minimum seconds to remain to consider the time not soon
:return: True if expiration is within the given duration
"""
soon = (utcnow() + datetime.timedelta(seconds=window))
return normalize_time(dt) <= soon
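
The override hooks above are intended for tests; a hedged sketch with an arbitrary fixed time:

import datetime

from kiloeyes.openstack.common import timeutils

fixed = datetime.datetime(2015, 8, 6, 21, 0, 0)
timeutils.set_time_override(fixed)
assert timeutils.utcnow() == fixed
timeutils.advance_time_seconds(90)
assert timeutils.utcnow() == fixed + datetime.timedelta(seconds=90)
timeutils.clear_time_override()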

39
kiloeyes/service.py Executable file
View File

@ -0,0 +1,39 @@
#!/usr/bin/env python
#
# Copyright 2013 IBM Corp
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import sys
from oslo_config import cfg
from kiloeyes.openstack.common import gettextutils
from kiloeyes.openstack.common import log
cfg.CONF.import_opt('default_log_levels', 'kiloeyes.openstack.common.log')
LOG = log.getLogger(__name__)
def prepare_service(argv=None):
gettextutils.install('kiloeyes')
gettextutils.enable_lazy()
log_levels = (cfg.CONF.default_log_levels)
cfg.set_defaults(log.log_opts, default_log_levels=log_levels)
if argv is None:
argv = sys.argv
cfg.CONF(argv[1:], project='kiloeyes')
log.setup('kiloeyes')
LOG.info('Service has started!')
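
A hedged sketch of how a kiloeyes binary would bootstrap through prepare_service(); the program name and config path are illustrative only:

from kiloeyes import service

# One call parses CONF, installs translations and configures logging;
# a real console script would simply pass its own sys.argv.
service.prepare_service(['kiloeyes-api',
                         '--config-file', '/etc/kiloeyes/kiloeyes.conf'])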

85
kiloeyes/tests/__init__.py Executable file
View File

@ -0,0 +1,85 @@
# Copyright 2013 IBM Corp
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import logging
import os
import tempfile
import fixtures
import testtools
_TRUE_VALUES = ('True', 'true', '1', 'yes')
_LOG_FORMAT = "%(levelname)8s [%(name)s] %(message)s"
class BaseTestCase(testtools.TestCase):
def setUp(self):
super(BaseTestCase, self).setUp()
self._set_timeout()
self._fake_output()
self._fake_logs()
self.useFixture(fixtures.NestedTempfile())
self.useFixture(fixtures.TempHomeDir())
self.tempdirs = []
def _set_timeout(self):
test_timeout = os.environ.get('OS_TEST_TIMEOUT', 0)
try:
test_timeout = int(test_timeout)
except ValueError:
# If timeout value is invalid do not set a timeout.
test_timeout = 0
if test_timeout > 0:
self.useFixture(fixtures.Timeout(test_timeout, gentle=True))
def _fake_output(self):
if os.environ.get('OS_STDOUT_CAPTURE') in _TRUE_VALUES:
stdout = self.useFixture(fixtures.StringStream('stdout')).stream
self.useFixture(fixtures.MonkeyPatch('sys.stdout', stdout))
if os.environ.get('OS_STDERR_CAPTURE') in _TRUE_VALUES:
stderr = self.useFixture(fixtures.StringStream('stderr')).stream
self.useFixture(fixtures.MonkeyPatch('sys.stderr', stderr))
def _fake_logs(self):
if os.environ.get('OS_DEBUG') in _TRUE_VALUES:
level = logging.DEBUG
else:
level = logging.INFO
capture_logs = os.environ.get('OS_LOG_CAPTURE') in _TRUE_VALUES
if capture_logs:
self.useFixture(
fixtures.FakeLogger(
format=_LOG_FORMAT,
level=level,
nuke_handlers=capture_logs,
)
)
else:
logging.basicConfig(format=_LOG_FORMAT, level=level)
def create_tempfiles(self, files, ext='.conf'):
tempfiles = []
for (basename, contents) in files:
if not os.path.isabs(basename):
(fd, path) = tempfile.mkstemp(prefix=basename, suffix=ext)
else:
path = basename + ext
fd = os.open(path, os.O_CREAT | os.O_WRONLY)
tempfiles.append(path)
try:
os.write(fd, contents)
finally:
os.close(fd)
return tempfiles
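
A short sketch of a test built on BaseTestCase using the create_tempfiles() helper (the test name and config contents are illustrative):

from kiloeyes import tests

class TestWithConfigFile(tests.BaseTestCase):
    def test_reads_config(self):
        # create_tempfiles() writes each (basename, contents) pair to a
        # file with a .conf suffix and returns the paths; the
        # NestedTempfile fixture installed in setUp() cleans them up.
        paths = self.create_tempfiles([('kiloeyes', '[DEFAULT]\ndebug=True\n')])
        self.assertTrue(paths[0].endswith('.conf'))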

View File

@ -0,0 +1,73 @@
# Copyright 2015 Carnegie Mellon University
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from kiloeyes.common import alarm_expr_calculator as calculator
from kiloeyes.openstack.common import log
from kiloeyes import tests
import random
import time
LOG = log.getLogger(__name__)
class TestAlarmExprCalculator(tests.BaseTestCase):
def setUp(self):
super(TestAlarmExprCalculator, self).setUp()
def test_calc_value(self):
self.assertEqual(0, calculator.calc_value('MAX', [0]))
data = []
self.assertEqual(None, calculator.calc_value('MAX', data))
random.seed(time.time())
for i in range(0, 30, 1):
data.append(random.uniform(0, 1000))
self.assertEqual(max(data), calculator.calc_value('MAX', data))
self.assertEqual(sum(data), calculator.calc_value('SUM', data))
self.assertEqual(len(data), calculator.calc_value('COUNT', data))
self.assertEqual(min(data), calculator.calc_value('MIN', data))
self.assertEqual(sum(data) / len(data),
calculator.calc_value('AVG', data))
def test_compare_thresh(self):
values = [501, 500, 4999]
self.assertEqual('ALARM',
calculator.compare_thresh(values, 'GTE', 500))
self.assertEqual('OK', calculator.compare_thresh(values, 'GT', 500))
values = [501, 500, 4999, None]
self.assertEqual('UNDETERMINED',
calculator.compare_thresh(values, 'LTE', 5000))
values = [501, 500, 4999, None]
self.assertEqual('OK', calculator.compare_thresh(values, 'LT', 4999))
def test_calc_logic(self):
op = 'AND'
subs = ['ALARM', 'OK', 'ALARM', 'UNDETERMINED']
self.assertEqual('OK', calculator.calc_logic(op, subs))
subs = ['ALARM', 'UNDETERMINED', 'ALARM', 'UNDETERMINED']
self.assertEqual('UNDETERMINED', calculator.calc_logic(op, subs))
subs = ['ALARM', 'ALARM', 'ALARM']
self.assertEqual('ALARM', calculator.calc_logic(op, subs))
op = 'OR'
subs = ['ALARM', 'OK', 'ALARM', 'UNDETERMINED']
self.assertEqual('ALARM', calculator.calc_logic(op, subs))
subs = ['UNDETERMINED', 'OK', 'UNDETERMINED']
self.assertEqual('UNDETERMINED', calculator.calc_logic(op, subs))
subs = ['OK', 'OK', 'OK']
self.assertEqual('OK', calculator.calc_logic(op, subs))
op = 'NOT'
self.assertEqual('UNDETERMINED', calculator.calc_logic(op, subs))
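
test_calc_logic above pins down a three-valued logic over the states ALARM, OK, and UNDETERMINED. A compact restatement of the semantics the assertions imply (a sketch, not the module's actual implementation):

def calc_logic_sketch(op, states):
    if op == 'AND':
        if 'OK' in states:
            return 'OK'    # one OK sub-expression vetoes the alarm
        return 'UNDETERMINED' if 'UNDETERMINED' in states else 'ALARM'
    if op == 'OR':
        if 'ALARM' in states:
            return 'ALARM'    # one alarming sub-expression suffices
        return 'UNDETERMINED' if 'UNDETERMINED' in states else 'OK'
    return 'UNDETERMINED'    # unsupported operators such as NOT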

View File

@ -0,0 +1,201 @@
# -*- coding: utf-8 -*-
# Copyright 2015 Carnegie Mellon University
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from kiloeyes.common import alarm_expr_parser
from kiloeyes.openstack.common import log
from kiloeyes import tests
LOG = log.getLogger(__name__)
class TestAlarmExprParser(tests.BaseTestCase):
def __init__(self, *args, **kwargs):
super(TestAlarmExprParser, self).__init__(*args, **kwargs)
self.expr0 = (
"max(-_.千幸福的笑脸{घोड़ा=馬, "
"dn2=dv2,千幸福的笑脸घ=千幸福的笑脸घ}) gte 100 "
"times 3 And "
"(min(ເຮືອນ{dn3=dv3,家=дом}) < 10 or sum(biz{dn5=dv58}) >9 9and "
"count(fizzle) lt 0 or count(baz) > 1)".decode('utf8'))
self.expr1 = ("max(foo{hostname=mini-mon,千=千}, 120)"
" > 100 and (max(bar)>100 "
" or max(biz)>100)".decode('utf8'))
self.expr2 = "max(foo)>=100 times 10"
self.expr3 = "max(foo)>=100 time 10"
self.expr4 = ("max(foo{hostname=mini-mon,千==千}, 120)"
" > 100 and (max(bar)>100 "
" or max(biz)>100)".decode('utf8'))
self.expr5 = "maxi(foo)>=100 times 10"
self.expr6 = ("max(foo{hostname=mini-mon,千=千}, 120)"
" = 100 and (max(bar)>100 "
" or max(biz)>100)".decode('utf8'))
self.expr7 = "(max(foo)>=100 times 10"
self.expr8 = ("max(foo,me{hostname=mini-mon,千=千}, 120)"
" = 100 and (max(bar)>100 "
" or max(biz)>100)".decode('utf8'))
def setUp(self):
super(TestAlarmExprParser, self).setUp()
def test_wrong_input(self):
expr = alarm_expr_parser.AlarmExprParser(self.expr3).parse_result
self.assertEqual(None, expr)
expr = alarm_expr_parser.AlarmExprParser(self.expr4).parse_result
self.assertEqual(None, expr)
expr = alarm_expr_parser.AlarmExprParser(self.expr5).parse_result
self.assertEqual(None, expr)
expr = alarm_expr_parser.AlarmExprParser(self.expr6).parse_result
self.assertEqual(None, expr)
expr = alarm_expr_parser.AlarmExprParser(self.expr7).parse_result
self.assertEqual(None, expr)
def test_logic(self):
expr = alarm_expr_parser.AlarmExprParser(self.expr0).parse_result
self.assertEqual(u'AND', expr.logic_operator)
self.assertEqual(None, expr.sub_expr_list[0].logic_operator)
self.assertEqual(u'OR', expr.sub_expr_list[1].logic_operator)
self.assertEqual(u'AND', expr.sub_expr_list[1].
sub_expr_list[1].logic_operator)
expr = alarm_expr_parser.AlarmExprParser(self.expr1).parse_result
self.assertEqual(u'AND', expr.logic_operator)
self.assertEqual('OR', expr.sub_expr_list[1].logic_operator)
self.assertEqual(None, expr.sub_expr_list[0].logic_operator)
self.assertEqual(
'max(foo{hostname=mini-mon,千=千}, 120) > 100'.decode('utf8'),
expr.sub_expr_list[0].fmtd_sub_expr_str)
expr = alarm_expr_parser.AlarmExprParser(self.expr2).parse_result
self.assertEqual(None, expr.logic_operator)
def test_expr(self):
expr = alarm_expr_parser.AlarmExprParser(self.expr0).parse_result
self.assertEqual("max(-_.千幸福的笑脸{घोड़ा=馬,"
"dn2=dv2,千幸福的笑脸घ=千幸福的笑脸घ})gte100"
"times3", expr.sub_expr_list[0].
sub_expr_str.encode('utf8'))
self.assertEqual("sum(biz{dn5=dv58})>99",
expr.sub_expr_list[1].sub_expr_list[1].
sub_expr_list[0].sub_expr_str.encode('utf8'))
expr = alarm_expr_parser.AlarmExprParser(self.expr2).parse_result
self.assertEqual("max(foo)>=100times10",
expr.sub_expr_str.encode('utf8'))
def test_func(self):
expr = alarm_expr_parser.AlarmExprParser(self.expr1).parse_result
self.assertEqual("max", expr.sub_expr_list[1].
sub_expr_list[1].func.encode('utf8'))
expr = alarm_expr_parser.AlarmExprParser(self.expr2).parse_result
self.assertEqual("max", expr.func.encode('utf8'))
def test_threshold(self):
expr = alarm_expr_parser.AlarmExprParser(self.expr1).parse_result
self.assertEqual(100,
float(expr.sub_expr_list[1].
sub_expr_list[1].threshold.encode('utf8')))
expr = alarm_expr_parser.AlarmExprParser(self.expr2).parse_result
self.assertEqual(100, float(expr.threshold))
def test_periods(self):
expr = alarm_expr_parser.AlarmExprParser(self.expr1).parse_result
self.assertEqual(1, int(expr.sub_expr_list[1].
sub_expr_list[1].periods.encode('utf8')))
expr = alarm_expr_parser.AlarmExprParser(self.expr2).parse_result
self.assertEqual(10, int(expr.periods))
def test_period(self):
expr = alarm_expr_parser.AlarmExprParser(self.expr1).parse_result
self.assertEqual(60, int(expr.sub_expr_list[1].
sub_expr_list[1].period.encode('utf8')))
def test_operator(self):
expr = alarm_expr_parser.AlarmExprParser(self.expr1).parse_result
self.assertEqual('GT', expr.sub_expr_list[1].
sub_expr_list[1].normalized_operator.encode('utf8'))
expr = alarm_expr_parser.AlarmExprParser(self.expr2).parse_result
self.assertEqual('>=', expr.operator)
def test_metric_name(self):
expr = alarm_expr_parser.AlarmExprParser(self.expr1).parse_result
self.assertEqual('biz', expr.sub_expr_list[1].
sub_expr_list[1].metric_name.encode('utf8'))
def test_dimensions_str(self):
expr = alarm_expr_parser.AlarmExprParser(self.expr2).parse_result
temp = expr.dimensions_str
self.assertEqual('', temp)
def test_sub_expr_list(self):
expr = alarm_expr_parser.AlarmExprParser(self.expr2).parse_result
temp = expr.sub_expr_list
self.assertEqual([], temp)
def test_dimensions_list(self):
expr = alarm_expr_parser.AlarmExprParser(self.expr0).parse_result
temp = []
for e in expr.sub_expr_list[0].dimensions_as_list:
temp.append(e.encode('utf8'))
self.assertEqual(['घोड़ा=馬', 'dn2=dv2',
'千幸福的笑脸घ=千幸福的笑脸घ'], temp)
expr = alarm_expr_parser.AlarmExprParser(self.expr2).parse_result
self.assertEqual([], expr.dimensions_as_list)
def test_dimensions_dict(self):
expr = alarm_expr_parser.AlarmExprParser(self.expr0).parse_result
temp = {}
od = expr.sub_expr_list[0].dimensions_as_dict
for e in od.keys():
temp[e.encode('utf8')] = od[e].encode('utf8')
self.assertEqual({'घोड़ा': '馬',
'dn2': 'dv2',
'千幸福的笑脸घ': '千幸福的笑脸घ'}, temp)
expr = alarm_expr_parser.AlarmExprParser(self.expr2).parse_result
self.assertEqual({}, expr.dimensions_as_dict)
def test_related_metrics(self):
rm = alarm_expr_parser.AlarmExprParser(self.expr2).related_metrics
e_result = []
e_result.append({
'name': 'foo',
'dimensions': {}
})
self.assertEqual(e_result, rm)
rm = alarm_expr_parser.AlarmExprParser(self.expr1).related_metrics
self.assertEqual(3, len(rm))
def test_sub_alarm_expressions(self):
sae = (alarm_expr_parser.AlarmExprParser(self.expr1).
sub_alarm_expressions)
self.assertEqual(3, len(sae))
def test_wrong_format_expr(self):
sub_expr_list = (alarm_expr_parser.AlarmExprParser(self.expr8).
sub_expr_list)
self.assertEqual(None, sub_expr_list)
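
Pulling the assertions together, a minimal sketch of the parser API as these tests exercise it (only attributes used above appear):

from kiloeyes.common import alarm_expr_parser

expr = alarm_expr_parser.AlarmExprParser('max(foo)>=100 times 10').parse_result
print(expr.func)               # 'max'
print(expr.operator)           # '>='
print(float(expr.threshold))   # 100.0
print(int(expr.periods))       # 10
print(expr.sub_expr_list)      # [] -- no compound logic in this expression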

View File

@ -0,0 +1,78 @@
#
# Copyright 2015 Carnegie Mellon University
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import json
from kiloeyes.common import alarm_expr_validator as validator
from kiloeyes.openstack.common import log
from kiloeyes import tests
import os
LOG = log.getLogger(__name__)
class TestCaseUtil(object):
def __init__(self):
path = os.path.split(os.path.realpath(__file__))[0]
path += '/test_case_alarm_expr_validator.json'
with open(path) as f:
self.test_cases = json.load(f)
def get_alarm_def(self, name):
return json.dumps(self.test_cases[name])
class TestAlarmExprValidator(tests.BaseTestCase):
def setUp(self):
super(TestAlarmExprValidator, self).setUp()
self.util = TestCaseUtil()
def test_is_valid_alarm_definition(self):
self.assertEqual(True, validator.is_valid_alarm_definition(
self.util.get_alarm_def("alarm_def_1")))
self.assertEqual(True, validator.is_valid_alarm_definition(
self.util.get_alarm_def("alarm_def_0")))
self.assertEqual(True, validator.is_valid_alarm_definition(
self.util.get_alarm_def("alarm_def_1_update")))
self.assertEqual(True, validator.is_valid_alarm_definition(
self.util.get_alarm_def("alarm_def_1_update_wrong_0")))
self.assertEqual(False, validator.is_valid_alarm_definition(
self.util.get_alarm_def("alarm_def_0_wrong_0")))
self.assertEqual(False, validator.is_valid_alarm_definition(
self.util.get_alarm_def("alarm_def_0_wrong_1")))
self.assertEqual(True, validator.is_valid_alarm_definition(
self.util.get_alarm_def("alarm_def_1_update_wrong_1")))
def test_is_valid_update_alarm_definition(self):
self.assertEqual(True, validator.is_valid_update_alarm_definition(
self.util.get_alarm_def("alarm_def_1"),
self.util.get_alarm_def("alarm_def_1_update")))
self.assertEqual(False, validator.is_valid_update_alarm_definition(
self.util.get_alarm_def("alarm_def_1"),
self.util.get_alarm_def("alarm_def_0_wrong_0")))
self.assertEqual(False, validator.is_valid_update_alarm_definition(
self.util.get_alarm_def("alarm_def_1"),
self.util.get_alarm_def("alarm_def_1_update_wrong_0")))
self.assertEqual(False, validator.is_valid_update_alarm_definition(
self.util.get_alarm_def("alarm_def_1"),
self.util.get_alarm_def("alarm_def_1_update_wrong_1")))
self.assertEqual(False, validator.is_valid_update_alarm_definition(
self.util.get_alarm_def("alarm_def_1"),
self.util.get_alarm_def("alarm_def_1_update_wrong_2")))
self.assertEqual(False, validator.is_valid_update_alarm_definition(
self.util.get_alarm_def("alarm_def_1"),
self.util.get_alarm_def("alarm_def_1_update_wrong_3")))

View File

@ -0,0 +1,164 @@
{
"alarm_def_0": {
"id": "f9935bcc-9641-4cbf-8224-0993a947ea83",
"name": "Average CPU percent greater than 10",
"description": "The average CPU percent is greater than 10",
"expression": "max(-_.千幸福的笑脸{घोड़ा=馬, dn2=dv2,千幸福的笑脸घ=千幸福的笑脸घ}) gte 100 times 1 And (min(ເຮືອນ{dn3=dv3,家=дом}) < 10 or sum(biz{dn5=dv58}) >9 9and count(fizzle) lt 0 or count(baz) > 1)",
"match_by": [],
"severity": "LOW",
"ok_actions": [
"c60ec47e-5038-4bf1-9f95-4046c6e9a759"
],
"alarm_actions": [
"c60ec47e-5038-4bf1-9f95-4046c6e9a759"
],
"undetermined_actions": [
"c60ec47e-5038-4bf1-9f95-4046c6e9a759"
]
},
"alarm_def_0_wrong_0": {
"id": "f9935bcc-9641-4cbf-8224-0993a947ea83",
"name": "Average CPU percent greater than 10",
"description": "The average CPU percent is greater than 10",
"expression": "max(-_.千幸福的笑脸{घोड़ा=馬, dn2=dv2,千幸福的笑脸घ=千幸福的笑脸घ}) gte 100 times 1 And (min(ເຮືອນ{dn3=dv3,家=дом}) < 10 or sum(biz{dn5=dv58}) >9 9and count(fizzle) lt 0 or count(baz) > 1)",
"match_by": [],
"severity": "LOW",
"ok_actions": [
"c60ec47e-5038-4bf1-9f95-4046c6e9a759"
],
"alarm_actions": [
"c60ec47e-5038-4bf1-9f95-4046c6e9a759"
]
},
"alarm_def_0_wrong_1": {
"id": "f9935bcc-9641-4cbf-8224-0993a947ea83",
"name": "Average CPU percent greater than 10",
"description": "The average CPU percent is greater than 10",
"expression": "max(-_.千幸福的笑脸{घोड़ा=馬, dn2=dv2,千幸福的笑脸घ=千幸福的笑脸घ}) gte 100 times 1 And (min(ເຮືອນ{dn3=dv3,家=дом}) < 10 or sum(biz=5{dn5=dv58}) >9 9and count(fizzle) lt 0 or count(baz) > 1)",
"match_by": [],
"severity": "LOW",
"ok_actions": [
"c60ec47e-5038-4bf1-9f95-4046c6e9a759"
],
"alarm_actions": [
"c60ec47e-5038-4bf1-9f95-4046c6e9a759"
],
"undetermined_actions": [
"c60ec47e-5038-4bf1-9f95-4046c6e9a759"
]
},
"alarm_def_1": {
"id": "f9935bcc-9641-4cbf-8224-0993a947ea83",
"name": "Average CPU percent greater than 10",
"description": "The average CPU percent is greater than 10",
"expression": "max(biz{key2=value2})>1400",
"match_by": [
"hostname"
],
"severity": "LOW",
"ok_actions": [
"c60ec47e-5038-4bf1-9f95-4046c6e9a759"
],
"alarm_actions": [
"c60ec47e-5038-4bf1-9f95-4046c6e9a759"
],
"undetermined_actions": [
"c60ec47e-5038-4bf1-9f95-4046c6e9a759"
]
},
"alarm_def_1_update": {
"id": "f9935bcc-9641-4cbf-8224-0993a947ea83",
"name": "Average CPU percent greater than 10",
"description": "The average CPU percent is greater than 10",
"expression": "min(biz{key2=value2})<1450",
"match_by": [
"hostname"
],
"severity": "LOW",
"ok_actions": [
"c60ec47e-5038-4bf1-9f95-4046c6e9a759"
],
"alarm_actions": [
"c60ec47e-5038-4bf1-9f95-4046c6e9a759"
],
"undetermined_actions": [
"c60ec47e-5038-4bf1-9f95-4046c6e9a759"
]
},
"alarm_def_1_update_wrong_0": {
"id": "f9935bcc-9641-4cbf-8224-0993a947ea83",
"name": "Average CPU percent greater than 10",
"description": "The average CPU percent is greater than 10",
"expression": "min(biz{key2=value1})<1450",
"match_by": [
"hostname"
],
"severity": "LOW",
"ok_actions": [
"c60ec47e-5038-4bf1-9f95-4046c6e9a759"
],
"alarm_actions": [
"c60ec47e-5038-4bf1-9f95-4046c6e9a759"
],
"undetermined_actions": [
"c60ec47e-5038-4bf1-9f95-4046c6e9a759"
]
},
"alarm_def_1_update_wrong_1": {
"id": "f9935bcc-9641-4cbf-8224-0993a947ea83",
"name": "Average CPU percent greater than 10",
"description": "The average CPU percent is greater than 10",
"expression": "min(biz{key2=value2})<1450",
"match_by": [
"os"
],
"severity": "LOW",
"ok_actions": [
"c60ec47e-5038-4bf1-9f95-4046c6e9a759"
],
"alarm_actions": [
"c60ec47e-5038-4bf1-9f95-4046c6e9a759"
],
"undetermined_actions": [
"c60ec47e-5038-4bf1-9f95-4046c6e9a759"
]
},
"alarm_def_1_update_wrong_2": {
"id": "f9935bcc-9641-4cbf-8224-0993a947ea83",
"name": "Average CPU percent greater than 10",
"description": "The average CPU percent is greater than 10",
"expression": "min(biz{key2=value2})<1450 and max(baz)<500",
"match_by": [
"hostname"
],
"severity": "LOW",
"ok_actions": [
"c60ec47e-5038-4bf1-9f95-4046c6e9a759"
],
"alarm_actions": [
"c60ec47e-5038-4bf1-9f95-4046c6e9a759"
],
"undetermined_actions": [
"c60ec47e-5038-4bf1-9f95-4046c6e9a759"
]
},
"alarm_def_1_update_wrong_3": {
"id": "f9935bcc-9641-4cbf-8224-0993a947ea83",
"name": "Average CPU percent greater than 10",
"description": "The average CPU percent is greater than 10",
"expression": "min(baz{key2=value2})<1450",
"match_by": [
"hostname"
],
"severity": "LOW",
"ok_actions": [
"c60ec47e-5038-4bf1-9f95-4046c6e9a759"
],
"alarm_actions": [
"c60ec47e-5038-4bf1-9f95-4046c6e9a759"
],
"undetermined_actions": [
"c60ec47e-5038-4bf1-9f95-4046c6e9a759"
]
}
}

View File

@ -0,0 +1,96 @@
# Copyright 2015 Carnegie Mellon University
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import email.mime.text
import mock
import smtplib
from kiloeyes.common import email_sender
from kiloeyes.openstack.common import log
from kiloeyes import tests
LOG = log.getLogger(__name__)
class TestEmailSender(tests.BaseTestCase):
content = "Mocked Mail Content"
def setUp(self):
super(TestEmailSender, self).setUp()
def testEmailSender_Success(self):
with mock.patch.object(smtplib.SMTP,
"set_debuglevel",
return_value=True):
with mock.patch.object(smtplib.SMTP,
"connect",
return_value=True):
with mock.patch.object(smtplib.SMTP,
"starttls",
return_value=True):
with mock.patch.object(smtplib.SMTP,
"login",
return_value=True):
with mock.patch.object(smtplib.SMTP,
"sendmail",
return_value=True):
with mock.patch.object(email.mime.text,
"MIMEText",
return_value=(
email.mime.text.
MIMEText(
self.content))):
with mock.patch.object(smtplib.SMTP,
"quit",
return_value=True):
ret = (email_sender.EmailSender().
send_emails(
["hanc@andrew.cmu.edu", ],
"Mocked email subject",
"Mocked email content"))
self.assertEqual(ret, True)
def testEmailSender_Failure(self):
with mock.patch.object(smtplib.SMTP,
"set_debuglevel",
return_value=True):
with mock.patch.object(smtplib.SMTP,
"connect",
return_value=True):
with mock.patch.object(smtplib.SMTP,
"starttls",
return_value=True):
with mock.patch.object(smtplib.SMTP,
"login",
return_value=True):
with mock.patch.object(smtplib.SMTP,
"sendmail",
return_value=True,
side_effect=Exception(
'Mock_Exception')):
with mock.patch.object(email.mime.text,
"MIMEText",
return_value=(
email.mime.text.
MIMEText(
self.content))):
with mock.patch.object(smtplib.SMTP,
"quit",
return_value=True):
ret = (email_sender.EmailSender().
send_emails(
["hanc@andrew.cmu.edu", ],
"Mocked email subject",
"Mocked email content"))
self.assertEqual(ret, False)
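
The deeply nested with-blocks above predate contextlib.ExitStack; on Python 2.7 the same stubbing can be collapsed with mock.patch.multiple (a sketch of the pattern, not a rewrite of the tests):

import smtplib

import mock

with mock.patch.multiple(smtplib.SMTP,
                         set_debuglevel=mock.DEFAULT,
                         connect=mock.DEFAULT,
                         starttls=mock.DEFAULT,
                         login=mock.DEFAULT,
                         sendmail=mock.DEFAULT,
                         quit=mock.DEFAULT) as mocks:
    # Reproduce the failure case by making sendmail raise.
    mocks['sendmail'].side_effect = Exception('Mock_Exception')
    # ... exercise email_sender.EmailSender().send_emails(...) here ...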

View File

@ -0,0 +1,59 @@
# Copyright 2013 IBM Corp
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from kiloeyes.common import es_conn
from kiloeyes.microservice import timed_strategy
from kiloeyes.openstack.common.fixture import config
from kiloeyes.openstack.common import log
from kiloeyes import tests
import json
import mock
import requests
LOG = log.getLogger(__name__)
class TestESConnection(tests.BaseTestCase):
def setUp(self):
super(TestESConnection, self).setUp()
self.CONF = self.useFixture(config.Config()).conf
def test_send_messages_with_id(self):
self.CONF.set_override('id_field', 'id', group='es_conn')
self.CONF.set_override('uri', 'http://fake', group='es_conn')
self.CONF.set_override('time_unit', 'h', group='timed_strategy')
strategy = timed_strategy.TimedStrategy()
conn = es_conn.ESConnection('alarms', strategy, 'pre_')
req_result = mock.Mock()
req_result.status_code = 204
msg = {'id': 'whatever'}
with mock.patch.object(requests, 'post', return_value=req_result):
conn.send_messages(json.dumps(msg))
self.assertTrue(requests.post.called)
def test_send_messages_without_id(self):
self.CONF.set_override('id_field', 'id', group='es_conn')
self.CONF.set_override('uri', 'http://fake', group='es_conn')
self.CONF.set_override('time_unit', 'h', group='timed_strategy')
strategy = timed_strategy.TimedStrategy()
conn = es_conn.ESConnection('alarms', strategy, 'pre_')
req_result = mock.Mock()
req_result.status_code = 204
msg = {'not_id': 'whatever'}
with mock.patch.object(requests, 'post', return_value=req_result):
res = conn.send_messages(json.dumps(msg))
self.assertFalse(requests.post.called)
self.assertEqual(res, 400)
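
Read together, the two tests fix the contract of send_messages(): the payload must carry the configured id_field, otherwise the call returns 400 without touching the network. A sketch of a caller honoring that contract (assumes the es_conn and timed_strategy options are already configured):

import json

from kiloeyes.common import es_conn
from kiloeyes.microservice import timed_strategy

conn = es_conn.ESConnection('alarms', timed_strategy.TimedStrategy(), 'pre_')
status = conn.send_messages(json.dumps({'id': 'alarm-0001', 'state': 'OK'}))
if status == 400:
    # The message lacked the configured id field and was rejected locally.
    pass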

View File

@ -0,0 +1,828 @@
{
"alarm_def": {
"alarm_def_utf8": {
"alarm_actions": [
"c60ec47e-5038-4bf1-9f95-4046c6e9a759"
],
"ok_actions": [
"c60ec47e-5038-4bf1-9f95-4046c6e9a759"
],
"name": "alarm definition for test",
"severity": "LOW",
"undetermined_actions": [
"c60ec47e-5038-4bf1-9f95-4046c6e9a759"
],
"match_by": [],
"expression": "max(-_.\u5343\u5e78\u798f\u7684\u7b11\u8138{\u0918\u094b\u0921\u093c\u093e=\u99ac, dn2=dv2,\u5343\u5e78\u798f\u7684\u7b11\u8138\u0918=\u5343\u5e78\u798f\u7684\u7b11\u8138\u0918}) gte 100 times 1 And (min(\u0ec0\u0eae\u0eb7\u0ead\u0e99{dn3=dv3,\u5bb6=\u0434\u043e\u043c}) < 10 or sum(biz{dn5=dv58}) >9 9and count(fizzle) lt 0 or count(baz) > 1)",
"id": "f9935bcc-9641-4cbf-8224-0993a947ea83",
"description": "the definition used for testing thresh processor"
},
"alarm_def_match_by": {
"id": "f9935bcc-9641-4cbf-8224-0993a947ea83",
"name": "alarm definition for test",
"description": "the definition used for testing thresh processor",
"expression": "max(biz{key2=value2})>1400",
"match_by": [
"hostname"
],
"severity": "LOW",
"ok_actions": [
"c60ec47e-5038-4bf1-9f95-4046c6e9a759"
],
"alarm_actions": [
"c60ec47e-5038-4bf1-9f95-4046c6e9a759"
],
"undetermined_actions": [
"c60ec47e-5038-4bf1-9f95-4046c6e9a759"
]
},
"alarm_def_match_by_update": {
"id": "f9935bcc-9641-4cbf-8224-0993a947ea83",
"name": "alarm definition for test",
"description": "the definition used for testing thresh processor",
"expression": "min(biz{key2=value2})<1450",
"match_by": [
"hostname"
],
"severity": "LOW",
"ok_actions": [
"c60ec47e-5038-4bf1-9f95-4046c6e9a759"
],
"alarm_actions": [
"c60ec47e-5038-4bf1-9f95-4046c6e9a759"
],
"undetermined_actions": [
"c60ec47e-5038-4bf1-9f95-4046c6e9a759"
]
},
"alarm_def_count": {
"id": "f9935bcc-9641-4cbf-8224-0993a947ea83",
"name": "alarm definition for test",
"description": "the definition used for testing thresh processor",
"expression": "count(biz{key2=value2})>3",
"match_by": [
"hostname"
],
"severity": "LOW",
"ok_actions": [
"c60ec47e-5038-4bf1-9f95-4046c6e9a759"
],
"alarm_actions": [
"c60ec47e-5038-4bf1-9f95-4046c6e9a759"
],
"undetermined_actions": [
"c60ec47e-5038-4bf1-9f95-4046c6e9a759"
]
},
"alarm_def_more_dimensions": {
"id": "f9935bcc-9641-4cbf-8224-0993a947ea83",
"name": "alarm definition for test",
"description": "the definition used for testing thresh processor",
"expression": "avg(biz{key2=value2})>1400",
"match_by": [
"hostname"
],
"severity": "LOW",
"ok_actions": [
"c60ec47e-5038-4bf1-9f95-4046c6e9a759"
],
"alarm_actions": [
"c60ec47e-5038-4bf1-9f95-4046c6e9a759"
],
"undetermined_actions": [
"c60ec47e-5038-4bf1-9f95-4046c6e9a759"
]
},
"alarm_def_periods": {
"id": "f9935bcc-9641-4cbf-8224-0993a947ea83",
"name": "alarm definition for test",
"description": "the definition used for testing thresh processor",
"expression": "max(foo)>=100 times 4",
"match_by": [],
"severity": "LOW",
"ok_actions": [
"c60ec47e-5038-4bf1-9f95-4046c6e9a759"
],
"alarm_actions": [
"c60ec47e-5038-4bf1-9f95-4046c6e9a759"
],
"undetermined_actions": [
"c60ec47e-5038-4bf1-9f95-4046c6e9a759"
]
},
"alarm_def_periods_update": {
"id": "f9935bcc-9641-4cbf-8224-0993a947ea83",
"name": "alarm definition for test",
"description": "the definition used for testing thresh processor",
"expression": "max(foo,80)>=100 times 6",
"match_by": [],
"severity": "LOW",
"ok_actions": [
"c60ec47e-5038-4bf1-9f95-4046c6e9a759"
],
"alarm_actions": [
"c60ec47e-5038-4bf1-9f95-4046c6e9a759"
],
"undetermined_actions": [
"c60ec47e-5038-4bf1-9f95-4046c6e9a759"
]
},
"alarm_def_wrong": {
"id": "f9935bcc-9641-4cbf-8224-0993a947ea83",
"name": "alarm definition for test",
"description": "the definition used for testing thresh processor",
"expression": "max(foo{hostname=mini-mon,千=千}, 120) = 100 and (max(bar)>100 or max(biz)>100)",
"match_by": [],
"severity": "LOW",
"ok_actions": [
"c60ec47e-5038-4bf1-9f95-4046c6e9a759"
],
"alarm_actions": [
"c60ec47e-5038-4bf1-9f95-4046c6e9a759"
],
"undetermined_actions": [
"c60ec47e-5038-4bf1-9f95-4046c6e9a759"
]
},
"alarm_def_multi_match_by": {
"id": "f9935bcc-9641-4cbf-8224-0993a947ea83",
"name": "alarm definition for test",
"description": "the definition used for testing thresh processor",
"expression": "avg(biz{key2=value2})>1400",
"match_by": [
"hostname",
"system"
],
"severity": "LOW",
"ok_actions": [
"c60ec47e-5038-4bf1-9f95-4046c6e9a759"
],
"alarm_actions": [
"c60ec47e-5038-4bf1-9f95-4046c6e9a759"
],
"undetermined_actions": [
"c60ec47e-5038-4bf1-9f95-4046c6e9a759"
]
}
},
"metrics": {
"metrics_periods_0": [
{
"time_offset": -140,
"name": "foo",
"value": 0,
"dimensions": {
"key2": "value2",
"key1": "value1"
}
},
{
"time_offset": -120,
"name": "foo",
"value": 10,
"dimensions": {
"key2": "value2",
"key1": "value1"
}
},
{
"time_offset": -100,
"name": "foo",
"value": 20,
"dimensions": {
"key2": "value2",
"key1": "value1"
}
},
{
"time_offset": -80,
"name": "foo",
"value": 30,
"dimensions": {
"key2": "value2",
"key1": "value1"
}
},
{
"time_offset": -60,
"name": "foo",
"value": 40,
"dimensions": {
"key2": "value2",
"key1": "value1"
}
},
{
"time_offset": -40,
"name": "foo",
"value": 50,
"dimensions": {
"key2": "value2",
"key1": "value1"
}
},
{
"time_offset": -20,
"name": "foo",
"value": 60,
"dimensions": {
"key2": "value2",
"key1": "value1"
}
},
{
"time_offset": 0,
"name": "foo",
"value": 70,
"dimensions": {
"key2": "value2",
"key1": "value1"
}
},
{
"time_offset": 20,
"name": "foo",
"value": 80,
"dimensions": {
"key2": "value2",
"key1": "value1"
}
},
{
"time_offset": 40,
"name": "foo",
"value": 90,
"dimensions": {
"key2": "value2",
"key1": "value1"
}
}
],
"metrics_periods_1": [
{
"time_offset": -370,
"name": "foo",
"value": 750,
"dimensions": {
"key2": "value2",
"key1": "value1"
}
},
{
"time_offset": -350,
"name": "foo",
"value": 825,
"dimensions": {
"key2": "value2",
"key1": "value1"
}
},
{
"time_offset": -330,
"name": "foo",
"value": 900,
"dimensions": {
"key2": "value2",
"key1": "value1"
}
},
{
"time_offset": -310,
"name": "foo",
"value": 975,
"dimensions": {
"key2": "value2",
"key1": "value1"
}
},
{
"time_offset": -290,
"name": "foo",
"value": 1050,
"dimensions": {
"key2": "value2",
"key1": "value1"
}
},
{
"time_offset": -270,
"name": "foo",
"value": 1125,
"dimensions": {
"key2": "value2",
"key1": "value1"
}
},
{
"time_offset": -250,
"name": "foo",
"value": 1200,
"dimensions": {
"key2": "value2",
"key1": "value1"
}
},
{
"time_offset": -230,
"name": "foo",
"value": 1275,
"dimensions": {
"key2": "value2",
"key1": "value1"
}
},
{
"time_offset": -210,
"name": "foo",
"value": 1350,
"dimensions": {
"key2": "value2",
"key1": "value1"
}
},
{
"time_offset": -190,
"name": "foo",
"value": 1425,
"dimensions": {
"key2": "value2",
"key1": "value1"
}
},
{
"time_offset": -170,
"name": "foo",
"value": 1500,
"dimensions": {
"key2": "value2",
"key1": "value1"
}
},
{
"time_offset": -150,
"name": "foo",
"value": 1575,
"dimensions": {
"key2": "value2",
"key1": "value1"
}
},
{
"time_offset": -130,
"name": "foo",
"value": 1650,
"dimensions": {
"key2": "value2",
"key1": "value1"
}
},
{
"time_offset": -110,
"name": "foo",
"value": 1725,
"dimensions": {
"key2": "value2",
"key1": "value1"
}
},
{
"time_offset": -90,
"name": "foo",
"value": 1800,
"dimensions": {
"key2": "value2",
"key1": "value1"
}
},
{
"time_offset": -70,
"name": "foo",
"value": 1875,
"dimensions": {
"key2": "value2",
"key1": "value1"
}
},
{
"time_offset": -50,
"name": "foo",
"value": 1950,
"dimensions": {
"key2": "value2",
"key1": "value1"
}
},
{
"time_offset": -30,
"name": "foo",
"value": 2025,
"dimensions": {
"key2": "value2",
"key1": "value1"
}
},
{
"time_offset": -10,
"name": "foo",
"value": 2100,
"dimensions": {
"key2": "value2",
"key1": "value1"
}
},
{
"time_offset": 10,
"name": "foo",
"value": 2175,
"dimensions": {
"key2": "value2",
"key1": "value1"
}
}
],
"metrics_periods_2": [
{
"time_offset": -140,
"name": "foo",
"value": 200,
"dimensions": {
"key2": "value2",
"key1": "value1"
}
},
{
"time_offset": -120,
"name": "foo",
"value": 210,
"dimensions": {
"key2": "value2",
"key1": "value1"
}
},
{
"time_offset": -100,
"name": "foo",
"value": 220,
"dimensions": {
"key2": "value2",
"key1": "value1"
}
},
{
"time_offset": -80,
"name": "foo",
"value": 230,
"dimensions": {
"key2": "value2",
"key1": "value1"
}
},
{
"time_offset": -60,
"name": "foo",
"value": 240,
"dimensions": {
"key2": "value2",
"key1": "value1"
}
},
{
"time_offset": -40,
"name": "foo",
"value": 250,
"dimensions": {
"key2": "value2",
"key1": "value1"
}
},
{
"time_offset": -20,
"name": "foo",
"value": 260,
"dimensions": {
"key2": "value2",
"key1": "value1"
}
},
{
"time_offset": 0,
"name": "foo",
"value": 270,
"dimensions": {
"key2": "value2",
"key1": "value1"
}
},
{
"time_offset": 20,
"name": "foo",
"value": 280,
"dimensions": {
"key2": "value2",
"key1": "value1"
}
},
{
"time_offset": 40,
"name": "foo",
"value": 290,
"dimensions": {
"key2": "value2",
"key1": "value1"
}
}
],
"metrics_match_by": [
{
"time_offset": 0,
"name": "biz",
"value": 1300,
"dimensions": {
"key2": "value2",
"hostname": "h1"
}
},
{
"time_offset": 0,
"name": "biz",
"value": 1500,
"dimensions": {
"key3": "value3",
"key2": "value2",
"key1": "value1",
"hostname": "h1"
}
},
{
"time_offset": 0,
"name": "biz",
"value": 1500,
"dimensions": {
"key2": "value2",
"hostname": "h2"
}
},
{
"time_offset": 0,
"name": "biz",
"value": 1200,
"dimensions": {
"key2": "value2",
"hostname": "h3"
}
}
],
"metrics_count_alarm": [
{
"time_offset": -35,
"name": "biz",
"value": 1300,
"dimensions": {
"key2": "value2",
"hostname": "h1"
}
},
{
"time_offset": -30,
"name": "biz",
"value": 1200,
"dimensions": {
"key2": "value2",
"hostname": "h1"
}
},
{
"time_offset": -20,
"name": "biz",
"value": 1200,
"dimensions": {
"key2": "value2",
"hostname": "h1"
}
},
{
"time_offset": -10,
"name": "biz",
"value": 1200,
"dimensions": {
"key2": "value2",
"hostname": "h1"
}
},
{
"time_offset": 0,
"name": "biz",
"value": 1601,
"dimensions": {
"key2": "value2",
"hostname": "h1"
}
}
],
"metrics_count_ok": [
{
"time_offset": -40,
"name": "biz",
"value": 1300,
"dimensions": {
"key2": "value2",
"hostname": "h1"
}
}
],
"metrics_more_dimensions": [
{
"time_offset": -200,
"name": "biz",
"value": 1300,
"dimensions": {
"key2": "value2",
"hostname": "h1"
}
},
{
"time_offset": -30,
"name": "biz",
"value": 1200,
"dimensions": {
"key2": "value2",
"hostname": "h1"
}
},
{
"time_offset": 0,
"name": "biz",
"value": 1601,
"dimensions": {
"key3": "value3",
"key2": "value2",
"key1": "value1",
"hostname": "h1"
}
}
],
"metrics_multi_match_by": [
{
"time_offset": 0,
"name": "biz",
"value": 2000,
"dimensions": {
"key2": "value2",
"hostname": "h1"
}
},
{
"time_offset": 0,
"name": "biz",
"value": 1300,
"dimensions": {
"key2": "value2",
"hostname": "h1",
"system": "windows"
}
},
{
"time_offset": -30,
"name": "biz",
"value": 1200,
"dimensions": {
"key2": "value2",
"hostname": "h1",
"system": "linux"
}
},
{
"time_offset": 0,
"name": "biz",
"value": 1601,
"dimensions": {
"key3": "value3",
"key2": "value2",
"key1": "value1",
"hostname": "h1",
"system": "windows"
}
},
{
"time_offset": 0,
"name": "biz",
"value": 1601,
"dimensions": {
"key3": "value3",
"key2": "value2",
"key1": "value1",
"hostname": "h2",
"system": "linux"
}
}
],
"metrics_utf8": [
{
"time_offset": 0,
"name": "baz",
"value": 1500,
"dimensions": {
"\u0918\u094b\u0921\u093c\u093e": "\u99ac",
"\u5343\u5e78\u798f\u7684\u7b11\u8138\u0918": "\u5343\u5e78\u798f\u7684\u7b11\u8138\u0918",
"dn2": "dv2"
}
},
{
"time_offset": 0,
"name": "-_.\u5343\u5e78\u798f\u7684\u7b11\u8138",
"value": 1500,
"dimensions": {
"\u0918\u094b\u0921\u093c\u093e": "\u99ac",
"\u5343\u5e78\u798f\u7684\u7b11\u8138\u0918": "\u5343\u5e78\u798f\u7684\u7b11\u8138\u0918",
"dn2": "dv2"
}
},
{
"time_offset": 0,
"name": "\u0ec0\u0eae\u0eb7\u0ead\u0e99",
"value": 5,
"dimensions": {
"\u5bb6": "\u0434\u043e\u043c",
"dn3": "dv3"
}
},
{
"time_offset": 0,
"name": "biz",
"value": 5,
"dimensions": {
"dn5": "dv58"
}
},
{
"time_offset": 0,
"name": "biz",
"value": 95,
"dimensions": {
"dn5": "dv58"
}
}
],
"metrics_match_by_wrong": [
{
"time_offset": 0,
"name": "biz",
"value": 1300,
"dimensions": {
"key2": "value2",
"hostname": "h1"
}
},
{
"time_offset": 0,
"name": "biz",
"dimensions": {
"key3": "value3",
"key2": "value2",
"key1": "value1",
"hostname": "h1"
}
}
],
"metrics_not_match": [
{
"time_offset": 0,
"name": "biz",
"value": 1300,
"dimensions": {
"key2": "value2",
"hostname": "h1"
}
},
{
"time_offset": 0,
"name": "biz",
"value": 15000,
"dimensions": {
"key3": "value3",
"key1": "value1",
"hostname": "h1"
}
},
{
"time_offset": 0,
"name": "biz",
"value": 15000,
"dimensions": {
"key3": "value3",
"key2": "value1",
"hostname": "h1"
}
},
{
"time_offset": 0,
"name": "biz",
"value": 15000,
"dimensions": {
"key3": "value3",
"key2": "value1"
}
}
]
}
}

View File

@ -0,0 +1,59 @@
# Copyright 2013 IBM Corp
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import json
import time
from kiloeyes.microservice import metrics_fixer
from kiloeyes.openstack.common import log
from kiloeyes import tests
LOG = log.getLogger(__name__)
class TestMetricsFixer(tests.BaseTestCase):
def setUp(self):
super(TestMetricsFixer, self).setUp()
def test__add_hash(self):
item = {'name': 'name1', 'dimensions': {'name1': 'value1'},
'timestamp': time.time()}
result = metrics_fixer.MetricsFixer._add_hash(item)
data = json.loads(result)
self.assertTrue(data.get('dimensions_hash'))
self.assertTrue(data['timestamp'])
item = {'name': 'name1', 'timestamp': time.time()}
result = metrics_fixer.MetricsFixer._add_hash(item)
data = json.loads(result)
self.assertFalse(data.get('dimensions_hash'))
self.assertTrue(data['timestamp'])
def test_process_msg_one(self):
items = [{'name': 'name1', 'dimensions': {'name1': 'value1'},
'timestamp': time.time()}]
fixer = metrics_fixer.MetricsFixer()
result = fixer.process_msg(json.dumps(items))
self.assertTrue(isinstance(result, str))
self.assertTrue(result.startswith('{"index":{}}'))
def test_process_msg_multiple(self):
items = [{'name': 'name1', 'dimensions': {'name1': 'value1'},
'timestamp': time.time()}]
items.append({'dimensions': {'p1': 1, 'p3': 100.12}})
fixer = metrics_fixer.MetricsFixer()
result = fixer.process_msg(json.dumps(items))
self.assertTrue(isinstance(result, str))

View File

@ -0,0 +1,108 @@
# Copyright 2015 Carnegie Mellon University
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import ast
import json
import mock
from kiloeyes.common import email_sender
from kiloeyes.microservice import notification_processor
from kiloeyes.openstack.common import log
from kiloeyes import tests
LOG = log.getLogger(__name__)
response_str = """
{
"hits":{
"hits":[
{
"_score":1.0,
"_type":"notification_methods",
"_id":"c60ec47e-5038-4bf1-9f95-4046c6e9a719",
"_source":{
"type":"EMAIL",
"id":"c60ec47e-5038-4bf1-9f95-4046c6e9a719",
"name":"NotificationMethod",
"address":"hanc@andrew.cmu.edu"
},
"_index":"admin"
}
],
"total":1,
"max_score":1.0
},
"_shards":{
"successful":5,
"failed":0,
"total":5
},
"took":2
}
"""
class Msg(object):
class message(object):
value = 'message content'
class Es_conn(object):
def get_message_by_id(self, id):
return response_str
class TestNotificationProcessor(tests.BaseTestCase):
def setUp(self):
super(TestNotificationProcessor, self).setUp()
def test_handle_alarm_msg(self):
_es_conn = Es_conn()
msg = Msg()
email_sender.EmailSender = mock.Mock()
r = ("{'metrics': {'timestamp': 1432672915.409,"
"'name': 'biz', 'value': 1500,"
"'dimensions': {'key2': 'value2', 'key1': 'value1'}},"
"'state_updated_timestamp': 1432672915,"
"'state': 'ALARM',"
"'alarm_definition':"
"{'alarm_actions': ['c60ec47e-5038-4bf1-9f95-4046c6e9a759'],"
"'undetermined_actions': "
"['c60ec47e-5038-4bf1-9f95-4046c6e9a759'],"
"'name': 'Average CPU percent greater than 10',"
"'match_by': ['hostname'],"
"'severity': 'LOW',"
"'ok_actions': ['c60ec47e-5038-4bf1-9f95-4046c6e9a759'],"
"'expression': 'max(foo{hostname=mini-mon,mu=na}, 120)"
"> 1100 and max(bar { asd = asd} )>1200 or avg(biz)>1300',"
"'id': 'c60ec47e-5038-4bf1-9f95-4046c6e91111',"
"'description': 'The average CPU percent is greater than 10'}}")
with mock.patch.object(email_sender.EmailSender, 'send_emails',
return_value=""):
with mock.patch.object(notification_processor.
NotificationProcessor,
"_get_notification_method_response",
return_value=json.loads(response_str).
get("hits")):
with mock.patch.object(json, 'loads',
return_value=ast.literal_eval(r)):
np = notification_processor.NotificationProcessor()
np.handle_alarm_msg(_es_conn, msg)
self.assertEqual(np.email_addresses[0],
"hanc@andrew.cmu.edu")

View File

@ -0,0 +1,147 @@
# Copyright 2013 IBM Corp
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import dateutil.parser as dparser
import time
from kiloeyes.microservice import timed_strategy
from kiloeyes.openstack.common.fixture import config
from kiloeyes.openstack.common import log
from kiloeyes import tests
LOG = log.getLogger(__name__)
class TestStrategy(tests.BaseTestCase):
def setUp(self):
super(TestStrategy, self).setUp()
self.CONF = self.useFixture(config.Config()).conf
def test_hour(self):
self.CONF.set_override('time_unit', 'h', group='timed_strategy')
self.strategy = timed_strategy.TimedStrategy()
day = dparser.parse('monkey 2014-07-10 love 12:34:56', fuzzy=True)
self.strategy.set_time(day)
self.assertEqual('20140710120000', self.strategy.get_index())
# test input integer
the_int = int(time.mktime(day.timetuple()))
self.strategy.set_time(the_int)
self.assertEqual('20140710120000', self.strategy.get_index())
# test the str input
self.strategy.set_time('Nov 15, 2014 00:27:05')
self.assertEqual('20141115000000',
self.strategy.get_index())
def test_day(self):
self.CONF.set_override('time_unit', 'd', group='timed_strategy')
self.strategy = timed_strategy.TimedStrategy()
day = dparser.parse('monkey 2014-07-10 love 12:34:56', fuzzy=True)
self.strategy.set_time(day)
self.assertEqual('20140710000000', self.strategy.get_index())
day = dparser.parse('2014-07-10', fuzzy=True)
self.strategy.set_time(day)
self.assertEqual('20140710000000', self.strategy.get_index())
# test input integer
the_int = int(time.mktime(day.timetuple()))
self.strategy.set_time(the_int)
self.assertEqual('20140710000000', self.strategy.get_index())
# test the str input
self.strategy.set_time('Nov 15, 2014')
self.assertEqual('20141115000000',
self.strategy.get_index())
def test_week(self):
self.CONF.set_override('time_unit', 'w', group='timed_strategy')
self.strategy = timed_strategy.TimedStrategy()
day = dparser.parse('2013-10-31', fuzzy=True)
self.strategy.set_time(day)
self.assertEqual('20131027000000', self.strategy.get_index())
day = dparser.parse('2013-11-1', fuzzy=True)
self.strategy.set_time(day)
self.assertEqual('20131027000000', self.strategy.get_index())
day = dparser.parse('2013-11-3', fuzzy=True)
self.strategy.set_time(day)
self.assertEqual('20131103000000', self.strategy.get_index())
day = dparser.parse('2014-09-12', fuzzy=True)
self.strategy.set_time(day)
self.assertEqual('20140907000000', self.strategy.get_index())
day = dparser.parse('monkey 2014-07-10 love 12:34:56', fuzzy=True)
self.strategy.set_time(day)
self.assertEqual('20140706000000', self.strategy.get_index())
# test input integer
the_int = int(time.mktime(day.timetuple()))
self.strategy.set_time(the_int)
self.assertEqual('20140706000000', self.strategy.get_index())
# test the str input
self.strategy.set_time('Nov 15, 2014')
self.assertEqual('20141109000000',
self.strategy.get_index())
def test_month(self):
self.CONF.set_override('time_unit', 'm', group='timed_strategy')
self.strategy = timed_strategy.TimedStrategy()
day = dparser.parse('2014-10-31', fuzzy=True)
self.strategy.set_time(day)
self.assertEqual('20141001000000', self.strategy.get_index())
day = dparser.parse('2014-11-1', fuzzy=True)
self.strategy.set_time(day)
self.assertEqual('20141101000000', self.strategy.get_index())
day = dparser.parse('2014-11-15', fuzzy=True)
self.strategy.set_time(day)
self.assertEqual('20141101000000', self.strategy.get_index())
# test input integer
the_int = int(time.mktime(day.timetuple()))
self.strategy.set_time(the_int)
self.assertEqual('20141101000000', self.strategy.get_index())
# test the str input
self.strategy.set_time('Nov 15, 2014')
self.assertEqual('20141101000000',
self.strategy.get_index())
def test_year(self):
self.CONF.set_override('time_unit', 'y', group='timed_strategy')
self.strategy = timed_strategy.TimedStrategy()
day = dparser.parse('2014-10-31', fuzzy=True)
self.strategy.set_time(day)
self.assertEqual('20140101000000', self.strategy.get_index())
day = dparser.parse('2014-11-1', fuzzy=True)
self.strategy.set_time(day)
self.assertEqual('20140101000000', self.strategy.get_index())
day = dparser.parse('2014-11-15', fuzzy=True)
self.strategy.set_time(day)
self.assertEqual('20140101000000', self.strategy.get_index())
# test input integer
the_int = int(time.mktime(day.timetuple()))
self.strategy.set_time(the_int)
self.assertEqual('20140101000000', self.strategy.get_index())
# test the str input
self.strategy.set_time('Nov 15, 2014')
self.assertEqual('20140101000000',
self.strategy.get_index())
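
The expected index names encode a truncation rule: a timestamp collapses to the start of the configured unit, with weeks starting on Sunday (2013-10-31, a Thursday, maps to 20131027). A sketch of that rule as the assertions imply it (not the module's implementation):

import datetime

def truncate_index(dt, unit):
    if unit == 'y':
        dt = dt.replace(month=1, day=1, hour=0, minute=0, second=0)
    elif unit == 'm':
        dt = dt.replace(day=1, hour=0, minute=0, second=0)
    elif unit == 'w':
        # Weeks start on Sunday: back up (weekday + 1) % 7 days.
        dt = (dt - datetime.timedelta(days=(dt.weekday() + 1) % 7)).replace(
            hour=0, minute=0, second=0)
    elif unit == 'd':
        dt = dt.replace(hour=0, minute=0, second=0)
    else:    # 'h'
        dt = dt.replace(minute=0, second=0)
    return dt.strftime('%Y%m%d%H%M%S')

print(truncate_index(datetime.datetime(2013, 10, 31, 12, 34, 56), 'w'))
# -> 20131027000000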

View File

@ -0,0 +1,212 @@
# Copyright 2015 Carnegie Mellon University
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from oslo_config import fixture as fixture_config
from oslotest import base
from stevedore import driver
from kiloeyes.common import es_conn
from kiloeyes.common import kafka_conn
from kiloeyes.microservice import threshold_engine as engine
class TestThresholdEngine(base.BaseTestCase):
def setUp(self):
self.CONF = self.useFixture(fixture_config.Config()).conf
self.CONF.kafka_opts.uri = 'fake_url'
self.CONF.thresholdengine.metrics_topic = 'fake_metrics'
self.CONF.thresholdengine.alarm_topic = 'fake_alarms'
self.CONF.thresholdengine.check_alarm_interval = 10
self.CONF.alarmdefinitions.index_strategy = ''
self.CONF.alarmdefinitions.doc_type = 'fake_doc_type'
self.CONF.alarmdefinitions.dimensions = 'fake_key:fake_value'
self.CONF.alarmdefinitions.name = 'fake_name'
self.CONF.alarmdefinitions.check_alarm_def_interval = 120
self.CONF.es_conn.uri = 'fake_es_url'
super(TestThresholdEngine, self).setUp()
self.thresh_engine = engine.ThresholdEngine()
def test_initialization(self):
params = {'query': {
'bool': {'must': [
{'query_string': {
'default_field': 'alarmdefinitions.expression_data.'
'dimensions.fake_key',
'query': 'fake_value'}},
{'query_string': {'default_field': 'name',
'query': 'fake_name'}}]}}}
# Test Kafka connection uri and topic
self.assertEqual(self.thresh_engine.thread_metrics.
_consume_kafka_conn.uri, 'fake_url')
self.assertEqual(self.thresh_engine.thread_metrics.
_consume_kafka_conn.topic, 'fake_metrics')
self.assertEqual(self.thresh_engine.thread_alarm_def.params, params)
self.assertEqual(self.thresh_engine.thread_alarm_def.interval, 120)
self.assertEqual(self.thresh_engine.
thread_alarm_def._es_conn.doc_type, 'fake_doc_type')
self.assertEqual(self.thresh_engine.
thread_alarm_def._es_conn.uri, 'fake_es_url/')
self.assertEqual(self.thresh_engine.thread_alarm.
_publish_kafka_conn.uri, 'fake_url')
self.assertEqual(self.thresh_engine.thread_alarm.
_publish_kafka_conn.topic, 'fake_alarms')
self.assertEqual(self.thresh_engine.thread_alarm.interval, 10)
def get_response_str(self, fake_alarm_def):
ad_list = []
for ad in fake_alarm_def:
ad_list.append({'_source': ad})
return {"hits": {"hits": ad_list}}
def test_refresh_alarm_definitions(self):
# test refresh alarm definitions
ad = [{'id': 'fake_id_0', 'expression': 'fake_expr_0'},
{'id': 'fake_id_1', 'expression': 'fake_expr_1'},
{'id': 'fake_id_2', 'expression': 'fake_expr_2'}]
res = mock.Mock()
res.status_code = 200
response_json = self.get_response_str(ad)
res.json.return_value = response_json
processor = mock.Mock()
with mock.patch.object(driver.DriverManager, '__init__',
return_value=None):
with mock.patch.object(driver.DriverManager, 'driver',
return_value=processor):
with mock.patch.object(es_conn.ESConnection, 'get_messages',
return_value=res):
(self.thresh_engine.thread_alarm_def.
refresh_alarm_processors())
tp = self.thresh_engine.thread_alarm_def.threshold_processors
self.assertEqual(3, len(tp))
self.assertIn('fake_id_1', tp)
self.assertIn('fake_id_2', tp)
self.assertNotIn('fake_id_3', tp)
ad = [{'id': 'fake_id_3', 'expression': 'fake_expr_3'},
{'id': 'fake_id_1', 'expression': 'fake_expr_update'},
{'id': 'fake_id_2', 'expression': 'fake_expr_2'}]
res = mock.Mock()
res.status_code = 200
response_json = self.get_response_str(ad)
res.json.return_value = response_json
processor = mock.Mock()
with mock.patch.object(driver.DriverManager, '__init__',
return_value=None):
with mock.patch.object(driver.DriverManager, 'driver',
return_value=processor):
with mock.patch.object(es_conn.ESConnection, 'get_messages',
return_value=res):
(self.thresh_engine.thread_alarm_def.
refresh_alarm_processors())
tp = self.thresh_engine.thread_alarm_def.threshold_processors
self.assertEqual(3, len(tp))
self.assertNotIn('fake_id_0', tp)
self.assertIn('fake_id_2', tp)
self.assertIn('fake_id_3', tp)
self.assertEqual('fake_expr_update',
tp['fake_id_1']['json']['expression'])
# test a non-200 http response (treated as a failed request)
ad = []
res = mock.Mock()
res.status_code = 201
response_json = self.get_response_str(ad)
res.json.return_value = response_json
processor = mock.Mock()
with mock.patch.object(driver.DriverManager, '__init__',
return_value=None):
with mock.patch.object(driver.DriverManager, 'driver',
return_value=processor):
with mock.patch.object(es_conn.ESConnection, 'get_messages',
return_value=res):
(self.thresh_engine.thread_alarm_def.
refresh_alarm_processors())
tp = self.thresh_engine.thread_alarm_def.threshold_processors
self.assertEqual(3, len(tp))
def test_consume_metrics(self):
# test consume received metrics
raw_metrics = [
'{"timestamp": "2015-06-25T14:01:36Z", "name": "biz1", '
'"value": 1300, '
'"dimensions": {"key2": "value2", "hostname": "h1"}}',
'{"timestamp": "2015-06-25T14:01:36Z", "name": "biz2", '
'"value": 1500, '
'"dimensions": {"key2": "value2", "hostname": "h2"}}',
'{"timestamp": "2015-06-25T14:01:36Z", "name": "biz3", '
'"value": 1200, '
'"dimensions": {"key2": "value2", "hostname": "h3"}}']
metrics = [mock.Mock(), mock.Mock(), mock.Mock()]
for i in range(len(raw_metrics)):
metrics[i].message.value = raw_metrics[i]
pre = self.thresh_engine.thread_metrics.threshold_processors.copy()
with mock.patch.object(kafka_conn.KafkaConnection, 'get_messages',
return_value=metrics):
self.thresh_engine.thread_metrics.read_metrics()
self.assertEqual(pre,
self.thresh_engine.thread_metrics.
threshold_processors)
# read one alarm definition and test consume metrics again
processor = mock.Mock()
res = mock.Mock()
res.status_code = 200
response_json = self.get_response_str([{'id': 'fake_id_1'}])
res.json.return_value = response_json
with mock.patch.object(driver.DriverManager, '__init__',
return_value=None):
with mock.patch.object(driver.DriverManager, 'driver',
return_value=processor):
with mock.patch.object(es_conn.ESConnection, 'get_messages',
return_value=res):
(self.thresh_engine.thread_alarm_def.
refresh_alarm_processors())
pre = self.thresh_engine.thread_metrics.threshold_processors.copy()
with mock.patch.object(kafka_conn.KafkaConnection, 'get_messages',
return_value=metrics):
self.thresh_engine.thread_metrics.read_metrics()
self.assertEqual(pre,
self.thresh_engine.thread_metrics.
threshold_processors)
def test_publish_alarms(self):
# read one alarm definition
processor = mock.Mock()
res = mock.Mock()
res.status_code = 200
response_json = self.get_response_str([{'id': 'fake_id_1'}])
res.json.return_value = response_json
with mock.patch.object(driver.DriverManager, '__init__',
return_value=None):
with mock.patch.object(driver.DriverManager, 'driver',
return_value=processor):
with mock.patch.object(es_conn.ESConnection, 'get_messages',
return_value=res):
(self.thresh_engine.thread_alarm_def.
refresh_alarm_processors())
# test send alarms
pre = self.thresh_engine.thread_alarm.threshold_processors.copy()
with mock.patch.object(kafka_conn.KafkaConnection, 'send_messages',
return_value=None):
self.thresh_engine.thread_alarm.send_alarm()
self.assertEqual(pre,
self.thresh_engine.thread_alarm.
threshold_processors)
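
The setup above implies the engine's shape: one thread consumes metrics from Kafka, one periodically refreshes alarm definitions from Elasticsearch, and one publishes alarms back to Kafka, all sharing a threshold_processors dict keyed by definition id. A sketch of the refresh bookkeeping the second test exercises (names follow the test; internals are assumptions):

threshold_processors = {}    # alarm definition id -> processor bookkeeping

def refresh(definitions):
    """Add new or changed processors; drop ones whose definition vanished."""
    seen = set()
    for ad in definitions:
        seen.add(ad['id'])
        entry = threshold_processors.get(ad['id'])
        if entry is None or entry['json'] != ad:
            threshold_processors[ad['id']] = {'json': ad}
    for gone in set(threshold_processors) - seen:
        del threshold_processors[gone]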

View File

@ -0,0 +1,249 @@
#
# Copyright 2015 Carnegie Mellon University
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import json
import mock
import os
from kiloeyes.microservice import threshold_processor as processor
from kiloeyes.openstack.common import log
from kiloeyes.openstack.common import timeutils as tu
from kiloeyes import tests
LOG = log.getLogger(__name__)
class TestCaseUtil(object):
def __init__(self):
path = os.path.split(os.path.realpath(__file__))[0]
path += '/test_case_threshold_processor.json'
with open(path) as f:
self.test_cases = json.load(f)
def get_alarm_def(self, name):
return self.test_cases["alarm_def"][name]
def get_metrics(self, name):
ts = self.test_cases["metrics"][name]
for t in ts:
o = t["time_offset"]
t["timestamp"] = tu.utcnow_ts() + o
yield json.dumps(t)
class TestThresholdProcessor(tests.BaseTestCase):
def setUp(self):
super(TestThresholdProcessor, self).setUp()
self.util = TestCaseUtil()
def test__init_(self):
"""Test processor _init_.
If alarm definition is not in standard format,
the processor cannot be successfully initialized.
Alarm_definition3 is a bad one.
Processor _init_ will fail on this case.
"""
tp = None
try:
ad = self.util.get_alarm_def("alarm_def_utf8")
tp = processor.ThresholdProcessor(ad)
except Exception:
tp = None
self.assertIsInstance(tp, processor.ThresholdProcessor)
try:
ad = self.util.get_alarm_def("alarm_def_match_by")
tp = processor.ThresholdProcessor(ad)
except Exception:
tp = None
self.assertIsInstance(tp, processor.ThresholdProcessor)
try:
ad = self.util.get_alarm_def("alarm_def_periods")
tp = processor.ThresholdProcessor(ad)
except Exception:
tp = None
self.assertIsInstance(tp, processor.ThresholdProcessor)
try:
ad = self.util.get_alarm_def("alarm_def_wrong")
tp = processor.ThresholdProcessor(ad)
except Exception:
tp = None
self.assertIsNone(tp)
def test_process_alarms(self):
"""Test if alarm is correctly produced."""
# test utf8 dimensions and compound logic expr
# init processor
ad = self.util.get_alarm_def("alarm_def_utf8")
tp = processor.ThresholdProcessor(ad)
# send metrics to the processor
metrics_list = self.util.get_metrics("metrics_utf8")
for metrics in metrics_list:
timestamp = json.loads(metrics)['timestamp']
with mock.patch.object(tu, 'utcnow_ts',
return_value=timestamp):
tp.process_metrics(metrics)
# manually call the function to update alarms
alarms = tp.process_alarms()
self.assertEqual(1, len(alarms))
self.assertEqual('ALARM', json.loads(alarms[0])['state'])
# test more than one period
ad = self.util.get_alarm_def("alarm_def_periods")
tp = processor.ThresholdProcessor(ad)
metrics_list = self.util.get_metrics("metrics_periods_0")
for metrics in metrics_list:
timestamp = json.loads(metrics)['timestamp']
with mock.patch.object(tu, 'utcnow_ts',
return_value=timestamp):
tp.process_metrics(metrics)
alarms = tp.process_alarms()
self.assertEqual(1, len(alarms))
self.assertEqual('OK', json.loads(alarms[0])['state'])
ad = self.util.get_alarm_def("alarm_def_periods")
tp = processor.ThresholdProcessor(ad)
metrics_list = self.util.get_metrics("metrics_periods_1")
for metrics in metrics_list:
timestamp = json.loads(metrics)['timestamp']
with mock.patch.object(tu, 'utcnow_ts',
return_value=timestamp):
tp.process_metrics(metrics)
alarms = tp.process_alarms()
self.assertEqual(1, len(alarms))
self.assertEqual('ALARM', json.loads(alarms[0])['state'])
ad = self.util.get_alarm_def("alarm_def_periods")
tp = processor.ThresholdProcessor(ad)
metrics_list = self.util.get_metrics("metrics_periods_2")
for metrics in metrics_list:
timestamp = json.loads(metrics)['timestamp']
with mock.patch.object(tu, 'utcnow_ts',
return_value=timestamp):
tp.process_metrics(metrics)
alarms = tp.process_alarms()
self.assertEqual(0, len(alarms))
# test alarms with match_by
ad = self.util.get_alarm_def("alarm_def_match_by")
tp = processor.ThresholdProcessor(ad)
metrics_list = self.util.get_metrics("metrics_match_by")
for metrics in metrics_list:
timestamp = json.loads(metrics)['timestamp']
with mock.patch.object(tu, 'utcnow_ts',
return_value=timestamp):
tp.process_metrics(metrics)
alarms = tp.process_alarms()
self.assertEqual(3, len(alarms))
self.assertEqual('ALARM', tp.expr_data_queue['h1,']['state'])
self.assertEqual('ALARM', tp.expr_data_queue['h2,']['state'])
self.assertEqual('OK', tp.expr_data_queue['h3,']['state'])
# test alarms with multiple match_by dimensions
ad = self.util.get_alarm_def("alarm_def_multi_match_by")
tp = processor.ThresholdProcessor(ad)
metrics_list = self.util.get_metrics("metrics_multi_match_by")
for metrics in metrics_list:
timestamp = json.loads(metrics)['timestamp']
with mock.patch.object(tu, 'utcnow_ts',
return_value=timestamp):
tp.process_metrics(metrics)
alarms = tp.process_alarms()
self.assertEqual(3, len(alarms))
# test alarms with func count
ad = self.util.get_alarm_def("alarm_def_count")
tp = processor.ThresholdProcessor(ad)
metrics_list = self.util.get_metrics("metrics_count_ok")
for metrics in metrics_list:
timestamp = json.loads(metrics)['timestamp']
with mock.patch.object(tu, 'utcnow_ts',
return_value=timestamp):
tp.process_metrics(metrics)
alarms = tp.process_alarms()
self.assertEqual(1, len(alarms))
self.assertEqual('OK', json.loads(alarms[0])['state'])
metrics_list = self.util.get_metrics("metrics_count_alarm")
for metrics in metrics_list:
timestamp = json.loads(metrics)['timestamp']
with mock.patch.object(tu, 'utcnow_ts',
return_value=timestamp):
tp.process_metrics(metrics)
alarms = tp.process_alarms()
self.assertEqual(1, len(alarms))
self.assertEqual(1, len(json.loads(alarms[0])['metrics']))
self.assertEqual('ALARM', json.loads(alarms[0])['state'])
# test alarms with metrics having more dimensions
ad = self.util.get_alarm_def("alarm_def_more_dimensions")
tp = processor.ThresholdProcessor(ad)
metrics_list = self.util.get_metrics("metrics_more_dimensions")
for metrics in metrics_list:
timestamp = json.loads(metrics)['timestamp']
with mock.patch.object(tu, 'utcnow_ts',
return_value=timestamp):
tp.process_metrics(metrics)
alarms = tp.process_alarms()
self.assertEqual(1, len(alarms))
self.assertEqual(1, len(json.loads(alarms[0])['metrics']))
self.assertEqual('ALARM', json.loads(alarms[0])['state'])
# test receiving metrics in a wrong format
ad = self.util.get_alarm_def("alarm_def_match_by")
tp = processor.ThresholdProcessor(ad)
metrics_list = self.util.get_metrics("metrics_match_by_wrong")
for metrics in metrics_list:
timestamp = json.loads(metrics)['timestamp']
with mock.patch.object(tu, 'utcnow_ts',
return_value=timestamp):
tp.process_metrics(metrics)
alarms = tp.process_alarms()
self.assertEqual(1, len(alarms))
self.assertEqual([1300],
json.loads(alarms[0])
['sub_alarms'][0]['current_values'])
# test when the received metrics' dimensions do not match
ad = self.util.get_alarm_def("alarm_def_match_by")
tp = processor.ThresholdProcessor(ad)
alarms = tp.process_alarms()
metrics_list = self.util.get_metrics("metrics_not_match")
for metrics in metrics_list:
tp.process_metrics(metrics)
alarms = tp.process_alarms()
self.assertEqual('OK', json.loads(alarms[0])['state'])
# test a successful alarm definition update
ad = self.util.get_alarm_def("alarm_def_match_by")
tp = processor.ThresholdProcessor(ad)
metrics_list = self.util.get_metrics("metrics_match_by")
for metrics in metrics_list:
timestamp = json.loads(metrics)['timestamp']
with mock.patch.object(tu, 'utcnow_ts',
return_value=timestamp):
tp.process_metrics(metrics)
alarms = tp.process_alarms()
ad = self.util.get_alarm_def("alarm_def_match_by_update")
re = tp.update_thresh_processor(ad)
self.assertTrue(re)
alarms = tp.process_alarms()
self.assertEqual(3, len(alarms))
ad = self.util.get_alarm_def("alarm_def_periods")
tp = processor.ThresholdProcessor(ad)
ad = self.util.get_alarm_def("alarm_def_periods_update")
re = tp.update_thresh_processor(ad)
self.assertTrue(re)
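# A minimal sketch of how a caller might drive the processor outside
# these tests; the metric fields follow the shape used above, and
# publish() is a hypothetical downstream sink:
#
# tp = processor.ThresholdProcessor(alarm_definition)
# tp.process_metrics(json.dumps({'name': 'cpu.usage',
#                                'dimensions': {'hostname': 'h1'},
#                                'timestamp': tu.utcnow_ts(),
#                                'value': 42}))
# for alarm in tp.process_alarms():
#     publish(alarm)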

103
kiloeyes/tests/setup_metrics.py Executable file
View File

@ -0,0 +1,103 @@
#!/usr/bin/python
# Copyright 2014 IBM Corp
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# This script creates a set of metrics at the endpoint specified as the
# program argument.
import json
import random
import requests
import string
import sys
import time
MOLD = {"name": "name1",
"timestamp": '2014-12-01',
"value": 100
}
MOLD_DIMENSIONS = {"key1": None}
def setup_metrics(argv):
for a in range(100):
MOLD_DIMENSIONS['key1'] = (
''.join(random.sample(string.ascii_uppercase * 6, 6)))
MOLD_DIMENSIONS['key2'] = (
''.join(random.sample(string.ascii_uppercase * 6, 6)))
MOLD_DIMENSIONS['key_' + str(a)] = (
''.join(random.sample(string.ascii_uppercase * 6, 6)))
"""
import hashlib
key_str = json.dumps(MOLD_DIMENSIONS, sort_keys=True,
indent=None,
separators=(',', ':'))
key = hashlib.md5(key_str).hexdigest()
MOLD['dimensions_hash'] = key
"""
MOLD['dimensions'] = MOLD_DIMENSIONS
print('starting round %s' % a)
# Generate 100 unique metric names
for i in range(100):
MOLD['name'] = ''.join(random.sample(string.ascii_uppercase * 6,
6))
for j in range(10):
MOLD['value'] = round((i + 1) * j * random.random(), 2)
the_time = time.time()
# single messages
for k in range(10):
factor = round(random.random(), 2) * 100
MOLD['timestamp'] = the_time + k * 50000 * factor
MOLD['value'] = i * j * k * random.random()
res = requests.post(argv[1], data=json.dumps(MOLD))
if res.status_code != 201 and res.status_code != 204:
print(json.dumps(MOLD))
exit(1)
# multiple messages
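# Each batched request body is a JSON array of ten metric objects,
# e.g. (values illustrative):
# [{"name": "ABCDEF", "timestamp": 1438888000.0, "value": 1.2,
#   "dimensions": {"key1": "QWERTY", "key2": "ZXCVBN", "key_0": "AB"}},
#  ...]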
for k in range(3):
msg = "["
factor = round(random.random(), 2) * 100
MOLD['timestamp'] = the_time + k * 50000 * factor
MOLD['value'] = i * j * k * random.random()
msg += json.dumps(MOLD)
for l in range(9):
factor = round(random.random(), 2) * 100
MOLD['timestamp'] = the_time + k * 50000 * factor
MOLD['value'] = i * j * k * random.random()
msg += ',' + json.dumps(MOLD)
msg += "]"
res = requests.post(argv[1], data=msg)
if res.status_code != 201 and res.status_code != 204:
print(json.dumps(MOLD))
exit(1)
del MOLD_DIMENSIONS['key_' + str(a)]
print('round %s finished' % a)
if __name__ == '__main__':
if len(sys.argv) == 2:
setup_metrics(sys.argv)
else:
print('Usage: setup_metrics endpoint. For example:')
print(' setup_metrics http://host:9000/data_2015')

View File

@ -0,0 +1,438 @@
# Copyright 2015 Carnegie Mellon University
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import falcon
import mock
import os
from oslo_config import fixture as fixture_config
from oslotest import base
import requests
from kiloeyes.common import alarm_expr_parser
from kiloeyes.common import alarm_expr_validator
from kiloeyes.common import es_conn
from kiloeyes.v2.elasticsearch import alarmdefinitions
try:
import ujson as json
except ImportError:
import json
class TestAlarmDefinitionUtil(base.BaseTestCase):
def setUp(self):
super(TestAlarmDefinitionUtil, self).setUp()
self.req = mock.Mock()
class TestAlarmDefinitionDispatcher(base.BaseTestCase):
def setUp(self):
self.CONF = self.useFixture(fixture_config.Config()).conf
self.CONF.set_override('doc_type', 'fake', group='alarmdefinitions')
self.CONF.set_override('uri', 'fake_es_uri', group='es_conn')
super(TestAlarmDefinitionDispatcher, self).setUp()
self.dispatcher_get = (
alarmdefinitions.AlarmDefinitionDispatcher({}))
self.dispatcher_get_by_id = (
alarmdefinitions.AlarmDefinitionDispatcher({}))
self.dispatcher_post = (
alarmdefinitions.AlarmDefinitionDispatcher({}))
self.dispatcher_put = (
alarmdefinitions.AlarmDefinitionDispatcher({}))
self.dispatcher_delete = (
alarmdefinitions.AlarmDefinitionDispatcher({}))
dir_path = os.path.dirname(os.path.realpath(__file__))
alarms_data_json = open(os.path.join(dir_path,
'test_alarmdefinitions_data')
).read().replace('\n', '')
self.data = json.loads(alarms_data_json)
def test_initialization(self):
# test that the doc type of the es connection is fake
self.assertEqual(self.dispatcher_get._es_conn.doc_type, 'fake')
self.assertEqual(self.dispatcher_get._es_conn.uri, 'fake_es_uri/')
def test_do_get_alarm_definitions(self):
res = mock.Mock()
req = mock.Mock()
req_result = mock.Mock()
response_str = self.data
req_result.json.return_value = response_str
req_result.status_code = 200
req.query_string = 'name=CPU usage test&dimensions=os:linux'
with mock.patch.object(es_conn.ESConnection, 'get_messages',
return_value=req_result):
self.dispatcher_get.do_get_alarm_definitions_filtered(req, res)
# test that the response code is 200
self.assertEqual(res.status, getattr(falcon, 'HTTP_200'))
json_result = json.loads(res.body)
obj = json_result['elements']
# test that the first response object has the required properties
self.assertEqual(obj[0]['id'],
'8c85be40-bfcb-465c-b450-4eea670806a6')
self.assertEqual(obj[0]['name'], "CPU usage test")
self.assertEqual(obj[0]['alarm_actions'],
"c60ec47e-5038-4bf1-9f95-4046c6e9a719")
self.assertEqual(obj[0]['undetermined_actions'],
"c60ec47e-5038-4bf1-9t95-4046c6e9a759")
self.assertEqual(obj[0]['ok_actions'],
"c60ec47e-5038-4bf1-9f95-4046cte9a759")
self.assertEqual(obj[0]['match_by'], "hostname")
self.assertEqual(obj[0]['severity'], "LOW")
self.assertEqual(obj[0]['expression'],
"max(cpu.usage{os=linux},600)>15")
self.assertNotEqual(obj[0]['expression_data'], None)
self.assertEqual(obj[0]['description'], "Max CPU 15")
# test that the second response object has the required properties
self.assertEqual(obj[1]['id'],
'eb43fe12-b442-40b6-aab6-f34450cf90dd')
self.assertEqual(obj[1]['name'], "CPU usage in last 4 minutes")
self.assertEqual(obj[1]['alarm_actions'],
"c60ec47e-5038-4bf1-9f95-4046c6e9a719")
self.assertEqual(obj[1]['undetermined_actions'],
"c60ec47e-5038-4bf1-9t95-4046c6e9a759")
self.assertEqual(obj[1]['ok_actions'],
"c60ec47e-5038-4bf1-9f95-4046cte9a759")
self.assertEqual(obj[1]['match_by'], "hostname")
self.assertEqual(obj[1]['severity'], "LOW")
self.assertEqual(obj[1]['expression'],
"max(cpu.usage,60)>10 times 4")
self.assertNotEqual(obj[1]['expression_data'], None)
self.assertEqual(obj[1]['description'],
"max CPU greater than 10")
self.assertEqual(len(obj), 2)
def test_do_get_alarm_definitions_by_id(self):
res = mock.Mock()
req = mock.Mock()
req_result = mock.Mock()
req_result.json.return_value = self.data
req_result.status_code = 200
with mock.patch.object(requests, 'get', return_value=req_result):
self.dispatcher_get_by_id.do_get_alarm_definitions_by_id(
req, res, id="72df5ccb-ec6a-4bb4-a15c-939467ccdde0")
# test that the response code is 200
self.assertEqual(res.status, getattr(falcon, 'HTTP_200'))
obj = json.loads(res.body)
self.assertEqual(obj['id'],
"72df5ccb-ec6a-4bb4-a15c-939467ccdde0")
self.assertEqual(obj['name'], "CPU usage test")
self.assertEqual(obj['alarm_actions'],
"c60ec47e-5038-4bf1-9f95-4046c6e9a719")
self.assertEqual(obj['undetermined_actions'],
"c60ec47e-5038-4bf1-9t95-4046c6e9a759")
self.assertEqual(obj['ok_actions'],
"c60ec47e-5038-4bf1-9f95-4046cte9a759")
self.assertEqual(obj['match_by'], "hostname")
self.assertEqual(obj['severity'], "LOW")
self.assertEqual(obj['expression'],
"max(cpu.usage{os=linux},600)>15")
self.assertEqual(obj['description'], "Max CPU 15")
def test_do_post_alarm_definitions(self):
req = mock.Mock()
res = mock.Mock()
req_result = mock.Mock()
req_result.status_code = 201
with mock.patch.object(requests, 'post', return_value=req_result):
with mock.patch.object(req.stream, 'read',
return_value="{ 'name': 'CPU usage test', "
"'alarm_actions': "
"'c60ec47e-5038-4bf1-9f95-"
"4046c6e9a719', "
"'undetermined_actions': "
"'c60ec47e-5038-4bf1-9t95-"
"4046c6e9a759', 'ok_actions':"
" 'c60ec47e-5038-4bf1-9f95-"
"4046cte9a759', "
"'match_by': 'hostname', "
"'severity': 'LOW', "
"'expression': "
"'max(cpu.usage{os=linux},"
"600)"
">15', 'description': "
"'Max CPU 15'"
"}"
):
self.dispatcher_post.do_post_alarm_definitions(
req, res)
self.assertEqual(res.status, getattr(falcon, 'HTTP_201'))
def test_do_put_alarm_definitions(self):
req = mock.Mock()
res = mock.Mock()
req_result = mock.Mock()
req_result.status_code = 200
req_get_result = mock.Mock()
req_get_result.json.return_value = self.data
req_get_result.status_code = 200
with mock.patch.object(requests, 'get', return_value=req_get_result):
with mock.patch.object(requests, 'put', return_value=req_result):
with mock.patch.object(
req.stream, 'read',
return_value="{ 'name': 'CPU usage test', "
"'alarm_actions': "
"'c60ec47e-5038-4bf1-9f95-"
"4046c6e9a719', "
"'undetermined_actions': "
"'c60ec47e-5038-4bf1-9t95-"
"4046c6e9a759', 'ok_actions':"
" 'c60ec47e-5038-4bf1-9f95-"
"4046cte9a759', "
"'match_by': 'hostname', "
"'severity': 'LOW', "
"'expression': "
"'max(cpu.usage{os=linux},"
"600)"
">15', 'description': "
"'Max CPU 15'"
"}"
):
self.dispatcher_put.do_put_alarm_definitions(
req, res, id="8c85be40-bfcb-465c-b450-4eea670806a6")
self.assertEqual(res.status, getattr(falcon, 'HTTP_200'))
def test_do_delete_alarm_definitions(self):
with mock.patch.object(es_conn.ESConnection, 'del_messages',
return_value=200):
res = mock.Mock()
self.dispatcher_delete.do_delete_alarm_definitions(
mock.Mock(), res, id="72df5ccb-ec6a-4bb4-a15c-939467ccdde0")
self.assertEqual(res.status, getattr(falcon, 'HTTP_200'))
def test_do_delete_alarm_definitions_exception(self):
with mock.patch.object(es_conn.ESConnection, 'del_messages',
return_value=0,
side_effect=Exception('Exception')):
res = mock.Mock()
self.dispatcher_delete.do_delete_alarm_definitions(
mock.Mock(), res, id="72df5ccb-ec6a-4bb4-a15c-939467ccdde0")
self.assertEqual(res.status, getattr(falcon, 'HTTP_400'))
def test_do_get_alarm_definitions_filtered_exception(self):
res = mock.Mock()
req = mock.Mock()
req_result = mock.Mock()
req_result.json.return_value = ''
req_result.status_code = 400
req.query_string = 'name=CPU usage test&dimensions=os:linux'
with mock.patch.object(es_conn.ESConnection, 'get_messages',
return_value=req_result):
self.dispatcher_get.do_get_alarm_definitions_filtered(req, res)
# test that the response code is 400
self.assertEqual(res.status, getattr(falcon, 'HTTP_400'))
def test_do_post_alarm_definitions_exception(self):
req = mock.Mock()
res = mock.Mock()
req_result = mock.Mock()
req_result.status_code = 201
with mock.patch.object(requests, 'post', return_value=req_result):
with mock.patch.object(req.stream, 'read',
return_value="{ 'name': 'CPU usage test', "
"'alarm_actions': "
"'c60ec47e-5038-4bf1-9f95-"
"4046c6e9a719', "
"'undetermined_actions': "
"'c60ec47e-5038-4bf1-9t95-"
"4046c6e9a759', 'ok_actions':"
" 'c60ec47e-5038-4bf1-9f95-"
"4046cte9a759', "
"'match_by': 'hostname', "
"'severity': 'LOW', "
"'expression': "
"'max(cpu.usage{os=linux},"
"600)"
">15', 'description': "
"'Max CPU 15'"
"}"
):
with mock.patch.object(alarm_expr_validator,
'is_valid_alarm_definition',
return_value=False):
self.dispatcher_post.do_post_alarm_definitions(
req, res)
# test that the response code is 400
self.assertEqual(res.status, getattr(falcon, 'HTTP_400'))
def test_do_post_alarm_definitions_parse_exception(self):
req = mock.Mock()
res = mock.Mock()
req_result = mock.Mock()
req_result.status_code = 201
with mock.patch.object(requests, 'post', return_value=req_result):
with mock.patch.object(req.stream, 'read',
return_value="{ 'name': 'CPU usage test', "
"'alarm_actions': "
"'c60ec47e-5038-4bf1-9f95-"
"4046c6e9a719', "
"'undetermined_actions': "
"'c60ec47e-5038-4bf1-9t95-"
"4046c6e9a759', 'ok_actions':"
" 'c60ec47e-5038-4bf1-9f95-"
"4046cte9a759', "
"'match_by': 'hostname', "
"'severity': 'TEST', "
"'expression': "
"'max(cpu.usage{os=linux},"
"600)"
">15', 'description': "
"'Max CPU 15'"
"}"
):
with mock.patch.object(alarm_expr_validator,
'is_valid_alarm_definition',
return_value=True):
with mock.patch.object(alarm_expr_parser,
'AlarmExprParser',
return_value=None,
side_effect=(Exception('Exc'
'eption'))):
self.dispatcher_post.do_post_alarm_definitions(
req, res)
# test that the response code is 400
self.assertEqual(res.status, getattr(falcon,
'HTTP_400'))
def test_do_put_alarm_definitions_exception(self):
req = mock.Mock()
res = mock.Mock()
req_result = mock.Mock()
req_result.status_code = 400
req_get_result = mock.Mock()
req_get_result.json.return_value = self.data
req_get_result.status_code = 200
with mock.patch.object(requests, 'get', return_value=req_get_result):
with mock.patch.object(requests, 'put', return_value=req_result):
with mock.patch.object(
req.stream, 'read',
return_value="{ 'name': 'CPU usage test', "
""
"'c60ec47e-5038-4bf1-9f95-"
"4046c6e9a719', "
"'undetermined_actions': "
"'c60ec47e-5038-4bf1-9t95-"
"4046c6e9a759', 'ok_actions':"
" 'c60ec47e-5038-4bf1-9f95-"
"4046cte9a759', "
"'match_by': 'hostname', "
"'severity': 'LOW', "
"'expression': "
"'max(cpu.usage{os=linux},"
"600)"
">15', 'description': "
"'Max CPU 15'"
"}"
):
with mock.patch.object(alarm_expr_validator,
'is_valid_alarm_definition',
return_value=False):
with mock.patch.object(
alarm_expr_parser.AlarmExprParser,
'sub_alarm_expressions', return_value=None,
side_effect=(Exception('Exception'))):
self.dispatcher_put.do_put_alarm_definitions(
req, res,
id="8c85be40-bfcb-465c-b450-4eea670806a6")
# test that the response code is 400
self.assertEqual(res.status, getattr(falcon,
'HTTP_400'))
def test_do_put_alarm_definitions_else_exception(self):
req = mock.Mock()
res = mock.Mock()
req_result = mock.Mock()
req_result.status_code = 400
req_get_result = mock.Mock()
req_get_result.json.return_value = self.data
req_get_result.status_code = 200
with mock.patch.object(requests, 'get', return_value=req_get_result):
with mock.patch.object(requests, 'put', return_value=req_result):
with mock.patch.object(
req.stream, 'read',
return_value="{ 'name': 'CPU usage test', "
"'alarm_actions': "
"'c60ec47e-5038-4bf1-9f95-"
"4046c6e9a719', "
"'undetermined_actions': "
"'c60ec47e-5038-4bf1-9t95-"
"4046c6e9a759', 'ok_actions':"
" 'c60ec47e-5038-4bf1-9f95-"
"4046cte9a759', "
"'match_by': 'hostname', "
"'severity': 'LOW', "
"'expression': "
"'max(cpu.usage{os=linux},"
"600)"
">15', 'description': "
"'Max CPU 15'"
"}"
):
with mock.patch.object(alarm_expr_validator,
'is_valid_alarm_definition',
return_value=False):
with mock.patch.object(
alarm_expr_parser.AlarmExprParser,
'sub_alarm_expressions', return_value=None,
side_effect=(Exception('Exception'))):
self.dispatcher_put.do_put_alarm_definitions(
req, res,
id="8c85be40-bfcb-465c-b450-4eea670806a6")
# test that the response code is 400
self.assertEqual(res.status, getattr(falcon,
'HTTP_400'))
def test_do_get_alarm_definitions_by_id_exception(self):
res = mock.Mock()
req = mock.Mock()
req_result = mock.Mock()
req_result.json.return_value = ''
req_result.status_code = 400
with mock.patch.object(es_conn.ESConnection, 'get_message_by_id',
return_value=req_result):
self.dispatcher_get_by_id.do_get_alarm_definitions_by_id(
req, res, id="72df5ccb-ec6a-4bb4-a15c-939467ccdde0")
# test that the response code is 400
self.assertEqual(res.status, getattr(falcon, 'HTTP_400'))

View File

@ -0,0 +1,82 @@
{
"hits":{
"hits":[
{
"_score":1.0,
"_type":"alarmdefinitions",
"_id":"8c85be40-bfcb-465c-b450-4eea670806a6",
"_source":{
"id":"8c85be40-bfcb-465c-b450-4eea670806a6",
"name":"CPU usage test",
"alarm_actions":
"c60ec47e-5038-4bf1-9f95-4046c6e9a719",
"undetermined_actions":
"c60ec47e-5038-4bf1-9t95-4046c6e9a759",
"ok_actions":
"c60ec47e-5038-4bf1-9f95-4046cte9a759",
"match_by":"hostname",
"severity":"LOW",
"expression":
"max(cpu.usage{os=linux},600)>15",
"expression_data": [
{
"function": "AVG",
"metric_name": "cpu.avg.lt.10",
"period": "6000",
"threshold": "10",
"periods": "1",
"operator": "LTE",
"dimensions": {
"os": "linux"
}
}
],
"description": "Max CPU 15"
},
"_index":"data_20150601000000"
},
{
"_score":1.0,
"_type":"alarmdefinitions",
"_id":"eb43fe12-b442-40b6-aab6-f34450cf90dd",
"_source":{
"id":"eb43fe12-b442-40b6-aab6-f34450cf90dd",
"name":"CPU usage in last 4 minutes",
"alarm_actions":
"c60ec47e-5038-4bf1-9f95-4046c6e9a719",
"undetermined_actions":
"c60ec47e-5038-4bf1-9t95-4046c6e9a759",
"ok_actions":
"c60ec47e-5038-4bf1-9f95-4046cte9a759",
"match_by":"hostname",
"severity":"LOW",
"expression":
"max(cpu.usage,60)>10 times 4",
"expression_data": [
{
"function": "AVG",
"metric_name": "cpu.avg.lt.10",
"period": "6000",
"threshold": "10",
"periods": "1",
"operator": "LTE",
"dimensions": {
"os": "linux"
}
}
],
"description": "max CPU greater than 10"
},
"_index":"data_20150601000000"
}
],
"total":2,
"max_score":1.0
},
"_shards":{
"successful":5,
"failed":0,
"total":5
},
"took":2
}

View File

@ -0,0 +1,281 @@
# Copyright 2015 Carnegie Mellon University
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import falcon
import mock
import os
from oslo_config import fixture as fixture_config
from oslotest import base
from kiloeyes.common import es_conn
from kiloeyes.v2.elasticsearch import alarms
try:
import ujson as json
except ImportError:
import json
class TestAlarmDispatcher(base.BaseTestCase):
def setUp(self):
self.CONF = self.useFixture(fixture_config.Config()).conf
self.CONF.set_override('doc_type', 'fake', group='alarms')
self.CONF.set_override('uri', 'fake_es_uri', group='es_conn')
super(TestAlarmDispatcher, self).setUp()
self.dispatcher_get = (
alarms.AlarmDispatcher({}))
self.dispatcher_get_by_id = (
alarms.AlarmDispatcher({}))
self.dispatcher_put = (
alarms.AlarmDispatcher({}))
self.dispatcher_delete = (
alarms.AlarmDispatcher({}))
dir_path = os.path.dirname(os.path.realpath(__file__))
alarms_data_json = open(os.path.join(dir_path,
'test_alarms_data')
).read().replace('\n', '')
self.data = json.loads(alarms_data_json)
get_alarms_data = open(os.path.join(dir_path,
'test_get_alarms_data')
).read().replace('\n', '')
self.get_alarms_data = json.loads(get_alarms_data)
def test_initialization(self):
# test that the doc type of the es connection is fake
self.assertEqual(self.dispatcher_get._es_conn.doc_type, 'fake')
self.assertEqual(self.dispatcher_get._es_conn.uri, 'fake_es_uri/')
def test_do_get_alarms(self):
res = mock.Mock()
req = mock.Mock()
req_result = mock.Mock()
response_str = self.get_alarms_data
req_result.json.return_value = response_str
req_result.status_code = 200
req.query_string = 'metric_dimensions=hostname:h7,os:linux&state=OK'
with mock.patch.object(es_conn.ESConnection, 'get_messages',
return_value=req_result):
self.dispatcher_get.do_get_alarms(req, res)
# test that the response code is 200
self.assertEqual(res.status, getattr(falcon, 'HTTP_200'))
json_result = json.loads(res.body)
obj = json_result['elements']
# test that the first response object has the required properties
self.assertEqual(obj[0]['id'],
'1bcbe772-f12b-44ef-a1b5-7685baeaaba2')
self.assertNotEqual(obj[0]['alarm_definition'], None)
self.assertNotEqual(obj[0]['metrics'], None)
self.assertEqual(obj[0]['state'], 'OK')
self.assertNotEqual(obj[0]['sub_alarms'], None)
# test that the second response object has the required properties
self.assertEqual(obj[1]['id'],
'256acdac-2f05-4e3e-85a3-802055bf2863')
self.assertNotEqual(obj[1]['alarm_definition'], None)
self.assertNotEqual(obj[1]['metrics'], None)
self.assertEqual(obj[1]['state'], 'UNDETERMINED')
self.assertNotEqual(obj[1]['sub_alarms'], None)
self.assertEqual(len(obj), 2)
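# Note: the mocked fixture (test_get_alarms_data) returns results via
# a terms aggregation (latest_state -> top_state_hits) sorted by
# update time, so the dispatcher appears to flatten the newest hit per
# alarm definition into the 'elements' list verified above.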
def test_do_get_alarms_by_id(self):
res = mock.Mock()
req = mock.Mock()
req_result = mock.Mock()
req_result.json.return_value = self.data
req_result.status_code = 200
with mock.patch.object(es_conn.ESConnection, 'get_message_by_id',
return_value=req_result):
self.dispatcher_get_by_id.do_get_alarms_by_id(
req, res, id="d718fb26-d16d-4705-8f02-13a1468619c9")
# test that the response code is 200
self.assertEqual(res.status, getattr(falcon, 'HTTP_200'))
obj = json.loads(res.body)
# test that the response object has the required properties
self.assertEqual(obj['id'],
"d718fb26-d16d-4705-8f02-13a1468619c9")
self.assertNotEqual(obj['metrics'], None)
self.assertEqual(obj['state'], 'OK')
self.assertNotEqual(obj['sub_alarms'], None)
self.assertEqual(obj['created_timestamp'], '2015-06-17T18:43:21Z')
self.assertEqual(obj['updated_timestamp'], '2015-06-17T18:43:27Z')
self.assertEqual(obj['state_updated_timestamp'],
'2015-06-17T18:43:27Z')
def test_do_put_alarms(self):
req = mock.Mock()
res = mock.Mock()
req_result = ("{ 'id': 'd718fb26-d16d-4705-"
"8f02-13a1468619c9', "
"'links': ["
"{"
"'href': 'http://127.0.0.1:"
"9090/v2.0/alarms/'"
"'d718fb26-d16d-4705-8f02-"
"13a1468619c9', "
"'rel': 'self}], '"
"'metrics': [{ "
"'name': 'cpu.usage', "
"'dimensions': { "
"'hostname': "
"'host7', 'os': 'linux' }}],"
"'state': 'OK', "
"'sub_alarms': [{"
"'sub_alarm_expression': {"
"'function': 'AVG', "
"'metric_name': "
"'cpu.usage', "
"'period': '600', "
"'threshold': '10', "
"'periods': '1', "
"'operator': 'LTE', "
"'dimensions': {'os': "
"'linux'}}, "
"'current_values': "
"[10.0498869723], "
"'sub_alarm_state': 'OK'}], "
"'created_timestamp': "
"'2015-06-17T16:43:21Z', "
"'state_updated_timestamp': "
"'2015-06-17T16:43:27Z'"
"}")
json_result = json.dumps(req_result)
with mock.patch.object(es_conn.ESConnection, 'put_messages',
return_value=200):
with mock.patch.object(req.stream, 'read',
return_value=json_result):
self.dispatcher_put.do_put_alarms(
req, res, id="d718fb26-d16d-4705-8f02-13a1468619c9")
self.assertEqual(res.status, getattr(falcon, 'HTTP_200'))
def test_do_delete_alarms(self):
with mock.patch.object(es_conn.ESConnection, 'del_messages',
return_value=200):
res = mock.Mock()
self.dispatcher_delete.do_delete_alarms(
mock.Mock(), res, id="d718fb26-d16d-4705-8f02-13a1468619c9")
self.assertEqual(res.status, getattr(falcon, 'HTTP_200'))
def test_do_get_alarms_exception(self):
res = mock.Mock()
req = mock.Mock()
req_result = mock.Mock()
req_result.json.return_value = ''
req_result.status_code = 400
req.query_string = 'metric_dimensions=hostname:h7,os:linux&state=OK'
with mock.patch.object(es_conn.ESConnection, 'get_messages',
return_value=req_result):
self.dispatcher_get.do_get_alarms(req, res)
# test that the response code is 400
self.assertEqual(res.status, getattr(falcon, 'HTTP_400'))
def test_do_get_alarms_by_id_exception(self):
res = mock.Mock()
req = mock.Mock()
req_result = mock.Mock()
req_result.json.return_value = ''
req_result.status_code = 400
with mock.patch.object(es_conn.ESConnection, 'get_message_by_id',
return_value=req_result):
self.dispatcher_get_by_id.do_get_alarms_by_id(
req, res, id="d718fb26-d16d-4705-8f02-13a1468619c9")
# test that the response code is 400
self.assertEqual(res.status, getattr(falcon, 'HTTP_400'))
def test_do_put_alarms_exception(self):
req = mock.Mock()
res = mock.Mock()
req_result = ("{ 'id': 'd718fb26-d16d-4705-"
"8f02-13a1468619c9', "
"'links': ["
"{"
"'href': 'http://127.0.0.1:"
"9090/v2.0/alarms/'"
"'d718fb26-d16d-4705-8f02-"
"13a1468619c9', "
"'rel': 'self}], '"
"'metrics': [{ "
"'name': 'cpu.usage', "
"'dimensions': { "
"'hostname': "
"'host7', 'os': 'linux' }}],"
"'state': 'OK', "
"'sub_alarms': [{"
"'sub_alarm_expression': {"
"'function': 'AVG', "
"'metric_name': "
"'cpu.usage', "
"'period': '600', "
"'threshold': '10', "
"'periods': '1', "
"'operator': 'LTE', "
"'dimensions': {'os': "
"'linux'}}, "
""
"[10.0498869723], "
"'sub_alarm_state': 'OK'}], "
"'created_timestamp': "
"'2015-06-17T16:43:21Z', "
"'state_updated_timestamp': "
"'2015-06-17T16:43:27Z'"
"}")
json_result = json.dumps(req_result)
with mock.patch.object(es_conn.ESConnection, 'put_messages',
return_value=400,
side_effect=Exception('Exception')):
with mock.patch.object(req.stream, 'read',
return_value=json_result):
self.dispatcher_put.do_put_alarms(
req, res, id="d718fb26-d16d-4705-8f02-13a1468619c9")
# test that the response code is 400
self.assertEqual(res.status, getattr(falcon, 'HTTP_400'))
def test_do_delete_alarms_exception(self):
with mock.patch.object(es_conn.ESConnection, 'del_messages',
return_value=400,
side_effect=Exception('Exception')):
res = mock.Mock()
self.dispatcher_delete.do_delete_alarms(
mock.Mock(), res, id="d718fb26-d16d-4705-8f02-13a1468619c9")
# test that the response code is 400
self.assertEqual(res.status, getattr(falcon, 'HTTP_400'))

View File

@ -0,0 +1,116 @@
{
"hits":{
"hits":[
{
"_score":1.0,
"_type":"alarms",
"_id":"cb71cc0f-ade1-433e-aa2d-22b12067cba0",
"_source":{
"id":"cb71cc0f-ade1-433e-aa2d-22b12067cba0",
"alarm_definition":{
"alarm_actions":[
"c60ec47e-5038-4bf1-9f95-4046c6e9a719"
],
"undetermined_actions":[
"c60ec47e-5038-4bf1-9t95-4046c6e9a759"
],
"name": "Avg CPU percent greater than 10"
},
"metrics":[
{
"name": "cpu.usage",
"dimensions": {
"hostname": "h7",
"os": "linux"
}
}
],
"state":"OK",
"sub_alarms":[
{
"sub_alarm_expression":{
"function":"AVG",
"metric_name":"cpu.usage",
"period":"600",
"threshold":"10",
"periods":"1",
"operator":"LTE",
"dimensions":{
"os":"linux"
}
},
"current_values":[
13.0498869723
],
"sub_alarm_state":"OK"
}
],
"created_timestamp":"2015-06-17T18:43:21Z",
"updated_timestamp": "2015-06-17T18:43:27Z",
"state_updated_timestamp":
"2015-06-17T18:43:27Z"
},
"_index":"data_20150601000000"
},
{
"_score":1.0,
"_type":"alarms",
"_id":"1cfd25cb-f60d-4f5b-845f-8048c0678a8f",
"_source":{
"id":"1cfd25cb-f60d-4f5b-845f-8048c0678a8f",
"alarm_definition":{
"alarm_actions":[
"c60ec47e-5038-4bf1-9f95-4046c6e9a719"
],
"undetermined_actions":[
"c60ec47e-5038-4bf1-9t95-4046c6e9a759"
],
"name":"Avg CPU percent greater than 10"
},
"metrics":[
{
"name": "cpu.usage",
"dimensions": {
"hostname": "h7",
"os": "linux"
}
}
],
"state":"OK",
"sub_alarms":[
{
"sub_alarm_expression":{
"function":"AVG",
"metric_name":"cpu.usage",
"period":"600",
"threshold":"10",
"periods":"1",
"operator":"LTE",
"dimensions":{
"os":"linux"
}
},
"current_values":[
13.0498869723
],
"sub_alarm_state":"OK"
}
],
"created_timestamp":"2015-06-17T19:43:21Z",
"updated_timestamp": "2015-06-17T19:43:27Z",
"state_updated_timestamp":
"2015-06-17T19:43:27Z"
},
"_index":"data_20150601000000"
}
],
"total":2,
"max_score":1.0
},
"_shards":{
"successful":5,
"failed":0,
"total":5
},
"took":2
}

View File

@ -0,0 +1,212 @@
{
"took":2,
"timed_out":false,
"_shards":{
"total":5,
"successful":5,
"failed":0
},
"hits":{
"total":8,
"max_score":0.0,
"hits":[]
},
"aggregations":{
"latest_state":{
"doc_count_error_upper_bound":0,
"sum_other_doc_count":0,
"buckets":[
{
"key":"CM-demo-alarm-def-01",
"doc_count":4,
"top_state_hits":{
"hits":{
"total":4,
"max_score":null,
"hits":[
{
"_index":"data_20150701000000",
"_type":"alarms",
"_id":"1bcbe772-f12b-44ef-a1b5-7685baeaaba2",
"_score":null,
"_source":{
"id":"1bcbe772-f12b-44ef-a1b5-7685baeaaba2",
"sub_alarms":[
{
"sub_alarm_expression":{
"periods":"1",
"dimensions":{
"key2":"OWWSMF",
"key1":"ULPAPW"
},
"period":"60",
"metric_name":"afhdpc",
"operator":"GT",
"threshold":"5",
"function":"MAX"
},
"current_values":[
null
],
"sub_alarm_state":"UNDETERMINED"
}
],
"metrics":[
{
"name":"AFHDPC",
"dimensions":{
"key5":"101",
"key_1":"100",
"key2":"OWWSMF",
"key1":"ULPAPW"
}
}
],
"state":"OK",
"updated_timestamp":"2015-07-30T20:27:01Z",
"created_timestamp":"2015-07-30T20:23:33Z",
"alarm_definition":{
"expression":"max(AFHDPC{key1=ULPAPW,key2=OWWSMF}) > 5",
"id":"eb9675f3-3b60-4e0a-9881-434d9db47e65",
"alarm_actions":[
"c6bff1bb-f384-46e4-80b2-4d564cb7dd96"
],
"expression_data":[
{
"periods":"1",
"dimensions":{
"key2":"OWWSMF",
"key1":"ULPAPW"
},
"period":"60",
"metric_name":"afhdpc",
"operator":"GT",
"threshold":"5",
"function":"MAX"
}
],
"description":"demo alarm definition",
"name":"CM-demo-alarm-def-01",
"undetermined_actions":[
"c6bff1bb-f384-46e4-80b2-4d564cb7dd96"
],
"severity":"LOW",
"match_by":[
"key2",
"key1",
"key_1",
"key5"
],
"ok_actions":[
"c6bff1bb-f384-46e4-80b2-4d564cb7dd96"
]
},
"state_updated_timestamp":"2015-07-30T20:27:01Z"
},
"sort":[
1438288021000
]
}
]
}
}
},
{
"key":"CM-demo-alarm-def-1",
"doc_count":4,
"top_state_hits":{
"hits":{
"total":4,
"max_score":null,
"hits":[
{
"_index":"data_20150701000000",
"_type":"alarms",
"_id":"256acdac-2f05-4e3e-85a3-802055bf2863",
"_score":null,
"_source":{
"id":"256acdac-2f05-4e3e-85a3-802055bf2863",
"sub_alarms":[
{
"sub_alarm_expression":{
"periods":"1",
"dimensions":{
"key2":"OWWSMF",
"key1":"ULPAPW"
},
"period":"60",
"metric_name":"afhdpc",
"operator":"GT",
"threshold":"5",
"function":"MAX"
},
"current_values":[
null
],
"sub_alarm_state":"UNDETERMINED"
}
],
"metrics":[
{
"name":"AFHDPC",
"dimensions":{
"key5":"101",
"key_1":"100",
"key2":"OWWSMF",
"key1":"ULPAPW"
}
}
],
"state":"UNDETERMINED",
"updated_timestamp":"2015-07-30T20:27:01Z",
"created_timestamp":"2015-07-30T20:23:33Z",
"alarm_definition":{
"expression":"max(AFHDPC{key1=ULPAPW,key2=OWWSMF}) > 5",
"id":"e8187b06-8f02-42d4-9303-883b5c420642",
"alarm_actions":[
"c3ed4102-4b5b-444b-b015-8e7031a96b6d"
],
"expression_data":[
{
"periods":"1",
"dimensions":{
"key2":"OWWSMF",
"key1":"ULPAPW"
},
"period":"60",
"metric_name":"afhdpc",
"operator":"GT",
"threshold":"5",
"function":"MAX"
}
],
"description":"more description",
"name":"CM-demo-alarm-def-1",
"undetermined_actions":[
"c3ed4102-4b5b-444b-b015-8e7031a96b6d"
],
"severity":"LOW",
"match_by":[
"key2",
"key1",
"key_1",
"key5"
],
"ok_actions":[
"c3ed4102-4b5b-444b-b015-8e7031a96b6d"
]
},
"state_updated_timestamp":"2015-07-30T20:27:01Z"
},
"sort":[
1438288021000
]
}
]
}
}
}
]
}
}
}

View File

@ -0,0 +1,343 @@
# Copyright 2013 IBM Corp
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import falcon
import mock
from oslo_config import fixture as fixture_config
from oslotest import base
import requests
from kiloeyes.common import kafka_conn
from kiloeyes.v2.elasticsearch import metrics
try:
import ujson as json
except ImportError:
import json
class TestParamUtil(base.BaseTestCase):
def setUp(self):
super(TestParamUtil, self).setUp()
self.req = mock.Mock()
self.req.get_param.side_effect = self._side_effect
def _side_effect(self, arg):
if arg == 'name':
return 'tongli'
elif arg == 'dimensions':
return 'key1:100, key2:200'
elif arg == 'start_time':
return '2015-01-31T13:35:00Z'
elif arg == 'end_time':
return '2015-11-30T14:05:00Z'
elif arg == 'period':
return '200'
elif arg == 'statistics':
return 'sum, avg'
def test_common(self):
result = [{'match': {'name': 'tongli'}},
{'range': {'timestamp': {'gte': 1422711300.0,
'lt': 1448892300.0}}},
{'match': {'dimensions.key1': 100.0}},
{'match': {'dimensions.key2': 200.0}}]
q = []
ret = metrics.ParamUtil.common(self.req, q)
self.assertEqual(q, result)
self.assertTrue(ret)
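# Note: 1422711300.0 and 1448892300.0 are the Unix-epoch equivalents
# of the start_time and end_time strings returned by _side_effect
# above (2015-01-31T13:35:00Z and 2015-11-30T14:05:00Z).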
def test_period(self):
ret = metrics.ParamUtil.period(self.req)
self.assertEqual(ret, '200s')
def test_stats(self):
ret = metrics.ParamUtil.stats(self.req)
self.assertEqual(ret, ['sum', 'avg'])
class TestMetricDispatcher(base.BaseTestCase):
def setUp(self):
super(TestMetricDispatcher, self).setUp()
self.CONF = self.useFixture(fixture_config.Config()).conf
self.CONF.set_override('uri', 'fake_url', group='kafka_opts')
self.CONF.set_override('topic', 'fake', group='metrics')
self.CONF.set_override('doc_type', 'fake', group='metrics')
self.CONF.set_override('index_prefix', 'also_fake', group='metrics')
self.CONF.set_override('uri', 'fake_es_uri', group='es_conn')
res = mock.Mock()
res.status_code = 200
res.json.return_value = {"data": {"mappings": {"fake": {
"properties": {
"dimensions": {"properties": {
"key1": {"type": "long"}, "key2": {"type": "long"},
"rkey0": {"type": "long"}, "rkey1": {"type": "long"},
"rkey2": {"type": "long"}, "rkey3": {"type": "long"}}},
"name": {"type": "string", "index": "not_analyzed"},
"timestamp": {"type": "string", "index": "not_analyzed"},
"value": {"type": "double"}}}}}}
with mock.patch.object(requests, 'get',
return_value=res):
self.dispatcher = metrics.MetricDispatcher({})
def test_initialization(self):
# test that the kafka connection uri is 'fake_url' as it was passed
# in from configuration
self.assertEqual(self.dispatcher._kafka_conn.uri, 'fake_url')
# test that the topic is 'fake' as it was passed into the dispatcher
self.assertEqual(self.dispatcher._kafka_conn.topic, 'fake')
# test that the doc type of the es connection is fake
self.assertEqual(self.dispatcher._es_conn.doc_type, 'fake')
self.assertEqual(self.dispatcher._es_conn.uri, 'fake_es_uri/')
# test that the query url is correctly formed
self.assertEqual(self.dispatcher._query_url, (
'fake_es_uri/also_fake*/fake/_search?search_type=count'))
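# That is, the dispatcher searches every index matching the configured
# prefix (also_fake*) for the configured doc type, with
# search_type=count so results come back through aggregations rather
# than raw hits, matching the aggregation-style response fixtures used
# in the tests below.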
def test_post_data(self):
with mock.patch.object(kafka_conn.KafkaConnection, 'send_messages',
return_value=204):
res = mock.Mock()
self.dispatcher.post_data(mock.Mock(), res)
# test that the response code is 204
self.assertEqual(getattr(falcon, 'HTTP_204'), res.status)
with mock.patch.object(kafka_conn.KafkaConnection, 'send_messages',
return_value=400):
res = mock.Mock()
self.dispatcher.post_data(mock.Mock(), res)
# test that the response code is 400
self.assertEqual(getattr(falcon, 'HTTP_400'), res.status)
def test_do_get_metrics(self):
res = mock.Mock()
req = mock.Mock()
def _side_effect(arg):
if arg == 'name':
return 'tongli'
elif arg == 'dimensions':
return 'key1:100, key2:200'
req.get_param.side_effect = _side_effect
req_result = mock.Mock()
response_str = """
{"aggregations":{"by_name":{"doc_count_error_upper_bound":0,
"sum_other_doc_count":0,"buckets":[{"key":"BABMGD","doc_count":300,
"by_dim":{"buckets":[{"key": "64e6ce08b3b8547b7c32e5cfa5b7d81f",
"doc_count":300,"metrics":{"hits":{"hits":[{
"_source":{"name":"BABMGD","dimensions":{"key2":"NVITDU",
"key1":"FUFMPY","key_43":"ROQBZM"}}}]}}}]}},
{"key":"BABSYZ","doc_count":300,"by_dim":{"buckets":[{
"key":"84863c7cfee6837a77eb476ea9f35f87","doc_count":300,
"metrics":{"hits":{"hits":[{"_source":{"name":"BABSYZ",
"dimensions":{"key2":"UIPAJD","key1":"DKPNKA",
"key_10": "RADAJP"}}}]}}}]}}]}}}
"""
req_result.json.return_value = json.loads(response_str)
req_result.status_code = 200
with mock.patch.object(requests, 'post', return_value=req_result):
self.dispatcher.do_get_metrics(req, res)
# test that the response code is 200
self.assertEqual(res.status, getattr(falcon, 'HTTP_200'))
obj = json.loads(res.body)
self.assertEqual(obj[0]['name'], 'BABMGD')
self.assertEqual(obj[0]['dimensions']['key2'], 'NVITDU')
self.assertEqual(len(obj), 2)
def test_do_post_metrics(self):
with mock.patch.object(kafka_conn.KafkaConnection, 'send_messages',
return_value=204):
res = mock.Mock()
self.dispatcher.do_post_metrics(mock.Mock(), res)
self.assertEqual(getattr(falcon, 'HTTP_204'), res.status)
def test_do_get_measurements(self):
res = mock.Mock()
req = mock.Mock()
def _side_effect(arg):
if arg == 'name':
return 'tongli'
elif arg == 'dimensions':
return 'key1:100, key2:200'
elif arg == 'start_time':
return '2014-01-01'
req.get_param.side_effect = _side_effect
req_result = mock.Mock()
response_str = """
{"took":226,"timed_out":false,"_shards":{"total":5,"successful":5,
"failed":0},"hits":{"total":6600,"max_score":0.0,"hits":[]},
"aggregations":{"by_name":{"doc_count_error_upper_bound":293,
"sum_other_doc_count":5791,"buckets":[{"key":"ABYTPK",
"doc_count":300,"by_dim":{"doc_count_error_upper_bound":0,
"sum_other_doc_count":0,
"buckets":[{"key":"e62ef04ee44abcccdd177087d159c1e3","doc_count":300,
"dimension":{"hits":{"total":300,"max_score":1.4142135,
"hits":[{"_index":"data_20150121","_type":"metrics",
"_id":"AUsShaLKTZaMxA7_0_Hj","_score":1.4142135,
"_source":{"name":"ABYTPK","dimensions":{"key_81":"MKKNSA",
"key2":"TJJQGE","key1":"GYYLEG"}}}]}},
"measures":{"hits":{"total":300,"max_score":null,
"hits":[{"_index":"data_20150121","_type":"metrics",
"_id":"AUsShaKuTZaMxA7_0_Hd","_score":null,
"_source":{"timestamp":1.421944922765286E9,"value":0.0},
"sort":[1.421944922765286E9]},{"_index":"data_20150121",
"_type":"metrics","_id":"AUsShaM8TZaMxA7_0_H7",
"_score":null,"_source":{"timestamp":1.421944922907783E9,
"value":0.0},"sort":[1.421944922907783E9]},{"_index":"data_20150121",
"_type":"metrics","_id":"AUsShaR2TZaMxA7_0_IZ","_score":null,
"_source":{"timestamp":1.421944923222439E9,"value":0.0},
"sort":[1.421944923222439E9]}]}}}]}},{"key":"ABUYPI","doc_count":256,
"by_dim":{"doc_count_error_upper_bound":0,"sum_other_doc_count":0,
"buckets":[{"key":"3dba425d350f6f46f8eda8a883231e58",
"doc_count":256,"dimension":{"hits":{"total":256,
"max_score":1.4142135,"hits":[{"_index":"data_20150121",
"_type":"metrics","_id":"AUsSaTfYTZaMxA7_zaxn","_score":1.4142135,
"_source":{"name":"ABUYPI","dimensions":{"key2":"BEBGIY",
"key1":"JZAZQS","key_67":"EAJWVV"}}}]}},
"measures":{"hits":{"total":256,"max_score":null,
"hits":[{"_index":"data_20150121","_type":"metrics",
"_id":"AUsSaTfQTZaMxA7_zaxl","_score":null,
"_source":{"timestamp":1.421943060399819E9,"value":0.0},
"sort":[1.421943060399819E9]},{"_index":"data_20150121",
"_type":"metrics","_id":"AUsSaThJTZaMxA7_zayD","_score":null,
"_source":{"timestamp":1.421943060519964E9,"value":0.0},
"sort":[1.421943060519964E9]},{"_index":"data_20150121",
"_type":"metrics","_id":"AUsSaTjKTZaMxA7_zayh","_score":null,
"_source":{"timestamp":1.421943060648909E9,"value":0.0},
"sort":[1.421943060648909E9]}]}}}]}},
{"key":"ABEPJR","doc_count":253,
"by_dim":{"doc_count_error_upper_bound":0,"sum_other_doc_count":0,
"buckets":[{"key":"6d6bbdda3ed7f14c76e746e2fbd52a37",
"doc_count":253,"dimension":{"hits":{"total":253,
"max_score":1.4142135,"hits":[{"_index":"data_20150121",
"_type":"metrics","_id":"AUsR6STQTZaMxA7_sjp6",
"_score":1.4142135,"_source":{"name":"ABEPJR",
"dimensions":{"key_7":"ZAUVQN","key2":"NSXBUW","key1":"UXTDSW"}}}]}},
"measures":{"hits":{"total":253,"max_score":null,
"hits":[{"_index":"data_20150121","_type":"metrics",
"_id":"AUsR6SItTZaMxA7_sjnV","_score":null,
"_source":{"timestamp":1.421934666252589E9,"value":0.0},
"sort":[1.421934666252589E9]},{"_index":"data_20150121",
"_type":"metrics","_id":"AUsR6SKqTZaMxA7_sjnz","_score":null,
"_source":{"timestamp":1.421934666377047E9,"value":0.0},
"sort":[1.421934666377047E9]},{"_index":"data_20150121",
"_type":"metrics","_id":"AUsR6SMiTZaMxA7_sjoR","_score":null,
"_source":{"timestamp":1.421934666497888E9,"value":0.0},
"sort":[1.421934666497888E9]}]}}}]}}]}}}
"""
req_result.json.return_value = json.loads(response_str)
req_result.status_code = 200
with mock.patch.object(requests, 'post', return_value=req_result):
self.dispatcher.do_get_measurements(req, res)
# test that the response code is 200
self.assertEqual(res.status, getattr(falcon, 'HTTP_200'))
obj = json.loads(res.body)
# there should be a total of 3 objects
self.assertEqual(len(obj), 3)
self.assertIsNotNone(obj[0]['name'])
self.assertIsNotNone(obj[0]['dimensions'])
self.assertIsNotNone(obj[0]['columns'])
self.assertIsNotNone(obj[0]['measurements'])
def test_do_get_statistics(self):
res = mock.Mock()
req = mock.Mock()
def _side_effect(arg):
if arg == 'name':
return 'tongli'
elif arg == 'dimensions':
return 'key1:100, key2:200'
elif arg == 'start_time':
return '2014-01-01'
elif arg == 'end_time':
return None
elif arg == 'period':
return None
elif arg == 'statistics':
return 'avg, sum, max'
req.get_param.side_effect = _side_effect
req_result = mock.Mock()
response_str = """
{"took":2006,"timed_out":false,"_shards":{"total":5,"successful":5,
"failed":0},"hits":{"total":600,"max_score":0.0,"hits":[]},
"aggregations":{"by_name":{"doc_count_error_upper_bound":0,
"sum_other_doc_count":0,"buckets":[{"key":"BABMGD","doc_count":300,
"by_dim":{"doc_count_error_upper_bound":0,"sum_other_doc_count":0,
"buckets":[{"key":"64e6ce08b3b8547b7c32e5cfa5b7d81f","doc_count":300,
"periods":{"buckets":[{"key":1421700000,"doc_count":130,
"statistics":{"count":130,"min":0.0,"max":595.0274095324651,
"avg":91.83085293930924,"sum":11938.0108821102}},
{"key":1422000000,"doc_count":170,"statistics":{"count":170,
"min":0.0,"max":1623.511307756313,"avg":324.69434786459897,
"sum":55198.039136981824}}]},"dimension":{"hits":{"total":300,
"max_score":1.4142135,"hits":[{"_index":"data_20150121",
"_type":"metrics","_id":"AUsSNF5mTZaMxA7_wmFx","_score":1.4142135,
"_source":{"name":"BABMGD","dimensions":{"key2":"NVITDU",
"key1":"FUFMPY","key_43":"ROQBZM"}}}]}}}]}},{"key":"BABSYZ",
"doc_count":300,"by_dim":{"doc_count_error_upper_bound":0,
"sum_other_doc_count":0,
"buckets":[{"key":"84863c7cfee6837a77eb476ea9f35f87","doc_count":300,
"periods":{"buckets":[{"key":1421700000,"doc_count":130,
"statistics":{"count":130,"min":0.0,"max":588.7273873368565,
"avg":100.45023098906705,"sum":13058.530028578716}},
{"key":1422000000,"doc_count":170,"statistics":{"count":170,
"min":0.0,"max":1515.5538517109185,"avg":332.5777043693029,
"sum":56538.209742781495}}]},"dimension":{"hits":{"total":300,
"max_score":1.4142135,"hits":[{"_index":"data_20150121",
"_type":"metrics","_id":"AUsR7oGETZaMxA7_s0Y0","_score":1.4142135,
"_source":{"name":"BABSYZ","dimensions":{"key2":"UIPAJD",
"key1":"DKPNKA","key_10":"RADAJP"}}}]}}}]}}]}}}
"""
req_result.json.return_value = json.loads(response_str)
req_result.status_code = 200
with mock.patch.object(requests, 'post', return_value=req_result):
self.dispatcher.do_get_statistics(req, res)
# test that the response code is 200
self.assertEqual(res.status, getattr(falcon, 'HTTP_200'))
obj = json.loads(res.body)
# there should be a total of 2 objects
self.assertEqual(len(obj), 2)
self.assertIsNotNone(obj[0]['name'])
self.assertIsNotNone(obj[0]['dimensions'])
self.assertIsNotNone(obj[0]['columns'])
self.assertEqual(obj[0]['columns'],
["timestamp", "avg", "sum", "max"])
self.assertIsNotNone(obj[0]['statistics'])

View File

@ -0,0 +1,240 @@
# Copyright 2015 Carnegie Mellon University
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import ast
import falcon
import mock
from oslo_config import fixture as fixture_config
from oslotest import base
import requests
from kiloeyes.common import es_conn
from kiloeyes.v2.elasticsearch import notificationmethods
import json
response_str = """
{
"hits":{
"hits":[
{
"_score":1.0,
"_type":"notification_methods",
"_id":"c60ec47e-5038-4bf1-9f95-4046c6e9a719",
"_source":{
"type":"EMAIL",
"id":"c60ec47e-5038-4bf1-9f95-4046c6e9a719",
"name":"NotificationMethod",
"address":"hanc@andrew.cmu.edu"
},
"_index":"admin"
}
],
"total":1,
"max_score":1.0
},
"_shards":{
"successful":5,
"failed":0,
"total":5
},
"took":2
}
"""
class TestParamUtil(base.BaseTestCase):
def setUp(self):
super(TestParamUtil, self).setUp()
self.req = mock.Mock()
class Es_conn(object):
def get_message_by_id(self, id):
return response_str
class TestNotificationMethodDispatcher(base.BaseTestCase):
def setUp(self):
self.CONF = self.useFixture(fixture_config.Config()).conf
self.CONF.set_override('doc_type', 'fake',
group='notificationmethods')
self.CONF.set_override('uri', 'fake_es_uri', group='es_conn')
super(TestNotificationMethodDispatcher, self).setUp()
res = mock.Mock()
res.status_code = 200
res.json.return_value = {
"id": "35cc6f1c-3a29-49fb-a6fc-d9d97d190508",
"links": [
{
"rel": "self",
"href": "http://192.168.10.4:8080/v2.0/notification-"
"methods/35cc6f1c-3a29-49fb-a6fc-d9d97d190508"
}
],
"name": "Name of notification method",
"type": "EMAIL",
"address": "john.doe@hp.com"
}
with mock.patch.object(requests, 'get',
return_value=res):
self.dispatcher_get = (
notificationmethods.NotificationMethodDispatcher({}))
res.json.return_value = {}
with mock.patch.object(requests, 'post',
return_value=res):
self.dispatcher_post = (
notificationmethods.NotificationMethodDispatcher({}))
with mock.patch.object(requests, 'put',
return_value=res):
self.dispatcher_put = (
notificationmethods.NotificationMethodDispatcher({}))
with mock.patch.object(requests, 'delete',
return_value=res):
self.dispatcher_delete = (
notificationmethods.NotificationMethodDispatcher({}))
def test_initialization(self):
# test that the doc type of the es connection is fake
self.assertEqual(self.dispatcher_get._es_conn.doc_type, 'fake')
self.assertEqual(self.dispatcher_get._es_conn.uri, 'fake_es_uri/')
def test_handle_notification_msg(self):
with mock.patch.object(es_conn.ESConnection,
'post_messages',
return_value=200):
msg = ast.literal_eval(
'{"type":"EMAIL","request":"POST", '
'"id":'
'"c60ec47e-5038-4bf1-9f95-4046c6e9a719",'
'"name":"NotificationMethod",'
'"address":"hanc@andrew.cmu.edu"}')
np = notificationmethods.NotificationMethodDispatcher({})
np.handle_notification_msg(msg)
with mock.patch.object(es_conn.ESConnection, 'put_messages',
return_value=200):
msg = ast.literal_eval(
'{"type":"EMAIL","request":"PUT", '
'"id":'
'"c60ec47e-5038-4bf1-9f95-4046c6e9a719",'
'"name":"NotificationMethod",'
'"address":"hanc@andrew.cmu.edu"}')
np = notificationmethods.NotificationMethodDispatcher({})
np.handle_notification_msg(msg)
with mock.patch.object(es_conn.ESConnection, 'del_messages',
return_value=200):
msg = ast.literal_eval(
'{"type":"EMAIL","request":"DEL", '
'"id":'
'"c60ec47e-5038-4bf1-9f95-4046c6e9a719",'
'"name":"NotificationMethod",'
'"address":"hanc@andrew.cmu.edu"}')
np = notificationmethods.NotificationMethodDispatcher({})
np.handle_notification_msg(msg)
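# The 'request' field drives the dispatch: POST maps to post_messages,
# PUT to put_messages and DEL to del_messages, which is why each
# branch above patches a different ESConnection method.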
def test_do_get_notification_methods(self):
res = mock.Mock()
req = mock.Mock()
req.uri = 'some url'
req_result = mock.Mock()
req_result.json.return_value = json.loads(response_str)
req_result.status_code = 200
with mock.patch.object(requests, 'post', return_value=req_result):
self.dispatcher_get.do_get_notification_methods(req, res)
# test that the response code is 200
self.assertEqual(res.status, getattr(falcon, 'HTTP_200'))
obj = json.loads(res.body)
self.assertTrue(obj['links'])
self.assertTrue(obj['elements'])
self.assertEqual(len(obj['elements']), 1)
def test_do_get_notification_method_by_id(self):
res = mock.Mock()
req = mock.Mock()
req.uri = 'some url'
req_result = mock.Mock()
req_result.json.return_value = json.loads(response_str)
req_result.status_code = 200
with mock.patch.object(requests, 'get', return_value=req_result):
(self.dispatcher_get.
do_get_notification_method_by_id(
req, res,
id="c60ec47e-5038-4bf1-9f95-4046c6e9a719"))
# test that the response code is 200
self.assertEqual(res.status, getattr(falcon, 'HTTP_200'))
obj = json.loads(res.body)
self.assertEqual(obj['id'], 'c60ec47e-5038-4bf1-9f95-4046c6e9a719')
self.assertEqual(obj['type'], 'EMAIL')
self.assertEqual(obj['name'], 'NotificationMethod')
self.assertEqual(obj['address'], 'hanc@andrew.cmu.edu')
def test_do_post_notifications(self):
with mock.patch.object(
notificationmethods.NotificationMethodDispatcher,
'handle_notification_msg', return_value=200):
with mock.patch.object(ast, 'literal_eval',
return_value=ast.literal_eval(
"{'type': 'PAGEDUTY', "
"'name': 'NotificationMethod2', "
"'address': '1234567'}")):
res = mock.Mock()
(self.dispatcher_post.
do_post_notification_methods(mock.Mock(), res))
self.assertEqual(getattr(falcon, 'HTTP_200'), res.status)
def test_do_put_notifications(self):
with mock.patch.object(
notificationmethods.NotificationMethodDispatcher,
'handle_notification_msg', return_value=200):
with mock.patch.object(ast, 'literal_eval',
return_value=ast.literal_eval(
"{'type': 'PAGEDUTY', "
"'name': 'NotificationMethod2', "
"'address': '1234567'}")):
res = mock.Mock()
(self.dispatcher_post.
do_put_notification_methods(
mock.Mock(), res,
id="c60ec47e-5038-4bf1-9f95-4046c6e9a719"))
self.assertEqual(getattr(falcon, 'HTTP_200'), res.status)
def test_do_delete_notifications(self):
with mock.patch.object(
notificationmethods.NotificationMethodDispatcher,
'handle_notification_msg', return_value=200):
res = mock.Mock()
(self.dispatcher_post.
do_delete_notification_methods(
mock.Mock(), res,
id="c60ec47e-5038-4bf1-9f95-4046c6e9a719"))
self.assertEqual(getattr(falcon, 'HTTP_200'), res.status)

View File

@ -0,0 +1,51 @@
# Copyright 2013 IBM Corp
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import falcon
import mock
from oslotest import base
from kiloeyes.v2.elasticsearch import versions
try:
import ujson as json
except ImportError:
import json
class TestVersionDispatcher(base.BaseTestCase):
def setUp(self):
super(TestVersionDispatcher, self).setUp()
self.dispatcher = versions.VersionDispatcher({})
def test_get_versions(self):
req = mock.Mock()
req.uri = 'http://fake'
res = mock.Mock()
self.dispatcher.get_versions(req, res)
body = json.loads(res.body)
self.assertEqual(body[0]['id'], 'v2.0')
self.assertEqual(getattr(falcon, 'HTTP_200'), res.status)
def test_get_version_by_id(self):
req = mock.Mock()
req.uri = 'http://fake'
res = mock.Mock()
self.dispatcher.get_version_by_id(req, res, 'v2.0')
body = json.loads(res.body)
self.assertEqual(body['id'], 'v2.0')
self.assertEqual(getattr(falcon, 'HTTP_200'), res.status)

0
kiloeyes/v2/__init__.py Normal file
View File

View File

@ -0,0 +1,351 @@
# Copyright 2015 Carnegie Mellon University
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import ast
import falcon
from oslo_config import cfg
from stevedore import driver
import uuid
from kiloeyes.common import alarm_expr_parser
from kiloeyes.common import alarm_expr_validator
from kiloeyes.common import es_conn
from kiloeyes.common import namespace
from kiloeyes.common import resource_api
from kiloeyes.openstack.common import log
try:
import ujson as json
except ImportError:
import json

alarmdefinitions_opts = [
    cfg.StrOpt('doc_type', default='alarmdefinitions',
               help='The doc_type under which alarm definitions are saved.'),
    cfg.StrOpt('index_strategy', default='fixed',
               help='The index strategy used to create the index name.'),
    cfg.StrOpt('index_prefix', default='data_',
               help='The index prefix where alarm definitions are saved.'),
    cfg.IntOpt('size', default=1000,
               help=('The query result limit. Any results beyond the '
                     'limit will be discarded.')),
]
cfg.CONF.register_opts(alarmdefinitions_opts, group='alarmdefinitions')
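
# These options can be overridden in the service configuration; a sketch
# (the section name matches the group registered above, values are
# illustrative only):
#
#     [alarmdefinitions]
#     doc_type = alarmdefinitions
#     index_strategy = fixed
#     index_prefix = data_
#     size = 500
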
STATES = ['LOW', 'MEDIUM', 'HIGH', 'CRITICAL']
LOG = log.getLogger(__name__)
class AlarmDefinitionUtil(object):

    @staticmethod
    def severityparsing(msg):
        try:
            severity = msg["severity"]
            if severity in STATES:
                return msg
            else:
                msg["severity"] = "LOW"
                return msg
        except Exception:
            # Messages without a 'severity' key pass through unchanged.
            return msg
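
    # For example (editor's sketch): severityparsing({'severity': 'URGENT'})
    # returns {'severity': 'LOW'} since 'URGENT' is not in STATES, while a
    # dict with no 'severity' key at all is returned unchanged.
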
class AlarmDefinitionDispatcher(object):
def __init__(self, global_conf):
LOG.debug('Initializing AlarmDefinition V2API!')
super(AlarmDefinitionDispatcher, self).__init__()
self.doc_type = cfg.CONF.alarmdefinitions.doc_type
self.size = cfg.CONF.alarmdefinitions.size
# load index strategy
if cfg.CONF.alarmdefinitions.index_strategy:
self.index_strategy = driver.DriverManager(
namespace.STRATEGY_NS,
cfg.CONF.alarmdefinitions.index_strategy,
invoke_on_load=True,
invoke_kwds={}).driver
LOG.debug(self.index_strategy)
else:
self.index_strategy = None
self.index_prefix = cfg.CONF.alarmdefinitions.index_prefix
self._es_conn = es_conn.ESConnection(
self.doc_type, self.index_strategy, self.index_prefix)
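        # Note: the concrete index name presumably comes from combining the
        # configured prefix with the loaded index strategy driver; both are
        # tunable via the [alarmdefinitions] options above.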
def _get_alarm_definitions_response(self, res):
if res and res.status_code == 200:
obj = res.json()
if obj:
return obj.get('hits')
return None
def _get_alarm_definitions_helper(self, query_string):
query = {}
queries = []
field_string = 'alarmdefinitions.expression_data.dimensions.'
if query_string:
params = query_string.split('&')
for current_param in params:
current_param_split = current_param.split('=')
if current_param_split[0] == 'dimensions':
current_dimension_split = (
current_param_split[1].split(','))
for current_dimension in current_dimension_split:
current_dimen_data = current_dimension.split(':')
queries.append({
'query_string': {
'default_field': (field_string +
current_dimen_data[0]),
'query': current_dimen_data[1]
}
})
elif current_param_split[0] in ['limit', 'offset']:
# ignore the limit and offset for now.
pass
else:
queries.append({
'query_string': {
'default_field': current_param_split[0],
'query': current_param_split[1]
}
})
LOG.debug(queries)
query = {
'query': {
'bool': {
'must': queries
}
}
}
LOG.debug('Parsed Query: %s' % query)
return query
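
    # Example (editor's sketch): the query_string
    #     'name=cpu&dimensions=hostname:h1,service:monitoring'
    # is parsed into a bool/must query with three query_string clauses:
    #     {'query': {'bool': {'must': [
    #         {'query_string': {'default_field': 'name', 'query': 'cpu'}},
    #         {'query_string': {'default_field': 'alarmdefinitions.'
    #                           'expression_data.dimensions.hostname',
    #                           'query': 'h1'}},
    #         {'query_string': {'default_field': 'alarmdefinitions.'
    #                           'expression_data.dimensions.service',
    #                           'query': 'monitoring'}}]}}}
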
@resource_api.Restify('/v2.0/alarm-definitions/', method='post')
def do_post_alarm_definitions(self, req, res):
LOG.debug('Creating the alarm definitions')
        msg = req.stream.read()
        LOG.debug("Message: %s" % msg)
        # The request body is expected to be a Python-literal dict, hence
        # ast.literal_eval rather than json.loads.
        post_msg = ast.literal_eval(msg)
# random uuid generation for alarm definition
id = str(uuid.uuid4())
post_msg["id"] = id
post_msg = AlarmDefinitionUtil.severityparsing(post_msg)
post_msg_json = json.dumps(post_msg)
LOG.debug("Validating Alarm Definition Data: %s" % post_msg_json)
if alarm_expr_validator.is_valid_alarm_definition(post_msg_json):
LOG.debug("Post Alarm Definition method: %s" % post_msg)
try:
expression_parsed = (
alarm_expr_parser.AlarmExprParser(post_msg["expression"]))
expression_data = expression_parsed.sub_alarm_expressions
expression_data_list = []
for temp in expression_data:
expression_data_list.append(expression_data[temp])
post_msg["expression_data"] = expression_data_list
LOG.debug(post_msg)
es_res = self._es_conn.post_messages(json.dumps(post_msg), id)
LOG.debug('Query to ElasticSearch returned Status: %s' %
es_res)
res.status = getattr(falcon, 'HTTP_%s' % es_res)
except Exception:
LOG.exception('Error occurred while handling '
'Alarm Definition Post Request.')
res.status = getattr(falcon, 'HTTP_400')
else:
LOG.error('Alarm definition is not valid.')
res.status = getattr(falcon, 'HTTP_400')
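
    # A POST body for this handler is a Python-literal dict; a sketch, with
    # illustrative field values and expression syntax:
    #
    #     {'name': 'cpu_high',
    #      'description': 'CPU usage too high',
    #      'expression': 'max(cpu.usage{hostname=h1}) > 90',
    #      'match_by': ['hostname'],
    #      'severity': 'HIGH',
    #      'alarm_actions': [],
    #      'ok_actions': [],
    #      'undetermined_actions': []}
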
@resource_api.Restify('/v2.0/alarm-definitions/{id}', method='get')
def do_get_alarm_definitions_by_id(self, req, res, id):
        LOG.debug('Received alarm definitions GET request for id: %s' % id)
es_res = self._es_conn.get_message_by_id(id)
res.status = getattr(falcon, 'HTTP_%s' % es_res.status_code)
LOG.debug('Query to ElasticSearch returned Status: %s' %
es_res.status_code)
es_res = self._get_alarm_definitions_response(es_res)
LOG.debug('Query to ElasticSearch returned: %s' % es_res)
res.body = ''
try:
if es_res["hits"]:
res_data = es_res["hits"][0]
if res_data:
res.body = json.dumps({
"id": id,
"links": [{"rel": "self",
"href": req.uri}],
"name": res_data["_source"]["name"],
"description": res_data["_source"]["description"],
"expression": res_data["_source"]["expression"],
"expression_data":
res_data["_source"]["expression_data"],
"severity": res_data["_source"]["severity"],
"match_by": res_data["_source"]["match_by"],
"alarm_actions": res_data["_source"]["alarm_actions"],
"ok_actions": res_data["_source"]["ok_actions"],
"undetermined_actions": res_data["_source"]
["undetermined_actions"]})
res.content_type = 'application/json;charset=utf-8'
except Exception:
LOG.exception('Error occurred while handling Alarm Definition '
'Get Request.')
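
    # Usage sketch (host and port are deployment-specific):
    #     curl http://<host>:<port>/v2.0/alarm-definitions/<id>
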
@resource_api.Restify('/v2.0/alarm-definitions/{id}', method='put')
def do_put_alarm_definitions(self, req, res, id):
LOG.debug("Put the alarm definitions with id: %s" % id)
es_res = self._es_conn.get_message_by_id(id)
LOG.debug('Query to ElasticSearch returned Status: %s' %
es_res.status_code)
es_res = self._get_alarm_definitions_response(es_res)
LOG.debug('Query to ElasticSearch returned: %s' % es_res)
original_data = {}
try:
if es_res["hits"]:
res_data = es_res["hits"][0]
if res_data:
original_data = json.dumps({
"id": id,
"name": res_data["_source"]["name"],
"description": res_data["_source"]["description"],
"expression": res_data["_source"]["expression"],
"expression_data":
res_data["_source"]["expression_data"],
"severity": res_data["_source"]["severity"],
"match_by": res_data["_source"]["match_by"],
"alarm_actions": res_data["_source"]["alarm_actions"],
"ok_actions": res_data["_source"]["ok_actions"],
"undetermined_actions": res_data["_source"]
["undetermined_actions"]})
msg = req.stream.read()
put_msg = ast.literal_eval(msg)
put_msg = AlarmDefinitionUtil.severityparsing(put_msg)
expression_parsed = (
alarm_expr_parser.AlarmExprParser(put_msg["expression"])
)
expression_data = expression_parsed.sub_alarm_expressions
expression_data_list = []
for temp in expression_data:
expression_data_list.append(expression_data[temp])
put_msg["expression_data"] = expression_data_list
put_msg_json = json.dumps(put_msg)
LOG.debug("Alarm Definition Put Data: %s" % put_msg_json)
if alarm_expr_validator.is_valid_update_alarm_definition(
original_data, put_msg_json):
es_res = self._es_conn.put_messages(put_msg_json, id)
LOG.debug('Query to ElasticSearch returned Status: %s' %
es_res)
res.status = getattr(falcon, 'HTTP_%s' % es_res)
else:
res.status = getattr(falcon, 'HTTP_400')
LOG.debug("Validating Alarm Definition Failed !!")
except Exception:
res.status = getattr(falcon, 'HTTP_400')
LOG.exception('Error occurred while handling Alarm '
'Definition Put Request.')
@resource_api.Restify('/v2.0/alarm-definitions/{id}', method='delete')
def do_delete_alarm_definitions(self, req, res, id):
LOG.debug("Delete the alarm definitions with id: %s" % id)
try:
es_res = self._es_conn.del_messages(id)
LOG.debug('Query to ElasticSearch returned Status: %s' %
es_res)
res.status = getattr(falcon, 'HTTP_%s' % es_res)
except Exception:
res.status = getattr(falcon, 'HTTP_400')
LOG.exception('Error occurred while handling Alarm '
'Definition Delete Request.')
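
    # Usage sketch (host and port are deployment-specific):
    #     curl -X DELETE http://<host>:<port>/v2.0/alarm-definitions/<id>
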
@resource_api.Restify('/v2.0/alarm-definitions/', method='get')
def do_get_alarm_definitions_filtered(self, req, res):
        LOG.debug('Received request to list alarm definitions.')
query_string = req.query_string
LOG.debug('Request Query String: %s' % query_string)
params = self._get_alarm_definitions_helper(query_string)
LOG.debug('Query Data: %s' % params)
es_res = self._es_conn.get_messages(params)
res.status = getattr(falcon, 'HTTP_%s' % es_res.status_code)
LOG.debug('Query to ElasticSearch returned Status: %s' %
es_res.status_code)
es_res = self._get_alarm_definitions_response(es_res)
LOG.debug('Query to ElasticSearch returned: %s' % es_res)
res.body = ''
result_elements = []
try:
if es_res["hits"]:
res_data = es_res["hits"]
for current_alarm in res_data:
if current_alarm:
result_elements.append({
"id": current_alarm["_source"]["id"],
"links": [{"rel": "self",
"href": req.uri}],
"name": current_alarm["_source"]["name"],
"description":
current_alarm["_source"]["description"],
"expression":
current_alarm["_source"]["expression"],
"expression_data":
current_alarm["_source"]["expression_data"],
"severity":
current_alarm["_source"]["severity"],
"match_by":
current_alarm["_source"]["match_by"],
"alarm_actions":
current_alarm["_source"]["alarm_actions"],
"ok_actions":
current_alarm["_source"]["ok_actions"],
"undetermined_actions":
current_alarm["_source"]
["undetermined_actions"]})
res.body = json.dumps({
"links": [{"rel": "self", "href": req.uri}],
"elements": result_elements
})
else:
res.body = ""
res.content_type = 'application/json;charset=utf-8'
except Exception:
res.status = getattr(falcon, 'HTTP_400')
LOG.exception('Error occurred while handling Alarm '
'Definitions Get Request.')
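
A usage sketch for the filtered listing above (host, port, and the filter
values are illustrative):

    curl 'http://<host>:<port>/v2.0/alarm-definitions?name=cpu&dimensions=hostname:h1'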

Some files were not shown because too many files have changed in this diff.