Remove bundled intree monasca-api tempest plugin
* https://review.openstack.org/#/c/526844/ moved the in-tree bundled tempest plugin to a new repo, monasca-tempest-plugin. Let's use it and remove the bundled tempest plugin from this repo.
* Moved the post_host script to the main root directory
* Removed the dummy gate_hook.sh script
* Fixed the jobs for the same

Story: 2001400
Task: 6085
Depends-On: I2ce2bd8238d44a451faeba9ddbfe27d900e9adef
Change-Id: I79cea368271bbef33914dba7b95f5546a1b8d3c1
This commit is contained in:
parent dbb0fcb67c
commit 80b4f18e1e

Changed files:
- .zuul.yaml
- contrib/monasca_tempest_tests/ (removed): README.md, __init__.py, clients.py, config.py, plugin.py, services/, tests/__init__.py, tests/api/ (__init__.py, base.py, constants.py, helpers.py, test_alarm_definitions.py, test_alarm_state_history_multiple_transitions.py, test_alarm_transitions.py, test_alarms.py, test_alarms_count.py, test_alarms_state_history_one_transition.py, test_dimensions.py, test_measurements.py, test_metrics.py, test_metrics_names.py, test_notification_method_type.py, test_notification_methods.py, test_read_only_role.py, test_statistics.py, test_versions.py)
- playbooks/legacy/monasca-tempest-base
- setup.cfg
- tox.ini
In .zuul.yaml, openstack/monasca-tempest-plugin is added to the required projects:

```diff
@@ -17,6 +17,7 @@
       - openstack/monasca-ui
       - openstack/python-monascaclient
       - openstack/tempest
+      - openstack/monasca-tempest-plugin

 - job:
     name: monasca-tempest-python-mysql
```
```diff
@@ -91,12 +91,4 @@ sudo chown -R $USER:stack $TEMPEST_DIR
 load_devstack_utilities
 setup_monasca_api
 set_tempest_conf
-
-(cd $TEMPEST_DIR; testr init)
-(cd $TEMPEST_DIR; testr list-tests monasca_tempest_tests > monasca_tempest_tests)
-(cd $TEMPEST_DIR; cat monasca_tempest_tests)
-(cd $TEMPEST_DIR; cat monasca_tempest_tests | grep gate > monasca_tempest_tests_gate)
-(cd $TEMPEST_DIR; testr run --subunit --load-list=monasca_tempest_tests_gate | subunit-trace --fails)
```
@@ -1,186 +0,0 @@ (contrib/monasca_tempest_tests/README.md, removed)

# Introduction

The Monasca Tempest Tests use the [OpenStack Tempest Plugin Interface](https://docs.openstack.org/tempest/latest/plugin.html). This README describes how to configure and run them using a variety of methods.

Currently a devstack environment is needed to run the tests. Instructions on setting up a devstack environment can be found here: https://github.com/openstack/monasca-api/devstack/README.md.
# Configuring to run the Monasca Tempest Tests

1. Clone the OpenStack Tempest repo, and cd to it.

   ```
   git clone https://git.openstack.org/openstack/tempest.git
   cd tempest
   ```

2. Create a virtualenv for running the Tempest tests and activate it. For example, in the Tempest root dir:

   ```
   virtualenv .venv
   source .venv/bin/activate
   ```

3. Install the Tempest requirements in the virtualenv.

   ```
   pip install -r requirements.txt -r test-requirements.txt
   ```

4. Create ```etc/tempest.conf``` in the Tempest root dir by running the following command:

   ```
   oslo-config-generator --config-file tempest/cmd/config-generator.tempest.conf --output-file etc/tempest.conf
   ```

   Add the following sections to ```tempest.conf``` for testing using the devstack environment:

   ```
   [identity]
   auth_version = v3
   uri = http://127.0.0.1/identity_admin/v2.0/
   uri_v3 = http://127.0.0.1/identity_admin/v3/
   user_lockout_failure_attempts = 2
   user_lockout_duration = 5
   user_unique_last_password_count = 2
   admin_domain_scope = True

   [auth]
   use_dynamic_credentials = True
   admin_project_name = admin
   admin_username = admin
   admin_password = secretadmin
   admin_domain_name = Default
   ```

   Edit the variable values in the identity section to match your particular environment.

5. Create ```etc/logging.conf``` in the Tempest root dir by making a copy of ```logging.conf.sample```.

6. Clone the monasca-api repo in a directory somewhere outside of the Tempest root dir.

7. Install the monasca-api in your venv, which will also register the Monasca Tempest Plugin as ```monasca_tests```.

   cd into the monasca-api root directory. Making sure that the Tempest virtualenv is still active, run the following command:

   ```
   python setup.py install
   ```

See the [OpenStack Tempest Plugin Interface](https://docs.openstack.org/tempest/latest/plugin.html) for more details on Tempest plugins and the plugin registration process.
# Running the Monasca Tempest Tests

The Monasca Tempest Tests can be run using a variety of methods, including:

1. [Testr](https://wiki.openstack.org/wiki/Testr)
2. [Os-testr](https://docs.openstack.org/os-testr/latest/)
3. [PyCharm](https://www.jetbrains.com/pycharm/)
4. Tempest scripts in devstack

## Run the tests from the CLI using testr

[Testr](https://wiki.openstack.org/wiki/Testr) is a test runner that can be used to run the Tempest tests.

1. Initializing testr is necessary to set up the .testrepository directory before using it for the first time. In the Tempest root dir:

   ```
   testr init
   ```

2. Create a list of the Monasca Tempest Tests in a file:

   ```
   testr list-tests monasca_tempest_tests > monasca_tempest_tests
   ```

3. Run the tests using testr:

   ```
   testr run --load-list=monasca_tempest_tests
   ```

You can also use testr to create a list of specific tests for your needs.
## Run the tests using the Tempest run command

``tempest run`` is a domain-specific command to be used as the primary entry point for running Tempest tests.

1. In the Tempest root dir:

   ```
   tempest run -r monasca_tempest_tests
   ```
## Run the tests from the CLI using os-testr (no file necessary)

[Os-testr](https://docs.openstack.org/os-testr/latest/) is a test wrapper that can be used to run the Monasca Tempest tests.

1. In the Tempest root dir:

   ```
   ostestr --serial --regex monasca_tempest_tests
   ```

The ```--serial``` option is necessary here: the Monasca Tempest tests cannot be run in parallel (the default in ostestr) because some tests depend on the same data and will fail randomly.
## Running/Debugging the Monasca Tempest Tests in PyCharm

You need to install `nose` to run the tests from PyCharm:

```
pip install nose
```

Assuming that you have already created a PyCharm project for ```monasca-api```, do the following:

1. In PyCharm, Edit Configurations and add a new Python tests configuration by selecting Python tests->Nosetests.
2. Name the test. For example, TestVersions.
3. Set the path to the script with the tests to run. For example, ~/repos/monasca-api/monasca_tempest_tests/api/test_versions.py
4. Set the name of the class to test. For example, TestVersions.
5. Set the working directory to your local root Tempest repo. For example, ~/repos/tempest.
6. Select the Python interpreter for your project to be the same as the virtualenv created above. For example, ~/repos/tempest/.venv
7. Run the tests. You should also be able to debug them.
8. Repeat these steps for the other tests.
## Run the tests from the CLI using tempest scripts in devstack

1. Create a virtualenv in devstack for running the Tempest tests and activate it:

   ```
   cd /opt/stack/tempest
   virtualenv .venv
   source .venv/bin/activate
   ```

2. Install the Tempest requirements in the virtualenv:

   ```
   pip install -r requirements.txt -r test-requirements.txt
   ```

3. If you want to test changes to the monasca-api code on your local machine, change directory to monasca-api and install the latest monasca-api code:

   ```
   cd /vagrant_home/<monasca-api directory>
   python setup.py install
   ```

   Or, if you want to use the current monasca-api in devstack:

   ```
   cd /opt/stack/monasca-api
   python setup.py install
   ```

4. Run the Tempest tests:

   ```
   cd /opt/stack/tempest
   testr init
   ostestr --serial --regex monasca_tempest_tests
   ```
# References

This section provides a few additional references that might be useful:

* [Tempest - The OpenStack Integration Test Suite](https://docs.openstack.org/tempest/latest/overview.html#quickstart)
* [Tempest Configuration Guide](https://github.com/openstack/tempest/blob/master/doc/source/configuration.rst#id1)
* [OpenStack Tempest Plugin Interface](https://docs.openstack.org/tempest/latest/plugin.html)

In addition to the above references, another source of information is the following OpenStack projects:

* [Manila Tempest Tests](https://github.com/openstack/manila/tree/master/manila_tempest_tests)
* [Congress Tempest Tests](https://github.com/openstack/congress/tree/master/congress_tempest_tests)

In particular, the Manila Tempest Tests were used as a reference implementation to develop the Monasca Tempest Tests. There is also a wiki, [HOWTO use tempest with manila](https://wiki.openstack.org/wiki/Manila/docs/HOWTO_use_tempest_with_manila), that might be useful for Monasca too.
# Issues

* Update documentation for testing using devstack when available.
* Consider changing from monasca_tempest_tests to monasca_api_tempest_tests.
@@ -1,23 +0,0 @@ (contrib/monasca_tempest_tests/clients.py, removed)

```python
# (C) Copyright 2015,2017 Hewlett Packard Enterprise Development LP
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from tempest import clients

from monasca_tempest_tests.services import monasca_client


class Manager(clients.Manager):
    def __init__(self, credentials=None):
        super(Manager, self).__init__(credentials)
        self.monasca_client = monasca_client.MonascaClient(self.auth_provider)
```
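The `Manager` above composes a Monasca service client on top of Tempest's client manager, with every client sharing one auth provider. A dependency-free sketch of that composition pattern (all class names here are illustrative stand-ins, not Tempest APIs):

```python
class StubAuthProvider:
    """Stand-in for a Tempest auth provider."""


class StubServiceClient:
    """Stand-in for a REST service client; a real client uses the
    auth provider to sign and route its requests."""

    def __init__(self, auth_provider):
        self.auth_provider = auth_provider


class StubManager:
    """Mirrors the pattern above: the manager owns one auth provider
    and hands it to each service client it constructs."""

    def __init__(self, auth_provider=None):
        self.auth_provider = auth_provider or StubAuthProvider()
        self.monasca_client = StubServiceClient(self.auth_provider)
```

Sharing the auth provider this way means every service client issues requests under the same credentials and token, which is exactly what the test base class relies on.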
@@ -1,41 +0,0 @@ (contrib/monasca_tempest_tests/config.py, removed)

```python
# (C) Copyright 2015 Hewlett Packard Enterprise Development Company LP
# Licensed under the Apache License, Version 2.0 (full license header as above).

from oslo_config import cfg


service_option = cfg.BoolOpt("monasca",
                             default=True,
                             help="Whether or not Monasca is expected to be "
                                  "available")

monitoring_group = cfg.OptGroup(name="monitoring",
                                title="Monitoring Service Options")

MonitoringGroup = [
    cfg.StrOpt("region",
               default="",
               help="The monitoring region name to use. If empty, the value "
                    "of identity.region is used instead. If no such region "
                    "is found in the service catalog, the first found one is "
                    "used."),
    cfg.StrOpt("catalog_type",
               default="monitoring",
               help="Catalog type of the monitoring service."),
    cfg.StrOpt('endpoint_type',
               default='publicURL',
               choices=['public', 'admin', 'internal',
                        'publicURL', 'adminURL', 'internalURL'],
               help="The endpoint type to use for the monitoring service.")
]
```
@@ -1,40 +0,0 @@ (contrib/monasca_tempest_tests/plugin.py, removed)

```python
# (C) Copyright 2015 Hewlett Packard Enterprise Development Company LP
# Licensed under the Apache License, Version 2.0 (full license header as above).

import os

from tempest.test_discover import plugins

from monasca_tempest_tests import config as config_monitoring


class MonascaTempestPlugin(plugins.TempestPlugin):
    def load_tests(self):
        base_path = os.path.split(os.path.dirname(
            os.path.abspath(__file__)))[0]
        test_dir = "monasca_tempest_tests/tests"
        full_test_dir = os.path.join(base_path, test_dir)
        return full_test_dir, base_path

    def register_opts(self, conf):
        conf.register_opt(config_monitoring.service_option,
                          group='service_available')
        conf.register_group(config_monitoring.monitoring_group)
        conf.register_opts(config_monitoring.MonitoringGroup,
                           group='monitoring')

    def get_opt_lists(self):
        return [(config_monitoring.monitoring_group.name,
                 config_monitoring.MonitoringGroup),
                ('service_available', [config_monitoring.service_option])]
```
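The path arithmetic in `load_tests` above can be sketched in isolation: given the plugin module's file path, it yields the repo root (the directory containing the `monasca_tempest_tests` package) and the test tree inside it. `plugin_file` is a hypothetical stand-in for `__file__`:

```python
import os


def load_tests_paths(plugin_file):
    # Strip the filename, then split off the package directory:
    # /repo/monasca_tempest_tests/plugin.py -> base_path = /repo
    base_path = os.path.split(os.path.dirname(
        os.path.abspath(plugin_file)))[0]
    # The discoverable tests live under the package's tests/ subtree.
    full_test_dir = os.path.join(base_path, "monasca_tempest_tests/tests")
    return full_test_dir, base_path
```

Tempest uses the returned pair to discover the plugin's tests relative to the install location rather than a hard-coded path.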
@@ -1,346 +0,0 @@ (contrib/monasca_tempest_tests/services/monasca_client.py, removed)

```python
# (C) Copyright 2015-2016 Hewlett Packard Enterprise Development LP
# Licensed under the Apache License, Version 2.0 (full license header as above).

from oslo_serialization import jsonutils as json

from tempest import config
from tempest.lib.common import rest_client

CONF = config.CONF


class MonascaClient(rest_client.RestClient):

    def __init__(self, auth_provider):
        super(MonascaClient, self).__init__(
            auth_provider,
            CONF.monitoring.catalog_type,
            CONF.monitoring.region or CONF.identity.region,
            endpoint_type=CONF.monitoring.endpoint_type)

    def get_version(self):
        resp, response_body = self.get('')
        return resp, response_body

    def create_metrics(self, metrics, tenant_id=None):
        uri = 'metrics'
        if tenant_id:
            uri = uri + '?tenant_id=%s' % tenant_id
        request_body = json.dumps(metrics)
        resp, response_body = self.post(uri, request_body)
        return resp, response_body

    def list_metrics(self, query_params=None):
        uri = 'metrics'
        if query_params is not None:
            uri = uri + query_params
        resp, response_body = self.get(uri)
        return resp, json.loads(response_body)

    def list_metrics_names(self, query_params=None):
        uri = 'metrics/names'
        if query_params is not None:
            uri = uri + query_params
        resp, response_body = self.get(uri)
        return resp, json.loads(response_body)

    def list_dimension_names(self, query_params=None):
        uri = 'metrics/dimensions/names'
        if query_params is not None:
            uri = uri + query_params
        resp, response_body = self.get(uri)
        return resp, json.loads(response_body)

    def list_dimension_values(self, query_params=None):
        uri = 'metrics/dimensions/names/values'
        if query_params is not None:
            uri = uri + query_params
        resp, response_body = self.get(uri)
        return resp, json.loads(response_body)

    def list_measurements(self, query_params=None):
        uri = 'metrics/measurements'
        if query_params is not None:
            uri = uri + query_params
        resp, response_body = self.get(uri)
        return resp, json.loads(response_body)

    def list_statistics(self, query_params=None):
        uri = 'metrics/statistics'
        if query_params is not None:
            uri = uri + query_params
        resp, response_body = self.get(uri)
        return resp, json.loads(response_body)

    def create_notifications(self, notification):
        uri = 'notification-methods'
        request_body = json.dumps(notification)
        resp, response_body = self.post(uri, request_body)
        return resp, json.loads(response_body)

    def create_notification_method(self, name=None, type=None,
                                   address=None, period=None):
        uri = 'notification-methods'
        request_body = {}
        if name is not None:
            request_body['name'] = name
        if type is not None:
            request_body['type'] = type
        if address is not None:
            request_body['address'] = address
        if period is not None:
            request_body['period'] = period
        resp, response_body = self.post(uri, json.dumps(request_body))
        return resp, json.loads(response_body)

    def delete_notification_method(self, id):
        uri = 'notification-methods/' + id
        resp, response_body = self.delete(uri)
        return resp, response_body

    def get_notification_method(self, id):
        uri = 'notification-methods/' + id
        resp, response_body = self.get(uri)
        return resp, json.loads(response_body)

    def list_notification_methods(self, query_params=None):
        uri = 'notification-methods'
        if query_params is not None:
            uri = uri + query_params
        resp, response_body = self.get(uri)
        return resp, json.loads(response_body)

    def update_notification_method(self, id, name, type, address,
                                   period=None):
        uri = 'notification-methods/' + id
        request_body = {}
        request_body['name'] = name
        request_body['type'] = type
        request_body['address'] = address
        if period is not None:
            request_body['period'] = period
        resp, response_body = self.put(uri, json.dumps(request_body))
        return resp, json.loads(response_body)

    def patch_notification_method(self, id, name=None, type=None,
                                  address=None, period=None):
        uri = 'notification-methods/' + id
        request_body = {}
        if name is not None:
            request_body['name'] = name
        if type is not None:
            request_body['type'] = type
        if address is not None:
            request_body['address'] = address
        if period is not None:
            request_body['period'] = period
        resp, response_body = self.patch(uri, json.dumps(request_body))
        return resp, json.loads(response_body)

    def list_notification_method_types(self, query_params=None):
        uri = 'notification-methods/types'
        if query_params is not None:
            uri = uri + query_params
        resp, response_body = self.get(uri)
        return resp, json.loads(response_body)

    def create_alarm_definitions(self, alarm_definitions):
        uri = 'alarm-definitions'
        request_body = json.dumps(alarm_definitions)
        resp, response_body = self.post(uri, request_body)
        return resp, json.loads(response_body)

    def list_alarm_definitions(self, query_params=None):
        uri = 'alarm-definitions'
        if query_params is not None:
            uri = uri + query_params
        resp, response_body = self.get(uri)
        return resp, json.loads(response_body)

    def get_alarm_definition(self, id):
        uri = 'alarm-definitions/' + id
        resp, response_body = self.get(uri)
        return resp, json.loads(response_body)

    def delete_alarm_definition(self, id):
        uri = 'alarm-definitions/' + id
        resp, response_body = self.delete(uri)
        return resp, response_body

    def update_alarm_definition(self, id, name, expression, description,
                                actions_enabled, match_by, severity,
                                alarm_actions, ok_actions,
                                undetermined_actions, **kwargs):
        uri = 'alarm-definitions/' + id
        request_body = {}
        request_body['name'] = name
        request_body['expression'] = expression
        request_body['description'] = description
        request_body['actions_enabled'] = actions_enabled
        request_body['match_by'] = match_by
        request_body['severity'] = severity
        request_body['alarm_actions'] = alarm_actions
        request_body['ok_actions'] = ok_actions
        request_body['undetermined_actions'] = undetermined_actions

        for key, value in kwargs.items():
            request_body[key] = value

        resp, response_body = self.put(uri, json.dumps(request_body))
        return resp, json.loads(response_body)

    def patch_alarm_definition(self, id, name=None, description=None,
                               expression=None, actions_enabled=None,
                               match_by=None, severity=None,
                               alarm_actions=None, ok_actions=None,
                               undetermined_actions=None, **kwargs):
        uri = 'alarm-definitions/' + id
        request_body = {}
        if name is not None:
            request_body['name'] = name
        if description is not None:
            request_body['description'] = description
        if expression is not None:
            request_body['expression'] = expression
        if actions_enabled is not None:
            request_body['actions_enabled'] = actions_enabled
        if match_by is not None:
            request_body['match_by'] = match_by
        if severity is not None:
            request_body['severity'] = severity
        if alarm_actions is not None:
            request_body['alarm_actions'] = alarm_actions
        if ok_actions is not None:
            request_body['ok_actions'] = ok_actions
        if undetermined_actions is not None:
            request_body['undetermined_actions'] = undetermined_actions

        for key, value in kwargs.items():
            request_body[key] = value

        resp, response_body = self.patch(uri, json.dumps(request_body))
        return resp, json.loads(response_body)

    def list_alarms(self, query_params=None):
        uri = 'alarms'
        if query_params is not None:
            uri = uri + query_params
        resp, response_body = self.get(uri)
        return resp, json.loads(response_body)

    def get_alarm(self, id):
        uri = 'alarms/' + id
        resp, response_body = self.get(uri)
        return resp, json.loads(response_body)

    def delete_alarm(self, id):
        uri = 'alarms/' + id
        resp, response_body = self.delete(uri)
        return resp, response_body

    def update_alarm(self, id, state, lifecycle_state, link, **kwargs):
        uri = 'alarms/' + id
        request_body = {}
        request_body['state'] = state
        request_body['lifecycle_state'] = lifecycle_state
        request_body['link'] = link

        for key, value in kwargs.items():
            request_body[key] = value

        resp, response_body = self.put(uri, json.dumps(request_body))
        return resp, json.loads(response_body)

    def patch_alarm(self, id, state=None, lifecycle_state=None, link=None,
                    **kwargs):
        uri = 'alarms/' + id
        request_body = {}
        if state is not None:
            request_body['state'] = state
        if lifecycle_state is not None:
            request_body['lifecycle_state'] = lifecycle_state
        if link is not None:
            request_body['link'] = link

        for key, value in kwargs.items():
            request_body[key] = value

        resp, response_body = self.patch(uri, json.dumps(request_body))
        return resp, json.loads(response_body)

    def count_alarms(self, query_params=None):
        uri = 'alarms/count'
        if query_params is not None:
            uri += query_params
        resp, response_body = self.get(uri)
        return resp, json.loads(response_body)

    def list_alarms_state_history(self, query_params=None):
        uri = 'alarms/state-history'
        if query_params is not None:
            uri = uri + query_params
        resp, response_body = self.get(uri)
        return resp, json.loads(response_body)

    def list_alarm_state_history(self, id, query_params=None):
        uri = 'alarms/' + id + '/state-history'
        if query_params is not None:
            uri = uri + query_params
        resp, response_body = self.get(uri)
        return resp, json.loads(response_body)

    # For Negative Tests
    def update_alarm_definition_with_no_ok_actions(self, id, name,
                                                   expression, description,
                                                   actions_enabled, match_by,
                                                   severity, alarm_actions,
                                                   undetermined_actions,
                                                   **kwargs):
        uri = 'alarm-definitions/' + id
        request_body = {}
        request_body['name'] = name
        request_body['expression'] = expression
        request_body['description'] = description
        request_body['actions_enabled'] = actions_enabled
        request_body['match_by'] = match_by
        request_body['severity'] = severity
        request_body['alarm_actions'] = alarm_actions
        request_body['undetermined_actions'] = undetermined_actions

        for key, value in kwargs.items():
            request_body[key] = value

        resp, response_body = self.put(uri, json.dumps(request_body))
        return resp, json.loads(response_body)

    def update_notification_method_with_no_address(self, id, name, type,
                                                   period=None):
        uri = 'notification-methods/' + id
        request_body = {}
        request_body['name'] = name
        request_body['type'] = type
        if period is not None:
            request_body['period'] = period
        resp, response_body = self.put(uri, json.dumps(request_body))
        return resp, json.loads(response_body)
```
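Two idioms recur throughout `MonascaClient`: appending a caller-supplied raw query string to a resource URI, and building POST/PATCH bodies that include only the fields the caller actually passed. A stdlib-only sketch of both (the function names here are illustrative, not part of the client):

```python
from urllib.parse import urlencode


def build_uri(base, params=None):
    # The client accepts query_params as a pre-built string such as
    # '?name=cpu'; urlencode shows how such a string can be produced
    # from a dict of parameters.
    if not params:
        return base
    return base + '?' + urlencode(params)


def optional_body(**fields):
    # Mirrors create/patch_notification_method and patch_alarm:
    # every field the caller left as None is omitted from the body,
    # so a PATCH only touches the attributes that were supplied.
    return {k: v for k, v in fields.items() if v is not None}
```

Centralizing these two helpers would remove most of the repetition across the client's thirty-odd methods; the original kept them inline, which is also a valid choice for readability in test code.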
@@ -1,97 +0,0 @@ (contrib/monasca_tempest_tests/tests/api/base.py, removed)

```python
# (C) Copyright 2015-2017 Hewlett Packard Enterprise Development LP
# Licensed under the Apache License, Version 2.0 (full license header as above).

import six.moves.urllib.parse as urlparse
from tempest.common import credentials_factory
from tempest import config
from tempest.lib import exceptions
import tempest.test

from monasca_tempest_tests import clients

CONF = config.CONF


class BaseMonascaTest(tempest.test.BaseTestCase):
    """Base test case class for all Monasca API tests."""

    @classmethod
    def skip_checks(cls):
        super(BaseMonascaTest, cls).skip_checks()

    @classmethod
    def resource_setup(cls):
        super(BaseMonascaTest, cls).resource_setup()
        auth_version = CONF.identity.auth_version
        cls.cred_provider = credentials_factory.get_credentials_provider(
            cls.__name__,
            force_tenant_isolation=True,
            identity_version=auth_version)
        credentials = cls.cred_provider.get_creds_by_roles(
            ['monasca-user', 'monasca-read-only-user', 'admin']).credentials
        cls.os = clients.Manager(credentials=credentials)
        cls.monasca_client = cls.os.monasca_client
        cls.projects_client = cls.os.projects_client

    @staticmethod
    def cleanup_resources(method, list_of_ids):
        for resource_id in list_of_ids:
            try:
                method(resource_id)
            except exceptions.NotFound:
                pass

    @classmethod
    def resource_cleanup(cls):
        super(BaseMonascaTest, cls).resource_cleanup()
        resp, response_body = cls.monasca_client.list_alarm_definitions()
        if resp.status == 200:
            if 'elements' in response_body:
                elements = response_body['elements']
                for element in elements:
                    id = element['id']
                    cls.monasca_client.delete_alarm_definition(id)

        resp, response_body = cls.monasca_client.list_notification_methods()
        if resp.status == 200:
            if 'elements' in response_body:
                elements = response_body['elements']
                for element in elements:
                    id = element['id']
                    cls.monasca_client.delete_notification_method(id)

        resp, response_body = cls.monasca_client.list_alarms()
        if resp.status == 200:
            if 'elements' in response_body:
                elements = response_body['elements']
                for element in elements:
                    id = element['id']
                    cls.monasca_client.delete_alarm(id)
        cls.cred_provider.clear_creds()

    def _get_offset(self, response_body):
        next_link = None
        self_link = None
        for link in response_body['links']:
            if link['rel'] == 'next':
                next_link = link['href']
            if link['rel'] == 'self':
                self_link = link['href']
        if not next_link:
            query_parms = urlparse.parse_qs(
                urlparse.urlparse(self_link).query)
            self.fail("No next link returned with query parameters: "
                      "{}".format(query_parms))
        query_params = urlparse.parse_qs(urlparse.urlparse(next_link).query)
        if 'offset' not in query_params:
            self.fail("No offset in next link: {}".format(next_link))
```
|
|
||||||
return query_params['offset'][0]
|
|
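The `_get_offset` helper above drives pagination by pulling the `offset` query parameter out of the API's `next` link. A standalone sketch of the same parsing, using the Python 3 stdlib instead of `six.moves` (the sample hrefs are made up, not real API output):

```python
import urllib.parse as urlparse  # the removed code used six.moves for py2/py3


def get_offset(links):
    # Find the 'next' link, then extract its 'offset' query parameter,
    # mirroring BaseMonascaTest._get_offset.
    next_link = next((l['href'] for l in links if l['rel'] == 'next'), None)
    if next_link is None:
        return None
    params = urlparse.parse_qs(urlparse.urlparse(next_link).query)
    return params.get('offset', [None])[0]


links = [
    {'rel': 'self', 'href': 'http://example/v2.0/alarms?limit=10'},
    {'rel': 'next', 'href': 'http://example/v2.0/alarms?limit=10&offset=42'},
]
print(get_offset(links))  # -> 42
```

Note that `parse_qs` returns string values, which is why the test base class compares the returned offset against stringified element ids.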
@@ -1,47 +0,0 @@
# (C) Copyright 2015-2016 Hewlett Packard Enterprise Development Company LP
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

MAX_RETRIES = 60
RETRY_WAIT_SECS = 1
ONE_MINUTE_TIME_OUT = 60

ALARM_DEFINITION_CREATION_WAIT = 1

MAX_METRIC_NAME_LENGTH = 255
MAX_DIMENSION_KEY_LENGTH = 255
MAX_DIMENSION_VALUE_LENGTH = 255
INVALID_DIMENSION_CHARS = "<>={},\"\;&"
INVALID_NAME_CHARS = INVALID_DIMENSION_CHARS + "()"

MAX_ALARM_DEFINITION_NAME_LENGTH = 255
MAX_ALARM_DEFINITION_DESCRIPTION_LENGTH = 255
MAX_ALARM_DEFINITION_ACTIONS_LENGTH = 50

MAX_NOTIFICATION_METHOD_NAME_LENGTH = 250
MAX_NOTIFICATION_METHOD_TYPE_LENGTH = 100
MAX_NOTIFICATION_METHOD_ADDRESS_LENGTH = 512
INVALID_CHARS_NOTIFICATION = "<>={}(),\"\;&"

MAX_LIST_MEASUREMENTS_NAME_LENGTH = 255

MAX_LIST_STATISTICS_NAME_LENGTH = 255

MAX_ALARM_LIFECYCLE_STATE_LENGTH = 50
MAX_ALARM_METRIC_NAME_LENGTH = 255
MAX_ALARM_METRIC_DIMENSIONS_KEY_LENGTH = 255
MAX_ALARM_METRIC_DIMENSIONS_VALUE_LENGTH = 255
MAX_ALARM_LINK_LENGTH = 512

MAX_VALUE_META_NAME_LENGTH = 255
MAX_VALUE_META_TOTAL_LENGTH = 2048
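The length limits and invalid-character strings above are what the API tests validate against. A minimal sketch of such a check (the `is_valid_metric_name` helper is illustrative, not part of the removed code):

```python
# Constants copied from the removed constants.py above.
MAX_METRIC_NAME_LENGTH = 255
INVALID_DIMENSION_CHARS = "<>={},\"\;&"
INVALID_NAME_CHARS = INVALID_DIMENSION_CHARS + "()"


def is_valid_metric_name(name):
    # A name must be non-empty, within the length limit, and free of
    # the forbidden characters.
    return (0 < len(name) <= MAX_METRIC_NAME_LENGTH
            and not any(c in INVALID_NAME_CHARS for c in name))


print(is_valid_metric_name('cpu.idle_perc'))  # True
print(is_valid_metric_name('bad{name}'))      # False
```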
@@ -1,179 +0,0 @@
# (C) Copyright 2015 Hewlett Packard Enterprise Development Company LP
# (C) Copyright SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import datetime
import time

import six.moves.urllib.parse as urlparse
from tempest.lib.common.utils import data_utils

NUM_ALARM_DEFINITIONS = 2
NUM_MEASUREMENTS = 100


def create_metric(name='name-1',
                  dimensions={
                      'key-1': 'value-1',
                      'key-2': 'value-2'
                  },
                  timestamp=None,
                  value=0.0,
                  value_meta={
                      'key-1': 'value-1',
                      'key-2': 'value-2'
                  },
                  ):
    metric = {}
    if name is not None:
        metric['name'] = name
    if dimensions is not None:
        metric['dimensions'] = dimensions
    if timestamp is not None:
        metric['timestamp'] = timestamp
    else:
        metric['timestamp'] = int(time.time() * 1000)
    if value is not None:
        metric['value'] = value
    if value_meta is not None:
        metric['value_meta'] = value_meta
    return metric


def create_notification(name=data_utils.rand_name('notification-'),
                        type='EMAIL',
                        address='john.doe@domain.com',
                        period=0):
    notification = {}
    if name is not None:
        notification['name'] = name
    if type is not None:
        notification['type'] = type
    if address is not None:
        notification['address'] = address
    if period is not None:
        notification['period'] = period
    return notification


def create_alarm_definition(name=None,
                            description=None,
                            expression=None,
                            match_by=None,
                            severity=None,
                            alarm_actions=None,
                            ok_actions=None,
                            undetermined_actions=None):
    alarm_definition = {}
    if name is not None:
        alarm_definition['name'] = name
    if description is not None:
        alarm_definition['description'] = description
    if expression is not None:
        alarm_definition['expression'] = expression
    if match_by is not None:
        alarm_definition['match_by'] = match_by
    if severity is not None:
        alarm_definition['severity'] = severity
    if alarm_actions is not None:
        alarm_definition['alarm_actions'] = alarm_actions
    if ok_actions is not None:
        alarm_definition['ok_actions'] = ok_actions
    if undetermined_actions is not None:
        alarm_definition['undetermined_actions'] = undetermined_actions
    return alarm_definition


def delete_alarm_definitions(monasca_client):
    # Delete alarm definitions
    resp, response_body = monasca_client.list_alarm_definitions()
    elements = response_body['elements']
    if elements:
        for element in elements:
            alarm_def_id = element['id']
            monasca_client.delete_alarm_definition(alarm_def_id)


def timestamp_to_iso(timestamp):
    time_utc = datetime.datetime.utcfromtimestamp(timestamp / 1000.0)
    time_iso_base = time_utc.strftime("%Y-%m-%dT%H:%M:%S")
    time_iso_base += 'Z'
    return time_iso_base


def timestamp_to_iso_millis(timestamp):
    time_utc = datetime.datetime.utcfromtimestamp(timestamp / 1000.0)
    time_iso_base = time_utc.strftime("%Y-%m-%dT%H:%M:%S")
    time_iso_microsecond = time_utc.strftime(".%f")
    time_iso_millisecond = time_iso_base + time_iso_microsecond[0:4] + 'Z'
    return time_iso_millisecond


def get_query_param(uri, query_param_name):
    query_param_val = None
    parsed_uri = urlparse.urlparse(uri)
    for query_param in parsed_uri.query.split('&'):
        parsed_query_name, parsed_query_val = query_param.split('=', 1)
        if query_param_name == parsed_query_name:
            query_param_val = parsed_query_val
    return query_param_val


def get_expected_elements_inner_offset_limit(all_elements, offset, limit, inner_key):
    expected_elements = []
    total_statistics = 0

    if offset is None:
        offset_id = None
        offset_time = ""
        passed_offset = True
    else:
        offset_tuple = offset.split('_')
        offset_id = offset_tuple[0] if len(offset_tuple) > 1 else u'0'
        offset_time = offset_tuple[1] if len(offset_tuple) > 1 else offset_tuple[0]
        passed_offset = False

    for element in all_elements:
        element_id = element['id']
        if (not passed_offset) and element_id != offset_id:
            continue
        next_element = None

        for value in element[inner_key]:
            if passed_offset or (element_id == offset_id and value[0] > offset_time):
                if not passed_offset:
                    passed_offset = True
                if not next_element:
                    next_element = element.copy()
                    next_element[inner_key] = [value]
                else:
                    next_element[inner_key].append(value)
                total_statistics += 1
                if total_statistics >= limit:
                    break

        if next_element:
            expected_elements.append(next_element)

        if total_statistics >= limit:
            break

        if element_id == offset_id:
            passed_offset = True

    # if index is used in the element id, reset to start at zero
    if expected_elements and expected_elements[0]['id'].isdigit():
        for i in range(len(expected_elements)):
            expected_elements[i]['id'] = str(i)

    return expected_elements
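The two timestamp helpers above convert epoch milliseconds to ISO 8601, the second with millisecond precision. The same conversion can be checked standalone (a sketch of the removed helper, not an import of it):

```python
import datetime


def timestamp_to_iso_millis(timestamp):
    # Epoch milliseconds -> ISO 8601 with millisecond precision and a
    # trailing 'Z', as in the removed helpers module: format the base
    # time, then keep only the first three digits of the microseconds.
    time_utc = datetime.datetime.utcfromtimestamp(timestamp / 1000.0)
    time_iso_base = time_utc.strftime("%Y-%m-%dT%H:%M:%S")
    time_iso_microsecond = time_utc.strftime(".%f")
    return time_iso_base + time_iso_microsecond[0:4] + 'Z'


print(timestamp_to_iso_millis(0))  # 1970-01-01T00:00:00.000Z
```

This is the format the API tests compare measurement timestamps against.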
File diff suppressed because it is too large
@@ -1,141 +0,0 @@
# (C) Copyright 2016-2017 Hewlett Packard Enterprise Development LP
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import time

from monasca_tempest_tests.tests.api import base
from monasca_tempest_tests.tests.api import constants
from monasca_tempest_tests.tests.api import helpers
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators

MIN_HISTORY = 2


class TestAlarmStateHistoryMultipleTransitions(base.BaseMonascaTest):
    # For testing list alarm state history with the same alarm ID, two alarm
    # transitions are needed. One transit from ALARM state to UNDETERMINED
    # state and the other one from UNDETERMINED state to ALARM state.

    @classmethod
    def resource_setup(cls):
        super(TestAlarmStateHistoryMultipleTransitions, cls).resource_setup()
        alarm_definition = helpers.create_alarm_definition(
            name=data_utils.rand_name('alarm_state_history'),
            expression="min(name-1) < 1.0")
        cls.monasca_client.create_alarm_definitions(alarm_definition)
        for timer in range(constants.MAX_RETRIES):
            # create some metrics to prime the system and create
            # MIN_HISTORY alarms
            metric = helpers.create_metric(
                name="name-1", dimensions={'key1': 'value1'}, value=0.0)
            cls.monasca_client.create_metrics(metric)
            # sleep 1 second between metrics to make sure timestamps
            # are different in the second field. Influxdb has a bug
            # where it does not sort properly by milliseconds. .014
            # is sorted as greater than .138
            time.sleep(1.0)
            resp, response_body = cls.monasca_client.\
                list_alarms_state_history()
            elements = response_body['elements']
            if len(elements) >= 1:
                break
            time.sleep(constants.RETRY_WAIT_SECS)

        time.sleep(constants.MAX_RETRIES)

        for timer in range(constants.MAX_RETRIES * 2):
            metric = helpers.create_metric(
                name="name-1", dimensions={'key2': 'value2'}, value=2.0)
            cls.monasca_client.create_metrics(metric)
            # sleep 0.05 second between metrics to make sure timestamps
            # are different
            time.sleep(0.05)
            resp, response_body = \
                cls.monasca_client.list_alarms_state_history()
            elements = response_body['elements']
            if len(elements) >= 2:
                return
            else:
                num_transitions = len(elements)
            time.sleep(constants.RETRY_WAIT_SECS)
        assert False, "Required {} alarm state transitions, but found {}".\
            format(MIN_HISTORY, num_transitions)

    @classmethod
    def resource_cleanup(cls):
        super(TestAlarmStateHistoryMultipleTransitions, cls).\
            resource_cleanup()

    @decorators.attr(type="gate")
    def test_list_alarm_state_history(self):
        # Get the alarm state history for a specific alarm by ID
        resp, response_body = self.monasca_client.list_alarms_state_history()
        self.assertEqual(200, resp.status)
        elements = response_body['elements']
        if elements:
            element = elements[0]
            alarm_id = element['alarm_id']
            resp, response_body = self.monasca_client.list_alarm_state_history(
                alarm_id)
            self.assertEqual(200, resp.status)

            # Test Response Body
            self.assertTrue(set(['links', 'elements']) ==
                            set(response_body))
            elements = response_body['elements']
            links = response_body['links']
            self.assertIsInstance(links, list)
            link = links[0]
            self.assertTrue(set(['rel', 'href']) ==
                            set(link))
            self.assertEqual(link['rel'], u'self')
            definition = elements[0]
            self.assertTrue(set(['id', 'alarm_id', 'metrics', 'new_state',
                                 'old_state', 'reason', 'reason_data',
                                 'sub_alarms', 'timestamp']) ==
                            set(definition))
        else:
            error_msg = "Failed test_list_alarm_state_history: at least one " \
                        "alarm state history is needed."
            self.fail(error_msg)

    @decorators.attr(type="gate")
    def test_list_alarm_state_history_with_offset_limit(self):
        # Get the alarm state history for a specific alarm by ID
        resp, response_body = self.monasca_client.list_alarms_state_history()
        self.assertEqual(200, resp.status)
        elements = response_body['elements']
        if len(elements) >= MIN_HISTORY:
            element = elements[0]
            second_element = elements[1]
            alarm_id = element['alarm_id']
            query_parms = '?limit=1'
            resp, response_body = self.monasca_client.\
                list_alarm_state_history(alarm_id, query_parms)
            elements = response_body['elements']
            self.assertEqual(200, resp.status)
            self.assertEqual(1, len(elements))

            query_parms = '?offset=' + str(element['timestamp'])
            resp, response_body = self.monasca_client.\
                list_alarm_state_history(alarm_id, query_parms)
            elements_new = response_body['elements']
            self.assertEqual(200, resp.status)
            self.assertEqual(1, len(elements_new))
            self.assertEqual(second_element, elements_new[0])
        else:
            error_msg = "Failed test_list_alarm_state_history_with_offset" \
                        "_limit: two alarms state history are needed."
            self.fail(error_msg)
@@ -1,180 +0,0 @@
# (C) Copyright 2016-2017 Hewlett Packard Enterprise Development LP
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import time

from monasca_tempest_tests.tests.api import base
from monasca_tempest_tests.tests.api import helpers
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators

WAIT_SECS = 10


class TestAlarmTransitions(base.BaseMonascaTest):

    @classmethod
    def resource_setup(cls):
        super(TestAlarmTransitions, cls).resource_setup()

    @classmethod
    def resource_cleanup(cls):
        super(TestAlarmTransitions, cls).resource_cleanup()

    def _wait_for_alarm_creation(self, definition_id):
        for x in range(WAIT_SECS):
            time.sleep(1)
            resp, resp_body = self.monasca_client.list_alarms(
                query_params="?alarm_definition_id=" + definition_id)

            self.assertEqual(200, resp.status)
            if len(resp_body['elements']) != 0:
                break
        self.assertEqual(1, len(resp_body['elements']))
        alarm_id = resp_body['elements'][0]['id']
        initial_state = resp_body['elements'][0]['state']
        return alarm_id, initial_state

    def _wait_for_alarm_transition(self, alarm_id, expected_state):
        for x in range(WAIT_SECS):
            time.sleep(1)
            resp, resp_body = self.monasca_client.get_alarm(alarm_id)
            self.assertEqual(200, resp.status)
            if resp_body['state'] == expected_state:
                break
        self.assertEqual(expected_state, resp_body['state'])

    def _send_measurement(self, metric_def, value):
        metric = helpers.create_metric(name=metric_def['name'],
                                       dimensions=metric_def['dimensions'],
                                       value=value)
        resp, resp_body = self.monasca_client.create_metrics([metric])
        self.assertEqual(204, resp.status)

    @decorators.attr(type="gate")
    def test_alarm_max_function(self):
        metric_def = {
            'name': data_utils.rand_name("max_test"),
            'dimensions': {
                'dim_to_match': data_utils.rand_name("max_match")
            }
        }
        expression = "max(" + metric_def['name'] + ") > 14"
        definition = helpers.create_alarm_definition(name="Test Max Function",
                                                     description="",
                                                     expression=expression,
                                                     match_by=["dim_to_match"])
        resp, resp_body = (self.monasca_client
                           .create_alarm_definitions(definition))
        self.assertEqual(201, resp.status)
        definition_id = resp_body['id']
        time.sleep(1)

        self._send_measurement(metric_def, 1)

        alarm_id, initial_state = self._wait_for_alarm_creation(definition_id)
        self.assertEqual("UNDETERMINED", initial_state)

        self._send_measurement(metric_def, 20)

        self._wait_for_alarm_transition(alarm_id, "ALARM")

    @decorators.attr(type="gate")
    def test_alarm_max_with_deterministic(self):
        metric_def = {
            'name': data_utils.rand_name("max_deterministic_test"),
            'dimensions': {
                'dim_to_match': data_utils.rand_name("max_match")
            }
        }
        expression = "max(" + metric_def['name'] + ",deterministic) > 14"
        definition = helpers.create_alarm_definition(name="Test Max Deterministic Function",
                                                     description="",
                                                     expression=expression,
                                                     match_by=["dim_to_match"])
        resp, resp_body = self.monasca_client.create_alarm_definitions(definition)
        self.assertEqual(201, resp.status)
        definition_id = resp_body['id']
        time.sleep(1)

        self._send_measurement(metric_def, 1)

        alarm_id, initial_state = self._wait_for_alarm_creation(definition_id)
        self.assertEqual("OK", initial_state)

        self._send_measurement(metric_def, 20)

        self._wait_for_alarm_transition(alarm_id, "ALARM")

    @decorators.attr(type="gate")
    def test_alarm_last_function(self):
        metric_def = {
            'name': data_utils.rand_name("last_test"),
            'dimensions': {
                'dim_to_match': data_utils.rand_name("last_match")
            }
        }
        expression = "last(" + metric_def['name'] + ") > 14"
        definition = helpers.create_alarm_definition(name="Test Last Function",
                                                     description="",
                                                     expression=expression,
                                                     match_by=["dim_to_match"])
        resp, resp_body = self.monasca_client.create_alarm_definitions(definition)
        self.assertEqual(201, resp.status)
        definition_id = resp_body['id']
        time.sleep(1)

        self._send_measurement(metric_def, 1)

        alarm_id, initial_state = self._wait_for_alarm_creation(definition_id)
        self.assertEqual("OK", initial_state)

        self._send_measurement(metric_def, 20)

        self._wait_for_alarm_transition(alarm_id, "ALARM")

        self._send_measurement(metric_def, 3)

        self._wait_for_alarm_transition(alarm_id, "OK")

    @decorators.attr(type="gate")
    def test_alarm_last_with_deterministic(self):
        metric_def = {
            'name': data_utils.rand_name("last_deterministic_test"),
            'dimensions': {
                'dim_to_match': data_utils.rand_name("last_match")
            }
        }
        expression = "last(" + metric_def['name'] + ",deterministic) > 14"
        definition = helpers.create_alarm_definition(name="Test Last Deterministic Function",
                                                     description="",
                                                     expression=expression,
                                                     match_by=["dim_to_match"])
        resp, resp_body = self.monasca_client.create_alarm_definitions(definition)
        self.assertEqual(201, resp.status)
        definition_id = resp_body['id']
        time.sleep(1)

        self._send_measurement(metric_def, 1)

        alarm_id, initial_state = self._wait_for_alarm_creation(definition_id)
        self.assertEqual("OK", initial_state)

        self._send_measurement(metric_def, 20)

        self._wait_for_alarm_transition(alarm_id, "ALARM")

        self._send_measurement(metric_def, 3)

        self._wait_for_alarm_transition(alarm_id, "OK")
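The four transition tests above build Monasca alarm expressions such as `max(name) > 14` and `last(name,deterministic) > 14` by string concatenation. A small standalone sketch of that pattern (the `build_expression` helper is made up for illustration, not part of the removed code):

```python
def build_expression(func, metric_name, threshold, deterministic=False):
    # Assemble an alarm expression the same way the tests above do,
    # e.g. "max(cpu.load) > 14" or "last(cpu.load,deterministic) > 14".
    args = metric_name + (",deterministic" if deterministic else "")
    return func + "(" + args + ") > " + str(threshold)


print(build_expression("max", "cpu.load", 14))
print(build_expression("last", "cpu.load", 14, deterministic=True))
```

The `deterministic` modifier is what makes the alarm start in `OK` rather than `UNDETERMINED`, which is exactly the difference the paired tests assert on.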
File diff suppressed because it is too large
@@ -1,355 +0,0 @@
# (C) Copyright 2016-2017 Hewlett Packard Enterprise Development LP
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import time
import urllib

from monasca_tempest_tests.tests.api import base
from monasca_tempest_tests.tests.api import helpers
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
from tempest.lib import exceptions


GROUP_BY_ALLOWED_PARAMS = {'alarm_definition_id', 'name', 'state', 'severity',
                           'link', 'lifecycle_state', 'metric_name',
                           'dimension_name', 'dimension_value'}


class TestAlarmsCount(base.BaseMonascaTest):

    @classmethod
    def resource_setup(cls):
        super(TestAlarmsCount, cls).resource_setup()

        num_hosts = 20

        alarm_definitions = []
        expected_alarm_counts = []
        metrics_to_send = []

        # OK, LOW
        expression = "max(test_metric_01) > 10"
        name = data_utils.rand_name('test-counts-01')
        alarm_definitions.append(helpers.create_alarm_definition(
            name=name,
            expression=expression,
            severity='LOW',
            match_by=['hostname', 'unique']))
        for i in range(100):
            metrics_to_send.append(helpers.create_metric(
                name='test_metric_01',
                dimensions={'hostname': 'test_' + str(i % num_hosts),
                            'unique': str(i)},
                value=1
            ))
        expected_alarm_counts.append(100)

        # ALARM, MEDIUM
        expression = "max(test_metric_02) > 10"
        name = data_utils.rand_name('test-counts-02')
        alarm_definitions.append(helpers.create_alarm_definition(
            name=name,
            expression=expression,
            severity='MEDIUM',
            match_by=['hostname', 'unique']))
        for i in range(75):
            metrics_to_send.append(helpers.create_metric(
                name='test_metric_02',
                dimensions={'hostname': 'test_' + str(i % num_hosts),
                            'unique': str(i)},
                value=11
            ))
            # append again to move from undetermined to alarm
            metrics_to_send.append(helpers.create_metric(
                name='test_metric_02',
                dimensions={'hostname': 'test_' + str(i % num_hosts),
                            'unique': str(i)},
                value=11
            ))
        expected_alarm_counts.append(75)

        # OK, HIGH, shared dimension
        expression = "max(test_metric_03) > 100"
        name = data_utils.rand_name('test_counts-03')
        alarm_definitions.append(helpers.create_alarm_definition(
            name=name,
            expression=expression,
            severity='HIGH',
            match_by=['hostname', 'unique']))
        for i in range(50):
            metrics_to_send.append(helpers.create_metric(
                name='test_metric_03',
                dimensions={'hostname': 'test_' + str(i % num_hosts),
                            'unique': str(i),
                            'height': '55'},
                value=i
            ))
        expected_alarm_counts.append(50)

        # UNDETERMINED, CRITICAL
        expression = "max(test_metric_undet) > 100"
        name = data_utils.rand_name('test-counts-04')
        alarm_definitions.append(helpers.create_alarm_definition(
            name=name,
            expression=expression,
            severity='CRITICAL',
            match_by=['hostname', 'unique']))
        for i in range(25):
            metrics_to_send.append(helpers.create_metric(
                name='test_metric_undet',
                dimensions={'hostname': 'test_' + str(i % num_hosts),
                            'unique': str(i)},
                value=1
            ))
        expected_alarm_counts.append(25)

        # create alarm definitions
        cls.alarm_definition_ids = []
        for definition in alarm_definitions:
            resp, response_body = cls.monasca_client.create_alarm_definitions(
                definition)
            if resp.status == 201:
                cls.alarm_definition_ids.append(response_body['id'])
            else:
                msg = "Failed to create alarm_definition during setup: {} {}".format(resp.status, response_body)
                assert False, msg

        # create alarms
        for metric in metrics_to_send:
            metric['timestamp'] = int(time.time() * 1000)
            cls.monasca_client.create_metrics(metric)
            # ensure metric timestamps are unique
            time.sleep(0.01)

        # check that alarms exist
        time_out = time.time() + 70
        while time.time() < time_out:
            setup_complete = True
            alarm_count = 0
            for i in range(len(cls.alarm_definition_ids)):
                resp, response_body = cls.monasca_client.list_alarms(
                    '?alarm_definition_id=' + cls.alarm_definition_ids[i])
                if resp.status != 200:
                    msg = "Error listing alarms: {} {}".format(resp.status, response_body)
                    assert False, msg
                if len(response_body['elements']) < expected_alarm_counts[i]:
                    setup_complete = False
                    alarm_count += len(response_body['elements'])
                    break

            if setup_complete:
                # allow alarm transitions to occur
                # time.sleep(15)
                return

        msg = "Failed to create all specified alarms during setup, alarm_count was {}".format(alarm_count)
        assert False, msg

    @classmethod
    def resource_cleanup(cls):
        super(TestAlarmsCount, cls).resource_cleanup()

    def _verify_counts_format(self, response_body, group_by=None, expected_length=None):
        expected_keys = ['links', 'counts', 'columns']
        for key in expected_keys:
            self.assertIn(key, response_body)
            self.assertIsInstance(response_body[key], list)

        expected_columns = ['count']
        if isinstance(group_by, list):
            expected_columns.extend(group_by)
        self.assertEqual(expected_columns, response_body['columns'])

        if expected_length is not None:
            self.assertEqual(expected_length, len(response_body['counts']))
        else:
            expected_length = len(response_body['counts'])

        for i in range(expected_length):
            self.assertEqual(len(expected_columns), len(response_body['counts'][i]))

    # test with no params
    @decorators.attr(type='gate')
    def test_count(self):
        resp, response_body = self.monasca_client.count_alarms()
        self.assertEqual(200, resp.status)
        self._verify_counts_format(response_body)
        self.assertEqual(250, response_body['counts'][0][0])

    # test with each group_by parameter singularly
    @decorators.attr(type='gate')
    def test_group_by_singular(self):
        resp, response_body = self.monasca_client.list_alarms("?state=ALARM")
        self.assertEqual(200, resp.status)
        alarm_state_count = len(response_body['elements'])
        resp, response_body = self.monasca_client.list_alarms("?state=undetermined")
        self.assertEqual(200, resp.status)
        undet_state_count = len(response_body['elements'])

        resp, response_body = self.monasca_client.count_alarms("?group_by=state")
        self.assertEqual(200, resp.status)
        self._verify_counts_format(response_body, group_by=['state'])

        self.assertEqual('ALARM', response_body['counts'][0][1])
        self.assertEqual(alarm_state_count, response_body['counts'][0][0])
        self.assertEqual('UNDETERMINED', response_body['counts'][-1][1])
        self.assertEqual(undet_state_count, response_body['counts'][-1][0])

        resp, response_body = self.monasca_client.count_alarms("?group_by=name")
        self.assertEqual(200, resp.status)
        self._verify_counts_format(response_body, group_by=['name'], expected_length=4)
|
|
||||||
# test with group by a parameter that is not allowed
|
|
||||||
@decorators.attr(type="gate")
|
|
||||||
@decorators.attr(type=['negative'])
|
|
||||||
def test_group_by_not_allowed(self):
|
|
||||||
self.assertRaises(exceptions.UnprocessableEntity,
|
|
||||||
self.monasca_client.count_alarms, "?group_by=not_allowed")
|
|
||||||
|
|
||||||
# test with a few group_by fields
|
|
||||||
@decorators.attr(type='gate')
|
|
||||||
def test_group_by_multiple(self):
|
|
||||||
resp, response_body = self.monasca_client.list_alarms()
|
|
||||||
alarm_low_count = 0
|
|
||||||
for alarm in response_body['elements']:
|
|
||||||
if alarm['state'] is 'ALARM' and alarm['severity'] is 'LOW':
|
|
||||||
alarm_low_count += 1
|
|
||||||
|
|
||||||
# Using urlencode mimics the CLI behavior. Without the urlencode, falcon
|
|
||||||
# treats group_by as a list, with the urlencode it treats group_by as
|
|
||||||
# a string. The API needs to handle both.
|
|
||||||
# test_with_all_group_by_params tests multiple group_by without
|
|
||||||
# urlencode
|
|
||||||
query_params = urllib.urlencode([('group_by', 'state,severity')])
|
|
||||||
resp, response_body = self.monasca_client.count_alarms("?" + query_params)
|
|
||||||
self._verify_counts_format(response_body, group_by=['state', 'severity'])
|
|
||||||
|
|
||||||
def run_count_test(self, query_string):
|
|
||||||
resp, response_body = self.monasca_client.list_alarms(query_string)
|
|
||||||
self.assertEqual(200, resp.status)
|
|
||||||
expected_count = len(response_body['elements'])
|
|
||||||
# Make sure something was found
|
|
||||||
self.assertTrue(expected_count > 0)
|
|
||||||
|
|
||||||
resp, response_body = self.monasca_client.count_alarms(query_string)
|
|
||||||
self.assertEqual(200, resp.status)
|
|
||||||
self._verify_counts_format(response_body, expected_length=1)
|
|
||||||
self.assertEqual(expected_count, response_body['counts'][0][0])
|
|
||||||
|
|
||||||
# test filter by severity
|
|
||||||
@decorators.attr(type='gate')
|
|
||||||
def test_filter_severity(self):
|
|
||||||
self.run_count_test("?severity=LOW")
|
|
||||||
|
|
||||||
# test filter by state
|
|
||||||
@decorators.attr(type='gate')
|
|
||||||
def test_filter_state(self):
|
|
||||||
self.run_count_test("?state=ALARM")
|
|
||||||
|
|
||||||
# test filter by metric name
|
|
||||||
@decorators.attr(type='gate')
|
|
||||||
def test_filter_metric_name(self):
|
|
||||||
self.run_count_test("?metric_name=test_metric_01")
|
|
||||||
|
|
||||||
# test with multiple metric dimensions
|
|
||||||
@decorators.attr(type='gate')
|
|
||||||
def test_filter_multiple_dimensions(self):
|
|
||||||
self.run_count_test("?metric_dimensions=hostname:test_1,unique:1")
|
|
||||||
|
|
||||||
# test with filter and group_by parameters
|
|
||||||
@decorators.attr(type='gate')
|
|
||||||
def test_filter_and_group_by_params(self):
|
|
||||||
resp, response_body = self.monasca_client.list_alarms("?state=ALARM")
|
|
||||||
self.assertEqual(200, resp.status)
|
|
||||||
expected_count = 0
|
|
||||||
for element in response_body['elements']:
|
|
||||||
if element['alarm_definition']['severity'] == 'MEDIUM':
|
|
||||||
expected_count += 1
|
|
||||||
|
|
||||||
resp, response_body = self.monasca_client.count_alarms("?state=ALARM&group_by=severity")
|
|
||||||
self.assertEqual(200, resp.status)
|
|
||||||
self._verify_counts_format(response_body, group_by=['severity'])
|
|
||||||
self.assertEqual(expected_count, response_body['counts'][0][0])
|
|
||||||
|
|
||||||
@decorators.attr(type='gate')
|
|
||||||
def test_with_all_group_by_params(self):
|
|
||||||
resp, response_body = self.monasca_client.list_alarms()
|
|
||||||
self.assertEqual(200, resp.status)
|
|
||||||
expected_num_count = len(response_body['elements'])
|
|
||||||
|
|
||||||
query_params = "?group_by=" + ','.join(GROUP_BY_ALLOWED_PARAMS)
|
|
||||||
resp, response_body = self.monasca_client.count_alarms(query_params)
|
|
||||||
self.assertEqual(200, resp.status)
|
|
||||||
self._verify_counts_format(response_body, group_by=list(GROUP_BY_ALLOWED_PARAMS))
|
|
||||||
|
|
||||||
# Expect duplicates
|
|
||||||
msg = "Not enough distinct counts. Expected at least {}, found {}".format(expected_num_count,
|
|
||||||
len(response_body['counts']))
|
|
||||||
assert expected_num_count <= len(response_body['counts']), msg
|
|
||||||
|
|
||||||
@decorators.attr(type='gate')
|
|
||||||
def test_limit(self):
|
|
||||||
resp, response_body = self.monasca_client.count_alarms(
|
|
||||||
"?group_by=metric_name,dimension_name,dimension_value")
|
|
||||||
self.assertEqual(200, resp.status)
|
|
||||||
self._verify_counts_format(response_body,
|
|
||||||
group_by=['metric_name', 'dimension_name', 'dimension_value'])
|
|
||||||
assert len(response_body['counts']) > 1, "Too few counts to test limit, found 1"
|
|
||||||
|
|
||||||
resp, response_body = self.monasca_client.count_alarms(
|
|
||||||
"?group_by=metric_name,dimension_name,dimension_value&limit=1")
|
|
||||||
self.assertEqual(200, resp.status)
|
|
||||||
self._verify_counts_format(response_body,
|
|
||||||
group_by=['metric_name', 'dimension_name', 'dimension_value'],
|
|
||||||
expected_length=1)
|
|
||||||
|
|
||||||
@decorators.attr(type='gate')
|
|
||||||
def test_offset(self):
|
|
||||||
resp, response_body = self.monasca_client.count_alarms(
|
|
||||||
"?group_by=metric_name,dimension_name,dimension_value")
|
|
||||||
self.assertEqual(200, resp.status)
|
|
||||||
self._verify_counts_format(response_body,
|
|
||||||
group_by=['metric_name', 'dimension_name', 'dimension_value'])
|
|
||||||
expected_counts = len(response_body['counts']) - 1
|
|
||||||
|
|
||||||
resp, response_body = self.monasca_client.count_alarms(
|
|
||||||
"?group_by=metric_name,dimension_name,dimension_value&offset=1")
|
|
||||||
self.assertEqual(200, resp.status)
|
|
||||||
self._verify_counts_format(response_body,
|
|
||||||
group_by=['metric_name', 'dimension_name', 'dimension_value'],
|
|
||||||
expected_length=expected_counts)
|
|
||||||
|
|
||||||
@decorators.attr(type='gate')
|
|
||||||
@decorators.attr(type=['negative'])
|
|
||||||
def test_invalid_offset(self):
|
|
||||||
self.assertRaises(exceptions.UnprocessableEntity,
|
|
||||||
self.monasca_client.count_alarms, "?group_by=metric_name&offset=not_an_int")
|
|
||||||
|
|
||||||
@decorators.attr(type='gate')
|
|
||||||
def test_limit_and_offset(self):
|
|
||||||
resp, response_body = self.monasca_client.count_alarms(
|
|
||||||
"?group_by=metric_name,dimension_name,dimension_value")
|
|
||||||
self.assertEqual(200, resp.status)
|
|
||||||
self._verify_counts_format(response_body,
|
|
||||||
group_by=['metric_name', 'dimension_name', 'dimension_value'])
|
|
||||||
expected_first_result = response_body['counts'][1]
|
|
||||||
|
|
||||||
resp, response_body = self.monasca_client.count_alarms(
|
|
||||||
"?group_by=metric_name,dimension_name,dimension_value&offset=1&limit=5")
|
|
||||||
self.assertEqual(200, resp.status)
|
|
||||||
self._verify_counts_format(response_body,
|
|
||||||
group_by=['metric_name', 'dimension_name', 'dimension_value'],
|
|
||||||
expected_length=5)
|
|
||||||
self.assertEqual(expected_first_result, response_body['counts'][0])
|
|
@@ -1,237 +0,0 @@
# (C) Copyright 2015-2017 Hewlett Packard Enterprise Development LP
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import time

from monasca_tempest_tests.tests.api import base
from monasca_tempest_tests.tests.api import constants
from monasca_tempest_tests.tests.api import helpers
from oslo_utils import timeutils
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators

NUM_ALARM_DEFINITIONS = 3
MIN_HISTORY = 3


class TestAlarmsStateHistoryOneTransition(base.BaseMonascaTest):
    # Alarms state histories with one transition but different alarm
    # definitions are needed for this test class.

    @classmethod
    def resource_setup(cls):
        super(TestAlarmsStateHistoryOneTransition, cls).resource_setup()

        for i in range(MIN_HISTORY):
            alarm_definition = helpers.create_alarm_definition(
                name=data_utils.rand_name('alarm_state_history' + str(i + 1)),
                expression="min(name-" + str(i + 1) + ") < " + str(i + 1))
            cls.monasca_client.create_alarm_definitions(alarm_definition)

        num_transitions = 0
        for timer in range(constants.MAX_RETRIES):
            for i in range(MIN_HISTORY):
                # Create some metrics to prime the system, then wait for the
                # alarms to be created and for them to change state.
                # MIN_HISTORY number of Alarms State History are needed.
                metric = helpers.create_metric(name="name-" + str(i + 1))
                cls.monasca_client.create_metrics(metric)
                # Ensure alarms transition at different times
                time.sleep(0.1)
            resp, response_body = cls.monasca_client.\
                list_alarms_state_history()
            elements = response_body['elements']
            if len(elements) >= MIN_HISTORY:
                return
            else:
                num_transitions = len(elements)
            time.sleep(constants.RETRY_WAIT_SECS)
        assert False, "Required {} alarm state transitions, but found {}".\
            format(MIN_HISTORY, num_transitions)

    @classmethod
    def resource_cleanup(cls):
        super(TestAlarmsStateHistoryOneTransition, cls).resource_cleanup()

    @decorators.attr(type="gate")
    def test_list_alarms_state_history(self):
        resp, response_body = self.monasca_client.list_alarms_state_history()
        self.assertEqual(200, resp.status)
        # Test response body
        self.assertTrue(set(['links', 'elements']) == set(response_body))
        elements = response_body['elements']
        number_of_alarms = len(elements)
        if number_of_alarms < 1:
            error_msg = "Failed test_list_alarms_state_history: need " \
                        "at least one alarms state history to test."
            self.fail(error_msg)
        else:
            element = elements[0]
            self.assertTrue(set(['id', 'alarm_id', 'metrics', 'old_state',
                                 'new_state', 'reason', 'reason_data',
                                 'timestamp', 'sub_alarms'])
                            == set(element))

    @decorators.attr(type="gate")
    def test_list_alarms_state_history_with_dimensions(self):
        resp, response_body = self.monasca_client.list_alarms_state_history()
        elements = response_body['elements']
        if elements:
            element = elements[0]
            dimension = element['metrics'][0]['dimensions']
            dimension_items = dimension.items()
            dimension_item = dimension_items[0]
            dimension_item_0 = dimension_item[0]
            dimension_item_1 = dimension_item[1]
            name = element['metrics'][0]['name']

            query_parms = '?dimensions=' + str(dimension_item_0) + ':' + str(
                dimension_item_1)
            resp, response_body = self.monasca_client.\
                list_alarms_state_history(query_parms)
            name_new = response_body['elements'][0]['metrics'][0]['name']
            self.assertEqual(200, resp.status)
            self.assertEqual(name, name_new)
        else:
            error_msg = "Failed test_list_alarms_state_history_with_" \
                        "dimensions: need at least one alarms state history " \
                        "to test."
            self.fail(error_msg)

    @decorators.attr(type="gate")
    def test_list_alarms_state_history_with_start_time(self):
        # 1. get all histories
        resp, all_response_body = self.monasca_client.\
            list_alarms_state_history()
        all_elements = all_response_body['elements']

        if len(all_elements) < 3:
            error_msg = "Failed test_list_alarms_state_history_with_" \
                        "start_time: need 3 or more alarms state history " \
                        "to test."
            self.fail(error_msg)

        # 2. query second(timestamp) <= x
        min_element, second_element, max_element = \
            self._get_elements_with_min_max_timestamp(all_elements)
        start_time = second_element['timestamp']
        query_params = '?start_time=' + str(start_time)
        resp, selected_response_body = self.monasca_client.\
            list_alarms_state_history(query_params)
        selected_elements = selected_response_body['elements']

        # 3. compare #1 and #2
        expected_elements = all_elements
        expected_elements.remove(min_element)
        self.assertEqual(expected_elements, selected_elements)

    @decorators.attr(type="gate")
    def test_list_alarms_state_history_with_end_time(self):
        # 1. get all histories
        resp, all_response_body = self.monasca_client.\
            list_alarms_state_history()
        all_elements = all_response_body['elements']

        if len(all_elements) < 3:
            error_msg = "Failed test_list_alarms_state_history_with_" \
                        "end_time: need 3 or more alarms state history " \
                        "to test."
            self.fail(error_msg)

        # 2. query x <= second(timestamp)
        min_element, second_element, max_element = \
            self._get_elements_with_min_max_timestamp(all_elements)
        end_time = second_element['timestamp']
        query_params = '?end_time=' + str(end_time)
        resp, selected_response_body = self.monasca_client.\
            list_alarms_state_history(query_params)
        selected_elements = selected_response_body['elements']

        # 3. compare #1 and #2
        expected_elements = all_elements
        expected_elements.remove(max_element)
        self.assertEqual(expected_elements, selected_elements)

    @decorators.attr(type="gate")
    def test_list_alarms_state_history_with_start_end_time(self):
        # 1. get all histories
        resp, all_response_body = self.monasca_client.\
            list_alarms_state_history()
        all_elements = all_response_body['elements']

        if len(all_elements) < 3:
            error_msg = "Failed test_list_alarms_state_history_with_" \
                        "start_end_time: need 3 or more alarms state history " \
                        "to test."
            self.fail(error_msg)

        # 2. query min(timestamp) <= x <= max(timestamp)
        min_element, second_element, max_element = \
            self._get_elements_with_min_max_timestamp(all_elements)
        start_time = min_element['timestamp']
        end_time = max_element['timestamp']
        query_params = '?start_time=' + str(start_time) + '&end_time=' + \
            str(end_time)
        resp, selected_response_body = self.monasca_client.\
            list_alarms_state_history(query_params)
        selected_elements = selected_response_body['elements']

        # 3. compare #1 and #2
        self.assertEqual(all_elements, selected_elements)

    @decorators.attr(type="gate")
    def test_list_alarms_state_history_with_offset_limit(self):
        resp, response_body = self.monasca_client.list_alarms_state_history()
        elements_set1 = response_body['elements']
        number_of_alarms = len(elements_set1)
        if number_of_alarms >= MIN_HISTORY:
            query_parms = '?limit=' + str(number_of_alarms)
            resp, response_body = self.monasca_client.\
                list_alarms_state_history(query_parms)
            self.assertEqual(200, resp.status)
            elements_set2 = response_body['elements']
            self.assertEqual(number_of_alarms, len(elements_set2))
            for index in range(MIN_HISTORY - 1):
                self.assertEqual(elements_set1[index], elements_set2[index])
            for index in range(MIN_HISTORY - 1):
                alarm_history = elements_set2[index]
                max_limit = len(elements_set2) - index
                for limit in range(1, max_limit):
                    first_index = index + 1
                    last_index = first_index + limit
                    expected_elements = elements_set2[first_index:last_index]

                    query_parms = '?offset=' + str(alarm_history['timestamp'])\
                        + '&limit=' + str(limit)
                    resp, response_body = self.\
                        monasca_client.list_alarms_state_history(query_parms)
                    self.assertEqual(200, resp.status)
                    new_elements = response_body['elements']
                    self.assertEqual(limit, len(new_elements))
                    for i in range(len(expected_elements)):
                        self.assertEqual(expected_elements[i], new_elements[i])
        else:
            error_msg = ("Failed test_list_alarms_state_history_with_offset_"
                         "limit: need three alarms state history to test. "
                         "Current number of alarms = {}").format(
                number_of_alarms)
            self.fail(error_msg)

    def _get_elements_with_min_max_timestamp(self, elements):
        sorted_elements = sorted(elements, key=lambda element: timeutils.
                                 parse_isotime(element['timestamp']))
        min_element = sorted_elements[0]
        second_element = sorted_elements[1]
        max_element = sorted_elements[-1]
        return min_element, second_element, max_element
@@ -1,249 +0,0 @@
# (C) Copyright 2016 Hewlett Packard Enterprise Development LP
# (C) Copyright 2017 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import time
from urllib import urlencode

from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
from tempest.lib import exceptions

from monasca_tempest_tests.tests.api import base
from monasca_tempest_tests.tests.api import constants
from monasca_tempest_tests.tests.api import helpers


class TestDimensions(base.BaseMonascaTest):

    @classmethod
    def resource_setup(cls):
        super(TestDimensions, cls).resource_setup()
        metric_name1 = data_utils.rand_name()
        name1 = "name_1"
        name2 = "name_2"
        value1 = "value_1"
        value2 = "value_2"

        timestamp = int(round(time.time() * 1000))
        time_iso = helpers.timestamp_to_iso(timestamp)

        metric1 = helpers.create_metric(name=metric_name1,
                                        dimensions={name1: value1,
                                                    name2: value2})
        cls.monasca_client.create_metrics(metric1)
        metric1 = helpers.create_metric(name=metric_name1,
                                        dimensions={name1: value2})
        cls.monasca_client.create_metrics(metric1)

        metric_name2 = data_utils.rand_name()
        name3 = "name_3"
        value3 = "value_3"
        metric2 = helpers.create_metric(name=metric_name2,
                                        dimensions={name3: value3})
        cls.monasca_client.create_metrics(metric2)

        metric_name3 = data_utils.rand_name()
        metric3 = helpers.create_metric(name=metric_name3,
                                        dimensions={name1: value3})

        cls.monasca_client.create_metrics(metric3)

        cls._test_metric1 = metric1
        cls._test_metric2 = metric2
        cls._test_metric_names = {metric_name1, metric_name2, metric_name3}
        cls._dim_names_metric1 = [name1, name2]
        cls._dim_names_metric2 = [name3]
        cls._dim_names = cls._dim_names_metric1 + cls._dim_names_metric2
        cls._dim_values_for_metric1 = [value1, value2]
        cls._dim_values = [value1, value2, value3]

        param = '?start_time=' + time_iso
        returned_name_set = set()
        for i in range(constants.MAX_RETRIES):
            resp, response_body = cls.monasca_client.list_metrics(param)
            elements = response_body['elements']
            metric_name1_count = 0
            for element in elements:
                returned_name_set.add(str(element['name']))
                if str(element['name']) == metric_name1:
                    metric_name1_count += 1
            # The Java InfluxDB implementation never returns both metric1
            # entries in the list, but the Python one does.
            if cls._test_metric_names.issubset(returned_name_set) \
                    and (metric_name1_count == 2 or i == constants.MAX_RETRIES - 1):
                return
            time.sleep(constants.RETRY_WAIT_SECS)

        assert False, 'Unable to initialize metrics'

    @classmethod
    def resource_cleanup(cls):
        super(TestDimensions, cls).resource_cleanup()

    @decorators.attr(type='gate')
    def test_list_dimension_values_without_metric_name(self):
        param = '?dimension_name=' + self._dim_names[0]
        resp, response_body = self.monasca_client.list_dimension_values(param)
        self.assertEqual(200, resp.status)
        self.assertTrue({'links', 'elements'} == set(response_body))
        response_values_length = len(response_body['elements'])
        values = [str(response_body['elements'][i]['dimension_value'])
                  for i in range(response_values_length)]
        self.assertEqual(values, self._dim_values)

    @decorators.attr(type='gate')
    def test_list_dimension_values_with_metric_name(self):
        parms = '?metric_name=' + self._test_metric1['name']
        parms += '&dimension_name=' + self._dim_names[0]
        resp, response_body = self.monasca_client.list_dimension_values(parms)
        self.assertEqual(200, resp.status)
        self.assertTrue({'links', 'elements'} == set(response_body))
        response_values_length = len(response_body['elements'])
        values = [str(response_body['elements'][i]['dimension_value'])
                  for i in range(response_values_length)]
        self.assertEqual(values, self._dim_values_for_metric1)

    @decorators.attr(type='gate')
    def test_list_dimension_values_limit_and_offset(self):
        param = '?dimension_name=' + self._dim_names[0]
        resp, response_body = self.monasca_client.list_dimension_values(param)
        self.assertEqual(200, resp.status)
        elements = response_body['elements']
        num_dim_values = len(elements)
        for limit in range(1, num_dim_values):
            start_index = 0
            params = [('limit', limit)]
            offset = None
            while True:
                num_expected_elements = limit
                if (num_expected_elements + start_index) > num_dim_values:
                    num_expected_elements = num_dim_values - start_index

                these_params = list(params)
                # If not the first call, use the offset returned by the last
                # call
                if offset:
                    these_params.extend([('offset', str(offset))])
                query_parms = '?dimension_name=' + self._dim_names[0] + '&' + \
                    urlencode(these_params)
                resp, response_body = \
                    self.monasca_client.list_dimension_values(query_parms)
                self.assertEqual(200, resp.status)
                if not response_body['elements']:
                    self.fail("No metrics returned")
                response_values_length = len(response_body['elements'])
                if response_values_length == 0:
                    self.fail("No dimension values returned")
                new_elements = [str(response_body['elements'][i]
                                    ['dimension_value']) for i in
                                range(response_values_length)]
                self.assertEqual(num_expected_elements, len(new_elements))

                expected_elements = elements[start_index:start_index + limit]
                expected_dimension_values = \
                    [expected_elements[i]['dimension_value'] for i in range(
                        len(expected_elements))]
                self.assertEqual(expected_dimension_values, new_elements)
                start_index += num_expected_elements
                if start_index >= num_dim_values:
                    break
                # Get the next set
                offset = self._get_offset(response_body)

    @decorators.attr(type='gate')
    @decorators.attr(type=['negative'])
    def test_list_dimension_values_no_dimension_name(self):
        self.assertRaises(exceptions.UnprocessableEntity,
                          self.monasca_client.list_dimension_values)

    @decorators.attr(type='gate')
    def test_list_dimension_names(self):
        resp, response_body = self.monasca_client.list_dimension_names()
        self.assertEqual(200, resp.status)
        self.assertTrue({'links', 'elements'} == set(response_body))
        response_names_length = len(response_body['elements'])
        names = [str(response_body['elements'][i]['dimension_name']) for i
                 in range(response_names_length)]
        self.assertEqual(names, self._dim_names)

    @decorators.attr(type='gate')
    def test_list_dimension_names_with_metric_name(self):
        self._test_list_dimension_names_with_metric_name(
            self._test_metric1['name'], self._dim_names_metric1)
        self._test_list_dimension_names_with_metric_name(
            self._test_metric2['name'], self._dim_names_metric2)

    @decorators.attr(type='gate')
    def test_list_dimension_names_limit_and_offset(self):
        resp, response_body = self.monasca_client.list_dimension_names()
        self.assertEqual(200, resp.status)
        elements = response_body['elements']
        num_dim_names = len(elements)
        for limit in range(1, num_dim_names):
            start_index = 0
            params = [('limit', limit)]
            offset = None
            while True:
                num_expected_elements = limit
                if (num_expected_elements + start_index) > num_dim_names:
                    num_expected_elements = num_dim_names - start_index

                these_params = list(params)
                # If not the first call, use the offset returned by the last
                # call
                if offset:
                    these_params.extend([('offset', str(offset))])
                query_parms = '?' + urlencode(these_params)
                resp, response_body = self.monasca_client.list_dimension_names(
                    query_parms)
                self.assertEqual(200, resp.status)
                if not response_body['elements']:
                    self.fail("No metrics returned")
                response_names_length = len(response_body['elements'])
                if response_names_length == 0:
                    self.fail("No dimension names returned")
                new_elements = [str(response_body['elements'][i]
                                    ['dimension_name']) for i in
                                range(response_names_length)]
                self.assertEqual(num_expected_elements, len(new_elements))

                expected_elements = elements[start_index:start_index + limit]
                expected_dimension_names = \
                    [expected_elements[i]['dimension_name'] for i in range(
                        len(expected_elements))]
                self.assertEqual(expected_dimension_names, new_elements)
                start_index += num_expected_elements
                if start_index >= num_dim_names:
                    break
                # Get the next set
                offset = self._get_offset(response_body)

    @decorators.attr(type='gate')
    @decorators.attr(type=['negative'])
    def test_list_dimension_names_with_wrong_metric_name(self):
        self._test_list_dimension_names_with_metric_name(
            'wrong_metric_name', [])

    def _test_list_dimension_names_with_metric_name(self, metric_name,
                                                    dimension_names):
        param = '?metric_name=' + metric_name
        resp, response_body = self.monasca_client.list_dimension_names(param)
        self.assertEqual(200, resp.status)
        self.assertTrue(set(['links', 'elements']) == set(response_body))
        response_names_length = len(response_body['elements'])
        names = [str(response_body['elements'][i]['dimension_name']) for i
                 in range(response_names_length)]
        self.assertEqual(names, dimension_names)
@ -1,409 +0,0 @@
|
|||||||
# (C) Copyright 2015-2016 Hewlett Packard Enterprise Development LP
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import time

from monasca_tempest_tests.tests.api import base
from monasca_tempest_tests.tests.api import constants
from monasca_tempest_tests.tests.api import helpers
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
from tempest.lib import exceptions

NUM_MEASUREMENTS = 50
ONE_SECOND = 1000


class TestMeasurements(base.BaseMonascaTest):

    @classmethod
    def resource_setup(cls):
        super(TestMeasurements, cls).resource_setup()

        start_timestamp = int(time.time() * 1000)
        start_time = str(helpers.timestamp_to_iso(start_timestamp))
        metrics = []
        name1 = data_utils.rand_name()
        name2 = data_utils.rand_name()
        cls._names_list = [name1, name2]
        key = data_utils.rand_name('key')
        value = data_utils.rand_name('value')
        cls._key = key
        cls._value = value
        cls._start_timestamp = start_timestamp

        for i in range(NUM_MEASUREMENTS):
            metric = helpers.create_metric(
                name=name1,
                timestamp=start_timestamp + (i * 10),
                value=i)
            metrics.append(metric)
        cls.monasca_client.create_metrics(metrics)

        # Create metric2 for test_list_measurements_with_dimensions
        metric2 = helpers.create_metric(
            name=name1, timestamp=start_timestamp + ONE_SECOND * 2,
            dimensions={key: value}, value=NUM_MEASUREMENTS)
        cls.monasca_client.create_metrics(metric2)

        # Create metric3 for test_list_measurements_with_offset_limit
        metric3 = [
            helpers.create_metric(
                name=name2, timestamp=start_timestamp + ONE_SECOND * 3,
                dimensions={'key1': 'value1', 'key2': 'value5',
                            'key3': 'value7'}),
            helpers.create_metric(
                name=name2, timestamp=start_timestamp + ONE_SECOND * 3 + 10,
                dimensions={'key1': 'value2', 'key2': 'value5',
                            'key3': 'value7'}),
            helpers.create_metric(
                name=name2, timestamp=start_timestamp + ONE_SECOND * 3 + 20,
                dimensions={'key1': 'value3', 'key2': 'value6',
                            'key3': 'value7'}),
            helpers.create_metric(
                name=name2, timestamp=start_timestamp + ONE_SECOND * 3 + 30,
                dimensions={'key1': 'value4', 'key2': 'value6',
                            'key3': 'value8'})
        ]
        cls.monasca_client.create_metrics(metric3)

        # Create metric4 for test_list_measurements_with_no_merge_metrics
        metric4 = helpers.create_metric(
            name=name1, timestamp=start_timestamp + ONE_SECOND * 4,
            dimensions={'key-1': 'value-1'},
            value=NUM_MEASUREMENTS + 1)
        cls.monasca_client.create_metrics(metric4)

        end_time = str(helpers.timestamp_to_iso(
            start_timestamp + NUM_MEASUREMENTS + ONE_SECOND * 5))
        queries = []
        queries.append('?name={}&start_time={}&end_time={}&merge_metrics=true'.
                       format(name1, start_time, end_time))
        queries.append('?name={}&start_time={}&end_time={}&merge_metrics=true'.
                       format(name2, start_time, end_time))

        for timer in range(constants.MAX_RETRIES):
            responses = map(cls.monasca_client.list_measurements, queries)
            resp_first = responses[0][0]
            response_body_first = responses[0][1]
            resp_second = responses[1][0]
            response_body_second = responses[1][1]
            if resp_first.status == 200 and resp_second.status == 200 \
                    and len(response_body_first['elements']) == 1 \
                    and len(response_body_second['elements']) == 1:
                len_meas_first = len(
                    response_body_first['elements'][0]['measurements'])
                len_meas_second = len(
                    response_body_second['elements'][0]['measurements'])
                if len_meas_first == NUM_MEASUREMENTS + 2 \
                        and len_meas_second == 4:
                    break
                else:
                    time.sleep(constants.RETRY_WAIT_SECS)
            else:
                time.sleep(constants.RETRY_WAIT_SECS)

        cls._start_time = start_time
        cls._end_time = end_time

    @decorators.attr(type="gate")
    def test_list_measurements(self):
        query_parms = '?name=' + str(self._names_list[0]) + \
                      '&merge_metrics=true' + \
                      '&start_time=' + str(self._start_time) + \
                      '&end_time=' + str(self._end_time)
        resp, response_body = self.monasca_client.list_measurements(
            query_parms)
        self._verify_list_measurements(resp, response_body)
        elements = response_body['elements']
        self._verify_list_measurements_elements(
            elements=elements, test_key=None, test_value=None)
        measurements = elements[0]['measurements']
        self._verify_list_measurements_meas_len(
            measurements, test_len=NUM_MEASUREMENTS + 2)
        i = 0
        for measurement in measurements:
            self._verify_list_measurements_measurement(measurement, i)
            i += 1

    @decorators.attr(type="gate")
    @decorators.attr(type=['negative'])
    def test_list_measurements_with_no_start_time(self):
        query_parms = '?name=' + str(self._names_list[0])
        self.assertRaises(exceptions.UnprocessableEntity,
                          self.monasca_client.list_measurements, query_parms)

    @decorators.attr(type="gate")
    @decorators.attr(type=['negative'])
    def test_list_measurements_with_no_name(self):
        query_parms = '?start_time=' + str(self._start_time) + '&end_time=' + \
                      str(self._end_time)
        self.assertRaises(exceptions.UnprocessableEntity,
                          self.monasca_client.list_measurements, query_parms)

    @decorators.attr(type="gate")
    def test_list_measurements_with_dimensions(self):
        query_parms = '?name=' + self._names_list[0] + '&start_time=' + \
                      str(self._start_time) + '&end_time=' + \
                      str(self._end_time) + '&dimensions=' + self._key + ':' \
                      + self._value
        resp, response_body = self.monasca_client.list_measurements(
            query_parms)
        self._verify_list_measurements(resp, response_body)
        elements = response_body['elements']
        self._verify_list_measurements_elements(
            elements=elements, test_key=None, test_value=None)
        measurements = elements[0]['measurements']
        self._verify_list_measurements_meas_len(measurements, 1)
        measurement = measurements[0]
        self._verify_list_measurements_measurement(
            measurement=measurement, test_value=NUM_MEASUREMENTS)

    @decorators.attr(type="gate")
    def test_list_measurements_with_endtime(self):
        time_iso = helpers.timestamp_to_iso(
            self._start_timestamp + ONE_SECOND * 2)
        query_parms = '?name=' + str(self._names_list[0]) + \
                      '&merge_metrics=true' \
                      '&start_time=' + str(self._start_time) + \
                      '&end_time=' + str(time_iso)
        resp, response_body = self.monasca_client.list_measurements(
            query_parms)
        self._verify_list_measurements(resp, response_body)
        elements = response_body['elements']
        self._verify_list_measurements_elements(elements=elements,
                                                test_key=None,
                                                test_value=None)
        measurements = elements[0]['measurements']
        self._verify_list_measurements_meas_len(measurements=measurements,
                                                test_len=NUM_MEASUREMENTS)

    @decorators.attr(type="gate")
    @decorators.attr(type=['negative'])
    def test_list_measurements_with_endtime_equals_starttime(self):
        query_parms = '?name=' + str(self._names_list[0]) + \
                      '&merge_metrics=true' \
                      '&start_time=' + str(self._start_time) + \
                      '&end_time=' + str(self._start_time)
        self.assertRaises(exceptions.BadRequest,
                          self.monasca_client.list_measurements, query_parms)

    @decorators.attr(type="gate")
    def test_list_measurements_with_offset_limit(self):
        query_parms = '?name=' + str(self._names_list[1]) + \
                      '&merge_metrics=true&start_time=' + self._start_time + \
                      '&end_time=' + self._end_time
        resp, response_body = self.monasca_client.list_measurements(
            query_parms)
        self._verify_list_measurements(resp, response_body)
        elements = response_body['elements']
        self._verify_list_measurements_elements(elements=elements,
                                                test_key=None,
                                                test_value=None)
        measurements = elements[0]['measurements']
        self._verify_list_measurements_meas_len(measurements=measurements,
                                                test_len=4)

        for measurement_index in range(1, len(measurements) - 3):
            max_limit = len(measurements) - measurement_index

            # Get first offset from api
            query_parms = '?name=' + str(self._names_list[1]) + \
                          '&merge_metrics=true&start_time=' + \
                          measurements[measurement_index - 1][0] + \
                          '&end_time=' + self._end_time + \
                          '&limit=1'
            resp, response_body = self.monasca_client.list_measurements(
                query_parms)
            for link in response_body['links']:
                if link['rel'] == 'next':
                    next_link = link['href']
            if not next_link:
                self.fail("No next link returned with query parameters: {}".
                          format(query_parms))
            offset = helpers.get_query_param(next_link, "offset")

            first_index = measurement_index + 1

            for limit in range(1, max_limit):
                last_index = measurement_index + limit + 1
                expected_measurements = measurements[first_index:last_index]

                query_parms = '?name=' + str(self._names_list[1]) + \
                              '&merge_metrics=true&start_time=' + \
                              self._start_time + '&end_time=' + \
                              self._end_time + '&limit=' + str(limit) + \
                              '&offset=' + str(offset)

                resp, response_body = self.monasca_client.list_measurements(
                    query_parms)
                self._verify_list_measurements(resp, response_body)
                new_measurements = response_body['elements'][0]['measurements']

                self.assertEqual(limit, len(new_measurements))
                for i in range(len(expected_measurements)):
                    self.assertEqual(expected_measurements[i],
                                     new_measurements[i])

    @decorators.attr(type="gate")
    def test_list_measurements_with_merge_metrics(self):
        query_parms = '?name=' + str(self._names_list[0]) + \
                      '&merge_metrics=true' + \
                      '&start_time=' + str(self._start_time) + \
                      '&end_time=' + str(self._end_time)
        resp, response_body = self.monasca_client.list_measurements(
            query_parms)
        self.assertEqual(200, resp.status)

    @decorators.attr(type="gate")
    def test_list_measurements_with_group_by_one(self):
        query_parms = '?name=' + str(self._names_list[1]) + \
                      '&group_by=key2' + \
                      '&start_time=' + str(self._start_time) + \
                      '&end_time=' + str(self._end_time)
        resp, response_body = self.monasca_client.list_measurements(
            query_parms)
        self.assertEqual(200, resp.status)
        elements = response_body['elements']
        self.assertEqual(len(elements), 2)
        self._verify_list_measurements_elements(elements, None, None)
        for measurements in elements:
            self.assertEqual(1, len(measurements['dimensions'].keys()))
            self.assertEqual([u'key2'], measurements['dimensions'].keys())

    @decorators.attr(type="gate")
    def test_list_measurements_with_group_by_multiple(self):
        query_parms = '?name=' + str(self._names_list[1]) + \
                      '&group_by=key2,key3' + \
                      '&start_time=' + str(self._start_time) + \
                      '&end_time=' + str(self._end_time)
        resp, response_body = self.monasca_client.list_measurements(
            query_parms)
        self.assertEqual(200, resp.status)
        elements = response_body['elements']
        self.assertEqual(len(elements), 3)
        self._verify_list_measurements_elements(elements, None, None)
        for measurements in elements:
            self.assertEqual(2, len(measurements['dimensions'].keys()))
            self.assertEqual({u'key2', u'key3'},
                             set(measurements['dimensions'].keys()))

    @decorators.attr(type="gate")
    def test_list_measurements_with_group_by_all(self):
        query_parms = '?name=' + str(self._names_list[1]) + \
                      '&group_by=*' + \
                      '&start_time=' + str(self._start_time) + \
                      '&end_time=' + str(self._end_time)
        resp, response_body = self.monasca_client.list_measurements(
            query_parms)
        self.assertEqual(200, resp.status)
        elements = response_body['elements']
        self.assertEqual(len(elements), 4)
        self._verify_list_measurements_elements(elements, None, None)

    @decorators.attr(type="gate")
    def test_list_measurements_with_group_by_and_merge(self):
        query_parms = '?name=' + str(self._names_list[1]) + \
                      '&group_by=*' + \
                      '&merge_metrics=true' + \
                      '&start_time=' + str(self._start_time) + \
                      '&end_time=' + str(self._end_time)
        resp, response_body = self.monasca_client.list_measurements(
            query_parms)
        self.assertEqual(200, resp.status)
        elements = response_body['elements']
        self.assertEqual(len(elements), 4)
        self._verify_list_measurements_elements(elements, None, None)

    @decorators.attr(type="gate")
    @decorators.attr(type=['negative'])
    def test_list_measurements_with_name_exceeds_max_length(self):
        long_name = "x" * (constants.MAX_LIST_MEASUREMENTS_NAME_LENGTH + 1)
        query_parms = '?name=' + str(long_name) + '&merge_metrics=true' + \
                      '&start_time=' + str(self._start_time) + \
                      '&end_time=' + str(self._end_time)
        self.assertRaises(exceptions.UnprocessableEntity,
                          self.monasca_client.list_measurements, query_parms)

    @decorators.attr(type="gate")
    @decorators.attr(type=['negative'])
    def test_list_measurements_with_no_merge_metrics(self):
        query_parms = '?name=' + str(self._names_list[0]) + \
                      '&start_time=' + str(self._start_time) + '&end_time=' \
                      + str(self._end_time)
        self.assertRaises(exceptions.Conflict,
                          self.monasca_client.list_measurements, query_parms)

    @decorators.attr(type="gate")
    def test_list_measurements_with_duplicate_query_param_merges_positive(
            self):
        queries = []
        queries.append('?name={}&merge_metrics=true&start_time={}&end_time={'
                       '}&merge_metrics=true'.
                       format(self._names_list[0], self._start_time,
                              self._end_time))
        queries.append('?name={}&merge_metrics=true&start_time={}&end_time={'
                       '}&merge_metrics=false'.
                       format(self._names_list[0], self._start_time,
                              self._end_time))
        responses = map(self.monasca_client.list_measurements, queries)
        for i in range(2):
            self._verify_list_measurements(responses[i][0], responses[i][1])

    @decorators.attr(type="gate")
    @decorators.attr(type=['negative'])
    def test_list_measurements_with_duplicate_query_param_merges_negative(
            self):
        queries = []
        queries.append('?name={}&merge_metrics=false&start_time={}&end_time={'
                       '}&merge_metrics=true'.
                       format(self._names_list[0], self._start_time,
                              self._end_time))
        queries.append('?name={}&merge_metrics=false&start_time={}&end_time={'
                       '}&merge_metrics=false'.
                       format(self._names_list[0], self._start_time,
                              self._end_time))
        for i in range(2):
            self.assertRaises(exceptions.Conflict,
                              self.monasca_client.list_measurements,
                              queries[i])

    def _verify_list_measurements_measurement(self, measurement, test_value):
        self.assertEqual(test_value, float(measurement[1]))

    def _verify_list_measurements(self, resp, response_body):
        self.assertEqual(200, resp.status)
        self.assertTrue(set(['links', 'elements']) == set(response_body))

    def _verify_list_measurements_elements(self, elements, test_key,
                                           test_value):
        if not elements:
            error_msg = "Failed: at least one element is needed. " \
                        "Number of element = 0."
            self.fail(error_msg)

        for element in elements:
            # element = elements[0]
            self.assertEqual(set(element),
                             set(['columns', 'dimensions', 'id',
                                  'measurements', 'name']))
            self.assertTrue(type(element['name']) is unicode)
            self.assertTrue(type(element['dimensions']) is dict)
            self.assertTrue(type(element['columns']) is list)
            self.assertTrue(type(element['measurements']) is list)
            self.assertEqual(set(element['columns']),
                             set(['timestamp', 'value', 'value_meta']))
            self.assertTrue(str(element['id']) is not None)
            if test_key is not None and test_value is not None:
                self.assertEqual(str(element['dimensions'][test_key]),
                                 test_value)

    def _verify_list_measurements_meas_len(self, measurements, test_len):
        if measurements:
            len_measurements = len(measurements)
            self.assertEqual(len_measurements, test_len)
        else:
            error_msg = "Failed: one specific measurement is needed. " \
                        "Number of measurements = 0"
            self.fail(error_msg)
@@ -1,711 +0,0 @@
# -*- coding: utf-8 -*-
# (C) Copyright 2014-2016 Hewlett Packard Enterprise Development LP
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

# TODO(RMH): Check if ' should be added in the list of INVALID_CHARS.
# TODO(RMH): test_create_metric_no_value, should return 422 if value not sent
import time

from six.moves import urllib_parse as urlparse

from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
from tempest.lib import exceptions

from monasca_tempest_tests.tests.api import base
from monasca_tempest_tests.tests.api import constants
from monasca_tempest_tests.tests.api import helpers


class TestMetrics(base.BaseMonascaTest):

    @decorators.attr(type='gate')
    def test_create_metric(self):
        name = data_utils.rand_name('name')
        key = data_utils.rand_name('key')
        value = data_utils.rand_name('value')
        timestamp = int(round(time.time() * 1000))
        time_iso = helpers.timestamp_to_iso(timestamp)
        end_timestamp = int(round((time.time() + 3600 * 24) * 1000))
        end_time_iso = helpers.timestamp_to_iso(end_timestamp)
        value_meta_key = data_utils.rand_name('value_meta_key')
        value_meta_value = data_utils.rand_name('value_meta_value')
        metric = helpers.create_metric(name=name,
                                       dimensions={key: value},
                                       timestamp=timestamp,
                                       value=1.23,
                                       value_meta={
                                           value_meta_key: value_meta_value
                                       })
        resp, response_body = self.monasca_client.create_metrics(metric)
        self.assertEqual(204, resp.status)
        query_param = '?name=' + name + '&start_time=' + time_iso + \
                      '&end_time=' + end_time_iso
        for i in range(constants.MAX_RETRIES):
            resp, response_body = self.monasca_client.\
                list_measurements(query_param)
            self.assertEqual(200, resp.status)
            elements = response_body['elements']
            for element in elements:
                if str(element['name']) == name:
                    self._verify_list_measurements_element(element, key, value)
                    measurement = element['measurements'][0]
                    self._verify_list_measurements_measurement(
                        measurement, metric, value_meta_key, value_meta_value)
                    return
            time.sleep(constants.RETRY_WAIT_SECS)
            if i == constants.MAX_RETRIES - 1:
                error_msg = "Failed test_create_metric: " \
                            "timeout on waiting for metrics: at least " \
                            "one metric is needed. Current number of " \
                            "metrics = 0"
                self.fail(error_msg)

    @decorators.attr(type='gate')
    def test_create_metric_with_multibyte_character(self):
        name = data_utils.rand_name('name').decode('utf8')
        key = data_utils.rand_name('key').decode('utf8')
        value = data_utils.rand_name('value').decode('utf8')
        timestamp = int(round(time.time() * 1000))
        time_iso = helpers.timestamp_to_iso(timestamp)
        end_timestamp = int(round((time.time() + 3600 * 24) * 1000))
        end_time_iso = helpers.timestamp_to_iso(end_timestamp)
        value_meta_key = data_utils.rand_name('value_meta_key').decode('utf8')
        value_meta_value = data_utils.rand_name(
            'value_meta_value').decode('utf8')
        metric = helpers.create_metric(name=name,
                                       dimensions={key: value},
                                       timestamp=timestamp,
                                       value=1.23,
                                       value_meta={
                                           value_meta_key: value_meta_value
                                       })
        resp, response_body = self.monasca_client.create_metrics(metric)
        self.assertEqual(204, resp.status)
        query_param = '?name=' + urlparse.quote(name.encode('utf8')) + \
                      '&start_time=' + time_iso + '&end_time=' + end_time_iso
        for i in range(constants.MAX_RETRIES):
            resp, response_body = self.monasca_client.\
                list_measurements(query_param)
            self.assertEqual(200, resp.status)
            elements = response_body['elements']
            for element in elements:
                if element['name'] == name:
                    self._verify_list_measurements_element(element, key, value)
                    measurement = element['measurements'][0]
                    self._verify_list_measurements_measurement(
                        measurement, metric, value_meta_key, value_meta_value)
                    return
            time.sleep(constants.RETRY_WAIT_SECS)
            if i == constants.MAX_RETRIES - 1:
                error_msg = "Failed test_create_metric: " \
                            "timeout on waiting for metrics: at least " \
                            "one metric is needed. Current number of " \
                            "metrics = 0"
                self.fail(error_msg)

    @decorators.attr(type='gate')
    def test_create_metrics(self):
        name = data_utils.rand_name('name')
        key = data_utils.rand_name('key')
        value = data_utils.rand_name('value')
        timestamp = int(round(time.time() * 1000))
        time_iso = helpers.timestamp_to_iso(timestamp)
        end_timestamp = int(round(timestamp + 3600 * 24 * 1000))
        end_time_iso = helpers.timestamp_to_iso(end_timestamp)
        value_meta_key1 = data_utils.rand_name('meta_key')
        value_meta_value1 = data_utils.rand_name('meta_value')
        value_meta_key2 = data_utils.rand_name('value_meta_key')
        value_meta_value2 = data_utils.rand_name('value_meta_value')
        metrics = [
            helpers.create_metric(name=name,
                                  dimensions={key: value},
                                  timestamp=timestamp,
                                  value=1.23,
                                  value_meta={
                                      value_meta_key1: value_meta_value1
                                  }),
            helpers.create_metric(name=name,
                                  dimensions={key: value},
                                  timestamp=timestamp + 6000,
                                  value=4.56,
                                  value_meta={
                                      value_meta_key2: value_meta_value2
                                  })
        ]
        resp, response_body = self.monasca_client.create_metrics(metrics)
        self.assertEqual(204, resp.status)
        query_param = '?name=' + name + '&start_time=' + str(time_iso) + \
                      '&end_time=' + str(end_time_iso)
        for i in range(constants.MAX_RETRIES):
            resp, response_body = self.monasca_client.\
                list_measurements(query_param)
            self.assertEqual(200, resp.status)
            elements = response_body['elements']
            for element in elements:
                if str(element['name']) == name \
                        and len(element['measurements']) == 2:
                    self._verify_list_measurements_element(element, key, value)
                    first_measurement = element['measurements'][0]
                    second_measurement = element['measurements'][1]
                    self._verify_list_measurements_measurement(
                        first_measurement, metrics[0], value_meta_key1,
                        value_meta_value1)
                    self._verify_list_measurements_measurement(
                        second_measurement, metrics[1], value_meta_key2,
                        value_meta_value2)
                    return
            time.sleep(constants.RETRY_WAIT_SECS)
            if i == constants.MAX_RETRIES - 1:
                error_msg = "Failed test_create_metrics: " \
                            "timeout on waiting for metrics: at least " \
                            "one metric is needed. Current number of " \
                            "metrics = 0"
                self.fail(error_msg)

    @decorators.attr(type='gate')
    @decorators.attr(type=['negative'])
    def test_create_metric_with_no_name(self):
        metric = helpers.create_metric(name=None)
        self.assertRaises(exceptions.UnprocessableEntity,
                          self.monasca_client.create_metrics,
                          metric)

    @decorators.attr(type='gate')
    @decorators.attr(type=['negative'])
    def test_create_metric_with_empty_name(self):
        metric = helpers.create_metric(name='')
        self.assertRaises(exceptions.UnprocessableEntity,
                          self.monasca_client.create_metrics,
                          metric)

    @decorators.attr(type='gate')
    @decorators.attr(type=['negative'])
    def test_create_metric_with_empty_value_in_dimensions(self):
        name = data_utils.rand_name('name')
        metric = helpers.create_metric(name=name,
                                       dimensions={'key': ''})
        self.assertRaises(exceptions.UnprocessableEntity,
                          self.monasca_client.create_metrics,
                          metric)

    @decorators.attr(type='gate')
    @decorators.attr(type=['negative'])
    def test_create_metric_with_empty_key_in_dimensions(self):
        name = data_utils.rand_name('name')
        metric = helpers.create_metric(name=name,
                                       dimensions={'': 'value'})
        self.assertRaises(exceptions.UnprocessableEntity,
                          self.monasca_client.create_metrics,
                          metric)

    @decorators.attr(type='gate')
    def test_create_metric_with_no_dimensions(self):
        name = data_utils.rand_name('name')
        timestamp = int(round(time.time() * 1000))
        time_iso = helpers.timestamp_to_iso(timestamp)
        end_timestamp = int(round(timestamp + 3600 * 24 * 1000))
        end_time_iso = helpers.timestamp_to_iso(end_timestamp)
        value_meta_key = data_utils.rand_name('value_meta_key')
        value_meta_value = data_utils.rand_name('value_meta_value')
        metric = helpers.create_metric(name=name,
                                       dimensions=None,
                                       timestamp=timestamp,
                                       value=1.23,
                                       value_meta={
                                           value_meta_key: value_meta_value})
        resp, response_body = self.monasca_client.create_metrics(metric)
        self.assertEqual(204, resp.status)
        query_param = '?name=' + str(name) + '&start_time=' + str(time_iso) \
                      + '&end_time=' + str(end_time_iso)
        for i in range(constants.MAX_RETRIES):
            resp, response_body = self.monasca_client.\
                list_measurements(query_param)
            self.assertEqual(200, resp.status)
            elements = response_body['elements']
            for element in elements:
                if str(element['name']) == name:
                    self._verify_list_measurements_element(
                        element, test_key=None, test_value=None)
                    if len(element['measurements']) > 0:
                        measurement = element['measurements'][0]
                        self._verify_list_measurements_measurement(
                            measurement, metric, value_meta_key,
                            value_meta_value)
                        return
            time.sleep(constants.RETRY_WAIT_SECS)
            if i == constants.MAX_RETRIES - 1:
                error_msg = "Failed test_create_metric_with_no_dimensions: " \
                            "timeout on waiting for metrics: at least " \
                            "one metric is needed. Current number of " \
                            "metrics = 0"
                self.fail(error_msg)

    @decorators.attr(type='gate')
    def test_create_metric_with_colon_in_dimension_value(self):
        name = data_utils.rand_name('name')
        key = 'url'
        value = 'http://localhost:8070/v2.0'
        timestamp = int(round(time.time() * 1000))
        time_iso = helpers.timestamp_to_iso(timestamp)
        end_timestamp = int(round((time.time() + 3600 * 24) * 1000))
        end_time_iso = helpers.timestamp_to_iso(end_timestamp)
        metric = helpers.create_metric(name=name,
                                       dimensions={key: value})
        resp, response_body = self.monasca_client.create_metrics(metric)
        self.assertEqual(204, resp.status)
        query_param = '?name=' + name + '&start_time=' + time_iso + \
                      '&end_time=' + end_time_iso + \
                      '&dimensions=' + key + ':' + value
        for i in range(constants.MAX_RETRIES):
            resp, response_body = self.monasca_client. \
                list_measurements(query_param)
            self.assertEqual(200, resp.status)
            elements = response_body['elements']
            for element in elements:
                if str(element['name']) == name:
                    self._verify_list_measurements_element(element, key, value)
                    measurement = element['measurements'][0]
                    self._verify_list_measurements_measurement(
                        measurement, metric, None, None)
                    return
            time.sleep(constants.RETRY_WAIT_SECS)
            if i == constants.MAX_RETRIES - 1:
                error_msg = "Failed test_create_metric: " \
                            "timeout on waiting for metrics: at least " \
                            "one metric is needed. Current number of " \
                            "metrics = 0"
                self.fail(error_msg)

@decorators.attr(type='gate')
|
|
||||||
@decorators.attr(type=['negative'])
|
|
||||||
def test_create_metric_with_no_timestamp(self):
|
|
||||||
metric = helpers.create_metric()
|
|
||||||
metric['timestamp'] = None
|
|
||||||
self.assertRaises(exceptions.UnprocessableEntity,
|
|
||||||
self.monasca_client.create_metrics,
|
|
||||||
metric)
|
|
||||||
|
|
||||||
@decorators.attr(type='gate')
|
|
||||||
@decorators.attr(type=['negative'])
|
|
||||||
def test_create_metric_no_value(self):
|
|
||||||
timestamp = int(round(time.time() * 1000))
|
|
||||||
metric = helpers.create_metric(timestamp=timestamp,
|
|
||||||
value=None)
|
|
||||||
self.assertRaises(exceptions.UnprocessableEntity,
|
|
||||||
self.monasca_client.create_metrics,
|
|
||||||
metric)
|
|
||||||
|
|
||||||
@decorators.attr(type='gate')
|
|
||||||
@decorators.attr(type=['negative'])
|
|
||||||
def test_create_metric_with_name_exceeds_max_length(self):
|
|
||||||
long_name = "x" * (constants.MAX_METRIC_NAME_LENGTH + 1)
|
|
||||||
metric = helpers.create_metric(long_name)
|
|
||||||
self.assertRaises(exceptions.UnprocessableEntity,
|
|
||||||
self.monasca_client.create_metrics,
|
|
||||||
metric)
|
|
||||||
|
|
||||||
@decorators.attr(type='gate')
|
|
||||||
@decorators.attr(type=['negative'])
|
|
||||||
def test_create_metric_with_invalid_chars_in_name(self):
|
|
||||||
for invalid_char in constants.INVALID_NAME_CHARS:
|
|
||||||
metric = helpers.create_metric(invalid_char)
|
|
||||||
self.assertRaises(exceptions.UnprocessableEntity,
|
|
||||||
self.monasca_client.create_metrics,
|
|
||||||
metric)
|
|
||||||
|
|
||||||
@decorators.attr(type='gate')
|
|
||||||
@decorators.attr(type=['negative'])
|
|
||||||
def test_create_metric_with_invalid_chars_in_dimensions(self):
|
|
||||||
for invalid_char in constants.INVALID_DIMENSION_CHARS:
|
|
||||||
metric = helpers.create_metric('name-1', {'key-1': invalid_char})
|
|
||||||
self.assertRaises(exceptions.UnprocessableEntity,
|
|
||||||
self.monasca_client.create_metrics,
|
|
||||||
metric)
|
|
||||||
for invalid_char in constants.INVALID_DIMENSION_CHARS:
|
|
||||||
metric = helpers.create_metric('name-1', {invalid_char: 'value-1'})
|
|
||||||
self.assertRaises(exceptions.UnprocessableEntity,
|
|
||||||
self.monasca_client.create_metrics,
|
|
||||||
metric)
|
|
||||||
|
|
||||||
@decorators.attr(type='gate')
|
|
||||||
@decorators.attr(type=['negative'])
|
|
||||||
def test_create_metric_dimension_key_exceeds_max_length(self):
|
|
||||||
long_key = "x" * (constants.MAX_DIMENSION_KEY_LENGTH + 1)
|
|
||||||
metric = helpers.create_metric('name-1', {long_key: 'value-1'})
|
|
||||||
self.assertRaises(exceptions.UnprocessableEntity,
|
|
||||||
self.monasca_client.create_metrics,
|
|
||||||
metric)
|
|
||||||
|
|
||||||
@decorators.attr(type='gate')
|
|
||||||
@decorators.attr(type=['negative'])
|
|
||||||
def test_create_metric_dimension_value_exceeds_max_length(self):
|
|
||||||
long_value = "x" * (constants.MAX_DIMENSION_VALUE_LENGTH + 1)
|
|
||||||
metric = helpers.create_metric('name-1', {'key-1': long_value})
|
|
||||||
self.assertRaises(exceptions.UnprocessableEntity,
|
|
||||||
self.monasca_client.create_metrics,
|
|
||||||
metric)
|
|
||||||
|
|
||||||
@decorators.attr(type='gate')
|
|
||||||
@decorators.attr(type=['negative'])
|
|
||||||
def test_create_metric_with_value_meta_name_exceeds_max_length(self):
|
|
||||||
long_value_meta_name = "x" * (constants.MAX_VALUE_META_NAME_LENGTH + 1)
|
|
||||||
value_meta_dict = {long_value_meta_name: "value_meta_value"}
|
|
||||||
metric = helpers.create_metric(name='name', value_meta=value_meta_dict)
|
|
||||||
self.assertRaises(exceptions.UnprocessableEntity,
|
|
||||||
self.monasca_client.create_metrics,
|
|
||||||
metric)
|
|
||||||
|
|
||||||
@decorators.attr(type='gate')
|
|
||||||
@decorators.attr(type=['negative'])
|
|
||||||
def test_create_metric_with_value_meta_exceeds_max_length(self):
|
|
||||||
value_meta_name = "x"
|
|
||||||
long_value_meta_value = "y" * constants.MAX_VALUE_META_TOTAL_LENGTH
|
|
||||||
value_meta_dict = {value_meta_name: long_value_meta_value}
|
|
||||||
metric = helpers.create_metric(name='name', value_meta=value_meta_dict)
|
|
||||||
self.assertRaises(exceptions.UnprocessableEntity,
|
|
||||||
self.monasca_client.create_metrics,
|
|
||||||
metric)
|
|
||||||
|
|
||||||
@decorators.attr(type='gate')
|
|
||||||
def test_list_metrics(self):
|
|
||||||
resp, response_body = self.monasca_client.list_metrics()
|
|
||||||
self.assertEqual(200, resp.status)
|
|
||||||
self.assertTrue(set(['links', 'elements']) == set(response_body))
|
|
||||||
elements = response_body['elements']
|
|
||||||
element = elements[0]
|
|
||||||
self._verify_list_metrics_element(element, test_key=None,
|
|
||||||
test_value=None, test_name=None)
|
|
||||||
self.assertTrue(set(['id', 'name', 'dimensions']) == set(element))
|
|
||||||
|
|
||||||
@decorators.attr(type='gate')
|
|
||||||
def test_list_metrics_with_dimensions(self):
|
|
||||||
name = data_utils.rand_name('name')
|
|
||||||
key = data_utils.rand_name('key')
|
|
||||||
value = data_utils.rand_name('value')
|
|
||||||
metric = helpers.create_metric(name=name, dimensions={key: value})
|
|
||||||
resp, response_body = self.monasca_client.create_metrics(metric)
|
|
||||||
self.assertEqual(204, resp.status)
|
|
||||||
query_param = '?dimensions=' + key + ':' + value
|
|
||||||
for i in range(constants.MAX_RETRIES):
|
|
||||||
resp, response_body = self.monasca_client.list_metrics(query_param)
|
|
||||||
self.assertEqual(200, resp.status)
|
|
||||||
elements = response_body['elements']
|
|
||||||
for element in elements:
|
|
||||||
if str(element['dimensions'][key]) == value:
|
|
||||||
self._verify_list_metrics_element(element, test_name=name)
|
|
||||||
return
|
|
||||||
time.sleep(constants.RETRY_WAIT_SECS)
|
|
||||||
if i == constants.MAX_RETRIES - 1:
|
|
||||||
error_msg = "Failed test_list_metrics_with_dimensions: " \
|
|
||||||
"timeout on waiting for metrics: at least " \
|
|
||||||
"one metric is needed. Current number of " \
|
|
||||||
"metrics = 0"
|
|
||||||
self.fail(error_msg)
|
|
||||||
|
|
||||||
@decorators.attr(type='gate')
|
|
||||||
def test_list_metrics_dimension_query_multi_value_with_diff_names(self):
|
|
||||||
metrics, name, key_service, values = \
|
|
||||||
self._create_metrics_with_different_dimensions(same_name=False)
|
|
||||||
metric_dimensions = self._get_metric_dimensions(
|
|
||||||
key_service, values, same_metric_name=False)
|
|
||||||
query_param = '?dimensions=' + key_service + ':' + values[0] + '|' +\
|
|
||||||
values[1]
|
|
||||||
self._verify_dimensions(query_param, metric_dimensions)
|
|
||||||
|
|
||||||
@decorators.attr(type='gate')
|
|
||||||
def test_list_metrics_dimension_query_no_value_with_diff_names(self):
|
|
||||||
metrics, name, key_service, values = \
|
|
||||||
self._create_metrics_with_different_dimensions(same_name=False)
|
|
||||||
metric_dimensions = self._get_metric_dimensions(
|
|
||||||
key_service, values, same_metric_name=False)
|
|
||||||
query_param = '?dimensions=' + key_service
|
|
||||||
self._verify_dimensions(query_param, metric_dimensions)
|
|
||||||
|
|
||||||
@decorators.attr(type='gate')
|
|
||||||
def test_list_metrics_dimension_query_multi_value_with_same_name(self):
|
|
||||||
# Skip the test for now due to InfluxDB Inconsistency
|
|
||||||
return
|
|
||||||
metrics, name, key_service, values = \
|
|
||||||
self._create_metrics_with_different_dimensions(same_name=True)
|
|
||||||
metric_dimensions = self._get_metric_dimensions(
|
|
||||||
key_service, values, same_metric_name=True)
|
|
||||||
query_param = '?name=' + name + '&dimensions=' + key_service + ':' +\
|
|
||||||
values[0] + '|' + values[1]
|
|
||||||
self._verify_dimensions(query_param, metric_dimensions)
|
|
||||||
|
|
||||||
@decorators.attr(type='gate')
|
|
||||||
def test_list_metrics_dimension_query_no_value_with_same_name(self):
|
|
||||||
# Skip the test for now due to InfluxDB Inconsistency
|
|
||||||
return
|
|
||||||
metrics, name, key_service, values = \
|
|
||||||
self._create_metrics_with_different_dimensions(same_name=True)
|
|
||||||
metric_dimensions = self._get_metric_dimensions(
|
|
||||||
key_service, values, same_metric_name=True)
|
|
||||||
query_param = '?name=' + name + '&dimensions=' + key_service
|
|
||||||
self._verify_dimensions(query_param, metric_dimensions)
|
|
||||||
|
|
||||||
@decorators.attr(type='gate')
|
|
||||||
def test_list_metrics_with_name(self):
|
|
||||||
name = data_utils.rand_name('name')
|
|
||||||
key = data_utils.rand_name('key')
|
|
||||||
value = data_utils.rand_name('value')
|
|
||||||
metric = helpers.create_metric(name=name,
|
|
||||||
dimensions={key: value})
|
|
||||||
resp, response_body = self.monasca_client.create_metrics(metric)
|
|
||||||
self.assertEqual(204, resp.status)
|
|
||||||
query_param = '?name=' + str(name)
|
|
||||||
for i in range(constants.MAX_RETRIES):
|
|
||||||
resp, response_body = self.monasca_client.list_metrics(query_param)
|
|
||||||
self.assertEqual(200, resp.status)
|
|
||||||
elements = response_body['elements']
|
|
||||||
for element in elements:
|
|
||||||
if str(element['name']) == name:
|
|
||||||
self._verify_list_metrics_element(element, test_key=key,
|
|
||||||
test_value=value)
|
|
||||||
return
|
|
||||||
time.sleep(constants.RETRY_WAIT_SECS)
|
|
||||||
if i == constants.MAX_RETRIES - 1:
|
|
||||||
error_msg = "Failed test_list_metrics_with_name: " \
|
|
||||||
"timeout on waiting for metrics: at least " \
|
|
||||||
"one metric is needed. Current number of " \
|
|
||||||
"metrics = 0"
|
|
||||||
self.fail(error_msg)
|
|
||||||
|
|
||||||
@decorators.attr(type='gate')
|
|
||||||
def test_list_metrics_with_project(self):
|
|
||||||
name = data_utils.rand_name('name')
|
|
||||||
key = data_utils.rand_name('key')
|
|
||||||
value = data_utils.rand_name('value')
|
|
||||||
project = self.projects_client.create_project(
|
|
||||||
name=data_utils.rand_name('test_project'))['project']
|
|
||||||
# Delete the project at the end of the test
|
|
||||||
self.addCleanup(self.projects_client.delete_project, project['id'])
|
|
||||||
metric = helpers.create_metric(name=name,
|
|
||||||
dimensions={key: value})
|
|
||||||
resp, response_body = self.monasca_client.create_metrics(
|
|
||||||
metric, tenant_id=project['id'])
|
|
||||||
self.assertEqual(204, resp.status)
|
|
||||||
query_param = '?tenant_id=' + str(project['id'])
|
|
||||||
for i in range(constants.MAX_RETRIES):
|
|
||||||
resp, response_body = self.monasca_client.list_metrics(query_param)
|
|
||||||
self.assertEqual(200, resp.status)
|
|
||||||
elements = response_body['elements']
|
|
||||||
for element in elements:
|
|
||||||
if str(element['name']) == name:
|
|
||||||
self._verify_list_metrics_element(element, test_key=key,
|
|
||||||
test_value=value)
|
|
||||||
return
|
|
||||||
time.sleep(constants.RETRY_WAIT_SECS)
|
|
||||||
if i == constants.MAX_RETRIES - 1:
|
|
||||||
error_msg = "Failed test_list_metrics_with_tenant: " \
|
|
||||||
"timeout on waiting for metrics: at least " \
|
|
||||||
"one metric is needed. Current number of " \
|
|
||||||
"metrics = 0"
|
|
||||||
self.fail(error_msg)
|
|
||||||
|
|
||||||
@decorators.attr(type='gate')
|
|
||||||
def test_list_metrics_with_offset_limit(self):
|
|
||||||
name = data_utils.rand_name()
|
|
||||||
key1 = data_utils.rand_name()
|
|
||||||
key2 = data_utils.rand_name()
|
|
||||||
|
|
||||||
metrics = [
|
|
||||||
helpers.create_metric(name=name, dimensions={
|
|
||||||
key1: 'value-1', key2: 'value-1'}),
|
|
||||||
helpers.create_metric(name=name, dimensions={
|
|
||||||
key1: 'value-2', key2: 'value-2'}),
|
|
||||||
helpers.create_metric(name=name, dimensions={
|
|
||||||
key1: 'value-3', key2: 'value-3'}),
|
|
||||||
helpers.create_metric(name=name, dimensions={
|
|
||||||
key1: 'value-4', key2: 'value-4'})
|
|
||||||
]
|
|
||||||
self.monasca_client.create_metrics(metrics)
|
|
||||||
query_param = '?name=' + name
|
|
||||||
for i in range(constants.MAX_RETRIES):
|
|
||||||
resp, response_body = self.monasca_client.list_metrics(query_param)
|
|
||||||
elements = response_body['elements']
|
|
||||||
if elements and len(elements) == 4:
|
|
||||||
break
|
|
||||||
time.sleep(constants.RETRY_WAIT_SECS)
|
|
||||||
if i == constants.MAX_RETRIES - 1:
|
|
||||||
error_msg = ("Failed test_list_metrics_with_offset_limit: "
|
|
||||||
"timeout on waiting for metrics: 4 metrics "
|
|
||||||
"are needed. Current number of elements = "
|
|
||||||
"{}").format(len(elements))
|
|
||||||
self.fail(error_msg)
|
|
||||||
|
|
||||||
first_element = elements[0]
|
|
||||||
query_parms = '?name=' + name + '&limit=4'
|
|
||||||
resp, response_body = self.monasca_client.list_metrics(query_parms)
|
|
||||||
self.assertEqual(200, resp.status)
|
|
||||||
elements = response_body['elements']
|
|
||||||
self.assertEqual(4, len(elements))
|
|
||||||
self.assertEqual(first_element, elements[0])
|
|
||||||
|
|
||||||
for metric_index in range(len(elements) - 1):
|
|
||||||
metric = elements[metric_index]
|
|
||||||
max_limit = 3 - metric_index
|
|
||||||
|
|
||||||
for limit in range(1, max_limit):
|
|
||||||
first_index = metric_index + 1
|
|
||||||
last_index = first_index + limit
|
|
||||||
expected_elements = elements[first_index:last_index]
|
|
||||||
|
|
||||||
query_parms = '?name=' + name + '&offset=' + \
|
|
||||||
str(metric['id']) + '&limit=' + \
|
|
||||||
str(limit)
|
|
||||||
resp, response_body = self.\
|
|
||||||
monasca_client.list_metrics(query_parms)
|
|
||||||
self.assertEqual(200, resp.status)
|
|
||||||
new_elements = response_body['elements']
|
|
||||||
|
|
||||||
self.assertEqual(limit, len(new_elements))
|
|
||||||
for i in range(len(expected_elements)):
|
|
||||||
self.assertEqual(expected_elements[i], new_elements[i])
|
|
||||||
|
|
||||||
def _verify_list_measurements_element(self, element, test_key, test_value):
|
|
||||||
self.assertEqual(set(element),
|
|
||||||
set(['columns', 'dimensions', 'id', 'measurements',
|
|
||||||
'name']))
|
|
||||||
self.assertEqual(set(element['columns']),
|
|
||||||
set(['timestamp', 'value', 'value_meta']))
|
|
||||||
self.assertTrue(str(element['id']) is not None)
|
|
||||||
if test_key is not None and test_value is not None:
|
|
||||||
self.assertEqual(
|
|
||||||
element['dimensions'][test_key].encode('utf-8'),
|
|
||||||
test_value.encode('utf-8')
|
|
||||||
)
|
|
||||||
|
|
||||||
def _verify_list_measurements_measurement(self, measurement,
|
|
||||||
test_metric, test_vm_key,
|
|
||||||
test_vm_value):
|
|
||||||
# Timestamps stored in influx sometimes are 1 millisecond different to
|
|
||||||
# the value stored by the persister. Check if the timestamps are
|
|
||||||
# equal in one millisecond range to pass the test.
|
|
||||||
time_iso_millis = helpers.timestamp_to_iso_millis(
|
|
||||||
test_metric['timestamp'] + 0)
|
|
||||||
time_iso_millis_plus = helpers.timestamp_to_iso_millis(
|
|
||||||
test_metric['timestamp'] + 1)
|
|
||||||
time_iso_millis_minus = helpers.timestamp_to_iso_millis(
|
|
||||||
test_metric['timestamp'] - 1)
|
|
||||||
if str(measurement[0]) != time_iso_millis and str(measurement[0]) != \
|
|
||||||
time_iso_millis_plus and str(measurement[0]) != \
|
|
||||||
time_iso_millis_minus:
|
|
||||||
error_msg = ("Mismatch Error: None of {}, {}, {} matches {}").\
|
|
||||||
format(time_iso_millis, time_iso_millis_plus,
|
|
||||||
time_iso_millis_minus, str(measurement[0]))
|
|
||||||
self.fail(error_msg)
|
|
||||||
self.assertEqual(measurement[1], test_metric['value'])
|
|
||||||
if test_vm_key is not None and test_vm_value is not None:
|
|
||||||
self.assertEqual(
|
|
||||||
measurement[2][test_vm_key].encode('utf-8'),
|
|
||||||
test_vm_value.encode('utf-8')
|
|
||||||
)
|
|
||||||
|
|
||||||
def _verify_list_metrics_element(self, element, test_key=None,
|
|
||||||
test_value=None, test_name=None):
|
|
||||||
self.assertTrue(type(element['id']) is unicode)
|
|
||||||
self.assertTrue(type(element['name']) is unicode)
|
|
||||||
self.assertTrue(type(element['dimensions']) is dict)
|
|
||||||
self.assertEqual(set(element), set(['dimensions', 'id', 'name']))
|
|
||||||
self.assertTrue(str(element['id']) is not None)
|
|
||||||
if test_key is not None and test_value is not None:
|
|
||||||
self.assertEqual(str(element['dimensions'][test_key]), test_value)
|
|
||||||
if test_name is not None:
|
|
||||||
self.assertEqual(str(element['name']), test_name)
|
|
||||||
|
|
||||||
@decorators.attr(type='gate')
|
|
||||||
def test_list_metrics_with_time_args(self):
|
|
||||||
name = data_utils.rand_name('name')
|
|
||||||
key = data_utils.rand_name('key')
|
|
||||||
value_org = data_utils.rand_name('value')
|
|
||||||
|
|
||||||
now = int(round(time.time() * 1000))
|
|
||||||
#
|
|
||||||
# Built start and end time args before and after the measurement.
|
|
||||||
#
|
|
||||||
start_iso = helpers.timestamp_to_iso(now - 1000)
|
|
||||||
end_timestamp = int(round(now + 1000))
|
|
||||||
end_iso = helpers.timestamp_to_iso(end_timestamp)
|
|
||||||
|
|
||||||
metric = helpers.create_metric(name=name,
|
|
||||||
dimensions={key: value_org},
|
|
||||||
timestamp=now)
|
|
||||||
|
|
||||||
self.monasca_client.create_metrics(metric)
|
|
||||||
for timer in range(constants.MAX_RETRIES):
|
|
||||||
query_parms = '?name=' + name + '&start_time=' + start_iso + '&end_time=' + end_iso
|
|
||||||
resp, response_body = self.monasca_client.list_metrics(query_parms)
|
|
||||||
self.assertEqual(200, resp.status)
|
|
||||||
elements = response_body['elements']
|
|
||||||
if elements:
|
|
||||||
dimensions = elements[0]
|
|
||||||
dimension = dimensions['dimensions']
|
|
||||||
value = dimension[unicode(key)]
|
|
||||||
self.assertEqual(value_org, str(value))
|
|
||||||
break
|
|
||||||
else:
|
|
||||||
time.sleep(constants.RETRY_WAIT_SECS)
|
|
||||||
if timer == constants.MAX_RETRIES - 1:
|
|
||||||
skip_msg = "Skipped test_list_metrics_with_time_args: " \
|
|
||||||
"timeout on waiting for metrics: at least one " \
|
|
||||||
"metric is needed. Current number of metrics " \
|
|
||||||
"= 0"
|
|
||||||
raise self.skipException(skip_msg)
|
|
||||||
|
|
||||||
@staticmethod
|
|
||||||
def _get_metric_dimensions(key_service, values, same_metric_name):
|
|
||||||
if same_metric_name:
|
|
||||||
metric_dimensions = [{key_service: values[0], 'key3': ''},
|
|
||||||
{key_service: values[1], 'key3': ''},
|
|
||||||
{key_service: '', 'key3': 'value3'}]
|
|
||||||
else:
|
|
||||||
metric_dimensions = [{key_service: values[0]},
|
|
||||||
{key_service: values[1]},
|
|
||||||
{'key3': 'value3'}]
|
|
||||||
return metric_dimensions
|
|
||||||
|
|
||||||
def _verify_dimensions(self, query_param, metric_dimensions):
|
|
||||||
for i in range(constants.MAX_RETRIES):
|
|
||||||
resp, response_body = self.monasca_client.list_metrics(query_param)
|
|
||||||
self.assertEqual(200, resp.status)
|
|
||||||
elements = response_body['elements']
|
|
||||||
if len(elements) == 2:
|
|
||||||
dimension_sets = []
|
|
||||||
for element in elements:
|
|
||||||
dimension_sets.append(element['dimensions'])
|
|
||||||
self.assertIn(metric_dimensions[0], dimension_sets)
|
|
||||||
self.assertIn(metric_dimensions[1], dimension_sets)
|
|
||||||
self.assertNotIn(metric_dimensions[2], dimension_sets)
|
|
||||||
return
|
|
||||||
time.sleep(constants.RETRY_WAIT_SECS)
|
|
||||||
if i == constants.MAX_RETRIES - 1:
|
|
||||||
error_msg = "Timeout on waiting for metrics: at least " \
|
|
||||||
"2 metrics are needed. Current number of " \
|
|
||||||
"metrics = {}".format(len(elements))
|
|
||||||
self.fail(error_msg)
|
|
||||||
|
|
||||||
def _create_metrics_with_different_dimensions(self, same_name=True):
|
|
||||||
name1 = data_utils.rand_name('name1')
|
|
||||||
name2 = name1 if same_name else data_utils.rand_name('name2')
|
|
||||||
name3 = name1 if same_name else data_utils.rand_name('name3')
|
|
||||||
key_service = data_utils.rand_name('service')
|
|
||||||
values = [data_utils.rand_name('value1'),
|
|
||||||
data_utils.rand_name('value2')]
|
|
||||||
metrics = [helpers.create_metric(name1, {key_service: values[0]}),
|
|
||||||
helpers.create_metric(name2, {key_service: values[1]}),
|
|
||||||
helpers.create_metric(name3, {'key3': 'value3'})]
|
|
||||||
resp, response_body = self.monasca_client.create_metrics(metrics)
|
|
||||||
self.assertEqual(204, resp.status)
|
|
||||||
return metrics, name1, key_service, values
|
|
@@ -1,162 +0,0 @@
# (C) Copyright 2015-2017 Hewlett Packard Enterprise Development LP
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import time

from monasca_tempest_tests.tests.api import base
from monasca_tempest_tests.tests.api import constants
from monasca_tempest_tests.tests.api import helpers
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
from urllib import urlencode


class TestMetricsNames(base.BaseMonascaTest):

    @classmethod
    def resource_setup(cls):
        super(TestMetricsNames, cls).resource_setup()
        name1 = data_utils.rand_name('name1')
        name2 = data_utils.rand_name('name2')
        name3 = data_utils.rand_name('name3')
        key = data_utils.rand_name()
        key1 = data_utils.rand_name()
        value = data_utils.rand_name()
        value1 = data_utils.rand_name()

        timestamp = int(round(time.time() * 1000))
        time_iso = helpers.timestamp_to_iso(timestamp)

        metric1 = helpers.create_metric(name=name1,
                                        dimensions={key: value})
        metric2 = helpers.create_metric(name=name2,
                                        dimensions={key1: value1})
        metric3 = helpers.create_metric(name=name3,
                                        dimensions={key: value})
        cls._test_metric_names = {name1, name2, name3}
        cls._expected_names_list = list(cls._test_metric_names)
        cls._expected_names_list.sort()
        cls._test_metric_names_with_same_dim = [name1, name3]
        cls._test_metrics = [metric1, metric2, metric3]
        cls._dimensions_param = key + ':' + value

        cls.monasca_client.create_metrics(cls._test_metrics)

        query_param = '?start_time=' + time_iso
        returned_name_set = set()
        for i in range(constants.MAX_RETRIES):
            resp, response_body = cls.monasca_client.list_metrics(query_param)
            elements = response_body['elements']
            for element in elements:
                returned_name_set.add(str(element['name']))
            if cls._test_metric_names.issubset(returned_name_set):
                return
            time.sleep(constants.RETRY_WAIT_SECS)

        assert False, 'Unable to initialize metrics'

    @classmethod
    def resource_cleanup(cls):
        super(TestMetricsNames, cls).resource_cleanup()

    @decorators.attr(type='gate')
    def test_list_metrics_names(self):
        resp, response_body = self.monasca_client.list_metrics_names()
        metric_names = self._verify_response(resp, response_body)
        self.assertEqual(metric_names, self._expected_names_list)

    @decorators.attr(type='gate')
    def test_list_metrics_names_with_dimensions(self):
        query_params = '?dimensions=' + self._dimensions_param
        resp, response_body = self.monasca_client.list_metrics_names(
            query_params)
        metric_names = self._verify_response(resp, response_body)
        self.assertEqual(metric_names,
                         self._test_metric_names_with_same_dim)

    @decorators.attr(type='gate')
    def test_list_metrics_names_with_limit_offset(self):
        resp, response_body = self.monasca_client.list_metrics_names()
        self.assertEqual(200, resp.status)
        elements = response_body['elements']
        num_names = len(elements)

        for limit in range(1, num_names):
            start_index = 0
            params = [('limit', limit)]
            offset = None
            while True:
                num_expected_elements = limit
                if (num_expected_elements + start_index) > num_names:
                    num_expected_elements = num_names - start_index

                these_params = list(params)
                # If not the first call, use the offset returned by the last
                # call
                if offset:
                    these_params.extend([('offset', str(offset))])
                query_params = '?' + urlencode(these_params)

                resp, response_body = \
                    self.monasca_client.list_metrics_names(query_params)
                new_elements = self._verify_response(resp, response_body)
                self.assertEqual(num_expected_elements, len(new_elements))

                expected_elements = elements[start_index:start_index + limit]
                expected_names = \
                    [expected_elements[i]['name'] for i in range(
                        len(expected_elements))]

                self.assertEqual(expected_names, new_elements)
                start_index += num_expected_elements
                if start_index >= num_names:
                    break
                # Get the next set
                offset = self._get_offset(response_body)

    @decorators.attr(type='gate')
    def test_list_metrics_names_with_offset_not_in_metrics_names_list(self):
        offset1 = 'tempest-abc'
        offset2 = 'tempest-name111'
        offset3 = 'tempest-name4-random'
        query_param1 = '?' + urlencode([('offset', offset1)])
        query_param2 = '?' + urlencode([('offset', offset2)])
        query_param3 = '?' + urlencode([('offset', offset3)])

        resp, response_body = self.monasca_client.list_metrics_names(
            query_param1)
        metric_names = self._verify_response(resp, response_body)

        self.assertEqual(metric_names, self._expected_names_list[:])

        resp, response_body = self.monasca_client.list_metrics_names(
            query_param2)
        metric_names = self._verify_response(resp, response_body)
        self.assertEqual(metric_names, self._expected_names_list[1:])

        resp, response_body = self.monasca_client.list_metrics_names(
            query_param3)
        self.assertEqual(response_body['elements'], [])

    def _verify_response(self, resp, response_body):
        self.assertEqual(200, resp.status)
        self.assertTrue(set(['links', 'elements']) == set(response_body))

        response_names_length = len(response_body['elements'])
        if response_names_length == 0:
            self.fail("No metric names returned")

        metric_names = [str(response_body['elements'][i]['name']) for i in
                        range(response_names_length)]
        return metric_names
@@ -1,33 +0,0 @@
# (C) Copyright 2016-2017 Hewlett Packard Enterprise Development LP
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from monasca_tempest_tests.tests.api import base
from tempest.lib import decorators


class TestNotificationMethodType(base.BaseMonascaTest):

    @classmethod
    def resource_setup(cls):
        super(TestNotificationMethodType, cls).resource_setup()

    @classmethod
    def resource_cleanup(cls):
        super(TestNotificationMethodType, cls).resource_cleanup()

    @decorators.attr(type="gate")
    def test_list_notification_method_type(self):
        resp, response_body = (self.monasca_client.
                               list_notification_method_types())
        self.assertEqual(200, resp.status)
File diff suppressed because it is too large
@@ -1,184 +0,0 @@
# (C) Copyright 2016 Hewlett Packard Enterprise Development LP
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import time

from tempest.lib import decorators
from tempest.lib import exceptions

from monasca_tempest_tests import clients
from monasca_tempest_tests.tests.api import base
from monasca_tempest_tests.tests.api import helpers


class TestReadOnlyRole(base.BaseMonascaTest):

    @classmethod
    def resource_setup(cls):
        super(TestReadOnlyRole, cls).resource_setup()
        credentials = cls.cred_provider.get_creds_by_roles(
            ['monasca-read-only-user']).credentials
        cls.os = clients.Manager(credentials=credentials)
        cls.monasca_client = cls.os.monasca_client

    @classmethod
    def resource_cleanup(cls):
        super(TestReadOnlyRole, cls).resource_cleanup()

    @decorators.attr(type="gate")
    def test_list_alarms_success(self):
        resp, response_body = self.monasca_client.list_alarms()
        #
        # Validate the call succeeds with empty result (we didn't
        # create any alarms)
        #
        self.assertEqual(200, resp.status)
        self.assertEqual(0, len(response_body['elements']))
        self.assertTrue(response_body['links'][0]['href'].endswith('/v2.0/alarms'))

    @decorators.attr(type="gate")
    def test_list_metrics_success(self):
        resp, response_body = self.monasca_client.list_metrics()
        #
        # Validate the call succeeds with empty result (we didn't
        # create any metrics)
        #
        self.assertEqual(200, resp.status)
        self.assertEqual(0, len(response_body['elements']))
        self.assertTrue(response_body['links'][0]['href'].endswith('/v2.0/metrics'))

    @decorators.attr(type="gate")
    def test_list_alarm_definition_success(self):
        resp, response_body = self.monasca_client.list_alarm_definitions()
        #
        # Validate the call succeeds with empty result (we didn't
        # create any alarm definitions)
        #
        self.assertEqual(200, resp.status)
        self.assertEqual(0, len(response_body['elements']))
        self.assertTrue(response_body['links'][0]['href'].endswith('/v2.0/alarm-definitions'))

    @decorators.attr(type="gate")
    def test_list_notification_methods_success(self):
        resp, response_body = self.monasca_client.list_notification_methods()
        #
        # Validate the call succeeds with empty result (we didn't
        # create any notifications)
        #
        self.assertEqual(200, resp.status)
        self.assertEqual(0, len(response_body['elements']))
        self.assertTrue(response_body['links'][0]['href'].endswith('/v2.0/notification-methods'))

    @decorators.attr(type="gate")
    def test_list_alarm_count_success(self):
        resp, response_body = self.monasca_client.count_alarms()
        #
# Validate the call succeeds with empty result (we didn't
|
|
||||||
# create any alarms to count)
|
|
||||||
#
|
|
||||||
self.assertEqual(200, resp.status)
|
|
||||||
self.assertEqual(0, response_body['counts'][0][0])
|
|
||||||
self.assertTrue(response_body['links'][0]['href'].endswith('/v2.0/alarms/count'))
|
|
||||||
|
|
||||||
@decorators.attr(type="gate")
|
|
||||||
def test_list_alarm_state_history_success(self):
|
|
||||||
resp, response_body = self.monasca_client.list_alarms_state_history()
|
|
||||||
#
|
|
||||||
# Validate the call succeeds with empty result (we didn't
|
|
||||||
# create any alarms that have history)
|
|
||||||
#
|
|
||||||
self.assertEqual(200, resp.status)
|
|
||||||
self.assertEqual(0, len(response_body['elements']))
|
|
||||||
self.assertTrue(response_body['links'][0]['href'].endswith('/v2.0/alarms/state-history'))
|
|
||||||
|
|
||||||
@decorators.attr(type="gate")
|
|
||||||
def test_list_dimension_values_success(self):
|
|
||||||
parms = '?dimension_name=foo'
|
|
||||||
resp, response_body = self.monasca_client.list_dimension_values(parms)
|
|
||||||
#
|
|
||||||
# Validate the call succeeds with empty result (we didn't
|
|
||||||
# create any metrics/dimensions)
|
|
||||||
#
|
|
||||||
url = '/v2.0/metrics/dimensions/names/values?dimension_name=foo'
|
|
||||||
self.assertEqual(200, resp.status)
|
|
||||||
self.assertEqual(0, len(response_body['elements']))
|
|
||||||
self.assertTrue(response_body['links'][0]['href'].endswith(url))
|
|
||||||
|
|
||||||
@decorators.attr(type="gate")
|
|
||||||
def test_list_dimension_names_success(self):
|
|
||||||
resp, response_body = self.monasca_client.list_dimension_names()
|
|
||||||
#
|
|
||||||
# Validate the call succeeds with empty result (we didn't
|
|
||||||
# create any metrics/dimensions)
|
|
||||||
#
|
|
||||||
url = '/v2.0/metrics/dimensions/names'
|
|
||||||
self.assertEqual(200, resp.status)
|
|
||||||
self.assertEqual(0, len(response_body['elements']))
|
|
||||||
self.assertTrue(response_body['links'][0]['href'].endswith(url))
|
|
||||||
|
|
||||||
@decorators.attr(type="gate")
|
|
||||||
def test_list_measurements_success(self):
|
|
||||||
start_timestamp = int(time.time() * 1000)
|
|
||||||
start_time = str(helpers.timestamp_to_iso(start_timestamp))
|
|
||||||
parms = '?name=foo&start_time=' + start_time
|
|
||||||
resp, response_body = self.monasca_client.list_measurements(parms)
|
|
||||||
#
|
|
||||||
# Validate the call succeeds with empty result (we didn't
|
|
||||||
# create any metrics to get measurements for)
|
|
||||||
#
|
|
||||||
self.assertEqual(200, resp.status)
|
|
||||||
self.assertEqual(0, len(response_body['elements']))
|
|
||||||
self.assertTrue('/v2.0/metrics/measurements' in response_body['links'][0]['href'])
|
|
||||||
|
|
||||||
@decorators.attr(type="gate")
|
|
||||||
def test_list_statistics_success(self):
|
|
||||||
start_timestamp = int(time.time() * 1000)
|
|
||||||
start_time = str(helpers.timestamp_to_iso(start_timestamp))
|
|
||||||
query_parms = '?name=foo&statistics=avg&start_time=' + start_time
|
|
||||||
resp, response_body = self.monasca_client.list_statistics(
|
|
||||||
query_parms)
|
|
||||||
#
|
|
||||||
# Validate the call succeeds with empty result (we didn't
|
|
||||||
# create any metrics to get statistics for)
|
|
||||||
#
|
|
||||||
self.assertEqual(200, resp.status)
|
|
||||||
self.assertEqual(0, len(response_body['elements']))
|
|
||||||
self.assertTrue('/v2.0/metrics/statistics' in response_body['links'][0]['href'])
|
|
||||||
|
|
||||||
@decorators.attr(type="gate")
|
|
||||||
@decorators.attr(type=['negative'])
|
|
||||||
def test_delete_alarms_fails(self):
|
|
||||||
self.assertRaises(exceptions.Unauthorized,
|
|
||||||
self.monasca_client.delete_alarm, "foo")
|
|
||||||
|
|
||||||
@decorators.attr(type='gate')
|
|
||||||
@decorators.attr(type=['negative'])
|
|
||||||
def test_create_metric_fails(self):
|
|
||||||
self.assertRaises(exceptions.Unauthorized,
|
|
||||||
self.monasca_client.create_metrics,
|
|
||||||
None)
|
|
||||||
|
|
||||||
@decorators.attr(type="gate")
|
|
||||||
@decorators.attr(type=['negative'])
|
|
||||||
def test_create_alarm_definition_fails(self):
|
|
||||||
self.assertRaises(exceptions.Unauthorized,
|
|
||||||
self.monasca_client.create_alarm_definitions,
|
|
||||||
None)
|
|
||||||
|
|
||||||
@decorators.attr(type="gate")
|
|
||||||
@decorators.attr(type=['negative'])
|
|
||||||
def test_create_notification_fails(self):
|
|
||||||
notif = helpers.create_notification()
|
|
||||||
self.assertRaises(exceptions.Unauthorized,
|
|
||||||
self.monasca_client.create_notifications,
|
|
||||||
notif)
|
|
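For reference, the "restricted client gets 401 on writes" checks above follow a pattern that can be sketched independently of tempest. The `assertRaises`-style call below is reimplemented with a plain function and a stub client; `StubClient` and `Unauthorized` are illustrative stand-ins, not real monasca or tempest APIs.

```python
class Unauthorized(Exception):
    """Stand-in for tempest.lib.exceptions.Unauthorized."""


class StubClient:
    """Hypothetical client whose write calls are rejected for a
    read-only role, mirroring the deleted TestReadOnlyRole checks."""

    def delete_alarm(self, alarm_id):
        raise Unauthorized("read-only role may not delete alarms")

    def list_alarms(self):
        # Reads succeed and return an empty element list, as the
        # positive tests above expect.
        return 200, {'elements': []}


def assert_raises(exc_type, func, *args):
    """Minimal re-implementation of assertRaises for the sketch."""
    try:
        func(*args)
    except exc_type:
        return True
    raise AssertionError("%s was not raised" % exc_type.__name__)
```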
@@ -1,510 +0,0 @@
# (C) Copyright 2015-2016 Hewlett Packard Enterprise Development LP
# (C) Copyright 2017-2018 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import time

import six.moves.urllib.parse as urlparse

from monasca_tempest_tests.tests.api import base
from monasca_tempest_tests.tests.api import constants
from monasca_tempest_tests.tests.api import helpers
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
from tempest.lib import exceptions
from urllib import urlencode

NUM_MEASUREMENTS = 100
MIN_REQUIRED_MEASUREMENTS = 2
WAIT_TIME = 30


class TestStatistics(base.BaseMonascaTest):

    @classmethod
    def resource_setup(cls):
        super(TestStatistics, cls).resource_setup()
        name = data_utils.rand_name('name')
        key = data_utils.rand_name('key')
        value1 = data_utils.rand_name('value1')
        value2 = data_utils.rand_name('value2')
        cls._test_name = name
        cls._test_key = key
        cls._test_value1 = value1
        cls._start_timestamp = int(time.time() * 1000)
        metrics = [
            helpers.create_metric(name=name,
                                  dimensions={key: value1},
                                  timestamp=cls._start_timestamp,
                                  value=1.23),
            helpers.create_metric(name=name,
                                  dimensions={key: value2},
                                  timestamp=cls._start_timestamp + 1000,
                                  value=4.56)
        ]
        cls.metric_values = [m['value'] for m in metrics]
        cls.monasca_client.create_metrics(metrics)
        start_time_iso = helpers.timestamp_to_iso(cls._start_timestamp)
        query_param = '?name=' + str(name) + '&start_time=' + \
            start_time_iso + '&merge_metrics=true' + '&end_time=' + \
            helpers.timestamp_to_iso(cls._start_timestamp + 1000 * 2)
        start_time_iso = helpers.timestamp_to_iso(cls._start_timestamp)
        cls._start_time_iso = start_time_iso

        num_measurements = 0
        for i in range(constants.MAX_RETRIES):
            resp, response_body = cls.monasca_client.\
                list_measurements(query_param)
            elements = response_body['elements']
            if len(elements) > 0:
                num_measurements = len(elements[0]['measurements'])
                if num_measurements >= MIN_REQUIRED_MEASUREMENTS:
                    break
            time.sleep(constants.RETRY_WAIT_SECS)

        if num_measurements < MIN_REQUIRED_MEASUREMENTS:
            assert False, "Required {} measurements, found {}".format(MIN_REQUIRED_MEASUREMENTS, num_measurements)

        cls._end_timestamp = cls._start_timestamp + 3000
        cls._end_time_iso = helpers.timestamp_to_iso(cls._end_timestamp)

        name2 = data_utils.rand_name("group-by")
        cls._group_by_metric_name = name2
        cls._group_by_end_time_iso = helpers.timestamp_to_iso(cls._start_timestamp + 4000)

        group_by_metrics = [
            helpers.create_metric(name=name2, dimensions={'key1': 'value1', 'key2': 'value5', 'key3': 'value7'},
                                  timestamp=cls._start_timestamp + 1, value=2),
            helpers.create_metric(name=name2, dimensions={'key1': 'value2', 'key2': 'value5', 'key3': 'value7'},
                                  timestamp=cls._start_timestamp + 1001, value=3),
            helpers.create_metric(name=name2, dimensions={'key1': 'value3', 'key2': 'value6', 'key3': 'value7'},
                                  timestamp=cls._start_timestamp + 2001, value=5),
            helpers.create_metric(name=name2, dimensions={'key1': 'value4', 'key2': 'value6', 'key3': 'value8'},
                                  timestamp=cls._start_timestamp + 3001, value=7),
        ]

        cls.monasca_client.create_metrics(group_by_metrics)
        query_param = '?name=' + str(name2) + \
                      '&start_time=' + start_time_iso + \
                      '&merge_metrics=true' + \
                      '&end_time=' + cls._group_by_end_time_iso

        num_measurements = 0
        for i in range(constants.MAX_RETRIES):
            resp, response_body = cls.monasca_client. \
                list_measurements(query_param)
            elements = response_body['elements']
            if len(elements) > 0:
                num_measurements = len(elements[0]['measurements'])
                if num_measurements >= len(group_by_metrics):
                    break
            time.sleep(constants.RETRY_WAIT_SECS)

        if num_measurements < len(group_by_metrics):
            assert False, "Required {} measurements, found {}".format(len(group_by_metrics),
                                                                      response_body)

    @classmethod
    def resource_cleanup(cls):
        super(TestStatistics, cls).resource_cleanup()

    @decorators.attr(type="gate")
    def test_list_statistics(self):
        self._test_list_statistic(with_end_time=True)

    @decorators.attr(type="gate")
    def test_list_statistics_with_no_end_time(self):
        self._test_list_statistic(with_end_time=False)

    def _test_list_statistic(self, with_end_time=True):
        query_parms = '?name=' + str(self._test_name) + \
                      '&statistics=' + urlparse.quote('avg,sum,min,max,count') + \
                      '&start_time=' + str(self._start_time_iso) + \
                      '&merge_metrics=true' + '&period=100000'
        if with_end_time is True:
            query_parms += '&end_time=' + str(self._end_time_iso)

        resp, response_body = self.monasca_client.list_statistics(
            query_parms)
        self.assertEqual(200, resp.status)
        self.assertTrue(set(['links', 'elements']) == set(response_body))
        element = response_body['elements'][0]
        self._verify_element(element)
        column = element['columns']
        num_statistics_method = 5
        statistics = element['statistics'][0]
        self._verify_column_and_statistics(
            column, num_statistics_method, statistics, self.metric_values)

    @decorators.attr(type="gate")
    @decorators.attr(type=['negative'])
    def test_list_statistics_with_no_name(self):
        query_parms = '?merge_metrics=true&statistics=avg&start_time=' + \
                      str(self._start_time_iso) + '&end_time=' + \
                      str(self._end_time_iso)
        self.assertRaises(exceptions.UnprocessableEntity,
                          self.monasca_client.list_statistics, query_parms)

    @decorators.attr(type="gate")
    @decorators.attr(type=['negative'])
    def test_list_statistics_with_no_statistics(self):
        query_parms = '?name=' + str(self._test_name) + '&start_time=' + str(
            self._start_time_iso) + '&end_time=' + str(self._end_time_iso)
        self.assertRaises(exceptions.UnprocessableEntity,
                          self.monasca_client.list_statistics, query_parms)

    @decorators.attr(type="gate")
    @decorators.attr(type=['negative'])
    def test_list_statistics_with_no_start_time(self):
        query_parms = '?name=' + str(self._test_name) + '&statistics=avg'
        self.assertRaises(exceptions.UnprocessableEntity,
                          self.monasca_client.list_statistics, query_parms)

    @decorators.attr(type="gate")
    @decorators.attr(type=['negative'])
    def test_list_statistics_with_invalid_statistics(self):
        query_parms = '?name=' + str(self._test_name) + '&statistics=abc' + \
                      '&start_time=' + str(self._start_time_iso) + \
                      '&end_time=' + str(self._end_time_iso)
        self.assertRaises(exceptions.UnprocessableEntity,
                          self.monasca_client.list_statistics, query_parms)

    @decorators.attr(type="gate")
    def test_list_statistics_with_dimensions(self):
        query_parms = '?name=' + str(self._test_name) + '&statistics=avg' \
                      '&start_time=' + str(self._start_time_iso) + \
                      '&end_time=' + str(self._end_time_iso) + \
                      '&dimensions=' + str(self._test_key) + ':' + \
                      str(self._test_value1) + '&period=100000'
        resp, response_body = self.monasca_client.list_statistics(
            query_parms)
        self.assertEqual(200, resp.status)
        dimensions = response_body['elements'][0]['dimensions']
        self.assertEqual(dimensions[self._test_key], self._test_value1)

    @decorators.attr(type="gate")
    @decorators.attr(type=['negative'])
    def test_list_statistics_with_end_time_equals_start_time(self):
        query_parms = '?name=' + str(self._test_name) + \
                      '&merge_metrics=true&statistics=avg&' \
                      'start_time=' + str(self._start_time_iso) + \
                      '&end_time=' + str(self._start_time_iso) + \
                      '&period=100000'
        self.assertRaises(exceptions.BadRequest,
                          self.monasca_client.list_statistics, query_parms)

    @decorators.attr(type="gate")
    def test_list_statistics_with_period(self):
        query_parms = '?name=' + str(self._test_name) + \
                      '&merge_metrics=true&statistics=avg&' \
                      'start_time=' + str(self._start_time_iso) + \
                      '&end_time=' + str(self._end_time_iso) + \
                      '&period=1'
        resp, response_body = self.monasca_client.list_statistics(
            query_parms)
        self.assertEqual(200, resp.status)
        time_diff = self._end_timestamp - self._start_timestamp
        len_statistics = len(response_body['elements'][0]['statistics'])
        self.assertEqual(time_diff / 1000, len_statistics)

    @decorators.attr(type="gate")
    def test_list_statistics_with_offset_limit(self):
        start_timestamp = int(time.time() * 1000)
        name = data_utils.rand_name()
        metric = [
            helpers.create_metric(name=name, timestamp=start_timestamp + 1,
                                  dimensions={'key1': 'value-1',
                                              'key2': 'value-1'},
                                  value=1),
            helpers.create_metric(name=name, timestamp=start_timestamp + 1001,
                                  dimensions={'key1': 'value-2',
                                              'key2': 'value-2'},
                                  value=2),
            helpers.create_metric(name=name, timestamp=start_timestamp + 2001,
                                  dimensions={'key1': 'value-3',
                                              'key2': 'value-3'},
                                  value=3),
            helpers.create_metric(name=name, timestamp=start_timestamp + 3001,
                                  dimensions={'key1': 'value-4',
                                              'key2': 'value-4'},
                                  value=4)
        ]

        num_metrics = len(metric)
        self.monasca_client.create_metrics(metric)
        query_parms = '?name=' + name
        for i in range(constants.MAX_RETRIES):
            resp, response_body = self.monasca_client.list_metrics(query_parms)
            self.assertEqual(200, resp.status)
            elements = response_body['elements']
            if elements and len(elements) == num_metrics:
                break
            else:
                time.sleep(constants.RETRY_WAIT_SECS)
                self._check_timeout(i, constants.MAX_RETRIES, elements, num_metrics)

        start_time = helpers.timestamp_to_iso(start_timestamp)
        end_timestamp = start_timestamp + 4001
        end_time = helpers.timestamp_to_iso(end_timestamp)
        query_parms = '?name=' + name + '&merge_metrics=true&statistics=avg' \
                      + '&start_time=' + str(start_time) + '&end_time=' + \
                      str(end_time) + '&period=1'
        resp, body = self.monasca_client.list_statistics(query_parms)
        self.assertEqual(200, resp.status)
        elements = body['elements'][0]['statistics']
        first_element = elements[0]

        query_parms = '?name=' + name + '&merge_metrics=true&statistics=avg'\
                      + '&start_time=' + str(start_time) + '&end_time=' + \
                      str(end_time) + '&period=1' + '&limit=' + str(num_metrics)
        resp, response_body = self.monasca_client.list_statistics(
            query_parms)
        self.assertEqual(200, resp.status)
        elements = response_body['elements'][0]['statistics']
        self.assertEqual(num_metrics, len(elements))
        self.assertEqual(first_element, elements[0])

        for limit in range(1, num_metrics):
            start_index = 0
            params = [('name', name),
                      ('merge_metrics', 'true'),
                      ('statistics', 'avg'),
                      ('start_time', str(start_time)),
                      ('end_time', str(end_time)),
                      ('period', 1),
                      ('limit', limit)]
            offset = None
            while True:
                num_expected_elements = limit
                if (num_expected_elements + start_index) > num_metrics:
                    num_expected_elements = num_metrics - start_index

                these_params = list(params)
                # If not the first call, use the offset returned by the last call
                if offset:
                    these_params.extend([('offset', str(offset))])
                query_parms = '?' + urlencode(these_params)
                resp, response_body = self.monasca_client.list_statistics(query_parms)
                self.assertEqual(200, resp.status)
                if not response_body['elements']:
                    self.fail("No metrics returned")
                if not response_body['elements'][0]['statistics']:
                    self.fail("No statistics returned")
                new_elements = response_body['elements'][0]['statistics']

                self.assertEqual(num_expected_elements, len(new_elements))
                expected_elements = elements[start_index:start_index + limit]
                self.assertEqual(expected_elements, new_elements)
                start_index += num_expected_elements
                if start_index >= num_metrics:
                    break
                # Get the next set
                offset = self._get_offset(response_body)

    @decorators.attr(type="gate")
    def test_list_statistics_with_group_by_one(self):
        query_parms = '?name=' + self._group_by_metric_name + \
                      '&group_by=key2' + \
                      '&statistics=max,avg,min' + \
                      '&start_time=' + str(self._start_time_iso) + \
                      '&end_time=' + str(self._group_by_end_time_iso)
        resp, response_body = self.monasca_client.list_statistics(
            query_parms)
        self.assertEqual(200, resp.status)
        elements = response_body['elements']
        self.assertEqual(len(elements), 2)
        for statistics in elements:
            self.assertEqual(1, len(statistics['dimensions'].keys()))
            self.assertEqual([u'key2'], statistics['dimensions'].keys())

    @decorators.attr(type="gate")
    def test_list_statistics_with_group_by_multiple(self):
        query_parms = '?name=' + self._group_by_metric_name + \
                      '&group_by=key2,key3' + \
                      '&statistics=max,avg,min' + \
                      '&start_time=' + str(self._start_time_iso) + \
                      '&end_time=' + str(self._group_by_end_time_iso)
        resp, response_body = self.monasca_client.list_statistics(
            query_parms)
        self.assertEqual(200, resp.status)
        elements = response_body['elements']
        self.assertEqual(len(elements), 3)
        for statistics in elements:
            self.assertEqual(2, len(statistics['dimensions'].keys()))
            self.assertEqual({u'key2', u'key3'}, set(statistics['dimensions'].keys()))

    @decorators.attr(type="gate")
    def test_list_statistics_with_group_by_all(self):
        query_parms = '?name=' + self._group_by_metric_name + \
                      '&group_by=*' + \
                      '&statistics=max,avg,min' + \
                      '&start_time=' + str(self._start_time_iso) + \
                      '&end_time=' + str(self._group_by_end_time_iso)
        resp, response_body = self.monasca_client.list_statistics(
            query_parms)
        self.assertEqual(200, resp.status)
        elements = response_body['elements']
        self.assertEqual(len(elements), 4)

    @decorators.attr(type="gate")
    def test_list_statistics_with_group_by_offset_limit(self):
        query_parms = '?name=' + str(self._group_by_metric_name) + \
                      '&group_by=key2' + \
                      '&statistics=avg,max' + \
                      '&start_time=' + str(self._start_time_iso) + \
                      '&end_time=' + str(self._group_by_end_time_iso) + \
                      '&period=1'
        resp, response_body = self.monasca_client.list_statistics(query_parms)
        self.assertEqual(200, resp.status)
        all_expected_elements = response_body['elements']

        for limit in range(1, 4):
            offset = None
            for i in range(4 - limit):
                query_parms = '?name=' + str(self._group_by_metric_name) + \
                              '&group_by=key2' + \
                              '&statistics=avg,max' + \
                              '&start_time=' + str(self._start_time_iso) + \
                              '&end_time=' + str(self._group_by_end_time_iso) + \
                              '&period=1' + \
                              '&limit=' + str(limit)
                if i > 0:
                    offset = self._get_offset(response_body)
                    query_parms += "&offset=" + offset

                expected_elements = helpers.get_expected_elements_inner_offset_limit(
                    all_expected_elements,
                    offset,
                    limit,
                    'statistics')

                resp, response_body = self.monasca_client.list_statistics(query_parms)
                self.assertEqual(200, resp.status)
                self.assertEqual(expected_elements, response_body['elements'])

    @decorators.attr(type="gate")
    def test_list_statistics_with_long_start_time(self):
        query_parms = '?name=' + str(self._test_name) + \
                      '&statistics=' + urlparse.quote('avg,sum,min,max,count') + \
                      '&start_time=' + "2017-01-01T00:00:00.00Z" + \
                      '&end_time=' + str(self._end_time_iso) + \
                      '&merge_metrics=true' + '&period=100000'
        resp, response_body = self.monasca_client.list_statistics(
            query_parms)
        self.assertEqual(200, resp.status)
        self.assertTrue(set(['links', 'elements']) == set(response_body))
        element = response_body['elements'][0]
        self._verify_element(element)
        column = element['columns']
        num_statistics_method = 5
        statistics = element['statistics'][0]
        self._verify_column_and_statistics(
            column, num_statistics_method, statistics, self.metric_values)

    @decorators.attr(type="gate")
    @decorators.attr(type=['negative'])
    def test_list_statistics_with_no_merge_metrics(self):
        key = data_utils.rand_name('key')
        value = data_utils.rand_name('value')
        metric3 = helpers.create_metric(
            name=self._test_name,
            dimensions={key: value},
            timestamp=self._start_timestamp + 2000)
        self.monasca_client.create_metrics(metric3)
        query_param = '?name=' + str(self._test_name) + '&start_time=' + \
            self._start_time_iso + '&end_time=' + helpers.\
            timestamp_to_iso(self._start_timestamp + 1000 * 4) + \
            '&merge_metrics=True'

        for i in range(constants.MAX_RETRIES):
            resp, response_body = self.monasca_client.\
                list_measurements(query_param)
            elements = response_body['elements']
            for element in elements:
                if str(element['name']) == self._test_name and len(
                        element['measurements']) == 3:
                    end_time_iso = helpers.timestamp_to_iso(
                        self._start_timestamp + 1000 * 4)
                    query_parms = '?name=' + str(self._test_name) + \
                        '&statistics=avg' + '&start_time=' + \
                        str(self._start_time_iso) + '&end_time=' +\
                        str(end_time_iso) + '&period=100000'
                    self.assertRaises(exceptions.Conflict,
                                      self.monasca_client.list_statistics,
                                      query_parms)
                    return
            time.sleep(constants.RETRY_WAIT_SECS)
            self._check_timeout(i, constants.MAX_RETRIES, elements, 3)

    @decorators.attr(type="gate")
    @decorators.attr(type=['negative'])
    def test_list_statistics_with_name_exceeds_max_length(self):
        long_name = "x" * (constants.MAX_LIST_STATISTICS_NAME_LENGTH + 1)
        query_parms = '?name=' + str(long_name) + '&merge_metrics=true' + \
                      '&start_time=' + str(self._start_time_iso) + \
                      '&end_time=' + str(self._end_time_iso)
        self.assertRaises(exceptions.UnprocessableEntity,
                          self.monasca_client.list_statistics, query_parms)

    @decorators.attr(type="gate")
    def test_list_statistics_response_body_statistic_result_type(self):
        query_parms = '?name=' + str(self._test_name) + '&period=100000' + \
                      '&statistics=avg' + '&merge_metrics=true' + \
                      '&start_time=' + str(self._start_time_iso) + \
                      '&end_time=' + str(self._end_time_iso)
        resp, response_body = self.monasca_client.list_statistics(
            query_parms)
        self.assertEqual(200, resp.status)
        element = response_body['elements'][0]
        statistic = element['statistics']
        statistic_result_type = type(statistic[0][1])
        self.assertEqual(statistic_result_type, float)

    def _verify_element(self, element):
        self.assertTrue(set(['id', 'name', 'dimensions', 'columns',
                             'statistics']) == set(element))
        self.assertTrue(type(element['id']) is unicode)
        self.assertTrue(element['id'] is not None)
        self.assertTrue(type(element['name']) is unicode)
        self.assertTrue(type(element['dimensions']) is dict)
        self.assertEqual(len(element['dimensions']), 0)
        self.assertTrue(type(element['columns']) is list)
        self.assertTrue(type(element['statistics']) is list)
        self.assertEqual(element['name'], self._test_name)

    def _verify_column_and_statistics(
            self, column, num_statistics_method, statistics, values):
        self.assertTrue(type(column) is list)
        self.assertTrue(type(statistics) is list)
        self.assertEqual(len(column), num_statistics_method + 1)
        self.assertEqual(column[0], 'timestamp')
        for i, method in enumerate(column):
            if method == 'avg':
                self.assertAlmostEqual(statistics[i], float(sum(values) / len(values)))
            elif method == 'max':
                self.assertEqual(statistics[i], max(values))
            elif method == 'min':
                self.assertEqual(statistics[i], min(values))
            elif method == 'sum':
                self.assertAlmostEqual(statistics[i], sum(values))
            elif method == 'count':
                self.assertEqual(statistics[i], len(values))

    def _check_timeout(self, timer, max_retries, elements,
                       expect_num_elements):
        if timer == max_retries - 1:
            error_msg = ("Failed: timeout on waiting for metrics: {} elements "
                         "are needed. Current number of elements = {}").\
                format(expect_num_elements, len(elements))
            raise self.fail(error_msg)
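The deleted statistics tests build query strings both by string concatenation and, in the offset/limit loop, via `urlencode` over a list of pairs. The latter approach generalizes cleanly; here is a standalone sketch of such a query builder. The helper name `build_statistics_query` is hypothetical, but the parameter names (`name`, `statistics`, `start_time`, `merge_metrics`, `period`, `limit`, `offset`) match those exercised by the tests above.

```python
from urllib.parse import urlencode


def build_statistics_query(name, statistics, start_time, end_time=None,
                           period=None, limit=None, offset=None,
                           merge_metrics=True):
    """Assemble a /v2.0/metrics/statistics query string.

    Unlike plain string concatenation, urlencode() escapes reserved
    characters (the ':' in ISO timestamps, the ',' in statistics lists),
    so arbitrary names and timestamps are safe to pass through.
    """
    params = [('name', name),
              ('statistics', ','.join(statistics)),
              ('start_time', start_time)]
    if merge_metrics:
        params.append(('merge_metrics', 'true'))
    if end_time is not None:
        params.append(('end_time', end_time))
    if period is not None:
        params.append(('period', period))
    if limit is not None:
        params.append(('limit', limit))
    if offset is not None:
        params.append(('offset', offset))
    return '?' + urlencode(params)
```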
@@ -1,50 +0,0 @@
# (C) Copyright 2015,2017 Hewlett Packard Enterprise Development Company LP
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import datetime

from oslo_serialization import jsonutils as json

from monasca_tempest_tests.tests.api import base
from tempest.lib import decorators


class TestVersions(base.BaseMonascaTest):

    @classmethod
    def resource_setup(cls):
        super(TestVersions, cls).resource_setup()

    @decorators.attr(type='gate')
    def test_get_version(self):
        resp, response_body = self.monasca_client.get_version()
        self.assertEqual(resp.status, 200)
        response_body = json.loads(response_body)

        self.assertIsInstance(response_body, dict)
        version = response_body
        self.assertTrue(set(['id', 'links', 'status', 'updated']) ==
                        set(version))
        self.assertEqual(version['id'], u'v2.0')
        self.assertEqual(version['status'], u'CURRENT')
        date_object = datetime.datetime.strptime(version['updated'],
                                                 "%Y-%m-%dT%H:%M:%S.%fZ")
        self.assertIsInstance(date_object, datetime.datetime)
        links = response_body['links']
        self.assertIsInstance(links, list)
        link = links[0]
        self.assertTrue(set(['rel', 'href']) ==
                        set(link))
        self.assertEqual(link['rel'], u'self')
        self.assertTrue(link['href'].endswith('/v2.0'))
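The deleted test above pins the version document's `updated` field to one fixed strptime format; that parse can be exercised on its own. The timestamp below is a hypothetical sample value, not output from the API:

```python
import datetime

# Same format string the removed test applies to version['updated'].
stamp = "2014-07-18T03:25:45.000Z"  # hypothetical sample value
parsed = datetime.datetime.strptime(stamp, "%Y-%m-%dT%H:%M:%S.%fZ")
assert parsed == datetime.datetime(2014, 7, 18, 3, 25, 45)
```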
@@ -34,7 +34,7 @@
 MONASCA_API_IMPLEMENTATION_LANG="{{ api_lang }}"
 MONASCA_PERSISTER_IMPLEMENTATION_LANG="{{ persister_lang }}"
 MONASCA_METRICS_DB="{{ tsdb }}"
+TEMPEST_PLUGINS+='/opt/stack/new/monasca-tempest-plugin'
 EOF
 executable: /bin/bash
 chdir: '{{ ansible_user_dir }}/workspace'
@@ -50,6 +50,8 @@
 export DEVSTACK_GATE_NEUTRON=1
 export DEVSTACK_GATE_EXERCISES=0
+export DEVSTACK_GATE_TEMPEST=1
+export DEVSTACK_GATE_TEMPEST_REGEX="monasca_tempest_tests.tests.api"

 if [ "{{ database }}" == "postgresql" ]; then
     export DEVSTACK_GATE_POSTGRES=1
@@ -67,15 +69,11 @@
 export PROJECTS="openstack/python-monascaclient $PROJECTS"
 export PROJECTS="openstack/monasca-grafana-datasource $PROJECTS"
 export PROJECTS="openstack/monasca-ui $PROJECTS"
+export PROJECTS="openstack/monasca-tempest-plugin $PROJECTS"
-function pre_test_hook {
-    source $BASE/new/monasca-api/monasca_tempest_tests/contrib/gate_hook.sh
-}
-export -f pre_test_hook
-
 function post_test_hook {
     # Configure and run tempest on monasca-api installation
-    source $BASE/new/monasca-api/monasca_tempest_tests/contrib/post_test_hook.sh
+    source $BASE/new/monasca-api/contrib/post_test_hook.sh
 }
 export -f post_test_hook

@@ -20,7 +20,6 @@ classifier =
 [files]
 packages =
     monasca_api
-    monasca_tempest_tests

 data_files =
     /etc/monasca =
@@ -31,9 +30,6 @@ data_files =
 console_scripts =
     monasca-api = monasca_api.api.server:launch

-tempest.test_plugins =
-    monasca_tests = monasca_tempest_tests.plugin:MonascaTempestPlugin
-
 oslo.config.opts =
     monasca_api = monasca_api.conf:list_opts
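Background on why deleting the `tempest.test_plugins` stanza is sufficient: tempest discovers plugins through that setuptools entry-point group, so once monasca-tempest-plugin declares the same entry point in its own setup.cfg, nothing in this repo needs to reference it. A sketch of the discovery mechanism (runs in any environment; the dict is empty unless a tempest plugin is actually installed):

```python
import pkg_resources

# Enumerate everything registered under the group tempest scans.
# With monasca-tempest-plugin installed, this would include the
# 'monasca_tests' name removed from setup.cfg above.
plugins = {ep.name: ep.module_name
           for ep in pkg_resources.iter_entry_points('tempest.test_plugins')}
print(plugins)
```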
4
tox.ini
4
tox.ini
@@ -51,7 +51,7 @@ skip_install = True
 usedevelop = False
 commands =
     {[testenv]commands}
-    flake8 monasca_api monasca_tempest_tests
+    flake8 monasca_api

 [testenv:bandit]
 skip_install = True
@@ -59,8 +59,6 @@ usedevelop = False
 commands =
     # B101(assert_ussed) - API uses asserts because of performance reasons
     bandit -r monasca_api -n5 -s B101 -x monasca_api/tests
-    # B101(assert_ussed) - asserts in test layers seems appropriate
-    bandit -r monasca_tempest_tests -n5 -s B101

 [testenv:bashate]
 skip_install = True