Ceilometer integration with monasca

Storage driver and publisher for integrating ceilometer with monasca
using monasca API

Change-Id: I67bef022db73fa6fbef804544126a188df4d9fac
Srinivas Sakhamuri 2015-07-08 17:01:36 -06:00
parent 7d01d06031
commit fcef6c383b
17 changed files with 2655 additions and 150 deletions

README.md

@@ -1,13 +1,13 @@
monasca-ceilometer
========
Python plugin and storage driver for Ceilometer to send samples to monasca-api
### Installation Instructions
Assumes that an active monasca-api server is running.
1. Run devstack to get openstack installed.
2. Install python-monascaclient
@@ -15,45 +15,55 @@ Assumes that an active monasca-api server is running.
3. Clone monasca-ceilometer from github.com.
Copy the following files to devstack's ceilometer location, typically at /opt/stack/ceilometer:
ceilometer/monasca_client.py
ceilometer/storage/impl_monasca.py
ceilometer/tests/api/v2/test_api_with_monasca_driver.py
ceilometer/tests/storage/test_impl_monasca.py
ceilometer/tests/test_monascaclient.py
ceilometer/tests/publisher/test_monasca_publisher.py
ceilometer/tests/publisher/test_monasca_data_filter.py
ceilometer/publisher/monasca_data_filter.py
ceilometer/publisher/monclient.py
4. Edit entry_points.txt
Under the [ceilometer.publisher] section add the following line:
monasca = ceilometer.publisher.monclient:MonascaPublisher
Under the [ceilometer.metering.storage] section add the following line:
monasca = ceilometer.storage.impl_monasca:Connection
5. Edit setup.cfg (used at the time of installation)
Under the 'ceilometer.publisher =' section add the following line:
monasca = ceilometer.publisher.monclient:MonascaPublisher
Under the 'ceilometer.metering.storage =' section add the following line:
monasca = ceilometer.storage.impl_monasca:Connection
6. Configure /etc/ceilometer/pipeline.yaml to send the metrics to the monasca publisher. Use the included pipeline.yaml file as an example.
7. Configure /etc/ceilometer/ceilometer.conf to set up the storage driver for the ceilometer API. Use the included ceilometer.conf file as an example.
8. Copy the included monasca_field_definitions.yaml file to /etc/ceilometer.
This file specifies how to treat each field of a ceilometer sample object on a per-meter basis.
monasca_data_filter.py uses this file and stores only the fields specified in it.
9. Make sure the user specified under service_credentials in ceilometer.conf has the *monasca_user role* added.
### Todo
1. The unit test files that are included need to be used in the ceilometer dev env. It would be ideal to be able to run those tests using tox within this dev env.
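The pipeline.yaml edit in step 6 routes meters to the new publisher. A minimal sketch of such a sink (the interval and endpoint address below are illustrative, and the publisher URL scheme must match the entry-point name registered in steps 4 and 5):

```yaml
sources:
    - name: meter_source
      interval: 600
      meters:
          - "*"
      sinks:
          - meter_sink
sinks:
    - name: meter_sink
      transformers:
      publishers:
          - monasca://http://192.168.10.4:8080/v2.0
```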
# License
@@ -72,7 +82,3 @@ implied.
See the License for the specific language governing permissions and
limitations under the License.

ceilometer/monasca_client.py

@@ -0,0 +1,114 @@
# Copyright 2015 Hewlett-Packard Company
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from ceilometer.i18n import _
from monascaclient import client
from monascaclient import exc
from monascaclient import ksclient
from oslo_config import cfg
from oslo_log import log
monclient_opts = [
cfg.StrOpt('clientapi_version',
default='2_0',
help='Version of Monasca client to use while publishing.'),
]
cfg.CONF.register_opts(monclient_opts, group='monasca')
cfg.CONF.import_group('service_credentials', 'ceilometer.service')
LOG = log.getLogger(__name__)
class MonascaServiceException(Exception):
pass
class MonascaInvalidServiceCredentialsException(Exception):
pass
class MonascaInvalidParametersException(Exception):
code = 400
class Client(object):
"""A client which gets information via python-monascaclient."""
def __init__(self, parsed_url):
conf = cfg.CONF.service_credentials
if not conf.os_username or not conf.os_password or \
not conf.os_auth_url:
err_msg = _("No user name or password or auth_url "
"found in service_credentials")
LOG.error(err_msg)
raise MonascaInvalidServiceCredentialsException(err_msg)
kwargs = {
'username': conf.os_username,
'password': conf.os_password,
'auth_url': conf.os_auth_url.replace("v2.0", "v3"),
'project_id': conf.os_tenant_id,
'project_name': conf.os_tenant_name,
'region_name': conf.os_region_name,
}
self._kwargs = kwargs
self._endpoint = "http:" + parsed_url.path
LOG.info(_("monasca_client: using %s as monasca end point") %
self._endpoint)
self._refresh_client()
def _refresh_client(self):
_ksclient = ksclient.KSClient(**self._kwargs)
self._kwargs['token'] = _ksclient.token
self._mon_client = client.Client(cfg.CONF.monasca.clientapi_version,
self._endpoint, **self._kwargs)
def call_func(self, func, **kwargs):
try:
return func(**kwargs)
except (exc.HTTPInternalServerError,
exc.HTTPServiceUnavailable,
exc.HTTPBadGateway,
exc.CommunicationError) as e:
LOG.exception(e)
raise MonascaServiceException(e.message)
except exc.HTTPUnProcessable as e:
LOG.exception(e)
raise MonascaInvalidParametersException(e.message)
except Exception as e:
LOG.exception(e)
raise
def metrics_create(self, **kwargs):
return self.call_func(self._mon_client.metrics.create,
**kwargs)
def metrics_list(self, **kwargs):
return self.call_func(self._mon_client.metrics.list,
**kwargs)
def metric_names_list(self, **kwargs):
return self.call_func(self._mon_client.metrics.list_names,
**kwargs)
def measurements_list(self, **kwargs):
return self.call_func(self._mon_client.metrics.list_measurements,
**kwargs)
def statistics_list(self, **kwargs):
return self.call_func(self._mon_client.metrics.list_statistics,
**kwargs)
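The call_func wrapper above translates transport-level monascaclient errors into the driver's own exception types, so callers can distinguish retryable service failures from malformed requests. A standalone sketch of that pattern, using stand-in exception classes instead of monascaclient.exc:

```python
class MonascaServiceException(Exception):
    """Retryable: monasca-api returned 5xx or was unreachable."""


class MonascaInvalidParametersException(Exception):
    """Not retryable: the request itself was rejected."""
    code = 400


class HTTPServiceUnavailable(Exception):
    """Stand-in for monascaclient.exc.HTTPServiceUnavailable."""


class HTTPUnProcessable(Exception):
    """Stand-in for monascaclient.exc.HTTPUnProcessable."""


def call_func(func, **kwargs):
    # Translate known client errors; anything unrecognized propagates as-is.
    try:
        return func(**kwargs)
    except HTTPServiceUnavailable as e:
        raise MonascaServiceException(str(e))
    except HTTPUnProcessable as e:
        raise MonascaInvalidParametersException(str(e))
```

The publisher's retry queue keys off MonascaServiceException: only failures in that class are worth re-sending.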

ceilometer/publisher/monasca_data_filter.py

@@ -0,0 +1,128 @@
#
# Copyright 2015 Hewlett-Packard Company
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import datetime
from oslo_config import cfg
from oslo_log import log
from oslo_utils import timeutils
import yaml
from ceilometer.i18n import _LI
from ceilometer import sample as sample_util
OPTS = [
cfg.StrOpt('monasca_mappings',
default='/etc/ceilometer/monasca_field_definitions.yaml',
help='Monasca static and dynamic field mappings'),
]
cfg.CONF.register_opts(OPTS, group='monasca')
LOG = log.getLogger(__name__)
class UnableToLoadMappings(Exception):
pass
class NoMappingsFound(Exception):
pass
class MonascaDataFilter(object):
def __init__(self):
self._mapping = {}
self._mapping = self._get_mapping()
def _get_mapping(self):
with open(cfg.CONF.monasca.monasca_mappings, 'r') as f:
try:
return yaml.safe_load(f)
except yaml.YAMLError as exc:
raise UnableToLoadMappings(exc.message)
def _convert_timestamp(self, timestamp):
if isinstance(timestamp, datetime.datetime):
ts = timestamp
else:
ts = timeutils.parse_isotime(timestamp)
tdelta = (ts - datetime.datetime(1970, 1, 1, tzinfo=ts.tzinfo))
# convert timestamp to milli seconds as Monasca expects
return int(tdelta.total_seconds() * 1000)
def _convert_to_sample(self, s):
return sample_util.Sample(
name=s['counter_name'],
type=s['counter_type'],
unit=s['counter_unit'],
volume=s['counter_volume'],
user_id=s['user_id'],
project_id=s['project_id'],
resource_id=s['resource_id'],
timestamp=s['timestamp'],
resource_metadata=s['resource_metadata'],
source=s.get('source')).as_dict()
def process_sample_for_monasca(self, sample_obj):
if not self._mapping:
raise NoMappingsFound("Unable to process the sample")
dimensions = {}
if isinstance(sample_obj, sample_util.Sample):
sample = sample_obj.as_dict()
elif isinstance(sample_obj, dict):
if 'counter_name' in sample_obj:
sample = self._convert_to_sample(sample_obj)
else:
sample = sample_obj
for dim in self._mapping['dimensions']:
val = sample.get(dim, None)
if val:
dimensions[dim] = val
sample_meta = sample.get('resource_metadata', None)
value_meta = {}
meter_name = sample.get('name') or sample.get('counter_name')
if sample_meta:
for meta_key in self._mapping['metadata']['common']:
val = sample_meta.get(meta_key, None)
if val:
value_meta[meta_key] = val
if meter_name in self._mapping['metadata'].keys():
for meta_key in self._mapping['metadata'][meter_name]:
val = sample_meta.get(meta_key, None)
if val:
value_meta[meta_key] = val
metric = dict(
name=meter_name,
timestamp=self._convert_timestamp(sample['timestamp']),
value=sample.get('volume') or sample.get('counter_volume'),
dimensions=dimensions,
value_meta=value_meta if value_meta else None,
)
LOG.debug(_LI("Generated metric with name %(name)s,"
" timestamp %(timestamp)s, value %(value)s,"
" dimensions %(dimensions)s") %
{'name': metric['name'],
'timestamp': metric['timestamp'],
'value': metric['value'],
'dimensions': metric['dimensions']})
return metric
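The mapping file drives which sample fields survive the conversion to a Monasca metric. A miniature sketch of what process_sample_for_monasca does, where the MAPPING dict is an illustrative stand-in for monasca_field_definitions.yaml and filter_sample is a hypothetical simplification:

```python
import datetime

# Illustrative stand-in for /etc/ceilometer/monasca_field_definitions.yaml.
MAPPING = {
    'dimensions': ['resource_id', 'project_id', 'user_id', 'source'],
    'metadata': {'common': ['event_type'], 'image': ['size']},
}


def to_epoch_millis(timestamp):
    # Monasca expects timestamps as milliseconds since the epoch.
    epoch = datetime.datetime(1970, 1, 1, tzinfo=timestamp.tzinfo)
    return int((timestamp - epoch).total_seconds() * 1000)


def filter_sample(sample, mapping):
    # Keep only whitelisted dimensions that have truthy values.
    dimensions = {d: sample[d] for d in mapping['dimensions'] if sample.get(d)}
    # value_meta takes the common metadata keys plus any per-meter keys.
    meta = sample.get('resource_metadata') or {}
    keys = list(mapping['metadata']['common'])
    keys += mapping['metadata'].get(sample['name'], [])
    value_meta = {k: meta[k] for k in keys if meta.get(k)}
    return dict(name=sample['name'],
                timestamp=to_epoch_millis(sample['timestamp']),
                value=sample['volume'],
                dimensions=dimensions,
                value_meta=value_meta or None)
```

Fields not named in the mapping (like the 'extra' metadata key in a sample) are simply dropped, which is the filter's whole purpose.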

ceilometer/publisher/monclient.py Executable file

@@ -0,0 +1,252 @@
#
# Copyright 2015 Hewlett Packard
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import time
from oslo_config import cfg
from oslo_log import log
import ceilometer
from ceilometer.i18n import _
from ceilometer import monasca_client as mon_client
from ceilometer.openstack.common import loopingcall
from ceilometer import publisher
from ceilometer.publisher.monasca_data_filter import MonascaDataFilter
from monascaclient import exc
monpub_opts = [
cfg.BoolOpt('batch_mode',
default=False,
help='Indicates whether samples are'
' published in a batch.'),
cfg.IntOpt('batch_count',
default=100,
help='Maximum number of samples in a batch.'),
cfg.IntOpt('batch_timeout',
default=18,
help='Maximum time interval (seconds) after which '
'samples are published in a batch.'),
cfg.IntOpt('batch_polling_interval',
default=5,
help='Frequency of checking if batch criteria is met.'),
cfg.BoolOpt('retry_on_failure',
default=False,
help='Indicates whether publisher retries publishing '
'sample in case of failure. Only a few error cases '
'are queued for a retry.'),
cfg.IntOpt('retry_interval',
default=60,
help='Frequency of attempting a retry.'),
cfg.IntOpt('max_retries',
default=3,
help='Maximum number of retry attempts on a publishing '
'failure.'),
cfg.StrOpt('failure_report_path',
default='monclient_failures.txt',
help='File report of samples that failed to publish to'
' Monasca. These include samples that failed to '
'publish on first attempt and failed samples that'
' maxed out their retries.'),
]
cfg.CONF.register_opts(monpub_opts, group='monasca')
cfg.CONF.import_group('service_credentials', 'ceilometer.service')
LOG = log.getLogger(__name__)
class MonascaPublisher(publisher.PublisherBase):
"""Publisher to publish samples to monasca using monasca-client.
Example URL to place in pipeline.yaml:
- monasca://http://192.168.10.4:8080/v2.0
"""
def __init__(self, parsed_url):
super(MonascaPublisher, self).__init__(parsed_url)
# list to hold metrics to be published in batch (behaves like queue)
self.metric_queue = []
self.time_of_last_batch_run = time.time()
self.mon_client = mon_client.Client(parsed_url)
self.mon_filter = MonascaDataFilter()
batch_timer = loopingcall.FixedIntervalLoopingCall(self.flush_batch)
batch_timer.start(interval=cfg.CONF.monasca.batch_polling_interval)
if cfg.CONF.monasca.retry_on_failure:
# list to hold metrics to be re-tried (behaves like queue)
self.retry_queue = []
# list to store retry attempts for metrics in retry_queue
self.retry_counter = []
retry_timer = loopingcall.FixedIntervalLoopingCall(
self.retry_batch)
retry_timer.start(
interval=cfg.CONF.monasca.retry_interval,
initial_delay=cfg.CONF.monasca.batch_polling_interval)
def _publish_handler(self, func, metrics, batch=False):
"""Handles publishing and exceptions that arise."""
try:
metric_count = len(metrics)
if batch:
func(**{'jsonbody': metrics})
else:
func(**metrics[0])
LOG.debug(_('Successfully published %d metric(s)') % metric_count)
except mon_client.MonascaServiceException:
# Assuming atomicity of create or failure - meaning
# either all succeed or all fail in a batch
LOG.error(_('Metric create failed for %(count)d metric(s) with'
' name(s) %(names)s ') %
({'count': len(metrics),
'names': ','.join([metric['name']
for metric in metrics])}))
if cfg.CONF.monasca.retry_on_failure:
# retry payload in case of internal server error(500),
# service unavailable error(503),bad gateway (502) or
# Communication Error
# append failed metrics to retry_queue
LOG.debug(_('Adding metrics to retry queue.'))
self.retry_queue.extend(metrics)
# initialize the retry_attempt for the each failed
# metric in retry_counter
self.retry_counter.extend([0] * metric_count)
else:
# TODO(flush metric to file)
pass
except Exception:
# TODO(flush metric to file)
pass
def publish_samples(self, context, samples):
"""Main method called to publish samples."""
for sample in samples:
metric = self.mon_filter.process_sample_for_monasca(sample)
# In batch mode, push metric to queue,
# else publish the metric
if cfg.CONF.monasca.batch_mode:
LOG.debug(_('Adding metric to queue.'))
self.metric_queue.append(metric)
else:
LOG.debug(_('Publishing metric with name %(name)s and'
' timestamp %(ts)s to endpoint.') %
({'name': metric['name'],
'ts': metric['timestamp']}))
self._publish_handler(self.mon_client.metrics_create, [metric])
def is_batch_ready(self):
"""Method to check if batch is ready to trigger."""
previous_time = self.time_of_last_batch_run
current_time = time.time()
elapsed_time = current_time - previous_time
if (elapsed_time >= cfg.CONF.monasca.batch_timeout
        and len(self.metric_queue) > 0):
LOG.debug(_('Batch timeout exceeded, triggering batch publish.'))
return True
else:
if len(self.metric_queue) >= cfg.CONF.monasca.batch_count:
LOG.debug(_('Batch queue full, triggering batch publish.'))
return True
else:
return False
def flush_batch(self):
"""Method to flush the queued metrics."""
if self.is_batch_ready():
# publish all metrics in queue at this point
batch_count = len(self.metric_queue)
self._publish_handler(self.mon_client.metrics_create,
self.metric_queue[:batch_count],
batch=True)
self.time_of_last_batch_run = time.time()
# slice queue to remove metrics that
# published with success or failed and got queued on
# retry queue
self.metric_queue = self.metric_queue[batch_count:]
def is_retry_ready(self):
"""Method to check if retry batch is ready to trigger."""
if len(self.retry_queue) > 0:
LOG.debug(_('Retry queue has items, triggering retry.'))
return True
else:
return False
def retry_batch(self):
"""Method to retry the failed metrics."""
if self.is_retry_ready():
# Iterate over the retry_queue to eliminate
# metrics that have maxed out their retry attempts.
# Deleting by index while walking the list forward would skip
# entries, so only advance the index when nothing was removed.
ctr = 0
while ctr < len(self.retry_queue):
    if self.retry_counter[ctr] > cfg.CONF.monasca.max_retries:
        # TODO(persist maxed-out metrics to file)
        LOG.debug(_('Removing metric %s from retry queue.'
                    ' Metric retry maxed out retry attempts') %
                  self.retry_queue[ctr]['name'])
        del self.retry_queue[ctr]
        del self.retry_counter[ctr]
    else:
        ctr += 1
# Iterate over the retry_queue to retry the
# publish for each metric.
# If an exception occurs, the retry count for
# the failed metric is incremented.
# If the retry succeeds, remove the metric and
# the retry count from the retry_queue and retry_counter resp.
ctr = 0
while ctr < len(self.retry_queue):
try:
LOG.debug(_('Retrying metric publish from retry queue.'))
self.mon_client.metrics_create(**self.retry_queue[ctr])
# remove from retry queue if publish was success
LOG.debug(_('Retrying metric %s successful,'
' removing metric from retry queue.') %
self.retry_queue[ctr]['name'])
del self.retry_queue[ctr]
del self.retry_counter[ctr]
except exc.BaseException:
LOG.error(_('Exception encountered in retry. '
'Batch will be retried in next attempt.'))
# if retry failed, increment the retry counter
self.retry_counter[ctr] += 1
ctr += 1
def flush_to_file(self):
# TODO(persist maxed-out metrics to file)
pass
def publish_events(self, context, events):
"""Send an event message for publishing
:param context: Execution context from the service or RPC call
:param events: events from pipeline after transformation
"""
raise ceilometer.NotImplementedError
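The batch trigger in is_batch_ready fires on either of two conditions: the queue has gone stale (timeout elapsed with something queued) or the queue has reached the batch size. A standalone sketch of that decision, with the config options replaced by constants:

```python
BATCH_COUNT = 100    # stands in for cfg.CONF.monasca.batch_count
BATCH_TIMEOUT = 18   # stands in for cfg.CONF.monasca.batch_timeout (seconds)


def is_batch_ready(queue_len, seconds_since_last_flush):
    """Flush when the queue is non-empty and stale, or is full."""
    if seconds_since_last_flush >= BATCH_TIMEOUT and queue_len > 0:
        return True
    return queue_len >= BATCH_COUNT
```

The looping call polls this every batch_polling_interval seconds, so a trickle of samples is still published within roughly batch_timeout seconds of arriving.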

ceilometer/storage/impl_monasca.py

@@ -0,0 +1,473 @@
#
# Copyright 2015 Hewlett Packard
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Simple monasca storage backend.
"""
import datetime
from monascaclient import exc as monasca_exc
from oslo_config import cfg
from oslo_log import log
from oslo_utils import netutils
from oslo_utils import timeutils
import ceilometer
from ceilometer.i18n import _
from ceilometer import monasca_client
from ceilometer.publisher.monasca_data_filter import MonascaDataFilter
from ceilometer.storage import base
from ceilometer.storage import models as api_models
from ceilometer import utils
OPTS = [
cfg.IntOpt('default_stats_period',
default=300,
help='Default period (in seconds) to use for querying stats '
'in case no period specified in the stats API call.'),
]
cfg.CONF.register_opts(OPTS, group='monasca')
LOG = log.getLogger(__name__)
AVAILABLE_CAPABILITIES = {
'meters': {'query': {'simple': True,
'metadata': False}},
'resources': {'query': {'simple': True,
'metadata': False}},
'samples': {'pagination': False,
'groupby': False,
'query': {'simple': True,
'metadata': False,
'complex': False}},
'statistics': {'groupby': False,
'query': {'simple': True,
'metadata': False},
'aggregation': {'standard': True,
'selectable': {
'max': True,
'min': True,
'sum': True,
'avg': True,
'count': True,
'stddev': False,
'cardinality': False}}
},
}
AVAILABLE_STORAGE_CAPABILITIES = {
'storage': {'production_ready': True},
}
class Connection(base.Connection):
CAPABILITIES = utils.update_nested(base.Connection.CAPABILITIES,
AVAILABLE_CAPABILITIES)
STORAGE_CAPABILITIES = utils.update_nested(
base.Connection.STORAGE_CAPABILITIES,
AVAILABLE_STORAGE_CAPABILITIES,
)
def __init__(self, url):
self.mc = monasca_client.Client(netutils.urlsplit(url))
self.mon_filter = MonascaDataFilter()
@staticmethod
def _convert_to_dict(stats, cols):
return {c: stats[i] for i, c in enumerate(cols)}
def upgrade(self):
pass
def clear(self):
pass
def record_metering_data(self, data):
"""Write the data to the backend storage system.
:param data: a dictionary such as returned by
ceilometer.meter.meter_message_from_counter.
"""
LOG.info(_('metering data %(counter_name)s for %(resource_id)s: '
'%(counter_volume)s')
% ({'counter_name': data['counter_name'],
'resource_id': data['resource_id'],
'counter_volume': data['counter_volume']}))
metric = self.mon_filter.process_sample_for_monasca(data)
self.mc.metrics_create(**metric)
def clear_expired_metering_data(self, ttl):
"""Clear expired data from the backend storage system.
Clearing occurs according to the time-to-live.
:param ttl: Number of seconds to keep records for.
"""
LOG.info(_("Dropping data with TTL %d"), ttl)
def get_resources(self, user=None, project=None, source=None,
start_timestamp=None, start_timestamp_op=None,
end_timestamp=None, end_timestamp_op=None,
metaquery=None, resource=None, pagination=None):
"""Return an iterable of dictionaries containing resource information.
{ 'resource_id': UUID of the resource,
'project_id': UUID of project owning the resource,
'user_id': UUID of user owning the resource,
'timestamp': UTC datetime of last update to the resource,
'metadata': most current metadata for the resource,
'meter': list of the meters reporting data for the resource,
}
:param user: Optional ID for user that owns the resource.
:param project: Optional ID for project that owns the resource.
:param source: Optional source filter.
:param start_timestamp: Optional modified timestamp start range.
:param start_timestamp_op: Optional start time operator, like gt, ge.
:param end_timestamp: Optional modified timestamp end range.
:param end_timestamp_op: Optional end time operator, like lt, le.
:param metaquery: Optional dict with metadata to match on.
:param resource: Optional resource filter.
:param pagination: Optional pagination query.
"""
if pagination:
raise ceilometer.NotImplementedError('Pagination not implemented')
if metaquery:
raise ceilometer.NotImplementedError('Metaquery not implemented')
if start_timestamp_op and start_timestamp_op != 'ge':
raise ceilometer.NotImplementedError(('Start time op %s '
'not implemented') %
start_timestamp_op)
if end_timestamp_op and end_timestamp_op != 'le':
raise ceilometer.NotImplementedError(('End time op %s '
'not implemented') %
end_timestamp_op)
if not start_timestamp:
start_timestamp = timeutils.isotime(datetime.datetime(1970, 1, 1))
else:
start_timestamp = timeutils.isotime(start_timestamp)
if end_timestamp:
end_timestamp = timeutils.isotime(end_timestamp)
dims_filter = dict(user_id=user,
project_id=project,
source=source,
resource_id=resource
)
dims_filter = {k: v for k, v in dims_filter.items() if v is not None}
_search_args = dict(
start_time=start_timestamp,
end_time=end_timestamp,
limit=1)
_search_args = {k: v for k, v in _search_args.items()
if v is not None}
for metric in self.mc.metrics_list(
**dict(dimensions=dims_filter)):
_search_args['name'] = metric['name']
_search_args['dimensions'] = metric['dimensions']
try:
for sample in self.mc.measurements_list(**_search_args):
d = sample['dimensions']
m = self._convert_to_dict(
sample['measurements'][0], sample['columns'])
if d.get('resource_id'):
yield api_models.Resource(
resource_id=d.get('resource_id'),
first_sample_timestamp=(
timeutils.parse_isotime(m['timestamp'])),
last_sample_timestamp=timeutils.utcnow(),
project_id=d.get('project_id'),
source=d.get('source'),
user_id=d.get('user_id'),
metadata=m['value_meta'],
)
except monasca_exc.HTTPConflict:
pass
def get_meters(self, user=None, project=None, resource=None, source=None,
limit=None, metaquery=None, pagination=None):
"""Return an iterable of dictionaries containing meter information.
{ 'name': name of the meter,
'type': type of the meter (gauge, delta, cumulative),
'resource_id': UUID of the resource,
'project_id': UUID of project owning the resource,
'user_id': UUID of user owning the resource,
}
:param user: Optional ID for user that owns the resource.
:param project: Optional ID for project that owns the resource.
:param resource: Optional resource filter.
:param source: Optional source filter.
:param limit: Maximum number of results to return.
:param metaquery: Optional dict with metadata to match on.
:param pagination: Optional pagination query.
"""
if pagination:
raise ceilometer.NotImplementedError('Pagination not implemented')
if metaquery:
raise ceilometer.NotImplementedError('Metaquery not implemented')
_dimensions = dict(
user_id=user,
project_id=project,
resource_id=resource,
source=source
)
_dimensions = {k: v for k, v in _dimensions.items() if v is not None}
_search_kwargs = {'dimensions': _dimensions}
if limit:
_search_kwargs['limit'] = limit
for metric in self.mc.metrics_list(**_search_kwargs):
yield api_models.Meter(
name=metric['name'],
type=metric['dimensions'].get('type') or 'cumulative',
unit=metric['dimensions'].get('unit'),
resource_id=metric['dimensions'].get('resource_id'),
project_id=metric['dimensions'].get('project_id'),
source=metric['dimensions'].get('source'),
user_id=metric['dimensions'].get('user_id'))
def get_samples(self, sample_filter, limit=None):
"""Return an iterable of dictionaries containing sample information.
{
'source': source of the resource,
'counter_name': name of the resource,
'counter_type': type of the sample (gauge, delta, cumulative),
'counter_unit': unit of the sample,
'counter_volume': volume of the sample,
'user_id': UUID of user owning the resource,
'project_id': UUID of project owning the resource,
'resource_id': UUID of the resource,
'timestamp': timestamp of the sample,
'resource_metadata': metadata of the sample,
'message_id': message ID of the sample,
'message_signature': message signature of the sample,
'recorded_at': time the sample was recorded
}
:param sample_filter: constraints for the sample search.
:param limit: Maximum number of results to return.
"""
if not sample_filter or not sample_filter.meter:
raise ceilometer.NotImplementedError(
    "Supply at least the meter name")
if (sample_filter.start_timestamp_op and
sample_filter.start_timestamp_op != 'ge'):
raise ceilometer.NotImplementedError(
    ('Start time op %s not implemented') %
    sample_filter.start_timestamp_op)
if (sample_filter.end_timestamp_op and
sample_filter.end_timestamp_op != 'le'):
raise ceilometer.NotImplementedError(
    ('End time op %s not implemented') %
    sample_filter.end_timestamp_op)
if sample_filter.metaquery:
raise ceilometer.NotImplementedError('metaquery not '
'implemented '
'in get_samples')
if sample_filter.message_id:
raise ceilometer.NotImplementedError('message_id not '
'implemented '
'in get_samples')
if not sample_filter.start_timestamp:
sample_filter.start_timestamp = \
timeutils.isotime(datetime.datetime(1970, 1, 1))
else:
sample_filter.start_timestamp = \
timeutils.isotime(sample_filter.start_timestamp)
if sample_filter.end_timestamp:
sample_filter.end_timestamp = \
timeutils.isotime(sample_filter.end_timestamp)
_dimensions = dict(
user_id=sample_filter.user,
project_id=sample_filter.project,
resource_id=sample_filter.resource,
source=sample_filter.source
)
_dimensions = {k: v for k, v in _dimensions.items() if v is not None}
_search_args = dict(name=sample_filter.meter,
start_time=sample_filter.start_timestamp,
start_timestamp_op=(
sample_filter.start_timestamp_op),
end_time=sample_filter.end_timestamp,
end_timestamp_op=sample_filter.end_timestamp_op,
limit=limit,
merge_metrics=True,
dimensions=_dimensions)
_search_args = {k: v for k, v in _search_args.items()
if v is not None}
for sample in self.mc.measurements_list(**_search_args):
LOG.debug(_('Retrieved sample: %s'), sample)
d = sample['dimensions']
for measurement in sample['measurements']:
meas_dict = self._convert_to_dict(measurement,
sample['columns'])
yield api_models.Sample(
source=d.get('source'),
counter_name=sample['name'],
counter_type=d.get('type'),
counter_unit=d.get('unit'),
counter_volume=meas_dict['value'],
user_id=d.get('user_id'),
project_id=d.get('project_id'),
resource_id=d.get('resource_id'),
timestamp=timeutils.parse_isotime(meas_dict['timestamp']),
resource_metadata=meas_dict['value_meta'],
message_id=sample['id'],
message_signature='',
recorded_at=(
timeutils.parse_isotime(meas_dict['timestamp'])))
def get_meter_statistics(self, filter, period=None, groupby=None,
aggregate=None):
"""Return a dictionary containing meter statistics.
Meter statistics is described by the query parameters.
The filter must have a meter value set.
{ 'min':
'max':
'avg':
'sum':
'count':
'period':
'period_start':
'period_end':
'duration':
'duration_start':
'duration_end':
}
"""
if filter:
if not filter.meter:
raise ceilometer.NotImplementedError('Query without meter '
'not implemented')
else:
raise ceilometer.NotImplementedError('Query without filter '
'not implemented')
if groupby:
raise ceilometer.NotImplementedError('Groupby not implemented')
if filter.metaquery:
raise ceilometer.NotImplementedError('Metaquery not implemented')
if filter.message_id:
raise ceilometer.NotImplementedError('Message_id query '
'not implemented')
if filter.start_timestamp_op and filter.start_timestamp_op != 'ge':
raise ceilometer.NotImplementedError(('Start time op %s '
'not implemented') %
filter.start_timestamp_op)
if filter.end_timestamp_op and filter.end_timestamp_op != 'le':
raise ceilometer.NotImplementedError(('End time op %s '
'not implemented') %
filter.end_timestamp_op)
if not filter.start_timestamp:
filter.start_timestamp = timeutils.isotime(
datetime.datetime(1970, 1, 1))
# TODO(monasca): Add this as a config parameter
allowed_stats = ['avg', 'min', 'max', 'sum', 'count']
if aggregate:
not_allowed_stats = [a.func for a in aggregate
if a.func not in allowed_stats]
if not_allowed_stats:
raise ceilometer.NotImplementedError(('Aggregate function(s) '
'%s not implemented') %
not_allowed_stats)
statistics = [a.func for a in aggregate
if a.func in allowed_stats]
else:
statistics = allowed_stats
dims_filter = dict(user_id=filter.user,
project_id=filter.project,
source=filter.source,
resource_id=filter.resource
)
dims_filter = {k: v for k, v in dims_filter.items() if v is not None}
period = period if period \
else cfg.CONF.monasca.default_stats_period
_search_args = dict(
name=filter.meter,
dimensions=dims_filter,
start_time=filter.start_timestamp,
end_time=filter.end_timestamp,
period=period,
statistics=','.join(statistics),
merge_metrics=True)
_search_args = {k: v for k, v in _search_args.items()
if v is not None}
stats_list = self.mc.statistics_list(**_search_args)
for stats in stats_list:
for s in stats['statistics']:
stats_dict = self._convert_to_dict(s, stats['columns'])
ts_start = timeutils.parse_isotime(stats_dict['timestamp'])
ts_end = ts_start + datetime.timedelta(0, period)
del stats_dict['timestamp']
if 'count' in stats_dict:
stats_dict['count'] = int(stats_dict['count'])
yield api_models.Statistics(
unit=stats['dimensions'].get('unit'),
period=period,
period_start=ts_start,
period_end=ts_end,
duration=period,
duration_start=ts_start,
duration_end=ts_end,
groupby={u'': u''},
**stats_dict
)
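The statistics loop above zips Monasca's `columns` list with each statistics row to build keyword arguments for `api_models.Statistics`, then derives the period window from the row timestamp. A minimal standalone sketch of that conversion (helper names here are illustrative, not the driver's actual ones):

```python
import datetime

def convert_to_dict(row, columns):
    # Pair each column name with the matching value in the row.
    return dict(zip(columns, row))

columns = ['timestamp', 'avg', 'count']
row = ['2015-04-14T17:52:00Z', 2.5, 4.0]
period = 300  # seconds

stats = convert_to_dict(row, columns)
ts_start = datetime.datetime.strptime(stats.pop('timestamp'),
                                      '%Y-%m-%dT%H:%M:%SZ')
ts_end = ts_start + datetime.timedelta(seconds=period)
stats['count'] = int(stats['count'])  # Monasca returns counts as floats
```

With the row above, `stats` ends up as `{'avg': 2.5, 'count': 4}` and the period window runs from 17:52:00 to 17:57:00.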


@@ -0,0 +1,223 @@
#
# Copyright 2015 Hewlett Packard
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Tests for the Ceilometer v2 API with the Monasca storage driver.
"""
import mock
from oslo_config import cfg
from oslo_config import fixture as fixture_config
from oslotest import mockpatch
from ceilometer import storage
from ceilometer.tests import base as test_base
from oslo_policy import opts
import pecan
import pecan.testing
OPT_GROUP_NAME = 'keystone_authtoken'
cfg.CONF.import_group(OPT_GROUP_NAME, "keystonemiddleware.auth_token")
class TestApi(test_base.BaseTestCase):
# TODO(review): unresolved comment from code review -- can we include
# CM-api test cases for get_samples in this module?
def setUp(self):
super(TestApi, self).setUp()
self.PATH_PREFIX = '/v2'
self.CONF = self.useFixture(fixture_config.Config()).conf
self.CONF([], project='ceilometer', validate_default_values=True)
self.setup_messaging(self.CONF)
opts.set_defaults(self.CONF)
self.CONF.set_override("auth_version", "v2.0",
group=OPT_GROUP_NAME)
self.CONF.set_override("policy_file",
self.path_get('etc/ceilometer/policy.json'),
group='oslo_policy')
self.CONF.import_opt('pipeline_cfg_file', 'ceilometer.pipeline')
self.CONF.set_override(
'pipeline_cfg_file',
self.path_get('etc/ceilometer/monasca_pipeline.yaml')
)
self.CONF.import_opt('monasca_mappings',
'ceilometer.publisher.monasca_data_filter',
group='monasca')
self.CONF.set_override(
'monasca_mappings',
self.path_get('etc/ceilometer/monasca_field_definitions.yaml'),
group='monasca'
)
with mock.patch("ceilometer.monasca_client.Client") as mock_client:
self.mock_mon_client = mock_client
self.conn = storage.get_connection('monasca://127.0.0.1:8080',
'ceilometer.metering.storage')
self.useFixture(mockpatch.Patch(
'ceilometer.storage.get_connection',
return_value=self.conn))
self.app = self._make_app()
def _make_app(self, enable_acl=False):
self.config = {
'app': {
'root': 'ceilometer.api.controllers.root.RootController',
'modules': ['ceilometer.api'],
'enable_acl': enable_acl,
},
'wsme': {
'debug': True,
},
}
return pecan.testing.load_test_app(self.config)
def get_json(self, path, expect_errors=False, headers=None,
extra_environ=None, q=None, groupby=None, status=None,
override_params=None, **params):
"""Sends simulated HTTP GET request to Pecan test app.
:param path: url path of target service
:param expect_errors: boolean value whether an error is expected based
on request
:param headers: A dictionary of headers to send along with the request
:param extra_environ: A dictionary of environ variables to send along
with the request
:param q: list of queries consisting of: field, value, op, and type
keys
:param groupby: list of fields to group by
:param status: Expected status code of response
:param override_params: literally encoded query param string
:param params: content for wsgi.input of request
"""
q = q or []
groupby = groupby or []
full_path = self.PATH_PREFIX + path
if override_params:
all_params = override_params
else:
query_params = {'q.field': [],
'q.value': [],
'q.op': [],
'q.type': [],
}
for query in q:
for name in ['field', 'op', 'value', 'type']:
query_params['q.%s' % name].append(query.get(name, ''))
all_params = {}
all_params.update(params)
if q:
all_params.update(query_params)
if groupby:
all_params.update({'groupby': groupby})
response = self.app.get(full_path,
params=all_params,
headers=headers,
extra_environ=extra_environ,
expect_errors=expect_errors,
status=status)
if not expect_errors:
response = response.json
return response
class TestListMeters(TestApi):
def setUp(self):
super(TestListMeters, self).setUp()
self.meter_payload = [{'name': 'm1',
'dimensions': {
'type': 'gauge',
'unit': 'any',
'resource_id': 'resource-1',
'project_id': 'project-1',
'user_id': 'user-1',
'source': 'source'}},
{'name': 'm2',
'dimensions': {
'type': 'delta',
'unit': 'any',
'resource_id': 'resource-1',
'project_id': 'project-1',
'user_id': 'user-1',
'source': 'source'}}]
def test_empty(self):
data = self.get_json('/meters')
self.assertEqual([], data)
def test_not_implemented(self):
resp = self.get_json('/meters',
q=[{'field': 'pagination',
'value': True}],
expect_errors=True)
expected_error_message = 'Pagination not implemented'
self.assertEqual(expected_error_message,
resp.json['error_message']['faultstring'])
self.assertEqual(501, resp.status_code)
def test_get_meters(self):
mnl_mock = self.mock_mon_client().metrics_list
mnl_mock.return_value = self.meter_payload
data = self.get_json('/meters')
self.assertEqual(True, mnl_mock.called)
self.assertEqual(1, mnl_mock.call_count)
self.assertEqual(2, len(data))
for meter in data:
    self.assertIn(meter['name'],
                  [payload.get('name') for payload in
                   self.meter_payload])
def test_get_meters_query_with_project_resource(self):
mnl_mock = self.mock_mon_client().metrics_list
mnl_mock.return_value = self.meter_payload
self.get_json('/meters',
q=[{'field': 'resource_id',
'value': 'resource-1'},
{'field': 'project_id',
'value': 'project-1'}])
self.assertEqual(True, mnl_mock.called)
self.assertEqual(1, mnl_mock.call_count)
self.assertEqual(dict(dimensions=dict(resource_id=u'resource-1',
project_id=u'project-1')),
mnl_mock.call_args[1])
def test_get_meters_query_with_user(self):
mnl_mock = self.mock_mon_client().metrics_list
mnl_mock.return_value = self.meter_payload
self.get_json('/meters',
q=[{'field': 'user_id',
'value': 'user-1'}])
self.assertEqual(True, mnl_mock.called)
self.assertEqual(1, mnl_mock.call_count)
self.assertEqual(dict(dimensions=dict(user_id=u'user-1')),
mnl_mock.call_args[1])
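For reference, the `q` handling in `get_json` above flattens each query dict into parallel `q.field`/`q.value`/`q.op`/`q.type` lists, which is how the Ceilometer v2 API encodes simple queries on the query string. A standalone sketch of that flattening:

```python
def build_query_params(q):
    # One parallel list per key; absent keys contribute empty strings.
    params = {'q.field': [], 'q.value': [], 'q.op': [], 'q.type': []}
    for query in q:
        for name in ('field', 'op', 'value', 'type'):
            params['q.%s' % name].append(query.get(name, ''))
    return params

params = build_query_params([{'field': 'resource_id',
                              'op': 'eq',
                              'value': 'resource-1'}])
```

Each query dict contributes one entry per list, so multiple queries stay aligned by index.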


@@ -0,0 +1,134 @@
#
# Copyright 2015 Hewlett-Packard Company
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import datetime
import mock
from oslo_utils import timeutils
from oslotest import base
from ceilometer.publisher import monasca_data_filter as mdf
from ceilometer import sample
class TestMonUtils(base.BaseTestCase):
def setUp(self):
super(TestMonUtils, self).setUp()
self._field_mappings = {
'dimensions': ['resource_id',
'project_id',
'user_id',
'geolocation',
'region',
'availability_zone'],
'metadata': {
'common': ['event_type',
'audit_period_beginning',
'audit_period_ending'],
'image': ['size', 'status'],
'image.delete': ['size', 'status'],
'image.size': ['size', 'status'],
'image.update': ['size', 'status'],
'image.upload': ['size', 'status'],
'instance': ['state', 'state_description'],
'snapshot': ['status'],
'snapshot.size': ['status'],
'volume': ['status'],
'volume.size': ['status'],
}
}
def test_process_sample(self):
s = sample.Sample(
name='test',
type=sample.TYPE_CUMULATIVE,
unit='',
volume=1,
user_id='test',
project_id='test',
resource_id='test_run_tasks',
timestamp=datetime.datetime.utcnow().isoformat(),
resource_metadata={'name': 'TestPublish'},
)
to_patch = ("ceilometer.publisher.monasca_data_filter."
"MonascaDataFilter._get_mapping")
with mock.patch(to_patch, side_effect=[self._field_mappings]):
data_filter = mdf.MonascaDataFilter()
r = data_filter.process_sample_for_monasca(s)
self.assertEqual(s.name, r['name'])
self.assertIsNone(r['dimensions'].get('type'))
self.assertIsNone(r.get('value_meta'))
self.assertEqual(s.user_id, r['dimensions'].get('user_id'))
self.assertEqual(s.project_id, r['dimensions']['project_id'])
# e.g. 2015-04-07T20:07:06.156986; compare up to millisecond precision
monasca_ts = \
timeutils.iso8601_from_timestamp(r['timestamp'] / 1000.0,
microsecond=True)[:23]
self.assertEqual(s.timestamp[:23], monasca_ts)
def test_process_sample_field_mappings(self):
s = sample.Sample(
name='test',
type=sample.TYPE_CUMULATIVE,
unit='',
volume=1,
user_id='test',
project_id='test',
resource_id='test_run_tasks',
timestamp=datetime.datetime.utcnow().isoformat(),
resource_metadata={'name': 'TestPublish'},
)
field_map = self._field_mappings
field_map['dimensions'].remove('project_id')
field_map['dimensions'].remove('user_id')
to_patch = ("ceilometer.publisher.monasca_data_filter."
"MonascaDataFilter._get_mapping")
with mock.patch(to_patch, side_effect=[field_map]):
data_filter = mdf.MonascaDataFilter()
r = data_filter.process_sample_for_monasca(s)
self.assertIsNone(r['dimensions'].get('project_id'))
self.assertIsNone(r['dimensions'].get('user_id'))
def test_process_sample_metadata(self):
s = sample.Sample(
name='image',
type=sample.TYPE_CUMULATIVE,
unit='',
volume=1,
user_id='test',
project_id='test',
resource_id='test_run_tasks',
timestamp=datetime.datetime.utcnow().isoformat(),
resource_metadata={'event_type': 'notification',
'status': 'active',
'size': 1500},
)
to_patch = ("ceilometer.publisher.monasca_data_filter."
"MonascaDataFilter._get_mapping")
with mock.patch(to_patch, side_effect=[self._field_mappings]):
data_filter = mdf.MonascaDataFilter()
r = data_filter.process_sample_for_monasca(s)
self.assertEqual(s.name, r['name'])
self.assertIsNotNone(r.get('value_meta'))
self.assertEqual(s.resource_metadata.items(),
r['value_meta'].items())
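The timestamp assertion in `test_process_sample` above assumes the data filter converts a sample's ISO-8601 timestamp into epoch milliseconds (Monasca's wire format); only millisecond precision is expected to survive the round trip. A sketch of that assumption using just the standard library (function names are illustrative, not the filter's actual API):

```python
import calendar
import datetime

def iso_to_epoch_millis(iso_ts):
    # Parse a naive UTC ISO-8601 timestamp into epoch milliseconds.
    dt = datetime.datetime.strptime(iso_ts, '%Y-%m-%dT%H:%M:%S.%f')
    return calendar.timegm(dt.timetuple()) * 1000.0 + dt.microsecond / 1000.0

def epoch_millis_to_iso(millis):
    # Convert epoch milliseconds back into an ISO-8601 string.
    return datetime.datetime.utcfromtimestamp(millis / 1000.0).isoformat()

ts = '2015-04-07T20:07:06.156986'
millis = iso_to_epoch_millis(ts)
# Comparing the first 23 characters keeps the match at millisecond
# precision, mirroring the s.timestamp[:23] comparison in the test.
```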


@@ -0,0 +1,169 @@
#
# Copyright 2015 Hewlett Packard
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Tests for ceilometer/publisher/monclient.py
"""
import datetime
import eventlet
import mock
from oslo_config import fixture as fixture_config
from oslotest import base
from ceilometer import monasca_client as mon_client
from ceilometer.publisher import monclient
from ceilometer import sample
from monascaclient import ksclient
class FakeResponse(object):
def __init__(self, status_code):
self.status_code = status_code
class TestMonascaPublisher(base.BaseTestCase):
test_data = [
sample.Sample(
name='test',
type=sample.TYPE_CUMULATIVE,
unit='',
volume=1,
user_id='test',
project_id='test',
resource_id='test_run_tasks',
timestamp=datetime.datetime.utcnow().isoformat(),
resource_metadata={'name': 'TestPublish'},
),
sample.Sample(
name='test2',
type=sample.TYPE_CUMULATIVE,
unit='',
volume=1,
user_id='test',
project_id='test',
resource_id='test_run_tasks',
timestamp=datetime.datetime.utcnow().isoformat(),
resource_metadata={'name': 'TestPublish'},
),
sample.Sample(
name='test2',
type=sample.TYPE_CUMULATIVE,
unit='',
volume=1,
user_id='test',
project_id='test',
resource_id='test_run_tasks',
timestamp=datetime.datetime.utcnow().isoformat(),
resource_metadata={'name': 'TestPublish'},
),
]
field_mappings = {
'dimensions': ['resource_id',
'project_id',
'user_id',
'geolocation',
'region',
'availability_zone'],
'metadata': {
'common': ['event_type',
'audit_period_beginning',
'audit_period_ending'],
'image': ['size', 'status'],
'image.delete': ['size', 'status'],
'image.size': ['size', 'status'],
'image.update': ['size', 'status'],
'image.upload': ['size', 'status'],
'instance': ['state', 'state_description'],
'snapshot': ['status'],
'snapshot.size': ['status'],
'volume': ['status'],
'volume.size': ['status'],
}
}
@staticmethod
def create_side_effect(exception_type, test_exception):
def side_effect(*args, **kwargs):
if test_exception.pop():
raise exception_type
else:
return FakeResponse(204)
return side_effect
def setUp(self):
super(TestMonascaPublisher, self).setUp()
self.CONF = self.useFixture(fixture_config.Config()).conf
self.parsed_url = mock.MagicMock()
ksclient.KSClient = mock.MagicMock()
@mock.patch("ceilometer.publisher.monasca_data_filter."
"MonascaDataFilter._get_mapping",
side_effect=[field_mappings])
def test_publisher_publish(self, mapping_patch):
publisher = monclient.MonascaPublisher(self.parsed_url)
publisher.mon_client = mock.MagicMock()
with mock.patch.object(publisher.mon_client,
'metrics_create') as mock_create:
mock_create.return_value = FakeResponse(204)
publisher.publish_samples(None, self.test_data)
self.assertEqual(3, mock_create.call_count)
self.assertEqual(1, mapping_patch.called)
@mock.patch("ceilometer.publisher.monasca_data_filter."
"MonascaDataFilter._get_mapping",
side_effect=[field_mappings])
def test_publisher_batch(self, mapping_patch):
self.CONF.set_override('batch_mode', True, group='monasca')
self.CONF.set_override('batch_count', 3, group='monasca')
self.CONF.set_override('batch_polling_interval', 1, group='monasca')
publisher = monclient.MonascaPublisher(self.parsed_url)
publisher.mon_client = mock.MagicMock()
with mock.patch.object(publisher.mon_client,
'metrics_create') as mock_create:
mock_create.return_value = FakeResponse(204)
publisher.publish_samples(None, self.test_data)
eventlet.sleep(2)
self.assertEqual(1, mock_create.call_count)
self.assertEqual(1, mapping_patch.called)
@mock.patch("ceilometer.publisher.monasca_data_filter."
"MonascaDataFilter._get_mapping",
side_effect=[field_mappings])
def test_publisher_batch_retry(self, mapping_patch):
self.CONF.set_override('batch_mode', True, group='monasca')
self.CONF.set_override('batch_count', 3, group='monasca')
self.CONF.set_override('batch_polling_interval', 1, group='monasca')
self.CONF.set_override('retry_on_failure', True, group='monasca')
self.CONF.set_override('retry_interval', 2, group='monasca')
self.CONF.set_override('max_retries', 1, group='monasca')
publisher = monclient.MonascaPublisher(self.parsed_url)
publisher.mon_client = mock.MagicMock()
with mock.patch.object(publisher.mon_client,
'metrics_create') as mock_create:
raise_http_error = [False, False, False, True]
mock_create.side_effect = self.create_side_effect(
mon_client.MonascaServiceException,
raise_http_error)
publisher.publish_samples(None, self.test_data)
eventlet.sleep(5)
self.assertEqual(4, mock_create.call_count)
self.assertEqual(1, mapping_patch.called)
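Note that `create_side_effect` above consumes its flag list from the end: `list.pop()` removes the last element, so `raise_http_error = [False, False, False, True]` makes the first `metrics_create` call raise and the next three succeed. A minimal demonstration of that consumption order (with `RuntimeError` standing in for the Monasca exception):

```python
class FakeResponse(object):
    def __init__(self, status_code):
        self.status_code = status_code

def create_side_effect(exception_type, flags):
    def side_effect(*args, **kwargs):
        if flags.pop():  # pops from the END of the list
            raise exception_type
        return FakeResponse(204)
    return side_effect

effect = create_side_effect(RuntimeError, [False, False, False, True])
results = []
for _ in range(4):
    try:
        results.append(effect().status_code)
    except RuntimeError:
        results.append('error')
```

`results` comes out as `['error', 204, 204, 204]`: one failed attempt followed by three successes, matching the four `metrics_create` calls the retry test expects.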


@@ -0,0 +1,549 @@
#
# Copyright 2015 Hewlett Packard
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import collections
import dateutil.parser
import mock
from oslotest import base
import ceilometer
import ceilometer.storage as storage
from ceilometer.storage import impl_monasca
class TestGetResources(base.BaseTestCase):
@mock.patch("ceilometer.storage.impl_monasca.MonascaDataFilter")
def test_not_implemented_params(self, mock_mdf):
with mock.patch("ceilometer.monasca_client.Client"):
conn = impl_monasca.Connection("127.0.0.1:8080")
kwargs = dict(pagination=True)
self.assertRaises(ceilometer.NotImplementedError,
lambda: list(conn.get_resources(**kwargs)))
kwargs = dict(metaquery=True)
self.assertRaises(ceilometer.NotImplementedError,
lambda: list(conn.get_resources(**kwargs)))
kwargs = dict(start_timestamp_op='le')
self.assertRaises(ceilometer.NotImplementedError,
lambda: list(conn.get_resources(**kwargs)))
kwargs = dict(end_timestamp_op='ge')
self.assertRaises(ceilometer.NotImplementedError,
lambda: list(conn.get_resources(**kwargs)))
@mock.patch("ceilometer.storage.impl_monasca.MonascaDataFilter")
def test_dims_filter(self, mdf_patch):
with mock.patch("ceilometer.monasca_client.Client") as mock_client:
conn = impl_monasca.Connection("127.0.0.1:8080")
mnl_mock = mock_client().metrics_list
mnl_mock.return_value = [
{
'name': 'some',
'dimensions': {}
}
]
kwargs = dict(project='proj1')
list(conn.get_resources(**kwargs))
self.assertEqual(True, mnl_mock.called)
self.assertEqual(dict(dimensions=dict(project_id='proj1')),
mnl_mock.call_args[1])
self.assertEqual(1, mnl_mock.call_count)
@mock.patch("ceilometer.storage.impl_monasca.MonascaDataFilter")
def test_get_resources(self, mock_mdf):
with mock.patch("ceilometer.monasca_client.Client") as mock_client:
conn = impl_monasca.Connection("127.0.0.1:8080")
mnl_mock = mock_client().metrics_list
mnl_mock.return_value = [{'name': 'metric1',
'dimensions': {}},
{'name': 'metric2',
'dimensions': {}}
]
kwargs = dict(source='openstack')
list(conn.get_resources(**kwargs))
ml_mock = mock_client().measurements_list
self.assertEqual(2, ml_mock.call_count)
self.assertEqual(dict(dimensions={},
name='metric1',
limit=1,
start_time='1970-01-01T00:00:00Z'),
ml_mock.call_args_list[0][1])
class MeterTest(base.BaseTestCase):
@mock.patch("ceilometer.storage.impl_monasca.MonascaDataFilter")
def test_not_implemented_params(self, mock_mdf):
with mock.patch('ceilometer.monasca_client.Client'):
conn = impl_monasca.Connection('127.0.0.1:8080')
kwargs = dict(pagination=True)
self.assertRaises(ceilometer.NotImplementedError,
lambda: list(conn.get_meters(**kwargs)))
kwargs = dict(metaquery=True)
self.assertRaises(ceilometer.NotImplementedError,
lambda: list(conn.get_meters(**kwargs)))
@mock.patch("ceilometer.storage.impl_monasca.MonascaDataFilter")
def test_metrics_list_call(self, mock_mdf):
with mock.patch('ceilometer.monasca_client.Client') as mock_client:
conn = impl_monasca.Connection('127.0.0.1:8080')
metrics_list_mock = mock_client().metrics_list
kwargs = dict(user='user-1',
project='project-1',
resource='resource-1',
source='openstack',
limit=100)
list(conn.get_meters(**kwargs))
self.assertEqual(True, metrics_list_mock.called)
self.assertEqual(1, metrics_list_mock.call_count)
self.assertEqual(dict(dimensions=dict(user_id='user-1',
project_id='project-1',
resource_id='resource-1',
source='openstack'),
limit=100),
metrics_list_mock.call_args[1])
class TestGetSamples(base.BaseTestCase):
dummy_get_samples_mocked_return_value = (
[{u'dimensions': {},
u'measurements': [[u'2015-04-14T17:52:31Z', 1.0, {}]],
u'id': u'2015-04-14T18:42:31Z',
u'columns': [u'timestamp', u'value', u'value_meta'],
u'name': u'image'}])
@mock.patch("ceilometer.storage.impl_monasca.MonascaDataFilter")
def test_get_samples_not_implemented_params(self, mdf_mock):
with mock.patch("ceilometer.monasca_client.Client"):
conn = impl_monasca.Connection("127.0.0.1:8080")
sample_filter = storage.SampleFilter(meter='specific meter',
start_timestamp_op='<')
self.assertRaises(ceilometer.NotImplementedError,
lambda: list(conn.get_samples(sample_filter)))
sample_filter = storage.SampleFilter(meter='specific meter',
end_timestamp_op='>')
self.assertRaises(ceilometer.NotImplementedError,
lambda: list(conn.get_samples(sample_filter)))
sample_filter = storage.SampleFilter(
meter='specific meter', metaquery='specific metaquery')
self.assertRaises(ceilometer.NotImplementedError,
lambda: list(conn.get_samples(sample_filter)))
sample_filter = storage.SampleFilter(meter='specific meter',
message_id='specific message')
self.assertRaises(ceilometer.NotImplementedError,
lambda: list(conn.get_samples(sample_filter)))
@mock.patch("ceilometer.storage.impl_monasca.MonascaDataFilter")
def test_get_samples_name(self, mdf_mock):
with mock.patch("ceilometer.monasca_client.Client") as mock_client:
conn = impl_monasca.Connection("127.0.0.1:8080")
ml_mock = mock_client().measurements_list
ml_mock.return_value = (
TestGetSamples.dummy_get_samples_mocked_return_value)
sample_filter = storage.SampleFilter(
meter='specific meter', end_timestamp='2015-04-20T00:00:00Z')
list(conn.get_samples(sample_filter))
self.assertEqual(True, ml_mock.called)
self.assertEqual(dict(
dimensions={},
start_time='1970-01-01T00:00:00Z',
merge_metrics=True, name='specific meter',
end_time=str(sample_filter.end_timestamp)),
ml_mock.call_args[1])
self.assertEqual(1, ml_mock.call_count)
@mock.patch("ceilometer.storage.impl_monasca.MonascaDataFilter")
def test_get_samples_start_timestamp_filter(self, mdf_mock):
with mock.patch("ceilometer.monasca_client.Client") as mock_client:
conn = impl_monasca.Connection("127.0.0.1:8080")
ml_mock = mock_client().measurements_list
ml_mock.return_value = (
TestGetSamples.dummy_get_samples_mocked_return_value)
sample_filter = storage.SampleFilter(
meter='specific meter',
start_timestamp='2015-03-20T00:00:00Z',
start_timestamp_op='ge')
list(conn.get_samples(sample_filter))
self.assertEqual(True, ml_mock.called)
self.assertEqual(dict(
dimensions={},
start_time=str(sample_filter.start_timestamp),
start_timestamp_op=sample_filter.start_timestamp_op,
merge_metrics=True, name='specific meter'),
ml_mock.call_args[1])
self.assertEqual(1, ml_mock.call_count)
@mock.patch("ceilometer.storage.impl_monasca.MonascaDataFilter")
def test_get_samples_end_timestamp_filter(self, mdf_mock):
with mock.patch("ceilometer.monasca_client.Client") as mock_client:
conn = impl_monasca.Connection("127.0.0.1:8080")
ml_mock = mock_client().measurements_list
ml_mock.return_value = (
TestGetSamples.dummy_get_samples_mocked_return_value)
sample_filter = storage.SampleFilter(
meter='specific meter', end_timestamp='2015-04-20T00:00:00Z')
list(conn.get_samples(sample_filter))
self.assertEqual(True, ml_mock.called)
self.assertEqual(dict(
dimensions={},
start_time='1970-01-01T00:00:00Z',
merge_metrics=True, name='specific meter',
end_time=str(sample_filter.end_timestamp)),
ml_mock.call_args[1])
self.assertEqual(1, ml_mock.call_count)
@mock.patch("ceilometer.storage.impl_monasca.MonascaDataFilter")
def test_get_samples_limit(self, mdf_mock):
with mock.patch("ceilometer.monasca_client.Client") as mock_client:
conn = impl_monasca.Connection("127.0.0.1:8080")
ml_mock = mock_client().measurements_list
ml_mock.return_value = (
TestGetSamples.dummy_get_samples_mocked_return_value)
sample_filter = storage.SampleFilter(
meter='specific meter', end_timestamp='2015-04-20T00:00:00Z')
list(conn.get_samples(sample_filter, limit=50))
self.assertEqual(True, ml_mock.called)
self.assertEqual(dict(
dimensions={},
start_time='1970-01-01T00:00:00Z',
merge_metrics=True, name='specific meter', limit=50,
end_time=str(sample_filter.end_timestamp)),
ml_mock.call_args[1])
self.assertEqual(1, ml_mock.call_count)
@mock.patch("ceilometer.storage.impl_monasca.MonascaDataFilter")
def test_get_samples_project_filter(self, mock_mdf):
with mock.patch("ceilometer.monasca_client.Client") as mock_client:
conn = impl_monasca.Connection("127.0.0.1:8080")
ml_mock = mock_client().measurements_list
ml_mock.return_value = (
TestGetSamples.dummy_get_samples_mocked_return_value)
sample_filter = storage.SampleFilter(meter='specific meter',
project='specific project')
list(conn.get_samples(sample_filter))
self.assertEqual(True, ml_mock.called)
self.assertEqual(dict(
start_time='1970-01-01T00:00:00Z',
merge_metrics=True, name='specific meter',
dimensions=dict(project_id=sample_filter.project)),
ml_mock.call_args[1])
self.assertEqual(1, ml_mock.call_count)
@mock.patch("ceilometer.storage.impl_monasca.MonascaDataFilter")
def test_get_samples_resource_filter(self, mock_mdf):
with mock.patch("ceilometer.monasca_client.Client") as mock_client:
conn = impl_monasca.Connection("127.0.0.1:8080")
ml_mock = mock_client().measurements_list
ml_mock.return_value = (
TestGetSamples.dummy_get_samples_mocked_return_value)
sample_filter = storage.SampleFilter(meter='specific meter',
resource='specific resource')
list(conn.get_samples(sample_filter))
self.assertEqual(True, ml_mock.called)
self.assertEqual(dict(
start_time='1970-01-01T00:00:00Z',
merge_metrics=True, name='specific meter',
dimensions=dict(resource_id=sample_filter.resource)),
ml_mock.call_args[1])
self.assertEqual(1, ml_mock.call_count)
@mock.patch("ceilometer.storage.impl_monasca.MonascaDataFilter")
def test_get_samples_source_filter(self, mdf_mock):
with mock.patch("ceilometer.monasca_client.Client") as mock_client:
conn = impl_monasca.Connection("127.0.0.1:8080")
ml_mock = mock_client().measurements_list
ml_mock.return_value = (
TestGetSamples.dummy_get_samples_mocked_return_value)
sample_filter = storage.SampleFilter(meter='specific meter',
source='specific source')
list(conn.get_samples(sample_filter))
self.assertEqual(True, ml_mock.called)
self.assertEqual(dict(
start_time='1970-01-01T00:00:00Z',
merge_metrics=True, name='specific meter',
dimensions=dict(source=sample_filter.source)),
ml_mock.call_args[1])
self.assertEqual(1, ml_mock.call_count)
@mock.patch("ceilometer.storage.impl_monasca.MonascaDataFilter")
def test_get_samples_results(self, mdf_mock):
with mock.patch("ceilometer.monasca_client.Client") as mock_client:
conn = impl_monasca.Connection("127.0.0.1:8080")
ml_mock = mock_client().measurements_list
ml_mock.return_value = (
[{u'dimensions': {
'source': 'some source',
'project_id': 'some project ID',
'resource_id': 'some resource ID',
'type': 'some type',
'unit': 'some unit'},
u'measurements':
[[u'2015-04-01T02:03:04Z', 1.0, {}],
[u'2015-04-11T22:33:44Z', 2.0, {}]],
u'id': u'2015-04-14T18:42:31Z',
u'columns': [u'timestamp', u'value', u'value_meta'],
u'name': u'image'}])
sample_filter = storage.SampleFilter(
meter='specific meter',
start_timestamp='2015-03-20T00:00:00Z')
results = list(conn.get_samples(sample_filter))
self.assertEqual(True, ml_mock.called)
self.assertEqual(results[0].counter_name,
ml_mock.return_value[0].get('name'))
self.assertEqual(results[0].counter_type,
ml_mock.return_value[0].get('dimensions').
get('type'))
self.assertEqual(results[0].counter_unit,
ml_mock.return_value[0].get('dimensions').
get('unit'))
self.assertEqual(results[0].counter_volume,
ml_mock.return_value[0].
get('measurements')[0][1])
self.assertEqual(results[0].message_id,
ml_mock.return_value[0].get('id'))
self.assertEqual(results[0].message_signature, '')
self.assertEqual(results[0].project_id,
ml_mock.return_value[0].get('dimensions').
get('project_id'))
self.assertEqual(results[0].recorded_at,
dateutil.parser.parse(
ml_mock.return_value[0].
get('measurements')[0][0]))
self.assertEqual(results[0].resource_id,
ml_mock.return_value[0].get('dimensions').
get('resource_id'))
self.assertEqual(results[0].resource_metadata, {})
self.assertEqual(results[0].source,
ml_mock.return_value[0].get('dimensions').
get('source'))
self.assertEqual(results[0].timestamp,
dateutil.parser.parse(
ml_mock.return_value[0].
get('measurements')[0][0]))
self.assertIsNone(results[0].user_id)
self.assertEqual(1, ml_mock.call_count)
class MeterStatisticsTest(base.BaseTestCase):
Aggregate = collections.namedtuple("Aggregate", ['func', 'param'])
def assertRaisesWithMessage(self, msg, exc_class, func, *args, **kwargs):
try:
func(*args, **kwargs)
self.fail('Expecting %s exception, none raised' %
exc_class.__name__)
except AssertionError:
raise
except Exception as e:
self.assertIsInstance(e, exc_class)
self.assertEqual(e.message, msg)
@mock.patch("ceilometer.storage.impl_monasca.MonascaDataFilter")
def test_not_implemented_params(self, mock_mdf):
with mock.patch("ceilometer.monasca_client.Client"):
conn = impl_monasca.Connection("127.0.0.1:8080")
self.assertRaisesWithMessage("Query without filter "
"not implemented",
ceilometer.NotImplementedError,
lambda: list(
conn.get_meter_statistics(None)))
sf = storage.SampleFilter()
self.assertRaisesWithMessage("Query without meter "
"not implemented",
ceilometer.NotImplementedError,
lambda: list(
conn.get_meter_statistics(sf)))
sf.meter = "image"
self.assertRaisesWithMessage("Groupby not implemented",
ceilometer.NotImplementedError,
lambda: list(
conn.get_meter_statistics(
sf,
groupby="resource_id")))
sf.metaquery = "metaquery"
self.assertRaisesWithMessage("Metaquery not implemented",
ceilometer.NotImplementedError,
lambda: list(
conn.get_meter_statistics(sf)))
sf.metaquery = None
sf.start_timestamp_op = 'le'
self.assertRaisesWithMessage("Start time op le not implemented",
ceilometer.NotImplementedError,
lambda: list(
conn.get_meter_statistics(sf)))
sf.start_timestamp_op = None
sf.end_timestamp_op = 'ge'
self.assertRaisesWithMessage("End time op ge not implemented",
ceilometer.NotImplementedError,
lambda: list(
conn.get_meter_statistics(sf)))
sf.end_timestamp_op = None
sf.message_id = "message_id"
self.assertRaisesWithMessage("Message_id query not implemented",
ceilometer.NotImplementedError,
lambda: list(
conn.get_meter_statistics(sf)))
sf.message_id = None
aggregate = [self.Aggregate(func='stddev', param='test')]
self.assertRaisesWithMessage("Aggregate function(s) ['stddev']"
" not implemented",
ceilometer.NotImplementedError,
lambda: list(
conn.get_meter_statistics(
sf, aggregate=aggregate)))
@mock.patch("ceilometer.storage.impl_monasca.MonascaDataFilter")
def test_stats_list_called_with(self, mock_mdf):
with mock.patch("ceilometer.monasca_client.Client") as mock_client:
conn = impl_monasca.Connection("127.0.0.1:8080")
sl_mock = mock_client().statistics_list
sf = storage.SampleFilter()
sf.meter = "image"
sf.project = "project_id"
sf.user = "user_id"
sf.resource = "resource_id"
sf.source = "source_id"
aggregate = [self.Aggregate(func="min", param="some")]
list(conn.get_meter_statistics(sf, period=10, aggregate=aggregate))
self.assertEqual(True, sl_mock.called)
self.assertEqual(
{'merge_metrics': True,
'dimensions': {'source': 'source_id',
'project_id': 'project_id',
'user_id': 'user_id',
'resource_id': 'resource_id'
},
'start_time': '1970-01-01T00:00:00Z',
'period': 10,
'statistics': 'min',
'name': 'image'
},
sl_mock.call_args[1]
)
@mock.patch("ceilometer.storage.impl_monasca.MonascaDataFilter")
def test_stats_list(self, mock_mdf):
with mock.patch("ceilometer.monasca_client.Client") as mock_client:
conn = impl_monasca.Connection("127.0.0.1:8080")
sl_mock = mock_client().statistics_list
sl_mock.return_value = [
{
'statistics':
[
['2014-10-24T12:12:12Z', 0.008],
['2014-10-24T12:52:12Z', 0.018]
],
'dimensions': {'unit': 'gb'},
'columns': ['timestamp', 'min']
}
]
sf = storage.SampleFilter()
sf.meter = "image"
stats = list(conn.get_meter_statistics(sf, period=30))
self.assertEqual(2, len(stats))
self.assertEqual('gb', stats[0].unit)
self.assertEqual('gb', stats[1].unit)
self.assertEqual(0.008, stats[0].min)
self.assertEqual(0.018, stats[1].min)
self.assertEqual(30, stats[0].period)
self.assertEqual('2014-10-24T12:12:42+00:00',
stats[0].period_end.isoformat())
self.assertEqual('2014-10-24T12:52:42+00:00',
stats[1].period_end.isoformat())
class CapabilitiesTest(base.BaseTestCase):
def test_capabilities(self):
expected_capabilities = {
'meters':
{
'query':
{
'complex': False,
'metadata': False,
'simple': True
}
},
'resources':
{
'query':
{
'complex': False, 'metadata': False, 'simple': True
}
},
'samples':
{
'groupby': False,
'pagination': False,
'query':
{
'complex': False,
'metadata': False,
'simple': True
}
},
'statistics':
{
'aggregation':
{
'selectable':
{
'avg': True,
'cardinality': False,
'count': True,
'max': True,
'min': True,
'stddev': False,
'sum': True
},
'standard': True},
'groupby': False,
'query':
{
'complex': False,
'metadata': False,
'simple': True
}
}
}
actual_capabilities = impl_monasca.Connection.get_capabilities()
self.assertEqual(expected_capabilities, actual_capabilities)
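A dictionary like `expected_capabilities` above lets API layers gate features per storage driver. A hypothetical helper (not part of Ceilometer; names assumed, illustration only) that walks such a capability tree:

```python
# Hypothetical helper: walk a nested capability dict and report whether a
# given feature path is supported. Not part of Ceilometer.
capabilities = {'statistics': {'groupby': False,
                               'query': {'complex': False, 'simple': True}}}

def supports(caps, *path):
    node = caps
    for key in path:
        if not isinstance(node, dict):
            return False
        node = node.get(key, False)
    return bool(node)

print(supports(capabilities, 'statistics', 'query', 'simple'))  # True
print(supports(capabilities, 'statistics', 'groupby'))          # False
```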


@@ -0,0 +1,91 @@
# Copyright 2015 Hewlett-Packard Company
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from oslo_config import cfg
from oslo_utils import netutils
from oslotest import base
from ceilometer import monasca_client
from monascaclient import exc
cfg.CONF.import_group('service_credentials', 'ceilometer.service')
class TestMonascaClient(base.BaseTestCase):
def setUp(self):
super(TestMonascaClient, self).setUp()
self.mc = self._get_client()
@mock.patch('monascaclient.client.Client')
@mock.patch('monascaclient.ksclient.KSClient')
def _get_client(self, ksclass_mock, monclient_mock):
ksclient_mock = ksclass_mock.return_value
ksclient_mock.token.return_value = "token123"
return monasca_client.Client(
netutils.urlsplit("http://127.0.0.1:8080"))
def test_metrics_create(self):
with mock.patch.object(self.mc._mon_client.metrics, 'create',
side_effect=[True]) as create_patch:
self.mc.metrics_create()
self.assertEqual(1, create_patch.call_count)
@mock.patch.object(monasca_client.Client, '_refresh_client')
def test_metrics_create_with_401(self, rc_patch):
with mock.patch.object(
self.mc._mon_client.metrics, 'create',
side_effect=[exc.HTTPUnauthorized, True]) as create_patch:
self.mc.metrics_create()
self.assertEqual(2, create_patch.call_count)
self.assertTrue(rc_patch.called)
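The 401 path exercised above re-authenticates once and retries the call. A minimal stdlib sketch of that pattern (names here are illustrative, not the real client's):

```python
class Unauthorized(Exception):
    """Stands in for monascaclient's exc.HTTPUnauthorized."""

def call_with_reauth(func, refresh):
    # A first 401 triggers a credential refresh, then exactly one retry;
    # a second failure propagates to the caller.
    try:
        return func()
    except Unauthorized:
        refresh()
        return func()

calls = []
def flaky():
    calls.append(1)
    if len(calls) == 1:
        raise Unauthorized()
    return 'ok'

print(call_with_reauth(flaky, lambda: None), len(calls))  # ok 2
```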
def test_metrics_create_exception(self):
with mock.patch.object(
self.mc._mon_client.metrics, 'create',
side_effect=[exc.HTTPInternalServerError, True])\
as create_patch:
self.assertRaises(monasca_client.MonascaServiceException,
self.mc.metrics_create)
self.assertEqual(1, create_patch.call_count)
def test_metrics_create_unprocessable_exception(self):
with mock.patch.object(
self.mc._mon_client.metrics, 'create',
side_effect=[exc.HTTPUnProcessable, True])\
as create_patch:
self.assertRaises(monasca_client.MonascaInvalidParametersException,
self.mc.metrics_create)
self.assertEqual(1, create_patch.call_count)
def test_invalid_service_creds(self):
conf = cfg.CONF.service_credentials
class SetOpt(object):
def __enter__(self):
self.username = conf.os_username
conf.os_username = ""
def __exit__(self, exc_type, exc_val, exc_tb):
conf.os_username = self.username
with SetOpt():
self.assertRaises(
monasca_client.MonascaInvalidServiceCredentialsException,
self._get_client)
self.assertIsNotNone(conf.os_username)


@@ -0,0 +1,39 @@
[DEFAULT]
collector_workers = 4
policy_file = /etc/ceilometer/policy.json
debug = False
verbose = False
notification_topics = notifications
rpc_backend = rabbit
[oslo_messaging_rabbit]
rabbit_userid = stackrabbit
rabbit_password = password
rabbit_hosts = 16.78.179.83
[service_credentials]
os_tenant_name = mini-mon
os_password = password
os_username = mini-mon
[keystone_authtoken]
signing_dir = /var/cache/ceilometer
cafile = /opt/stack/data/ca-bundle.pem
auth_uri = http://16.78.179.83:5000
project_domain_id = default
project_name = service
user_domain_id = default
password = password
username = ceilometer
auth_url = http://16.78.179.83:35357
auth_plugin = password
[notification]
store_events = True
[database]
#metering_connection = mysql://root:password@127.0.0.1/ceilometer?charset=utf8
event_connection = mysql://root:password@127.0.0.1/ceilometer?charset=utf8
alarm_connection = mysql://root:password@127.0.0.1/ceilometer?charset=utf8
metering_connection = monasca://http://127.0.0.1:8080/v2.0
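The `monasca://http://127.0.0.1:8080/v2.0` form above nests the real HTTP endpoint inside the driver URL: the outer scheme selects the storage driver, and the remainder is reassembled into the API endpoint. A sketch of how such a double-scheme string splits (stdlib `urlsplit`, Python 3 shown for brevity even though this tree targets Python 2):

```python
from urllib.parse import urlsplit

# The outer scheme picks the metering storage driver (impl_monasca);
# netloc + path rebuild the nested HTTP endpoint.
parsed = urlsplit('monasca://http://127.0.0.1:8080/v2.0')
driver = parsed.scheme                  # 'monasca'
endpoint = parsed.netloc + parsed.path  # 'http://127.0.0.1:8080/v2.0'
print(driver, endpoint)
```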


@@ -0,0 +1,41 @@
dimensions:
- resource_id
- project_id
- user_id
- geolocation
- region
- availability_zone
- type
- unit
- source
metadata:
common:
- event_type
- audit_period_beginning
- audit_period_ending
instance:
- state
- state_description
image:
- size
- status
image.size:
- size
- status
image.update:
- size
- status
image.upload:
- size
- status
image.delete:
- size
- status
snapshot:
- status
snapshot.size:
- status
volume:
- status
volume.size:
- status

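The whitelists above control which sample attributes become Monasca dimensions and which metadata keys are retained. A hypothetical, stdlib-only sketch of that filtering (the real logic lives in `ceilometer/publisher/monasca_data_filter.py`; the names below are assumptions for illustration):

```python
# Assumed whitelists mirroring the field definitions above.
DIMENSIONS = ['resource_id', 'project_id', 'user_id', 'region',
              'availability_zone', 'type', 'unit', 'source']
COMMON_METADATA = ['event_type', 'audit_period_beginning', 'audit_period_ending']

def filter_sample(sample):
    # Keep whitelisted, non-empty attributes as dimensions and whitelisted
    # resource_metadata keys as value_meta; drop everything else.
    dims = {k: v for k, v in sample.items() if k in DIMENSIONS and v}
    meta = {k: v for k, v in sample.get('resource_metadata', {}).items()
            if k in COMMON_METADATA}
    return {'dimensions': dims, 'value_meta': meta}

s = {'resource_id': 'r1', 'project_id': 'p1', 'unit': 'gb',
     'resource_metadata': {'event_type': 'image.upload', 'junk': 'x'}}
print(filter_sample(s))
```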

@@ -0,0 +1,13 @@
---
sources:
- name: meter_source
interval: 6
meters:
- "*"
sinks:
- meter_sink
sinks:
- name: meter_sink
transformers:
publishers:
- monasca://http://127.0.0.1:8080/v2.0


@@ -1,107 +0,0 @@
import calendar
import re
import time
from ceilometer.openstack.common.gettextutils import _
from ceilometer.openstack.common import log
from ceilometer import publisher
from monascaclient import client
from monascaclient import ksclient
LOG = log.getLogger(__name__)
class monclient(publisher.PublisherBase):
"""Publisher to publish samples to monclient.
Example URL to place in pipeline.yaml:
- monclient://http://192.168.10.4:8080/v2.0?username=xxxx&password=yyyy
"""
def __init__(self, parsed_url):
super(monclient, self).__init__(parsed_url)
# Set these if they are not passed as part of the URL
self.token = None
self.username = None
self.password = None
# auth_url must be a v3 endpoint, e.g.
# http://192.168.10.5:35357/v3/
self.auth_url = None
query_parms = parsed_url[3]
for query_parm in query_parms.split('&'):
name = query_parm.split('=')[0]
value = query_parm.split('=')[1]
if (name == 'username'):
self.username = value
LOG.debug(_('found username in query parameters'))
if (name == 'password'):
self.password = str(value)
LOG.debug(_('found password in query parameters'))
if (name == 'token'):
self.token = value
LOG.debug(_('found token in query parameters'))
if not self.token:
if not self.username or not self.password:
LOG.error(_('username and password must be '
'specified if no token is given'))
if not self.auth_url:
LOG.error(_('auth_url must be '
'specified if no token is given'))
self.endpoint = "http:" + parsed_url.path
LOG.debug(_('publishing samples to endpoint %s') % self.endpoint)
def publish_samples(self, context, samples):
"""Main method called to publish samples."""
if not self.token:
kwargs = {
'username': self.username,
'password': self.password,
'auth_url': self.auth_url
}
_ksclient = ksclient.KSClient(**kwargs)
self.token = _ksclient.token
kwargs = {'token': self.token}
api_version = '2_0'
mon_client = client.Client(api_version, self.endpoint, **kwargs)
self.metrics = mon_client.metrics
for sample in samples:
dimensions = {}
dimensions['project_id'] = sample.project_id
dimensions['resource_id'] = sample.resource_id
dimensions['source'] = sample.source
dimensions['type'] = sample.type
dimensions['unit'] = sample.unit
dimensions['user_id'] = sample.user_id
self._traverse_dict(dimensions, 'meta', sample.resource_metadata)
timeWithoutFractionalSeconds = sample.timestamp[0:19]
try:
seconds = \
calendar.timegm(time.strptime(timeWithoutFractionalSeconds,
"%Y-%m-%dT%H:%M:%S"))
except ValueError:
seconds = \
calendar.timegm(time.strptime(timeWithoutFractionalSeconds,
"%Y-%m-%d %H:%M:%S"))
self.metrics.create(**{'name': sample.name, 'dimensions':
dimensions, 'timestamp': seconds, 'value':
sample.volume})
def _traverse_dict(self, dimensions, name_prefix, meta_dict):
"""Method to add values of a dictionary to another dictionary.
Nested dictionaries are handled.
"""
for name, value in meta_dict.iteritems():
# Ensure name only contains valid dimension name characters
name = re.sub('[^a-zA-Z0-9_\.\-]', '.', name)
if isinstance(value, basestring) and value:
dimensions[name_prefix + '.' + name] = value
elif isinstance(value, dict):
self._traverse_dict(dimensions, name_prefix + '.' + name,
value)
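The timestamp handling in `publish_samples` above can be exercised in isolation. A stdlib-only restatement of that conversion (fractional seconds stripped, both ISO-8601 variants accepted, epoch seconds returned in UTC):

```python
import calendar
import time

def to_epoch(ts):
    ts = ts[0:19]  # drop fractional seconds, as the publisher does
    try:
        return calendar.timegm(time.strptime(ts, '%Y-%m-%dT%H:%M:%S'))
    except ValueError:
        # Fall back to the space-separated form some samples carry.
        return calendar.timegm(time.strptime(ts, '%Y-%m-%d %H:%M:%S'))

print(to_epoch('1970-01-01T00:00:10.500'))  # 10
print(to_epoch('2014-10-24 12:12:12'))
```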


@@ -1,13 +0,0 @@
---
sources:
- name: cpu_source
interval: 600
meters:
- "cpu"
sinks:
- cpu_sink
sinks:
- name: cpu_sink
transformers:
publishers:
- monclient://http://192.168.10.4:8080/v2.0?username=xxxx&password=yyyy

setup.cfg (new file, 388 lines)

@@ -0,0 +1,388 @@
[metadata]
name = ceilometer
version = 2015.2
summary = OpenStack Telemetry
description-file =
README.rst
author = OpenStack
author-email = openstack-dev@lists.openstack.org
home-page = http://www.openstack.org/
classifier =
Environment :: OpenStack
Intended Audience :: Information Technology
Intended Audience :: System Administrators
License :: OSI Approved :: Apache Software License
Operating System :: POSIX :: Linux
Programming Language :: Python
Programming Language :: Python :: 2
Programming Language :: Python :: 2.7
Topic :: System :: Monitoring
[global]
setup-hooks =
pbr.hooks.setup_hook
[files]
packages =
ceilometer
[entry_points]
ceilometer.notification =
magnetodb_table = ceilometer.key_value_storage.notifications:Table
magnetodb_index_count = ceilometer.key_value_storage.notifications:Index
instance = ceilometer.compute.notifications.instance:Instance
instance_flavor = ceilometer.compute.notifications.instance:InstanceFlavor
instance_delete = ceilometer.compute.notifications.instance:InstanceDelete
instance_scheduled = ceilometer.compute.notifications.instance:InstanceScheduled
memory = ceilometer.compute.notifications.instance:Memory
vcpus = ceilometer.compute.notifications.instance:VCpus
disk_root_size = ceilometer.compute.notifications.instance:RootDiskSize
disk_ephemeral_size = ceilometer.compute.notifications.instance:EphemeralDiskSize
cpu_frequency = ceilometer.compute.notifications.cpu:CpuFrequency
cpu_user_time = ceilometer.compute.notifications.cpu:CpuUserTime
cpu_kernel_time = ceilometer.compute.notifications.cpu:CpuKernelTime
cpu_idle_time = ceilometer.compute.notifications.cpu:CpuIdleTime
cpu_iowait_time = ceilometer.compute.notifications.cpu:CpuIowaitTime
cpu_kernel_percent = ceilometer.compute.notifications.cpu:CpuKernelPercent
cpu_idle_percent = ceilometer.compute.notifications.cpu:CpuIdlePercent
cpu_user_percent = ceilometer.compute.notifications.cpu:CpuUserPercent
cpu_iowait_percent = ceilometer.compute.notifications.cpu:CpuIowaitPercent
cpu_percent = ceilometer.compute.notifications.cpu:CpuPercent
volume = ceilometer.volume.notifications:Volume
volume_size = ceilometer.volume.notifications:VolumeSize
volume_crud = ceilometer.volume.notifications:VolumeCRUD
snapshot = ceilometer.volume.notifications:Snapshot
snapshot_size = ceilometer.volume.notifications:SnapshotSize
snapshot_crud = ceilometer.volume.notifications:SnapshotCRUD
authenticate = ceilometer.identity.notifications:Authenticate
user = ceilometer.identity.notifications:User
group = ceilometer.identity.notifications:Group
role = ceilometer.identity.notifications:Role
project = ceilometer.identity.notifications:Project
trust = ceilometer.identity.notifications:Trust
role_assignment = ceilometer.identity.notifications:RoleAssignment
image_crud = ceilometer.image.notifications:ImageCRUD
image = ceilometer.image.notifications:Image
image_size = ceilometer.image.notifications:ImageSize
image_download = ceilometer.image.notifications:ImageDownload
image_serve = ceilometer.image.notifications:ImageServe
network = ceilometer.network.notifications:Network
subnet = ceilometer.network.notifications:Subnet
port = ceilometer.network.notifications:Port
router = ceilometer.network.notifications:Router
floatingip = ceilometer.network.notifications:FloatingIP
bandwidth = ceilometer.network.notifications:Bandwidth
http.request = ceilometer.middleware:HTTPRequest
http.response = ceilometer.middleware:HTTPResponse
stack_crud = ceilometer.orchestration.notifications:StackCRUD
data_processing = ceilometer.data_processing.notifications:DataProcessing
profiler = ceilometer.profiler.notifications:ProfilerNotifications
hardware.ipmi.temperature = ceilometer.ipmi.notifications.ironic:TemperatureSensorNotification
hardware.ipmi.voltage = ceilometer.ipmi.notifications.ironic:VoltageSensorNotification
hardware.ipmi.current = ceilometer.ipmi.notifications.ironic:CurrentSensorNotification
hardware.ipmi.fan = ceilometer.ipmi.notifications.ironic:FanSensorNotification
network.services.lb.pool = ceilometer.network.notifications:Pool
network.services.lb.vip = ceilometer.network.notifications:Vip
network.services.lb.member = ceilometer.network.notifications:Member
network.services.lb.health_monitor = ceilometer.network.notifications:HealthMonitor
network.services.firewall = ceilometer.network.notifications:Firewall
network.services.firewall.policy = ceilometer.network.notifications:FirewallPolicy
network.services.firewall.rule = ceilometer.network.notifications:FirewallRule
network.services.vpn = ceilometer.network.notifications:VPNService
network.services.vpn.ipsecpolicy = ceilometer.network.notifications:IPSecPolicy
network.services.vpn.ikepolicy = ceilometer.network.notifications:IKEPolicy
network.services.vpn.connections = ceilometer.network.notifications:IPSecSiteConnection
objectstore.request = ceilometer.objectstore.notifications:SwiftWsgiMiddleware
objectstore.request.meters = ceilometer.objectstore.notifications:SwiftWsgiMiddlewareMeters
dns.domain.crud = ceilometer.dns.notifications:DomainCRUD
ceilometer.discover =
local_instances = ceilometer.compute.discovery:InstanceDiscovery
endpoint = ceilometer.agent.discovery.endpoint:EndpointDiscovery
tenant = ceilometer.agent.discovery.tenant:TenantDiscovery
local_node = ceilometer.agent.discovery.localnode:LocalNodeDiscovery
lb_pools = ceilometer.network.services.discovery:LBPoolsDiscovery
lb_vips = ceilometer.network.services.discovery:LBVipsDiscovery
lb_members = ceilometer.network.services.discovery:LBMembersDiscovery
lb_health_probes = ceilometer.network.services.discovery:LBHealthMonitorsDiscovery
vpn_services = ceilometer.network.services.discovery:VPNServicesDiscovery
ipsec_connections = ceilometer.network.services.discovery:IPSecConnectionsDiscovery
fw_services = ceilometer.network.services.discovery:FirewallDiscovery
fw_policy = ceilometer.network.services.discovery:FirewallPolicyDiscovery
tripleo_overcloud_nodes = ceilometer.hardware.discovery:NodesDiscoveryTripleO
ceilometer.poll.compute =
disk.read.requests = ceilometer.compute.pollsters.disk:ReadRequestsPollster
disk.write.requests = ceilometer.compute.pollsters.disk:WriteRequestsPollster
disk.read.bytes = ceilometer.compute.pollsters.disk:ReadBytesPollster
disk.write.bytes = ceilometer.compute.pollsters.disk:WriteBytesPollster
disk.read.requests.rate = ceilometer.compute.pollsters.disk:ReadRequestsRatePollster
disk.write.requests.rate = ceilometer.compute.pollsters.disk:WriteRequestsRatePollster
disk.read.bytes.rate = ceilometer.compute.pollsters.disk:ReadBytesRatePollster
disk.write.bytes.rate = ceilometer.compute.pollsters.disk:WriteBytesRatePollster
disk.device.read.requests = ceilometer.compute.pollsters.disk:PerDeviceReadRequestsPollster
disk.device.write.requests = ceilometer.compute.pollsters.disk:PerDeviceWriteRequestsPollster
disk.device.read.bytes = ceilometer.compute.pollsters.disk:PerDeviceReadBytesPollster
disk.device.write.bytes = ceilometer.compute.pollsters.disk:PerDeviceWriteBytesPollster
disk.device.read.requests.rate = ceilometer.compute.pollsters.disk:PerDeviceReadRequestsRatePollster
disk.device.write.requests.rate = ceilometer.compute.pollsters.disk:PerDeviceWriteRequestsRatePollster
disk.device.read.bytes.rate = ceilometer.compute.pollsters.disk:PerDeviceReadBytesRatePollster
disk.device.write.bytes.rate = ceilometer.compute.pollsters.disk:PerDeviceWriteBytesRatePollster
disk.latency = ceilometer.compute.pollsters.disk:DiskLatencyPollster
disk.device.latency = ceilometer.compute.pollsters.disk:PerDeviceDiskLatencyPollster
disk.iops = ceilometer.compute.pollsters.disk:DiskIOPSPollster
disk.device.iops = ceilometer.compute.pollsters.disk:PerDeviceDiskIOPSPollster
cpu = ceilometer.compute.pollsters.cpu:CPUPollster
cpu_util = ceilometer.compute.pollsters.cpu:CPUUtilPollster
network.incoming.bytes = ceilometer.compute.pollsters.net:IncomingBytesPollster
network.incoming.packets = ceilometer.compute.pollsters.net:IncomingPacketsPollster
network.outgoing.bytes = ceilometer.compute.pollsters.net:OutgoingBytesPollster
network.outgoing.packets = ceilometer.compute.pollsters.net:OutgoingPacketsPollster
network.incoming.bytes.rate = ceilometer.compute.pollsters.net:IncomingBytesRatePollster
network.outgoing.bytes.rate = ceilometer.compute.pollsters.net:OutgoingBytesRatePollster
instance = ceilometer.compute.pollsters.instance:InstancePollster
instance_flavor = ceilometer.compute.pollsters.instance:InstanceFlavorPollster
memory.usage = ceilometer.compute.pollsters.memory:MemoryUsagePollster
memory.resident = ceilometer.compute.pollsters.memory:MemoryResidentPollster
disk.capacity = ceilometer.compute.pollsters.disk:CapacityPollster
disk.allocation = ceilometer.compute.pollsters.disk:AllocationPollster
disk.usage = ceilometer.compute.pollsters.disk:PhysicalPollster
disk.device.capacity = ceilometer.compute.pollsters.disk:PerDeviceCapacityPollster
disk.device.allocation = ceilometer.compute.pollsters.disk:PerDeviceAllocationPollster
disk.device.usage = ceilometer.compute.pollsters.disk:PerDevicePhysicalPollster
ceilometer.poll.ipmi =
hardware.ipmi.node.power = ceilometer.ipmi.pollsters.node:PowerPollster
hardware.ipmi.node.temperature = ceilometer.ipmi.pollsters.node:InletTemperaturePollster
hardware.ipmi.node.outlet_temperature = ceilometer.ipmi.pollsters.node:OutletTemperaturePollster
hardware.ipmi.node.airflow = ceilometer.ipmi.pollsters.node:AirflowPollster
hardware.ipmi.node.cups = ceilometer.ipmi.pollsters.node:CUPSIndexPollster
hardware.ipmi.node.cpu_util = ceilometer.ipmi.pollsters.node:CPUUtilPollster
hardware.ipmi.node.mem_util = ceilometer.ipmi.pollsters.node:MemUtilPollster
hardware.ipmi.node.io_util = ceilometer.ipmi.pollsters.node:IOUtilPollster
hardware.ipmi.temperature = ceilometer.ipmi.pollsters.sensor:TemperatureSensorPollster
hardware.ipmi.voltage = ceilometer.ipmi.pollsters.sensor:VoltageSensorPollster
hardware.ipmi.current = ceilometer.ipmi.pollsters.sensor:CurrentSensorPollster
hardware.ipmi.fan = ceilometer.ipmi.pollsters.sensor:FanSensorPollster
ceilometer.poll.central =
ip.floating = ceilometer.network.floatingip:FloatingIPPollster
image = ceilometer.image.glance:ImagePollster
image.size = ceilometer.image.glance:ImageSizePollster
rgw.containers.objects = ceilometer.objectstore.rgw:ContainersObjectsPollster
rgw.containers.objects.size = ceilometer.objectstore.rgw:ContainersSizePollster
rgw.objects = ceilometer.objectstore.rgw:ObjectsPollster
rgw.objects.size = ceilometer.objectstore.rgw:ObjectsSizePollster
rgw.objects.containers = ceilometer.objectstore.rgw:ObjectsContainersPollster
rgw.usage = ceilometer.objectstore.rgw:UsagePollster
storage.containers.objects = ceilometer.objectstore.swift:ContainersObjectsPollster
storage.containers.objects.size = ceilometer.objectstore.swift:ContainersSizePollster
storage.objects = ceilometer.objectstore.swift:ObjectsPollster
storage.objects.size = ceilometer.objectstore.swift:ObjectsSizePollster
storage.objects.containers = ceilometer.objectstore.swift:ObjectsContainersPollster
energy = ceilometer.energy.kwapi:EnergyPollster
power = ceilometer.energy.kwapi:PowerPollster
switch.port = ceilometer.network.statistics.port:PortPollster
switch.port.receive.packets = ceilometer.network.statistics.port:PortPollsterReceivePackets
switch.port.transmit.packets = ceilometer.network.statistics.port:PortPollsterTransmitPackets
switch.port.receive.bytes = ceilometer.network.statistics.port:PortPollsterReceiveBytes
switch.port.transmit.bytes = ceilometer.network.statistics.port:PortPollsterTransmitBytes
switch.port.receive.drops = ceilometer.network.statistics.port:PortPollsterReceiveDrops
switch.port.transmit.drops = ceilometer.network.statistics.port:PortPollsterTransmitDrops
switch.port.receive.errors = ceilometer.network.statistics.port:PortPollsterReceiveErrors
switch.port.transmit.errors = ceilometer.network.statistics.port:PortPollsterTransmitErrors
switch.port.receive.frame_error = ceilometer.network.statistics.port:PortPollsterReceiveFrameErrors
switch.port.receive.overrun_error = ceilometer.network.statistics.port:PortPollsterReceiveOverrunErrors
switch.port.receive.crc_error = ceilometer.network.statistics.port:PortPollsterReceiveCRCErrors
switch.port.collision.count = ceilometer.network.statistics.port:PortPollsterCollisionCount
switch.table = ceilometer.network.statistics.table:TablePollster
switch.table.active.entries = ceilometer.network.statistics.table:TablePollsterActiveEntries
switch.table.lookup.packets = ceilometer.network.statistics.table:TablePollsterLookupPackets
switch.table.matched.packets = ceilometer.network.statistics.table:TablePollsterMatchedPackets
switch = ceilometer.network.statistics.switch:SWPollster
switch.flow = ceilometer.network.statistics.flow:FlowPollster
switch.flow.bytes = ceilometer.network.statistics.flow:FlowPollsterBytes
switch.flow.duration.nanoseconds = ceilometer.network.statistics.flow:FlowPollsterDurationNanoseconds
switch.flow.duration.seconds = ceilometer.network.statistics.flow:FlowPollsterDurationSeconds
switch.flow.packets = ceilometer.network.statistics.flow:FlowPollsterPackets
hardware.cpu.load.1min = ceilometer.hardware.pollsters.cpu:CPULoad1MinPollster
hardware.cpu.load.5min = ceilometer.hardware.pollsters.cpu:CPULoad5MinPollster
hardware.cpu.load.15min = ceilometer.hardware.pollsters.cpu:CPULoad15MinPollster
hardware.disk.size.total = ceilometer.hardware.pollsters.disk:DiskTotalPollster
hardware.disk.size.used = ceilometer.hardware.pollsters.disk:DiskUsedPollster
hardware.network.incoming.bytes = ceilometer.hardware.pollsters.net:IncomingBytesPollster
hardware.network.outgoing.bytes = ceilometer.hardware.pollsters.net:OutgoingBytesPollster
hardware.network.outgoing.errors = ceilometer.hardware.pollsters.net:OutgoingErrorsPollster
hardware.memory.total = ceilometer.hardware.pollsters.memory:MemoryTotalPollster
hardware.memory.used = ceilometer.hardware.pollsters.memory:MemoryUsedPollster
hardware.memory.buffer = ceilometer.hardware.pollsters.memory:MemoryBufferPollster
hardware.memory.cached = ceilometer.hardware.pollsters.memory:MemoryCachedPollster
hardware.memory.swap.total = ceilometer.hardware.pollsters.memory:MemorySwapTotalPollster
hardware.memory.swap.avail = ceilometer.hardware.pollsters.memory:MemorySwapAvailPollster
hardware.system_stats.cpu.idle = ceilometer.hardware.pollsters.system:SystemCpuIdlePollster
hardware.system_stats.io.outgoing.blocks = ceilometer.hardware.pollsters.system:SystemIORawSentPollster
hardware.system_stats.io.incoming.blocks = ceilometer.hardware.pollsters.system:SystemIORawReceivedPollster
hardware.network.ip.outgoing.datagrams = ceilometer.hardware.pollsters.network_aggregated:NetworkAggregatedIPOutRequests
hardware.network.ip.incoming.datagrams = ceilometer.hardware.pollsters.network_aggregated:NetworkAggregatedIPInReceives
network.services.lb.pool = ceilometer.network.services.lbaas:LBPoolPollster
network.services.lb.vip = ceilometer.network.services.lbaas:LBVipPollster
network.services.lb.member = ceilometer.network.services.lbaas:LBMemberPollster
network.services.lb.health_monitor = ceilometer.network.services.lbaas:LBHealthMonitorPollster
network.services.lb.total.connections = ceilometer.network.services.lbaas:LBTotalConnectionsPollster
network.services.lb.active.connections = ceilometer.network.services.lbaas:LBActiveConnectionsPollster
network.services.lb.incoming.bytes = ceilometer.network.services.lbaas:LBBytesInPollster
network.services.lb.outgoing.bytes = ceilometer.network.services.lbaas:LBBytesOutPollster
network.services.vpn = ceilometer.network.services.vpnaas:VPNServicesPollster
network.services.vpn.connections = ceilometer.network.services.vpnaas:IPSecConnectionsPollster
network.services.firewall = ceilometer.network.services.fwaas:FirewallPollster
network.services.firewall.policy = ceilometer.network.services.fwaas:FirewallPolicyPollster
ceilometer.alarm.storage =
log = ceilometer.alarm.storage.impl_log:Connection
mongodb = ceilometer.alarm.storage.impl_mongodb:Connection
mysql = ceilometer.alarm.storage.impl_sqlalchemy:Connection
postgresql = ceilometer.alarm.storage.impl_sqlalchemy:Connection
sqlite = ceilometer.alarm.storage.impl_sqlalchemy:Connection
hbase = ceilometer.alarm.storage.impl_hbase:Connection
db2 = ceilometer.alarm.storage.impl_db2:Connection
ceilometer.event.storage =
es = ceilometer.event.storage.impl_elasticsearch:Connection
log = ceilometer.event.storage.impl_log:Connection
mongodb = ceilometer.event.storage.impl_mongodb:Connection
mysql = ceilometer.event.storage.impl_sqlalchemy:Connection
postgresql = ceilometer.event.storage.impl_sqlalchemy:Connection
sqlite = ceilometer.event.storage.impl_sqlalchemy:Connection
hbase = ceilometer.event.storage.impl_hbase:Connection
db2 = ceilometer.event.storage.impl_db2:Connection
ceilometer.metering.storage =
log = ceilometer.storage.impl_log:Connection
mongodb = ceilometer.storage.impl_mongodb:Connection
mysql = ceilometer.storage.impl_sqlalchemy:Connection
postgresql = ceilometer.storage.impl_sqlalchemy:Connection
sqlite = ceilometer.storage.impl_sqlalchemy:Connection
hbase = ceilometer.storage.impl_hbase:Connection
db2 = ceilometer.storage.impl_db2:Connection
monasca = ceilometer.storage.impl_monasca:Connection
ceilometer.compute.virt =
libvirt = ceilometer.compute.virt.libvirt.inspector:LibvirtInspector
hyperv = ceilometer.compute.virt.hyperv.inspector:HyperVInspector
vsphere = ceilometer.compute.virt.vmware.inspector:VsphereInspector
xenapi = ceilometer.compute.virt.xenapi.inspector:XenapiInspector
ceilometer.hardware.inspectors =
snmp = ceilometer.hardware.inspector.snmp:SNMPInspector
ceilometer.transformer =
accumulator = ceilometer.transformer.accumulator:TransformerAccumulator
unit_conversion = ceilometer.transformer.conversions:ScalingTransformer
rate_of_change = ceilometer.transformer.conversions:RateOfChangeTransformer
aggregator = ceilometer.transformer.conversions:AggregatorTransformer
arithmetic = ceilometer.transformer.arithmetic:ArithmeticTransformer
ceilometer.publisher =
test = ceilometer.publisher.test:TestPublisher
meter_publisher = ceilometer.publisher.messaging:RPCPublisher
meter = ceilometer.publisher.messaging:RPCPublisher
rpc = ceilometer.publisher.messaging:RPCPublisher
notifier = ceilometer.publisher.messaging:SampleNotifierPublisher
udp = ceilometer.publisher.udp:UDPPublisher
file = ceilometer.publisher.file:FilePublisher
direct = ceilometer.publisher.direct:DirectPublisher
kafka = ceilometer.publisher.kafka_broker:KafkaBrokerPublisher
monasca = ceilometer.publisher.monclient:MonascaPublisher
ceilometer.event.publisher =
test = ceilometer.publisher.test:TestPublisher
direct = ceilometer.publisher.direct:DirectPublisher
notifier = ceilometer.publisher.messaging:EventNotifierPublisher
kafka = ceilometer.publisher.kafka_broker:KafkaBrokerPublisher
ceilometer.alarm.rule =
threshold = ceilometer.api.controllers.v2.alarm_rules.threshold:AlarmThresholdRule
combination = ceilometer.api.controllers.v2.alarm_rules.combination:AlarmCombinationRule
gnocchi_resources_threshold = ceilometer.api.controllers.v2.alarm_rules.gnocchi:MetricOfResourceRule
gnocchi_aggregation_by_metrics_threshold = ceilometer.api.controllers.v2.alarm_rules.gnocchi:AggregationMetricsByIdLookupRule
gnocchi_aggregation_by_resources_threshold = ceilometer.api.controllers.v2.alarm_rules.gnocchi:AggregationMetricByResourcesLookupRule
ceilometer.alarm.evaluator =
threshold = ceilometer.alarm.evaluator.threshold:ThresholdEvaluator
combination = ceilometer.alarm.evaluator.combination:CombinationEvaluator
gnocchi_resources_threshold = ceilometer.alarm.evaluator.gnocchi:GnocchiThresholdEvaluator
gnocchi_aggregation_by_metrics_threshold = ceilometer.alarm.evaluator.gnocchi:GnocchiThresholdEvaluator
gnocchi_aggregation_by_resources_threshold = ceilometer.alarm.evaluator.gnocchi:GnocchiThresholdEvaluator
ceilometer.alarm.notifier =
log = ceilometer.alarm.notifier.log:LogAlarmNotifier
test = ceilometer.alarm.notifier.test:TestAlarmNotifier
http = ceilometer.alarm.notifier.rest:RestAlarmNotifier
https = ceilometer.alarm.notifier.rest:RestAlarmNotifier
trust+http = ceilometer.alarm.notifier.trust:TrustRestAlarmNotifier
trust+https = ceilometer.alarm.notifier.trust:TrustRestAlarmNotifier
ceilometer.event.trait_plugin =
split = ceilometer.event.trait_plugins:SplitterTraitPlugin
bitfield = ceilometer.event.trait_plugins:BitfieldTraitPlugin
console_scripts =
ceilometer-api = ceilometer.cmd.api:main
ceilometer-agent-central = ceilometer.cmd.eventlet.polling:main_central
ceilometer-agent-compute = ceilometer.cmd.eventlet.polling:main_compute
ceilometer-polling = ceilometer.cmd.eventlet.polling:main
ceilometer-agent-notification = ceilometer.cmd.eventlet.agent_notification:main
ceilometer-agent-ipmi = ceilometer.cmd.eventlet.polling:main_ipmi
ceilometer-send-sample = ceilometer.cmd.eventlet.sample:send_sample
ceilometer-dbsync = ceilometer.cmd.eventlet.storage:dbsync
ceilometer-expirer = ceilometer.cmd.eventlet.storage:expirer
ceilometer-rootwrap = oslo_rootwrap.cmd.eventlet:main
ceilometer-collector = ceilometer.cmd.eventlet.collector:main
ceilometer-alarm-evaluator = ceilometer.cmd.eventlet.alarm:evaluator
ceilometer-alarm-notifier = ceilometer.cmd.eventlet.alarm:notifier
ceilometer.dispatcher =
database = ceilometer.dispatcher.database:DatabaseDispatcher
file = ceilometer.dispatcher.file:FileDispatcher
http = ceilometer.dispatcher.http:HttpDispatcher
gnocchi = ceilometer.dispatcher.gnocchi:GnocchiDispatcher
ceilometer.dispatcher.resource =
instance = ceilometer.dispatcher.resources.instance:Instance
swift_account = ceilometer.dispatcher.resources.swift_account:SwiftAccount
volume = ceilometer.dispatcher.resources.volume:Volume
ceph_account = ceilometer.dispatcher.resources.ceph_account:CephAccount
network = ceilometer.dispatcher.resources.network:Network
identity = ceilometer.dispatcher.resources.identity:Identity
ipmi = ceilometer.dispatcher.resources.ipmi:IPMI
stack = ceilometer.dispatcher.resources.orchestration:Stack
image = ceilometer.dispatcher.resources.image:Image
network.statistics.drivers =
opendaylight = ceilometer.network.statistics.opendaylight.driver:OpenDayLightDriver
opencontrail = ceilometer.network.statistics.opencontrail.driver:OpencontrailDriver
oslo.config.opts =
ceilometer = ceilometer.opts:list_opts
[build_sphinx]
all_files = 1
build-dir = doc/build
source-dir = doc/source
[pbr]
warnerrors = true
[extract_messages]
keywords = _ gettext ngettext l_ lazy_gettext
mapping_file = babel.cfg
output_file = ceilometer/locale/ceilometer.pot
[compile_catalog]
directory = ceilometer/locale
domain = ceilometer
[update_catalog]
domain = ceilometer
output_dir = ceilometer/locale
input_file = ceilometer/locale/ceilometer.pot


@@ -18,8 +18,13 @@ downloadcache = ~/cache/pip
[flake8]
show-source = True
# H302 Do not import objects, only modules
# H803 git commit title should not end with period
# H305 imports not grouped correctly
# H307 like imports should be grouped together
# H904 Wrap long lines in parentheses instead of a backslash
ignore = H302,H803,H904
ignore = H302,H305,H307,H904
builtins = _
exclude=.venv,.git,.tox,dist,client_api_example.py,*openstack/common*,*lib/python*,*egg,build
[hacking]
import_exceptions =
ceilometer.i18n