Kibana metrics

- auto-detecting the Kibana process
- building the configuration
- collecting metrics

Change-Id: I4c457b8396d8131fa18c76305a8f069626c51256
Tomasz Trębski 2016-02-07 19:52:37 +01:00 committed by Tomasz Trębski
parent 2c66d7cedc
commit 704ab58a4e
8 changed files with 961 additions and 3 deletions

View File

@ -0,0 +1,17 @@
# Copyright 2016 FUJITSU LIMITED
init_config:
    # URL that the check uses to access Kibana metrics
    url: http://192.168.10.6:5601/api/status
instances:
    - built_by: Kibana
      # List of metrics the check should collect.
      # Collection of any metric can be disabled
      # by removing it from the list.
      metrics:
        - heap_size
        - heap_used
        - load
        - req_sec
        - resp_time_avg
        - resp_time_max

View File

@ -77,6 +77,7 @@
- [Vertica Checks](#vertica-checks)
- [WMI Check](#wmi-check)
- [ZooKeeper](#zookeeper)
- [Kibana](#kibana)
- [OpenStack Monitoring](#openstack-monitoring)
- [Nova Checks](#nova-checks)
- [Nova Processes Monitored](#nova-processes-monitored)
@ -135,6 +136,7 @@ The following plugins are delivered via setup as part of the standard plugin checks
| iis | | Microsoft Internet Information Services |
| jenkins | | |
| kafka_consumer | | |
| kibana | **kibana_install_dir**/kibana.yml | Integration with Kibana |
| kyototycoon | | |
| libvirt | | |
| lighttpd | | |
@ -300,6 +302,7 @@ These are the detection plugins included with the Monasca Agent. See [Customiza
| vcenter | Plugin |
| vertica | Plugin |
| zookeeper | Plugin |
| kibana | Plugin |
# Agent Plugin Detail
@ -742,9 +745,9 @@ The Elasticsearch checks return the following metrics:
* [List of available thread pools](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-threadpool.html)
## File Size
This section describes the file size check that can be performed by the Agent. File size checks gather the size of individual files or the size of each file under a specific directory. The Agent supports additional functionality through the use of Python scripts: a YAML file (file_size.yaml) lists the directory names and file names to check, and a Python script (file_size.py) runs the checks on each host in turn to gather the stats.
Similar to other checks, the configuration is done in YAML and consists of two keys: init_config and instances. The former is not used by file_size, while the latter contains one or more sets of a directory name and file names to check, plus an optional recursive parameter. When recursive is true and file_name is set to '*', the file_size check takes all the files under the given directory recursively.
Sample config:
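A minimal sketch of that layout (the directory and file names below are illustrative, not defaults):

```yaml
init_config: null
instances:
    # gather the size of two specific files
    - directory_name: /var/log/monasca/agent
      file_names:
        - collector.log
        - forwarder.log
    # gather the size of every file under /tmp, recursively
    - directory_name: /tmp
      file_names:
        - '*'
      recursive: true
```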
@ -1058,7 +1061,7 @@ instances:
port: 3306
server: padawan-ccp-c1-m1-mgmt
user: root
Example ssl connect:
instances:
- built_by: MySQL
@ -1493,6 +1496,43 @@ The Zookeeper checks return the following metrics:
| zookeeper.zxid_count | hostname, mode, service=zookeeper | Count number |
| zookeeper.zxid_epoch | hostname, mode, service=zookeeper | Epoch number |
## Kibana
This section describes the Kibana check that can be performed by the Agent.
The Kibana check requires a configuration file containing the Kibana configuration
(the same file Kibana itself uses).
The check accesses Kibana's status endpoint (```curl -XGET http://localhost:5601/api/status```),
so it works only with Kibana >= 4.2.x, the first release to expose this endpoint.
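To see the data the check consumes, the status endpoint can be queried directly. A minimal sketch (using the default local URL, adjust as needed):

```python
# Minimal sketch: peek at the status payload the Kibana check reads.
import requests

resp = requests.get('http://localhost:5601/api/status')
resp.raise_for_status()

# Kibana 4.x reports its version in the 'kbn-version' response header.
print(resp.headers.get('kbn-version'))

# Each entry under 'metrics' is a series of [timestamp_ms, value] samples.
for name, series in resp.json().get('metrics', {}).items():
    print(name, series[:1])
```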
Sample config:
```yaml
init_config:
    url: http://localhost:5601/api/status
instances:
    - built_by: Kibana
      metrics:
        - heap_size
        - heap_used
        - load
        - req_sec
        - resp_time_avg
        - resp_time_max
```
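The metric names accepted here are derived from Kibana's camelCase series names: a snake_case conversion plus a few short aliases. A sketch mirroring the setup plugin's helper:

```python
# Sketch of the name mapping applied to Kibana's raw series names.
_ALIASES = {
    'heap_total': 'heap_size',
    'requests_per_second': 'req_sec',
    'response_time_avg': 'resp_time_avg',
    'response_time_max': 'resp_time_max',
}

def config_name(raw):
    # camelCase -> snake_case, then apply the alias table
    snake = ''.join('_' + c.lower() if c.isupper() else c
                    for c in raw).lstrip('_')
    return _ALIASES.get(snake, snake)

assert config_name('heapTotal') == 'heap_size'
assert config_name('requestsPerSecond') == 'req_sec'
assert config_name('load') == 'load'
```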
The Kibana checks return the following metrics:
| Metric Name | Dimensions | Semantics |
| ----------- | ---------- | --------- |
| kibana.load_avg_1m | hostname, version, service=monitoring | The average Kibana load over a 1-minute period; for details see [here](https://nodejs.org/api/os.html#os_os_loadavg) |
| kibana.load_avg_5m | hostname, version, service=monitoring | The average Kibana load over a 5-minute period; for details see [here](https://nodejs.org/api/os.html#os_os_loadavg) |
| kibana.load_avg_15m | hostname, version, service=monitoring | The average Kibana load over a 15-minute period; for details see [here](https://nodejs.org/api/os.html#os_os_loadavg) |
| kibana.heap_size_mb | hostname, version, service=monitoring | Total heap size in MB |
| kibana.heap_used_mb | hostname, version, service=monitoring | Used heap size in MB |
| kibana.req_sec | hostname, version, service=monitoring | Requests per second to the Kibana server |
| kibana.resp_time_avg_ms | hostname, version, service=monitoring | The average response time of the Kibana server in ms |
| kibana.resp_time_max_ms | hostname, version, service=monitoring | The maximum response time of the Kibana server in ms |
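Kibana delivers load as one three-element sample per timestamp; the check fans each sample out into the three load_avg gauges above. A condensed illustration (sample values taken from the test fixture):

```python
# One raw 'load' sample from /api/status: [timestamp_ms, [1m, 5m, 15m]].
sample = [1467194860974, [0.5, 1.0, 1.5]]

timestamp = int(sample[0]) / 1000.0  # epoch seconds, as the check reports
for period, value in zip(['1m', '5m', '15m'], sample[1]):
    print('kibana.load_avg_%s = %.1f @ %.3f' % (period, value, timestamp))
```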
## OpenStack Monitoring
The `monasca-setup` script, when run on a system that is running OpenStack services, configures the Agent to send the following list of metrics:

View File

@ -0,0 +1,139 @@
# Copyright 2016 FUJITSU LIMITED
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import logging
import requests
from monasca_agent.collector import checks
from monasca_agent.common import util
from monasca_setup.detection.plugins import kibana as kibana_setup
LOG = logging.getLogger(__name__)
_ONE_MB = (1024 * 1024) * 1.0
_LOAD_TIME_SERIES = ['1m', '5m', '15m']
class Kibana(checks.AgentCheck):
def get_library_versions(self):
try:
import yaml
version = yaml.__version__
except ImportError:
version = "Not Found"
except AttributeError:
version = "Unknown"
return {"PyYAML": version}
def check(self, instance):
config_url = self.init_config.get('url', None)
if config_url is None:
            raise Exception('A URL to Kibana must be specified')
instance_metrics = instance.get('metrics', None)
if not instance_metrics:
LOG.warn('All metrics have been disabled in configuration '
'file, nothing to do.')
return
version = self._get_kibana_version(config_url)
dimensions = self._set_dimensions({'version': version}, instance)
LOG.debug('Kibana version %s', version)
try:
stats = self._get_data(config_url)
except Exception as ex:
LOG.error('Error while trying to get stats from Kibana[%s]' %
config_url)
LOG.exception(ex)
return
if not stats:
LOG.warn('No stats data was collected from kibana')
return
self._process_metrics(stats, dimensions, instance_metrics)
def _get_data(self, url):
return requests.get(
url=url,
headers=util.headers(self.agent_config)
).json()
    def _process_metrics(self, stats, dimensions, instance_metrics):
        # map raw Kibana series names onto the metric names used in
        # the instance configuration, then report only those enabled
        actual_metrics = {kibana_setup.get_metric_name(k): v for k, v in
                          stats.get('metrics', {}).items()}
        instance_url = self.init_config.get('url')
        for metric, value in actual_metrics.items():
            if metric not in instance_metrics:
                LOG.debug('%s has been disabled for %s check' % (
                    metric, instance_url))
            else:
                self._process_metric(metric, value, dimensions)
def _process_metric(self, metric, stats, dimensions):
LOG.debug('Processing metric %s' % metric)
metric_name = self.normalize(metric, 'kibana')
if metric in ['heap_size', 'heap_used']:
metric_name = '%s_mb' % metric_name
elif metric in ['resp_time_max', 'resp_time_avg']:
metric_name = '%s_ms' % metric_name
for item in stats:
timestamp = int(item[0]) / 1000.0
measurements = item[1]
cleaned_metric_name = metric_name
            if not isinstance(measurements, list):
                # only 'load' arrives as a list of samples (the 1m,
                # 5m and 15m averages); wrap scalars so the loop
                # below handles both shapes uniformly
                measurements = [measurements]
for it, measurement in enumerate(measurements):
if measurement is None:
LOG.debug('Measurement for metric %s at %d was not '
'returned from kibana server, skipping'
% (metric_name, timestamp))
continue
if metric in ['heap_size', 'heap_used']:
measurement /= _ONE_MB
elif metric == 'load':
load_sub_metric = _LOAD_TIME_SERIES[it]
cleaned_metric_name = '%s_avg_%s' % (metric_name,
load_sub_metric)
LOG.debug('Reporting %s as gauge with value %f'
% (cleaned_metric_name, measurement))
self.gauge(
metric=cleaned_metric_name,
value=measurement,
dimensions=dimensions,
timestamp=timestamp
)
def _get_kibana_version(self, url):
return requests.head(url=url).headers['kbn-version']

View File

@ -0,0 +1,195 @@
# Copyright 2016 FUJITSU LIMITED
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import logging
import os
import requests
from monasca_setup import agent_config
from monasca_setup import detection
from monasca_setup.detection import utils
LOG = logging.getLogger(__name__)
_KIBANA_CFG_FILE = '/opt/kibana/config/kibana.yml'
_API_STATUS = 'api/status'
_METRIC_ALIASES = {
'heap_total': 'heap_size',
'requests_per_second': 'req_sec',
'response_time_avg': 'resp_time_avg',
'response_time_max': 'resp_time_max'
}
def _to_snake_case(word):
final = ''
for item in word:
if item.isupper():
final += "_" + item.lower()
else:
final += item
    return final.lstrip('_')
def get_metric_name(metric):
actual_name = _to_snake_case(metric)
return _METRIC_ALIASES.get(actual_name, actual_name)
class Kibana(detection.Plugin):
def _detect(self):
# check process and port
process_found = utils.find_process_cmdline('kibana') is not None
has_deps = self.dependencies_installed()
has_args = self.args is not None
cfg_file = self._get_config_file() if has_args else _KIBANA_CFG_FILE
has_config_file = os.path.isfile(cfg_file)
available = process_found and has_deps and has_config_file
self.available = available
if not self.available:
err_str = 'Plugin for Kibana will not be configured.'
if not process_found:
LOG.error('Kibana process has not been found. %s' % err_str)
elif not has_deps:
LOG.error('Kibana plugin dependencies are not satisfied. '
                          'Module "yaml" not found. %s'
% err_str)
elif not has_config_file:
LOG.error('Kibana plugin cannot find configuration file %s. %s'
% (cfg_file, err_str))
def build_config(self):
kibana_config = self._get_config_file()
try:
(kibana_host,
kibana_port,
kibana_protocol) = self._read_config(kibana_config)
except Exception as ex:
LOG.error('Failed to read configuration at %s' % kibana_config)
LOG.exception(ex)
return
if kibana_protocol == 'https':
LOG.error('"https" protocol is currently not supported')
return None
config = agent_config.Plugins()
# retrieve user name and set in config
# if passed in args (note args are optional)
if (self.args and 'kibana-user' in self.args and
self.args['kibana-user']):
process = detection.watch_process_by_username(
username=self.args['kibana-user'],
process_name='kibana',
service='monitoring',
component='kibana'
)
else:
process = detection.watch_process(['kibana'],
service='monitoring',
component='kibana',
process_name='kibana')
config.merge(process)
kibana_url = '%s://%s:%d' % (
kibana_protocol,
kibana_host,
kibana_port
)
if not self._has_metrics_support(kibana_url):
LOG.warning('Running kibana does not support metrics, skipping...')
return None
else:
metrics = self._get_all_metrics(kibana_url)
config['kibana'] = {
'init_config': {
'url': '%s/%s' % (kibana_url, _API_STATUS),
},
'instances': [
{
'metrics': metrics
}
]
}
LOG.info('\tWatching the kibana process.')
return config
def dependencies_installed(self):
try:
import yaml
except Exception:
return False
return True
def _get_config_file(self):
if self.args is not None:
kibana_config = self.args.get('kibana-config', _KIBANA_CFG_FILE)
else:
kibana_config = _KIBANA_CFG_FILE
return kibana_config
@staticmethod
def _read_config(kibana_cfg):
import yaml
with open(kibana_cfg, 'r') as stream:
            document = yaml.safe_load(stream)
has_ssl_support = ('server.ssl.cert' in document and
'server.ssl.key' in document)
host = document.get('server.host')
port = int(document.get('server.port'))
protocol = 'https' if has_ssl_support else 'http'
return host, port, protocol
def _get_all_metrics(self, kibana_url):
resp = self._get_metrics_request(kibana_url)
data = resp.json()
metrics = []
# do not check plugins, check will go for overall status
# get metrics
for metric in data.get('metrics').keys():
metrics.append(get_metric_name(metric))
return metrics
def _has_metrics_support(self, kibana_url):
resp = self._get_metrics_request(kibana_url, method='HEAD')
status_code = resp.status_code
        # Kibana responds to HEAD with 400 (Bad Request) because the
        # status endpoint only supports GET requests; a 400 therefore
        # still means the URL is reachable and metrics are supported
        return status_code == 400
def _get_metrics_request(self, url, method='GET'):
request_url = '%s/%s' % (url, _API_STATUS)
return requests.request(method=method, url=request_url)

View File

View File

@ -0,0 +1,88 @@
{
"metrics": {
"heapTotal": [
[1467194860974, 209715200],
[1467194855970, 209715200],
[1467194850970, 209715200],
[1467194845969, 209715200],
[1467194840968, 209715200],
[1467194835968, 209715200],
[1467194830967, 209715200],
[1467194825967, 209715200],
[1467194820967, 209715200],
[1467194815966, 209715200],
[1467194810966, 209715200],
[1467194805963, 209715200]
],
"heapUsed": [
[1467194860974, 104857600],
[1467194855971, 104857600],
[1467194850970, 104857600],
[1467194845969, 104857600],
[1467194840968, 104857600],
[1467194835968, 104857600],
[1467194830967, 104857600],
[1467194825967, 104857600],
[1467194820967, 104857600],
[1467194815966, 104857600],
[1467194810966, 104857600],
[1467194805963, 104857600]
],
"load": [
[1467194860974, [0.5, 1.0, 1.5]],
[1467194855971, [0.5, 1.0, 1.5]],
[1467194850970, [0.5, 1.0, 1.5]],
[1467194845969, [0.5, 1.0, 1.5]],
[1467194840968, [0.5, 1.0, 1.5]],
[1467194835968, [0.5, 1.0, 1.5]],
[1467194830967, [0.5, 1.0, 1.5]],
[1467194825967, [0.5, 1.0, 1.5]],
[1467194820967, [0.5, 1.0, 1.5]],
[1467194815966, [0.5, 1.0, 1.5]],
[1467194810966, [0.5, 1.0, 1.5]],
[1467194805963, [0.5, 1.0, 1.5]]
],
"responseTimeAvg": [
[1467194860974, 20],
[1467194855971, 30],
[1467194850970, null],
[1467194845969, 5],
[1467194840968, null],
[1467194835968, null],
[1467194830967, 4],
[1467194825967, null],
[1467194820967, null],
[1467194815966, 25],
[1467194810966, null],
[1467194805963, null]
],
"responseTimeMax": [
[1467194860974, 50],
[1467194855971, 200],
[1467194850970, 25],
[1467194845969, 0],
[1467194840968, 0],
[1467194835968, 0],
[1467194830967, 15],
[1467194825967, 0],
[1467194820967, 0],
[1467194815966, 5],
[1467194810966, 5],
[1467194805963, 0]
],
"requestsPerSecond": [
[1467194860974, 4],
[1467194855972, 2],
[1467194850970, 0],
[1467194845969, 10],
[1467194840968, 0],
[1467194835968, 0],
[1467194830967, 1],
[1467194825967, 1],
[1467194820967, 0],
[1467194815966, 5],
[1467194810966, 0],
[1467194805963, 0]
]
}
}

View File

@ -0,0 +1,201 @@
# Copyright 2016 FUJITSU LIMITED
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
import contextlib
import logging
import unittest
import mock
import json
import random
from monasca_agent.common import util
from monasca_agent.collector.checks_d import kibana
LOG = logging.getLogger(kibana.__name__)
_KIBANA_VERSION = '4.4.0'
_KIBANA_URL = 'http://localhost:5700/api/status'
class MockKibanaCheck(kibana.Kibana):
def __init__(self):
super(MockKibanaCheck, self).__init__(
name='kibana',
init_config={
'url': _KIBANA_URL
},
instances=[],
agent_config={}
)
class KibanaCheckTest(unittest.TestCase):
def setUp(self):
super(KibanaCheckTest, self).setUp()
with mock.patch.object(util, 'get_hostname'):
self.kibana_check = MockKibanaCheck()
self.kibana_check._get_kibana_version = mock.Mock(
return_value=_KIBANA_VERSION
)
def test_should_throw_exception_if_url_not_specified(self):
with self.assertRaises(Exception) as err:
self.kibana_check.init_config = {}
self.kibana_check.check(None)
        self.assertEqual('A URL to Kibana must be specified',
                         err.exception.message)
def test_should_early_exit_if_all_metrics_disabled(self):
with contextlib.nested(
mock.patch.object(util, 'get_hostname'),
mock.patch.object(LOG, 'warn')
) as (_, mock_log_warn):
self.kibana_check._get_kibana_version = mock.Mock()
self.kibana_check._get_data = mock.Mock()
self.kibana_check._process_metrics = mock.Mock()
self.kibana_check.check({'metrics': []})
self.assertFalse(self.kibana_check._get_kibana_version.called)
self.assertFalse(self.kibana_check._get_data.called)
self.assertFalse(self.kibana_check._process_metrics.called)
self.assertEqual(mock_log_warn.call_count, 1)
self.assertEqual(mock_log_warn.call_args[0][0],
'All metrics have been disabled in configuration '
'file, nothing to do.')
def test_failed_to_retrieve_data(self):
with contextlib.nested(
mock.patch.object(util, 'get_hostname'),
mock.patch.object(LOG, 'error'),
mock.patch.object(LOG, 'exception')
) as (_, mock_log_error, mock_log_exception):
exception = Exception('oh')
self.kibana_check._get_data = mock.Mock(
side_effect=exception)
self.kibana_check.check({
'metrics': ['heap_size',
'heap_used',
'load',
'req_sec',
'resp_time_avg',
'resp_time_max']
})
self.assertEqual(mock_log_error.call_count, 1)
self.assertEqual(mock_log_error.call_args[0][0],
'Error while trying to get stats from Kibana[%s]'
% _KIBANA_URL)
self.assertEqual(mock_log_exception.call_count, 1)
self.assertEqual(mock_log_exception.call_args[0][0],
exception)
def test_empty_data_returned(self):
with contextlib.nested(
mock.patch.object(util, 'get_hostname'),
mock.patch.object(LOG, 'warn')
) as (_, mock_log_warn):
self.kibana_check._get_data = mock.Mock(return_value=None)
self.kibana_check.check({
'metrics': ['heap_size',
'heap_used',
'load',
'req_sec',
'resp_time_avg',
'resp_time_max']
})
self.assertEqual(mock_log_warn.call_count, 1)
self.assertEqual(mock_log_warn.call_args[0][0],
'No stats data was collected from kibana')
def test_process_metrics(self):
all_metrics = ['heap_size', 'heap_used', 'load',
'req_sec', 'resp_time_avg',
'resp_time_max']
enabled_metrics = all_metrics[:random.randint(0, len(all_metrics) - 1)]
if not enabled_metrics:
            # if the random slice came up empty, keep at least one
            # metric enabled so there is something to check
enabled_metrics.append(all_metrics[0])
response = {
'metrics': {
'heapTotal': [],
'heapUsed': [],
'load': [],
'requestsPerSecond': [],
'responseTimeAvg': [],
'responseTimeMax': [],
}
}
with mock.patch.object(util, 'get_hostname'):
self.kibana_check._get_data = mock.Mock(return_value=response)
self.kibana_check._process_metric = mock.Mock()
self.kibana_check.check({'metrics': enabled_metrics})
self.assertTrue(self.kibana_check._process_metric.called)
self.assertEquals(len(enabled_metrics),
self.kibana_check._process_metric.call_count)
def test_check(self):
fixture_file = os.path.dirname(
os.path.abspath(__file__)) + '/fixtures/test_kibana.json'
response = json.load(file(fixture_file))
metrics = ['heap_size', 'heap_used', 'load',
'req_sec', 'resp_time_avg',
'resp_time_max']
        # Expected metric names; see the fixture for the raw values.
        # The fixture is a partial but repeatable copy of a real Kibana
        # response: it yields 96 gauge measurements in total, of which
        # 7 are skipped because responseTimeAvg contains null samples
        # (hence the expected gauge call count of 89 below).
expected_metric = [
'kibana.heap_size_mb',
'kibana.heap_used_mb',
'kibana.load_avg_1m',
'kibana.load_avg_5m',
'kibana.load_avg_15m',
'kibana.req_sec',
'kibana.resp_time_avg_ms',
'kibana.resp_time_max_ms'
]
with mock.patch.object(util, 'get_hostname'):
self.kibana_check._get_data = mock.Mock(return_value=response)
self.kibana_check.gauge = mock.Mock(return_value=response)
self.kibana_check.check({'metrics': metrics})
self.assertTrue(self.kibana_check.gauge.called)
self.assertEquals(89, self.kibana_check.gauge.call_count)
for call_arg in self.kibana_check.gauge.call_args_list:
metric_name = call_arg[1]['metric']
self.assertIn(metric_name, expected_metric)

View File

@ -0,0 +1,278 @@
# Copyright 2016 FUJITSU LIMITED
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import contextlib
import logging
import os
import unittest
import mock
import psutil
import json
from monasca_setup.detection.plugins import kibana
LOG = logging.getLogger(kibana.__name__)
_KIBANA_METRICS = ['heap_size',
'heap_used',
'load',
'req_sec',
'resp_time_avg',
'resp_time_max']
class JsonResponse(object):
def __init__(self, data):
self.data = data
def json(self):
return self.data
class PSUtilGetProc(object):
cmdLine = ['kibana']
def as_dict(self):
return {'name': 'kibana',
'cmdline': PSUtilGetProc.cmdLine}
def cmdline(self):
return self.cmdLine
class KibanaDetectionTest(unittest.TestCase):
def setUp(self):
unittest.TestCase.setUp(self)
with mock.patch.object(kibana.Kibana, '_detect') as mock_detect:
self.kibana_plugin = kibana.Kibana('temp_dir')
self.assertTrue(mock_detect.called)
def _detect(self,
kibana_plugin,
config_is_file=True,
deps_installed=True):
kibana_plugin.available = False
psutil_mock = PSUtilGetProc()
process_iter_patch = mock.patch.object(psutil, 'process_iter',
return_value=[psutil_mock])
isfile_patch = mock.patch.object(os.path, 'isfile',
return_value=config_is_file)
deps_installed_patch = mock.patch.object(kibana_plugin,
'dependencies_installed',
return_value=deps_installed)
with contextlib.nested(process_iter_patch,
isfile_patch,
deps_installed_patch) as (
mock_process_iter, mock_isfile, mock_deps_installed):
kibana_plugin._detect()
self.assertTrue(mock_process_iter.called)
self.assertTrue(mock_isfile.called)
self.assertTrue(mock_deps_installed.called)
def _verify_kibana_conf(self, kibana_check, kibana_url):
self.assertIn('init_config', kibana_check)
self.assertIsNotNone(kibana_check['init_config'])
self.assertIn('url', kibana_check['init_config'])
self.assertEqual(kibana_check['init_config']['url'], kibana_url)
self.assertIn('instances', kibana_check)
self.assertEqual(1, len(kibana_check['instances']))
for instance in kibana_check['instances']:
self.assertIn('metrics', instance)
self.assertEqual(list, type(instance['metrics']))
self.assertItemsEqual(_KIBANA_METRICS, instance['metrics'])
def _verify_process_conf(self, process_check, kibana_user):
        # keep this check minimal: verify only the username handling,
        # not the full process-check configuration
self.assertIn('instances', process_check)
self.assertEqual(1, len(process_check['instances']))
for instance in process_check['instances']:
if not kibana_user:
self.assertNotIn('username', instance)
else:
self.assertIn('username', instance)
self.assertEqual(kibana_user, instance['username'])
def test_no_detect_no_process(self):
with mock.patch.object(LOG, 'error') as mock_log_error:
PSUtilGetProc.cmdLine = []
self._detect(self.kibana_plugin)
self.assertFalse(self.kibana_plugin.available)
self.assertEqual(mock_log_error.call_count, 1)
self.assertEqual(mock_log_error.call_args[0][0],
'Kibana process has not been found. '
'Plugin for Kibana will not be configured.')
def test_no_detect_no_dependencies(self):
with mock.patch.object(LOG, 'error') as mock_log_error:
self._detect(self.kibana_plugin, deps_installed=False)
self.assertFalse(self.kibana_plugin.available)
self.assertEqual(mock_log_error.call_count, 1)
self.assertEqual(mock_log_error.call_args[0][0],
'Kibana plugin dependencies are not satisfied. '
                         'Module "yaml" not found. '
'Plugin for Kibana will not be configured.')
def test_no_detect_no_default_config_file(self):
with mock.patch.object(LOG, 'error') as mock_log_error:
self._detect(self.kibana_plugin, config_is_file=False)
self.assertFalse(self.kibana_plugin.available)
self.assertEqual(mock_log_error.call_count, 1)
self.assertEqual(mock_log_error.call_args[0][0],
'Kibana plugin cannot find configuration '
'file /opt/kibana/config/kibana.yml. '
'Plugin for Kibana will not be configured.')
def test_no_detect_no_args_config_file(self):
config_file = '/fake/config'
patch_log_error = mock.patch.object(LOG, 'error')
with patch_log_error as mock_log_error:
self.kibana_plugin.args = {'kibana-config': config_file}
self._detect(self.kibana_plugin, config_is_file=False)
self.assertFalse(self.kibana_plugin.available)
self.assertEqual(mock_log_error.call_count, 1)
self.assertEqual(mock_log_error.call_args[0][0],
'Kibana plugin cannot find configuration '
'file %s. '
'Plugin for Kibana will not be configured.'
% config_file)
def test_detect_ok(self):
self._detect(self.kibana_plugin)
self.assertTrue(self.kibana_plugin.available)
def test_build_config_unreadable_config(self):
patch_log_error = mock.patch.object(LOG, 'error')
patch_log_exception = mock.patch.object(LOG, 'exception')
patch_read_config = mock.patch.object(self.kibana_plugin,
'_read_config',
side_effect=Exception('oh'))
with contextlib.nested(
patch_log_error,
patch_log_exception,
patch_read_config
) as (mock_log_error, mock_log_exception, _):
self.kibana_plugin.build_config()
self.assertEqual(mock_log_error.call_count, 1)
self.assertEqual(mock_log_error.call_args[0][0],
'Failed to read configuration at '
'/opt/kibana/config/kibana.yml')
self.assertEqual(mock_log_exception.call_count, 1)
self.assertEqual(repr(mock_log_exception.call_args[0][0]),
repr(Exception('oh')))
def test_build_config_https_support(self):
config = ('localhost', 5700, 'https')
patch_log_error = mock.patch.object(LOG, 'error')
patch_read_config = mock.patch.object(self.kibana_plugin,
'_read_config',
return_value=config)
with contextlib.nested(
patch_log_error,
patch_read_config
) as (mock_log_error, _):
self.assertIsNone(self.kibana_plugin.build_config())
self.assertEqual(mock_log_error.call_count, 1)
self.assertEqual(mock_log_error.call_args[0][0],
'"https" protocol is currently not supported')
def test_build_config_no_metric_support(self):
config = ('localhost', 5700, 'http')
patch_log_warning = mock.patch.object(LOG, 'warning')
patch_read_config = mock.patch.object(self.kibana_plugin,
'_read_config',
return_value=config)
has_metric_patch = mock.patch.object(self.kibana_plugin,
'_has_metrics_support',
return_value=False)
with contextlib.nested(
patch_log_warning,
patch_read_config,
has_metric_patch
) as (patch_log_warning, _, __):
self.assertIsNone(self.kibana_plugin.build_config())
self.assertEqual(patch_log_warning.call_count, 1)
self.assertEqual(patch_log_warning.call_args[0][0],
'Running kibana does not support '
'metrics, skipping...')
def test_build_config_ok_no_kibana_user(self):
self._test_build_config_ok(None)
def test_build_config_ok_kibana_user(self):
self._test_build_config_ok('kibana-wizard')
def _test_build_config_ok(self, kibana_user):
kibana_host = 'localhost'
kibana_port = 5700
kibana_protocol = 'http'
kibana_cfg = (kibana_host, kibana_port, kibana_protocol)
kibana_url = '%s://%s:%d/api/status' % (
kibana_protocol,
kibana_host,
kibana_port
)
fixture_file = (os.path.dirname(os.path.abspath(__file__))
+ '/../checks_d/fixtures/test_kibana.json')
response = json.load(file(fixture_file))
get_metric_req_ret = mock.Mock(
wraps=JsonResponse(response)
)
patch_read_config = mock.patch.object(self.kibana_plugin,
'_read_config',
return_value=kibana_cfg)
has_metric_patch = mock.patch.object(self.kibana_plugin,
'_has_metrics_support',
return_value=True)
get_metrics_patch = mock.patch.object(self.kibana_plugin,
'_get_metrics_request',
return_value=get_metric_req_ret)
self.kibana_plugin.args = {'kibana-user': kibana_user}
with contextlib.nested(patch_read_config,
has_metric_patch,
get_metrics_patch):
conf = self.kibana_plugin.build_config()
self.assertIsNotNone(conf)
self.assertItemsEqual(['kibana', 'process'], conf.keys())
self._verify_kibana_conf(conf['kibana'], kibana_url)
self._verify_process_conf(conf['process'], kibana_user)