Retire monasca-agent repository
This repository is being retired as part of the Monasca project retirement. The project content has been replaced with a retirement notice.

Needed-By: I3cb522ce8f51424b64e93c1efaf0dfd1781cd5ac
Change-Id: Ic8d2c26a52940d13a30e605edc61432d2db9377f
Signed-off-by: Goutham Pacha Ravi <gouthampravi@gmail.com>
@@ -1,3 +0,0 @@
[DEFAULT]
test_path=${OS_TEST_PATH:-./tests}
top_dir=./
.zuul.yaml
@@ -1,54 +0,0 @@
- project:
    templates:
      - openstack-python3-zed-jobs
    check:
      jobs:
        - build-monasca-docker-image
        - monasca-tempest-python3-influxdb:
            voting: false
    post:
      jobs:
        - publish-monasca-agent-docker-images
    periodic:
      jobs:
        - publish-monasca-agent-docker-images
    release:
      jobs:
        - publish-monasca-agent-docker-images

- job:
    name: publish-monasca-agent-docker-images
    parent: build-monasca-docker-image
    post-run: playbooks/docker-publish.yml
    required-projects:
      - openstack/monasca-common
    vars:
      publisher: true
    secrets:
      - doker_hub_login_agent

- secret:
    name: doker_hub_login_agent
    data:
      user: !encrypted/pkcs1-oaep
        - 1bRzmNeNonIu3WCRS9nXsfMH6peZ4zw7ilLURI1WTXDB7wJU6KjzczXcIXDpWnEOkGeNg
          UgkV9UAEQ41qBcGLO9JRTCjTolx8GJv0uFrpwd1ZLD3X8jNcObHO64f3UFjCryCS+dwMm
          NNRCPYRusBV7o2B7SaeqlHlhHB/d0lYWqlxnHZLPRLuwDZFpNr273wJ6IA7B8+1KYIYiO
          yiUnFcEHE147iDoZTzPHZirGDbTRHOvMFr8mhuB78vloW90U1ZSwwTqLvhADI+51fQZv8
          pqLPGjE0jraPZH7c0ZKOLEX7GqXAAGZ/rn3fEN8AdXLzRZMO14Vv2ltNWRf/fCf71IuRR
          091ZIl4tPgz3Nb4j/4xkz4gMDginEEr5xSrJ/jWTSy5LUjaCfav35ve+H0UtMQyNs3pJr
          37yFTedwZu1glVn1AcOaus5Shb7utl+qclCN/kZ8Dx1g75ZL3PD67t/8ryHnNfvU3RYMO
          6NKWePsIAQ/JHncERxojtrLxXlh3beqhhcPVjQ/0tLPDZUg2U+ZJOYTXt2g5rALChYWOt
          rjn/41JD1SVJwichxfk661rh//Qy91g5DmTa43XmMUIjKLGezCZ21VsA9FQt8xTERT0mm
          309yHo+x8EkDGKtrfJYTgQWNywqEWgjAAYkoplVjyabS+6jgANVOgq/4scLilE=
      password: !encrypted/pkcs1-oaep
        - nTqpCACpuK/jmzh6WnODHCeIoKv4SlFzkptANnUqSsg0JLaLe2vcxEnXJLw7TQBFJGlBY
          Tp3TLf5bkNGOe5ezGAX+2LPFzX1asvHjY7m7OXTRRF55hdt9jOe87KdzG7CrqBbPEsqoG
          pGRu+s7KzhPzk1HhY7iv44R29AHyd/ncwRPvpeTfKYsAPmKvQ56MOnaPX80152XNl0T4k
          zzsdK1RBHweA0Wskw2Ryh4GtCqpEOH4+GXfDm+gt0+oefxGjJwaKglWQNxhLkPghiujUi
          1YiHj3ePhgf25omixQQ2qriWpBuTHkHUtrIF1Y2vsMCo5rZSU10AGfiaO0qB92HfLvQI/
          /XWHeRvJrRbz0G3LEClqVmHEPHRg5qMuWZx/+BIqTzm4pBsN+zdW5jz/Sk8l+Vpx2z1t1
          VWWCnJn5RuQEf6qCRwmJEFOvZ9HDZuYLAmHPBIlJPCCbZhMkXQtmCBwCrs8b4IpbK4qE5
          x7CjnJvaNb0JZJiHxhWVq3d8VBzH5WruIX/JwTWPkaLdNjAOgke5q0Uic/2xd+XdJKUVh
          JO1CixW6uYw9j82W8hCmxdgtlLWAdz7ZWnGzmX0t4JvY20zTZKAPnJ5L/fdkyd/xX2Jyh
          8JwIxP+NWhteedYmDUfcr1j9lg/f6EQyEh32Nj5ns7CpnbY4/IDZYYsDtTMbhs=
@@ -1,19 +0,0 @@
The source repository for this project can be found at:

   https://opendev.org/openstack/monasca-agent

Pull requests submitted through GitHub are not monitored.

To start contributing to OpenStack, follow the steps in the contribution guide
to set up and use Gerrit:

   https://docs.openstack.org/contributors/code-and-documentation/quick-start.html

Bugs should be filed on Storyboard:

   https://storyboard.openstack.org/#!/project/861

For more specific information about contributing to this repository, see the
Monasca contributor guide:

   https://docs.openstack.org/monasca-api/latest/contributor/contributing.html
README.rst
@@ -1,74 +1,9 @@
OpenStack Monasca Agent
=======================
This project is no longer maintained.

|Team and repository tags|
The contents of this repository are still available in the Git
source code management system. To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".

Introduction
============

The *Monasca Agent* is a modern Python monitoring agent for gathering
metrics and sending them to the Monasca API. The Agent supports
collecting metrics from a variety of sources as follows:

System metrics
    such as cpu and memory utilization.
Prometheus
    The *Monasca Agent* supports scraping metrics from endpoints provided by
    *Prometheus exporters* or *Prometheus* instrumented applications.
Statsd
    The *Monasca Agent* supports an integrated *StatsD* daemon which
    can be used by applications via a statsd client library.
OpenStack metrics
    The agent can perform checks on OpenStack processes.
Host alive
    The *Monasca Agent* can perform active checks on a host to
    determine if it is alive using ping (ICMP) or SSH.
Process checks
    The *Monasca Agent* can check a process and return
    several metrics on the process such as a number of instances, memory,
    io and threads.
Http Endpoint checks
    The *Monasca Agent* can perform active checks on
    http endpoints by sending an HTTP request to an API.
Service checks
    The *Monasca Agent* can check services such as MySQL, RabbitMQ,
    and many more.
Nagios plugins
    The *Monasca Agent* can run *Nagios* plugins and send the
    status code returned by the plugin as a metric to the Monasca API.

The Agent can automatically detect and set up checks on certain
processes and resources.

The Agent is extensible through the configuration of additional plugins,
written in Python.

Detailed Documentation
======================

For an introduction to the Monasca Agent, including a complete list of
the metrics that the Agent supports, see the "Agent" detailed
documentation.

The Agent is extensible through the configuration of additional check and
setup plugins, written in Python. See the "Agent Customizations"
detailed documentation.

Agent
    https://opendev.org/openstack/monasca-agent/src/branch/master/docs/Agent.md

Agent Customizations
    https://opendev.org/openstack/monasca-agent/src/branch/master/docs/Customizations.md

Monasca Metrics
    https://opendev.org/openstack/monasca-agent/src/branch/master/docs/MonascaMetrics.md

Agent Plugin details
    https://opendev.org/openstack/monasca-agent/src/branch/master/docs/Plugins.md

* License: Simplified BSD License
* Source: https://opendev.org/openstack/monasca-agent
* Bugs: https://storyboard.openstack.org/#!/project/861 (please use `bug` tag)

.. |Team and repository tags| image:: https://governance.openstack.org/tc/badges/monasca-agent.svg
   :target: https://governance.openstack.org/tc/reference/tags/index.html

For any further questions, please email openstack-discuss@lists.openstack.org
or join #openstack-dev on OFTC.
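The integrated StatsD daemon described above accepts the plain statsd wire protocol over UDP. As a rough illustration only (this is not the agent's or any monasca library's code, and the metric name and port 8125 are invented for the example), a client can emit a gauge like this:

```python
import socket

def format_gauge(name: str, value: float) -> bytes:
    """Encode a statsd gauge datagram: "<metric>:<value>|g"."""
    return f"{name}:{value}|g".encode("ascii")

def send_gauge(name: str, value: float,
               host: str = "127.0.0.1", port: int = 8125) -> None:
    """Fire-and-forget UDP send to a statsd daemon on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(format_gauge(name, value), (host, port))

send_gauge("app.queue_depth", 42)
```

In practice an application would use a statsd client library rather than raw sockets, but the datagram on the wire has this shape.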
@@ -1,152 +0,0 @@
Api:
  # To configure Keystone correctly, a project-scoped token must be acquired.
  # To accomplish this, the configuration must be set up with one of the
  # following scenarios:
  # Set username and password and you have a default project set in keystone.
  # Set username, password and project id.
  # Set username, password, project name and (domain id or domain name).
  #
  # Monitoring API URL (example: https://region-a.geo-1.monitoring.hpcloudsvc.com:8080/v2.0)
  # If undefined, it will be pulled from Keystone Service Catalog optionally filtering by
  # service_type ('monitoring' by default), endpoint_type ('publicURL' by default) and/or
  # region_name (none by default).
  url: {args.monasca_url}
  service_type: {args.service_type}
  endpoint_type: {args.endpoint_type}
  region_name: {args.region_name}
  # Keystone Username
  username: {args.username}
  # Keystone Password
  password: "{args.password}"
  # Keystone API URL: URL for the Keystone server to use
  # Example: https://region-a.geo-1.identity.hpcloudsvc.com:35357/v3/
  keystone_url: {args.keystone_url}
  # Domain id to be used to resolve username
  user_domain_id: {args.user_domain_id}
  # Domain name to be used to resolve username
  user_domain_name: {args.user_domain_name}
  # Project name to be used by this agent
  project_name: {args.project_name}
  # Project domain id to be used by this agent
  project_domain_id: {args.project_domain_id}
  # Project domain name to be used by this agent
  project_domain_name: {args.project_domain_name}
  # Project id to be used by this agent
  project_id: {args.project_id}
  # Set whether certificates are used for Keystone
  # *******************************************************************************************
  # **** CAUTION ****: The insecure flag should NOT be set to True in a production environment!
  # *******************************************************************************************
  # If insecure is set to False, a ca_file name must be set to authenticate with Keystone
  insecure: {args.insecure}
  # Name of the ca certs file
  ca_file: {args.ca_file}

  # The following 3 options are for handling buffering and reconnection to the monasca-api
  # If the agent forwarder is consuming too much memory, you may want to set
  # max_measurement_buffer_size to a lower value. If you have a larger system with many agents,
  # you may want to throttle the number of messages sent to the API by setting the
  # backlog_send_rate to a lower number.

  # DEPRECATED - please use max_measurement_buffer_size instead
  # Maximum number of messages (batches of measurements) to buffer when unable to communicate
  # with the monasca-api (-1 means no limit)
  max_buffer_size: {args.max_buffer_size}
  # Maximum number of measurements to buffer when unable to communicate with the monasca-api
  # (-1 means no limit)
  max_measurement_buffer_size: {args.max_measurement_buffer_size}
  # Maximum number of messages to send at one time when communication with the monasca-api is restored
  backlog_send_rate: {args.backlog_send_rate}
  # Maximum batch size of measurements to write to monasca-api, 0 is no limit
  max_batch_size: {args.max_batch_size}

  # Publish extra metrics to the API by adding this number of 'amplifier' dimensions.
  # For load testing purposes only; set to 0 for production use.
  amplifier: {args.amplifier}

Main:
  # Force the hostname to whatever you want.
  hostname: {hostname}

  # Optional dimensions to be sent with every metric from this node
  # They should be in the format name: value
  # Example of dimensions below
  # dimensions:
  #   service: nova
  #   group: group_a
  #   environment: production
  dimensions: {args.dimensions}

  # Set the threshold for accepting points to allow anything
  # with recent_point_threshold seconds
  # Defaults to 30 seconds if no value is provided
  #recent_point_threshold: 30

  # time to wait between collection runs
  check_freq: {args.check_frequency}

  # Number of Collector Threads to run
  num_collector_threads: {args.num_collector_threads}

  # Maximum number of collection cycles where all of the threads in the pool are
  # still running plugins before the collector will exit
  pool_full_max_retries: {args.pool_full_max_retries}

  # Threshold value for warning on collection time of each check (in seconds)
  sub_collection_warn: {args.plugin_collect_time_warn}

  # Collector restart interval (in hours)
  collector_restart_interval: 24

  # Change port the Agent is listening to
  # listen_port: 17123

  # Allow non-local traffic to this Agent
  # This is required when using this Agent as a proxy for other Agents
  # that might not have an internet connection
  # For more information, please see
  # https://github.com/DataDog/dd-agent/wiki/Network-Traffic-and-Proxy-Configuration
  # non_local_traffic: no

  # Submits all metrics to this tenant unless specified by the metric.
  # This is the equivalent of submitting delegated_tenant with all metrics, and when
  # not set will submit metrics to the default tenant of the provided credentials.
  # Used when deploying the agent to systems where the credentials of the monitored
  # tenant are not known.
  # global_delegated_tenant:

Statsd:
  # ========================================================================== #
  # Monasca Statsd configuration                                               #
  # ========================================================================== #
  # Monasca Statsd is a small server that aggregates your custom app metrics.

  # Make sure your client is sending to the same port.
  monasca_statsd_port: {args.monasca_statsd_port}

  # The monasca_statsd flush period.
  monasca_statsd_interval: {args.monasca_statsd_interval}

  # If you want to forward every packet received by the monasca_statsd server
  # to another statsd server, uncomment these lines.
  # WARNING: Make sure that forwarded packets are regular statsd packets and not "monasca_statsd" packets,
  # as your other statsd server might not be able to handle them.
  # monasca_statsd_forward_host: address_of_own_statsd_server
  # monasca_statsd_forward_port: 8125

Logging:
  # ========================================================================== #
  # Logging                                                                    #
  # ========================================================================== #
  log_level: {args.log_level}
  collector_log_file: {args.log_dir}/collector.log
  forwarder_log_file: {args.log_dir}/forwarder.log
  statsd_log_file: {args.log_dir}/statsd.log
  enable_logrotate: {args.enable_logrotate}

  # if syslog is enabled but a host and port are not set, a local domain socket
  # connection will be attempted
  #
  # log_to_syslog: yes
  # syslog_host:
  # syslog_port:
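The `{args.…}` and `{hostname}` placeholders in the template above look like Python `str.format` replacement fields with attribute access, suggesting the setup script renders the template from a parsed-arguments object. A minimal sketch of that rendering, using an invented `SimpleNamespace` in place of the real argument parser and only a tiny excerpt of the template:

```python
from types import SimpleNamespace

# A tiny excerpt of the template above; the real file has many more fields.
TEMPLATE = """Api:
  url: {args.monasca_url}
  username: {args.username}
Main:
  hostname: {hostname}
"""

# Hypothetical values standing in for parsed command-line arguments.
args = SimpleNamespace(monasca_url="http://127.0.0.1:8070/v2.0",
                       username="monasca-agent")

# str.format resolves {args.monasca_url} as attribute access on `args`.
rendered = TEMPLATE.format(args=args, hostname="node-1.example.com")
print(rendered)
```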
bindep.txt
@@ -1,61 +0,0 @@
# This is a cross-platform list tracking distribution packages needed by tests;
# see http://docs.openstack.org/infra/bindep/ for additional information.

build-essential [platform:dpkg]
curl
cyrus-sasl-devel [platform:rpm]
gawk
gettext
language-pack-en [platform:ubuntu]
libcurl-devel [platform:rpm]
libcurl4-gnutls-dev [platform:dpkg]
libevent-dev [platform:dpkg]
libevent-devel [platform:rpm]
libffi-dev [platform:dpkg]
libffi-devel [platform:rpm]
libldap2-dev [platform:dpkg]
libmysqlclient-dev [platform:dpkg]
libpcap-dev [platform:dpkg]
libpcap-devel [platform:rpm]
libpq-dev [platform:dpkg]
librrd-dev [platform:dpkg]
libsasl2-dev [platform:dpkg]
libuuid-devel [platform:rpm]
libxml2-dev [platform:dpkg]
libxml2-devel [platform:rpm]
libxml2-utils [platform:dpkg]
libxslt-devel [platform:rpm]
libxslt1-dev [platform:dpkg]
locales [platform:debian]
mariadb [platform:rpm]
mariadb-devel [platform:rpm]
mariadb-server [platform:rpm]
memcached
mongodb [platform:dpkg !platform:ubuntu-jammy]
mongodb-server [platform:rpm !platform:ubuntu-jammy]
mysql-client [platform:dpkg]
mysql-server [platform:dpkg]
pkg-config [platform:dpkg]
pkgconfig [platform:rpm]
postgresql
postgresql-client [platform:dpkg]
postgresql-devel [platform:rpm]
postgresql-server [platform:rpm]
python-dev [platform:dpkg !platform:ubuntu-jammy]
python-devel [platform:rpm]
python-libvirt [platform:dpkg !platform:ubuntu-focal !platform:ubuntu-jammy]
python-lxml [!platform:ubuntu-jammy]
python3-all-dev [platform:ubuntu !platform:ubuntu-precise]
python3-dev [platform:dpkg]
python3-devel [platform:fedora]
python3-libvirt [platform:ubuntu-focal platform:ubuntu-jammy]
python3.4 [platform:ubuntu-trusty]
python34-devel [platform:centos]
python3.5 [platform:ubuntu-xenial]
redis [platform:rpm]
redis-server [platform:dpkg]
rrdtool-devel [platform:rpm]
uuid-dev [platform:dpkg]
zlib-devel [platform:rpm]
zlib1g-dev [platform:dpkg]
zookeeperd [platform:dpkg]
@@ -1,7 +0,0 @@
init_config:

instances:
  - name: a10_system_check
    a10_device: a10_device_ip
    a10_username: admin
    a10_password: password
@@ -1,69 +0,0 @@
# (C) Copyright 2015 Hewlett Packard Enterprise Development Company LP

instances:
#  - host: localhost
#    port: 7199
#    user: username
#    password: password
#    name: activemq_instance
#    #java_bin_path: /path/to/java #Optional, should be set if the agent cannot find your java executable
#    #trust_store_path: /path/to/trustStore.jks # Optional, should be set if ssl is enabled
#    #trust_store_password: password

# List of metrics to be collected by the integration
# Read http://docs.datadoghq.com/integrations/java/ to learn how to customize it
init_config:
  conf:
    - include:
        Type: Queue
        attribute:
          AverageEnqueueTime:
            alias: activemq.queue.avg_enqueue_time
            metric_type: gauge
          ConsumerCount:
            alias: activemq.queue.consumer_count
            metric_type: gauge
          ProducerCount:
            alias: activemq.queue.producer_count
            metric_type: gauge
          MaxEnqueueTime:
            alias: activemq.queue.max_enqueue_time
            metric_type: gauge
          MinEnqueueTime:
            alias: activemq.queue.min_enqueue_time
            metric_type: gauge
          MemoryPercentUsage:
            alias: activemq.queue.memory_pct
            metric_type: gauge
          QueueSize:
            alias: activemq.queue.size
            metric_type: gauge
          DequeueCount:
            alias: activemq.queue.dequeue_count
            metric_type: counter
          DispatchCount:
            alias: activemq.queue.dispatch_count
            metric_type: counter
          EnqueueCount:
            alias: activemq.queue.enqueue_count
            metric_type: counter
          ExpiredCount:
            alias: activemq.queue.expired_count
            metric_type: counter
          InFlightCount:
            alias: activemq.queue.in_flight_count
            metric_type: counter

    - include:
        Type: Broker
        attribute:
          StorePercentUsage:
            alias: activemq.broker.store_pct
            metric_type: gauge
          TempPercentUsage:
            alias: activemq.broker.temp_pct
            metric_type: gauge
          MemoryPercentUsage:
            alias: activemq.broker.memory_pct
            metric_type: gauge
@@ -1,70 +0,0 @@
# (C) Copyright 2015 Hewlett Packard Enterprise Development Company LP

instances:
#  - host: localhost
#    port: 7199
#    user: username
#    password: password
#    name: activemq_instance
#    #java_bin_path: /path/to/java #Optional, should be set if the agent cannot find your java executable
#    #trust_store_path: /path/to/trustStore.jks # Optional, should be set if ssl is enabled
#    #trust_store_password: password


# List of metrics to be collected by the integration
# Read http://docs.datadoghq.com/integrations/java/ to learn how to customize it
init_config:
  conf:
    - include:
        destinationType: Queue
        attribute:
          AverageEnqueueTime:
            alias: activemq.queue.avg_enqueue_time
            metric_type: gauge
          ConsumerCount:
            alias: activemq.queue.consumer_count
            metric_type: gauge
          ProducerCount:
            alias: activemq.queue.producer_count
            metric_type: gauge
          MaxEnqueueTime:
            alias: activemq.queue.max_enqueue_time
            metric_type: gauge
          MinEnqueueTime:
            alias: activemq.queue.min_enqueue_time
            metric_type: gauge
          MemoryPercentUsage:
            alias: activemq.queue.memory_pct
            metric_type: gauge
          QueueSize:
            alias: activemq.queue.size
            metric_type: gauge
          DequeueCount:
            alias: activemq.queue.dequeue_count
            metric_type: counter
          DispatchCount:
            alias: activemq.queue.dispatch_count
            metric_type: counter
          EnqueueCount:
            alias: activemq.queue.enqueue_count
            metric_type: counter
          ExpiredCount:
            alias: activemq.queue.expired_count
            metric_type: counter
          InFlightCount:
            alias: activemq.queue.in_flight_count
            metric_type: counter

    - include:
        type: Broker
        attribute:
          StorePercentUsage:
            alias: activemq.broker.store_pct
            metric_type: gauge
          TempPercentUsage:
            alias: activemq.broker.temp_pct
            metric_type: gauge
          MemoryPercentUsage:
            alias: activemq.broker.memory_pct
            metric_type: gauge
@@ -1,10 +0,0 @@
# (C) Copyright 2015 Hewlett Packard Enterprise Development Company LP

init_config:

instances:
  - apache_status_url: http://example.com/server-status?auto
    # apache_user: example_user
    # apache_password: example_password
    dimensions:
      dim1: value1
@@ -1,45 +0,0 @@
# (C) Copyright 2015 Hewlett Packard Enterprise Development Company LP

init_config:

instances:
  # The Cacti check requires access to the Cacti DB in MySQL and to the RRD
  # files that contain the metrics tracked in Cacti.
  # In almost all cases, you'll only need one instance pointing to the Cacti
  # database.
  #
  # The `rrd_path` will probably be '/var/lib/cacti/rra' on Ubuntu
  # or '/var/www/html/cacti/rra' on any other machines.
  #
  # The `field_names` is an optional parameter to specify which field_names
  # should be used to determine if a device is a real device. You can leave it
  # commented out as the default values should satisfy your needs.
  # You can run the following query to determine your field names:
  #     SELECT
  #         h.hostname as hostname,
  #         hsc.field_value as device_name,
  #         dt.data_source_path as rrd_path,
  #         hsc.field_name as field_name
  #     FROM data_local dl
  #         JOIN host h on dl.host_id = h.id
  #         JOIN data_template_data dt on dt.local_data_id = dl.id
  #         LEFT JOIN host_snmp_cache hsc on h.id = hsc.host_id
  #             AND dl.snmp_index = hsc.snmp_index
  #     WHERE dt.data_source_path IS NOT NULL
  #     AND dt.data_source_path != ''
  #
  # The `rrd_whitelist` is a path to a text file that has a list of patterns,
  # one per line, that should be fetched. If no whitelist is specified, all
  # metrics will be fetched.
  #
  - mysql_host: localhost
    mysql_user: MYSQL_USER
    mysql_password: MYSQL_PASSWORD
    rrd_path: /path/to/cacti/rra
    #field_names:
    #  - ifName
    #  - dskDevice
    #  - ifIndex
    #rrd_whitelist: /path/to/rrd_whitelist.txt
@@ -1,22 +0,0 @@
# (C) Copyright 2017 Hewlett Packard Enterprise Development LP
init_config:
  # Timeout on cAdvisor requests
  # connection_timeout: 3
  # white_list:
  #   metrics:
  #     cpu.system_time_sec:
  #       dimensions:
  #         workspace_id: 12345uuid
  #         hostname: host_default
  #     net.in_bytes_sec:
  #     mem.used_bytes:
  #       dimensions:
  #         workspace_id: 56789uuid
  #   dimensions:
  #     workspace_id: abcd_uuid

instances:
  # URL of cAdvisor to connect to.
  - cadvisor_url: "http://127.0.0.1:4194"
    # You can also set the plugin to determine cadvisor url from running within a kubernetes container
    # kubernetes_detect_cadvisor: True
@@ -1,71 +0,0 @@
# (C) Copyright 2015 Hewlett Packard Enterprise Development Company LP

instances:
#  - host: localhost
#    port: 7199
#    user: username
#    password: password
#    name: cassandra_instance
#    #trust_store_path: /path/to/trustStore.jks # Optional, should be set if ssl is enabled
#    #trust_store_password: password
#    #java_bin_path: /path/to/java #Optional, should be set if the agent cannot find your java executable


init_config:
  # List of metrics to be collected by the integration
  # Read http://docs.datadoghq.com/integrations/java/ to learn how to customize it
  conf:
    - include:
        domain: org.apache.cassandra.db
        attribute:
          - BloomFilterDiskSpaceUsed
          - BloomFilterFalsePositives
          - BloomFilterFalseRatio
          - Capacity
          - CompressionRatio
          - CompletedTasks
          - ExceptionCount
          - Hits
          - RecentHitRate
          - RowCacheRecentHitRate
          - KeyCacheRecentHitRate
          - LiveDiskSpaceUsed
          - LiveSSTableCount
          - Load
          - MaxRowSize
          - MeanRowSize
          - MemtableColumnsCount
          - MemtableDataSize
          - MemtableSwitchCount
          - MinRowSize
          - ReadCount
          - Requests
          - Size
          - TotalDiskSpaceUsed
          - TotalReadLatencyMicros
          - TotalWriteLatencyMicros
          - UpdateInterval
          - WriteCount
          - PendingTasks
      exclude:
        keyspace: system
        attribute:
          - MinimumCompactionThreshold
          - MaximumCompactionThreshold
          - RowCacheKeysToSave
          - KeyCacheSavePeriodInSeconds
          - RowCacheSavePeriodInSeconds
          - PendingTasks
          - Scores
          - RpcTimeout
    - include:
        domain: org.apache.cassandra.internal
        attribute:
          - ActiveCount
          - CompletedTasks
          - CurrentlyBlockedTasks
          - TotalBlockedTasks
    - include:
        domain: org.apache.cassandra.net
        attribute:
          - TotalTimeouts
@@ -1,21 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

init_config:

instances:
  - cluster_name: ceph
    collect_usage_metrics: True  # Collect cluster usage metrics
    collect_stats_metrics: True  # Collect cluster stats metrics
    collect_mon_metrics: True  # Collect metrics regarding monitors
    collect_osd_metrics: True  # Collect metrics regarding OSDs
    collect_pool_metrics: True  # Collect metrics regarding Pools
@@ -1,27 +0,0 @@
# (C) Copyright 2015 Hewlett Packard Enterprise Development Company LP

init_config:
  # Specify the path to the mk_livestatus socket
  mk_agent_path: /usr/bin/check_mk_agent

  custom:
    # Customize automatic metric names and dimensions for the following items.
    # - mk_item: This is the name (2nd field) returned by check_mk_agent
    #   discard: Exclude the metric from Monasca, True or False (the default)
    #   dimensions: Extra dimensions to include, in {'name': 'value'} format.
    #   metric_name_base: This represents the leftmost part of the metric name
    #                     to use. Status and any performance metrics are
    #                     appended following a dot, so ".status" and
    #                     ".response_time" would be examples

    # - mk_item: sirius-api
    #   discard: false
    #   dimensions: {'component': 'sirius'}
    #   metric_name_base: check_mk.sirius_api
    # - mk_item: eon-api
    #   discard: true

instances:
  # Leave empty (- {}) for check_mk_local to automatically process all
  # configured 'local' checks.
  - {}
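The `metric_name_base` comments above describe a naming scheme in which `.status` and one name per performance metric are appended to the base after a dot. A hypothetical helper (not the plugin's actual code; the item names come from the commented examples above) makes the scheme concrete:

```python
def build_metric_names(mk_item, perf_names, metric_name_base=None):
    """Apply the naming scheme described in the check_mk config comments:
    take metric_name_base (or derive one from the check_mk item name) and
    append ".status" plus one suffix per performance metric."""
    base = metric_name_base or "check_mk." + mk_item.replace("-", "_")
    return [base + ".status"] + [base + "." + p for p in perf_names]

# e.g. the sirius-api example from the comments above:
build_metric_names("sirius-api", ["response_time"],
                   metric_name_base="check_mk.sirius_api")
```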
@@ -1,6 +0,0 @@
# (C) Copyright 2015 Hewlett Packard Enterprise Development Company LP

init_config:

#instances:
#  - server: http://localhost:5984
@@ -1,11 +0,0 @@
# (C) Copyright 2015 Hewlett Packard Enterprise Development Company LP

init_config:

instances:
#  - server: http://localhost:8091
#    user: Administrator
#    password: password
#    timeout: 10 # Optional timeout to http connections to the API (Only supported on python2.6 and above)
#    dimensions:
#      dim1: value1
@@ -1,15 +0,0 @@
# (C) Copyright 2015 Hewlett Packard Enterprise Development Company LP

init_config:
  # process_fs_path: (optional) STRING. It will set up the path of the process
  # filesystem. By default it's set for: /proc directory.
  # Example:
  #
  # process_fs_path: /rootfs/proc
instances:
  # Cpu check only supports one configured instance
  - name: cpu_stats
    # Send additional cpu rollup system metrics (Default = False)
    # These metrics include the following metrics aggregated across all cpus:
    # cpu.total_logical_cores
    #send_rollup_stats: True
@@ -1,8 +0,0 @@
# (C) Copyright 2015 Hewlett Packard Enterprise Development Company LP

init_config:
  # crash_dir: /var/crash

instances:
  # Crash check only supports one configured instance
  - name: crash_stats
@@ -1,22 +0,0 @@
# (C) Copyright 2015-2016 Hewlett Packard Enterprise Development Company LP

init_config:

instances:
  # This config is for the Directory Check which is used to report metrics
  # for the size of a given directory
  #
  # NOTE: This check is NOT currently supported on Windows systems
  #
  # For each instance, the 'directory' parameter is required
  #
  # WARNING: Ensure the user account running the Agent (typically mon-agent)
  # has read access to the monitored directory and files.
  #
  # Instances take the following parameters:
  # "directory" - string, the directory to monitor. Required

  - built_by: Directory
    directory: /path/to/directory_1
  - built_by: Directory
    directory: /path/to/directory_2
@@ -1,31 +0,0 @@
# (C) Copyright 2015,2016 Hewlett Packard Enterprise Development Company LP

init_config:
  # process_fs_path: (optional) STRING. It will set up the path of the process
  # filesystem. By default it's set for: /proc directory.
  # Example:
  #
  # process_fs_path: /rootfs/proc
instances:
  # Disk check only supports one configured instance
  - name: disk_stats
    # Add a mount_point dimension to the disk system metrics (Default = True)
    #use_mount: False

    # Send additional disk i/o system metrics (Default = True)
    #send_io_stats: False

    # Send additional disk rollup system metrics (Default = False)
    # These metrics include the following metrics aggregated across all disks:
    # disk.total_space_mb and disk.total_used_space_mb
    #send_rollup_stats: True

    # Some infrastructures have many constantly changing virtual devices (e.g. folks
    # running constantly churning linux containers) whose metrics aren't
    # interesting. To filter out a particular pattern of devices
    # from the disk collection, configure a regex here:
    # device_blacklist_re: .*\/dev\/mapper\/lxc-box.*

    # For disk metrics it is also possible to ignore entire filesystem types
    ignore_filesystem_types: iso9660,tmpfs,nsfs
    device_blacklist_re: .*freezer_backup_snap.*
@@ -1,20 +0,0 @@
# (C) Copyright 2016 Hewlett Packard Enterprise Development LP

# Warning
# The user running the monasca-agent (usually "mon-agent") must be part of the "docker" group

init_config:
docker_root: /

# Timeout on the Docker API connection. You may have to increase it if you have many containers.
# connection_timeout: 3

# Docker version to use when connecting to the Docker API. Default is auto
# version: "auto"

instances:
# URL of the Docker daemon socket to reach the Docker API. HTTP also works.
- url: "unix://var/run/docker.sock"

  # Set to true if you want the kubernetes namespace and pod name to be set as dimensions
  add_kubernetes_dimensions: True
@@ -1,18 +0,0 @@
# (C) Copyright 2015 Hewlett Packard Enterprise Development Company LP

init_config:

instances:
# The URL where elasticsearch accepts HTTP requests. This will be used to
# fetch statistics from the nodes and information about the cluster health.
#
# If you're using basic authentication with a 3rd party library, for example
# elasticsearch-http-basic, you will need to specify a value for username
# and password for every instance that requires authentication.
#
#- url: http://localhost:9200
#  username: username
#  password: password
#  dimensions:
#    dim1: value1
#    dim2: value2
@@ -1,45 +0,0 @@
# (C) Copyright 2016 Hewlett Packard Enterprise Development Company LP

init_config:

instances:
# This config is for the File Check which is used to report metrics
# for the size of the files in a given directory
#
# NOTE: This check is NOT currently supported on Windows systems
#
# For each instance, both parameters 'directory_name' and 'file_names' are
# required
#
# WARNING: Ensure the user account running the Agent (typically mon-agent)
# has read access to the monitored directory and files.
#
# Instances take the following parameters:
# "directory_name" - string, the directory of the files to monitor.
#                    Required
# "file_names" - list of strings, names of the files to monitor. Required
# "recursive" - boolean, when true and file_name is set to '*' will
#               recursively grab files under the given directory to gather
#               stats on.

# Check the size of all the files under directory_1
# - built_by: Logging
#   directory_name: /path/to/directory_1
#   file_names:
#     - '*'
#   recursive: True

# Check one file under directory_2
# - built_by: Logging
#   directory_name: /path/to/directory_2
#   file_names:
#     - file_name2
#   recursive: False

# Check two or more files under directory_3
# - built_by: Logging
#   directory_name: /path/to/directory_3
#   file_names:
#     - file_name31
#     - file_name32
#   recursive: False
@@ -1,9 +0,0 @@
# (C) Copyright 2015 Hewlett Packard Enterprise Development Company LP

init_config:

instances:
- server: localhost
  port: 4730
  dimensions:
    dim1: value11
@@ -1,12 +0,0 @@
# (C) Copyright 2015 Hewlett Packard Enterprise Development Company LP

init_config:

instances:
# The name of the gunicorn process. For the following gunicorn server ...
#
# gunicorn --name my_web_app my_web_app_config.ini
#
# ... we'd use the name `my_web_app`.
#
# - proc_name: my_web_app
@@ -1,12 +0,0 @@
# (C) Copyright 2015 Hewlett Packard Enterprise Development Company LP

init_config:

instances:
# - username: username
#   password: password
#   url: https://path/to/haproxy
#   status_check: False
#   collect_service_stats_only: True # Only collect the totals for the entire haproxy service
#   collect_aggregates_only: True # Only collect totals for each VIP
#   collect_status_metrics: False
@@ -1,13 +0,0 @@
# (C) Copyright 2015 Hewlett Packard Enterprise Development Company LP

init_config:
# HDFS check does not require any init_config

instances:
# Each instance requires a namenode hostname.
# Port defaults to 8020.

# - namenode: namenode.example.com
#   port: 8020
#   dimensions:
#     dim1: value1
@@ -1,28 +0,0 @@
# (C) Copyright 2015 Hewlett Packard Enterprise Development Company LP

init_config:
# Specify the ssh_port, if not 22 (optional)
# ssh_port: 22

# ssh_timeout is a floating-point number of seconds (optional)
# ssh_timeout: 0.5

# ping_timeout is an integer number of seconds (optional)
# ping_timeout: 1

# alive_test can be either "ssh" for an SSH banner test (port 22)
# or "ping" for an ICMP ping test
instances:
# - name: ssh to host1
#   host_name: host1.domain.net
#   alive_test: ssh
#   dimensions:
#     dim1: value1

# - name: ping host2
#   host_name: host2.domain.net
#   alive_test: ping

# - name: ssh to 192.168.0.221
#   host_name: 192.168.0.221
#   alive_test: ssh
@@ -1,65 +0,0 @@
# (C) Copyright 2015 Hewlett Packard Enterprise Development Company LP

init_config:
# If your service uses keystone for authentication, you can optionally
# specify the information to collect a token to be used in the check.
# This information should follow the same guidelines presented in
# agent.yaml.template
# https://github.com/openstack/monasca-agent/blob/master/agent.yaml.template
# keystone_config:
#   keystone_url: http://endpoint.com/v3/
#   project_name: project
#   username: user
#   password: password

instances:
# - name: Some Service Name
#   url: http://some.url.example.com
#   timeout: 1

# If your service uses basic authentication, you can optionally
# specify a username and password that will be used in the check.
#   username: user
#   password: pass

# If use_keystone=True then Keystone information from `init_config`
# will be used, and if it's not there then Keystone configuration
# from the agent config will be used.
#   use_keystone: True

# The (optional) match_pattern parameter will instruct the check
# to match the HTTP response body against a regular-expression-
# compatible pattern. If the pattern matches, the check will
# return 0 for OK. Otherwise, it will return 1 for an error

#   match_pattern: '.*OK.*OK.*OK.*OK.*OK'

# The (optional) collect_response_time parameter will instruct the
# check to create a metric 'network.http.response_time', tagged with
# the url, reporting the response time in seconds.

#   collect_response_time: true

# The (optional) disable_ssl_validation will instruct the check
# to skip the validation of the SSL certificate of the URL being tested.
# This is mostly useful when checking SSL connections signed with
# certificates that are not themselves signed by a public authority.
# When true, the check logs a warning in collector.log

#   disable_ssl_validation: true

# The (optional) headers parameter allows you to send extra headers
# with the request. This is useful for explicitly specifying the host
# header or perhaps adding headers for authorisation purposes. Note
# that the http client library converts all headers to lowercase.
# This is legal according to RFC2616
# (See: http://tools.ietf.org/html/rfc2616#section-4.2)
# but may be problematic with some HTTP servers
# (See: https://code.google.com/p/httplib2/issues/detail?id=169)

#   headers:
#     Host: alternative.host.example.com
#     X-Auth-Token: SOME-AUTH-TOKEN

#   dimensions:
#     dim1: value1
@@ -1,74 +0,0 @@
# (C) Copyright 2015 Hewlett Packard Enterprise Development Company LP

init_config:

instances:
# - name: Some Service Name
#   url: http://some.url.example.com
#   timeout: 1

# If your service uses basic authentication, you can optionally
# specify a username and password that will be used in the check.
#   username: user
#   password: pass

# If your service uses keystone for authentication, you can optionally
# specify the information to collect a token to be used in the check.
# This information should follow the same guidelines presented in
# agent.yaml.template
# https://github.com/openstack/monasca-agent/blob/master/agent.yaml.template
# If use_keystone=True and keystone_config is not specified, the keystone information
# from the agent config will be used.
#   use_keystone: True
#   keystone_config:
#     keystone_url: http://endpoint.com/v3/
#     username: user
#     password: password

# The (optional) collect_response_time parameter will instruct the
# check to create a metric 'network.http.response_time', tagged with
# the url, reporting the response time in seconds.

#   collect_response_time: true

# The (optional) disable_ssl_validation will instruct the check
# to skip the validation of the SSL certificate of the URL being tested.
# This is mostly useful when checking SSL connections signed with
# certificates that are not themselves signed by a public authority.
# When true, the check logs a warning in collector.log

#   disable_ssl_validation: true

# The (optional) headers parameter allows you to send extra headers
# with the request. This is useful for explicitly specifying the host
# header or perhaps adding headers for authorisation purposes. Note
# that the http client library converts all headers to lowercase.
# This is legal according to RFC2616
# (See: http://tools.ietf.org/html/rfc2616#section-4.2)
# but may be problematic with some HTTP servers
# (See: https://code.google.com/p/httplib2/issues/detail?id=169)

#   headers:
#     Host: alternative.host.example.com
#     X-Auth-Token: SOME-AUTH-TOKEN

#   dimensions:
#     dim1: value1

# To select which metrics to record, create a whitelist. Each entry in
# the whitelist should include the name you want to give the metric,
# the path to the metric value in the json (as a series of keys
# separated by '/'), and the type of recording to use (counter, gauge,
# rate, histogram, set). See the Plugins documentation about
# http_metrics for more information about the different types.

#   whitelist:
#     - name: jvm.memory.total.used
#       path: gauges/jvm.memory.total.used/value
#       type: gauge
#     - name: metrics.published
#       path: meters/monasca.api.app.MetricService.metrics.published/count
#       type: rate
#     - name: raw-sql.time.avg
#       path: timers/org.skife.jdbi.v2.DBI.raw-sql/mean
#       type: gauge
@@ -1,35 +0,0 @@
# (C) Copyright 2015 Hewlett Packard Enterprise Development Company LP

init_config:


instances:
# By default, this check will run against a single instance - the current
# machine that the Agent is running on. It will check the WMI performance
# counters for IIS on that machine.
#
# If you want to check other remote machines as well, you can add one
# instance per host. Note: If you also want to check the counters on the
# current machine, you will have to create an instance with empty params.
#
# The `sites` parameter allows you to specify a list of sites you want to
# read metrics from. With sites specified, metrics will be tagged with the
# site name. If you don't define any sites, the check will pull the
# aggregate values across all sites.
#
# Here's an example of configuration that would check the current machine
# and a remote machine called MYREMOTESERVER. For the remote host we are
# only pulling metrics from the default site.
#
#- host: . # "." means the current host
#  dimensions:
#    app_name: myapp
#
#- host: MYREMOTESERVER
#  username: MYREMOTESERVER\fred
#  password: mysecretpassword
#  dimensions:
#    app_name: myapp
#    region: east
#  sites:
#    - Default Web Site
@@ -1,7 +0,0 @@
# (C) Copyright 2015 Hewlett Packard Enterprise Development Company LP

init_config:

instances:
- name: default
  jenkins_home: /var/lib/jenkins
@@ -1,35 +0,0 @@
# (C) Copyright 2015 Hewlett Packard Enterprise Development Company LP

init_config:

instances:
# - host: localhost
#   port: 7199
#   name: jmx_instance
#   user: username
#   password: password
#   #java_bin_path: /path/to/java #Optional, should be set if the agent cannot find your java executable
#   #trust_store_path: /path/to/trustStore.jks # Optional, should be set if ssl is enabled
#   #trust_store_password: password
#   dimensions:
#     env: stage
#     newTag: test

#
#   # List of metrics to be collected by the integration
#   # Read http://docs.datadoghq.com/integrations/java/ to learn how to customize it
#   conf:
#     - include:
#         domain: my_domain
#         bean: my_bean
#         attribute:
#           attribute1:
#             metric_type: counter
#             alias: jmx.my_metric_name
#           attribute2:
#             metric_type: gauge
#             alias: jmx.my2ndattribute
#     - include:
#         domain: 2nd_domain
#     - exclude:
#         bean: excluded_bean
@@ -1,11 +0,0 @@
init_config: null
instances:
- built_by: JsonPlugin
  metrics_dir: /var/cache/monasca_json_plugin
  name: /var/cache/monasca_json_plugin
- built_by: Me
  metrics_file: /var/cache/my_dir/my_metrics.json
  name: Mine1
- built_by: Me
  metrics_file: /dev/shm/more_metrics.json
  name: Mine2
@@ -1,13 +0,0 @@
# (C) Copyright 2015 Hewlett Packard Enterprise Development Company LP

init_config:

instances:
# - kafka_connect_str: localhost:19092
#   # It is possible to get the lag for each partition, the default is to sum lag for all partitions
#   per_partition: False
#   # Full includes additional metrics with actual offsets per partition, can't be set if per_partition is False
#   full_output: False
#   consumer_groups:
#     my_consumer:
#       - my_topic # the plugin will automatically collect from all existing partitions
@@ -1,17 +0,0 @@
# Copyright 2016 FUJITSU LIMITED

init_config:
# URL that the check uses to access Kibana metrics
url: http://192.168.10.6:5601/api/status
instances:
- built_by: Kibana
  # List of metrics the check should collect
  # You can disable collecting of some metrics
  # by removing them from the list
  metrics:
    - heap_total
    - heap_used
    - load
    - requests_per_second
    - response_time_avg
    - response_time_max
@@ -1,30 +0,0 @@
# (C) Copyright 2017 Hewlett Packard Enterprise Development LP

init_config:
# Timeout on GET requests endpoints
connection_timeout: 3
# Report container metrics. Defaults to false.
report_container_metrics: False
# Report container usage percent if running in kubernetes. Defaults to true.
report_container_mem_percent: True
instances:
# There are two options for getting the host that we use to connect to kubelet/cadvisor with. Either manually setting
# it via host or by setting derive_host to True. We derive the host by first using the kubernetes environment
# variables to get the api url (assuming we are running in a kubernetes container). Next we use the container's pod
# name and namespace (passed in as environment variables to the agent's container - see kubernetes example yaml file)
# with the api url to hit the api to get the pod's metadata, including the host it is running on. That is the host we
# use.
# NOTE - this plugin only supports one instance.
- host: "127.0.0.1"

  # Derive the host by querying the Kubernetes api for the pod's (pod the agent is running in) metadata.
  # derive_host: False

  # Port of cadvisor to connect to, defaults to 4194
  # cadvisor_port: 4194

  # Port of kubelet to connect to, defaults to 10255
  # kublet_port: 10255

  # Set of kubernetes labels that we search for in the kubernetes metadata to set as dimensions
  # kubernetes_labels: ['k8s-app', 'version']
@@ -1,18 +0,0 @@
# (C) Copyright 2017 Hewlett Packard Enterprise Development LP

init_config:
# Timeout on GET requests endpoints
connection_timeout: 3
instances:
# There are two options for connecting to the api. Either by passing in the host the api is running on and the port
# it is bound to, or by deriving the api url from the kubernetes environment variables (if the agent is running in
# a kubernetes container). You must set one or the other.
- host: "127.0.0.1"
  # Port of kubernetes master to connect to, defaults to 8080
  # kubernetes_api_port: 8080

  # Derive kubernetes api url from kubernetes environmental variables.
  # derive_api_url: True

  # Set of kubernetes labels that we search for in the kubernetes metadata to set as dimensions
  # kubernetes_labels: ['k8s-app', 'version']
@@ -1,17 +0,0 @@
# (C) Copyright 2015 Hewlett Packard Enterprise Development Company LP

init_config:
# The Kyoto Tycoon check does not require any init_config

instances:
# Add one or more instances, which accept report_url,
# name, and optionally dimensions. The report URL should
# be a URL to the Kyoto Tycoon "report" RPC endpoint.
#
# Complete example:
#
# - name: my_kyoto_instance
#   report_url: http://localhost:1978/rpc/report
#   dimensions:
#     foo: bar
#     baz: bat
@@ -1,65 +0,0 @@
# (C) Copyright 2015 Hewlett Packard Enterprise Development Company LP

init_config:
# These are Nova credentials, [keystone_authtoken] in /etc/nova/nova.conf
password: pass
project_name: service
username: nova
auth_url: 'http://192.168.10.5/identity'
region_name: 'region1'
# Options to specify endpoint type, default to 'publicURL', other choices:
# 'internalURL' and 'adminURL'
endpoint_type: 'publicURL'
# Location of temporary files maintained by the plugin. Ramdisk preferred.
cache_dir: /dev/shm
# How long to wait before querying Nova for instance updates? (seconds)
# Note that there is an automatic refresh if new instances are encountered.
nova_refresh: 14400
# How long before gathering data on newly-provisioned instances? (seconds)
vm_probation: 300
# Command line to ping VMs, set to False (or simply remove) to disable.
# The word 'NAMESPACE' is automatically replaced by the appropriate network
# namespace for each VM being monitored.
ping_check: sudo -n /sbin/ip exec NAMESPACE /usr/bin/fping -n -c1 -t250 -q
# Number of ping command processes that will be run concurrently.
max_ping_concurrency: 8
# Suppress all per-VM metrics aside from host_alive_status, including all
# I/O, network, memory, ping, and CPU metrics.
alive_only: false
# List of instance metadata keys to be sent as dimensions
# By default 'scale_group' metadata is used here for supporting auto
# scaling in Heat.
metadata:
- scale_group
# Include scale group dimension for customer metrics.
customer_metadata:
- scale_group
# Submit network metrics in bits rather than bytes.
network_use_bits: false
# Configuration parameters used to control which metrics are reported
# by libvirt plugin.
vm_cpu_check_enable: True
vm_disks_check_enable: True
vm_network_check_enable: True
vm_ping_check_enable: True
vm_extended_disks_check_enable: False
# Specify a regular expression with which to match nova host aggregate names.
# If this hypervisor is a member of a host aggregate matching this regular
# expression, an additional dimension of host_aggregate will be published
# for the operations metrics (with a value of the host aggregate name).
host_aggregate_re: None
# Interval between disk metric collections in seconds. If this is less than
# the agent collection period, it will be ignored.
disk_collection_period: 0
# Interval between vnic metric collections in seconds. If this is less than
# the agent collection period, it will be ignored.
vnic_collection_period: 0
# Libvirt domain type. ('kvm', 'lxc', 'qemu', 'uml', 'xen')
libvirt_type: kvm
# Override the default libvirt URI (which is dependent on libvirt_type).
# e.g. 'qemu+tcp://hostname/system'
libvirt_uri: qemu:///system
instances:
# Instances are automatically detected through queries to the Nova API,
# and therefore do not need to be listed here, so this remains empty.
- {}
@@ -1,20 +0,0 @@
# (C) Copyright 2015 Hewlett Packard Enterprise Development Company LP

init_config:

instances:
# For every instance, you have a `lighttpd_status_url` and (optionally)
# a list of dimensions.

- lighttpd_status_url: http://example.com/server-status?auto
  dimensions:
    instance: foo

- lighttpd_status_url: http://example2.com:1234/server-status?auto
  dimensions:
    instance: bar

# Lighttpd2 status url
- lighttpd_status_url: http://example.com/server-status?format=plain
  dimensions:
    instance: l2
@@ -1,14 +0,0 @@
# (C) Copyright 2015 Hewlett Packard Enterprise Development Company LP

init_config:
# process_fs_path: (optional) STRING. Sets the path of the process
# filesystem. By default it is set to the /proc directory.
# Example:
#
# process_fs_path: /rootfs/proc
instances:
# Load check only supports one configured instance
- name: load_stats
  # (optional) Additional dimensions to add to each metric
  # dimensions:
  #   dim1: value1
@@ -1,21 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

init_config:

instances:
- container: all
  cpu: True
  mem: True
  swap: True
  blkio: True
  net: True
@@ -1,9 +0,0 @@
# (C) Copyright 2015 Hewlett Packard Enterprise Development Company LP

init_config:

instances:
# - url: localhost # url used to connect to the memcached instance
#   port: 11211 # If this line is not present, port will default to 11211
#   dimensions:
#     dim1: value1
@@ -1,11 +0,0 @@
# (C) Copyright 2015 Hewlett Packard Enterprise Development Company LP

init_config:
# process_fs_path: (optional) STRING. Sets the path of the process
# filesystem. By default it is set to the /proc directory.
# Example:
#
# process_fs_path: /rootfs/proc
instances:
# Memory check only supports one configured instance
- name: memory_stats
@@ -1,52 +0,0 @@
# (C) Copyright 2015 Hewlett Packard Enterprise Development Company LP

init_config:
# Specify the path to the mk_livestatus socket
socket_path: /var/lib/icinga/rw/live

instances:
# List the service and/or host checks to query.
# Service checks:
#   name          Monasca metric name to use   Required
#   check_type    "service"                    Required
#   display_name  Name of Nagios check         Required
#   host_name     Nagios' reported host_name   Optional
#   dimensions    key:value pairs              Optional
#
# Host checks:
#   name          Monasca metric name to use   Required
#   check_type    "host"                       Required
#   host_name     Nagios' reported host_name   Optional
#   dimensions    key:value pairs              Optional
#
# If 'host_name' is not specified, metrics for all hosts will be reported.

# One service on one host
- name: nagios.check_http_status
  check_type: service
  display_name: HTTP
  host_name: web01.example.net

# One service on all hosts
- name: nagios.process_count_status
  check_type: service
  display_name: Total Processes

# One service on all hosts with extra dimensions
- name: nagios.check_http_status
  check_type: service
  display_name: HTTP
  dimensions: { 'group': 'webservers' }

# All services on all hosts
# These will be assigned metric names automatically, based on display_name
- check_type: service

# One host
- name: nagios.host_status
  check_type: host
  host_name: web01.example.net

# All hosts
- name: nagios.host_status
  check_type: host
@@ -1,17 +0,0 @@
# (C) Copyright 2015 Hewlett Packard Enterprise Development Company LP

init_config:

instances:
# - server: mongodb://localhost:27017
#   dimensions:
#     dim1: value1
#
#   Optional SSL parameters, see https://github.com/mongodb/mongo-python-driver/blob/2.6.3/pymongo/mongo_client.py#L193-L212
#   for more details
#
#   ssl: False # Optional (default to False)
#   ssl_keyfile: # Path to the private keyfile used to identify the local
#   ssl_certfile: # Path to the certificate file used to identify the local connection against mongod.
#   ssl_cert_reqs: # Specifies whether a certificate is required from the other side of the connection, and whether it will be validated if provided.
#   ssl_ca_certs: # Path to the ca_certs file
@@ -1,16 +0,0 @@
# (C) Copyright 2015 Hewlett Packard Enterprise Development Company LP

init_config:

instances:
# - server: localhost
#   user: my_username
#   pass: my_password
#   port: 3306 # Optional
#   sock: /path/to/sock # Connect via Unix Socket
#   defaults_file: my.cnf # Alternate configuration mechanism
#   dimensions: # Optional
#     dim1: value1
#   options: # Optional
#     replication: 0
#     galera_cluster: 1
@@ -1,25 +0,0 @@
# (C) Copyright 2015 Hewlett Packard Enterprise Development Company LP

init_config:
# Directories where Nagios checks (scripts, programs) may live (optional)
# check_path: /usr/lib/nagios/plugins:/usr/local/bin/nagios

# Where to store last-run timestamps for each check (required)
# temp_file_path: /dev/shm/

# List a check name under 'name' and the full command-line
# under 'check_command'. If the command exists in 'check_path' above,
# it is not necessary to specify the full path.

instances:
# - name: nagios.load
#   check_command: check_load -r -w 2,1.5,1 -c 10,5,4

# - name: nagios.disk
#   check_command: check_disk -w 15\% -c 5\% -A -i /srv/node
#   check_interval: 300

# - name: nagios.swap
#   check_command: /usr/lib/nagios/plugins/check_swap -w 50\% -c 10\%
#   check_interval: 120
#   dimensions: { 'group': 'memory' }
@@ -1,13 +0,0 @@
# (C) Copyright 2015 Hewlett Packard Enterprise Development Company LP

init_config:

instances:
# Network check only supports one configured instance
- name: network_stats
  #excluded_interfaces:
  #  - lo
  #  - lo0
  # Optionally completely ignore any network interface
  # matching the given regex:
  excluded_interface_re: lo.*|vnet.*|tun.*|ovs.*|br.*|tap.*|qbr.*|qvb.*|qvo.*
@@ -1,14 +0,0 @@
# (C) Copyright 2015 Hewlett Packard Enterprise Development Company LP

init_config:

instances:
# For every instance, you have an `nginx_status_url` and (optionally) dimensions

- nginx_status_url: http://example.com/nginx_status/
  dimensions:
    dim1: value1

- nginx_status_url: http://example2.com:1234/nginx_status/
  dimensions:
    dim1: value1
@@ -1,14 +0,0 @@
# (C) Copyright 2015 Hewlett Packard Enterprise Development Company LP

init_config:

instances:
# All params are optional
- host: pool.ntp.org
  # Optional params:
  #
  # port: ntp
  # version: 3
  # timeout: 5
  # dimensions:
  #   dim1: value1
@@ -1,49 +0,0 @@
# (C) Copyright 2016 Hewlett Packard Enterprise Development LP
init_config:
    # These are Neutron credentials, [keystone_authtoken] in /etc/neutron/neutron.conf
    password: password
    project_name: services
    username: neutron
    auth_url: 'http://192.168.10.5/identity'
    # Number of seconds to wait before updating the neutron router cache file.
    neutron_refresh: 14400
    # Options to specify endpoint type, default to 'publicURL', other choices:
    # 'internalURL' and 'adminURL'
    endpoint_type: 'publicURL'
    # The region name in /etc/neutron/neutron.conf
    region_name: 'region1'
    # Location of temporary files maintained by the plugin. Ramdisk preferred.
    cache_dir: /dev/shm
    # Specifies that network metrics are reported in bits.
    network_use_bits: false
    # If set, will submit raw counters from ovs-vsctl command output for the given
    # network interface
    use_absolute_metrics: true
    # List of router metadata keys to be sent as dimensions when cross posting
    # metrics to the infrastructure project.
    metadata:
      - tenant_name
    # Installations that don't allow usage of sudo should copy the `ovs-vsctl`
    # command to another location and use the `setcap` command to allow the
    # monasca-agent to run that command. The new location of the `ovs-vsctl`
    # command should be what is set in the config file for `ovs_cmd`.
    ovs_cmd: 'sudo /usr/bin/ovs-vsctl'
    # Regular expression for the interfaces type
    included_interface_re: qg.*|vhu.*|sg.*
    # If set, will submit the rate metrics derived from ovs-vsctl command
    # output for the given network interface.
    use_rate_metrics: true
    # If set, will submit the health related metrics from ovs-vsctl command
    # output for the given network interface.
    use_health_metrics: true
    # If specified, the collector will skip calling the plugin until the specified
    # number of seconds has passed.
    # NOTE: Collection is dictated by the default collection frequency, therefore
    # collect_period should be evenly divisible by the default collection frequency;
    # otherwise the plugin will be called on the collection run after the specified time.
    collect_period: 300

instances:
    # Instances are not used and should be empty in `ovs.yaml` because the
    # plugin runs against all routers hosted on the node at once.
    - {}
@@ -1,20 +0,0 @@
# (C) Copyright 2015 Hewlett Packard Enterprise Development Company LP

# The user running monasca-agent must have passwordless sudo access for the find
# command to run the postfix check. Here's an example:
#
# example /etc/sudoers entry:
#   monasca-agent ALL=(ALL) NOPASSWD:/usr/bin/find
#

init_config:

instances:
    - name: /var/spool/postfix
      directory: /var/spool/postfix
      queues:
        - incoming
        - active
        - deferred
      # dimensions:
      #   dim1: value1
@@ -1,23 +0,0 @@
# (C) Copyright 2015 Hewlett Packard Enterprise Development Company LP

init_config:

instances:
    # - host: localhost
    #   port: 5432
    #   username: my_username
    #   password: my_password
    #   dbname: db_name
    #   dimensions:
    #     dim1: value1

    # Custom-metrics section

    # You can now track per-relation (table) metrics.
    # You need to specify the list. Each relation
    # generates a lot of metrics (10 + 10 per index),
    # so you want to only use the ones you really care about.

    # relations:
    #   - my_table
    #   - my_other_table
@@ -1,46 +0,0 @@
# (C) Copyright 2015 Hewlett Packard Enterprise Development Company LP

init_config:
    # process_fs_path: (optional) STRING. It sets the path of the process
    # filesystem. By default it is set to the /proc directory.
    # Example:
    #
    process_fs_path: /rootfs/proc

instances:
    # - name: (required) STRING. It will be used to uniquely identify your metrics as they will be tagged with this name
    #   detailed: (optional) Boolean. Defaults to False; if set, detailed metrics are pulled for the process. When false,
    #   only the process.pid_count metric is returned.
    #   search_string: (required if username is not set) LIST OF STRINGS. If one of the elements in the list matches,
    #   return the counter of all the processes that contain the string
    #   username: (required if search_string is not set) STRING. Will grab all processes owned by this user.
    #   exact_match: (optional) Boolean. Defaults to True; if you want to look for an arbitrary
    #   string, use exact_match: False
    #
    # Examples:
    #

    - name: ssh
      search_string: ['ssh', 'sshd']
      detailed: false

    - name: postgres
      search_string: ['postgres']
      detailed: true

    - name: All
      search_string: ['All']
      detailed: false

    - name: nodeserver
      search_string: ['node server.js']
      detailed: false

    - name: monasca_agent
      username: mon-agent
      detailed: true

    # - name: python
    #   process: ['python']
    # - name: node
    #   process: ['node']
@@ -1,20 +0,0 @@
# (C) Copyright 2017 Hewlett Packard Enterprise Development LP

init_config:
  timeout: 3
  auto_detect_endpoints: True
  # Detection method can be either service or pod. Default is pod
  detect_method: "pod"

instances:
  # If configuring each metric_endpoint
  - metric_endpoint: "http://127.0.0.1:8000"
    # Dimensions to add to every metric coming out of the plugin
    default_dimensions:
      app: my_app

  - metric_endpoint: "http://127.0.0.1:9000"

  # Kubernetes labels to match on when using auto detection in a Kubernetes environment.
  # There can only be one instance when auto_detect_endpoints is set to true
  - kubernetes_labels: ['app']
@@ -1,68 +0,0 @@
# (C) Copyright 2015 Hewlett Packard Enterprise Development Company LP

init_config:

instances:
  # NOTE: The rabbitmq management plugin must be enabled for this check to work!
  # To enable the management plugin:
  #   sudo rabbitmq-plugins enable rabbitmq_management
  # OUTPUT:
  #   The following plugins have been enabled:
  #     mochiweb
  #     webmachine
  #     rabbitmq_web_dispatch
  #     amqp_client
  #     rabbitmq_management_agent
  #     rabbitmq_management
  #   Plugin configuration has changed. Restart RabbitMQ for changes to take effect.
  # To restart the rabbitmq-server:
  #   sudo service rabbitmq-server restart
  # OUTPUT:
  #   * Restarting message broker rabbitmq-server
  #
  # For every instance a 'rabbitmq_api_url' must be provided, pointing to the api
  # url of the RabbitMQ Management Plugin (http://www.rabbitmq.com/management.html)
  # optional: 'rabbitmq_user' (default: guest) and 'rabbitmq_pass' (default: guest)
  #
  # If you have fewer than 5 queues, you don't have to set the queues parameter;
  # all queue metrics will be collected.
  # If you have more than 5 queues, you can set 5 queue names. Metrics will be collected
  # for these queues. For the other queues, an aggregate will be calculated.
  #
  # If you have fewer than 3 nodes, you don't have to set the nodes parameter;
  # all node metrics will be collected.
  # If you have more than 3 nodes, you can set 3 node names. Metrics will be collected
  # for these nodes. For the other nodes, an aggregate will be calculated.
  #
  # Warning: aggregates are calculated on the first 100 queues/nodes.
  #
  # The whitelist limits the collected metrics based on their path. The whitelist shown
  # includes all the metrics collected by default if no list is specified. If any section
  # is not specified, the defaults will be collected for that section (i.e. if the key 'node'
  # is not present, the 'node' metrics shown in this example will be collected)


  - rabbitmq_api_url: http://localhost:15672/api/
    rabbitmq_user: guest
    rabbitmq_pass: guest
    nodes:
      - rabbit@localhost
      - rabbit2@domain
    queues:
      - queue1
      - queue2
    whitelist:
      queues:
        - message_stats/deliver_details/rate
        - message_stats/publish_details/rate
        - message_stats/redeliver_details/rate
      exchanges:
        - message_stats/publish_out
        - message_stats/publish_out_details/rate
        - message_stats/publish_in
        - message_stats/publish_in_details/rate
      nodes:
        - fd_used
        - mem_used
        - run_queue
        - sockets_used
@@ -1,11 +0,0 @@
# (C) Copyright 2015 Hewlett Packard Enterprise Development Company LP

init_config:

instances:
    # - host: localhost
    #   port: 6379
    #   unix_socket_path: /var/run/redis/redis.sock # optional, can be used in lieu of host/port
    #   password: mypassword
    #   dimensions:
    #     dim1: value1
@@ -1,6 +0,0 @@
# (C) Copyright 2015 Hewlett Packard Enterprise Development Company LP

init_config:

instances:
    # - url: http://127.0.0.1:8098/stats
@@ -1,12 +0,0 @@
# Copyright (c) 2016 NetApp, Inc.
init_config:

instances:
    # Each cluster can be monitored with a separate instance.
    - name: rack_d_cluster
      # Cluster admin with reporting permissions.
      username: monasca_admin
      # Cluster admin password
      password: secret_password
      # Cluster MVIP address, must be reachable.
      mvip: 192.168.1.1
@@ -1,86 +0,0 @@
# (C) Copyright 2015 Hewlett Packard Enterprise Development Company LP

instances:
    # - host: localhost
    #   port: 7199
    #   user: username
    #   password: password
    #   name: solr_instance
    #   #java_bin_path: /path/to/java # Optional, should be set if the agent cannot find your java executable
    #   #trust_store_path: /path/to/trustStore.jks # Optional, should be set if ssl is enabled
    #   #trust_store_password: password
    #   dimensions:
    #     env: stage
    #     newTag: test


# List of metrics to be collected by the integration
# Read http://docs.datadoghq.com/integrations/java/ to learn how to customize it
init_config:
  conf:
    - include:
        type: searcher
        attribute:
          maxDoc:
            alias: solr.searcher.maxdoc
            metric_type: gauge
          numDocs:
            alias: solr.searcher.numdocs
            metric_type: gauge
          warmupTime:
            alias: solr.searcher.warmup
            metric_type: gauge
    - include:
        id: org.apache.solr.search.FastLRUCache
        attribute:
          cumulative_lookups:
            alias: solr.cache.lookups
            metric_type: counter
          cumulative_hits:
            alias: solr.cache.hits
            metric_type: counter
          cumulative_inserts:
            alias: solr.cache.inserts
            metric_type: counter
          cumulative_evictions:
            alias: solr.cache.evictions
            metric_type: counter
    - include:
        id: org.apache.solr.search.LRUCache
        attribute:
          cumulative_lookups:
            alias: solr.cache.lookups
            metric_type: counter
          cumulative_hits:
            alias: solr.cache.hits
            metric_type: counter
          cumulative_inserts:
            alias: solr.cache.inserts
            metric_type: counter
          cumulative_evictions:
            alias: solr.cache.evictions
            metric_type: counter
    - include:
        id: org.apache.solr.handler.component.SearchHandler
        attribute:
          errors:
            alias: solr.search_handler.errors
            metric_type: counter
          requests:
            alias: solr.search_handler.requests
            metric_type: counter
          timeouts:
            alias: solr.search_handler.timeouts
            metric_type: counter
          totalTime:
            alias: solr.search_handler.time
            metric_type: counter
          avgTimePerRequest:
            alias: solr.search_handler.avg_time_per_req
            metric_type: gauge
          avgRequestsPerSecond:
            alias: solr.search_handler.avg_requests_per_sec
            metric_type: gauge
@@ -1,48 +0,0 @@
# (C) Copyright 2015 Hewlett Packard Enterprise Development Company LP

init_config:
    #
    # By default, we only capture *some* of the metrics available in the
    # `sys.dm_os_performance_counters` table. You can easily add additional
    # metrics by following the custom_metrics structure shown below.
    #
    # To connect to SQL Server, enable SQL Authentication and specify a
    # username and password below. If you do not specify a username
    # or password, we will connect using integrated authentication.
    #
    #custom_metrics:
    #
    # This is a basic custom metric. There is no instance associated with
    # this counter.
    # Options for type are: gauge, rate, histogram
    #
    # - name: sqlserver.clr.execution
    #   type: gauge
    #   counter_name: CLR Execution
    #
    # This counter has multiple instances associated with it and we're
    # choosing to only fetch the '_Total' instance.
    #
    # - name: sqlserver.exec.in_progress
    #   type: gauge
    #   counter_name: OLEDB calls
    #   instance_name: Cumulative execution time (ms) per second
    #
    # This counter has multiple instances associated with it and we want
    # every instance available. We'll use the special case ALL instance,
    # which *requires* a value for "tag_by". In this case, we'll get metrics
    # tagged as "db:mydb1", "db:mydb2".
    #
    # - name: sqlserver.db.commit_table_entries
    #   type: gauge
    #   counter_name: Log Flushes/sec
    #   instance_name: ALL
    #   tag_by: db

instances:
    - host: HOST,PORT
      username: my_username
      password: my_password
      database: my_database # Optional, defaults to "master"
      dimensions:
        dim1: value1
@@ -1,66 +0,0 @@
# (C) Copyright 2016 Hewlett Packard Enterprise Development Company LP
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

#
# There are two ways to get started with the supervisord check.
#
# You can configure inet_http_server in /etc/supervisord.conf. Below is an
# example inet_http_server configuration:
#
#   [inet_http_server]
#   port:localhost:9001
#   username:user # optional
#   password:pass # optional
#
# OR, you can use the supervisorctl socket to communicate with supervisor.
# If supervisor is running as root, make sure the chmod property is set
# to a permission accessible to non-root users. See the example below:
#
#   [supervisorctl]
#   serverurl=unix:///var/run//supervisor.sock
#
#   [unix_http_server]
#   file=/var/run/supervisor.sock
#   chmod=775
#
# Reload supervisor, specify the inet or unix socket server information
# in this yaml file along with an optional list of the processes you want
# to monitor per instance, and you're good to go!
#
# See http://supervisord.org/configuration.html for more information on
# configuring supervisord sockets and inet http servers.
#

init_config:

instances:
  # - name: server0             # Required. An arbitrary name to identify the supervisord server
  #   host: localhost           # Optional. Defaults to localhost. The host where supervisord server is running
  #   port: 9001                # Optional. Defaults to 9001. The port number.
  #   user: user                # Optional. Required only if a username is configured.
  #   pass: pass                # Optional. Required only if a password is configured.
  #   proc_regex:               # Optional. Regex pattern[s] matching the names of processes to monitor
  #     - 'myprocess-\d\d$'
  #   proc_names:               # Optional. The processes to monitor within this supervisord instance.
  #     - apache2               # If not specified, the check will monitor all processes.
  #     - webapp
  #     - java
  #   proc_uptime_check: False  # Optional. Defaults to True.
  #   proc_details_check: False # Optional. Defaults to True.
  # - name: server1
  #   host: localhost
  #   port: 9002
  # - name: server2
  #   socket: unix:///var/run//supervisor.sock
  #   host: http://127.0.0.1    # Optional. Defaults to http://127.0.0.1
@@ -1,7 +0,0 @@
# (C) Copyright 2015 Hewlett Packard Enterprise Development Company LP

init_config:

instances:
    # Run swift checks once for this definition
    - name: swift_stats
@@ -1,24 +0,0 @@
init_config:
    collect_period: 300

instances:
    - name: policy_0 ring
      ring: /etc/swift/object.ring.gz
      devices: /srv/node
      granularity: server
      #granularity: device
    - name: policy_1 ring
      ring: /etc/swift/object_1.ring.gz
      devices: /srv/node
      granularity: server
      #granularity: device
    - name: account ring
      ring: /etc/swift/account.ring.gz
      devices: /srv/node
      granularity: server
      #granularity: device
    - name: container ring
      ring: /etc/swift/container.ring.gz
      devices: /srv/node
      granularity: server
      #granularity: device
@@ -1,17 +0,0 @@
init_config:
    collect_period: 300

instances:
    - name: swift-account
      server_type: account
      hostname: localhost
      port: 6012
      timeout: 5
    - name: swift-container
      server_type: container
      hostname: localhost
      port: 6011
    - name: swift-object
      server_type: object
      hostname: localhost
      port: 6010
@@ -1,39 +0,0 @@
# (C) Copyright 2015 Hewlett Packard Enterprise Development Company LP

init_config:

# To notify on every service, set a list of notified users here.
#
# notify:
#   - user1@example.com
#   - pagerduty


instances:
  # - name: My first service
  #   host: myhost.example.com
  #   port: 8080
  #   timeout: 1

  # The (optional) window and threshold parameters allow you to trigger
  # alerts only if the check fails x times within the last y attempts,
  # where x is the threshold and y is the window.

  #   threshold: 3
  #   window: 5

  # The (optional) collect_response_time parameter will instruct the
  # check to create a metric 'network.tcp.response_time', tagged with
  # the url, reporting the response time in seconds.

  #   collect_response_time: true

  # - name: My second service
  #   host: 127.0.0.1
  #   port: 80

  # For service-specific notifications, you can optionally specify
  # a list of users to notify within the service configuration.
  #   notify:
  #     - user2@example.com
  #     - pagerduty
@@ -1,81 +0,0 @@
# (C) Copyright 2015 Hewlett Packard Enterprise Development Company LP

instances:
    # - host: localhost
    #   port: 7199
    #   user: username
    #   password: password
    #   name: tomcat_instance
    #   #java_bin_path: /path/to/java # Optional, should be set if the agent cannot find your java executable
    #   #trust_store_path: /path/to/trustStore.jks # Optional, should be set if ssl is enabled
    #   #trust_store_password: password
    #   dimensions:
    #     env: stage
    #     newTag: test


# List of metrics to be collected by the integration
# Read http://docs.datadoghq.com/integrations/java/ to learn how to customize it
init_config:
  conf:
    - include:
        type: ThreadPool
        attribute:
          maxThreads:
            alias: tomcat.threads.max
            metric_type: gauge
          currentThreadCount:
            alias: tomcat.threads.count
            metric_type: gauge
          currentThreadsBusy:
            alias: tomcat.threads.busy
            metric_type: gauge
    - include:
        type: GlobalRequestProcessor
        attribute:
          bytesSent:
            alias: tomcat.bytes_sent
            metric_type: counter
          bytesReceived:
            alias: tomcat.bytes_rcvd
            metric_type: counter
          errorCount:
            alias: tomcat.error_count
            metric_type: counter
          requestCount:
            alias: tomcat.request_count
            metric_type: counter
          maxTime:
            alias: tomcat.max_time
            metric_type: gauge
          processingTime:
            alias: tomcat.processing_time
            metric_type: counter
    - include:
        j2eeType: Servlet
        attribute:
          processingTime:
            alias: tomcat.servlet.processing_time
            metric_type: counter
          errorCount:
            alias: tomcat.servlet.error_count
            metric_type: counter
          requestCount:
            alias: tomcat.servlet.request_count
            metric_type: counter
    - include:
        type: Cache
        accessCount:
          alias: tomcat.cache.access_count
          metric_type: counter
        hitsCounts:
          alias: tomcat.cache.hits_count
          metric_type: counter
    - include:
        type: JspMonitor
        jspCount:
          alias: tomcat.jsp.count
          metric_type: counter
        jspReloadCount:
          alias: tomcat.jsp.reload_count
          metric_type: counter
@@ -1,15 +0,0 @@
# (C) Copyright 2015 Hewlett Packard Enterprise Development Company LP

init_config:

instances:
    # - varnishstat: (required) String path to the varnishstat binary
    #   name: (optional) String name of the varnish instance. Passed to the -n parameter of varnishstat. Each metric will also carry this name as a dimension.
    #   dimensions: (optional) Additional dimensions to add to each metric
    #
    # Example:
    #
    - varnishstat: /usr/bin/varnishstat
      name: myvarnishinstance
      dimensions:
        instance: production
@@ -1,13 +0,0 @@
init_config: {}

instances:
    # vcenter - hostname or IP address of the vcenter server
    # username & password of the vcenter server
    # clusters - list of clusters to be monitored
    # dimensions:
    #   vcenter: ip
    #   cluster: clustername
    - vcenter_ip: <vcenter-ip>
      username: <user>
      password: <password>
      clusters: <[cluster list]>
@@ -1,27 +0,0 @@
init_config:

    # vcenter credentials. Port defaults to the value shown, while all others default to empty strings
    vcenter_ip: "127.0.0.1"
    vcenter_user: "joe"
    vcenter_password: "12345"
    vcenter_port: 443

    # number of retries to the vcenter api
    retry_count: 3
    poll_interval: 0.5

    # maximum number of objects to return from vcenter, this should be larger than the total number of vms running.
    vcenter_max_objects: 100000

    # Sets the keys to be collected from the annotations on vms. By default this is empty, meaning no
    # annotations will be collected
    allowed_keys:
      - project_id

    # This can be used to change the name of a key when it is added to the metric dimensions. By default, all
    # keys will be recorded without modification
    key_map:
      project_id: tenant_id

instances:
    # this plugin doesn't support instances, this section will be ignored (but is still required for structure)
@@ -1,11 +0,0 @@
# (C) Copyright 2016 Hewlett Packard Enterprise Development Company LP

init_config:

instances:
    # - name: localhost
    #   user: username
    #   password: my_password
    #   node_name: v_mon_node0001
    #   service: monasca # Optional
    #   timeout: 3 # Optional (secs)
@@ -1,47 +0,0 @@
# (C) Copyright 2015 Hewlett Packard Enterprise Development Company LP

init_config:

instances:
    # Each Event Log instance lets you define the type of events you want to
    # match and how to tag those events. You can use the following filters:
    #
    # - type: Warning, Error, Information
    #
    # - log_file: Application, System, Setup, Security
    #
    # - source_name: Any available source name
    #
    # - user: Any valid user name
    #
    # - message_filters: A list of message filters, using % as a wildcard.
    #   See http://msdn.microsoft.com/en-us/library/aa392263(v=vs.85).aspx for
    #   more on the format for LIKE queries.
    #   NOTE: Any filter that starts with "-" will be a NOT query, e.g.: '-%success%'
    #   will search for events without 'success' in the message.
    #
    # Here are a couple of basic examples:
    #
    # The following will capture errors and warnings from SQL Server, which
    # puts all events under the MSSQLSERVER source, and tag them with #sqlserver.
    #
    #- tags:
    #    - sqlserver
    #  type:
    #    - Warning
    #    - Error
    #  log_file:
    #    - Application
    #  source_name:
    #    - MSSQLSERVER
    #  message_filters:
    #    - "%error%"
    #
    # This instance will capture all system errors and tag them with #system.
    #
    #- tags:
    #    - system
    #  type:
    #    - Error
    #  log_file:
    #    - System
@@ -1,75 +0,0 @@
# (C) Copyright 2015 Hewlett Packard Enterprise Development Company LP

init_config:

instances:
    # Each WMI query has 2 required options, `class` and `metrics`, and two
    # optional options, `filters` and `tag_by`.
    #
    # `class` is the name of the WMI class, for example Win32_OperatingSystem
    # or Win32_PerfFormattedData_PerfProc_Process. You can find many of the
    # standard class names on the MSDN docs at
    # http://msdn.microsoft.com/en-us/library/windows/desktop/aa394084.aspx.
    # The Win32_FormattedData_* classes provide many useful performance counters
    # by default.
    #
    #
    # `metrics` is a list of metrics you want to capture, with each item in the
    # list being a set of [WMI property name, metric name, metric type].
    #
    # - The property name is something like `NumberOfUsers` or `ThreadCount`.
    #   The standard properties are also available on the MSDN docs for each
    #   class.
    #
    # - The metric name is the name you want the metric reported under.
    #
    # - The metric type is from the standard choices for all agent checks, such
    #   as gauge, rate, histogram or counter.
    #
    #
    # `filters` is a list of filters on the WMI query you may want. For example,
    # for a process-based WMI class you may want metrics for only certain
    # processes running on your machine, so you could add a filter for each
    # process name. See below for an example of this case.
    #
    #
    # `tag_by` optionally lets you tag each metric with a property from the
    # WMI class you're using. This is only useful when you will have multiple
    # values for your WMI query. The examples below show how you can tag your
    # process metrics with the process name (giving a tag of "name:app_name").


    # Fetch the number of processes and users
    - class: Win32_OperatingSystem
      metrics:
        - [NumberOfProcesses, system.proc.count, gauge]
        - [NumberOfUsers, system.users.count, gauge]

    # Fetch metrics for a single running application, called myapp
    - class: Win32_PerfFormattedData_PerfProc_Process
      metrics:
        - [ThreadCount, my_app.threads.count, gauge]
        - [VirtualBytes, my_app.mem.virtual, gauge]
      filters:
        - Name: myapp

    # Fetch process metrics for a set of processes, tagging by app name.
    - class: Win32_PerfFormattedData_PerfProc_Process
      metrics:
        - [ThreadCount, proc.threads.count, gauge]
        - [VirtualBytes, proc.mem.virtual, gauge]
        - [PercentProcessorTime, proc.cpu_pct, gauge]
      filters:
        - Name: app1
        - Name: app2
        - Name: app3
      tag_by: Name

    # Fetch process metrics for every available process, tagging by app name.
    - class: Win32_PerfFormattedData_PerfProc_Process
      metrics:
        - [IOReadBytesPerSec, proc.io.bytes_read, gauge]
        - [IOWriteBytesPerSec, proc.io.bytes_written, gauge]
      tag_by: Name
@@ -1,11 +0,0 @@
# (C) Copyright 2015 Hewlett Packard Enterprise Development Company LP

init_config:

instances:
    - name: localhost
      host: localhost
      port: 2181
      timeout: 3
      # dimensions:
      #   dim1: value1
@@ -1,260 +0,0 @@
|
||||
===============================
|
||||
Docker images for Monasca Agent
|
||||
===============================
|
||||
There are two separate images for monasca-agent services: collector
|
||||
and forwarder. Collector is working best with services that allow remote access
|
||||
to them and to gather host level metrics collector will need to work together
|
||||
with cAdvisor service.
|
||||
|
||||
|
||||
Building monasca-base image
|
||||
===========================
|
||||
See https://opendev.org/openstack/monasca-common/src/branch/master/docker/README.rst
|
||||
|
||||
|
||||
Building Monasca Agent images
|
||||
=============================
|
||||
|
||||
``build_image.sh`` script in top level folder (``docker/build_image.sh``) is
|
||||
dummy script that will build both collector and forwarder images at once.
|
||||
|
||||
Example:
|
||||
$ ./build_image.sh <repository_version> <upper_constrains_branch> <common_version>
|
||||
|
||||
Everything after ``./build_image.sh`` is optional and by default configured
|
||||
to get versions from ``Dockerfile``. ``./build_image.sh`` also contain more
|
||||
detailed build description.
|
||||
|
||||
|
||||
Environment variables
~~~~~~~~~~~~~~~~~~~~~
============================== =========================== ====================================================
Variable                       Default                     Description
============================== =========================== ====================================================
LOG_LEVEL                      WARN                        Python logging level
MONASCA_URL                    http://monasca:8070/v2.0    Versioned Monasca API URL
FORWARDER_URL                  http://localhost:17123      Monasca Agent Forwarder URL
KEYSTONE_DEFAULTS_ENABLED      true                        Use all OS defaults
OS_AUTH_URL                    http://keystone:35357/v3/   Versioned Keystone URL
OS_USERNAME                    monasca-agent               Agent Keystone username
OS_PASSWORD                    password                    Agent Keystone password
OS_USER_DOMAIN_NAME            Default                     Agent Keystone user domain
OS_PROJECT_NAME                mini-mon                    Agent Keystone project name
OS_PROJECT_DOMAIN_NAME         Default                     Agent Keystone project domain
HOSTNAME_FROM_KUBERNETES       false                       Determine node hostname from Kubernetes
AUTORESTART                    false                       Auto-restart the Monasca Agent Collector
COLLECTOR_RESTART_INTERVAL     24                          Interval in hours to restart Monasca Agent Collector
STAY_ALIVE_ON_FAILURE          false                       If true, container runs 2 hours after tests fail
============================== =========================== ====================================================

Note that additional variables can be specified as well; see
``agent.yaml.j2`` in each image folder for a definitive list.

Note that the auto-restart feature can be enabled if the agent collector
shows unchecked memory growth. The proper restart behavior must be enabled
in Docker or Kubernetes if this feature is turned on.

Environment variables for self monitoring
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Provide these to the Monasca Agent Collector container.

============================== =========== =====================================
Variable                       Default     Description
============================== =========== =====================================
DOCKER                         false       Monitor Docker
CADVISOR                       false       Monitor cAdvisor
KUBERNETES                     false       Monitor Kubernetes
KUBERNETES_API                 false       Monitor Kubernetes API
PROMETHEUS                     false       Monitor Prometheus
MONASCA_MONITORING             false       Monitor services for metrics pipeline
MONASCA_LOG_MONITORING         false       Monitor services for logs pipeline
============================== =========== =====================================

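For example, a ``docker-compose.yml`` service excerpt (service and image names
are illustrative) enabling two of these plugins through the environment:

```yaml
  agent-collector:
    image: monasca/agent-collector:master
    environment:
      LOG_LEVEL: WARN
      KUBERNETES_API: "true"
      PROMETHEUS: "true"
```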
Scripts
~~~~~~~
start.sh
  This start script should perform every step needed before the service can
  be safely started, including the use of wait scripts and the templating of
  configuration files. It can also keep the container alive after the service
  dies, for easier debugging.

health_check.py
  This script is used to check the status of the application.


Docker Plugin
-------------

This plugin is enabled when ``DOCKER=true``. It has the following options:

* ``DOCKER_ROOT``: The mounted host rootfs volume. Default: ``/host``
* ``DOCKER_SOCKET``: The mounted Docker socket. Default: ``/var/run/docker.sock``

This plugin monitors Docker containers directly. It should only be used in a
bare Docker environment (i.e. not Kubernetes), and requires two mounted volumes
from the host:

* Host ``/`` mounted to ``/host`` (path configurable with ``DOCKER_ROOT``)
* Host ``/var/run/docker.sock`` mounted to ``/var/run/docker.sock`` (path
  configurable with ``DOCKER_SOCKET``)

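A compose-style sketch of the two required mounts (service and image names are
illustrative; the read-only flag on the rootfs mount is an extra precaution,
not something the text above requires):

```yaml
  agent-collector:
    image: monasca/agent-collector:master
    environment:
      DOCKER: "true"
    volumes:
      - /:/host:ro
      - /var/run/docker.sock:/var/run/docker.sock
```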
Kubernetes Plugin
-----------------

This plugin is enabled when ``KUBERNETES=true``. It has the following options:

* ``KUBERNETES_TIMEOUT``: The K8s API connection timeout. Default: ``3``
* ``KUBERNETES_NAMESPACE_ANNOTATIONS``: If set, will grab annotations from
  namespaces to include as dimensions for metrics that are under that
  namespace. Should be passed in as ``annotation1,annotation2,annotation3``.
  Default: unset
* ``KUBERNETES_MINIMUM_WHITELIST``: Sets a whitelist on the Kubernetes plugin
  for the following metrics: ``pod.cpu.total_time_sec``, ``pod.mem.cache_bytes``,
  ``pod.mem.swap_bytes``, ``pod.mem.used_bytes``, ``pod.mem.working_set_bytes``.
  This reduces the load on Monasca.
  Default: unset

The Kubernetes plugin is intended to be run as a DaemonSet on each Kubernetes
node. In order for API endpoints to be detected correctly, ``AGENT_POD_NAME`` and
``AGENT_POD_NAMESPACE`` must be set using the `Downward API`_ as described
above.

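A DaemonSet pod-spec excerpt (container name illustrative) that sets these two
variables via the Downward API could look like:

```yaml
      containers:
        - name: agent-collector
          env:
            - name: AGENT_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: AGENT_POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
```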
Kubernetes API Plugin
---------------------

This plugin is enabled when ``KUBERNETES_API=true``. It has the following options:

* ``KUBERNETES_API_HOST``: If set, manually sets the location of the Kubernetes
  API host. Default: unset
* ``KUBERNETES_API_PORT``: If set, manually sets the port of the Kubernetes
  API host. Only used if ``KUBERNETES_API_HOST`` is also set. Default: 8080
* ``KUBERNETES_API_CUSTOM_LABELS``: If set, provides a list of Kubernetes label
  keys to include as dimensions from gathered metrics. Labels should be comma
  separated strings, such as ``label1,label2,label3``. The ``app`` label is always
  included regardless of this value. Default: unset
* ``KUBERNETES_NAMESPACE_ANNOTATIONS``: If set, will grab annotations from
  namespaces to include as dimensions for metrics that are under that
  namespace. Should be passed in as ``annotation1,annotation2,annotation3``.
  Default: unset
* ``REPORT_PERSISTENT_STORAGE``: If set, will gather bound PVCs per storage
  class, reported per namespace and cluster-wide. Default: true
* ``STORAGE_PARAMETERS_DIMENSIONS``: If set and ``REPORT_PERSISTENT_STORAGE`` is
  set, will grab storage class parameters as dimensions when reporting
  persistent storage. Should be passed in as ``parameter1,parameter2``. Default:
  unset

The Kubernetes API plugin is intended to be run as a standalone deployment and
will collect cluster-level metrics.

Prometheus Plugin
-----------------

This plugin is enabled when ``PROMETHEUS=true``. It has the following options:

* ``PROMETHEUS_TIMEOUT``: The connection timeout. Default: ``3``
* ``PROMETHEUS_ENDPOINTS``: A list of endpoints to scrape. If unset,
  they will be determined automatically via the Kubernetes API. See below for
  syntax. Default: unset
* ``PROMETHEUS_DETECT_METHOD``: When endpoints are determined automatically,
  this specifies the resource type to scan, one of: ``pod``, ``service``.
  Default: ``pod``
* ``PROMETHEUS_KUBERNETES_LABELS``: When endpoints are determined automatically,
  this comma-separated list of labels will be included as dimensions (by name).
  Default: ``app``

If desired, a static list of Prometheus endpoints can be provided by setting
``PROMETHEUS_ENDPOINTS``. Entries in this list should be comma-separated.
Additionally, each entry can specify a set of dimensions like so:

``http://host-a/metrics,http://host-b/metrics|prop=value&prop2=value2,http://host-c``

Note that setting ``PROMETHEUS_ENDPOINTS`` disables auto-detection.

When auto-detection is enabled, this plugin will automatically scrape all
annotated Prometheus endpoints on the node the agent is running on. Ideally, it
should be run alongside the Kubernetes plugin as a DaemonSet on each node.

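To make the entry syntax concrete, this small shell sketch splits such a value
into URLs and their optional ``|key=value&key2=value2`` dimension suffixes. It
only illustrates the documented syntax; it is not the agent's actual parser.

```shell
#!/bin/sh
# Example PROMETHEUS_ENDPOINTS value taken from the syntax shown above.
PROMETHEUS_ENDPOINTS='http://host-a/metrics,http://host-b/metrics|prop=value&prop2=value2,http://host-c'

IFS=','
for entry in $PROMETHEUS_ENDPOINTS; do
    url=${entry%%|*}               # part before the first "|"
    dims=''
    case $entry in
        *\|*) dims=${entry#*|} ;;  # part after the first "|", if present
    esac
    echo "url=$url dims=$dims"
done
# Prints:
#   url=http://host-a/metrics dims=
#   url=http://host-b/metrics dims=prop=value&prop2=value2
#   url=http://host-c dims=
```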
cAdvisor_host Plugin
--------------------

This plugin is enabled when ``CADVISOR=true``. It has the following options:

* ``CADVISOR_TIMEOUT``: The connection timeout for the cAdvisor API. Default: ``3``
* ``CADVISOR_URL``: If set, sets the URL at which to access cAdvisor. If unset
  (default), the cAdvisor host will be determined automatically via the
  Kubernetes API.
* ``CADVISOR_MINIMUM_WHITELIST``: Sets a whitelist on the cAdvisor host plugin
  for the following metrics: ``cpu.total_time_sec``, ``mem.cache_bytes``,
  ``mem.swap_bytes``, ``mem.used_bytes``, ``mem.working_set_bytes``.
  This reduces the load on Monasca.
  Default: unset

This plugin collects host-level metrics from a running cAdvisor instance.
cAdvisor is included in ``kubelet`` in Kubernetes environments and is
necessary to retrieve host-level metrics. As with the Kubernetes plugin,
``AGENT_POD_NAME`` and ``AGENT_POD_NAMESPACE`` must be set to determine the URL
automatically.

cAdvisor can be easily run in `standard Docker environments`_ or directly on
host systems. In these cases, the URL must be provided manually via
``CADVISOR_URL``.

Monasca-monitoring
------------------

Metrics pipeline
^^^^^^^^^^^^^^^^
Monasca-monitoring enables plugins for HTTP endpoint checks and processes,
as well as plugins providing detailed metrics for the following components:
Kafka, MySQL, and Zookeeper. This is enabled when ``MONASCA_MONITORING=true``.
The components use the default configuration; you can specify your own
settings in the docker-compose file by adjusting the following environment
variables:

Kafka
+++++
* ``KAFKA_CONNECT_STR``: The Kafka connection string. Default: ``kafka:9092``

Zookeeper
+++++++++
* ``ZOOKEEPER_HOST``: The Zookeeper host name. Default: ``zookeeper``
* ``ZOOKEEPER_PORT``: The port to listen for client connections. Default: ``2181``

MySQL
+++++
* ``MYSQL_SERVER``: The MySQL server name. Default: ``mysql``
* ``MYSQL_USER``, ``MYSQL_PASSWORD``: These variables are used together to
  specify the user and that user's password. Default: ``root`` and ``secretmysql``
* ``MYSQL_PORT``: The port to listen for client connections. Default: ``3306``

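Put together, a ``docker-compose.yml`` environment excerpt overriding some of
these defaults could read (service name illustrative):

```yaml
  agent-collector:
    environment:
      MONASCA_MONITORING: "true"
      KAFKA_CONNECT_STR: kafka:9092
      ZOOKEEPER_HOST: zookeeper
      MYSQL_SERVER: mysql
```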
Logs pipeline
^^^^^^^^^^^^^
For the logs pipeline you can enable the HTTP endpoint check, process and
``Elasticsearch`` plugins. This is enabled when ``MONASCA_LOG_MONITORING=true``.
You can adjust the configuration of the components by passing environment
variables:

Elasticsearch
+++++++++++++
* ``ELASTIC_URL``: The Elasticsearch connection string. Default: ``http://elasticsearch:9200``

Monasca-statsd
^^^^^^^^^^^^^^
To monitor ``monasca-notification`` and ``monasca-log-api``, use ``statsd``. Enable
statsd monitoring by setting the ``STATSD_HOST`` and ``STATSD_PORT`` environment
variables in those projects.

Custom plugins
~~~~~~~~~~~~~~
Custom plugin configuration files can be provided by mounting them to
``/plugins.d/*.yaml`` inside the Monasca Agent Collector container.

Plugin files should have the ``.yaml`` extension when no templating is
needed. Files with the ``.yaml.j2`` extension are processed as Jinja2
templates with access to all environment variables.

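As a sketch, a mounted ``/plugins.d/my_http_check.yaml.j2`` (file name and the
``MY_SERVICE_URL`` variable are made up for illustration) could template a
value from the environment:

```yaml
init_config: null
instances:
  - name: my-service
    url: "{{ MY_SERVICE_URL | default('http://localhost:8080/healthz') }}"
    timeout: 3
```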
Links
~~~~~
https://opendev.org/openstack/monasca-agent/src/branch/master/README.rst

.. _`Downward API`: https://kubernetes.io/docs/user-guide/downward-api/
.. _`standard Docker environments`: https://github.com/google/cadvisor#quick-start-running-cadvisor-in-a-docker-container
@@ -1,62 +0,0 @@
#!/bin/bash

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

# TODO(Dobroslaw): move this script to monasca-common/docker folder
# and leave here small script to download it and execute using env variables
# to minimize code duplication.

set -x  # Print each script step.
set -eo pipefail  # Exit the script if any statement returns error.

# Wrapper script for building all monasca-agent images: collector,
# forwarder and statsd. It will relay all arguments to every image build
# script.

# This script is used for building Docker images with proper labels
# and proper version of monasca-common.
#
# Example usage:
# $ ./build_image.sh <repository_version> <upper_constraints_branch> <common_version>
#
# Everything after `./build_image.sh` is optional and by default configured
# to get versions from `Dockerfile`.
#
# To build from master branch (default):
# $ ./build_image.sh
# To build a specific version run this script in the following way:
# $ ./build_image.sh stable/queens
# Building from a specific commit:
# $ ./build_image.sh cb7f226
# When building from a tag, monasca-common will be used in the version
# available in the upper constraints file:
# $ ./build_image.sh 2.5.0
# To build an image from a Gerrit patch set that targets branch stable/queens:
# $ ./build_image.sh refs/changes/51/558751/1 stable/queens
#
# If you want to build an image with a custom monasca-common version you
# need to provide it as in the following example:
# $ ./build_image.sh master master refs/changes/19/595719/3

# Go to folder with Docker files.
REAL_PATH=$(python3 -c "import os,sys; print(os.path.realpath('$0'))")
cd "$(dirname "$REAL_PATH")/../docker/"

./collector/build_image.sh "$@"

printf "\n\n\n"

./forwarder/build_image.sh "$@"

printf "\n\n\n"

./statsd/build_image.sh "$@"
@@ -1,35 +0,0 @@
ARG DOCKER_IMAGE=monasca/agent-collector
ARG APP_REPO=https://review.opendev.org/openstack/monasca-agent

# Branch, tag or git hash to build from.
ARG REPO_VERSION=master
ARG CONSTRAINTS_BRANCH=master

# Extra Python3 dependencies.
ARG EXTRA_DEPS="Jinja2 prometheus_client docker-py"

# Always start from `monasca-base` image and use specific tag of it.
ARG BASE_TAG=master
FROM monasca/base:$BASE_TAG

# Environment variables used for our service or wait scripts.
ENV \
    KEYSTONE_DEFAULTS_ENABLED=true \
    MONASCA_URL=http://monasca:8070/v2.0 \
    LOG_LEVEL=WARN \
    HOSTNAME_FROM_KUBERNETES=false \
    STAY_ALIVE_ON_FAILURE="false"

# Copy all necessary files to proper locations.
COPY templates/ /templates
COPY agent.yaml.j2 /etc/monasca/agent/agent.yaml.j2
COPY kubernetes_get_host.py /

# Run here all additional steps your service needs post-installation.
# Stay with only one `RUN` and chain next steps with `&& \` to avoid
# creating unnecessary image layers. Clean up at the end to conserve space.
RUN \
    apk add --no-cache util-linux libxml2 libffi-dev openssl-dev

# Implement start script in `start.sh` file.
CMD ["/start.sh"]
@@ -1,49 +0,0 @@
Api:
  url: "{{ MONASCA_URL | default('') }}"
  service_type: "{{ SERVICE_TYPE | default('') }}"
  endpoint_type: "{{ ENDPOINT_TYPE | default('') }}"
  region_name: "{{ REGION_NAME | default('') }}"

  username: "{{ OS_USERNAME }}"
  password: "{{ OS_PASSWORD }}"
  keystone_url: "{{ OS_AUTH_URL }}"
  user_domain_id: "{{ OS_USER_DOMAIN_ID | default('') }}"
  user_domain_name: "{{ OS_USER_DOMAIN_NAME | default('') }}"
  project_name: "{{ OS_PROJECT_NAME | default('') }}"
  project_domain_id: "{{ OS_PROJECT_DOMAIN_ID | default('') }}"
  project_domain_name: "{{ OS_PROJECT_DOMAIN_NAME | default('') }}"
  project_id: "{{ OS_PROJECT_ID | default('') }}"
  insecure: {{ INSECURE | default(False) }}
  ca_file: "{{ CA_FILE | default('') }}"

  max_buffer_size: {{ MAX_BUFFER_SIZE | default(1000) }}
  max_batch_size: {{ MAX_BATCH_SIZE | default(0) }}
  max_measurement_buffer_size: {{ MAX_MEASUREMENT_BUFFER_SIZE | default(-1) }}
  backlog_send_rate: {{ BACKLOG_SEND_RATE | default(5) }}

Main:
  hostname: {{ '"%s"' % AGENT_HOSTNAME if AGENT_HOSTNAME else 'null' }}
{% if DIMENSIONS %}
  dimensions:
{% for dimension in DIMENSIONS.split(',') %}
{% set k, v = dimension.split('=', 1) %}
    {{ k.strip() }}: "{{ v }}"
{% endfor %}
{% else %}
  dimensions: {}
{% endif %}
  check_freq: {{ CHECK_FREQ | default(30) }}
  num_collector_threads: {{ NUM_COLLECTOR_THREADS | default(1) }}
  pool_full_max_retries: {{ POOL_FULL_MAX_TRIES | default(4) }}
  sub_collection_warn: {{ SUB_COLLECTION_WARN | default(6) }}
  autorestart: {{ AUTORESTART | default(False) }}
  collector_restart_interval: {{ COLLECTOR_RESTART_INTERVAL | default(24) }}
  non_local_traffic: {{ NON_LOCAL_TRAFFIC | default(False) }}
  forwarder_url: {{ FORWARDER_URL | default("http://localhost:17123") }}

Statsd:
  monasca_statsd_port: {{ STATSD_PORT | default(8125) }}

Logging:
  log_level: {{ LOG_LEVEL | default('WARN') }}
  disable_file_logging: True
@@ -1,150 +0,0 @@
#!/bin/bash

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

# TODO(Dobroslaw): move this script to monasca-common/docker folder
# and leave here small script to download it and execute using env variables
# to minimize code duplication.

set -x  # Print each script step.
set -eo pipefail  # Exit the script if any statement returns error.

# This script is used for building a Docker image with proper labels
# and proper version of monasca-common.
#
# Example usage:
# $ ./build_image.sh <repository_version> <upper_constraints_branch> <common_version>
#
# Everything after `./build_image.sh` is optional and by default configured
# to get versions from `Dockerfile`.
#
# To build from master branch (default):
# $ ./build_image.sh
# To build a specific version run this script in the following way:
# $ ./build_image.sh stable/queens
# Building from a specific commit:
# $ ./build_image.sh cb7f226
# When building from a tag, monasca-common will be used in the version
# available in the upper constraints file:
# $ ./build_image.sh 2.5.0
# To build an image from a Gerrit patch set that targets branch stable/queens:
# $ ./build_image.sh refs/changes/51/558751/1 stable/queens
#
# If you want to build an image with a custom monasca-common version you
# need to provide it as in the following example:
# $ ./build_image.sh master master refs/changes/19/595719/3

# Go to folder with Docker files.
REAL_PATH=$(python3 -c "import os,sys; print(os.path.realpath('$0'))")
cd "$(dirname "$REAL_PATH")/../collector/"

[ -z "$DOCKER_IMAGE" ] && \
    DOCKER_IMAGE=$(\grep DOCKER_IMAGE Dockerfile | cut -f2 -d"=")

: "${REPO_VERSION:=$1}"
[ -z "$REPO_VERSION" ] && \
    REPO_VERSION=$(\grep REPO_VERSION Dockerfile | cut -f2 -d"=")
# Let's stick to more readable version and disable SC2001 here.
# shellcheck disable=SC2001
REPO_VERSION_CLEAN=$(echo "$REPO_VERSION" | sed 's|/|-|g')

[ -z "$APP_REPO" ] && APP_REPO=$(\grep APP_REPO Dockerfile | cut -f2 -d"=")
GITHUB_REPO=$(echo "$APP_REPO" | sed 's/review.opendev.org/github.com/' | \
              sed 's/ssh:/https:/')

if [ -z "$CONSTRAINTS_FILE" ]; then
    CONSTRAINTS_FILE=$(\grep CONSTRAINTS_FILE Dockerfile | cut -f2 -d"=") || true
    : "${CONSTRAINTS_FILE:=https://opendev.org/openstack/requirements/raw/branch/master/upper-constraints.txt}"
fi

: "${CONSTRAINTS_BRANCH:=$2}"
[ -z "$CONSTRAINTS_BRANCH" ] && \
    CONSTRAINTS_BRANCH=$(\grep CONSTRAINTS_BRANCH Dockerfile | cut -f2 -d"=")

# When using stable version of repository use same stable constraints file.
case "$REPO_VERSION" in
    *stable*)
        CONSTRAINTS_BRANCH_CLEAN="$REPO_VERSION"
        CONSTRAINTS_FILE=${CONSTRAINTS_FILE/master/$CONSTRAINTS_BRANCH_CLEAN}
        # Get monasca-common version from stable upper constraints file.
        CONSTRAINTS_TMP_FILE=$(mktemp)
        wget --output-document "$CONSTRAINTS_TMP_FILE" \
            "$CONSTRAINTS_FILE"
        UPPER_COMMON=$(\grep 'monasca-common' "$CONSTRAINTS_TMP_FILE")
        # Get only version part from monasca-common.
        UPPER_COMMON_VERSION="${UPPER_COMMON##*===}"
        rm -rf "$CONSTRAINTS_TMP_FILE"
        ;;
    *)
        CONSTRAINTS_BRANCH_CLEAN="$CONSTRAINTS_BRANCH"
        ;;
esac

# Monasca-common variables.
if [ -z "$COMMON_REPO" ]; then
    COMMON_REPO=$(\grep COMMON_REPO Dockerfile | cut -f2 -d"=") || true
    : "${COMMON_REPO:=https://review.opendev.org/openstack/monasca-common}"
fi
: "${COMMON_VERSION:=$3}"
if [ -z "$COMMON_VERSION" ]; then
    COMMON_VERSION=$(\grep COMMON_VERSION Dockerfile | cut -f2 -d"=") || true
    if [ "$UPPER_COMMON_VERSION" ]; then
        # Common from upper constraints file.
        COMMON_VERSION="$UPPER_COMMON_VERSION"
    fi
fi

# Clone project to temporary directory for getting proper commit number from
# branches and tags. We need this for setting proper image labels.
# Docker does not allow to get any data from inside of system when building
# image.
TMP_DIR=$(mktemp -d)
(
    cd "$TMP_DIR"
    # This many steps are needed to support gerrit patch sets.
    git init
    git remote add origin "$APP_REPO"
    git fetch origin "$REPO_VERSION"
    git reset --hard FETCH_HEAD
)
GIT_COMMIT=$(git -C "$TMP_DIR" rev-parse HEAD)
[ -z "${GIT_COMMIT}" ] && echo "No git commit hash found" && exit 1
rm -rf "$TMP_DIR"

# Do the same for monasca-common.
COMMON_TMP_DIR=$(mktemp -d)
(
    cd "$COMMON_TMP_DIR"
    # This many steps are needed to support gerrit patch sets.
    git init
    git remote add origin "$COMMON_REPO"
    git fetch origin "$COMMON_VERSION"
    git reset --hard FETCH_HEAD
)
COMMON_GIT_COMMIT=$(git -C "$COMMON_TMP_DIR" rev-parse HEAD)
[ -z "${COMMON_GIT_COMMIT}" ] && echo "No git commit hash found" && exit 1
rm -rf "$COMMON_TMP_DIR"

CREATION_TIME=$(date -u +"%Y-%m-%dT%H:%M:%SZ")

docker build --no-cache \
    --build-arg CREATION_TIME="$CREATION_TIME" \
    --build-arg GITHUB_REPO="$GITHUB_REPO" \
    --build-arg APP_REPO="$APP_REPO" \
    --build-arg REPO_VERSION="$REPO_VERSION" \
    --build-arg GIT_COMMIT="$GIT_COMMIT" \
    --build-arg CONSTRAINTS_FILE="$CONSTRAINTS_FILE" \
    --build-arg COMMON_REPO="$COMMON_REPO" \
    --build-arg COMMON_VERSION="$COMMON_VERSION" \
    --build-arg COMMON_GIT_COMMIT="$COMMON_GIT_COMMIT" \
    --tag "$DOCKER_IMAGE":"$REPO_VERSION_CLEAN" .
@@ -1,28 +0,0 @@
#!/usr/bin/env python
# coding=utf-8

# (C) Copyright 2018 FUJITSU LIMITED
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Health check returns 0 when the service is working properly."""

import sys


def main():
    """Health check for Monasca-agent collector."""
    # TODO(Dobroslaw Zybort) wait for health check endpoint ...
    return 0


if __name__ == '__main__':
    # Propagate the return value as the process exit code so callers can
    # rely on it.
    sys.exit(main())
@@ -1,7 +0,0 @@
# (C) Copyright 2017 Hewlett Packard Enterprise Development LP
# coding=utf-8

from monasca_agent.collector.checks import utils

kubernetes_connector = utils.KubernetesConnector(3)
print(kubernetes_connector.get_agent_pod_host(return_host_name=True))
@@ -1,123 +0,0 @@
#!/bin/sh

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

# Start script.
# All checks and configuration templating that need to happen before the
# service can be safely started should be added to this file.

set -eo pipefail  # Exit the script if any statement returns error.

PLUGIN_TEMPLATES="/templates"
USER_PLUGINS="/plugins.d"

AGENT_CONF="/etc/monasca/agent"
AGENT_PLUGINS="$AGENT_CONF/conf.d"

if [ "$KEYSTONE_DEFAULTS_ENABLED" = "true" ]; then
    export OS_AUTH_URL=${OS_AUTH_URL:-"http://keystone:35357/v3/"}
    export OS_USERNAME=${OS_USERNAME:-"monasca-agent"}
    export OS_PASSWORD=${OS_PASSWORD:-"password"}
    export OS_USER_DOMAIN_NAME=${OS_USER_DOMAIN_NAME:-"Default"}
    export OS_PROJECT_NAME=${OS_PROJECT_NAME:-"mini-mon"}
    export OS_PROJECT_DOMAIN_NAME=${OS_PROJECT_DOMAIN_NAME:-"Default"}
fi

mkdir -p "$AGENT_PLUGINS"

# Test services we need before starting our service.
#echo "Start script: waiting for needed services"

# Template all config files before start; this will use env variables.
# Read usage examples: https://pypi.org/project/Templer/
echo "Start script: creating config files from templates"

alias template="templer --ignore-undefined-variables --force --verbose"

if [ "$HOSTNAME_FROM_KUBERNETES" = "true" ]; then
    if ! AGENT_HOSTNAME=$(python /kubernetes_get_host.py); then
        echo "Error getting hostname from Kubernetes"
        exit 1
    fi
    export AGENT_HOSTNAME
fi

# Docker.
if [ "$DOCKER" = "true" ]; then
    template $PLUGIN_TEMPLATES/docker.yaml.j2 $AGENT_PLUGINS/docker.yaml
fi

# Cadvisor.
if [ "$CADVISOR" = "true" ]; then
    template $PLUGIN_TEMPLATES/cadvisor_host.yaml.j2 $AGENT_PLUGINS/cadvisor_host.yaml
fi

# Kubernetes.
if [ "$KUBERNETES" = "true" ]; then
    template $PLUGIN_TEMPLATES/kubernetes.yaml.j2 $AGENT_PLUGINS/kubernetes.yaml
fi

# Kubernetes_api.
if [ "$KUBERNETES_API" = "true" ]; then
    template $PLUGIN_TEMPLATES/kubernetes_api.yaml.j2 $AGENT_PLUGINS/kubernetes_api.yaml
fi

# Prometheus scraping.
if [ "$PROMETHEUS" = "true" ]; then
    template $PLUGIN_TEMPLATES/prometheus.yaml.j2 $AGENT_PLUGINS/prometheus.yaml
fi

# Monasca monitoring.
if [ "$MONASCA_MONITORING" = "true" ]; then
    template $PLUGIN_TEMPLATES/zk.yaml.j2 $AGENT_PLUGINS/zk.yaml
    template $PLUGIN_TEMPLATES/kafka_consumer.yaml.j2 $AGENT_PLUGINS/kafka_consumer.yaml
    template $PLUGIN_TEMPLATES/mysql.yaml.j2 $AGENT_PLUGINS/mysql.yaml
fi

# Monasca-log-monitoring.
if [ "$MONASCA_LOG_MONITORING" = "true" ]; then
    template $PLUGIN_TEMPLATES/elastic.yaml.j2 $AGENT_PLUGINS/elastic.yaml
fi

# Common for monasca-monitoring and monasca-log-monitoring.
if [ "$MONASCA_MONITORING" = "true" ] || [ "$MONASCA_LOG_MONITORING" = "true" ]; then
    template $PLUGIN_TEMPLATES/http_check.yaml.j2 $AGENT_PLUGINS/http_check.yaml
    template $PLUGIN_TEMPLATES/process.yaml.j2 $AGENT_PLUGINS/process.yaml
fi

# Apply user templates.
for f in $USER_PLUGINS/*.yaml.j2; do
    if [ -e "$f" ]; then
        template "$f" "$AGENT_PLUGINS/$(basename "$f" .j2)"
    fi
done

# Copy plain user plugins.
for f in $USER_PLUGINS/*.yaml; do
    if [ -e "$f" ]; then
        cp "$f" "$AGENT_PLUGINS/$(basename "$f")"
    fi
done

template $AGENT_CONF/agent.yaml.j2 $AGENT_CONF/agent.yaml

# Start our service. The "|| RESULT=$?" keeps "set -e" from terminating the
# script here, so the failure handling below can run.
echo "Start script: starting container"
RESULT=0
monasca-collector foreground || RESULT=$?

# Allow the service to stay alive in case of failure for 2 hours for debugging.
if [ $RESULT != 0 ] && [ "$STAY_ALIVE_ON_FAILURE" = "true" ]; then
    echo "Service died, waiting 120 min before exiting"
    sleep 7200
fi
exit $RESULT
@@ -1,19 +0,0 @@
# (C) Copyright 2017 Hewlett Packard Enterprise Development LP

init_config:
  connection_timeout: {{ CADVISOR_TIMEOUT | default(3) }}
{% if CADVISOR_MINIMUM_WHITELIST %}
  white_list:
    metrics:
      cpu.total_time_sec:
      mem.cache_bytes:
      mem.swap_bytes:
      mem.used_bytes:
      mem.working_set_bytes:
{% endif %}
instances:
{% if CADVISOR_URL %}
  - cadvisor_url: "{{ CADVISOR_URL }}"
{% else %}
  - kubernetes_detect_cadvisor: True
{% endif %}
@@ -1,7 +0,0 @@
# (C) Copyright 2017 Hewlett Packard Enterprise Development LP

init_config:
  docker_root: "{{ DOCKER_ROOT | default('/host') }}"

instances:
  - url: "{{ DOCKER_SOCKET | default('unix://var/run/docker.sock') }}"
@@ -1,3 +0,0 @@
init_config:
instances:
  - url: "{{ ELASTIC_URL | default('http://elasticsearch:9200') }}"
@@ -1,46 +0,0 @@
init_config: null
instances:
{% if MONASCA_MONITORING %}
  - name: keystone
    dimensions:
      service: keystone
    timeout: 3
    url: http://keystone:5000
  - name: mysql
    dimensions:
      service: mysql
    timeout: 3
    url: http://mysql:3306
  - name: cadvisor
    dimensions:
      service: cadvisor
    timeout: 3
    url: http://cadvisor:8080/healthz
  - name: influxdb
    dimensions:
      service: influxdb
    timeout: 3
    url: http://influxdb:8086/ping
{% endif %}
{% if MONASCA_LOG_MONITORING %}
  - name: log-api
    dimensions:
      service: log-api
    timeout: 3
    url: http://log-api:5607/healthcheck
  - name: elasticsearch
    dimensions:
      service: elasticsearch
    timeout: 3
    url: http://elasticsearch:9200/_cat/health
  - name: kibana
    dimensions:
      service: kibana
    timeout: 3
    url: http://kibana:5601/status
{% endif %}
@@ -1,14 +0,0 @@
init_config:

instances:
  - built_by: Kafka
    consumer_groups:
      1_metrics:
        metrics: []
      thresh-event:
        events: []
      thresh-metric:
        metrics: []
    kafka_connect_str: "{{ KAFKA_CONNECT_STR | default('kafka:9092') }}"
    name: "{{ KAFKA_CONNECT_STR | default('kafka:9092') }}"
    per_partition: false
@@ -1,24 +0,0 @@
# (C) Copyright 2017 Hewlett Packard Enterprise Development LP

init_config:
  connection_timeout: {{ KUBERNETES_TIMEOUT | default(3) }}
{% if KUBERNETES_MINIMUM_WHITELIST %}
  white_list:
    metrics:
      pod.cpu.total_time_sec:
      pod.mem.cache_bytes:
      pod.mem.swap_bytes:
      pod.mem.used_bytes:
      pod.mem.working_set_bytes:
      pod.mem.rss_bytes:
      container.mem.usage_percent:
{% endif %}
  report_container_mem_percent: True
instances:
  - derive_host: True
{% if KUBERNETES_NAMESPACE_ANNOTATIONS %}
    namespace_annotations:
{% for annotation in KUBERNETES_NAMESPACE_ANNOTATIONS.split(',') %}
      - "{{ annotation }}"
{% endfor %}
{% endif %}
@@ -1,31 +0,0 @@
# (C) Copyright 2017 Hewlett Packard Enterprise Development LP

init_config:
  connection_timeout: {{ KUBERNETES_API_TIMEOUT | default(3) }}

instances:
{% if KUBERNETES_API_HOST %}
  - host: "{{ KUBERNETES_API_HOST }}"
    kubernetes_master_port: {{ KUBERNETES_API_PORT | default(8080) }}
{% else %}
  - derive_api_url: true
{% endif %}
{% if KUBERNETES_API_CUSTOM_LABELS %}
    custom_kubernetes_labels:
{% for label in KUBERNETES_API_CUSTOM_LABELS.split(',') %}
      - "{{ label }}"
{% endfor %}
{% endif %}
{% if KUBERNETES_NAMESPACE_ANNOTATIONS %}
    namespace_annotations:
{% for annotation in KUBERNETES_NAMESPACE_ANNOTATIONS.split(',') %}
      - "{{ annotation }}"
{% endfor %}
{% endif %}
{% if STORAGE_PARAMETERS_DIMENSIONS %}
    storage_parameters_dimensions:
{% for parameter in STORAGE_PARAMETERS_DIMENSIONS.split(',') %}
      - "{{ parameter }}"
{% endfor %}
{% endif %}
    report_persistent_storage: {{ REPORT_PERSISTENT_STORAGE | default(true) }}
@@ -1,8 +0,0 @@
init_config:
instances:
  - built_by: MySQL
    name: "{{ MYSQL_SERVER | default('mysql') }}"
    server: "{{ MYSQL_SERVER | default('mysql') }}"
    port: "{{ MYSQL_PORT | default('3306') }}"
    user: "{{ MYSQL_USER | default('root') }}"
    pass: "{{ MYSQL_PASSWORD | default('secretmysql') }}"
@@ -1,154 +0,0 @@
init_config:
  process_fs_path: /rootfs/proc
instances:
{% if MONASCA_MONITORING %}
  - name: influxd
    detailed: true
    dimensions:
      service: influxd
    exact_match: false
    search_string:
      - influxd
  - name: monasca-statsd
    detailed: true
    dimensions:
      service: monasca-statsd
    exact_match: false
    search_string:
      - monasca-statsd
  - name: monasca-notification
    detailed: true
    dimensions:
      service: monasca-notification
    exact_match: false
    search_string:
      - monasca-notification
  - name: persister
    detailed: true
    dimensions:
      service: persister
    exact_match: false
    search_string:
      - persister
  - name: monasca-thresh
    detailed: true
    dimensions:
      service: monasca-thresh
    exact_match: false
    search_string:
      - monasca-thresh
  - name: monasca-api
    detailed: true
    dimensions:
      service: monasca-api
    exact_match: false
    search_string:
      - monasca-api
  - name: monasca-collector
    detailed: true
    dimensions:
      service: monasca-collector
    exact_match: false
    search_string:
      - monasca-collector
  - name: memcached
    detailed: true
    dimensions:
      service: memcached
    exact_match: false
    search_string:
      - memcached
  - name: cadvisor
    detailed: true
    dimensions:
      service: cadvisor
    exact_match: false
    search_string:
      - cadvisor
  - name: monasca-forwarder
    detailed: true
    dimensions:
      service: monasca-forwarder
    exact_match: false
    search_string:
      - monasca-forwarder
  - name: zookeeper
    detailed: true
    dimensions:
      service: zookeeper
    exact_match: false
    search_string:
      - zookeeper
  - name: kafka
    detailed: true
    dimensions:
      service: kafka
    exact_match: false
    search_string:
      - kafka
  - name: mysqld
    detailed: true
    dimensions:
      service: mysqld
    exact_match: false
    search_string:
      - mysqld
{% endif %}
{% if MONASCA_LOG_MONITORING %}
  - name: logspout
    detailed: true
    dimensions:
      service: logspout
    exact_match: false
    search_string:
      - logspout
  - name: log-agent
    detailed: true
    dimensions:
      service: log-agent
    exact_match: false
    search_string:
      - log-agent
  - name: log-api
    detailed: true
    dimensions:
      service: log-api
    exact_match: false
    search_string:
      - log-api
  - name: kibana
    detailed: true
    dimensions:
      service: kibana
    exact_match: false
    search_string:
      - kibana
  - name: elasticsearch
    detailed: true
    dimensions:
      service: elasticsearch
    exact_match: false
    search_string:
      - elasticsearch
  - name: log-transformer
    detailed: true
    dimensions:
      service: log-transformer
    exact_match: false
    search_string:
      - log-transformer
  - name: log-persister
    detailed: true
    dimensions:
      service: log-persister
    exact_match: false
    search_string:
      - log-persister
  - name: log-metrics
    detailed: true
    dimensions:
      service: log-metrics
    exact_match: false
    search_string:
      - log-metrics
{% endif %}
@@ -1,37 +0,0 @@
# (C) Copyright 2017 Hewlett Packard Enterprise Development LP

init_config:
  timeout: {{ PROMETHEUS_TIMEOUT | default(3) }}
{% if not PROMETHEUS_ENDPOINTS %}
  auto_detect_endpoints: true
  detect_method: {{ PROMETHEUS_DETECT_METHOD | default('pod') }}
{% endif %}

instances:
{% if PROMETHEUS_ENDPOINTS %}
{% for endpoint in PROMETHEUS_ENDPOINTS.split(',') %}
{% if '|' in endpoint %}
{% set endpoint, dimensions = endpoint.split('|', 2) %}
{% set dimensions = dimensions.split('&') %}
{% else %}
{% set dimensions = [] %}
{% endif %}
  - metric_endpoint: "{{ endpoint }}"
{% if dimensions %}
    default_dimensions:
{% for dimension in dimensions %}
{% set k, v = dimension.split('=', 1) %}
      {{k}}: {{v}}
{% endfor %}
{% endif %}
{% endfor %}
{% else %}
{% if PROMETHEUS_KUBERNETES_LABELS %}
  - kubernetes_labels:
{% for label in PROMETHEUS_KUBERNETES_LABELS.split(',') %}
      - {{ label }}
{% endfor %}
{% else %}
  - kubernetes_labels: ['app']
{% endif %}
{% endif %}
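The Prometheus template builds one instance per comma-separated entry of PROMETHEUS_ENDPOINTS, where each entry may carry optional default dimensions after a `|`, joined by `&` as `key=value` pairs. A minimal Python sketch of that parsing logic (the helper name is hypothetical, not part of the agent; it only mirrors the Jinja expressions above):

```python
def parse_prometheus_endpoints(raw):
    """Parse 'url|k=v&k2=v2,url2' into a list of (url, dimensions) pairs."""
    instances = []
    for entry in raw.split(','):
        if '|' in entry:
            # Split off the dimension part and decode the k=v&k2=v2 pairs.
            endpoint, dims_raw = entry.split('|', 1)
            dims = dict(d.split('=', 1) for d in dims_raw.split('&'))
        else:
            endpoint, dims = entry, {}
        instances.append((endpoint, dims))
    return instances
```

An entry without a `|` simply becomes an endpoint with no default dimensions.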
@@ -1,6 +0,0 @@
init_config:

instances:
  - host: "{{ ZOOKEEPER_HOST | default('zookeeper') }}"
    port: "{{ ZOOKEEPER_PORT | default('2181') }}"
    timeout: 3
@@ -1,33 +0,0 @@
ARG DOCKER_IMAGE=monasca/agent-forwarder
ARG APP_REPO=https://review.opendev.org/openstack/monasca-agent

# Branch, tag or git hash to build from.
ARG REPO_VERSION=master
ARG CONSTRAINTS_BRANCH=master

# Extra Python3 dependencies.
#ARG EXTRA_DEPS=""

# Always start from the `monasca-base` image and use a specific tag of it.
ARG BASE_TAG=master
FROM monasca/base:$BASE_TAG

# Environment variables used for our service or wait scripts.
ENV \
    KEYSTONE_DEFAULTS_ENABLED=true \
    MONASCA_URL=http://monasca:8070/v2.0 \
    LOG_LEVEL=WARN \
    HOSTNAME_FROM_KUBERNETES=false \
    STAY_ALIVE_ON_FAILURE="false"

# Copy all necessary files to their proper locations.
COPY agent.yaml.j2 /etc/monasca/agent/agent.yaml.j2
COPY kubernetes_get_host.py /

# Run here all additional steps your service needs post installation.
# Stay with only one `RUN` and chain further steps with `&& \` so as not
# to create unnecessary image layers. Clean up at the end to conserve space.
#RUN

# Implement the start script in the `start.sh` file.
CMD ["/start.sh"]
@@ -1,49 +0,0 @@
Api:
  url: "{{ MONASCA_URL | default('') }}"
  service_type: "{{ SERVICE_TYPE | default('') }}"
  endpoint_type: "{{ ENDPOINT_TYPE | default('') }}"
  region_name: "{{ REGION_NAME | default('') }}"

  username: "{{ OS_USERNAME }}"
  password: "{{ OS_PASSWORD }}"
  keystone_url: "{{ OS_AUTH_URL }}"
  user_domain_id: "{{ OS_USER_DOMAIN_ID | default('') }}"
  user_domain_name: "{{ OS_USER_DOMAIN_NAME | default('') }}"
  project_name: "{{ OS_PROJECT_NAME | default('') }}"
  project_domain_id: "{{ OS_PROJECT_DOMAIN_ID | default('') }}"
  project_domain_name: "{{ OS_PROJECT_DOMAIN_NAME | default('') }}"
  project_id: "{{ OS_PROJECT_ID | default('') }}"
  insecure: {{ INSECURE | default(False) }}
  ca_file: "{{ CA_FILE | default('') }}"

  max_buffer_size: {{ MAX_BUFFER_SIZE | default(1000) }}
  max_batch_size: {{ MAX_BATCH_SIZE | default(0) }}
  max_measurement_buffer_size: {{ MAX_MEASUREMENT_BUFFER_SIZE | default(-1) }}
  backlog_send_rate: {{ BACKLOG_SEND_RATE | default(5) }}

Main:
  hostname: {{ '"%s"' % AGENT_HOSTNAME if AGENT_HOSTNAME else 'null' }}
{% if DIMENSIONS %}
  dimensions:
{% for dimension in DIMENSIONS.split(',') %}
{% set k, v = dimension.split('=', 1) %}
    {{k.strip()}}: "{{v}}"
{% endfor %}
{% else %}
  dimensions: {}
{% endif %}
  check_freq: {{ CHECK_FREQ | default(30) }}
  num_collector_threads: {{ NUM_COLLECTOR_THREADS | default(1) }}
  pool_full_max_retries: {{ POOL_FULL_MAX_TRIES | default(4) }}
  sub_collection_warn: {{ SUB_COLLECTION_WARN | default(6) }}
  autorestart: {{ AUTORESTART | default(False) }}
  collector_restart_interval: {{ COLLECTOR_RESTART_INTERVAL | default(24) }}
  non_local_traffic: {{ NON_LOCAL_TRAFFIC | default(False) }}
  forwarder_url: {{ FORWARDER_URL | default("http://localhost:17123") }}

Statsd:
  monasca_statsd_port: {{ STATSD_PORT | default(8125) }}

Logging:
  log_level: {{ LOG_LEVEL | default('WARN') }}
  disable_file_logging: True
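The `Main.dimensions` block in the agent template expands DIMENSIONS, a comma-separated list of `key=value` pairs, into a YAML mapping, stripping whitespace around keys. A minimal Python sketch of the equivalent parsing (the helper name is hypothetical; it only mirrors the Jinja loop above):

```python
def parse_dimensions(raw):
    """Parse 'key=value,key2=value2' into a dimensions dict."""
    dims = {}
    for pair in raw.split(','):
        # Split on the first '=' only, so values may themselves contain '='.
        k, v = pair.split('=', 1)
        dims[k.strip()] = v
    return dims
```

Note that, as in the template, only the key is stripped; the value is kept verbatim.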
@@ -1,150 +0,0 @@
#!/bin/bash

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

# TODO(Dobroslaw): move this script to the monasca-common/docker folder
# and leave here a small script to download it and execute it using env
# variables to minimize code duplication.

set -x            # Print each script step.
set -eo pipefail  # Exit the script if any statement returns an error.

# This script is used for building a Docker image with proper labels
# and the proper version of monasca-common.
#
# Example usage:
#   $ ./build_image.sh <repository_version> <upper_constraints_branch> <common_version>
#
# Everything after `./build_image.sh` is optional and by default configured
# to get versions from `Dockerfile`.
#
# To build from the master branch (default):
#   $ ./build_image.sh
# To build a specific version, run this script in the following way:
#   $ ./build_image.sh stable/queens
# Building from a specific commit:
#   $ ./build_image.sh cb7f226
# When building from a tag, monasca-common will be used in the version
# available in the upper-constraints file:
#   $ ./build_image.sh 2.5.0
# To build an image from a Gerrit patch set targeting branch stable/queens:
#   $ ./build_image.sh refs/changes/51/558751/1 stable/queens
#
# If you want to build an image with a custom monasca-common version, you
# need to provide it as in the following example:
#   $ ./build_image.sh master master refs/changes/19/595719/3

# Go to the folder with the Docker files.
REAL_PATH=$(python3 -c "import os,sys; print(os.path.realpath('$0'))")
cd "$(dirname "$REAL_PATH")/../forwarder/"

[ -z "$DOCKER_IMAGE" ] && \
    DOCKER_IMAGE=$(\grep DOCKER_IMAGE Dockerfile | cut -f2 -d"=")

: "${REPO_VERSION:=$1}"
[ -z "$REPO_VERSION" ] && \
    REPO_VERSION=$(\grep REPO_VERSION Dockerfile | cut -f2 -d"=")
# Let's stick to the more readable version and disable SC2001 here.
# shellcheck disable=SC2001
REPO_VERSION_CLEAN=$(echo "$REPO_VERSION" | sed 's|/|-|g')

[ -z "$APP_REPO" ] && APP_REPO=$(\grep APP_REPO Dockerfile | cut -f2 -d"=")
GITHUB_REPO=$(echo "$APP_REPO" | sed 's/review.opendev.org/github.com/' | \
              sed 's/ssh:/https:/')

if [ -z "$CONSTRAINTS_FILE" ]; then
    CONSTRAINTS_FILE=$(\grep CONSTRAINTS_FILE Dockerfile | cut -f2 -d"=") || true
    : "${CONSTRAINTS_FILE:=https://opendev.org/openstack/requirements/raw/branch/master/upper-constraints.txt}"
fi

: "${CONSTRAINTS_BRANCH:=$2}"
[ -z "$CONSTRAINTS_BRANCH" ] && \
    CONSTRAINTS_BRANCH=$(\grep CONSTRAINTS_BRANCH Dockerfile | cut -f2 -d"=")

# When using a stable version of the repository, use the same stable
# constraints file.
case "$REPO_VERSION" in
    *stable*)
        CONSTRAINTS_BRANCH_CLEAN="$REPO_VERSION"
        CONSTRAINTS_FILE=${CONSTRAINTS_FILE/master/$CONSTRAINTS_BRANCH_CLEAN}
        # Get the monasca-common version from the stable upper-constraints file.
        CONSTRAINTS_TMP_FILE=$(mktemp)
        wget --output-document "$CONSTRAINTS_TMP_FILE" \
            "$CONSTRAINTS_FILE"
        UPPER_COMMON=$(\grep 'monasca-common' "$CONSTRAINTS_TMP_FILE")
        # Get only the version part from monasca-common.
        UPPER_COMMON_VERSION="${UPPER_COMMON##*===}"
        rm -rf "$CONSTRAINTS_TMP_FILE"
    ;;
    *)
        CONSTRAINTS_BRANCH_CLEAN="$CONSTRAINTS_BRANCH"
    ;;
esac

# Monasca-common variables.
if [ -z "$COMMON_REPO" ]; then
    COMMON_REPO=$(\grep COMMON_REPO Dockerfile | cut -f2 -d"=") || true
    : "${COMMON_REPO:=https://review.opendev.org/openstack/monasca-common}"
fi
: "${COMMON_VERSION:=$3}"
if [ -z "$COMMON_VERSION" ]; then
    COMMON_VERSION=$(\grep COMMON_VERSION Dockerfile | cut -f2 -d"=") || true
    if [ "$UPPER_COMMON_VERSION" ]; then
        # Common from the upper-constraints file.
        COMMON_VERSION="$UPPER_COMMON_VERSION"
    fi
fi

# Clone the project to a temporary directory to get the proper commit hash
# from branches and tags. We need this for setting proper image labels.
# Docker does not allow reading any data from the host system when building
# an image.
TMP_DIR=$(mktemp -d)
(
    cd "$TMP_DIR"
    # This many steps are needed to support Gerrit patch sets.
    git init
    git remote add origin "$APP_REPO"
    git fetch origin "$REPO_VERSION"
    git reset --hard FETCH_HEAD
)
GIT_COMMIT=$(git -C "$TMP_DIR" rev-parse HEAD)
[ -z "${GIT_COMMIT}" ] && echo "No git commit hash found" && exit 1
rm -rf "$TMP_DIR"

# Do the same for monasca-common.
COMMON_TMP_DIR=$(mktemp -d)
(
    cd "$COMMON_TMP_DIR"
    # This many steps are needed to support Gerrit patch sets.
    git init
    git remote add origin "$COMMON_REPO"
    git fetch origin "$COMMON_VERSION"
    git reset --hard FETCH_HEAD
)
COMMON_GIT_COMMIT=$(git -C "$COMMON_TMP_DIR" rev-parse HEAD)
[ -z "${COMMON_GIT_COMMIT}" ] && echo "No git commit hash found" && exit 1
rm -rf "$COMMON_TMP_DIR"

CREATION_TIME=$(date -u +"%Y-%m-%dT%H:%M:%SZ")

docker build --no-cache \
    --build-arg CREATION_TIME="$CREATION_TIME" \
    --build-arg GITHUB_REPO="$GITHUB_REPO" \
    --build-arg APP_REPO="$APP_REPO" \
    --build-arg REPO_VERSION="$REPO_VERSION" \
    --build-arg GIT_COMMIT="$GIT_COMMIT" \
    --build-arg CONSTRAINTS_FILE="$CONSTRAINTS_FILE" \
    --build-arg COMMON_REPO="$COMMON_REPO" \
    --build-arg COMMON_VERSION="$COMMON_VERSION" \
    --build-arg COMMON_GIT_COMMIT="$COMMON_GIT_COMMIT" \
    --tag "$DOCKER_IMAGE":"$REPO_VERSION_CLEAN" .
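For stable branches the build script pins monasca-common by grepping the upper-constraints file and stripping everything up to `===` (the `${UPPER_COMMON##*===}` expansion). A minimal Python sketch of that extraction (the helper name is hypothetical; it assumes the `===` pin format used in OpenStack upper-constraints files):

```python
def common_version_from_constraints(constraints_text):
    """Return the pinned monasca-common version from an upper-constraints file."""
    for line in constraints_text.splitlines():
        if 'monasca-common' in line:
            # Mirror the shell's ${var##*===}: keep what follows the last '==='.
            return line.rpartition('===')[2]
    return None
```

As with the shell version, only the first matching line is used.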
@@ -1,28 +0,0 @@
#!/usr/bin/env python
# coding=utf-8

# (C) Copyright 2018 FUJITSU LIMITED
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Health check returns 0 when the service is working properly."""


def main():
    """Health check for the monasca-agent forwarder."""
    # TODO(Dobroslaw Zybort) wait for health check endpoint ...
    return 0


if __name__ == '__main__':
    main()
Some files were not shown because too many files have changed in this diff.