Retire monasca-persister repository
This repository is being retired as part of the Monasca project retirement. The project content has been replaced with a retirement notice.

Needed-By: I3cb522ce8f51424b64e93c1efaf0dfd1781cd5ac
Change-Id: I8e9fcfb98c46eb7e1cec927bff293aed1e0a753d
Signed-off-by: Goutham Pacha Ravi <gouthampravi@gmail.com>
@@ -1,7 +0,0 @@
[run]
branch = True
source = monasca_persister
omit = monasca_persister/tests/*

[report]
ignore_errors = True
@@ -1,4 +0,0 @@
[DEFAULT]
test_path=${OS_TEST_PATH:-./monasca_persister/tests}
top_dir=./
group_regex=monasca_persister\.tests(?:\.|_)([^_]+)
.zuul.yaml
@@ -1,60 +0,0 @@
- project:
    templates:
      - check-requirements
      - openstack-cover-jobs
      - openstack-python3-zed-jobs
      - release-notes-jobs-python3
    check:
      jobs:
        - monasca-tempest-python3-influxdb
        - monasca-tempest-python3-cassandra
        - monasca-tempest-python3-java-cassandra
    gate:
      jobs:
        - monasca-tempest-python3-influxdb
    post:
      jobs:
        - publish-monasca-persister-docker-image
    periodic:
      jobs:
        - publish-monasca-persister-docker-image
    release:
      jobs:
        - publish-monasca-persister-docker-image

- job:
    name: publish-monasca-persister-docker-image
    parent: build-monasca-docker-image
    post-run: playbooks/docker-publish.yml
    required-projects:
      - openstack/monasca-common
    vars:
      publisher: true
    secrets:
      - doker_hub_login_persister

- secret:
    name: doker_hub_login_persister
    data:
      user: !encrypted/pkcs1-oaep
        - Y3eu2U5qjOTAILazUyN3PzGCsRucSK3CxZta+z5P2xNl5oqtAdEt/SxkBMat86eL13bVm
          AqSS5BQ6qafwKYc9UtP2dVuQkyLpWYqFTcOdpGK5KvQaQwz9x2PGAmDSqzY5u8vtFF05K
          bs3kn2Xa74Z1cfLOrlxGXzd09KIwxPCqYMW8zEx3qiIydDXLq99UoDUsxhCXj65vqXcfG
          yBVxFCnhNAS0kHd6gHM3Nnwi7bOLkRRDnxZb/WgyeqfB5qJsyZTGGU+CFAIpfa10hO0jU
          JUcV8RRuI6Jd2BQTngW/f2Py8lUNhQW6gL31XV1Lr7j3bxPY07EPJuEBFEOUSDeWd3cmX
          EP0iaJt//ZGNLIsn7C8KN38Gce4z681moWvK9eJQzfeVTaNXE4nUlNveWC/DUSfeULHMZ
          uRGXId0LOgWIxm3KNhUWUBKLVVJDU/+M3E1jAPcE7uZ1zC2fUI19X+3AwW1fx5wiZqKik
          niA9Sb5nwz1gA+ZnTn/AVot9pS0+VThGqU4AfmKowZn4uIHP+WKoLxvZalTgUYrHs+YQ1
          DYgZMGIVG8PRhVdIz96ALbw/ZaI4RM4y1Hh2r4IQgC6pbxrnSMvddq7wi29OETP3aRuJc
          Za/epFgYpEhVMTO4E47PYStiYqHoFser7sgPCs7U3ngxxkt8lXwJcin/yCQt6k=
      password: !encrypted/pkcs1-oaep
        - d207Nu/DU44eyy+2IjaOaF+bELwHsk4eOFK+3+VgZ4GwA5pv87bbUtWaj9rzwcPL0ZGRr
          /2P9+4gUE//DFjT/gH5FkrkLCb91PaflPibjfJg2+mloT3/8q4lJJU32zcmXWoKeP9A+I
          i8V1outQZP8ggsVWxAjC3595kglupbKX2BwcbbGlrqX/jdIZ8x7aCIpo08zjTKse+se3d
          zmojDSGXOCRof82FKhmA5uMQjHK7a3o29rHe0AW4kylLRosm7IG+JQ1BgFIh+5OlSeAdw
          VW7YJVXuW5TMzUwpXwWcfOUUSW655gmFFvXcpQaDu7Ad6Zg2QGpIyuInnYNAw1ZViWUE2
          kmQ2SdlqHtRDamPvdR/o3h6Y1Pr+VFote/wek46KpsOeWATNOjUaqFu7wGtUQV6PSkUYo
          hLyVLt3EkYIZmHI6ydQIeYCSM2O1wiuiwVNmgJ7S/6O2ZES68lplh3d9hzdUmoIz9u6j7
          +tkK1IcXFtq/45AgKD9iCCGKcE13RLzBBC+Qgt1SBBywgZiQ+q9GEFmz55ETpiXJttsn8
          /ed2JmJoJN5g9fNYqNrbj3SM3PMg1DqyphejoNCWWeWnJAdPoF5N2HfDUB3FiWfrYaC/C
          wB/W2TARkMG7FlE+blYeQayjlet9b02SmsQtFcyIHNzE5mjkg4SdPMJY0buxZo=
@@ -1,19 +0,0 @@
The source repository for this project can be found at:

   https://opendev.org/openstack/monasca-persister

Pull requests submitted through GitHub are not monitored.

To start contributing to OpenStack, follow the steps in the contribution guide
to set up and use Gerrit:

   https://docs.openstack.org/contributors/code-and-documentation/quick-start.html

Bugs should be filed on Storyboard:

   https://storyboard.openstack.org/#!/project/871

For more specific information about contributing to this repository, see the
Monasca contributor guide:

   https://docs.openstack.org/monasca-api/latest/contributor/contributing.html
README.rst
@@ -1,85 +1,9 @@
Monasca Persister
=================
This project is no longer maintained.

.. image:: https://governance.openstack.org/tc/badges/monasca-persister.svg
    :target: https://governance.openstack.org/tc/reference/tags/index.html
The contents of this repository are still available in the Git
source code management system. To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".
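
For example, assuming a fresh local clone of the repository, the
pre-retirement tree can be restored with:

::

    $ git clone https://opendev.org/openstack/monasca-persister
    $ cd monasca-persister
    $ git checkout HEAD^1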

.. Change things from this point on

The Monasca Persister consumes metrics and alarm state transitions
from the Apache Kafka message queue and stores them in the time series
database.


Running
=======

To install the Python monasca-persister modules, git clone the source
and run the following command:

::

    $ pip install -c https://releases.openstack.org/constraints/upper/master -e ./monasca-persister

To run the unit tests use:

::

    $ tox -e py36

To start the persister run:

::

    $ monasca-persister --config-file=monasca-persister.conf


Configuration
=============

A sample configuration file can be generated using the Oslo standards
used in other OpenStack projects.

::

    tox -e genconfig

The result will be in ./etc/monasca/monasca-persister.conf.sample

If the deployment is using the Docker files, the configuration template
can be found in docker/monasca-persister.conf.j2.


Java
====

For information on the Java implementation, see `java/Readme.rst <java/Readme.rst>`_.


Contributing and Reporting Bugs
===============================

Ongoing work for the Monasca project is tracked in Storyboard_.


License
=======

Copyright (c) 2014 Hewlett-Packard Development Company, L.P.

Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at

::

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


.. _Storyboard: https://storyboard.openstack.org
For any further questions, please email openstack-discuss@lists.openstack.org
or join #openstack-dev on OFTC.
@@ -1,5 +0,0 @@
# This is a cross-platform list tracking distribution packages needed for install and tests;
# see http://docs.openstack.org/infra/bindep/ for additional information.

maven
openjdk-8-jdk
@@ -1,30 +0,0 @@
#!/bin/sh
set -x
ME=`whoami`
echo "Running as user: $ME"
MVN=$1
VERSION=$2
BRANCH=$3

check_user() {
    ME=$1
    if [ "${ME}" != "zuul" ]; then
        echo "\nERROR: Download monasca-common and run mvn install to install the monasca-common jars\n" 1>&2
        exit 1
    fi
}

BUILD_COMMON=false
POM_FILE=~/.m2/repository/monasca-common/monasca-common/${VERSION}/monasca-common-${VERSION}.pom
if [ ! -r "${POM_FILE}" ]; then
    check_user "${ME}"
    BUILD_COMMON=true
fi

# This should only be done on the stack forge system
if [ "${BUILD_COMMON}" = "true" ]; then
    git clone -b ${BRANCH} https://opendev.org/openstack/monasca-common --depth 1
    cd monasca-common
    ${MVN} clean
    ${MVN} install
fi
@@ -1,46 +0,0 @@
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <groupId>monasca-persister</groupId>
  <artifactId>monasca-persister-common</artifactId>
  <version>1.0.0-SNAPSHOT</version>
  <url>http://github.com/openstack/monasca-persister</url>
  <packaging>pom</packaging>

  <properties>
    <!-- Versioning -->
    <exec.args>${project.version}</exec.args>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
  </properties>

  <build>
    <plugins>
      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>exec-maven-plugin</artifactId>
        <version>1.1.1</version>
        <executions>
          <execution>
            <id>package-execution</id>
            <phase>validate</phase>
            <goals>
              <goal>exec</goal>
            </goals>
          </execution>
          <execution>
            <id>package-execution</id>
            <phase>clean</phase>
            <goals>
              <goal>exec</goal>
            </goals>
          </execution>
        </executions>
        <configuration>
          <executable>./build_common.sh</executable>
        </configuration>
      </plugin>
    </plugins>
  </build>
</project>
@@ -1,7 +0,0 @@
# config-generator

To generate a sample configuration execute

```sh
tox -e genconfig
```
@@ -1,6 +0,0 @@
[DEFAULT]
output_file = etc/monasca/monasca-persister.conf.sample
wrap_width = 90
format = ini
namespace = monasca_persister
namespace = oslo.log
@@ -1,3 +0,0 @@
sphinx>=2.0.0,!=2.1.0 # BSD
reno>=3.1.0 # Apache-2.0
openstackdocstheme>=2.2.1 # Apache-2.0
@@ -1,67 +0,0 @@
ARG DOCKER_IMAGE=monasca/persister
ARG APP_REPO=https://review.opendev.org/openstack/monasca-persister

# Branch, tag or git hash to build from.
ARG REPO_VERSION=master
ARG CONSTRAINTS_BRANCH=master

# Extra Python3 dependencies.
ARG EXTRA_DEPS="influxdb"

# Always start from `monasca-base` image and use specific tag of it.
ARG BASE_TAG=master
FROM monasca/base:$BASE_TAG

# Environment variables used for our service or wait scripts.
ENV \
    DEBUG=false \
    VERBOSE=true \
    LOG_LEVEL=WARNING \
    LOG_LEVEL_KAFKA=WARNING \
    LOG_LEVEL_INFLUXDB=WARNING \
    LOG_LEVEL_CASSANDRA=WARNING \
    ZOOKEEPER_URI=zookeeper:2181 \
    KAFKA_URI=kafka:9092 \
    KAFKA_ALARM_HISTORY_BATCH_SIZE=1000 \
    KAFKA_ALARM_HISTORY_GROUP_ID=1_events \
    KAFKA_ALARM_HISTORY_PROCESSORS=1 \
    KAFKA_ALARM_HISTORY_WAIT_TIME=15 \
    KAFKA_EVENTS_ENABLE="false" \
    KAFKA_LEGACY_CLIENT_ENABLED=false \
    KAFKA_METRICS_BATCH_SIZE=1000 \
    KAFKA_METRICS_GROUP_ID=1_metrics \
    KAFKA_METRICS_PROCESSORS=1 \
    KAFKA_METRICS_WAIT_TIME=15 \
    KAFKA_WAIT_FOR_TOPICS=alarm-state-transitions,metrics \
    DATABASE_BACKEND=influxdb \
    INFLUX_HOST=influxdb \
    INFLUX_PORT=8086 \
    INFLUX_USER=mon_persister \
    INFLUX_PASSWORD=password \
    INFLUX_DB=mon \
    INFLUX_IGNORE_PARSE_POINT_ERROR="false" \
    CASSANDRA_HOSTS=cassandra \
    CASSANDRA_PORT=8086 \
    CASSANDRA_USER=mon_persister \
    CASSANDRA_PASSWORD=password \
    CASSANDRA_KEY_SPACE=monasca \
    CASSANDRA_CONNECTION_TIMEOUT=5 \
    CASSANDRA_MAX_CACHE_SIZE=20000000 \
    CASSANDRA_RETENTION_POLICY=45 \
    STAY_ALIVE_ON_FAILURE="false"

# Copy all necessary files to proper locations.
COPY monasca-persister.conf.j2 persister-logging.conf.j2 /etc/monasca/

# Run here all additional steps your service needs post installation.
# Stay with only one `RUN` and chain further steps with `&& \` so as not to
# create unnecessary image layers. Clean up at the end to conserve space.
#RUN \
#    echo "Some steps to do after main installation." && \
#    echo "Hello when building."

# Expose port for specific service.
#EXPOSE 1234

# Implement start script in `start.sh` file.
CMD ["/start.sh"]
@@ -1,91 +0,0 @@
==================================
Docker image for Monasca persister
==================================
The Monasca persister image is based on the monasca-base image.


Building monasca-base image
===========================
See https://github.com/openstack/monasca-common/tree/master/docker/README.rst


Building Docker image
=====================

Example:
  $ ./build_image.sh <repository_version> <upper_constraints_branch> <common_version>

Everything after ``./build_image.sh`` is optional and by default configured
to get versions from ``Dockerfile``. ``./build_image.sh`` also contains a more
detailed build description.

Environment variables
~~~~~~~~~~~~~~~~~~~~~
=============================== ================= ================================================
Variable                        Default           Description
=============================== ================= ================================================
DEBUG                           false             If true, enable debug logging
VERBOSE                         true              If true, enable info logging
ZOOKEEPER_URI                   zookeeper:2181    The host and port for zookeeper
KAFKA_URI                       kafka:9092        The host and port for kafka
KAFKA_ALARM_HISTORY_BATCH_SIZE  1000              Kafka consumer takes messages in a batch
KAFKA_ALARM_HISTORY_GROUP_ID    1_events          Kafka group from which persister gets alarm history
KAFKA_ALARM_HISTORY_PROCESSORS  1                 Number of processes for alarm history topic
KAFKA_ALARM_HISTORY_WAIT_TIME   15                Seconds to wait if the batch size is not reached
KAFKA_EVENTS_ENABLE             false             Enable events persister
KAFKA_LEGACY_CLIENT_ENABLED     false             Enable legacy Kafka client
KAFKA_METRICS_BATCH_SIZE        1000              Kafka consumer takes messages in a batch
KAFKA_METRICS_GROUP_ID          1_metrics         Kafka group from which persister gets metrics
KAFKA_METRICS_PROCESSORS        1                 Number of processes for metrics topic
KAFKA_METRICS_WAIT_TIME         15                Seconds to wait if the batch size is not reached
DATABASE_BACKEND                influxdb          Which backend database to use
INFLUX_HOST                     influxdb          The host for influxdb
INFLUX_PORT                     8086              The port for influxdb
INFLUX_USER                     mon_persister     The influx username
INFLUX_PASSWORD                 password          The influx password
INFLUX_DB                       mon               The influx database name
INFLUX_IGNORE_PARSE_POINT_ERROR false             Don't exit on InfluxDB parse point errors
CASSANDRA_HOSTS                 cassandra         Cassandra node addresses
CASSANDRA_PORT                  8086              Cassandra port number
CASSANDRA_USER                  mon_persister     Cassandra user name
CASSANDRA_PASSWORD              password          Cassandra password
CASSANDRA_KEY_SPACE             monasca           Keyspace name where metrics are stored
CASSANDRA_CONNECTION_TIMEOUT    5                 Cassandra timeout in seconds
CASSANDRA_MAX_CACHE_SIZE        20000000          Maximum number of cached metric definition entries in memory
CASSANDRA_RETENTION_POLICY      45                Data retention period in days
STAY_ALIVE_ON_FAILURE           false             If true, the container runs for 2 hours even if the start fails
=============================== ================= ================================================
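
The batch-size and wait-time settings above work together: a consumer flushes
messages to the database either when the configured batch size is reached or
when the wait time expires, whichever comes first. A minimal illustrative
sketch of that pattern in plain Python (not the persister's actual consumer
code; the queue and the function name are assumptions for the example):

::

    import time
    from queue import Queue, Empty

    def batches(msg_queue: Queue, batch_size=1000, max_wait_time_seconds=15):
        """Yield batches when batch_size is reached or the wait time expires."""
        batch = []
        deadline = time.monotonic() + max_wait_time_seconds
        while True:
            try:
                # Wait no longer than the remaining time in this window.
                batch.append(msg_queue.get(timeout=max(0.0, deadline - time.monotonic())))
            except Empty:
                pass
            if len(batch) >= batch_size or time.monotonic() >= deadline:
                if batch:
                    yield batch  # hand the full (or timed-out) batch to the repository driver
                batch = []
                deadline = time.monotonic() + max_wait_time_seconds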

Wait scripts environment variables
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
======================= ================================ =========================================
Variable                Default                          Description
======================= ================================ =========================================
KAFKA_URI               kafka:9092                       URI to Apache Kafka (distributed
                                                         streaming platform)
KAFKA_WAIT_FOR_TOPICS   alarm-state-transitions,metrics  The topics where metric-api streams
                                                         the metric messages and alarm-states
KAFKA_WAIT_RETRIES      24                               Number of kafka connect attempts
KAFKA_WAIT_DELAY        5                                Seconds to wait between attempts
======================= ================================ =========================================

Scripts
~~~~~~~
start.sh
  This start script should contain all steps that lead to a proper service
  start, including running the wait scripts and templating the configuration
  files. It can also keep the container running after the service has died,
  for easier debugging.

build_image.sh
  Please read the detailed build description inside the script.

Provide Configuration templates
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* monasca-persister.conf.j2
* persister-logging.conf.j2


Links
~~~~~
https://github.com/openstack/monasca-persister/tree/master/monasca_persister
@@ -1,150 +0,0 @@
#!/bin/bash

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

# TODO(Dobroslaw): move this script to monasca-common/docker folder
# and leave here small script to download it and execute using env variables
# to minimize code duplication.

set -x  # Print each script step.
set -eo pipefail  # Exit the script if any statement returns error.

# This script is used for building Docker image with proper labels
# and proper version of monasca-common.
#
# Example usage:
#   $ ./build_image.sh <repository_version> <upper_constraints_branch> <common_version>
#
# Everything after `./build_image.sh` is optional and by default configured
# to get versions from `Dockerfile`.
#
# To build from master branch (default):
#   $ ./build_image.sh
# To build specific version run this script in the following way:
#   $ ./build_image.sh stable/queens
# Building from specific commit:
#   $ ./build_image.sh cb7f226
# When building from a tag monasca-common will be used in version available
# in upper constraint file:
#   $ ./build_image.sh 2.5.0
# To build image from Gerrit patch sets that are targeting branch stable/queens:
#   $ ./build_image.sh refs/changes/51/558751/1 stable/queens
#
# If you want to build image with custom monasca-common version you need
# to provide it as in the following example:
#   $ ./build_image.sh master master refs/changes/19/595719/3

# Go to folder with Docker files.
REAL_PATH=$(python3 -c "import os,sys; print(os.path.realpath('$0'))")
cd "$(dirname "$REAL_PATH")/../docker/"

[ -z "$DOCKER_IMAGE" ] && \
    DOCKER_IMAGE=$(\grep DOCKER_IMAGE Dockerfile | cut -f2 -d"=")

: "${REPO_VERSION:=$1}"
[ -z "$REPO_VERSION" ] && \
    REPO_VERSION=$(\grep REPO_VERSION Dockerfile | cut -f2 -d"=")
# Let's stick to more readable version and disable SC2001 here.
# shellcheck disable=SC2001
REPO_VERSION_CLEAN=$(echo "$REPO_VERSION" | sed 's|/|-|g')

[ -z "$APP_REPO" ] && APP_REPO=$(\grep APP_REPO Dockerfile | cut -f2 -d"=")
GITHUB_REPO=$(echo "$APP_REPO" | sed 's/review.opendev.org/github.com/' | \
              sed 's/ssh:/https:/')

if [ -z "$CONSTRAINTS_FILE" ]; then
    CONSTRAINTS_FILE=$(\grep CONSTRAINTS_FILE Dockerfile | cut -f2 -d"=") || true
    : "${CONSTRAINTS_FILE:=https://opendev.org/openstack/requirements/raw/branch/master/upper-constraints.txt}"
fi

: "${CONSTRAINTS_BRANCH:=$2}"
[ -z "$CONSTRAINTS_BRANCH" ] && \
    CONSTRAINTS_BRANCH=$(\grep CONSTRAINTS_BRANCH Dockerfile | cut -f2 -d"=")

# When using stable version of repository use same stable constraints file.
case "$REPO_VERSION" in
    *stable*)
        CONSTRAINTS_BRANCH_CLEAN="$REPO_VERSION"
        CONSTRAINTS_FILE=${CONSTRAINTS_FILE/master/$CONSTRAINTS_BRANCH_CLEAN}
        # Get monasca-common version from stable upper constraints file.
        CONSTRAINTS_TMP_FILE=$(mktemp)
        wget --output-document "$CONSTRAINTS_TMP_FILE" \
            $CONSTRAINTS_FILE
        UPPER_COMMON=$(\grep 'monasca-common' "$CONSTRAINTS_TMP_FILE")
        # Get only version part from monasca-common.
        UPPER_COMMON_VERSION="${UPPER_COMMON##*===}"
        rm -rf "$CONSTRAINTS_TMP_FILE"
        ;;
    *)
        CONSTRAINTS_BRANCH_CLEAN="$CONSTRAINTS_BRANCH"
        ;;
esac

# Monasca-common variables.
if [ -z "$COMMON_REPO" ]; then
    COMMON_REPO=$(\grep COMMON_REPO Dockerfile | cut -f2 -d"=") || true
    : "${COMMON_REPO:=https://review.opendev.org/openstack/monasca-common}"
fi
: "${COMMON_VERSION:=$3}"
if [ -z "$COMMON_VERSION" ]; then
    COMMON_VERSION=$(\grep COMMON_VERSION Dockerfile | cut -f2 -d"=") || true
    if [ "$UPPER_COMMON_VERSION" ]; then
        # Common from upper constraints file.
        COMMON_VERSION="$UPPER_COMMON_VERSION"
    fi
fi

# Clone the project to a temporary directory to get the proper commit hash
# from branches and tags. We need this to set proper image labels.
# Docker does not allow reading data from the host system while building
# an image.
TMP_DIR=$(mktemp -d)
(
    cd "$TMP_DIR"
    # This many steps are needed to support gerrit patch sets.
    git init
    git remote add origin "$APP_REPO"
    git fetch origin "$REPO_VERSION"
    git reset --hard FETCH_HEAD
)
GIT_COMMIT=$(git -C "$TMP_DIR" rev-parse HEAD)
[ -z "${GIT_COMMIT}" ] && echo "No git commit hash found" && exit 1
rm -rf "$TMP_DIR"

# Do the same for monasca-common.
COMMON_TMP_DIR=$(mktemp -d)
(
    cd "$COMMON_TMP_DIR"
    # This many steps are needed to support gerrit patch sets.
    git init
    git remote add origin "$COMMON_REPO"
    git fetch origin "$COMMON_VERSION"
    git reset --hard FETCH_HEAD
)
COMMON_GIT_COMMIT=$(git -C "$COMMON_TMP_DIR" rev-parse HEAD)
[ -z "${COMMON_GIT_COMMIT}" ] && echo "No git commit hash found" && exit 1
rm -rf "$COMMON_TMP_DIR"

CREATION_TIME=$(date -u +"%Y-%m-%dT%H:%M:%SZ")

docker build --no-cache \
    --build-arg CREATION_TIME="$CREATION_TIME" \
    --build-arg GITHUB_REPO="$GITHUB_REPO" \
    --build-arg APP_REPO="$APP_REPO" \
    --build-arg REPO_VERSION="$REPO_VERSION" \
    --build-arg GIT_COMMIT="$GIT_COMMIT" \
    --build-arg CONSTRAINTS_FILE="$CONSTRAINTS_FILE" \
    --build-arg COMMON_REPO="$COMMON_REPO" \
    --build-arg COMMON_VERSION="$COMMON_VERSION" \
    --build-arg COMMON_GIT_COMMIT="$COMMON_GIT_COMMIT" \
    --tag "$DOCKER_IMAGE":"$REPO_VERSION_CLEAN" .
@@ -1,27 +0,0 @@
#!/usr/bin/env python
# coding=utf-8

# (C) Copyright 2018 FUJITSU LIMITED
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Health check returns 0 when the service is working properly."""


def main():
    """Health check for monasca-persister."""
    # TODO(Christian Brandstetter) wait for health check endpoint ...
    return 0


if __name__ == '__main__':
    main()
@@ -1,96 +0,0 @@
[DEFAULT]
# Provide logging configuration
log_config_append = /etc/monasca/persister-logging.conf

# Show debugging output in logs (sets DEBUG log level output)
debug = {{ DEBUG }}
# Show more verbose log output (sets INFO log level output) if debug is False
verbose = {{ VERBOSE }}

[repositories]
{% if DATABASE_BACKEND | lower == 'cassandra' %}
# The cassandra driver to use for the metrics repository
metrics_driver = monasca_persister.repositories.cassandra.metrics_repository:MetricCassandraRepository

# The cassandra driver to use for the alarm state history repository
alarm_state_history_driver = monasca_persister.repositories.cassandra.alarm_state_history_repository:AlarmStateHistCassandraRepository
{% else %}
# The influxdb driver to use for the metrics repository
metrics_driver = monasca_persister.repositories.influxdb.metrics_repository:MetricInfluxdbRepository

# The influxdb driver to use for the alarm state history repository
alarm_state_history_driver = monasca_persister.repositories.influxdb.alarm_state_history_repository:AlarmStateHistInfluxdbRepository

# Don't exit on InfluxDB parse point errors
ignore_parse_point_error = {{ INFLUX_IGNORE_PARSE_POINT_ERROR }}
{% endif %}

[zookeeper]
# Comma separated list of host:port
uri = {{ ZOOKEEPER_URI }}
partition_interval_recheck_seconds = 15

[kafka_alarm_history]
# Comma separated list of Kafka broker host:port.
uri = {{ KAFKA_URI }}
group_id = {{ KAFKA_ALARM_HISTORY_GROUP_ID }}
topic = alarm-state-transitions
consumer_id = 1
client_id = 1
batch_size = {{ KAFKA_ALARM_HISTORY_BATCH_SIZE }}
max_wait_time_seconds = {{ KAFKA_ALARM_HISTORY_WAIT_TIME }}
# The following 3 values are set to the kafka-python defaults
fetch_size_bytes = 4096
buffer_size = 4096
# 8 times buffer size
max_buffer_size = 32768
# Path in zookeeper for kafka consumer group partitioning algo
zookeeper_path = /persister_partitions/alarm-state-transitions
num_processors = {{ KAFKA_ALARM_HISTORY_PROCESSORS | default(1) }}
legacy_kafka_client_enabled = {{ KAFKA_LEGACY_CLIENT_ENABLED | default(false) }}

[kafka_events]
# Comma separated list of Kafka broker host:port.
uri = {{ KAFKA_URI }}
enabled = {{ KAFKA_EVENTS_ENABLE | default(false) }}
group_id = 1_events
topic = monevents
batch_size = 1

[kafka_metrics]
# Comma separated list of Kafka broker host:port
uri = {{ KAFKA_URI }}
group_id = {{ KAFKA_METRICS_GROUP_ID }}
topic = metrics
consumer_id = 1
client_id = 1
batch_size = {{ KAFKA_METRICS_BATCH_SIZE }}
max_wait_time_seconds = {{ KAFKA_METRICS_WAIT_TIME }}
# The following 3 values are set to the kafka-python defaults
fetch_size_bytes = 4096
buffer_size = 4096
# 8 times buffer size
max_buffer_size = 32768
# Path in zookeeper for kafka consumer group partitioning algo
zookeeper_path = /persister_partitions/metrics
num_processors = {{ KAFKA_METRICS_PROCESSORS | default(1) }}
legacy_kafka_client_enabled = {{ KAFKA_LEGACY_CLIENT_ENABLED | default(false) }}

{% if DATABASE_BACKEND | lower == 'cassandra' %}
[cassandra]
contact_points = {{ CASSANDRA_HOSTS }}
port = {{ CASSANDRA_PORT }}
keyspace = {{ CASSANDRA_KEY_SPACE }}
user = {{ CASSANDRA_USER }}
password = {{ CASSANDRA_PASSWORD }}
connection_timeout = {{ CASSANDRA_CONNECTION_TIMEOUT }}
max_definition_cache_size = {{ CASSANDRA_MAX_CACHE_SIZE }}
retention_policy = {{ CASSANDRA_RETENTION_POLICY }}
{% else %}
[influxdb]
database_name = {{ INFLUX_DB }}
ip_address = {{ INFLUX_HOST }}
port = {{ INFLUX_PORT }}
user = {{ INFLUX_USER }}
password = {{ INFLUX_PASSWORD }}
{% endif %}
@@ -1,43 +0,0 @@
[loggers]
keys = root, kafka, influxdb, cassandra

[handlers]
keys = console

[formatters]
keys = generic

[logger_root]
level = {{ LOG_LEVEL }}
formatter = default
handlers = console

[logger_kafka]
qualname = kafka
level = {{ LOG_LEVEL_KAFKA }}
formatter = default
handlers = console
propagate = 0

[logger_influxdb]
qualname = influxdb
level = {{ LOG_LEVEL_INFLUXDB }}
formatter = default
handlers = console
propagate = 0

[logger_cassandra]
qualname = cassandra
level = {{ LOG_LEVEL_CASSANDRA }}
formatter = default
handlers = console
propagate = 0

[handler_console]
class = logging.StreamHandler
args = (sys.stderr,)
level = DEBUG
formatter = generic

[formatter_generic]
format = %(asctime)s %(levelname)s [%(name)s][%(threadName)s] %(message)s
@@ -1,42 +0,0 @@
#!/bin/sh

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

# Starting script.
# All checks and configuration templating you need to do before service
# could be safely started should be added in this file.

set -eo pipefail  # Exit the script if any statement returns error.

# Test services we need before starting our service.
echo "Start script: waiting for needed services"
python3 /kafka_wait_for_topics.py

# Template all config files before start, it will use env variables.
# Read usage examples: https://pypi.org/project/Templer/
echo "Start script: creating config files from templates"
templer -v -f /etc/monasca/monasca-persister.conf.j2 /etc/monasca/monasca-persister.conf
templer -v -f /etc/monasca/persister-logging.conf.j2 /etc/monasca/persister-logging.conf

# Start our service.
# gunicorn --args
echo "Start script: starting container"
monasca-persister --config-file /etc/monasca/monasca-persister.conf

# Allow server to stay alive in case of failure for 2 hours for debugging.
RESULT=$?
if [ $RESULT != 0 ] && [ "$STAY_ALIVE_ON_FAILURE" = "true" ]; then
    echo "Service died, waiting 120 min before exiting"
    sleep 7200
fi
exit $RESULT
@@ -1,50 +0,0 @@
[loggers]
keys = root, kafka, influxdb, cassandra

[handlers]
keys = console, file

[formatters]
keys = generic

[logger_root]
level = INFO
formatter = default
handlers = console, file

[logger_kafka]
qualname = kafka
level = INFO
formatter = default
handlers = console, file
propagate = 0

[logger_influxdb]
qualname = influxdb
level = INFO
formatter = default
handlers = console, file
propagate = 0

[logger_cassandra]
qualname = cassandra
level = INFO
formatter = default
handlers = console, file
propagate = 0

[handler_console]
class = logging.StreamHandler
args = (sys.stderr,)
level = DEBUG
formatter = generic

[handler_file]
class = logging.handlers.RotatingFileHandler
level = DEBUG
formatter = generic
# store up to 5*100MB of logs
args = ('/var/log/monasca/persister/persister.log', 'a', 104857600, 5)

[formatter_generic]
format = %(asctime)s %(levelname)s [%(name)s][%(threadName)s] %(message)s
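
The positional args tuple in [handler_file] above is what logging.config.fileConfig
passes straight to the handler class. For reference, a sketch of the equivalent
handler built directly with the Python standard library (same path and limits as
in the config):

```python
import logging.handlers

# 100 MB per file, 5 archived files kept -> up to roughly 500 MB of logs on disk.
handler = logging.handlers.RotatingFileHandler(
    '/var/log/monasca/persister/persister.log',
    mode='a',
    maxBytes=104857600,
    backupCount=5,
)
```
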
@@ -1,68 +0,0 @@
monasca-persister
=================

.. warning::

    Java implementation of monasca-persister is deprecated as of the Train release.

The Monasca Persister consumes metrics and alarm state transitions
from the Apache Kafka message queue and stores them in the time series
database.

Although the Persister isn't primarily a Web service it uses DropWizard,
https://dropwizard.github.io/dropwizard/, which provides a nice Web
application framework to expose an http endpoint that provides an
interface through which metrics about the Persister can be queried as
well as health status.

The basic design of the Persister is to have one Kafka consumer publish
to a Disruptor, https://github.com/LMAX-Exchange/disruptor, that has
output processors. The output processors use prepared batch statements
to write to the Metrics and Alarms database.

The number of output processors/threads in the Persister can be
specified to scale to more messages. To horizontally scale and provide
fault-tolerance any number of Persisters can be started as consumers
from the Message Queue.

Build
=====

Requires monasca-common from
https://opendev.org/openstack/monasca-common. Download and build
following instructions in its README.rst. Then build monasca-persister
by:

::

    mvn clean package

Configuration
=============

A sample configuration file is available in
java/src/deb/etc/persister-config.yml-sample.

A second configuration file is provided in
java/src/main/resources/persister-config.yml for use with the `vagrant
"mini-mon" development environment`_.

TODO
====

The following list is historic. Current work is tracked in `Storyboard`_.

- Purge metrics on shutdown
- Add more robust offset management in Kafka. Currently, the offset is
  advanced as each message is read. If the Persister stops after the
  metric has been read and prior to it being committed to the Metrics
  and Alarms database, the metric will be lost.
- Add better handling of SQL exceptions.
- Complete health check.
- Specify and document the names of the metrics that are available for
  monitoring of the Persister.
- Document the yaml configuration parameters.

.. _vagrant "mini-mon" development environment: https://github.com/openstack/monasca-vagrant/
.. _Storyboard: https://storyboard.openstack.org
java/pom.xml
@@ -1,306 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <groupId>monasca</groupId>
  <artifactId>monasca-persister</artifactId>
  <version>1.3.0-SNAPSHOT</version>
  <url>http://github.com/openstack/monasca-persister</url>
  <packaging>jar</packaging>

  <prerequisites>
    <maven>3.0</maven>
  </prerequisites>

  <properties>
    <gitRevision></gitRevision>
    <timestamp>${maven.build.timestamp}</timestamp>
    <maven.build.timestamp.format>yyyy-MM-dd'T'HH:mm:ss</maven.build.timestamp.format>
    <computedVersion>${project.version}-${timestamp}-${gitRevision}</computedVersion>
    <computedName>${project.artifactId}-${computedVersion}</computedName>
    <mon.common.version>1.3.0-SNAPSHOT</mon.common.version>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
    <timestamp>${maven.build.timestamp}</timestamp>
    <maven.build.timestamp.format>yyyy-MM-dd'T'HH:mm:ss</maven.build.timestamp.format>
    <artifactNamedVersion>${project.artifactId}-${project.version}-${timestamp}-${buildNumber}
    </artifactNamedVersion>
    <shadedJarName>${project.artifactId}-${project.version}-shaded
    </shadedJarName>
    <!-- This line can be removed after updating openjdk-8-jdk package
         to version newer than 8u181-b13-2 and/or maven-surefire-plugin 3.0.0
         is released. -->
    <argLine>-Djdk.net.URLClassPath.disableClassPathURLCheck=true</argLine>
  </properties>

  <!--Needed for buildnumber-maven-plugin-->
  <scm>
    <connection>scm:git:git@github.com:openstack/monasca-persister</connection>
    <developerConnection>scm:git:git@github.com:openstack/monasca-persister
    </developerConnection>
  </scm>

  <dependencies>
    <dependency>
      <groupId>monasca-common</groupId>
      <artifactId>monasca-common-model</artifactId>
      <version>${mon.common.version}</version>
    </dependency>
    <dependency>
      <groupId>monasca-common</groupId>
      <artifactId>monasca-common-influxdb</artifactId>
      <version>${mon.common.version}</version>
    </dependency>
    <dependency>
      <groupId>monasca-common</groupId>
      <artifactId>monasca-common-cassandra</artifactId>
      <version>${mon.common.version}</version>
    </dependency>
    <dependency>
      <groupId>com.datastax.cassandra</groupId>
      <artifactId>cassandra-driver-core</artifactId>
      <version>3.1.0</version>
    </dependency>
    <dependency>
      <groupId>com.datastax.cassandra</groupId>
      <artifactId>cassandra-driver-mapping</artifactId>
      <version>3.1.0</version>
    </dependency>
    <dependency>
      <groupId>com.datastax.cassandra</groupId>
      <artifactId>cassandra-driver-extras</artifactId>
      <version>3.1.0</version>
    </dependency>
    <dependency>
      <groupId>org.apache.kafka</groupId>
      <artifactId>kafka_2.11</artifactId>
      <version>0.8.2.2</version>
      <exclusions>
        <exclusion>
          <groupId>com.sun.jmx</groupId>
          <artifactId>jmxri</artifactId>
        </exclusion>
        <exclusion>
          <groupId>com.sun.jdmk</groupId>
          <artifactId>jmxtools</artifactId>
        </exclusion>
        <exclusion>
          <groupId>org.slf4j</groupId>
          <artifactId>slf4j-simple</artifactId>
        </exclusion>
      </exclusions>
    </dependency>
    <dependency>
      <groupId>io.dropwizard</groupId>
      <artifactId>dropwizard-core</artifactId>
      <version>0.7.0</version>
      <exclusions>
        <exclusion>
          <groupId>com.codahale.metrics</groupId>
          <artifactId>metrics-core</artifactId>
        </exclusion>
      </exclusions>
    </dependency>
    <dependency>
      <groupId>io.dropwizard</groupId>
      <artifactId>dropwizard-jdbi</artifactId>
      <version>0.7.0</version>
    </dependency>
    <dependency>
      <groupId>com.google.inject</groupId>
      <artifactId>guice</artifactId>
      <version>3.0</version>
    </dependency>
    <dependency>
      <groupId>com.google.inject.extensions</groupId>
      <artifactId>guice-assistedinject</artifactId>
      <version>3.0</version>
    </dependency>
    <dependency>
      <groupId>com.google.guava</groupId>
      <artifactId>guava</artifactId>
      <version>17.0</version>
    </dependency>
    <dependency>
      <groupId>org.mockito</groupId>
      <artifactId>mockito-all</artifactId>
      <version>1.9.5</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.11</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>commons-codec</groupId>
      <artifactId>commons-codec</artifactId>
      <version>1.5</version>
    </dependency>
    <dependency>
      <groupId>org.influxdb</groupId>
      <artifactId>influxdb-java</artifactId>
      <version>1.5</version>
    </dependency>
    <dependency>
      <groupId>org.apache.httpcomponents</groupId>
      <artifactId>httpclient</artifactId>
      <version>4.4</version>
    </dependency>
  </dependencies>


  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>3.1</version>
        <configuration>
          <compilerArgument>-Xlint:all</compilerArgument>
          <source>1.7</source>
          <target>1.7</target>
          <encoding>UTF-8</encoding>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-shade-plugin</artifactId>
        <version>1.2</version>
        <configuration>
          <finalName>${artifactNamedVersion}</finalName>
          <createDependencyReducedPom>true</createDependencyReducedPom>
          <filters>
            <filter>
              <!-- *:* can't be used for artifact because we are using an older shade plugin -->
              <artifact>org.eclipse.jetty.orbit:javax.servlet</artifact>
              <excludes>
                <exclude>META-INF/*.SF</exclude>
                <exclude>META-INF/*.DSA</exclude>
                <exclude>META-INF/*.RSA</exclude>
              </excludes>
            </filter>
          </filters>
        </configuration>
        <executions>
          <execution>
            <phase>package</phase>
            <goals>
              <goal>shade</goal>
            </goals>
            <configuration>
              <transformers>
                <transformer
                    implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
                <transformer
                    implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                  <mainClass>monasca.persister.PersisterApplication
                  </mainClass>
                </transformer>
              </transformers>
              <shadedArtifactAttached>true</shadedArtifactAttached>
            </configuration>
          </execution>
        </executions>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-jar-plugin</artifactId>
        <version>2.4</version>
        <configuration>
          <archive>
            <manifest>
              <packageName>monasca.persister</packageName>
            </manifest>
            <manifestEntries>
              <Implementation-Version>${artifactNamedVersion}</Implementation-Version>
            </manifestEntries>
          </archive>
        </configuration>
      </plugin>
      <plugin>
        <artifactId>maven-clean-plugin</artifactId>
        <version>2.5</version>
        <configuration>
          <filesets>
            <fileset>
              <directory>${project.basedir}/debs</directory>
            </fileset>
          </filesets>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>buildnumber-maven-plugin</artifactId>
        <version>1.2</version>
        <executions>
          <execution>
            <phase>validate</phase>
            <goals>
              <goal>create</goal>
            </goals>
          </execution>
        </executions>
        <configuration>
          <doCheck>false</doCheck>
          <shortRevisionLength>6</shortRevisionLength>
        </configuration>
      </plugin>
      <plugin>
        <artifactId>maven-assembly-plugin</artifactId>
        <version>2.4.1</version>
        <configuration>
          <descriptors>
            <descriptor>src/assembly/tar.xml</descriptor>
          </descriptors>
          <finalName>${artifactNamedVersion}</finalName>
        </configuration>
        <executions>
          <execution>
            <id>make-assembly</id>
            <phase>package</phase>
            <goals>
              <goal>single</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
      <plugin>
        <artifactId>jdeb</artifactId>
        <groupId>org.vafer</groupId>
        <version>1.0.1</version>
        <executions>
          <execution>
            <phase>package</phase>
            <goals>
              <goal>jdeb</goal>
            </goals>
            <configuration>
              <deb>${project.basedir}/debs/binaries/${artifactNamedVersion}.deb</deb>
              <dataSet>
                <data>
                  <type>file</type>
                  <src>${project.build.directory}/${shadedJarName}.jar
                  </src>
                  <dst>/opt/monasca/monasca-persister.jar</dst>
                </data>
                <data>
                  <type>file</type>
                  <src>
                    ${project.basedir}/src/deb/etc/persister-config.yml-sample
                  </src>
                  <dst>/etc/monasca/persister-config.yml-sample</dst>
                </data>
              </dataSet>
            </configuration>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>


</project>
@@ -1,29 +0,0 @@
<assembly xmlns="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.2"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.2 http://maven.apache.org/xsd/assembly-1.1.2.xsd">
  <id>tar</id>
  <formats>
    <format>tar.gz</format>
  </formats>
  <fileSets>
    <fileSet>
      <directory>${project.basedir}</directory>
      <outputDirectory>/</outputDirectory>
      <includes>
        <include>README*</include>
        <include>LICENSE*</include>
      </includes>
    </fileSet>
  </fileSets>
  <files>
    <file>
      <source>${project.build.directory}/${shadedJarName}.jar</source>
      <outputDirectory>/</outputDirectory>
      <destName>monasca-persister.jar</destName>
    </file>
    <file>
      <source>${project.basedir}/src/deb/etc/persister-config.yml-sample</source>
      <outputDirectory>examples</outputDirectory>
    </file>
  </files>
</assembly>
@@ -1,9 +0,0 @@
Package: [[name]]
Section: misc
Priority: optional
Architecture: all
Depends: openjdk-7-jre-headless | openjdk-7-jre
Version: [[version]]-[[timestamp]]-[[buildNumber]]
Maintainer: HPCloud Monitoring <hpcs-mon@hp.com>
Description: Monasca-Persister
 Reads data from Kafka and inserts into the Monasca DB.
@@ -1,138 +0,0 @@
name: monasca-persister

alarmHistoryConfiguration:
  batchSize: 100
  numThreads: 2
  maxBatchTime: 15
  # See http://kafka.apache.org/documentation.html#api for semantics and defaults.
  topic: alarm-state-transitions
  groupId: 1_alarm-state-transitions
  consumerId: mini-mon
  clientId: 1

metricConfiguration:
  batchSize: 10000
  numThreads: 4
  maxBatchTime: 15
  # See http://kafka.apache.org/documentation.html#api for semantics and defaults.
  topic: metrics
  groupId: 1_metrics
  consumerId: mini-mon
  clientId: 1

#Kafka settings.
kafkaConfig:
  # See http://kafka.apache.org/documentation.html#api for semantics and defaults.
  zookeeperConnect: localhost:2181
  socketTimeoutMs: 30000
  socketReceiveBufferBytes: 65536
  fetchMessageMaxBytes: 1048576
  queuedMaxMessageChunks: 10
  rebalanceMaxRetries: 4
  fetchMinBytes: 1
  fetchWaitMaxMs: 100
  rebalanceBackoffMs: 2000
  refreshLeaderBackoffMs: 200
  autoOffsetReset: largest
  consumerTimeoutMs: 1000
  zookeeperSessionTimeoutMs: 60000
  zookeeperConnectionTimeoutMs: 60000
  zookeeperSyncTimeMs: 2000

verticaMetricRepoConfig:
  maxCacheSize: 2000000

databaseConfiguration:
  # databaseType can be (vertica | influxdb)
  databaseType: influxdb

# Uncomment if databaseType is influxdb
influxDbConfiguration:
  # Retention policy may be left blank to indicate default policy.
  retentionPolicy:
  # Used only if version is V9.
  maxHttpConnections: 100
  name: mon
  replicationFactor: 1
  url: http://localhost:8086
  user: mon_persister
  password: password

# Uncomment if databaseType is vertica
#dataSourceFactory:
#  driverClass: com.vertica.jdbc.Driver
#  url: jdbc:vertica://localhost:5433/mon
#  user: dbadmin
#  password: password
#  properties:
#    ssl: false
#  # the maximum amount of time to wait on an empty pool before throwing an exception
#  maxWaitForConnection: 1s
#
#  # the SQL query to run when validating a connection's liveness
#  validationQuery: "/* MyService Health Check */ SELECT 1"
#
#  # the minimum number of connections to keep open
#  minSize: 8
#
#  # the maximum number of connections to keep open
#  maxSize: 41
#
#  # whether or not idle connections should be validated
#  checkConnectionWhileIdle: false
#
#  # the maximum lifetime of an idle connection
#  maxConnectionAge: 1 minute

metrics:
  frequency: 1 second

# Logging settings.
logging:

  # The default level of all loggers. Can be OFF, ERROR, WARN, INFO,
  # DEBUG, TRACE, or ALL.
  level: INFO

  # Logger-specific levels.
  loggers:
    monasca: DEBUG

  appenders:
    # Uncomment to enable logging to the console:
    #- type: console
    #  threshold: DEBUG
    #  timeZone: UTC
    #  target: stdout

    - type: file
      threshold: INFO
      archive: true
      # The file to which current statements will be logged.
      currentLogFilename: /var/log/monasca/persister/monasca-persister.log

      # When the log file rotates, the archived log will be renamed to this and gzipped. The
      # %d is replaced with the previous day (yyyy-MM-dd). Custom rolling windows can be created
      # by passing a SimpleDateFormat-compatible format as an argument: "%d{yyyy-MM-dd-hh}".
      archivedLogFilenamePattern: /var/log/monasca/persister/monasca-persister.log-%d.log.gz

      # The number of archived files to keep.
      archivedFileCount: 5

      # The timezone used to format dates. HINT: USE THE DEFAULT, UTC.
      timeZone: UTC

# Uncomment to approximately match the default log format of the python
# Openstack components. %pid is unavoidably formatted with [brackets],
# which are hard-coded in dropwizard's logging module.
# See http://logback.qos.ch/manual/layouts.html#conversionWord for details of the format string
# logFormat: "%app%pid: %d{YYYY-MM-dd HH:mm:ss.SSS} %pid %level %logger [-] [%thread] %msg %ex{1}"

# Set the persister ports to 8090/8091 to avoid conflict with the api
server:
  applicationConnectors:
    - type: http
      port: 8090
  adminConnectors:
    - type: http
      port: 8091
@@ -1,236 +0,0 @@
/*
 * Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
 * implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package monasca.persister;

import com.google.common.util.concurrent.ThreadFactoryBuilder;
import com.google.inject.Guice;
import com.google.inject.Injector;
import com.google.inject.Key;
import com.google.inject.TypeLiteral;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;

import io.dropwizard.Application;
import io.dropwizard.setup.Bootstrap;
import io.dropwizard.setup.Environment;
import monasca.common.model.event.AlarmStateTransitionedEvent;
import monasca.common.model.metric.MetricEnvelope;
import monasca.persister.configuration.PersisterConfig;
import monasca.persister.consumer.ManagedConsumer;
import monasca.persister.consumer.ManagedConsumerFactory;
import monasca.persister.consumer.KafkaChannel;
import monasca.persister.consumer.KafkaChannelFactory;
import monasca.persister.consumer.KafkaConsumer;
import monasca.persister.consumer.KafkaConsumerFactory;
import monasca.persister.consumer.KafkaConsumerRunnableBasic;
import monasca.persister.consumer.KafkaConsumerRunnableBasicFactory;
import monasca.persister.healthcheck.SimpleHealthCheck;
import monasca.persister.pipeline.ManagedPipeline;
import monasca.persister.pipeline.ManagedPipelineFactory;
import monasca.persister.pipeline.event.AlarmStateTransitionHandlerFactory;
import monasca.persister.pipeline.event.MetricHandlerFactory;
import monasca.persister.resource.Resource;

public class PersisterApplication extends Application<PersisterConfig> {
  private static final Logger logger = LoggerFactory.getLogger(PersisterApplication.class);

  public static void main(String[] args) throws Exception {
    /*
     * This should allow command line options to show the current version
     * java -jar monasca-persister.jar --version
     * java -jar monasca-persister.jar -version
     * java -jar monasca-persister.jar version
     * Really anything with the word version in it will show the
     * version as long as there is only one argument
     * */
    if (args.length == 1 && args[0].toLowerCase().contains("version")) {
      showVersion();
      System.exit(0);
    }

    new PersisterApplication().run(args);
  }

  private static void showVersion() {
    Package pkg;
    pkg = Package.getPackage("monasca.persister");

    System.out.println("-------- Version Information --------");
    System.out.println(pkg.getImplementationVersion());
  }

  @Override
  public void initialize(Bootstrap<PersisterConfig> bootstrap) {
  }

  @Override
  public String getName() {
    return "monasca-persister";
  }

  @Override
  public void run(PersisterConfig configuration, Environment environment)
      throws Exception {

    Injector injector = Guice.createInjector(new PersisterModule(configuration, environment));

    // Sample resource.
    environment.jersey().register(new Resource());

    // Sample health check.
    environment.healthChecks().register("test-health-check", new SimpleHealthCheck());

    final KafkaChannelFactory kafkaChannelFactory = injector.getInstance(KafkaChannelFactory.class);

    final ManagedConsumerFactory<MetricEnvelope[]> metricManagedConsumerFactory =
        injector.getInstance(Key.get(new TypeLiteral<ManagedConsumerFactory<MetricEnvelope[]>>() {}));

    // Metrics
    final KafkaConsumerFactory<MetricEnvelope[]> kafkaMetricConsumerFactory =
        injector.getInstance(Key.get(new TypeLiteral<KafkaConsumerFactory<MetricEnvelope[]>>(){}));

    final KafkaConsumerRunnableBasicFactory<MetricEnvelope[]> kafkaMetricConsumerRunnableBasicFactory =
        injector.getInstance(
            Key.get(new TypeLiteral<KafkaConsumerRunnableBasicFactory<MetricEnvelope[]>>() {
            }));

    ThreadFactory threadFactory = new ThreadFactoryBuilder()
        .setDaemon(true)
        .build();

    int totalNumberOfThreads = configuration.getMetricConfiguration().getNumThreads()
        + configuration.getAlarmHistoryConfiguration().getNumThreads();

    ExecutorService executorService = Executors.newFixedThreadPool(totalNumberOfThreads, threadFactory);

    for (int i = 0; i < configuration.getMetricConfiguration().getNumThreads(); i++) {

      String threadId = "metric-" + String.valueOf(i);

      final KafkaChannel kafkaMetricChannel =
          kafkaChannelFactory.create(configuration.getMetricConfiguration(), threadId);

      final ManagedPipeline<MetricEnvelope[]> managedMetricPipeline =
          getMetricPipeline(configuration, threadId, injector);

      KafkaConsumerRunnableBasic<MetricEnvelope[]> kafkaMetricConsumerRunnableBasic =
          kafkaMetricConsumerRunnableBasicFactory.create(managedMetricPipeline, kafkaMetricChannel, threadId);

      final KafkaConsumer<MetricEnvelope[]> kafkaMetricConsumer =
          kafkaMetricConsumerFactory.create(kafkaMetricConsumerRunnableBasic, threadId, executorService);

      ManagedConsumer<MetricEnvelope[]> managedMetricConsumer =
          metricManagedConsumerFactory.create(kafkaMetricConsumer, threadId);

      environment.lifecycle().manage(managedMetricConsumer);
    }

    // AlarmStateTransitions
    final ManagedConsumerFactory<AlarmStateTransitionedEvent>
        alarmStateTransitionsManagedConsumerFactory = injector.getInstance(Key.get(new TypeLiteral
        <ManagedConsumerFactory<AlarmStateTransitionedEvent>>(){}));

    final KafkaConsumerFactory<AlarmStateTransitionedEvent>
        kafkaAlarmStateTransitionConsumerFactory =
        injector.getInstance(Key.get(new TypeLiteral<KafkaConsumerFactory<AlarmStateTransitionedEvent>>() { }));

    final KafkaConsumerRunnableBasicFactory<AlarmStateTransitionedEvent> kafkaAlarmStateTransitionConsumerRunnableBasicFactory =
        injector.getInstance(Key.get(new TypeLiteral<KafkaConsumerRunnableBasicFactory
        <AlarmStateTransitionedEvent>>(){})) ;

    for (int i = 0; i < configuration.getAlarmHistoryConfiguration().getNumThreads(); i++) {

      String threadId = "alarm-state-transition-" + String.valueOf(i);

      final KafkaChannel kafkaAlarmStateTransitionChannel =
          kafkaChannelFactory
              .create(configuration.getAlarmHistoryConfiguration(), threadId);

      final ManagedPipeline<AlarmStateTransitionedEvent> managedAlarmStateTransitionPipeline =
          getAlarmStateHistoryPipeline(configuration, threadId, injector);

      KafkaConsumerRunnableBasic<AlarmStateTransitionedEvent> kafkaAlarmStateTransitionConsumerRunnableBasic =
|
||||
kafkaAlarmStateTransitionConsumerRunnableBasicFactory.create(managedAlarmStateTransitionPipeline, kafkaAlarmStateTransitionChannel, threadId);
|
||||
|
||||
final KafkaConsumer<AlarmStateTransitionedEvent> kafkaAlarmStateTransitionConsumer =
|
||||
kafkaAlarmStateTransitionConsumerFactory.create(kafkaAlarmStateTransitionConsumerRunnableBasic, threadId,
|
||||
executorService);
|
||||
|
||||
ManagedConsumer<AlarmStateTransitionedEvent> managedAlarmStateTransitionConsumer =
|
||||
alarmStateTransitionsManagedConsumerFactory.create(kafkaAlarmStateTransitionConsumer, threadId);
|
||||
|
||||
environment.lifecycle().manage(managedAlarmStateTransitionConsumer);
|
||||
}
|
||||
}
|
||||
|
||||
private ManagedPipeline<MetricEnvelope[]> getMetricPipeline(
|
||||
PersisterConfig configuration,
|
||||
String threadId,
|
||||
Injector injector) {
|
||||
|
||||
logger.debug("Creating metric pipeline [{}]...", threadId);
|
||||
|
||||
final int batchSize = configuration.getMetricConfiguration().getBatchSize();
|
||||
logger.debug("Batch size for metric pipeline [{}]", batchSize);
|
||||
|
||||
MetricHandlerFactory metricEventHandlerFactory =
|
||||
injector.getInstance(MetricHandlerFactory.class);
|
||||
|
||||
ManagedPipelineFactory<MetricEnvelope[]>
|
||||
managedPipelineFactory = injector.getInstance(Key.get(new TypeLiteral
|
||||
<ManagedPipelineFactory<MetricEnvelope[]>>(){}));
|
||||
|
||||
final ManagedPipeline<MetricEnvelope[]> pipeline =
|
||||
managedPipelineFactory.create(metricEventHandlerFactory.create(
|
||||
configuration.getMetricConfiguration(), threadId, batchSize), threadId);
|
||||
|
||||
logger.debug("Instance of metric pipeline [{}] fully created", threadId);
|
||||
|
||||
return pipeline;
|
||||
}
|
||||
|
||||
public ManagedPipeline<AlarmStateTransitionedEvent> getAlarmStateHistoryPipeline(
|
||||
PersisterConfig configuration,
|
||||
String threadId,
|
||||
Injector injector) {
|
||||
|
||||
logger.debug("Creating alarm state history pipeline [{}]...", threadId);
|
||||
|
||||
int batchSize = configuration.getAlarmHistoryConfiguration().getBatchSize();
|
||||
logger.debug("Batch size for each AlarmStateHistoryPipeline [{}]", batchSize);
|
||||
|
||||
AlarmStateTransitionHandlerFactory alarmHistoryEventHandlerFactory =
|
||||
injector.getInstance(AlarmStateTransitionHandlerFactory.class);
|
||||
|
||||
ManagedPipelineFactory<AlarmStateTransitionedEvent> alarmStateTransitionPipelineFactory =
|
||||
injector.getInstance(new Key<ManagedPipelineFactory<AlarmStateTransitionedEvent>>(){});
|
||||
|
||||
ManagedPipeline<AlarmStateTransitionedEvent> pipeline =
|
||||
alarmStateTransitionPipelineFactory.create(alarmHistoryEventHandlerFactory.create(
|
||||
configuration.getAlarmHistoryConfiguration(), threadId, batchSize), threadId);
|
||||
|
||||
logger.debug("Instance of alarm state history pipeline [{}] fully created", threadId);
|
||||
|
||||
return pipeline;
|
||||
}
|
||||
}
|
||||
@@ -1,193 +0,0 @@
/*
* Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
*
* Copyright (c) 2017 SUSE LLC.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
* implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

package monasca.persister;

import com.google.inject.AbstractModule;
import com.google.inject.Scopes;
import com.google.inject.TypeLiteral;
import com.google.inject.assistedinject.FactoryModuleBuilder;

import org.skife.jdbi.v2.DBI;

import javax.inject.Singleton;

import io.dropwizard.setup.Environment;
import monasca.common.model.event.AlarmStateTransitionedEvent;
import monasca.common.model.metric.MetricEnvelope;
import monasca.persister.configuration.PersisterConfig;
import monasca.persister.consumer.ManagedConsumer;
import monasca.persister.consumer.ManagedConsumerFactory;
import monasca.persister.consumer.KafkaChannel;
import monasca.persister.consumer.KafkaChannelFactory;
import monasca.persister.consumer.KafkaConsumer;
import monasca.persister.consumer.KafkaConsumerFactory;
import monasca.persister.consumer.KafkaConsumerRunnableBasic;
import monasca.persister.consumer.KafkaConsumerRunnableBasicFactory;
import monasca.persister.dbi.DBIProvider;
import monasca.persister.pipeline.ManagedPipeline;
import monasca.persister.pipeline.ManagedPipelineFactory;
import monasca.persister.pipeline.event.AlarmStateTransitionHandler;
import monasca.persister.pipeline.event.AlarmStateTransitionHandlerFactory;
import monasca.persister.pipeline.event.MetricHandler;
import monasca.persister.pipeline.event.MetricHandlerFactory;
import monasca.persister.repository.Repo;
import monasca.persister.repository.cassandra.CassandraAlarmRepo;
import monasca.persister.repository.cassandra.CassandraCluster;
import monasca.persister.repository.cassandra.CassandraMetricRepo;
import monasca.persister.repository.influxdb.InfluxV9AlarmRepo;
import monasca.persister.repository.influxdb.InfluxV9MetricRepo;
import monasca.persister.repository.influxdb.InfluxV9RepoWriter;
import monasca.persister.repository.vertica.VerticaAlarmRepo;
import monasca.persister.repository.vertica.VerticaMetricRepo;

public class PersisterModule extends AbstractModule {

private static final String VERTICA = "vertica";
private static final String INFLUXDB = "influxdb";
private static final String CASSANDRA = "cassandra";
private static final String INFLUXDB_V9 = "v9";

private final PersisterConfig config;
private final Environment env;

public PersisterModule(PersisterConfig config, Environment env) {
this.config = config;
this.env = env;
}

@Override
protected void configure() {

bind(PersisterConfig.class).toInstance(config);
bind(Environment.class).toInstance(env);

install(
new FactoryModuleBuilder().implement(
MetricHandler.class,
MetricHandler.class)
.build(MetricHandlerFactory.class));

install(
new FactoryModuleBuilder().implement(
AlarmStateTransitionHandler.class,
AlarmStateTransitionHandler.class)
.build(AlarmStateTransitionHandlerFactory.class));

install(
new FactoryModuleBuilder().implement(
new TypeLiteral<KafkaConsumerRunnableBasic<MetricEnvelope[]>>() {},
new TypeLiteral<KafkaConsumerRunnableBasic<MetricEnvelope[]>>() {})
.build(new TypeLiteral<KafkaConsumerRunnableBasicFactory<MetricEnvelope[]>>() {}));

install(
new FactoryModuleBuilder().implement(
new TypeLiteral<KafkaConsumerRunnableBasic<AlarmStateTransitionedEvent>>() {},
new TypeLiteral<KafkaConsumerRunnableBasic<AlarmStateTransitionedEvent>>() {})
.build(new TypeLiteral<KafkaConsumerRunnableBasicFactory<AlarmStateTransitionedEvent>>() {}));

install(
new FactoryModuleBuilder().implement(
new TypeLiteral<KafkaConsumer<MetricEnvelope[]>>() {},
new TypeLiteral<KafkaConsumer<MetricEnvelope[]>>() {})
.build(new TypeLiteral<KafkaConsumerFactory<MetricEnvelope[]>>() {}));

install(
new FactoryModuleBuilder().implement(
new TypeLiteral<ManagedPipeline<MetricEnvelope[]>>() {},
new TypeLiteral<ManagedPipeline<MetricEnvelope[]>>() {})
.build(new TypeLiteral<ManagedPipelineFactory<MetricEnvelope[]>>() {}));

install(
new FactoryModuleBuilder().implement(
new TypeLiteral<ManagedPipeline<AlarmStateTransitionedEvent>>() {},
new TypeLiteral<ManagedPipeline<AlarmStateTransitionedEvent>>() {})
.build(new TypeLiteral<ManagedPipelineFactory<AlarmStateTransitionedEvent>>() {}));

install(
new FactoryModuleBuilder().implement(
new TypeLiteral<ManagedConsumer<AlarmStateTransitionedEvent>>() {},
new TypeLiteral<ManagedConsumer<AlarmStateTransitionedEvent>>() {})
.build(new TypeLiteral<ManagedConsumerFactory<AlarmStateTransitionedEvent>>() {}));

install(
new FactoryModuleBuilder().implement(
new TypeLiteral<KafkaConsumer<AlarmStateTransitionedEvent>>() {},
new TypeLiteral<KafkaConsumer<AlarmStateTransitionedEvent>>() {})
.build(new TypeLiteral<KafkaConsumerFactory<AlarmStateTransitionedEvent>>() {}));

install(
new FactoryModuleBuilder().implement(
new TypeLiteral<ManagedConsumer<MetricEnvelope[]>>() {},
new TypeLiteral<ManagedConsumer<MetricEnvelope[]>>() {})
.build(new TypeLiteral<ManagedConsumerFactory<MetricEnvelope[]>>() {}));

install(
new FactoryModuleBuilder().implement(
KafkaChannel.class, KafkaChannel.class).build(KafkaChannelFactory.class));

if (config.getDatabaseConfiguration().getDatabaseType().equalsIgnoreCase(VERTICA)) {

bind(DBI.class).toProvider(DBIProvider.class).in(Scopes.SINGLETON);

bind(new TypeLiteral<Repo<MetricEnvelope>>(){})
.to(VerticaMetricRepo.class);

bind(new TypeLiteral<Repo<AlarmStateTransitionedEvent>>(){})
.to(VerticaAlarmRepo.class);

} else if (config.getDatabaseConfiguration().getDatabaseType().equalsIgnoreCase(INFLUXDB)) {

if (config.getInfluxDBConfiguration().getVersion() != null && !config
.getInfluxDBConfiguration().getVersion().equalsIgnoreCase(INFLUXDB_V9)) {

System.err.println(
"Found unsupported Influxdb version: " + config.getInfluxDBConfiguration()
.getVersion());
System.err.println("Supported Influxdb versions are 'v9'");
|
||||
System.err.println("Check your config file");
|
||||
System.exit(1);
|
||||
}
|
||||
|
||||
bind(InfluxV9RepoWriter.class).in(Singleton.class);
|
||||
|
||||
bind(new TypeLiteral<Repo<MetricEnvelope>>() {})
|
||||
.to(InfluxV9MetricRepo.class);
|
||||
|
||||
bind(new TypeLiteral<Repo<AlarmStateTransitionedEvent>> () {})
|
||||
.to(InfluxV9AlarmRepo.class);
|
||||
|
||||
} else if (config.getDatabaseConfiguration().getDatabaseType().equalsIgnoreCase(CASSANDRA)) {
|
||||
bind(CassandraCluster.class).in(Singleton.class);
|
||||
|
||||
bind(new TypeLiteral<Repo<MetricEnvelope>>() {}).to(CassandraMetricRepo.class);
|
||||
|
||||
bind(new TypeLiteral<Repo<AlarmStateTransitionedEvent>>() {}).to(CassandraAlarmRepo.class);
|
||||
|
||||
} else {
|
||||
|
||||
System.err.println(
|
||||
"Found unknown database type: " + config.getDatabaseConfiguration().getDatabaseType());
|
||||
System.err.println("Supported databases are 'vertica' and 'influxdb'");
|
||||
System.err.println("Check your config file.");
|
||||
System.exit(1);
|
||||
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -1,197 +0,0 @@
/*
* Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
* implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

package monasca.persister.configuration;

import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.annotation.JsonProperty;

@JsonIgnoreProperties(ignoreUnknown=true)
public class KafkaConfig {

@JsonProperty
String topic;

@JsonProperty
String zookeeperConnect;
String _zookeeperConnect = "127.0.0.1";

@JsonProperty
Integer socketTimeoutMs;
Integer _socketTimeoutMs = 30000;

@JsonProperty
Integer socketReceiveBufferBytes;
Integer _socketReceiveBufferBytes = 65536;

@JsonProperty
Integer fetchMessageMaxBytes;
Integer _fetchMessageMaxBytes = 1048576;

@JsonProperty
Integer queuedMaxMessageChunks;
Integer _queuedMaxMessageChunks = 10;

@JsonProperty
Integer rebalanceMaxRetries;
Integer _rebalanceMaxRetries = 4;

@JsonProperty
Integer fetchMinBytes;
Integer _fetchMinBytes = 1;

@JsonProperty
Integer fetchWaitMaxMs;
Integer _fetchWaitMaxMs = 100;

@JsonProperty
Integer rebalanceBackoffMs;
Integer _rebalanceBackoffMs = 2000;

@JsonProperty
Integer refreshLeaderBackoffMs;
Integer _refreshLeaderBackoffMs = 200;

@JsonProperty
String autoOffsetReset;
String _autoOffsetReset = "largest";

@JsonProperty
Integer consumerTimeoutMs;
Integer _consumerTimeoutMs = 1000;

@JsonProperty
Integer zookeeperSessionTimeoutMs;
Integer _zookeeperSessionTimeoutMs = 60000;

@JsonProperty
Integer zookeeperConnectionTimeoutMs;
Integer _zookeeperConnectionTimeoutMs = 60000;

@JsonProperty
Integer zookeeperSyncTimeMs;
Integer _zookeeperSyncTimeMs = 2000;

public String getTopic() {
return topic;
}

public String getZookeeperConnect() {
if ( zookeeperConnect == null ) {
return _zookeeperConnect;
}
return zookeeperConnect;
}

public Integer getSocketTimeoutMs() {
if ( socketTimeoutMs == null ) {
return _socketTimeoutMs;
}
return socketTimeoutMs;
}

public Integer getSocketReceiveBufferBytes() {
if ( socketReceiveBufferBytes == null ) {
return _socketReceiveBufferBytes;
}
return socketReceiveBufferBytes;
}

public Integer getFetchMessageMaxBytes() {
if ( fetchMessageMaxBytes == null ) {
return _fetchMessageMaxBytes;
}
return fetchMessageMaxBytes;
}

public Integer getQueuedMaxMessageChunks() {
if ( queuedMaxMessageChunks == null ) {
return _queuedMaxMessageChunks;
}
return queuedMaxMessageChunks;
}

public Integer getRebalanceMaxRetries() {
if ( rebalanceMaxRetries == null ) {
return _rebalanceMaxRetries;
}
return rebalanceMaxRetries;
}

public Integer getFetchMinBytes() {
if ( fetchMinBytes == null ) {
return _fetchMinBytes;
}
return fetchMinBytes;
}

public Integer getFetchWaitMaxMs() {
if ( fetchWaitMaxMs == null ) {
return _fetchWaitMaxMs;
}
return fetchWaitMaxMs;
}

public Integer getRebalanceBackoffMs() {
if ( rebalanceBackoffMs == null ) {
return _rebalanceBackoffMs;
}
return rebalanceBackoffMs;
}

public Integer getRefreshLeaderBackoffMs() {
if ( refreshLeaderBackoffMs == null ) {
return _refreshLeaderBackoffMs;
}
return refreshLeaderBackoffMs;
}

public String getAutoOffsetReset() {
if ( autoOffsetReset == null ) {
return _autoOffsetReset;
}
return autoOffsetReset;
}

public Integer getConsumerTimeoutMs() {
if ( consumerTimeoutMs == null ) {
return _consumerTimeoutMs;
}
return consumerTimeoutMs;
}

public Integer getZookeeperSessionTimeoutMs() {
if ( zookeeperSessionTimeoutMs == null ) {
return _zookeeperSessionTimeoutMs;
}
return zookeeperSessionTimeoutMs;
}

public Integer getZookeeperConnectionTimeoutMs() {
if ( zookeeperConnectionTimeoutMs == null ) {
return _zookeeperConnectionTimeoutMs;
}
return zookeeperConnectionTimeoutMs;
}

public Integer getZookeeperSyncTimeMs() {
if ( zookeeperSyncTimeMs == null ) {
return _zookeeperSyncTimeMs;
}
return zookeeperSyncTimeMs;
}
}
@@ -1,125 +0,0 @@
/*
* Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
*
* Copyright (c) 2017 SUSE LLC.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
* implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

package monasca.persister.configuration;

import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.annotation.JsonProperty;

import monasca.common.configuration.CassandraDbConfiguration;
import monasca.common.configuration.DatabaseConfiguration;
import monasca.common.configuration.InfluxDbConfiguration;
import io.dropwizard.Configuration;
import io.dropwizard.db.DataSourceFactory;

import javax.validation.Valid;
import javax.validation.constraints.NotNull;

@JsonIgnoreProperties(ignoreUnknown=true)
public class PersisterConfig extends Configuration {

@JsonProperty
private String name;
private String _name = "monasca-persister";

public String getName() {
if ( name == null ) {
return _name;
}
return name;
}

@JsonProperty
@NotNull
@Valid
private final PipelineConfig alarmHistoryConfiguration = new PipelineConfig();

public PipelineConfig getAlarmHistoryConfiguration() {
// Set alarm history configuration specific defaults
alarmHistoryConfiguration.setDefaults("alarm-state-transitions",
"1_alarm-state-transitions",
1);
return alarmHistoryConfiguration;
}

@JsonProperty
@NotNull
@Valid
private final PipelineConfig metricConfiguration = new PipelineConfig();


public PipelineConfig getMetricConfiguration() {
// Set metric configuration specific defaults
metricConfiguration.setDefaults("metrics",
"1_metrics",
20000);
return metricConfiguration;
}

@Valid
@NotNull
@JsonProperty
private final KafkaConfig kafkaConfig = new KafkaConfig();

public KafkaConfig getKafkaConfig() {
return kafkaConfig;
}

@JsonProperty
private final DataSourceFactory dataSourceFactory = new DataSourceFactory();

public DataSourceFactory getDataSourceFactory() {
return dataSourceFactory;
}

@Valid
@NotNull
@JsonProperty
private final VerticaMetricRepoConfig verticaMetricRepoConfig =
new VerticaMetricRepoConfig();

public VerticaMetricRepoConfig getVerticaMetricRepoConfig() {
return verticaMetricRepoConfig;
}

@Valid
@NotNull
@JsonProperty
private final DatabaseConfiguration databaseConfiguration = new DatabaseConfiguration();

public DatabaseConfiguration getDatabaseConfiguration() {
return databaseConfiguration;
}

@Valid
@JsonProperty
private final InfluxDbConfiguration influxDbConfiguration = new InfluxDbConfiguration();

public InfluxDbConfiguration getInfluxDBConfiguration() {
return influxDbConfiguration;
}

@Valid
@JsonProperty
private final CassandraDbConfiguration cassandraDbConfiguration = new CassandraDbConfiguration();

public CassandraDbConfiguration getCassandraDbConfiguration() {
return cassandraDbConfiguration;
}
}
@@ -1,151 +0,0 @@
/*
* Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
* implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

package monasca.persister.configuration;

import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.annotation.JsonProperty;

@JsonIgnoreProperties(ignoreUnknown=true)
public class PipelineConfig {

@JsonProperty
String topic;
String _topic; // No default: provided via setDefaults()

@JsonProperty
String groupId;
String _groupId; // No default: provided via setDefaults()

@JsonProperty
String consumerId;
String _consumerId = "monasca-persister";

@JsonProperty
String clientId;
String _clientId = "monasca-persister";

@JsonProperty
Integer batchSize;
Integer _batchSize; // No default: provided via setDefaults()

@JsonProperty
Integer numThreads;
Integer _numThreads = 1;

@JsonProperty
Integer maxBatchTime;
Integer _maxBatchTime = 10;

@JsonProperty
Integer commitBatchTime;
Integer _commitBatchTime = 0;

/** Used to set default values for properties that have different sensible
* defaults for metric and alarm configurations, respectively.
*/
public void setDefaults(String defaultTopic, String defaultGroupId,
Integer defaultBatchSize) {
_batchSize = defaultBatchSize;
_groupId = defaultGroupId;
_topic = defaultTopic;
}

public Integer getCommitBatchTime() {
if ( commitBatchTime == null ) {
return _commitBatchTime;
}
return commitBatchTime;
}

public void setCommitBatchTime(Integer commitBatchTime) {
this.commitBatchTime = commitBatchTime;
}

public String getTopic() {
if ( topic == null ) {
return _topic;
}
return topic;
}

public String getGroupId() {
if ( groupId == null ) {
return _groupId;
}
return groupId;
}

public void setGroupId(String groupId) {
this.groupId = groupId;
}

public String getConsumerId() {
if ( consumerId == null ) {
return _consumerId;
}
return consumerId;
}

public void setConsumerId(String consumerId) {
this.consumerId = consumerId;
}

public String getClientId() {
if ( clientId == null ) {
return _clientId;
}
return clientId;
}

public void setTopic(String topic) {
this.topic = topic;
}

public void setBatchSize(Integer batchSize) {
this.batchSize = batchSize;
}

public void setNumThreads(Integer numThreads) {
this.numThreads = numThreads;
}

public void setMaxBatchTime(Integer maxBatchTime) {
this.maxBatchTime = maxBatchTime;
}

public Integer getBatchSize() {
if ( batchSize == null ) {
return _batchSize;
}
return batchSize;
}

public Integer getNumThreads() {
if ( numThreads == null ) {
return _numThreads;
}
return numThreads;
}

public Integer getMaxBatchTime() {
if ( maxBatchTime == null ) {
return _maxBatchTime;
}
return maxBatchTime;
}
}
@@ -1,30 +0,0 @@
/*
* Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
* implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

package monasca.persister.configuration;

import com.fasterxml.jackson.annotation.JsonProperty;

public class VerticaMetricRepoConfig {

@JsonProperty
Integer maxCacheSize;

public Integer getMaxCacheSize() {
return maxCacheSize;
}
}
@@ -1,140 +0,0 @@
/*
* Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
*
* Copyright (c) 2017 SUSE LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
* implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

package monasca.persister.consumer;

import monasca.persister.configuration.KafkaConfig;
import monasca.persister.configuration.PersisterConfig;
import monasca.persister.configuration.PipelineConfig;

import com.google.inject.Inject;
import com.google.inject.assistedinject.Assisted;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

public class KafkaChannel {
private static final String KAFKA_CONFIGURATION = "Kafka configuration:";
private static final Logger logger = LoggerFactory.getLogger(KafkaChannel.class);

private final String topic;
private final ConsumerConnector consumerConnector;
private final String threadId;
private final int commitBatchtimeInMills;
private long nextCommitTime;
private boolean commitDirty = false;

@Inject
public KafkaChannel(PersisterConfig configuration, @Assisted PipelineConfig pipelineConfig,
@Assisted String threadId) {

this.topic = pipelineConfig.getTopic();
this.threadId = threadId;
this.commitBatchtimeInMills = pipelineConfig.getCommitBatchTime();
nextCommitTime = System.currentTimeMillis() + commitBatchtimeInMills;
Properties kafkaProperties = createKafkaProperties(configuration.getKafkaConfig(), pipelineConfig);
consumerConnector = Consumer.createJavaConsumerConnector(createConsumerConfig(kafkaProperties));
}

public final void markRead() {
if (commitBatchtimeInMills <= 0) {
consumerConnector.commitOffsets();
} else if (nextCommitTime <= System.currentTimeMillis()) {
consumerConnector.commitOffsets();
nextCommitTime = System.currentTimeMillis() + commitBatchtimeInMills;
commitDirty = false;
} else {
commitDirty = true;
}
}

public final void markReadIfDirty() {
if (commitDirty) {
this.consumerConnector.commitOffsets();
commitDirty = false;
}
}

public KafkaStream<byte[], byte[]> getKafkaStream() {
final Map<String, Integer> topicCountMap = new HashMap<>();
topicCountMap.put(this.topic, 1);
Map<String, List<KafkaStream<byte[], byte[]>>> streamMap = this.consumerConnector
.createMessageStreams(topicCountMap);
List<KafkaStream<byte[], byte[]>> streams = streamMap.values().iterator().next();
if (streams.size() != 1) {
throw new IllegalStateException(
String.format("Expected only one stream but instead there are %d", streams.size()));
}
return streams.get(0);
}

public void stop() {
this.consumerConnector.shutdown();
}

private ConsumerConfig createConsumerConfig(Properties kafkaProperties) {
return new ConsumerConfig(kafkaProperties);
}

private Properties createKafkaProperties(KafkaConfig kafkaConfig,
final PipelineConfig pipelineConfig) {
Properties properties = new Properties();

properties.put("group.id", pipelineConfig.getGroupId());
properties.put("zookeeper.connect", kafkaConfig.getZookeeperConnect());
properties.put("consumer.id",
String.format("%s_%s", pipelineConfig.getConsumerId(), this.threadId));
properties.put("socket.timeout.ms", kafkaConfig.getSocketTimeoutMs().toString());
properties.put("socket.receive.buffer.bytes", kafkaConfig.getSocketReceiveBufferBytes().toString());
properties.put("fetch.message.max.bytes", kafkaConfig.getFetchMessageMaxBytes().toString());
// Set auto commit to false because the persister is going to explicitly commit
properties.put("auto.commit.enable", "false");
properties.put("queued.max.message.chunks", kafkaConfig.getQueuedMaxMessageChunks().toString());
properties.put("rebalance.max.retries", kafkaConfig.getRebalanceMaxRetries().toString());
properties.put("fetch.min.bytes", kafkaConfig.getFetchMinBytes().toString());
properties.put("fetch.wait.max.ms", kafkaConfig.getFetchWaitMaxMs().toString());
properties.put("rebalance.backoff.ms", kafkaConfig.getRebalanceBackoffMs().toString());
properties.put("refresh.leader.backoff.ms", kafkaConfig.getRefreshLeaderBackoffMs().toString());
properties.put("auto.offset.reset", kafkaConfig.getAutoOffsetReset());
properties.put("consumer.timeout.ms", kafkaConfig.getConsumerTimeoutMs().toString());
properties.put("client.id", String.format("%s_%s", pipelineConfig.getClientId(), threadId));
properties.put("zookeeper.session.timeout.ms",
kafkaConfig.getZookeeperSessionTimeoutMs().toString());
properties.put("zookeeper.connection.timeout.ms",
kafkaConfig.getZookeeperConnectionTimeoutMs().toString());
properties.put("zookeeper.sync.time.ms", kafkaConfig.getZookeeperSyncTimeMs().toString());

for (String key : properties.stringPropertyNames()) {
logger.info("[{}]: " + KAFKA_CONFIGURATION + " " + key + " = " + properties.getProperty(key),
threadId);
}

return properties;
}
}
@@ -1,27 +0,0 @@
/*
* Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
* implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

package monasca.persister.consumer;

import monasca.persister.configuration.PipelineConfig;

public interface KafkaChannelFactory {

KafkaChannel create(
PipelineConfig pipelineConfig,
String threadId);
}
@@ -1,64 +0,0 @@
/*
* Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
* implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

package monasca.persister.consumer;

import com.google.inject.Inject;
import com.google.inject.assistedinject.Assisted;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.util.concurrent.ExecutorService;

public class KafkaConsumer<T> {

private static final Logger logger = LoggerFactory.getLogger(KafkaConsumer.class);

private ExecutorService executorService;

private final KafkaConsumerRunnableBasic<T> kafkaConsumerRunnableBasic;
private final String threadId;

@Inject
public KafkaConsumer(
@Assisted KafkaConsumerRunnableBasic<T> kafkaConsumerRunnableBasic,
@Assisted String threadId,
@Assisted ExecutorService executorService) {

this.kafkaConsumerRunnableBasic = kafkaConsumerRunnableBasic;
this.threadId = threadId;
this.executorService = executorService;

}

public void start() {

logger.info("[{}]: start", this.threadId);

executorService.submit(kafkaConsumerRunnableBasic.setExecutorService(executorService));

}

public void stop() {

logger.info("[{}]: stop", this.threadId);

kafkaConsumerRunnableBasic.stop();

}
}
@@ -1,29 +0,0 @@
/*
* Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
* implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package monasca.persister.consumer;


import java.util.concurrent.ExecutorService;

public interface KafkaConsumerFactory<T> {

KafkaConsumer<T> create(
KafkaConsumerRunnableBasic<T> kafkaConsumerRunnableBasic,
String threadId,
ExecutorService executorService);

}
@@ -1,223 +0,0 @@
/*
* Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
*
* Copyright (c) 2017 SUSE LLC.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
* implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

package monasca.persister.consumer;

import monasca.persister.pipeline.ManagedPipeline;

import com.google.inject.Inject;
import com.google.inject.assistedinject.Assisted;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.util.concurrent.ExecutorService;

import kafka.consumer.ConsumerIterator;
import monasca.persister.repository.RepoException;

public class KafkaConsumerRunnableBasic<T> implements Runnable {

private static final Logger logger = LoggerFactory.getLogger(KafkaConsumerRunnableBasic.class);

private final KafkaChannel kafkaChannel;
private final String threadId;
private final ManagedPipeline<T> pipeline;
private volatile boolean stop = false;
private boolean active = false;

private ExecutorService executorService;

@Inject
public KafkaConsumerRunnableBasic(@Assisted KafkaChannel kafkaChannel,
@Assisted ManagedPipeline<T> pipeline, @Assisted String threadId) {

this.kafkaChannel = kafkaChannel;
this.pipeline = pipeline;
this.threadId = threadId;
}

public KafkaConsumerRunnableBasic<T> setExecutorService(ExecutorService executorService) {

this.executorService = executorService;

return this;

}

protected void publishHeartbeat() throws RepoException {

publishEvent(null);

}

private void markRead() {
if (logger.isDebugEnabled()) {
logger.debug("[{}]: marking read", this.threadId);
}

this.kafkaChannel.markRead();

}

public void stop() {

logger.info("[{}]: stop", this.threadId);

this.stop = true;

int count = 0;
while (active) {
if (count++ >= 20) {
break;
}
try {
Thread.sleep(100);
} catch (InterruptedException e) {
logger.error("interrupted while waiting for the run loop to stop", e);
break;
}
}

if (!active) {
this.kafkaChannel.markReadIfDirty();
}
}

public void run() {

logger.info("[{}]: run", this.threadId);

active = true;

final ConsumerIterator<byte[], byte[]> it = kafkaChannel.getKafkaStream().iterator();

logger.debug("[{}]: KafkaChannel has stream iterator", this.threadId);

while (!this.stop) {

try {

try {

if (isInterrupted()) {

logger.debug("[{}]: is interrupted", this.threadId);
break;

}

if (it.hasNext()) {

if (isInterrupted()) {

logger.debug("[{}]: is interrupted", this.threadId);
break;

}

if (this.stop) {

logger.debug("[{}]: is stopped", this.threadId);
break;

}

final String msg = new String(it.next().message());

if (logger.isDebugEnabled()) {
logger.debug("[{}]: {}", this.threadId, msg);
}

publishEvent(msg);

}

} catch (kafka.consumer.ConsumerTimeoutException cte) {

if (isInterrupted()) {

logger.debug("[{}]: is interrupted", this.threadId);
break;

}

if (this.stop) {

logger.debug("[{}]: is stopped", this.threadId);
break;

}

publishHeartbeat();

}

} catch (Throwable e) {

logger
.error("[{}]: caught fatal exception while publishing msg. Shutting entire persister down "
+ "now!", this.threadId, e);

logger.error("[{}]: calling shutdown on executor service", this.threadId);
this.executorService.shutdownNow();

logger.error("[{}]: shutting down system. calling system.exit(1)", this.threadId);
System.exit(1);

}

}

logger.info("[{}]: calling stop on kafka channel", this.threadId);

active = false;

this.kafkaChannel.stop();

logger.debug("[{}]: exiting main run loop", this.threadId);

}

protected void publishEvent(final String msg) throws RepoException {

if (pipeline.publishEvent(msg)) {

markRead();

}

}

private boolean isInterrupted() {

if (Thread.interrupted()) {
if (logger.isDebugEnabled()) {
logger.debug("[{}]: is interrupted. breaking out of run loop", this.threadId);
}

return true;

} else {

return false;

}
}
}
@@ -1,28 +0,0 @@
/*
* Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
* implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package monasca.persister.consumer;

import monasca.persister.pipeline.ManagedPipeline;

public interface KafkaConsumerRunnableBasicFactory<T> {

KafkaConsumerRunnableBasic<T> create(
ManagedPipeline<T> pipeline,
KafkaChannel kafkaChannel,
String threadId);

}
@@ -1,60 +0,0 @@
/*
* Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
* implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

package monasca.persister.consumer;

import com.google.inject.Inject;
import com.google.inject.assistedinject.Assisted;

import io.dropwizard.lifecycle.Managed;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ManagedConsumer<T> implements Managed {

private static final Logger logger = LoggerFactory.getLogger(ManagedConsumer.class);

private final KafkaConsumer<T> consumer;
private final String threadId;

@Inject
public ManagedConsumer(
@Assisted KafkaConsumer<T> kafkaConsumer,
@Assisted String threadId) {

this.consumer = kafkaConsumer;
this.threadId = threadId;

}

@Override
public void start() throws Exception {

logger.debug("[{}]: start", this.threadId);

this.consumer.start();
}

@Override
public void stop() throws Exception {

logger.debug("[{}]: stop", this.threadId);

this.consumer.stop();
}
}
@@ -1,26 +0,0 @@
/*
* Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
* implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

package monasca.persister.consumer;

public interface ManagedConsumerFactory<T> {

ManagedConsumer<T> create(
KafkaConsumer<T> kafkaConsumer,
String threadId);

}
@@ -1,51 +0,0 @@
/*
* Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
* implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

package monasca.persister.dbi;

import monasca.persister.configuration.PersisterConfig;

import com.google.inject.ProvisionException;

import io.dropwizard.jdbi.DBIFactory;
import io.dropwizard.setup.Environment;

import org.skife.jdbi.v2.DBI;

import javax.inject.Inject;
import javax.inject.Provider;

public class DBIProvider implements Provider<DBI> {

private final Environment environment;
private final PersisterConfig configuration;

@Inject
public DBIProvider(Environment environment, PersisterConfig configuration) {
this.environment = environment;
this.configuration = configuration;
}

@Override
public DBI get() {
try {
return new DBIFactory().build(environment, configuration.getDataSourceFactory(), "vertica");
} catch (ClassNotFoundException e) {
throw new ProvisionException("Failed to provision DBI", e);
}
}
}
@@ -1,32 +0,0 @@
/*
* Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
* implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

package monasca.persister.healthcheck;

import com.codahale.metrics.health.HealthCheck;

public class SimpleHealthCheck extends HealthCheck {

public SimpleHealthCheck() {

}

@Override
protected Result check() throws Exception {
return Result.healthy();
}
}
@@ -1,84 +0,0 @@
/*
* Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
* implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

package monasca.persister.pipeline;

import com.google.inject.Inject;
import com.google.inject.assistedinject.Assisted;

import monasca.persister.pipeline.event.FlushableHandler;
import monasca.persister.repository.RepoException;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ManagedPipeline<T> {

private static final Logger logger = LoggerFactory.getLogger(ManagedPipeline.class);

private final FlushableHandler<T> handler;
private final String threadId;

@Inject
public ManagedPipeline(
@Assisted FlushableHandler<T> handler,
@Assisted String threadId) {

this.handler = handler;
this.threadId = threadId;

}

public boolean shutdown() throws RepoException {

logger.info("[{}]: shutdown", this.threadId);

try {

int msgFlushCnt = handler.flush();

return msgFlushCnt > 0;

} catch (RepoException e) {

logger.error("[{}}: failed to flush repo on shutdown", this.threadId, e);
|
||||
logger.error(
"[{}]: pipeline broken. repo unavailable. check that database is running. shutting pipeline down now!",
this.threadId);

throw e;

}
}


public boolean publishEvent(String msg) throws RepoException {

try {

return this.handler.onEvent(msg);

} catch (RepoException e) {

logger.error("[{}]: failed to handle msg: {}", this.threadId, msg, e);
logger.error("[{}]: pipeline broken. repo unavailable. check that database is running. shutting pipeline down now!", this.threadId);

throw e;

}
}
}
@@ -1,27 +0,0 @@
/*
* Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
* implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package monasca.persister.pipeline;

import monasca.persister.pipeline.event.FlushableHandler;

public interface ManagedPipelineFactory<T> {

ManagedPipeline<T> create(
FlushableHandler<T> handler,
String threadId);

}
@@ -1,112 +0,0 @@
|
||||
/*
|
||||
* Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
|
||||
*
|
||||
* Licensed under the Apache License, Version 2.0 (the "License");
|
||||
* you may not use this file except in compliance with the License.
|
||||
* You may obtain a copy of the License at
|
||||
*
|
||||
* http://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing, software
|
||||
* distributed under the License is distributed on an "AS IS" BASIS,
|
||||
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
|
||||
* implied.
|
||||
* See the License for the specific language governing permissions and
|
||||
* limitations under the License.
|
||||
*/
|
||||
|
||||
package monasca.persister.pipeline.event;
|
||||
|
||||
import monasca.common.model.event.AlarmStateTransitionedEvent;
|
||||
import monasca.persister.configuration.PipelineConfig;
|
||||
|
||||
import com.google.inject.Inject;
|
||||
import com.google.inject.assistedinject.Assisted;
|
||||
|
||||
import com.codahale.metrics.Counter;
|
||||
import com.fasterxml.jackson.databind.DeserializationFeature;
|
||||
|
||||
import io.dropwizard.setup.Environment;
|
||||
import monasca.persister.repository.Repo;
|
||||
import monasca.persister.repository.RepoException;
|
||||
|
||||
import org.slf4j.Logger;
|
||||
import org.slf4j.LoggerFactory;
|
||||
|
||||
import java.io.IOException;
|
||||
|
||||
public class AlarmStateTransitionHandler extends
|
||||
FlushableHandler<AlarmStateTransitionedEvent> {
|
||||
|
||||
private static final Logger logger =
|
||||
LoggerFactory.getLogger(AlarmStateTransitionHandler.class);
|
||||
|
||||
private final Repo<AlarmStateTransitionedEvent> alarmRepo;
|
||||
|
||||
private final Counter alarmStateTransitionCounter;
|
||||
|
||||
@Inject
|
||||
public AlarmStateTransitionHandler(Repo<AlarmStateTransitionedEvent> alarmRepo,
|
||||
Environment environment,
|
||||
@Assisted PipelineConfig configuration,
|
||||
@Assisted("threadId") String threadId,
|
||||
@Assisted("batchSize") int batchSize) {
|
||||
|
||||
super(configuration, environment, threadId, batchSize);
|
||||
|
||||
this.alarmRepo = alarmRepo;
|
||||
|
||||
this.alarmStateTransitionCounter =
|
||||
environment.metrics()
|
||||
.counter(this.handlerName + "." + "alarm-state-transitions-added-to-batch-counter");
|
||||
|
||||
}
|
||||
|
||||
@Override
|
||||
protected int process(String msg) {
|
||||
|
||||
AlarmStateTransitionedEvent alarmStateTransitionedEvent;
|
||||
|
||||
try {
|
||||
|
||||
alarmStateTransitionedEvent =
|
||||
this.objectMapper.readValue(msg, AlarmStateTransitionedEvent.class);
|
||||
|
||||
} catch (IOException e) {
|
||||
|
||||
logger.error("[{}]: failed to deserialize message {}", this.threadId, msg, e);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
logger.debug("[{}]: [{}:{}] {}",
|
||||
this.threadId,
|
||||
this.getBatchCount(),
|
||||
this.getMsgCount(),
|
||||
alarmStateTransitionedEvent);
|
||||
|
||||
this.alarmRepo.addToBatch(alarmStateTransitionedEvent, this.threadId);
|
||||
|
||||
this.alarmStateTransitionCounter.inc();
|
||||
|
||||
return 1;
|
||||
}
|
||||
|
||||
@Override
|
||||
protected void initObjectMapper() {
|
||||
|
||||
this.objectMapper.disable(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES);
|
||||
|
||||
this.objectMapper.enable(DeserializationFeature.ACCEPT_SINGLE_VALUE_AS_ARRAY);
|
||||
|
||||
this.objectMapper.enable(DeserializationFeature.UNWRAP_ROOT_VALUE);
|
||||
|
||||
}
|
||||
|
||||
@Override
|
||||
protected int flushRepository() throws RepoException {
|
||||
|
||||
return this.alarmRepo.flush(this.threadId);
|
||||
|
||||
}
|
||||
}
|
||||
@@ -1,30 +0,0 @@
|
||||
/*
|
||||
* Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
|
||||
*
|
||||
* Licensed under the Apache License, Version 2.0 (the "License");
|
||||
* you may not use this file except in compliance with the License.
|
||||
* You may obtain a copy of the License at
|
||||
*
|
||||
* http://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing, software
|
||||
* distributed under the License is distributed on an "AS IS" BASIS,
|
||||
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
|
||||
* implied.
|
||||
* See the License for the specific language governing permissions and
|
||||
* limitations under the License.
|
||||
*/
|
||||
|
||||
package monasca.persister.pipeline.event;
|
||||
|
||||
import monasca.persister.configuration.PipelineConfig;
|
||||
|
||||
import com.google.inject.assistedinject.Assisted;
|
||||
|
||||
public interface AlarmStateTransitionHandlerFactory {
|
||||
|
||||
AlarmStateTransitionHandler create(
|
||||
PipelineConfig configuration,
|
||||
@Assisted("threadId") String threadId,
|
||||
@Assisted("batchSize") int batchSize);
|
||||
}
|
||||
@@ -1,211 +0,0 @@
|
||||
/*
|
||||
* Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
|
||||
*
|
||||
* Copyright (c) 2017 SUSE LLC
|
||||
*
|
||||
* Licensed under the Apache License, Version 2.0 (the "License");
|
||||
* you may not use this file except in compliance with the License.
|
||||
* You may obtain a copy of the License at
|
||||
*
|
||||
* http://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing, software
|
||||
* distributed under the License is distributed on an "AS IS" BASIS,
|
||||
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
|
||||
* implied.
|
||||
* See the License for the specific language governing permissions and
|
||||
* limitations under the License.
|
||||
*/
|
||||
|
||||
package monasca.persister.pipeline.event;
|
||||
|
||||
import monasca.persister.configuration.PipelineConfig;
|
||||
|
||||
import com.codahale.metrics.Meter;
|
||||
import com.codahale.metrics.Timer;
|
||||
import com.fasterxml.jackson.databind.ObjectMapper;
|
||||
|
||||
import io.dropwizard.setup.Environment;
|
||||
import monasca.persister.repository.RepoException;
|
||||
|
||||
import org.slf4j.Logger;
|
||||
import org.slf4j.LoggerFactory;
|
||||
|
||||
public abstract class FlushableHandler<T> {
|
||||
|
||||
private static final Logger logger = LoggerFactory.getLogger(FlushableHandler.class);
|
||||
|
||||
private final int batchSize;
|
||||
|
||||
private long flushTimeMillis = System.currentTimeMillis();
|
||||
private final long millisBetweenFlushes;
|
||||
private final int secondsBetweenFlushes;
|
||||
private int msgCount = 0;
|
||||
private long batchCount = 0;
|
||||
|
||||
private final Meter processedMeter;
|
||||
private final Meter flushMeter;
|
||||
private final Timer flushTimer;
|
||||
|
||||
protected final String threadId;
|
||||
|
||||
protected ObjectMapper objectMapper = new ObjectMapper();
|
||||
|
||||
protected final String handlerName;
|
||||
|
||||
protected FlushableHandler(PipelineConfig configuration, Environment environment, String threadId,
|
||||
int batchSize) {
|
||||
|
||||
this.threadId = threadId;
|
||||
|
||||
this.handlerName = String.format("%s[%s]", this.getClass().getName(), threadId);
|
||||
|
||||
this.processedMeter = environment.metrics().meter(handlerName + "." + "events-processed-meter");
|
||||
|
||||
this.flushMeter = environment.metrics().meter(handlerName + "." + "flush-meter");
|
||||
|
||||
this.flushTimer = environment.metrics().timer(handlerName + "." + "flush-timer");
|
||||
|
||||
this.secondsBetweenFlushes = configuration.getMaxBatchTime();
|
||||
|
||||
this.millisBetweenFlushes = secondsBetweenFlushes * 1000L;
|
||||
|
||||
this.batchSize = batchSize;
|
||||
|
||||
initObjectMapper();
|
||||
|
||||
}
|
||||
|
||||
protected abstract void initObjectMapper();
|
||||
|
||||
protected abstract int flushRepository() throws RepoException;
|
||||
|
||||
protected abstract int process(String msg);
|
||||
|
||||
public boolean onEvent(final String msg) throws RepoException {
|
||||
|
||||
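// A null msg is a heartbeat: it adds nothing to the batch, but may trigger a time-based flush.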
if (msg == null) {
|
||||
|
||||
if (isFlushTime()) {
|
||||
|
||||
int msgFlushCnt = flush();
|
||||
|
||||
return msgFlushCnt > 0;
|
||||
|
||||
} else {
|
||||
|
||||
return false;
|
||||
|
||||
}
|
||||
}
|
||||
|
||||
this.msgCount += process(msg);
|
||||
|
||||
this.processedMeter.mark();
|
||||
|
||||
if (isBatchSize()) {
|
||||
|
||||
int msgFlushCnt = flush();
|
||||
|
||||
return msgFlushCnt > 0;
|
||||
|
||||
} else {
|
||||
|
||||
return false;
|
||||
|
||||
}
|
||||
}
|
||||
|
||||
private boolean isBatchSize() {
|
||||
|
||||
if (logger.isDebugEnabled()) {
|
||||
|
||||
logger.debug("[{}]: checking batch size", this.threadId);
|
||||
|
||||
}
|
||||
|
||||
if (this.msgCount >= this.batchSize) {
|
||||
|
||||
if (logger.isDebugEnabled()) {
|
||||
logger.debug("[{}]: batch sized {} attained", this.threadId, this.batchSize);
|
||||
}
|
||||
|
||||
return true;
|
||||
|
||||
} else {
|
||||
|
||||
if (logger.isDebugEnabled()) {
|
||||
logger.debug("[{}]: batch size now at {}, batch size {} not attained", this.threadId,
|
||||
this.msgCount, this.batchSize);
|
||||
}
|
||||
|
||||
return false;
|
||||
|
||||
}
|
||||
}
|
||||
|
||||
private boolean isFlushTime() {
|
||||
if (logger.isDebugEnabled()) {
|
||||
logger.debug("[{}]: got heartbeat message, checking flush time. flush every {} seconds.",
|
||||
this.threadId, this.secondsBetweenFlushes);
|
||||
}
|
||||
|
||||
long now = System.currentTimeMillis();
|
||||
|
||||
if (this.flushTimeMillis <= now) {
|
||||
if (logger.isDebugEnabled()) {
|
||||
logger.debug("[{}]: {} ms past flush time. flushing to repository now.", this.threadId,
|
||||
now - this.flushTimeMillis);
|
||||
}
|
||||
|
||||
return true;
|
||||
|
||||
} else {
|
||||
if (logger.isDebugEnabled()) {
|
||||
logger.debug("[{}]: {} ms to next flush time. no need to flush at this time.", this.threadId,
|
||||
this.flushTimeMillis - now);
|
||||
}
|
||||
|
||||
return false;
|
||||
|
||||
}
|
||||
}
|
||||
|
||||
public int flush() throws RepoException {
|
||||
if (logger.isDebugEnabled()) {
|
||||
logger.debug("[{}]: flushing", this.threadId);
|
||||
}
|
||||
|
||||
Timer.Context context = this.flushTimer.time();
|
||||
|
||||
int msgFlushCnt = flushRepository();
|
||||
|
||||
context.stop();
|
||||
|
||||
this.flushMeter.mark(msgFlushCnt);
|
||||
|
||||
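// Schedule the next time-based flush one full interval from now.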
this.flushTimeMillis = System.currentTimeMillis() + this.millisBetweenFlushes;
|
||||
|
||||
if (logger.isDebugEnabled()) {
|
||||
logger.debug("[{}]: flushed {} msg", this.threadId, msgFlushCnt);
|
||||
}
|
||||
|
||||
this.msgCount -= msgFlushCnt;
|
||||
|
||||
this.batchCount++;
|
||||
|
||||
return msgFlushCnt;
|
||||
|
||||
}
|
||||
|
||||
protected long getBatchCount() {
|
||||
|
||||
return this.batchCount;
|
||||
|
||||
}
|
||||
|
||||
protected int getMsgCount() {
|
||||
|
||||
return this.msgCount;
|
||||
}
|
||||
}
|
||||
@@ -1,118 +0,0 @@
|
||||
/*
|
||||
* Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
|
||||
*
|
||||
* Copyright (c) 2017 SUSE LLC
|
||||
*
|
||||
* Licensed under the Apache License, Version 2.0 (the "License");
|
||||
* you may not use this file except in compliance with the License.
|
||||
* You may obtain a copy of the License at
|
||||
*
|
||||
* http://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing, software
|
||||
* distributed under the License is distributed on an "AS IS" BASIS,
|
||||
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
|
||||
* implied.
|
||||
* See the License for the specific language governing permissions and
|
||||
* limitations under the License.
|
||||
*/
|
||||
|
||||
package monasca.persister.pipeline.event;
|
||||
|
||||
import monasca.common.model.metric.MetricEnvelope;
|
||||
import monasca.persister.configuration.PipelineConfig;
|
||||
import monasca.persister.repository.Repo;
|
||||
|
||||
import com.google.inject.Inject;
|
||||
import com.google.inject.assistedinject.Assisted;
|
||||
|
||||
import com.codahale.metrics.Counter;
|
||||
import com.fasterxml.jackson.databind.DeserializationFeature;
|
||||
import com.fasterxml.jackson.databind.PropertyNamingStrategy;
|
||||
|
||||
import org.slf4j.Logger;
|
||||
import org.slf4j.LoggerFactory;
|
||||
|
||||
import java.io.IOException;
|
||||
|
||||
import io.dropwizard.setup.Environment;
|
||||
import monasca.persister.repository.RepoException;
|
||||
|
||||
public class MetricHandler extends FlushableHandler<MetricEnvelope[]> {
|
||||
|
||||
private static final Logger logger = LoggerFactory.getLogger(MetricHandler.class);
|
||||
|
||||
private final Repo<MetricEnvelope> metricRepo;
|
||||
|
||||
private final Counter metricCounter;
|
||||
|
||||
@Inject
|
||||
public MetricHandler(Repo<MetricEnvelope> metricRepo, Environment environment,
|
||||
@Assisted PipelineConfig configuration, @Assisted("threadId") String threadId,
|
||||
@Assisted("batchSize") int batchSize) {
|
||||
|
||||
super(configuration, environment, threadId, batchSize);
|
||||
|
||||
this.metricRepo = metricRepo;
|
||||
|
||||
this.metricCounter = environment.metrics()
|
||||
.counter(this.handlerName + "." + "metrics-added-to-batch-counter");
|
||||
|
||||
}
|
||||
|
||||
@Override
|
||||
public int process(String msg) {
|
||||
|
||||
MetricEnvelope[] metricEnvelopesArray;
|
||||
|
||||
try {
|
||||
|
||||
metricEnvelopesArray = this.objectMapper.readValue(msg, MetricEnvelope[].class);
|
||||
|
||||
} catch (IOException e) {
|
||||
|
||||
logger.error("[{}]: failed to deserialize message {}", this.threadId, msg, e);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
for (final MetricEnvelope metricEnvelope : metricEnvelopesArray) {
|
||||
|
||||
processEnvelope(metricEnvelope);
|
||||
|
||||
}
|
||||
|
||||
return metricEnvelopesArray.length;
|
||||
}
|
||||
|
||||
private void processEnvelope(MetricEnvelope metricEnvelope) {
|
||||
if (logger.isDebugEnabled()) {
|
||||
logger.debug("[{}]: [{}:{}] {}", this.threadId, this.getBatchCount(), this.getMsgCount(),
|
||||
metricEnvelope);
|
||||
}
|
||||
|
||||
this.metricRepo.addToBatch(metricEnvelope, this.threadId);
|
||||
|
||||
this.metricCounter.inc();
|
||||
|
||||
}
|
||||
|
||||
@Override
|
||||
protected void initObjectMapper() {
|
||||
|
||||
this.objectMapper.disable(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES);
|
||||
|
||||
this.objectMapper.enable(DeserializationFeature.ACCEPT_SINGLE_VALUE_AS_ARRAY);
|
||||
|
||||
this.objectMapper
|
||||
.setPropertyNamingStrategy(PropertyNamingStrategy.CAMEL_CASE_TO_LOWER_CASE_WITH_UNDERSCORES);
|
||||
|
||||
}
|
||||
|
||||
@Override
|
||||
public int flushRepository() throws RepoException {
|
||||
|
||||
return this.metricRepo.flush(this.threadId);
|
||||
}
|
||||
|
||||
}
|
||||
@@ -1,30 +0,0 @@
|
||||
/*
|
||||
* Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
|
||||
*
|
||||
* Licensed under the Apache License, Version 2.0 (the "License");
|
||||
* you may not use this file except in compliance with the License.
|
||||
* You may obtain a copy of the License at
|
||||
*
|
||||
* http://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing, software
|
||||
* distributed under the License is distributed on an "AS IS" BASIS,
|
||||
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
|
||||
* implied.
|
||||
* See the License for the specific language governing permissions and
|
||||
* limitations under the License.
|
||||
*/
|
||||
|
||||
package monasca.persister.pipeline.event;
|
||||
|
||||
import monasca.persister.configuration.PipelineConfig;
|
||||
|
||||
import com.google.inject.assistedinject.Assisted;
|
||||
|
||||
public interface MetricHandlerFactory {
|
||||
|
||||
MetricHandler create(
|
||||
PipelineConfig pipelineConfig,
|
||||
@Assisted("threadId") String threadId,
|
||||
@Assisted("batchSize") int batchSize);
|
||||
}
|
||||
@@ -1,25 +0,0 @@
|
||||
/*
|
||||
* Copyright (c) 2015 Hewlett-Packard Development Company, L.P.
|
||||
*
|
||||
* Licensed under the Apache License, Version 2.0 (the "License");
|
||||
* you may not use this file except in compliance with the License.
|
||||
* You may obtain a copy of the License at
|
||||
*
|
||||
* http://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing, software
|
||||
* distributed under the License is distributed on an "AS IS" BASIS,
|
||||
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
|
||||
* implied.
|
||||
* See the License for the specific language governing permissions and
|
||||
* limitations under the License.
|
||||
*/
|
||||
package monasca.persister.repository;
|
||||
|
||||
public interface Repo<T> {
|
||||
|
||||
void addToBatch(final T msg, String id);
|
||||
|
||||
int flush(String id) throws RepoException;
|
||||
|
||||
}
|
||||
@@ -1,43 +0,0 @@
|
||||
/*
|
||||
* Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
|
||||
*
|
||||
* Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except
|
||||
* in compliance with the License. You may obtain a copy of the License at
|
||||
*
|
||||
* http://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing, software distributed under the License
|
||||
* is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
|
||||
* or implied. See the License for the specific language governing permissions and limitations under
|
||||
* the License.
|
||||
*/
|
||||
package monasca.persister.repository;
|
||||
|
||||
public class RepoException extends Exception {
|
||||
|
||||
public RepoException() {
|
||||
|
||||
super();
|
||||
}
|
||||
|
||||
public RepoException(String message) {
|
||||
|
||||
super(message);
|
||||
}
|
||||
|
||||
public RepoException(String message, Throwable cause) {
|
||||
|
||||
super(message, cause);
|
||||
}
|
||||
|
||||
public RepoException(Throwable cause) {
|
||||
|
||||
super(cause);
|
||||
}
|
||||
|
||||
protected RepoException(String message, Throwable cause, boolean enableSuppression,
|
||||
boolean writableStackTrace) {
|
||||
|
||||
super(message, cause, enableSuppression, writableStackTrace);
|
||||
}
|
||||
}
|
||||
@@ -1,73 +0,0 @@
|
||||
/*
|
||||
* Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
|
||||
*
|
||||
* Copyright (c) 2017 SUSE LLC.
|
||||
*
|
||||
* Licensed under the Apache License, Version 2.0 (the "License");
|
||||
* you may not use this file except in compliance with the License.
|
||||
* You may obtain a copy of the License at
|
||||
*
|
||||
* http://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing, software
|
||||
* distributed under the License is distributed on an "AS IS" BASIS,
|
||||
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
|
||||
* implied.
|
||||
* See the License for the specific language governing permissions and
|
||||
* limitations under the License.
|
||||
*/
|
||||
|
||||
package monasca.persister.repository;
|
||||
|
||||
import org.apache.commons.codec.binary.Hex;
|
||||
|
||||
import java.nio.ByteBuffer;
|
||||
import java.util.Arrays;
|
||||
|
||||
public class Sha1HashId {
|
||||
private final byte[] sha1Hash;
|
||||
|
||||
private final String hex;
|
||||
|
||||
public Sha1HashId(byte[] sha1Hash) {
|
||||
this.sha1Hash = sha1Hash;
|
||||
hex = Hex.encodeHexString(sha1Hash);
|
||||
}
|
||||
|
||||
@Override
|
||||
public String toString() {
|
||||
return "Sha1HashId{" + "sha1Hash=" + hex + "}";
|
||||
}
|
||||
|
||||
@Override
|
||||
public boolean equals(Object o) {
|
||||
if (this == o)
|
||||
return true;
|
||||
if (!(o instanceof Sha1HashId))
|
||||
return false;
|
||||
|
||||
Sha1HashId that = (Sha1HashId) o;
|
||||
|
||||
if (!Arrays.equals(sha1Hash, that.sha1Hash))
|
||||
return false;
|
||||
|
||||
return true;
|
||||
}
|
||||
|
||||
@Override
|
||||
public int hashCode() {
|
||||
return Arrays.hashCode(sha1Hash);
|
||||
}
|
||||
|
||||
public byte[] getSha1Hash() {
|
||||
return sha1Hash;
|
||||
}
|
||||
|
||||
public ByteBuffer getSha1HashByteBuffer() {
|
||||
return ByteBuffer.wrap(sha1Hash);
|
||||
}
|
||||
|
||||
public String toHexString() {
|
||||
return hex;
|
||||
}
|
||||
}
|
||||
@@ -1,113 +0,0 @@
|
||||
/*
|
||||
* Copyright (c) 2017 SUSE LLC
|
||||
*
|
||||
* Licensed under the Apache License, Version 2.0 (the "License");
|
||||
* you may not use this file except in compliance with the License.
|
||||
* You may obtain a copy of the License at
|
||||
*
|
||||
* http://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing, software
|
||||
* distributed under the License is distributed on an "AS IS" BASIS,
|
||||
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
|
||||
* implied.
|
||||
* See the License for the specific language governing permissions and
|
||||
* limitations under the License.
|
||||
*/
|
||||
|
||||
package monasca.persister.repository.cassandra;
|
||||
|
||||
import java.security.NoSuchAlgorithmException;
|
||||
import java.sql.SQLException;
|
||||
import java.sql.Timestamp;
|
||||
|
||||
import javax.inject.Inject;
|
||||
|
||||
import org.slf4j.Logger;
|
||||
import org.slf4j.LoggerFactory;
|
||||
|
||||
import com.fasterxml.jackson.core.JsonProcessingException;
|
||||
import com.fasterxml.jackson.databind.ObjectMapper;
|
||||
import com.fasterxml.jackson.databind.PropertyNamingStrategy;
|
||||
|
||||
import io.dropwizard.setup.Environment;
|
||||
import monasca.common.model.event.AlarmStateTransitionedEvent;
|
||||
import monasca.persister.configuration.PersisterConfig;
|
||||
import monasca.persister.repository.Repo;
|
||||
import monasca.persister.repository.RepoException;
|
||||
|
||||
/**
|
||||
* This class is not thread safe.
|
||||
*
|
||||
*/
|
||||
public class CassandraAlarmRepo extends CassandraRepo implements Repo<AlarmStateTransitionedEvent> {
|
||||
|
||||
private static final Logger logger = LoggerFactory.getLogger(CassandraAlarmRepo.class);
|
||||
|
||||
private static final String EMPTY_REASON_DATA = "{}";
|
||||
|
||||
private static final int MAX_BYTES_PER_CHAR = 4;
|
||||
private static final int MAX_LENGTH_VARCHAR = 65000;
|
||||
|
||||
private int retention;
|
||||
|
||||
private ObjectMapper objectMapper = new ObjectMapper();
|
||||
|
||||
@Inject
|
||||
public CassandraAlarmRepo(CassandraCluster cluster, PersisterConfig config, Environment environment)
|
||||
throws NoSuchAlgorithmException, SQLException {
|
||||
super(cluster, environment, config.getCassandraDbConfiguration().getMaxWriteRetries(),
|
||||
config.getAlarmHistoryConfiguration().getBatchSize());
|
||||
|
||||
this.retention = config.getCassandraDbConfiguration().getRetentionPolicy() * 24 * 3600;
|
||||
|
||||
logger.debug("Instantiating " + this.getClass().getName());
|
||||
|
||||
this.objectMapper
|
||||
.setPropertyNamingStrategy(PropertyNamingStrategy.CAMEL_CASE_TO_LOWER_CASE_WITH_UNDERSCORES);
|
||||
|
||||
session = cluster.getAlarmsSession();
|
||||
|
||||
logger.debug(this.getClass().getName() + " is fully instantiated");
|
||||
|
||||
}
|
||||
|
||||
public void addToBatch(AlarmStateTransitionedEvent message, String id) {
|
||||
|
||||
String metricsString = getSerializedString(message.metrics, id);
|
||||
|
||||
// Guard against overflowing the varchar column, assuming up to MAX_BYTES_PER_CHAR bytes per character
|
||||
if (metricsString.length() * MAX_BYTES_PER_CHAR >= MAX_LENGTH_VARCHAR) {
|
||||
metricsString = "[]";
|
||||
logger.warn("length of metricsString for alarm ID {} exceeds max length of {}", message.alarmId,
|
||||
MAX_LENGTH_VARCHAR);
|
||||
}
|
||||
|
||||
String subAlarmsString = getSerializedString(message.subAlarms, id);
|
||||
|
||||
if (subAlarmsString.length() * MAX_BYTES_PER_CHAR >= MAX_LENGTH_VARCHAR) {
|
||||
subAlarmsString = "[]";
|
||||
logger.warn("length of subAlarmsString for alarm ID {} exceeds max length of {}", message.alarmId,
|
||||
MAX_LENGTH_VARCHAR);
|
||||
}
|
||||
|
||||
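// Bind the alarm-history insert with the retention period (in seconds) as its TTL and queue it for the next flush.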
queue.offerLast(cluster.getAlarmHistoryInsertStmt().bind(retention, metricsString, message.oldState.name(),
|
||||
message.newState.name(), subAlarmsString, message.stateChangeReason, EMPTY_REASON_DATA,
|
||||
message.tenantId, message.alarmId, new Timestamp(message.timestamp)));
|
||||
}
|
||||
|
||||
private String getSerializedString(Object o, String id) {
|
||||
|
||||
try {
|
||||
return this.objectMapper.writeValueAsString(o);
|
||||
} catch (JsonProcessingException e) {
|
||||
logger.error("[[}]: failed to serialize object {}", id, o, e);
|
||||
return "";
|
||||
}
|
||||
}
|
||||
|
||||
@Override
|
||||
public int flush(String id) throws RepoException {
|
||||
return handleFlush(id);
|
||||
}
|
||||
}
|
||||
@@ -1,420 +0,0 @@
|
||||
/*
|
||||
* Copyright (c) 2017 SUSE LLC
|
||||
*
|
||||
* Licensed under the Apache License, Version 2.0 (the "License");
|
||||
* you may not use this file except in compliance with the License.
|
||||
* You may obtain a copy of the License at
|
||||
*
|
||||
* http://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing, software
|
||||
* distributed under the License is distributed on an "AS IS" BASIS,
|
||||
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
|
||||
* implied.
|
||||
* See the License for the specific language governing permissions and
|
||||
* limitations under the License.
|
||||
*/
|
||||
|
||||
package monasca.persister.repository.cassandra;
|
||||
|
||||
import java.util.List;
|
||||
import java.util.concurrent.ExecutorService;
|
||||
import java.util.concurrent.Executors;
|
||||
import java.util.concurrent.atomic.AtomicInteger;
|
||||
|
||||
import org.slf4j.Logger;
|
||||
import org.slf4j.LoggerFactory;
|
||||
|
||||
import com.datastax.driver.core.BoundStatement;
|
||||
import com.datastax.driver.core.Cluster;
|
||||
import com.datastax.driver.core.Cluster.Builder;
|
||||
import com.datastax.driver.core.CodecRegistry;
|
||||
import com.datastax.driver.core.ConsistencyLevel;
|
||||
import com.datastax.driver.core.HostDistance;
|
||||
import com.datastax.driver.core.Metadata;
|
||||
import com.datastax.driver.core.PlainTextAuthProvider;
|
||||
import com.datastax.driver.core.PoolingOptions;
|
||||
import com.datastax.driver.core.PreparedStatement;
|
||||
import com.datastax.driver.core.ProtocolOptions;
|
||||
import com.datastax.driver.core.QueryOptions;
|
||||
import com.datastax.driver.core.ResultSet;
|
||||
import com.datastax.driver.core.ResultSetFuture;
|
||||
import com.datastax.driver.core.Row;
|
||||
import com.datastax.driver.core.Session;
|
||||
import com.datastax.driver.core.SocketOptions;
|
||||
import com.datastax.driver.core.TokenRange;
|
||||
import com.datastax.driver.core.policies.DCAwareRoundRobinPolicy;
|
||||
import com.datastax.driver.core.policies.TokenAwarePolicy;
|
||||
import com.datastax.driver.core.utils.Bytes;
|
||||
import com.google.common.cache.Cache;
|
||||
import com.google.common.cache.CacheBuilder;
|
||||
import com.google.common.collect.Lists;
|
||||
import com.google.common.util.concurrent.FutureCallback;
|
||||
import com.google.common.util.concurrent.Futures;
|
||||
import com.google.inject.Inject;
|
||||
|
||||
import monasca.common.configuration.CassandraDbConfiguration;
|
||||
import monasca.persister.configuration.PersisterConfig;
|
||||
|
||||
public class CassandraCluster {
|
||||
|
||||
private static final Logger logger = LoggerFactory.getLogger(CassandraCluster.class);
|
||||
|
||||
private static final String MEASUREMENT_INSERT_CQL = "update monasca.measurements USING TTL ? "
|
||||
+ "set value = ?, value_meta = ?, region = ?, tenant_id = ?, metric_name = ?, dimensions = ? "
|
||||
+ "where metric_id = ? and time_stamp = ?";
|
||||
|
||||
// TODO: Remove update statements, TTL issues
|
||||
private static final String MEASUREMENT_UPDATE_CQL = "update monasca.measurements USING TTL ? "
|
||||
+ "set value = ?, value_meta = ? " + "where metric_id = ? and time_stamp = ?";
|
||||
|
||||
private static final String METRICS_INSERT_CQL = "update monasca.metrics USING TTL ? "
|
||||
+ "set metric_id = ?, created_at = ?, updated_at = ? "
|
||||
+ "where region = ? and tenant_id = ? and metric_name = ? and dimensions = ? and dimension_names = ?";
|
||||
|
||||
private static final String DIMENSION_INSERT_CQL = "insert into monasca.dimensions "
|
||||
+ "(region, tenant_id, name, value) values (?, ?, ?, ?)";
|
||||
|
||||
private static final String DIMENSION_METRIC_INSERT_CQL = "insert into monasca.dimensions_metrics "
|
||||
+ " (region, tenant_id, dimension_name, dimension_value, metric_name) values (?, ?, ?, ?, ?)";
|
||||
|
||||
private static final String METRIC_DIMENSION_INSERT_CQL = "insert into monasca.metrics_dimensions "
|
||||
+ " (region, tenant_id, metric_name, dimension_name, dimension_value) values (?, ?, ?, ?, ?)";
|
||||
|
||||
private static final String INSERT_ALARM_STATE_HISTORY_SQL = "update monasca.alarm_state_history USING TTL ? "
|
||||
+ " set metric = ?, old_state = ?, new_state = ?, sub_alarms = ?, reason = ?, reason_data = ?"
|
||||
+ " where tenant_id = ? and alarm_id = ? and time_stamp = ?";
|
||||
|
||||
private static final String RETRIEVE_METRIC_DIMENSION_CQL = "select region, tenant_id, metric_name, "
|
||||
+ "dimension_name, dimension_value from metrics_dimensions "
|
||||
+ "WHERE token(region, tenant_id, metric_name) > ? and token(region, tenant_id, metric_name) <= ? ";
|
||||
|
||||
private static final String RETRIEVE_METRIC_ID_CQL = "select distinct metric_id from measurements WHERE token(metric_id) > ? and token(metric_id) <= ?";
|
||||
|
||||
private static final String RETRIEVE_DIMENSION_CQL = "select region, tenant_id, name, value from dimensions";
|
||||
|
||||
private static final String NAME = "name";
|
||||
private static final String VALUE = "value";
|
||||
private static final String METRIC_ID = "metric_id";
|
||||
private static final String TENANT_ID_COLUMN = "tenant_id";
|
||||
private static final String METRIC_NAME = "metric_name";
|
||||
private static final String DIMENSION_NAME = "dimension_name";
|
||||
private static final String DIMENSION_VALUE = "dimension_value";
|
||||
private static final String REGION = "region";
|
||||
|
||||
private CassandraDbConfiguration dbConfig;
|
||||
private Cluster cluster;
|
||||
private Session metricsSession;
|
||||
private Session alarmsSession;
|
||||
|
||||
private TokenAwarePolicy lbPolicy;
|
||||
|
||||
private PreparedStatement measurementInsertStmt;
|
||||
private PreparedStatement measurementUpdateStmt;
|
||||
private PreparedStatement metricInsertStmt;
|
||||
private PreparedStatement dimensionStmt;
|
||||
private PreparedStatement dimensionMetricStmt;
|
||||
private PreparedStatement metricDimensionStmt;
|
||||
|
||||
private PreparedStatement retrieveMetricDimensionStmt;
|
||||
private PreparedStatement retrieveMetricIdStmt;
|
||||
|
||||
private PreparedStatement alarmHistoryInsertStmt;
|
||||
|
||||
public Cache<String, Boolean> getMetricIdCache() {
|
||||
return metricIdCache;
|
||||
}
|
||||
|
||||
public Cache<String, Boolean> getDimensionCache() {
|
||||
return dimensionCache;
|
||||
}
|
||||
|
||||
public Cache<String, Boolean> getMetricDimensionCache() {
|
||||
return metricDimensionCache;
|
||||
}
|
||||
|
||||
private final Cache<String, Boolean> metricIdCache;
|
||||
|
||||
private final Cache<String, Boolean> dimensionCache;
|
||||
|
||||
private final Cache<String, Boolean> metricDimensionCache;
|
||||
|
||||
@Inject
|
||||
public CassandraCluster(final PersisterConfig config) {
|
||||
|
||||
this.dbConfig = config.getCassandraDbConfiguration();
|
||||
|
||||
QueryOptions qo = new QueryOptions();
|
||||
qo.setConsistencyLevel(ConsistencyLevel.valueOf(dbConfig.getConsistencyLevel()));
|
||||
qo.setDefaultIdempotence(true);
|
||||
|
||||
String[] contactPoints = dbConfig.getContactPoints();
|
||||
int retries = dbConfig.getMaxWriteRetries();
|
||||
Builder builder = Cluster.builder().addContactPoints(contactPoints).withPort(dbConfig.getPort());
|
||||
builder
|
||||
.withSocketOptions(new SocketOptions().setConnectTimeoutMillis(dbConfig.getConnectionTimeout())
|
||||
.setReadTimeoutMillis(dbConfig.getReadTimeout()));
|
||||
builder.withQueryOptions(qo).withRetryPolicy(new MonascaRetryPolicy(retries, retries, retries));
|
||||
|
||||
lbPolicy = new TokenAwarePolicy(
|
||||
DCAwareRoundRobinPolicy.builder().withLocalDc(dbConfig.getLocalDataCenter()).build());
|
||||
builder.withLoadBalancingPolicy(lbPolicy);
|
||||
|
||||
String user = dbConfig.getUser();
|
||||
if (user != null && !user.isEmpty()) {
|
||||
builder.withAuthProvider(new PlainTextAuthProvider(dbConfig.getUser(), dbConfig.getPassword()));
|
||||
}
|
||||
cluster = builder.build();
|
||||
|
||||
PoolingOptions poolingOptions = cluster.getConfiguration().getPoolingOptions();
|
||||
|
||||
poolingOptions.setConnectionsPerHost(HostDistance.LOCAL, dbConfig.getMaxConnections(),
|
||||
dbConfig.getMaxConnections()).setConnectionsPerHost(HostDistance.REMOTE,
|
||||
dbConfig.getMaxConnections(), dbConfig.getMaxConnections());
|
||||
|
||||
poolingOptions.setMaxRequestsPerConnection(HostDistance.LOCAL, dbConfig.getMaxRequests())
|
||||
.setMaxRequestsPerConnection(HostDistance.REMOTE, dbConfig.getMaxRequests());
|
||||
|
||||
metricsSession = cluster.connect(dbConfig.getKeySpace());
|
||||
|
||||
measurementInsertStmt = metricsSession.prepare(MEASUREMENT_INSERT_CQL).setIdempotent(true);
|
||||
// TODO: Remove update statements, TTL issues
|
||||
measurementUpdateStmt = metricsSession.prepare(MEASUREMENT_UPDATE_CQL).setIdempotent(true);
|
||||
metricInsertStmt = metricsSession.prepare(METRICS_INSERT_CQL).setIdempotent(true);
|
||||
dimensionStmt = metricsSession.prepare(DIMENSION_INSERT_CQL).setIdempotent(true);
|
||||
dimensionMetricStmt = metricsSession.prepare(DIMENSION_METRIC_INSERT_CQL).setIdempotent(true);
|
||||
metricDimensionStmt = metricsSession.prepare(METRIC_DIMENSION_INSERT_CQL).setIdempotent(true);
|
||||
|
||||
retrieveMetricIdStmt = metricsSession.prepare(RETRIEVE_METRIC_ID_CQL).setIdempotent(true);
|
||||
retrieveMetricDimensionStmt = metricsSession.prepare(RETRIEVE_METRIC_DIMENSION_CQL)
|
||||
.setIdempotent(true);
|
||||
|
||||
alarmsSession = cluster.connect(dbConfig.getKeySpace());
|
||||
|
||||
alarmHistoryInsertStmt = alarmsSession.prepare(INSERT_ALARM_STATE_HISTORY_SQL).setIdempotent(true);
|
||||
|
||||
metricIdCache = CacheBuilder.newBuilder()
|
||||
.maximumSize(config.getCassandraDbConfiguration().getDefinitionMaxCacheSize()).build();
|
||||
|
||||
dimensionCache = CacheBuilder.newBuilder()
|
||||
.maximumSize(config.getCassandraDbConfiguration().getDefinitionMaxCacheSize()).build();
|
||||
|
||||
metricDimensionCache = CacheBuilder.newBuilder()
|
||||
.maximumSize(config.getCassandraDbConfiguration().getDefinitionMaxCacheSize()).build();
|
||||
|
||||
logger.info("loading cached definitions from db");
|
||||
|
||||
ExecutorService executor = Executors.newFixedThreadPool(250);
|
||||
|
||||
//a majority of the ids are for metrics not actively receiving msgs anymore
|
||||
//loadMetricIdCache(executor);
|
||||
|
||||
loadDimensionCache();
|
||||
|
||||
loadMetricDimensionCache(executor);
|
||||
|
||||
executor.shutdown();
|
||||
}
|
||||
|
||||
public Session getMetricsSession() {
|
||||
return metricsSession;
|
||||
}
|
||||
|
||||
public Session getAlarmsSession() {
|
||||
return alarmsSession;
|
||||
}
|
||||
|
||||
public PreparedStatement getMeasurementInsertStmt() {
|
||||
return measurementInsertStmt;
|
||||
}
|
||||
|
||||
// TODO: Remove update statements, TTL issues
|
||||
public PreparedStatement getMeasurementUpdateStmt() {
|
||||
return measurementUpdateStmt;
|
||||
}
|
||||
|
||||
public PreparedStatement getMetricInsertStmt() {
|
||||
return metricInsertStmt;
|
||||
}
|
||||
|
||||
public PreparedStatement getDimensionStmt() {
|
||||
return dimensionStmt;
|
||||
}
|
||||
|
||||
public PreparedStatement getDimensionMetricStmt() {
|
||||
return dimensionMetricStmt;
|
||||
}
|
||||
|
||||
public PreparedStatement getMetricDimensionStmt() {
|
||||
return metricDimensionStmt;
|
||||
}
|
||||
|
||||
public PreparedStatement getAlarmHistoryInsertStmt() {
|
||||
return alarmHistoryInsertStmt;
|
||||
}
|
||||
|
||||
public ProtocolOptions getProtocolOptions() {
|
||||
return cluster.getConfiguration().getProtocolOptions();
|
||||
}
|
||||
|
||||
public CodecRegistry getCodecRegistry() {
|
||||
return cluster.getConfiguration().getCodecRegistry();
|
||||
}
|
||||
|
||||
public Metadata getMetaData() {
|
||||
return cluster.getMetadata();
|
||||
}
|
||||
|
||||
public TokenAwarePolicy getLoadBalancePolicy() {
|
||||
return lbPolicy;
|
||||
}
|
||||
|
||||
private void loadMetricIdCache(ExecutorService executor) {
|
||||
final AtomicInteger tasks = new AtomicInteger(0);
|
||||
logger.info("Found token ranges: " + cluster.getMetadata().getTokenRanges().size());
|
||||
for (TokenRange range : cluster.getMetadata().getTokenRanges()) {
|
||||
List<BoundStatement> queries = rangeQuery(retrieveMetricIdStmt, range);
|
||||
for (BoundStatement query : queries) {
|
||||
tasks.incrementAndGet();
|
||||
logger.info("adding a metric id reading task, total: " + tasks.get());
|
||||
|
||||
ResultSetFuture future = metricsSession.executeAsync(query);
|
||||
|
||||
Futures.addCallback(future, new FutureCallback<ResultSet>() {
|
||||
@Override
|
||||
public void onSuccess(ResultSet result) {
|
||||
for (Row row : result) {
|
||||
String id = Bytes.toHexString(row.getBytes(METRIC_ID));
|
||||
if (id != null) {
|
||||
//remove '0x'
|
||||
metricIdCache.put(id.substring(2), Boolean.TRUE);
|
||||
}
|
||||
}
|
||||
|
||||
tasks.decrementAndGet();
|
||||
|
||||
logger.info("completed a metric id read task. Remaining tasks: " + tasks.get());
|
||||
}
|
||||
|
||||
@Override
|
||||
public void onFailure(Throwable t) {
|
||||
logger.error("Failed to execute query to load metric id cache.", t);
|
||||
|
||||
tasks.decrementAndGet();
|
||||
|
||||
logger.info("Failed a metric id read task. Remaining tasks: " + tasks.get());
|
||||
}
|
||||
}, executor);
|
||||
|
||||
}
|
||||
}
|
||||
|
||||
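// Block until every asynchronous range query has completed or failed.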
while (tasks.get() > 0) {
|
||||
logger.debug("waiting for more metric id load tasks: " + tasks.get());
|
||||
|
||||
try {
|
||||
Thread.sleep(3000);
|
||||
} catch (InterruptedException e) {
|
||||
logger.warn("load metric cache was interrupted", e);
|
||||
}
|
||||
}
|
||||
|
||||
logger.info("loaded metric id cache from database: " + metricIdCache.size());
|
||||
}
|
||||
|
||||
private List<BoundStatement> rangeQuery(PreparedStatement rangeStmt, TokenRange range) {
|
||||
List<BoundStatement> res = Lists.newArrayList();
|
||||
for (TokenRange subRange : range.unwrap()) {
|
||||
res.add(rangeStmt.bind(subRange.getStart(), subRange.getEnd()));
|
||||
}
|
||||
return res;
|
||||
}
|
||||
|
||||
private void loadDimensionCache() {
|
||||
|
||||
ResultSet results = metricsSession.execute(RETRIEVE_DIMENSION_CQL);
|
||||
|
||||
for (Row row : results) {
|
||||
String key = getDimnesionEntryKey(row.getString(REGION), row.getString(TENANT_ID_COLUMN),
|
||||
row.getString(NAME), row.getString(VALUE));
|
||||
dimensionCache.put(key, Boolean.TRUE);
|
||||
}
|
||||
|
||||
logger.info("loaded dimension cache from database: " + dimensionCache.size());
|
||||
}
|
||||
|
||||
public String getDimnesionEntryKey(String region, String tenantId, String name, String value) {
|
||||
StringBuilder sb = new StringBuilder();
|
||||
sb.append(region).append('\0');
|
||||
sb.append(tenantId).append('\0');
|
||||
sb.append(name).append('\0');
|
||||
sb.append(value);
|
||||
return sb.toString();
|
||||
}
|
||||
|
||||
private void loadMetricDimensionCache(ExecutorService executor) {
|
||||
|
||||
final AtomicInteger tasks = new AtomicInteger(0);
|
||||
|
||||
for (TokenRange range : cluster.getMetadata().getTokenRanges()) {
|
||||
List<BoundStatement> queries = rangeQuery(retrieveMetricDimensionStmt, range);
|
||||
for (BoundStatement query : queries) {
|
||||
tasks.incrementAndGet();
|
||||
|
||||
logger.info("Adding a metric dimnesion read task, total: " + tasks.get());
|
||||
|
||||
ResultSetFuture future = metricsSession.executeAsync(query);
|
||||
|
||||
Futures.addCallback(future, new FutureCallback<ResultSet>() {
|
||||
@Override
|
||||
public void onSuccess(ResultSet result) {
|
||||
for (Row row : result) {
|
||||
String key = getMetricDimnesionEntryKey(row.getString(REGION),
|
||||
row.getString(TENANT_ID_COLUMN), row.getString(METRIC_NAME),
|
||||
row.getString(DIMENSION_NAME), row.getString(DIMENSION_VALUE));
|
||||
metricDimensionCache.put(key, Boolean.TRUE);
|
||||
}
|
||||
|
||||
tasks.decrementAndGet();
|
||||
|
||||
logger.info("Completed a metric dimension read task. Remaining tasks: " + tasks.get());
|
||||
}
|
||||
|
||||
@Override
|
||||
public void onFailure(Throwable t) {
|
||||
logger.error("Failed to execute query to load metric id cache.", t);
|
||||
|
||||
tasks.decrementAndGet();
|
||||
|
||||
logger.info("Failed a metric dimension read task. Remaining tasks: " + tasks.get());
|
||||
}
|
||||
}, executor);
|
||||
|
||||
}
|
||||
}
|
||||
|
||||
while (tasks.get() > 0) {
|
||||
|
||||
logger.debug("waiting for metric dimension cache to load ...");
|
||||
|
||||
try {
|
||||
Thread.sleep(1000);
|
||||
} catch (InterruptedException e) {
|
||||
logger.warn("load metric dimension cache was interrupted", e);
|
||||
}
|
||||
}
|
||||
|
||||
logger.info("loaded metric dimension cache from database: " + metricDimensionCache.size());
|
||||
}
|
||||
|
||||
public String getMetricDimnesionEntryKey(String region, String tenantId, String metricName,
|
||||
String dimensionName, String dimensionValue) {
|
||||
StringBuilder sb = new StringBuilder();
|
||||
sb.append(region).append('\0');
|
||||
sb.append(tenantId).append('\0');
|
||||
sb.append(metricName).append('\0');
|
||||
sb.append(dimensionName).append('\0');
|
||||
sb.append(dimensionValue);
|
||||
return sb.toString();
|
||||
}
|
||||
}
|
||||
@@ -1,207 +0,0 @@
|
||||
/*
|
||||
* Copyright (c) 2017 SUSE LLC
|
||||
*
|
||||
* Licensed under the Apache License, Version 2.0 (the "License");
|
||||
* you may not use this file except in compliance with the License.
|
||||
* You may obtain a copy of the License at
|
||||
*
|
||||
* http://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing, software
|
||||
* distributed under the License is distributed on an "AS IS" BASIS,
|
||||
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
|
||||
* implied.
|
||||
* See the License for the specific language governing permissions and
|
||||
* limitations under the License.
|
||||
*/
|
||||
|
||||
package monasca.persister.repository.cassandra;
|
||||
|
||||
import java.nio.ByteBuffer;
|
||||
import java.util.ArrayDeque;
|
||||
import java.util.ArrayList;
|
||||
import java.util.Deque;
|
||||
import java.util.HashMap;
|
||||
import java.util.HashSet;
|
||||
import java.util.Iterator;
|
||||
import java.util.List;
|
||||
import java.util.Map;
|
||||
import java.util.Map.Entry;
|
||||
import java.util.Set;
|
||||
|
||||
import org.slf4j.Logger;
|
||||
import org.slf4j.LoggerFactory;
|
||||
|
||||
import com.datastax.driver.core.BatchStatement;
|
||||
import com.datastax.driver.core.BatchStatement.Type;
|
||||
import com.datastax.driver.core.BoundStatement;
|
||||
import com.datastax.driver.core.CodecRegistry;
|
||||
import com.datastax.driver.core.Host;
|
||||
import com.datastax.driver.core.Metadata;
|
||||
import com.datastax.driver.core.ProtocolOptions;
|
||||
import com.datastax.driver.core.Token;
|
||||
import com.datastax.driver.core.policies.TokenAwarePolicy;
|
||||
|
||||
public class CassandraMetricBatch {
|
||||
private static Logger logger = LoggerFactory.getLogger(CassandraMetricBatch.class);
|
||||
|
||||
ProtocolOptions protocol;
|
||||
CodecRegistry codec;
|
||||
Metadata metadata;
|
||||
TokenAwarePolicy policy;
|
||||
int batchLimit;
|
||||
|
||||
Map<Token, Deque<BatchStatement>> metricQueries;
|
||||
Map<Token, Deque<BatchStatement>> dimensionQueries;
|
||||
Map<Token, Deque<BatchStatement>> dimensionMetricQueries;
|
||||
Map<Token, Deque<BatchStatement>> metricDimensionQueries;
|
||||
Map<Set<Host>, Deque<BatchStatement>> measurementQueries;
|
||||
|
||||
public CassandraMetricBatch(Metadata metadata, ProtocolOptions protocol, CodecRegistry codec,
|
||||
TokenAwarePolicy lbPolicy, int batchLimit) {
|
||||
this.protocol = protocol;
|
||||
this.codec = codec;
|
||||
this.metadata = metadata;
|
||||
this.policy = lbPolicy;
|
||||
this.batchLimit = batchLimit;
|
||||
|
||||
metricQueries = new HashMap<>();
|
||||
dimensionQueries = new HashMap<>();
|
||||
dimensionMetricQueries = new HashMap<>();
|
||||
metricDimensionQueries = new HashMap<>();
|
||||
measurementQueries = new HashMap<>();
|
||||
}
|
||||
|
||||
public void addMetricQuery(BoundStatement s) {
|
||||
batchQueryByToken(s, metricQueries);
|
||||
}
|
||||
|
||||
public void addDimensionQuery(BoundStatement s) {
|
||||
batchQueryByToken(s, dimensionQueries);
|
||||
}
|
||||
|
||||
public void addDimensionMetricQuery(BoundStatement s) {
|
||||
batchQueryByToken(s, dimensionMetricQueries);
|
||||
}
|
||||
|
||||
public void addMetricDimensionQuery(BoundStatement s) {
|
||||
batchQueryByToken(s, metricDimensionQueries);
|
||||
}
|
||||
|
||||
public void addMeasurementQuery(BoundStatement s) {
|
||||
batchQueryByReplica(s, measurementQueries);
|
||||
}
|
||||
|
||||
private void batchQueryByToken(BoundStatement s, Map<Token, Deque<BatchStatement>> batchedQueries) {
|
||||
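// Group statements by partition token so each unlogged batch targets a single partition.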
ByteBuffer b = s.getRoutingKey(protocol.getProtocolVersion(), codec);
|
||||
Token token = metadata.newToken(b);
|
||||
Deque<BatchStatement> queue = batchedQueries.get(token);
|
||||
if (queue == null) {
|
||||
queue = new ArrayDeque<BatchStatement>();
|
||||
BatchStatement bs = new BatchStatement(Type.UNLOGGED);
|
||||
bs.add(s);
|
||||
queue.offer(bs);
|
||||
batchedQueries.put(token, queue);
|
||||
} else {
|
||||
BatchStatement bs = queue.getLast();
|
||||
if (bs.size() < batchLimit) {
|
||||
bs.add(s);
|
||||
} else {
|
||||
bs = new BatchStatement(Type.UNLOGGED);
|
||||
bs.add(s);
|
||||
queue.offerLast(bs);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
private void batchQueryByReplica(BoundStatement s,
|
||||
Map<Set<Host>, Deque<BatchStatement>> batchedQueries) {
|
||||
Iterator<Host> it = policy.newQueryPlan(s.getKeyspace(), s);
|
||||
Set<Host> hosts = new HashSet<>();
|
||||
|
||||
while (it.hasNext()) {
|
||||
hosts.add(it.next());
|
||||
}
|
||||
|
||||
Deque<BatchStatement> queue = batchedQueries.get(hosts);
|
||||
if (queue == null) {
|
||||
queue = new ArrayDeque<BatchStatement>();
|
||||
BatchStatement bs = new BatchStatement(Type.UNLOGGED);
|
||||
bs.add(s);
|
||||
queue.offer(bs);
|
||||
batchedQueries.put(hosts, queue);
|
||||
} else {
|
||||
BatchStatement bs = queue.getLast();
|
||||
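// Measurement batches are capped at a fixed 30 statements here, unlike the token batches, which use the configured batchLimit.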
if (bs.size() < 30) {
|
||||
bs.add(s);
|
||||
} else {
|
||||
bs = new BatchStatement(Type.UNLOGGED);
|
||||
bs.add(s);
|
||||
queue.offerLast(bs);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
public void clear() {
|
||||
metricQueries.clear();
|
||||
dimensionQueries.clear();
|
||||
dimensionMetricQueries.clear();
|
||||
metricDimensionQueries.clear();
|
||||
measurementQueries.clear();
|
||||
}
|
||||
|
||||
public List<Deque<BatchStatement>> getAllBatches() {
|
||||
logTokenBatchMap("metric batches", metricQueries);
|
||||
logTokenBatchMap("dimension batches", dimensionQueries);
|
||||
logTokenBatchMap("dimension metric batches", dimensionMetricQueries);
|
||||
logTokenBatchMap("metric dimension batches", metricDimensionQueries);
|
||||
logReplicaBatchMap("measurement batches", measurementQueries);
|
||||
|
||||
ArrayList<Deque<BatchStatement>> list = new ArrayList<>();
|
||||
list.addAll(metricQueries.values());
|
||||
list.addAll(dimensionQueries.values());
|
||||
list.addAll(dimensionMetricQueries.values());
|
||||
list.addAll(metricDimensionQueries.values());
|
||||
list.addAll(measurementQueries.values());
|
||||
return list;
|
||||
}
|
||||
|
||||
private void logTokenBatchMap(String name, Map<Token, Deque<BatchStatement>> map) {
|
||||
if (logger.isDebugEnabled()) {
|
||||
StringBuilder sb = new StringBuilder(name);
|
||||
sb.append(": Size: ").append(map.size());
|
||||
sb.append("; Tokens: |");
|
||||
for (Entry<Token, Deque<BatchStatement>> entry : map.entrySet()) {
|
||||
sb.append(entry.getKey().toString()).append(":");
|
||||
for (BatchStatement bs : entry.getValue()) {
|
||||
sb.append(bs.size()).append(",");
|
||||
}
|
||||
sb.append("|.");
|
||||
}
|
||||
|
||||
logger.debug(sb.toString());
|
||||
}
|
||||
}
|
||||
|
||||
private void logReplicaBatchMap(String name, Map<Set<Host>, Deque<BatchStatement>> map) {
|
||||
if (logger.isDebugEnabled()) {
|
||||
StringBuilder sb = new StringBuilder(name);
|
||||
sb.append(": Size: ").append(map.size());
|
||||
sb.append(". Replicas: |");
|
||||
for (Entry<Set<Host>, Deque<BatchStatement>> entry : map.entrySet()) {
|
||||
for (Host host : entry.getKey()) {
|
||||
sb.append(host.getAddress().toString()).append(",");
|
||||
}
|
||||
sb.append(":");
|
||||
for (BatchStatement bs : entry.getValue()) {
|
||||
sb.append(bs.size()).append(",");
|
||||
}
|
||||
|
||||
sb.append("|");
|
||||
|
||||
}
|
||||
logger.debug(sb.toString());
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -1,338 +0,0 @@
|
||||
/*
|
||||
* Copyright (c) 2017 SUSE LLC
|
||||
*
|
||||
* Licensed under the Apache License, Version 2.0 (the "License");
|
||||
* you may not use this file except in compliance with the License.
|
||||
* You may obtain a copy of the License at
|
||||
*
|
||||
* http://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing, software
|
||||
* distributed under the License is distributed on an "AS IS" BASIS,
|
||||
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
|
||||
* implied.
|
||||
* See the License for the specific language governing permissions and
|
||||
* limitations under the License.
|
||||
*/
|
||||
|
||||
package monasca.persister.repository.cassandra;
|
||||
|
||||
import java.security.NoSuchAlgorithmException;
|
||||
import java.sql.SQLException;
|
||||
import java.sql.Timestamp;
|
||||
import java.util.ArrayList;
|
||||
import java.util.Deque;
|
||||
import java.util.Iterator;
|
||||
import java.util.List;
|
||||
import java.util.Map;
|
||||
import java.util.Map.Entry;
|
||||
import java.util.TreeMap;
|
||||
import java.util.concurrent.ExecutionException;
|
||||
import java.util.concurrent.TimeUnit;
|
||||
|
||||
import javax.inject.Inject;
|
||||
|
||||
import org.apache.commons.codec.digest.DigestUtils;
|
||||
import org.slf4j.Logger;
|
||||
import org.slf4j.LoggerFactory;
|
||||
|
||||
import com.codahale.metrics.Meter;
|
||||
import com.datastax.driver.core.BatchStatement;
|
||||
import com.datastax.driver.core.BoundStatement;
|
||||
import com.datastax.driver.core.ResultSet;
|
||||
import com.datastax.driver.core.ResultSetFuture;
|
||||
import com.fasterxml.jackson.core.JsonProcessingException;
|
||||
import com.fasterxml.jackson.databind.ObjectMapper;
|
||||
import com.google.common.util.concurrent.Futures;
|
||||
import com.google.common.util.concurrent.ListenableFuture;
|
||||
|
||||
import io.dropwizard.setup.Environment;
|
||||
import monasca.common.model.metric.Metric;
|
||||
import monasca.common.model.metric.MetricEnvelope;
|
||||
import monasca.persister.configuration.PersisterConfig;
|
||||
import monasca.persister.repository.Repo;
|
||||
import monasca.persister.repository.RepoException;
|
||||
import monasca.persister.repository.Sha1HashId;
|
||||
|
||||
public class CassandraMetricRepo extends CassandraRepo implements Repo<MetricEnvelope> {
|
||||
|
||||
private static final Logger logger = LoggerFactory.getLogger(CassandraMetricRepo.class);
|
||||
|
||||
public static final int MAX_COLUMN_LENGTH = 255;
|
||||
public static final int MAX_VALUE_META_LENGTH = 2048;
|
||||
|
||||
private static final String TENANT_ID = "tenantId";
|
||||
private static final String REGION = "region";
|
||||
private static final String EMPTY_STR = "";
|
||||
|
||||
private int retention;
|
||||
|
||||
private CassandraMetricBatch batches;
|
||||
|
||||
private int metricCount;
|
||||
|
||||
private final ObjectMapper objectMapper = new ObjectMapper();
|
||||
|
||||
public final Meter measurementMeter;
|
||||
public final Meter metricCacheMissMeter;
|
||||
public final Meter metricCacheHitMeter;
|
||||
public final Meter dimensionCacheMissMeter;
|
||||
public final Meter dimensionCacheHitMeter;
|
||||
public final Meter metricDimensionCacheMissMeter;
|
||||
public final Meter metricDimensionCacheHitMeter;
|
||||
|
||||
@Inject
|
||||
public CassandraMetricRepo(CassandraCluster cluster, PersisterConfig config, Environment environment)
|
||||
throws NoSuchAlgorithmException, SQLException {
|
||||
|
||||
super(cluster, environment, config.getCassandraDbConfiguration().getMaxWriteRetries(),
|
||||
config.getMetricConfiguration().getBatchSize());
|
||||
|
||||
logger.debug("Instantiating " + this.getClass().getName());
|
||||
|
||||
this.retention = config.getCassandraDbConfiguration().getRetentionPolicy() * 24 * 3600;
|
||||
|
||||
this.measurementMeter = this.environment.metrics()
|
||||
.meter(this.getClass().getName() + "." + "measurement-meter");
|
||||
|
||||
this.metricCacheMissMeter = this.environment.metrics()
|
||||
.meter(this.getClass().getName() + "." + "definition-cache-miss-meter");
|
||||
|
||||
this.metricCacheHitMeter = this.environment.metrics()
|
||||
.meter(this.getClass().getName() + "." + "definition-cache-hit-meter");
|
||||
|
||||
this.dimensionCacheMissMeter = this.environment.metrics()
|
||||
.meter(this.getClass().getName() + "." + "dimension-cache-miss-meter");
|
||||
|
||||
this.dimensionCacheHitMeter = this.environment.metrics()
|
||||
.meter(this.getClass().getName() + "." + "dimension-cache-hit-meter");
|
||||
|
||||
this.metricDimensionCacheMissMeter = this.environment.metrics()
|
||||
.meter(this.getClass().getName() + "." + "metric-dimension-cache-miss-meter");
|
||||
|
||||
this.metricDimensionCacheHitMeter = this.environment.metrics()
|
||||
.meter(this.getClass().getName() + "." + "metric-dimension-cache-hit-meter");
|
||||
|
||||
session = cluster.getMetricsSession();
|
||||
|
||||
metricCount = 0;
|
||||
|
||||
batches = new CassandraMetricBatch(cluster.getMetaData(), cluster.getProtocolOptions(),
|
||||
cluster.getCodecRegistry(), cluster.getLoadBalancePolicy(),
|
||||
config.getCassandraDbConfiguration().getMaxBatches());
|
||||
|
||||
|
||||
|
||||
logger.debug(this.getClass().getName() + " is fully instantiated");
|
||||
}
|
||||
|
||||
@Override
|
||||
public void addToBatch(MetricEnvelope metricEnvelope, String id) {
|
||||
Metric metric = metricEnvelope.metric;
|
||||
Map<String, Object> metaMap = metricEnvelope.meta;
|
||||
|
||||
String tenantId = getMeta(TENANT_ID, metric, metaMap, id);
|
||||
String region = getMeta(REGION, metric, metaMap, id);
|
||||
String metricName = metric.getName();
|
||||
TreeMap<String, String> dimensions = metric.getDimensions() == null ? new TreeMap<String, String>()
|
||||
: new TreeMap<>(metric.getDimensions());
|
||||
|
||||
StringBuilder sb = new StringBuilder(region).append(tenantId).append(metricName);
|
||||
|
||||
Iterator<String> it = dimensions.keySet().iterator();
|
||||
while (it.hasNext()) {
|
||||
String k = it.next();
|
||||
sb.append(k).append(dimensions.get(k));
|
||||
}
|
||||
|
||||
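// The definition id is the SHA-1 of region + tenantId + metricName + the sorted dimension names and values.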
byte[] defIdSha = DigestUtils.sha(sb.toString());
|
||||
Sha1HashId defIdShaHash = new Sha1HashId(defIdSha);
|
||||
|
||||
if (cluster.getMetricIdCache().getIfPresent(defIdShaHash.toHexString()) == null) {
|
||||
addDefinitionToBatch(defIdShaHash, metricName, dimensions, tenantId, region, id,
|
||||
metric.getTimestamp());
|
||||
batches.addMeasurementQuery(buildMeasurementInsertQuery(defIdShaHash, metric.getTimestamp(),
|
||||
metric.getValue(), metric.getValueMeta(), region, tenantId, metricName, dimensions, id));
|
||||
} else {
|
||||
metricCacheHitMeter.mark();
|
||||
// MUST update all relevant columns to ensure TTL consistency in a row
|
||||
batches.addMetricQuery(cluster.getMetricInsertStmt().bind(retention,
|
||||
defIdShaHash.getSha1HashByteBuffer(), new Timestamp(metric.getTimestamp()),
|
||||
new Timestamp(metric.getTimestamp()), region, tenantId, metricName,
|
||||
getDimensionList(dimensions), new ArrayList<>(dimensions.keySet())));
|
||||
batches.addMeasurementQuery(buildMeasurementUpdateQuery(defIdShaHash, metric.getTimestamp(),
|
||||
metric.getValue(), metric.getValueMeta(), id));
|
||||
}
|
||||
|
||||
metricCount++;
|
||||
}
|
||||
|
||||
private String getMeta(String name, Metric metric, Map<String, Object> meta, String id) {
|
||||
if (meta.containsKey(name)) {
|
||||
return (String) meta.get(name);
|
||||
} else {
|
||||
logger.warn(
    "[{}]: failed to find {} in message envelope meta data. metric message may be malformed. "
        + "setting {} to empty string.",
    id, name, name);
|
||||
logger.warn("[{}]: metric: {}", id, metric.toString());
|
||||
logger.warn("[{}]: meta: {}", id, meta.toString());
|
||||
return EMPTY_STR;
|
||||
}
|
||||
}
|
||||
|
||||
private BoundStatement buildMeasurementUpdateQuery(Sha1HashId defId, long timeStamp, double value,
|
||||
Map<String, String> valueMeta, String id) {
|
||||
|
||||
String valueMetaString = getValueMetaString(valueMeta, id);
|
||||
if (logger.isDebugEnabled()) {
|
||||
logger.debug("[{}]: adding metric to batch: metric id: {}, time: {}, value: {}, value meta {}",
|
||||
id, defId.toHexString(), timeStamp, value, valueMetaString);
|
||||
}
|
||||
|
||||
return cluster.getMeasurementUpdateStmt().bind(retention, value, valueMetaString,
|
||||
defId.getSha1HashByteBuffer(), new Timestamp(timeStamp));
|
||||
}
|
||||
|
||||
private BoundStatement buildMeasurementInsertQuery(Sha1HashId defId, long timeStamp, double value,
|
||||
Map<String, String> valueMeta, String region, String tenantId, String metricName,
|
||||
Map<String, String> dimensions, String id) {
|
||||
|
||||
String valueMetaString = getValueMetaString(valueMeta, id);
|
||||
if (logger.isDebugEnabled()) {
|
||||
logger.debug("[{}]: adding metric to batch: metric id: {}, time: {}, value: {}, value meta {}",
|
||||
id, defId.toHexString(), timeStamp, value, valueMetaString);
|
||||
}
|
||||
|
||||
measurementMeter.mark();
|
||||
return cluster.getMeasurementInsertStmt().bind(retention, value, valueMetaString, region, tenantId,
|
||||
metricName, getDimensionList(dimensions), defId.getSha1HashByteBuffer(),
|
||||
new Timestamp(timeStamp));
|
||||
}
|
||||
|
||||
private String getValueMetaString(Map<String, String> valueMeta, String id) {
|
||||
|
||||
String valueMetaString = "";
|
||||
|
||||
if (valueMeta != null && !valueMeta.isEmpty()) {
|
||||
|
||||
try {
|
||||
|
||||
valueMetaString = this.objectMapper.writeValueAsString(valueMeta);
|
||||
if (valueMetaString.length() > MAX_VALUE_META_LENGTH) {
|
||||
logger.error("[{}]: Value meta length {} longer than maximum {}, dropping value meta", id,
|
||||
valueMetaString.length(), MAX_VALUE_META_LENGTH);
|
||||
return "";
|
||||
}
|
||||
|
||||
} catch (JsonProcessingException e) {
|
||||
|
||||
logger.error("[{}]: Failed to serialize value meta {}, dropping value meta from measurement",
|
||||
id, valueMeta);
|
||||
}
|
||||
}
|
||||
|
||||
return valueMetaString;
|
||||
}
|
||||
|
||||
private void addDefinitionToBatch(Sha1HashId defId, String metricName, Map<String, String> dimensions,
|
||||
String tenantId, String region, String id, long timestamp) {
|
||||
|
||||
metricCacheMissMeter.mark();
|
||||
if (logger.isDebugEnabled()) {
|
||||
logger.debug("[{}]: adding definition to batch: defId: {}, name: {}, tenantId: {}, region: {}",
|
||||
id, defId.toHexString(), metricName, tenantId, region);
|
||||
}
|
||||
|
||||
Timestamp ts = new Timestamp(timestamp);
|
||||
batches.addMetricQuery(
|
||||
cluster.getMetricInsertStmt().bind(retention, defId.getSha1HashByteBuffer(), ts, ts, region,
|
||||
tenantId, metricName, getDimensionList(dimensions), new ArrayList<>(dimensions.keySet())));
|
||||
|
||||
for (Map.Entry<String, String> entry : dimensions.entrySet()) {
|
||||
String name = entry.getKey();
|
||||
String value = entry.getValue();
|
||||
|
||||
String dimensionKey = cluster.getDimnesionEntryKey(region, tenantId, name, value);
|
||||
if (cluster.getDimensionCache().getIfPresent(dimensionKey) != null) {
|
||||
dimensionCacheHitMeter.mark();
|
||||
|
||||
} else {
|
||||
dimensionCacheMissMeter.mark();
|
||||
if (logger.isDebugEnabled()) {
|
||||
logger.debug("[{}]: adding dimension to batch: defId: {}, name: {}, value: {}", id,
|
||||
defId.toHexString(), name, value);
|
||||
}
|
||||
batches.addDimensionQuery(cluster.getDimensionStmt().bind(region, tenantId, name, value));
|
||||
cluster.getDimensionCache().put(dimensionKey, Boolean.TRUE);
|
||||
}
|
||||
|
||||
String metricDimensionKey = cluster.getMetricDimnesionEntryKey(region, tenantId, metricName, name, value);
|
||||
if (cluster.getMetricDimensionCache().getIfPresent(metricDimensionKey) != null) {
|
||||
metricDimensionCacheHitMeter.mark();
|
||||
} else {
|
||||
metricDimensionCacheMissMeter.mark();
|
||||
batches.addDimensionMetricQuery(
|
||||
cluster.getDimensionMetricStmt().bind(region, tenantId, name, value, metricName));
|
||||
|
||||
batches.addMetricDimensionQuery(
|
||||
cluster.getMetricDimensionStmt().bind(region, tenantId, metricName, name, value));
|
||||
cluster.getMetricDimensionCache().put(metricDimensionKey, Boolean.TRUE);
|
||||
}
|
||||
}
|
||||
|
||||
String metricId = defId.toHexString();
|
||||
cluster.getMetricIdCache().put(metricId, Boolean.TRUE);
|
||||
}
|
||||
|
||||
public List<String> getDimensionList(Map<String, String> dimensions) {
|
||||
List<String> list = new ArrayList<>(dimensions.size());
|
||||
for (Entry<String, String> dim : dimensions.entrySet()) {
|
||||
list.add(new StringBuffer(dim.getKey()).append('\t').append(dim.getValue()).toString());
|
||||
}
|
||||
return list;
|
||||
}
|
||||
|
||||
@Override
|
||||
public int flush(String id) throws RepoException {
|
||||
long startTime = System.nanoTime();
|
||||
List<ResultSetFuture> results = new ArrayList<>();
|
||||
List<Deque<BatchStatement>> list = batches.getAllBatches();
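// submit every accumulated batch asynchronously, then wait on the futures below; the first failure cancels the remaining futures and the flush is reported as failed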
|
||||
for (Deque<BatchStatement> q : list) {
|
||||
BatchStatement b;
|
||||
while ((b = q.poll()) != null) {
|
||||
results.add(session.executeAsync(b));
|
||||
}
|
||||
}
|
||||
|
||||
List<ListenableFuture<ResultSet>> futures = Futures.inCompletionOrder(results);
|
||||
|
||||
boolean cancel = false;
|
||||
Exception ex = null;
|
||||
for (ListenableFuture<ResultSet> future : futures) {
|
||||
if (cancel) {
|
||||
future.cancel(false);
|
||||
continue;
|
||||
}
|
||||
try {
|
||||
future.get();
|
||||
} catch (InterruptedException | ExecutionException e) {
|
||||
cancel = true;
|
||||
ex = e;
|
||||
}
|
||||
}
|
||||
|
||||
this.commitTimer.update(System.nanoTime() - startTime, TimeUnit.NANOSECONDS);
|
||||
|
||||
if (ex != null) {
|
||||
metricFailed.inc(metricCount);
|
||||
throw new RepoException(ex);
|
||||
}
|
||||
|
||||
batches.clear();
|
||||
int flushCnt = metricCount;
|
||||
metricCount = 0;
|
||||
metricCompleted.inc(flushCnt);
|
||||
return flushCnt;
|
||||
}
|
||||
}
|
||||
@@ -1,210 +0,0 @@
|
||||
/*
|
||||
* Copyright (c) 2017 SUSE LLC
|
||||
*
|
||||
* Licensed under the Apache License, Version 2.0 (the "License");
|
||||
* you may not use this file except in compliance with the License.
|
||||
* You may obtain a copy of the License at
|
||||
*
|
||||
* http://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing, software
|
||||
* distributed under the License is distributed on an "AS IS" BASIS,
|
||||
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
|
||||
* implied.
|
||||
* See the License for the specific language governing permissions and
|
||||
* limitations under the License.
|
||||
*/
|
||||
|
||||
package monasca.persister.repository.cassandra;
|
||||
|
||||
import java.util.ArrayDeque;
|
||||
import java.util.ArrayList;
|
||||
import java.util.Deque;
|
||||
import java.util.List;
|
||||
import java.util.concurrent.ExecutionException;
|
||||
import java.util.concurrent.TimeUnit;
|
||||
|
||||
import org.slf4j.Logger;
|
||||
import org.slf4j.LoggerFactory;
|
||||
|
||||
import com.codahale.metrics.Counter;
|
||||
import com.codahale.metrics.Timer;
|
||||
import com.datastax.driver.core.BatchStatement;
|
||||
import com.datastax.driver.core.BatchStatement.Type;
|
||||
import com.datastax.driver.core.ResultSet;
|
||||
import com.datastax.driver.core.ResultSetFuture;
|
||||
import com.datastax.driver.core.Session;
|
||||
import com.datastax.driver.core.Statement;
|
||||
import com.datastax.driver.core.exceptions.BootstrappingException;
|
||||
import com.datastax.driver.core.exceptions.DriverException;
|
||||
import com.datastax.driver.core.exceptions.NoHostAvailableException;
|
||||
import com.datastax.driver.core.exceptions.OperationTimedOutException;
|
||||
import com.datastax.driver.core.exceptions.OverloadedException;
|
||||
import com.datastax.driver.core.exceptions.QueryConsistencyException;
|
||||
import com.datastax.driver.core.exceptions.UnavailableException;
|
||||
import com.google.common.util.concurrent.Futures;
|
||||
import com.google.common.util.concurrent.ListenableFuture;
|
||||
|
||||
import io.dropwizard.setup.Environment;
|
||||
import monasca.persister.repository.RepoException;
|
||||
|
||||
public abstract class CassandraRepo {
|
||||
private static Logger logger = LoggerFactory.getLogger(CassandraRepo.class);
|
||||
|
||||
final Environment environment;
|
||||
|
||||
final Timer commitTimer;
|
||||
|
||||
CassandraCluster cluster;
|
||||
Session session;
|
||||
|
||||
int maxWriteRetries;
|
||||
|
||||
int batchSize;
|
||||
|
||||
long lastFlushTimeStamp;
|
||||
|
||||
Deque<Statement> queue;
|
||||
|
||||
Counter metricCompleted;
|
||||
|
||||
Counter metricFailed;
|
||||
|
||||
public CassandraRepo(CassandraCluster cluster, Environment env, int maxWriteRetries, int batchSize) {
|
||||
this.cluster = cluster;
|
||||
this.maxWriteRetries = maxWriteRetries;
|
||||
this.batchSize = batchSize;
|
||||
|
||||
this.environment = env;
|
||||
|
||||
this.commitTimer = this.environment.metrics().timer(getClass().getName() + "." + "commit-timer");
|
||||
|
||||
lastFlushTimeStamp = System.currentTimeMillis();
|
||||
|
||||
queue = new ArrayDeque<>(batchSize);
|
||||
|
||||
this.metricCompleted = environment.metrics()
|
||||
.counter(getClass().getName() + "." + "metrics-persisted-counter");
|
||||
|
||||
this.metricFailed = environment.metrics()
|
||||
.counter(getClass().getName() + "." + "metrics-failed-counter");
|
||||
}
|
||||
|
||||
protected void executeQuery(String id, Statement query, long startTime) throws DriverException {
|
||||
_executeQuery(id, query, startTime, 0);
|
||||
}
|
||||
|
||||
private void _executeQuery(final String id, final Statement query, final long startTime,
|
||||
final int retryCount) {
|
||||
try {
|
||||
session.execute(query);
|
||||
|
||||
commitTimer.update(System.nanoTime() - startTime, TimeUnit.NANOSECONDS);
|
||||
|
||||
// ResultSetFuture future = session.executeAsync(query);
|
||||
|
||||
// Futures.addCallback(future, new FutureCallback<ResultSet>() {
|
||||
// @Override
|
||||
// public void onSuccess(ResultSet result) {
|
||||
// metricCompleted.inc();
|
||||
// commitTimer.update(System.nanoTime() - startTime, TimeUnit.NANOSECONDS);
|
||||
// }
|
||||
//
|
||||
// @Override
|
||||
// public void onFailure(Throwable t) {
|
||||
// if (t instanceof NoHostAvailableException | t instanceof
|
||||
// BootstrappingException
|
||||
// | t instanceof OverloadedException | t instanceof QueryConsistencyException
|
||||
// | t instanceof UnavailableException) {
|
||||
// retryQuery(id, query, startTime, retryCount, (DriverException) t);
|
||||
// } else {
|
||||
// metricFailed.inc();
|
||||
// commitTimer.update(System.nanoTime() - startTime, TimeUnit.NANOSECONDS);
|
||||
// logger.error("Failed to execute query.", t);
|
||||
// }
|
||||
// }
|
||||
// }, MoreExecutors.sameThreadExecutor());
|
||||
|
||||
} catch (NoHostAvailableException | BootstrappingException | OverloadedException
|
||||
| QueryConsistencyException | UnavailableException | OperationTimedOutException e) {
|
||||
retryQuery(id, query, startTime, retryCount, e);
|
||||
} catch (DriverException e) {
|
||||
metricFailed.inc(((BatchStatement) query).size());
|
||||
commitTimer.update(System.nanoTime() - startTime, TimeUnit.NANOSECONDS);
|
||||
throw e;
|
||||
}
|
||||
}
|
||||
|
||||
private void retryQuery(String id, Statement query, final long startTime, int retryCount,
|
||||
DriverException e) throws DriverException {
|
||||
if (retryCount >= maxWriteRetries) {
|
||||
logger.error("[{}]: Query aborted after {} retry: ", id, retryCount, e.getMessage());
|
||||
metricFailed.inc(((BatchStatement) query).size());
|
||||
commitTimer.update(System.nanoTime() - startTime, TimeUnit.NANOSECONDS);
|
||||
throw e;
|
||||
} else {
|
||||
logger.warn("[{}]: Query failed, retrying {} of {}: {} ", id, retryCount, maxWriteRetries,
|
||||
e.getMessage());
|
||||
|
||||
try {
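// back off exponentially between retries: 1s, 2s, 4s, ... before re-executing the query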
|
||||
Thread.sleep(1000 * (1 << retryCount));
|
||||
} catch (InterruptedException ie) {
|
||||
logger.debug("[{}]: Interrupted: {}", id, ie);
|
||||
}
|
||||
_executeQuery(id, query, startTime, retryCount + 1);
|
||||
}
|
||||
}
|
||||
|
||||
public int handleFlush_batch(String id) {
|
||||
Statement query;
|
||||
int flushedCount = 0;
|
||||
|
||||
BatchStatement batch = new BatchStatement(Type.UNLOGGED);
|
||||
while ((query = queue.poll()) != null) {
|
||||
flushedCount++;
|
||||
batch.add(query);
|
||||
}
|
||||
|
||||
executeQuery(id, batch, System.nanoTime());
|
||||
|
||||
metricCompleted.inc(flushedCount);
|
||||
|
||||
return flushedCount;
|
||||
}
|
||||
|
||||
public int handleFlush(String id) throws RepoException {
|
||||
long startTime = System.nanoTime();
|
||||
|
||||
int flushedCount = 0;
|
||||
List<ResultSetFuture> results = new ArrayList<>(queue.size());
|
||||
Statement query;
|
||||
while ((query = queue.poll()) != null) {
|
||||
flushedCount++;
|
||||
results.add(session.executeAsync(query));
|
||||
}
|
||||
|
||||
List<ListenableFuture<ResultSet>> futures = Futures.inCompletionOrder(results);
|
||||
|
||||
boolean cancel = false;
|
||||
Exception ex = null;
|
||||
for (ListenableFuture<ResultSet> future : futures) {
|
||||
if (cancel) {
|
||||
future.cancel(false);
|
||||
continue;
|
||||
}
|
||||
try {
|
||||
future.get();
|
||||
} catch (InterruptedException | ExecutionException e) {
|
||||
cancel = true;
|
||||
ex = e;
|
||||
}
|
||||
}
|
||||
|
||||
commitTimer.update(System.nanoTime() - startTime, TimeUnit.NANOSECONDS);
|
||||
|
||||
if (ex != null) {
|
||||
throw new RepoException(ex);
|
||||
}
|
||||
return flushedCount;
|
||||
}
|
||||
}
|
||||
@@ -1,77 +0,0 @@
package monasca.persister.repository.cassandra;

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.Statement;
import com.datastax.driver.core.WriteType;
import com.datastax.driver.core.exceptions.DriverException;
import com.datastax.driver.core.policies.RetryPolicy;

public class MonascaRetryPolicy implements RetryPolicy {

  private final int readAttempts;
  private final int writeAttempts;
  private final int unavailableAttempts;

  public MonascaRetryPolicy(int readAttempts, int writeAttempts, int unavailableAttempts) {
    super();
    this.readAttempts = readAttempts;
    this.writeAttempts = writeAttempts;
    this.unavailableAttempts = unavailableAttempts;
  }

  @Override
  public RetryDecision onReadTimeout(Statement stmnt, ConsistencyLevel cl, int requiredResponses,
      int receivedResponses, boolean dataReceived, int rTime) {
    if (dataReceived) {
      return RetryDecision.ignore();
    } else if (rTime < readAttempts) {
      return receivedResponses >= requiredResponses ? RetryDecision.retry(cl)
          : RetryDecision.rethrow();
    } else {
      return RetryDecision.rethrow();
    }
  }

  @Override
  public RetryDecision onWriteTimeout(Statement stmnt, ConsistencyLevel cl, WriteType wt,
      int requiredResponses, int receivedResponses, int wTime) {
    if (wTime >= writeAttempts) {
      return RetryDecision.rethrow();
    }
    return RetryDecision.retry(cl);
  }

  @Override
  public RetryDecision onUnavailable(Statement stmnt, ConsistencyLevel cl, int requiredResponses,
      int receivedResponses, int uTime) {
    if (uTime == 0) {
      return RetryDecision.tryNextHost(cl);
    } else if (uTime <= unavailableAttempts) {
      return RetryDecision.retry(cl);
    } else {
      return RetryDecision.rethrow();
    }
  }

  /**
   * {@inheritDoc}
   */
  @Override
  public RetryDecision onRequestError(Statement statement, ConsistencyLevel cl, DriverException e,
      int nbRetry) {
    return RetryDecision.tryNextHost(cl);
  }

  @Override
  public void init(Cluster cluster) {
    // nothing to do
  }

  @Override
  public void close() {
    // nothing to do
  }
}
@@ -1,89 +0,0 @@
/*
 * Copyright (c) 2015 Hewlett-Packard Development Company, L.P.
 *
 * Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except
 * in compliance with the License. You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software distributed under the License
 * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
 * or implied. See the License for the specific language governing permissions and limitations under
 * the License.
 */

package monasca.persister.repository.influxdb;

public final class Definition {

  private static final int MAX_DEFINITION_NAME_LENGTH = 255;
  private static final int MAX_TENANT_ID_LENGTH = 255;
  private static final int MAX_REGION_LENGTH = 255;

  public final String name;
  public final String tenantId;
  public final String region;

  public Definition(String name, String tenantId, String region) {

    if (name.length() > MAX_DEFINITION_NAME_LENGTH) {
      name = name.substring(0, MAX_DEFINITION_NAME_LENGTH);
    }
    this.name = name;

    if (tenantId.length() > MAX_TENANT_ID_LENGTH) {
      tenantId = tenantId.substring(0, MAX_TENANT_ID_LENGTH);
    }
    this.tenantId = tenantId;

    if (region.length() > MAX_REGION_LENGTH) {
      region = region.substring(0, MAX_REGION_LENGTH);
    }
    this.region = region;
  }

  public String getName() {
    return name;
  }

  public String getTenantId() {
    return tenantId;
  }

  public String getRegion() {
    return region;
  }

  @Override
  public boolean equals(Object o) {
    if (this == o) {
      return true;
    }
    if (!(o instanceof Definition)) {
      return false;
    }

    Definition that = (Definition) o;

    if (!name.equals(that.name)) {
      return false;
    }
    if (!tenantId.equals(that.tenantId)) {
      return false;
    }
    return region.equals(that.region);
  }

  @Override
  public int hashCode() {
    int result = name.hashCode();
    result = 31 * result + tenantId.hashCode();
    result = 31 * result + region.hashCode();
    return result;
  }
}
@@ -1,103 +0,0 @@
/*
 * Copyright (c) 2015 Hewlett-Packard Development Company, L.P.
 *
 * Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except
 * in compliance with the License. You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software distributed under the License
 * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
 * or implied. See the License for the specific language governing permissions and limitations under
 * the License.
 */

package monasca.persister.repository.influxdb;

import java.util.Map;
import java.util.Set;
import java.util.TreeMap;

import javax.annotation.Nullable;

public class Dimensions {

  private static final int MAX_DIMENSIONS_NAME_LENGTH = 255;
  private static final int MAX_DIMENSIONS_VALUE_LENGTH = 255;

  private final Map<String, String> dimensionsMap;

  public Dimensions(@Nullable Map<String, String> dimensionsMap) {

    this.dimensionsMap = new TreeMap<>();

    if (dimensionsMap != null) {

      for (String name : dimensionsMap.keySet()) {

        if (name != null && !name.isEmpty()) {

          String value = dimensionsMap.get(name);

          if (value != null && !value.isEmpty()) {

            if (name.length() > MAX_DIMENSIONS_NAME_LENGTH) {
              name = name.substring(0, MAX_DIMENSIONS_NAME_LENGTH);
            }

            if (value.length() > MAX_DIMENSIONS_VALUE_LENGTH) {
              value = value.substring(0, MAX_DIMENSIONS_VALUE_LENGTH);
            }

            this.dimensionsMap.put(name, value);
          }
        }
      }
    }
  }

  @Override
  public boolean equals(Object o) {

    if (this == o) {
      return true;
    }

    if (!(o instanceof Dimensions)) {
      return false;
    }

    Dimensions that = (Dimensions) o;

    return dimensionsMap.equals(that.dimensionsMap);
  }

  @Override
  public int hashCode() {
    return dimensionsMap.hashCode();
  }

  public Set<String> keySet() {
    return this.dimensionsMap.keySet();
  }

  public Set<Map.Entry<String, String>> entrySet() {
    return this.dimensionsMap.entrySet();
  }

  public String get(String key) {
    return this.dimensionsMap.get(key);
  }
}
@@ -1,67 +0,0 @@
/*
 * Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
 * implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package monasca.persister.repository.influxdb;

import monasca.common.model.event.AlarmStateTransitionedEvent;

import com.codahale.metrics.Meter;
import com.codahale.metrics.MetricRegistry;

import java.util.LinkedList;
import java.util.List;

import io.dropwizard.setup.Environment;

public abstract class InfluxAlarmRepo extends InfluxRepo<AlarmStateTransitionedEvent> {

  protected static final String ALARM_STATE_HISTORY_NAME = "alarm_state_history";

  protected final Meter alarmStateHistoryMeter;

  protected List<AlarmStateTransitionedEvent> alarmStateTransitionedEventList = new LinkedList<>();

  public InfluxAlarmRepo(final Environment env) {

    super(env);

    this.alarmStateHistoryMeter =
        env.metrics().meter(MetricRegistry.name(getClass(), "alarm_state_history-meter"));
  }

  @Override
  public void addToBatch(AlarmStateTransitionedEvent alarmStateTransitionedEvent, String id) {

    this.alarmStateTransitionedEventList.add(alarmStateTransitionedEvent);

    this.alarmStateHistoryMeter.mark();
  }

  @Override
  protected void clearBuffers() {
    this.alarmStateTransitionedEventList.clear();
  }

  @Override
  protected boolean isBufferEmpty() {
    return this.alarmStateTransitionedEventList.isEmpty();
  }
}
@@ -1,75 +0,0 @@
/*
 * Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
 * implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package monasca.persister.repository.influxdb;

import monasca.common.model.metric.Metric;
import monasca.common.model.metric.MetricEnvelope;

import java.util.Map;

import io.dropwizard.setup.Environment;

public abstract class InfluxMetricRepo extends InfluxRepo<MetricEnvelope> {

  protected final MeasurementBuffer measurementBuffer = new MeasurementBuffer();

  public InfluxMetricRepo(final Environment env) {
    super(env);
  }

  @Override
  public void addToBatch(MetricEnvelope metricEnvelope, String id) {

    Metric metric = metricEnvelope.metric;
    Map<String, Object> meta = metricEnvelope.meta;

    Definition definition =
        new Definition(
            metric.getName(),
            (String) meta.get("tenantId"),
            (String) meta.get("region"));

    Dimensions dimensions = new Dimensions(metric.getDimensions());

    Measurement measurement =
        new Measurement(
            metric.getTimestamp(),
            metric.getValue(),
            metric.getValueMeta());

    this.measurementBuffer.put(definition, dimensions, measurement);
  }

  @Override
  protected void clearBuffers() {
    this.measurementBuffer.clear();
  }

  @Override
  protected boolean isBufferEmpty() {
    return this.measurementBuffer.isEmpty();
  }
}
@@ -1,62 +0,0 @@
/*
 * Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
 * implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package monasca.persister.repository.influxdb;

import java.util.Map;

public class InfluxPoint {

  private final String measurement;
  private final Map<String, String> tags;
  private final String time;
  private final Map<String, Object> fields;
  private final String Precision = "ms";

  public InfluxPoint(
      final String measurement,
      final Map<String, String> tags,
      final String time,
      final Map<String, Object> fields) {

    this.measurement = measurement;
    this.tags = tags;
    this.time = time;
    this.fields = fields;
  }

  public String getMeasurement() {
    return measurement;
  }

  public Map<String, String> getTags() {
    return this.tags;
  }

  public String getTime() {
    return this.time;
  }

  public Map<String, Object> getFields() {
    return this.fields;
  }

  public String getPrecision() {
    return Precision;
  }
}
@@ -1,93 +0,0 @@
/*
 * Copyright (c) 2015 Hewlett-Packard Development Company, L.P.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
 * implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package monasca.persister.repository.influxdb;

import com.codahale.metrics.Timer;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import io.dropwizard.setup.Environment;
import monasca.persister.repository.Repo;
import monasca.persister.repository.RepoException;

public abstract class InfluxRepo<T> implements Repo<T> {

  private static final Logger logger = LoggerFactory.getLogger(InfluxRepo.class);

  protected final Timer flushTimer;

  public InfluxRepo(final Environment env) {

    this.flushTimer =
        env.metrics().timer(this.getClass().getName() + "." + "flush-timer");
  }

  @Override
  public int flush(String id) throws RepoException {

    if (isBufferEmpty()) {

      logger.debug("[{}]: no msg to be written to influxdb", id);
      logger.debug("[{}]: returning from flush without flushing", id);

      return 0;

    } else {

      return writeToRepo(id);
    }
  }

  private int writeToRepo(String id) throws RepoException {

    try {

      final Timer.Context context = flushTimer.time();

      final long startTime = System.currentTimeMillis();

      int msgWriteCnt = write(id);

      final long endTime = System.currentTimeMillis();

      context.stop();

      logger.debug("[{}]: writing to influxdb took {} ms", id, endTime - startTime);

      clearBuffers();

      return msgWriteCnt;

    } catch (Exception e) {

      logger.error("[{}]: failed to write to influxdb", id, e);

      throw e;
    }
  }

  protected abstract boolean isBufferEmpty();

  protected abstract int write(String id) throws RepoException;

  protected abstract void clearBuffers();
}
@@ -1,223 +0,0 @@
|
||||
/*
|
||||
* (C) Copyright 2014-2016 Hewlett Packard Enterprise Development Company LP
|
||||
*
|
||||
* Licensed under the Apache License, Version 2.0 (the "License");
|
||||
* you may not use this file except in compliance with the License.
|
||||
* You may obtain a copy of the License at
|
||||
*
|
||||
* http://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing, software
|
||||
* distributed under the License is distributed on an "AS IS" BASIS,
|
||||
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
|
||||
* implied.
|
||||
* See the License for the specific language governing permissions and
|
||||
* limitations under the License.
|
||||
*/
|
||||
|
||||
package monasca.persister.repository.influxdb;
|
||||
|
||||
import monasca.common.model.event.AlarmStateTransitionedEvent;
|
||||
|
||||
import com.google.inject.Inject;
|
||||
|
||||
import com.fasterxml.jackson.core.JsonProcessingException;
|
||||
import com.fasterxml.jackson.databind.ObjectMapper;
|
||||
import com.fasterxml.jackson.databind.PropertyNamingStrategy;
|
||||
|
||||
import org.joda.time.DateTime;
|
||||
import org.joda.time.DateTimeZone;
|
||||
import org.joda.time.format.DateTimeFormatter;
|
||||
import org.joda.time.format.ISODateTimeFormat;
|
||||
import org.slf4j.Logger;
|
||||
import org.slf4j.LoggerFactory;
|
||||
|
||||
import java.util.HashMap;
|
||||
import java.util.LinkedList;
|
||||
import java.util.List;
|
||||
import java.util.Map;
|
||||
|
||||
import io.dropwizard.setup.Environment;
|
||||
import monasca.persister.repository.RepoException;
|
||||
|
||||
public class InfluxV9AlarmRepo extends InfluxAlarmRepo {
|
||||
|
||||
private static final Logger logger = LoggerFactory.getLogger(InfluxV9AlarmRepo.class);
|
||||
|
||||
private final InfluxV9RepoWriter influxV9RepoWriter;
|
||||
|
||||
private final ObjectMapper objectMapper = new ObjectMapper();
|
||||
|
||||
private final DateTimeFormatter dateFormatter = ISODateTimeFormat.dateTime();
|
||||
|
||||
@Inject
|
||||
public InfluxV9AlarmRepo(
|
||||
final Environment env,
|
||||
final InfluxV9RepoWriter influxV9RepoWriter) {
|
||||
|
||||
super(env);
|
||||
|
||||
this.influxV9RepoWriter = influxV9RepoWriter;
|
||||
|
||||
this.objectMapper.setPropertyNamingStrategy(
|
||||
PropertyNamingStrategy.CAMEL_CASE_TO_LOWER_CASE_WITH_UNDERSCORES);
|
||||
}
|
||||
|
||||
@Override
|
||||
protected int write(String id) throws RepoException {
|
||||
|
||||
return this.influxV9RepoWriter.write(getInfluxPointArry(id), id);
|
||||
|
||||
}
|
||||
|
||||
private InfluxPoint[] getInfluxPointArry(String id) {
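// convert each buffered alarm state transition into an InfluxDB point, dropping events that lack a tenant or alarm id and substituting safe defaults for other missing fields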
|
||||
|
||||
List<InfluxPoint> influxPointList = new LinkedList<>();
|
||||
|
||||
for (AlarmStateTransitionedEvent event : this.alarmStateTransitionedEventList) {
|
||||
|
||||
Map<String, Object> valueMap = new HashMap<>();
|
||||
|
||||
if (event.tenantId == null) {
|
||||
|
||||
logger.error("[{}]: tenant id cannot be null. Dropping alarm state history event.", id);
|
||||
|
||||
continue;
|
||||
|
||||
} else {
|
||||
|
||||
valueMap.put("tenant_id", event.tenantId);
|
||||
}
|
||||
|
||||
if (event.alarmId == null) {
|
||||
|
||||
logger.error("[{}]: alarm id cannot be null. Dropping alarm state history event.", id);
|
||||
|
||||
continue;
|
||||
|
||||
} else {
|
||||
|
||||
valueMap.put("alarm_id", event.alarmId);
|
||||
}
|
||||
|
||||
if (event.metrics == null) {
|
||||
|
||||
logger.error("[{}]: metrics cannot be null. Settings metrics to empty JSON", id);
|
||||
|
||||
valueMap.put("metrics", "{}");
|
||||
|
||||
} else {
|
||||
|
||||
try {
|
||||
|
||||
valueMap.put("metrics", this.objectMapper.writeValueAsString(event.metrics));
|
||||
|
||||
} catch (JsonProcessingException e) {
|
||||
|
||||
logger.error("[{}]: failed to serialize metrics {}", id, event.metrics, e);
|
||||
logger.error("[{}]: setting metrics to empty JSON", id);
|
||||
|
||||
valueMap.put("metrics", "{}");
|
||||
|
||||
}
|
||||
}
|
||||
|
||||
if (event.oldState == null) {
|
||||
|
||||
logger.error("[{}]: old state cannot be null. Setting old state to empty string.", id);
|
||||
|
||||
valueMap.put("old_state", "");
|
||||
|
||||
} else {
|
||||
|
||||
valueMap.put("old_state", event.oldState);
|
||||
|
||||
}
|
||||
|
||||
if (event.newState == null) {
|
||||
|
||||
logger.error("[{}]: new state cannot be null. Setting new state to empty string.", id);
|
||||
|
||||
valueMap.put("new_state", "");
|
||||
|
||||
} else {
|
||||
|
||||
valueMap.put("new_state", event.newState);
|
||||
|
||||
}
|
||||
|
||||
if (event.link == null) {
|
||||
|
||||
valueMap.put("link", "");
|
||||
|
||||
} else {
|
||||
|
||||
valueMap.put("link", event.link);
|
||||
}
|
||||
|
||||
if (event.lifecycleState == null) {
|
||||
|
||||
valueMap.put("lifecycle_state", "");
|
||||
|
||||
} else {
|
||||
|
||||
valueMap.put("lifecycle_state", event.lifecycleState);
|
||||
}
|
||||
|
||||
if (event.subAlarms == null) {
|
||||
|
||||
logger.debug("[{}]: sub alarms is null. Setting sub alarms to empty JSON", id);
|
||||
|
||||
valueMap.put("sub_alarms", "[]");
|
||||
|
||||
} else {
|
||||
|
||||
try {
|
||||
|
||||
valueMap.put("sub_alarms", this.objectMapper.writeValueAsString(event.subAlarms));
|
||||
|
||||
} catch (JsonProcessingException e) {
|
||||
|
||||
logger.error("[{}]: failed to serialize sub alarms {}", id, event.subAlarms, e);
|
||||
logger.error("[{}]: Setting sub_alarms to empty JSON", id);
|
||||
|
||||
valueMap.put("sub_alarms", "[]");
|
||||
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
if (event.stateChangeReason == null) {
|
||||
|
||||
logger.error("[{}]: reason cannot be null. Setting reason to empty string.", id);
|
||||
|
||||
valueMap.put("reason", "");
|
||||
|
||||
} else {
|
||||
|
||||
valueMap.put("reason", event.stateChangeReason);
|
||||
}
|
||||
|
||||
valueMap.put("reason_data", "{}");
|
||||
|
||||
DateTime dateTime = new DateTime(event.timestamp, DateTimeZone.UTC);
|
||||
|
||||
String dateString = this.dateFormatter.print(dateTime);
|
||||
|
||||
Map<String, String> tags = new HashMap<>();
|
||||
|
||||
tags.put("tenant_id", event.tenantId);
|
||||
|
||||
tags.put("alarm_id", event.alarmId);
|
||||
|
||||
InfluxPoint
|
||||
influxPoint =
|
||||
new InfluxPoint(ALARM_STATE_HISTORY_NAME, tags, dateString, valueMap);
|
||||
|
||||
influxPointList.add(influxPoint);
|
||||
|
||||
}
|
||||
|
||||
return influxPointList.toArray(new InfluxPoint[influxPointList.size()]);
|
||||
}
|
||||
}
|
||||
@@ -1,133 +0,0 @@
|
||||
|
||||
/*
|
||||
* Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
|
||||
*
|
||||
* Licensed under the Apache License, Version 2.0 (the "License");
|
||||
* you may not use this file except in compliance with the License.
|
||||
* You may obtain a copy of the License at
|
||||
*
|
||||
* http://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing, software
|
||||
* distributed under the License is distributed on an "AS IS" BASIS,
|
||||
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
|
||||
* implied.
|
||||
* See the License for the specific language governing permissions and
|
||||
* limitations under the License.
|
||||
*/
|
||||
|
||||
package monasca.persister.repository.influxdb;
|
||||
|
||||
import com.google.inject.Inject;
|
||||
|
||||
import java.util.HashMap;
|
||||
import java.util.LinkedList;
|
||||
import java.util.List;
|
||||
import java.util.Map;
|
||||
|
||||
import io.dropwizard.setup.Environment;
|
||||
import monasca.persister.repository.RepoException;
|
||||
|
||||
public class InfluxV9MetricRepo extends InfluxMetricRepo {
|
||||
|
||||
private final InfluxV9RepoWriter influxV9RepoWriter;
|
||||
|
||||
@Inject
|
||||
public InfluxV9MetricRepo(
|
||||
final Environment env,
|
||||
final InfluxV9RepoWriter influxV9RepoWriter) {
|
||||
|
||||
super(env);
|
||||
|
||||
this.influxV9RepoWriter = influxV9RepoWriter;
|
||||
|
||||
}
|
||||
|
||||
@Override
|
||||
protected int write(String id) throws RepoException {
|
||||
|
||||
return this.influxV9RepoWriter.write(getInfluxPointArry(), id);
|
||||
|
||||
}
|
||||
|
||||
private InfluxPoint[] getInfluxPointArry() {
|
||||
|
||||
List<InfluxPoint> influxPointList = new LinkedList<>();
|
||||
|
||||
for (Map.Entry<Definition, Map<Dimensions, List<Measurement>>> definitionMapEntry
|
||||
: this.measurementBuffer.entrySet()) {
|
||||
|
||||
Definition definition = definitionMapEntry.getKey();
|
||||
Map<Dimensions, List<Measurement>> dimensionsMap = definitionMapEntry.getValue();
|
||||
|
||||
for (Map.Entry<Dimensions, List<Measurement>> dimensionsMapEntry
|
||||
: dimensionsMap.entrySet()) {
|
||||
|
||||
Dimensions dimensions = dimensionsMapEntry.getKey();
|
||||
List<Measurement> measurementList = dimensionsMapEntry.getValue();
|
||||
|
||||
Map<String, String> tagMap = buildTagMap(definition, dimensions);
|
||||
|
||||
for (Measurement measurement : measurementList) {
|
||||
|
||||
InfluxPoint
|
||||
influxPoint =
|
||||
new InfluxPoint(definition.getName(),
|
||||
tagMap,
|
||||
measurement.getISOFormattedTimeString(),
|
||||
buildValueMap(measurement));
|
||||
|
||||
influxPointList.add(influxPoint);
|
||||
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return influxPointList.toArray(new InfluxPoint[influxPointList.size()]);
|
||||
|
||||
}
|
||||
|
||||
private Map<String, Object> buildValueMap(Measurement measurement) {
|
||||
|
||||
Map<String, Object> valueMap = new HashMap<>();
|
||||
|
||||
valueMap.put("value", measurement.getValue());
|
||||
|
||||
String valueMetaJSONString = measurement.getValueMetaJSONString();
|
||||
|
||||
if (valueMetaJSONString == null || valueMetaJSONString.isEmpty()) {
|
||||
|
||||
valueMap.put("value_meta", "{}");
|
||||
|
||||
} else {
|
||||
|
||||
valueMap.put("value_meta", valueMetaJSONString);
|
||||
|
||||
}
|
||||
|
||||
return valueMap;
|
||||
|
||||
}
|
||||
|
||||
private Map<String, String> buildTagMap(Definition definition, Dimensions dimensions) {
|
||||
|
||||
Map<String,String> tagMap = new HashMap<>();
|
||||
|
||||
for (Map.Entry<String, String> dimensionsEntry : dimensions.entrySet()) {
|
||||
|
||||
String name = dimensionsEntry.getKey();
|
||||
|
||||
String value = dimensionsEntry.getValue();
|
||||
|
||||
tagMap.put(name, value);
|
||||
|
||||
}
|
||||
|
||||
tagMap.put("_tenant_id", definition.getTenantId());
|
||||
|
||||
tagMap.put("_region", definition.getRegion());
|
||||
|
||||
return tagMap;
|
||||
|
||||
}
|
||||
}
|
||||
@@ -1,242 +0,0 @@
|
||||
/*
|
||||
* Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
|
||||
*
|
||||
* Licensed under the Apache License, Version 2.0 (the "License");
|
||||
* you may not use this file except in compliance with the License.
|
||||
* You may obtain a copy of the License at
|
||||
*
|
||||
* http://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing, software
|
||||
* distributed under the License is distributed on an "AS IS" BASIS,
|
||||
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
|
||||
* implied.
|
||||
* See the License for the specific language governing permissions and
|
||||
* limitations under the License.
|
||||
*/
|
||||
|
||||
package monasca.persister.repository.influxdb;
|
||||
|
||||
import monasca.persister.configuration.PersisterConfig;
|
||||
import monasca.persister.repository.RepoException;
|
||||
|
||||
import com.google.inject.Inject;
|
||||
|
||||
import com.fasterxml.jackson.core.JsonProcessingException;
|
||||
import com.fasterxml.jackson.databind.ObjectMapper;
|
||||
|
||||
import org.apache.commons.codec.binary.Base64;
|
||||
import org.apache.http.Header;
|
||||
import org.apache.http.HeaderElement;
|
||||
import org.apache.http.HttpEntity;
|
||||
import org.apache.http.HttpException;
|
||||
import org.apache.http.HttpRequest;
|
||||
import org.apache.http.HttpRequestInterceptor;
|
||||
import org.apache.http.HttpResponse;
|
||||
import org.apache.http.HttpResponseInterceptor;
|
||||
import org.apache.http.HttpStatus;
|
||||
import org.apache.http.client.entity.EntityBuilder;
|
||||
import org.apache.http.client.entity.GzipDecompressingEntity;
|
||||
import org.apache.http.client.methods.HttpPost;
|
||||
import org.apache.http.entity.ContentType;
|
||||
import org.apache.http.entity.StringEntity;
|
||||
import org.apache.http.impl.client.CloseableHttpClient;
|
||||
import org.apache.http.impl.client.HttpClients;
|
||||
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;
|
||||
import org.apache.http.protocol.HttpContext;
|
||||
import org.apache.http.util.EntityUtils;
|
||||
import org.slf4j.Logger;
|
||||
import org.slf4j.LoggerFactory;
|
||||
|
||||
import java.io.IOException;
|
||||
import java.util.HashMap;
|
||||
|
||||
public class InfluxV9RepoWriter {
|
||||
|
||||
private static final Logger logger = LoggerFactory.getLogger(InfluxV9RepoWriter.class);
|
||||
|
||||
private final String influxName;
|
||||
private final String influxUrl;
|
||||
private final String influxCreds;
|
||||
private final String influxUser;
|
||||
private final String influxPass;
|
||||
private final String influxRetentionPolicy;
|
||||
private final boolean gzip;
|
||||
|
||||
private final CloseableHttpClient httpClient;
|
||||
|
||||
private final String baseAuthHeader;
|
||||
|
||||
private final ObjectMapper objectMapper = new ObjectMapper();
|
||||
|
||||
@Inject
|
||||
public InfluxV9RepoWriter(final PersisterConfig config) {
|
||||
|
||||
this.influxName = config.getInfluxDBConfiguration().getName();
|
||||
this.influxUrl = config.getInfluxDBConfiguration().getUrl() + "/write";
|
||||
this.influxUser = config.getInfluxDBConfiguration().getUser();
|
||||
this.influxPass = config.getInfluxDBConfiguration().getPassword();
|
||||
this.influxCreds = this.influxUser + ":" + this.influxPass;
|
||||
this.influxRetentionPolicy = config.getInfluxDBConfiguration().getRetentionPolicy();
|
||||
this.gzip = config.getInfluxDBConfiguration().getGzip();
|
||||
|
||||
this.baseAuthHeader = "Basic " + new String(Base64.encodeBase64(this.influxCreds.getBytes()));
|
||||
|
||||
PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
|
||||
cm.setMaxTotal(config.getInfluxDBConfiguration().getMaxHttpConnections());
|
||||
|
||||
if (this.gzip) {
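// when gzip is enabled, the interceptors below advertise gzip support and transparently decompress gzip-encoded responses; request bodies are gzip-compressed in write()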
|
||||
|
||||
this.httpClient =
|
||||
HttpClients.custom().setConnectionManager(cm)
|
||||
.addInterceptorFirst(new HttpRequestInterceptor() {
|
||||
|
||||
public void process(final HttpRequest request, final HttpContext context)
|
||||
throws HttpException, IOException {
|
||||
if (!request.containsHeader("Accept-Encoding")) {
|
||||
request.addHeader("Accept-Encoding", "gzip");
|
||||
}
|
||||
}
|
||||
}).addInterceptorFirst(new HttpResponseInterceptor() {
|
||||
|
||||
public void process(final HttpResponse response, final HttpContext context)
|
||||
throws HttpException, IOException {
|
||||
HttpEntity entity = response.getEntity();
|
||||
if (entity != null) {
|
||||
Header ceheader = entity.getContentEncoding();
|
||||
if (ceheader != null) {
|
||||
HeaderElement[] codecs = ceheader.getElements();
|
||||
for (int i = 0; i < codecs.length; i++) {
|
||||
if (codecs[i].getName().equalsIgnoreCase("gzip")) {
|
||||
response.setEntity(new GzipDecompressingEntity(response.getEntity()));
|
||||
return;
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
}).build();
|
||||
|
||||
} else {
|
||||
|
||||
this.httpClient = HttpClients.custom().setConnectionManager(cm).build();
|
||||
|
||||
}
|
||||
}
|
||||
|
||||
protected int write(final InfluxPoint[] influxPointArry, String id) throws RepoException {
|
||||
|
||||
HttpPost request = new HttpPost(this.influxUrl);
|
||||
|
||||
request.addHeader("Content-Type", "application/json");
|
||||
request.addHeader("Authorization", this.baseAuthHeader);
|
||||
|
||||
InfluxWrite
|
||||
influxWrite =
|
||||
new InfluxWrite(this.influxName, this.influxRetentionPolicy, influxPointArry,
|
||||
new HashMap<String, String>());
|
||||
|
||||
String jsonBody = getJsonBody(influxWrite);
|
||||
|
||||
if (this.gzip) {
|
||||
|
||||
logger.debug("[{}]: gzip set to true. sending gzip msg", id);
|
||||
|
||||
HttpEntity
|
||||
requestEntity =
|
||||
EntityBuilder
|
||||
.create()
|
||||
.setText(jsonBody)
|
||||
.setContentType(ContentType.APPLICATION_JSON)
|
||||
.setContentEncoding("UTF-8")
|
||||
.gzipCompress()
|
||||
.build();
|
||||
|
||||
request.setEntity(requestEntity);
|
||||
|
||||
request.addHeader("Content-Encoding", "gzip");
|
||||
|
||||
} else {
|
||||
|
||||
logger.debug("[{}]: gzip set to false. sending non-gzip msg", id);
|
||||
|
||||
StringEntity stringEntity = new StringEntity(jsonBody, "UTF-8");
|
||||
|
||||
request.setEntity(stringEntity);
|
||||
|
||||
}
|
||||
|
||||
try {
|
||||
|
||||
logger.debug("[{}]: sending {} points to influxdb {} at {}", id,
|
||||
influxPointArry.length, this.influxName, this.influxUrl);
|
||||
|
||||
HttpResponse response = null;
|
||||
|
||||
try {
|
||||
|
||||
response = this.httpClient.execute(request);
|
||||
|
||||
} catch (IOException e) {
|
||||
|
||||
throw new RepoException("failed to execute http request", e);
|
||||
}
|
||||
|
||||
int rc = response.getStatusLine().getStatusCode();
|
||||
|
||||
if (rc != HttpStatus.SC_OK && rc != HttpStatus.SC_NO_CONTENT) {
|
||||
|
||||
logger.error("[{}]: failed to send data to influxdb {} at {}: {}", id,
|
||||
this.influxName, this.influxUrl, String.valueOf(rc));
|
||||
|
||||
HttpEntity responseEntity = response.getEntity();
|
||||
|
||||
String responseString = null;
|
||||
|
||||
try {
|
||||
|
||||
responseString = EntityUtils.toString(responseEntity, "UTF-8");
|
||||
|
||||
} catch (IOException e) {
|
||||
|
||||
throw new RepoException("failed to read http response for non ok return code " + rc, e);
|
||||
|
||||
}
|
||||
|
||||
logger.error("[{}]: http response: {}", id, responseString);
|
||||
|
||||
throw new RepoException("failed to execute http request to influxdb " + rc + " - " + responseString);
|
||||
|
||||
} else {
|
||||
|
||||
logger.debug("[{}]: successfully sent {} points to influxdb {} at {}", id,
|
||||
influxPointArry.length, this.influxName, this.influxUrl);
|
||||
|
||||
return influxPointArry.length;
|
||||
|
||||
}
|
||||
|
||||
} finally {
|
||||
|
||||
request.releaseConnection();
|
||||
|
||||
}
|
||||
}
|
||||
|
||||
private String getJsonBody(InfluxWrite influxWrite) throws RepoException {
|
||||
|
||||
String json = null;
|
||||
|
||||
try {
|
||||
|
||||
json = this.objectMapper.writeValueAsString(influxWrite);
|
||||
|
||||
} catch (JsonProcessingException e) {
|
||||
|
||||
throw new RepoException("failed to serialize json", e);
|
||||
}
|
||||
|
||||
return json;
|
||||
}
|
||||
}
|
||||
@@ -1,53 +0,0 @@
/*
 * Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
 * implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package monasca.persister.repository.influxdb;

import java.util.Map;

public class InfluxWrite {

  private final String database;
  private final String retentionPolicy;
  private final InfluxPoint[] points;
  private final Map<String, String> tags;

  public InfluxWrite(final String database, final String retentionPolicy, final InfluxPoint[] points,
      final Map<String, String> tags) {
    this.database = database;
    this.retentionPolicy = retentionPolicy;
    this.points = points;
    this.tags = tags;
  }

  public String getDatabase() {
    return database;
  }

  public String getRetentionPolicy() {
    return retentionPolicy;
  }

  public Map<String, String> getTags() {
    return this.tags;
  }

  public InfluxPoint[] getPoints() {
    return points;
  }
}
@@ -1,94 +0,0 @@
/*
 * Copyright (c) 2015 Hewlett-Packard Development Company, L.P.
 *
 * Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except
 * in compliance with the License. You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software distributed under the License
 * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
 * or implied. See the License for the specific language governing permissions and limitations under
 * the License.
 */

package monasca.persister.repository.influxdb;

import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;

import org.joda.time.DateTime;
import org.joda.time.DateTimeZone;
import org.joda.time.format.DateTimeFormatter;
import org.joda.time.format.ISODateTimeFormat;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.util.Date;
import java.util.HashMap;
import java.util.Map;

import javax.annotation.Nullable;

public final class Measurement {

  private static final Logger logger = LoggerFactory.getLogger(Measurement.class);

  private final ObjectMapper objectMapper = new ObjectMapper();

  public final long time;
  public final double value;
  public final Map<String, String> valueMeta;

  public Measurement(final long time, final double value,
      final @Nullable Map<String, String> valueMeta) {

    this.time = time;
    this.value = value;
    this.valueMeta = valueMeta == null ? new HashMap<String, String>() : valueMeta;
  }

  public String getISOFormattedTimeString() {

    DateTimeFormatter dateFormatter = ISODateTimeFormat.dateTime();
    Date date = new Date(this.time);
    DateTime dateTime = new DateTime(date.getTime(), DateTimeZone.UTC);

    return dateFormatter.print(dateTime);
  }

  public long getTime() {
    return time;
  }

  public double getValue() {
    return value;
  }

  public Map<String, String> getValueMeta() {
    return valueMeta;
  }

  public String getValueMetaJSONString() {

    if (!this.valueMeta.isEmpty()) {

      try {

        return objectMapper.writeValueAsString(this.valueMeta);

      } catch (JsonProcessingException e) {

        logger.error("Failed to serialize value meta {}", this.valueMeta, e);
      }
    }

    return null;
  }
}
@@ -1,93 +0,0 @@
|
||||
/*
|
||||
* Copyright (c) 2015 Hewlett-Packard Development Company, L.P.
|
||||
*
|
||||
* Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except
|
||||
* in compliance with the License. You may obtain a copy of the License at
|
||||
*
|
||||
* http://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing, software distributed under the License
|
||||
* is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
|
||||
* or implied. See the License for the specific language governing permissions and limitations under
|
||||
* the License.
|
||||
*/
|
||||
|
||||
package monasca.persister.repository.influxdb;
|
||||
|
||||
import java.util.HashMap;
|
||||
import java.util.LinkedList;
|
||||
import java.util.List;
|
||||
import java.util.Map;
|
||||
import java.util.Set;
|
||||
|
||||
public class MeasurementBuffer {
|
||||
|
||||
private final Map<Definition, Map<Dimensions, List<Measurement>>>
|
||||
measurementMap = new HashMap<>();
|
||||
|
||||
public void put(Definition definition, Dimensions dimensions, Measurement measurement) {
|
||||
|
||||
Map<Dimensions, List<Measurement>> dimensionsMap = this.measurementMap.get(definition);
|
||||
|
||||
if (dimensionsMap == null) {
|
||||
|
||||
dimensionsMap = initDimensionsMap(definition, dimensions);
|
||||
|
||||
}
|
||||
|
||||
List<Measurement> measurementList = dimensionsMap.get(dimensions);
|
||||
|
||||
if (measurementList == null) {
|
||||
|
||||
measurementList = initMeasurementList(dimensionsMap, dimensions);
|
||||
|
||||
}
|
||||
|
||||
measurementList.add(measurement);
|
||||
|
||||
}
|
||||
|
||||
public Set<Map.Entry<Definition, Map<Dimensions, List<Measurement>>>> entrySet() {
|
||||
|
||||
return this.measurementMap.entrySet();
|
||||
|
||||
}
|
||||
|
||||
public void clear() {
|
||||
|
||||
this.measurementMap.clear();
|
||||
|
||||
}
|
||||
|
||||
public boolean isEmpty() {
|
||||
|
||||
return this.measurementMap.isEmpty();
|
||||
|
||||
}
|
||||
|
||||
private Map<Dimensions, List<Measurement>> initDimensionsMap(Definition definition,
|
||||
Dimensions dimensions) {
|
||||
|
||||
Map<Dimensions, List<Measurement>> dimensionsMap = new HashMap<>();
|
||||
|
||||
List<Measurement> measurementList = new LinkedList<>();
|
||||
|
||||
dimensionsMap.put(dimensions, measurementList);
|
||||
|
||||
this.measurementMap.put(definition, dimensionsMap);
|
||||
|
||||
return dimensionsMap;
|
||||
}
|
||||
|
||||
private List<Measurement> initMeasurementList(Map<Dimensions, List<Measurement>> dimensionsMap,
|
||||
Dimensions dimensions) {
|
||||
|
||||
List<Measurement> measurementList = new LinkedList<>();
|
||||
|
||||
dimensionsMap.put(dimensions, measurementList);
|
||||
|
||||
return measurementList;
|
||||
|
||||
}
|
||||
|
||||
}
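
The class above is essentially a two-level map from definition to dimension set to a list of measurements. As a hedged illustration, not code from this repository, the same buffer can be expressed compactly in Python (the class name is invented and the keys are assumed to be hashable):

```
# Sketch: the same two-level buffer, grouping measurements by definition and
# then by dimension set. defaultdict replaces the explicit init* helpers above.
from collections import defaultdict

class MeasurementBufferSketch:
    def __init__(self):
        self._buf = defaultdict(lambda: defaultdict(list))

    def put(self, definition, dimensions, measurement):
        # definition and dimensions must be hashable (e.g. tuples)
        self._buf[definition][dimensions].append(measurement)

    def entry_set(self):
        return self._buf.items()

    def clear(self):
        self._buf.clear()

    def is_empty(self):
        return not self._buf
```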
@@ -1,186 +0,0 @@
|
||||
/*
|
||||
* Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
|
||||
*
|
||||
* Licensed under the Apache License, Version 2.0 (the "License");
|
||||
* you may not use this file except in compliance with the License.
|
||||
* You may obtain a copy of the License at
|
||||
*
|
||||
* http://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing, software
|
||||
* distributed under the License is distributed on an "AS IS" BASIS,
|
||||
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
|
||||
* implied.
|
||||
* See the License for the specific language governing permissions and
|
||||
* limitations under the License.
|
||||
*/
|
||||
|
||||
package monasca.persister.repository.vertica;
|
||||
|
||||
import monasca.common.model.event.AlarmStateTransitionedEvent;
|
||||
import monasca.persister.configuration.PersisterConfig;
|
||||
import monasca.persister.repository.Repo;
|
||||
|
||||
import com.codahale.metrics.Timer;
|
||||
import com.fasterxml.jackson.core.JsonProcessingException;
|
||||
import com.fasterxml.jackson.databind.ObjectMapper;
|
||||
import com.fasterxml.jackson.databind.PropertyNamingStrategy;
|
||||
|
||||
import org.skife.jdbi.v2.DBI;
|
||||
import org.skife.jdbi.v2.PreparedBatch;
|
||||
import org.slf4j.Logger;
|
||||
import org.slf4j.LoggerFactory;
|
||||
|
||||
import java.security.NoSuchAlgorithmException;
|
||||
import java.sql.SQLException;
|
||||
import java.text.SimpleDateFormat;
|
||||
import java.util.Date;
|
||||
import java.util.TimeZone;
|
||||
|
||||
import javax.inject.Inject;
|
||||
|
||||
import io.dropwizard.setup.Environment;
|
||||
import monasca.persister.repository.RepoException;
|
||||
|
||||
public class VerticaAlarmRepo extends VerticaRepo implements Repo<AlarmStateTransitionedEvent> {
|
||||
|
||||
private static final Logger logger = LoggerFactory.getLogger(VerticaAlarmRepo.class);
|
||||
private final Environment environment;
|
||||
|
||||
private static final String SQL_INSERT_INTO_ALARM_HISTORY =
|
||||
"insert into MonAlarms.StateHistory (tenant_id, alarm_id, metrics, old_state, new_state, sub_alarms, reason, reason_data, time_stamp) "
|
||||
+ "values (:tenant_id, :alarm_id, :metrics, :old_state, :new_state, :sub_alarms, :reason, :reason_data, :time_stamp)";
|
||||
private static final int MAX_BYTES_PER_CHAR = 4;
|
||||
private static final int MAX_LENGTH_VARCHAR = 65000;
|
||||
|
||||
private PreparedBatch batch;
|
||||
private final Timer commitTimer;
|
||||
private final SimpleDateFormat simpleDateFormat;
|
||||
|
||||
private int msgCnt = 0;
|
||||
|
||||
private ObjectMapper objectMapper = new ObjectMapper();
|
||||
|
||||
@Inject
|
||||
public VerticaAlarmRepo(
|
||||
DBI dbi,
|
||||
PersisterConfig configuration,
|
||||
Environment environment) throws NoSuchAlgorithmException, SQLException {
|
||||
|
||||
super(dbi);
|
||||
|
||||
logger.debug("Instantiating " + this.getClass().getName());
|
||||
|
||||
this.environment = environment;
|
||||
|
||||
this.commitTimer =
|
||||
this.environment.metrics().timer(this.getClass().getName() + "." + "commit-timer");
|
||||
|
||||
this.objectMapper.setPropertyNamingStrategy(
|
||||
PropertyNamingStrategy.CAMEL_CASE_TO_LOWER_CASE_WITH_UNDERSCORES);
|
||||
|
||||
logger.debug("preparing batches...");
|
||||
|
||||
handle.getConnection().setAutoCommit(false);
|
||||
|
||||
batch = handle.prepareBatch(SQL_INSERT_INTO_ALARM_HISTORY);
|
||||
|
||||
handle.begin();
|
||||
|
||||
simpleDateFormat = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSS");
|
||||
simpleDateFormat.setTimeZone(TimeZone.getTimeZone("GMT-0"));
|
||||
|
||||
logger.debug(this.getClass().getName() + " is fully instantiated");
|
||||
|
||||
}
|
||||
|
||||
public void addToBatch(AlarmStateTransitionedEvent message, String id) {
|
||||
|
||||
String metricsString = getSerializedString(message.metrics, id);
|
||||
|
||||
// Validate metricsString does not exceed a sufficient maximum upper bound
|
||||
if (metricsString.length()*MAX_BYTES_PER_CHAR >= MAX_LENGTH_VARCHAR) {
|
||||
metricsString = "[]";
|
||||
logger.warn("length of metricsString for alarm ID {} exceeds max length of {}", message.alarmId, MAX_LENGTH_VARCHAR);
|
||||
}
|
||||
|
||||
String subAlarmsString = getSerializedString(message.subAlarms, id);
|
||||
|
||||
// Validate subAlarmsString does not exceed a sufficient maximum upper bound
|
||||
if (subAlarmsString.length()*MAX_BYTES_PER_CHAR >= MAX_LENGTH_VARCHAR) {
|
||||
subAlarmsString = "[]";
|
||||
logger.warn("length of subAlarmsString for alarm ID {} exceeds max length of {}", message.alarmId, MAX_LENGTH_VARCHAR);
|
||||
}
|
||||
|
||||
String timeStamp = simpleDateFormat.format(new Date(message.timestamp));
|
||||
|
||||
batch.add()
|
||||
.bind("tenant_id", message.tenantId)
|
||||
.bind("alarm_id", message.alarmId)
|
||||
.bind("metrics", metricsString)
|
||||
.bind("old_state", message.oldState.name())
|
||||
.bind("new_state", message.newState.name())
|
||||
.bind("sub_alarms", subAlarmsString)
|
||||
.bind("reason", message.stateChangeReason)
|
||||
.bind("reason_data", "{}")
|
||||
.bind("time_stamp", timeStamp);
|
||||
|
||||
this.msgCnt++;
|
||||
}
|
||||
|
||||
private String getSerializedString(Object o, String id) {
|
||||
|
||||
try {
|
||||
|
||||
return this.objectMapper.writeValueAsString(o);
|
||||
|
||||
} catch (JsonProcessingException e) {
|
||||
|
||||
logger.error("[[}]: failed to serialize object {}", id, o, e);
|
||||
|
||||
return "";
|
||||
|
||||
}
|
||||
}
|
||||
|
||||
public int flush(String id) throws RepoException {
|
||||
|
||||
try {
|
||||
|
||||
commitBatch(id);
|
||||
|
||||
int commitCnt = this.msgCnt;
|
||||
|
||||
this.msgCnt = 0;
|
||||
|
||||
return commitCnt;
|
||||
|
||||
} catch (Exception e) {
|
||||
|
||||
logger.error("[{}]: failed to write alarms to vertica", id, e);
|
||||
|
||||
throw new RepoException("failed to commit batch to vertica", e);
|
||||
|
||||
}
|
||||
}
|
||||
|
||||
private void commitBatch(String id) {
|
||||
|
||||
long startTime = System.currentTimeMillis();
|
||||
|
||||
Timer.Context context = commitTimer.time();
|
||||
|
||||
batch.execute();
|
||||
|
||||
handle.commit();
|
||||
|
||||
handle.begin();
|
||||
|
||||
context.stop();
|
||||
|
||||
long endTime = System.currentTimeMillis();
|
||||
|
||||
logger.debug("[{}]: committing batch took {} ms", id, endTime - startTime);
|
||||
|
||||
}
|
||||
}
|
||||
@@ -1,658 +0,0 @@
|
||||
/*
|
||||
* (C) Copyright 2014-2016 Hewlett Packard Enterprise Development LP
|
||||
*
|
||||
* (C) Copyright 2017 SUSE LLC.
|
||||
*
|
||||
* Licensed under the Apache License, Version 2.0 (the "License");
|
||||
* you may not use this file except in compliance with the License.
|
||||
* You may obtain a copy of the License at
|
||||
*
|
||||
* http://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing, software
|
||||
* distributed under the License is distributed on an "AS IS" BASIS,
|
||||
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
|
||||
* implied.
|
||||
* See the License for the specific language governing permissions and
|
||||
* limitations under the License.
|
||||
*/
|
||||
|
||||
package monasca.persister.repository.vertica;
|
||||
|
||||
import com.google.common.base.Stopwatch;
|
||||
import com.google.common.cache.Cache;
|
||||
import com.google.common.cache.CacheBuilder;
|
||||
|
||||
import com.codahale.metrics.Meter;
|
||||
import com.codahale.metrics.Timer;
|
||||
import com.fasterxml.jackson.core.JsonProcessingException;
|
||||
import com.fasterxml.jackson.databind.ObjectMapper;
|
||||
|
||||
import org.apache.commons.codec.digest.DigestUtils;
|
||||
import org.skife.jdbi.v2.DBI;
|
||||
import org.skife.jdbi.v2.PreparedBatch;
|
||||
import org.slf4j.Logger;
|
||||
import org.slf4j.LoggerFactory;
|
||||
|
||||
import java.security.NoSuchAlgorithmException;
|
||||
import java.sql.SQLException;
|
||||
import java.text.SimpleDateFormat;
|
||||
import java.util.Date;
|
||||
import java.util.HashSet;
|
||||
import java.util.Map;
|
||||
import java.util.Set;
|
||||
import java.util.TimeZone;
|
||||
import java.util.TreeMap;
|
||||
|
||||
import javax.inject.Inject;
|
||||
|
||||
import io.dropwizard.setup.Environment;
|
||||
import monasca.common.model.metric.Metric;
|
||||
import monasca.common.model.metric.MetricEnvelope;
|
||||
import monasca.persister.configuration.PersisterConfig;
|
||||
import monasca.persister.repository.Repo;
|
||||
import monasca.persister.repository.RepoException;
|
||||
import monasca.persister.repository.Sha1HashId;
|
||||
|
||||
public class VerticaMetricRepo extends VerticaRepo implements Repo<MetricEnvelope> {
|
||||
|
||||
private static final Logger logger = LoggerFactory.getLogger(VerticaMetricRepo.class);
|
||||
|
||||
public static final int MAX_COLUMN_LENGTH = 255;
|
||||
|
||||
public static final int MAX_VALUE_META_LENGTH = 2048;
|
||||
|
||||
private final SimpleDateFormat simpleDateFormat;
|
||||
|
||||
private static final String TENANT_ID = "tenantId";
|
||||
private static final String REGION = "region";
|
||||
|
||||
private final Environment environment;
|
||||
|
||||
private final Cache<Sha1HashId, Sha1HashId> definitionsIdCache;
|
||||
private final Cache<Sha1HashId, Sha1HashId> dimensionsIdCache;
|
||||
private final Cache<Sha1HashId, Sha1HashId> definitionDimensionsIdCache;
|
||||
|
||||
private final Set<Sha1HashId> definitionIdSet = new HashSet<>();
|
||||
private final Set<Sha1HashId> dimensionIdSet = new HashSet<>();
|
||||
private final Set<Sha1HashId> definitionDimensionsIdSet = new HashSet<>();
|
||||
|
||||
private int measurementCnt = 0;
|
||||
|
||||
private final ObjectMapper objectMapper = new ObjectMapper();
|
||||
|
||||
private static final String SQL_INSERT_INTO_METRICS =
|
||||
"insert into MonMetrics.measurements (definition_dimensions_id, time_stamp, value, value_meta) "
|
||||
+ "values (:definition_dimension_id, :time_stamp, :value, :value_meta)";
|
||||
|
||||
private static final String DEFINITIONS_TEMP_STAGING_TABLE = "(" + " id BINARY(20) NOT NULL,"
|
||||
+ " name VARCHAR(255) NOT NULL," + " tenant_id VARCHAR(255) NOT NULL,"
|
||||
+ " region VARCHAR(255) NOT NULL" + ")";
|
||||
|
||||
private static final String DIMENSIONS_TEMP_STAGING_TABLE = "("
|
||||
+ " dimension_set_id BINARY(20) NOT NULL," + " name VARCHAR(255) NOT NULL,"
|
||||
+ " value VARCHAR(255) NOT NULL" + ")";
|
||||
|
||||
private static final String DEFINITIONS_DIMENSIONS_TEMP_STAGING_TABLE = "("
|
||||
+ " id BINARY(20) NOT NULL," + " definition_id BINARY(20) NOT NULL, "
|
||||
+ " dimension_set_id BINARY(20) NOT NULL " + ")";
|
||||
|
||||
private PreparedBatch metricsBatch;
|
||||
private PreparedBatch stagedDefinitionsBatch;
|
||||
private PreparedBatch stagedDimensionsBatch;
|
||||
private PreparedBatch stagedDefinitionDimensionsBatch;
|
||||
|
||||
private final String definitionsTempStagingTableName;
|
||||
private final String dimensionsTempStagingTableName;
|
||||
private final String definitionDimensionsTempStagingTableName;
|
||||
|
||||
private final String definitionsTempStagingTableInsertStmt;
|
||||
private final String dimensionsTempStagingTableInsertStmt;
|
||||
private final String definitionDimensionsTempStagingTableInsertStmt;
|
||||
|
||||
private final Timer commitTimer;
|
||||
|
||||
public final Meter measurementMeter;
|
||||
public final Meter definitionCacheMissMeter;
|
||||
public final Meter dimensionCacheMissMeter;
|
||||
public final Meter definitionDimensionCacheMissMeter;
|
||||
public final Meter definitionCacheHitMeter;
|
||||
public final Meter dimensionCacheHitMeter;
|
||||
public final Meter definitionDimensionCacheHitMeter;
|
||||
|
||||
@Inject
|
||||
public VerticaMetricRepo(
|
||||
DBI dbi,
|
||||
PersisterConfig configuration,
|
||||
Environment environment) throws NoSuchAlgorithmException, SQLException {
|
||||
|
||||
super(dbi);
|
||||
|
||||
logger.debug("Instantiating " + this.getClass().getName());
|
||||
|
||||
simpleDateFormat = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSS");
|
||||
simpleDateFormat.setTimeZone(TimeZone.getTimeZone("GMT-0"));
|
||||
|
||||
this.environment = environment;
|
||||
|
||||
this.commitTimer =
|
||||
this.environment.metrics().timer(this.getClass().getName() + "." + "commit-timer");
|
||||
|
||||
this.measurementMeter =
|
||||
this.environment.metrics()
|
||||
.meter(this.getClass().getName() + "." + "measurement-meter");
|
||||
|
||||
this.definitionCacheMissMeter =
|
||||
this.environment.metrics()
|
||||
.meter(this.getClass().getName() + "." + "definition-cache-miss-meter");
|
||||
|
||||
this.dimensionCacheMissMeter =
|
||||
this.environment.metrics()
|
||||
.meter(this.getClass().getName() + "." + "dimension-cache-miss-meter");
|
||||
|
||||
this.definitionDimensionCacheMissMeter =
|
||||
this.environment.metrics()
|
||||
.meter(this.getClass().getName() + "." + "definition-dimension-cache-miss-meter");
|
||||
|
||||
this.definitionCacheHitMeter =
|
||||
this.environment.metrics()
|
||||
.meter(this.getClass().getName() + "." + "definition-cache-hit-meter");
|
||||
|
||||
this.dimensionCacheHitMeter =
|
||||
this.environment.metrics()
|
||||
.meter(this.getClass().getName() + "." + "dimension-cache-hit-meter");
|
||||
|
||||
this.definitionDimensionCacheHitMeter =
|
||||
this.environment.metrics()
|
||||
.meter(this.getClass().getName() + "." + "definition-dimension-cache-hit-meter");
|
||||
|
||||
definitionsIdCache =
|
||||
CacheBuilder.newBuilder()
|
||||
.maximumSize(configuration.getVerticaMetricRepoConfig().getMaxCacheSize())
|
||||
.build();
|
||||
|
||||
dimensionsIdCache =
|
||||
CacheBuilder.newBuilder()
|
||||
.maximumSize(configuration.getVerticaMetricRepoConfig().getMaxCacheSize())
|
||||
.build();
|
||||
|
||||
definitionDimensionsIdCache =
|
||||
CacheBuilder.newBuilder()
|
||||
.maximumSize(configuration.getVerticaMetricRepoConfig().getMaxCacheSize())
|
||||
.build();
|
||||
|
||||
logger.info("preparing database and building sql statements...");
|
||||
|
||||
String uniqueName = this.toString().replaceAll("\\.", "_").replaceAll("\\@", "_");
|
||||
this.definitionsTempStagingTableName = uniqueName + "_staged_definitions";
|
||||
logger.debug("temp staging definitions table name: " + definitionsTempStagingTableName);
|
||||
|
||||
this.dimensionsTempStagingTableName = uniqueName + "_staged_dimensions";
|
||||
logger.debug("temp staging dimensions table name:" + dimensionsTempStagingTableName);
|
||||
|
||||
this.definitionDimensionsTempStagingTableName = uniqueName + "_staged_definitions_dimensions";
|
||||
logger.debug("temp staging definitionDimensions table name: "
|
||||
+ definitionDimensionsTempStagingTableName);
|
||||
|
||||
this.definitionsTempStagingTableInsertStmt =
|
||||
"merge into MonMetrics.Definitions tgt"
|
||||
+ " using " + this.definitionsTempStagingTableName + " src"
|
||||
+ " on src.id = tgt.id"
|
||||
+ " when not matched then insert values(src.id, src.name, src.tenant_id, src.region)";
|
||||
|
||||
logger.debug("definitions insert stmt: " + definitionsTempStagingTableInsertStmt);
|
||||
|
||||
this.dimensionsTempStagingTableInsertStmt =
|
||||
"merge into MonMetrics.Dimensions tgt "
|
||||
+ " using " + this.dimensionsTempStagingTableName + " src"
|
||||
+ " on src.dimension_set_id = tgt.dimension_set_id"
|
||||
+ " and src.name = tgt.name"
|
||||
+ " and src.value = tgt.value"
|
||||
+ " when not matched then insert values(src.dimension_set_id, src.name, src.value)";
|
||||
|
||||
logger.debug("dimensions insert stmt: " + definitionsTempStagingTableInsertStmt);
|
||||
|
||||
this.definitionDimensionsTempStagingTableInsertStmt =
|
||||
"merge into MonMetrics.definitionDimensions tgt"
|
||||
+ " using " + this.definitionDimensionsTempStagingTableName + " src"
|
||||
+ " on src.id = tgt.id"
|
||||
+ " when not matched then insert values(src.id, src.definition_id, src.dimension_set_id)";
|
||||
|
||||
logger.debug("definitionDimensions insert stmt: "
|
||||
+ definitionDimensionsTempStagingTableInsertStmt);
|
||||
|
||||
logger.debug("dropping temp staging tables if they already exist...");
|
||||
handle.execute("drop table if exists " + definitionsTempStagingTableName + " cascade");
|
||||
handle.execute("drop table if exists " + dimensionsTempStagingTableName + " cascade");
|
||||
handle.execute("drop table if exists " + definitionDimensionsTempStagingTableName + " cascade");
|
||||
|
||||
logger.debug("creating temp staging tables...");
|
||||
handle.execute("create local temp table " + definitionsTempStagingTableName + " "
|
||||
+ DEFINITIONS_TEMP_STAGING_TABLE + " on commit preserve rows");
|
||||
handle.execute("create local temp table " + dimensionsTempStagingTableName + " "
|
||||
+ DIMENSIONS_TEMP_STAGING_TABLE + " on commit preserve rows");
|
||||
handle.execute("create local temp table " + definitionDimensionsTempStagingTableName + " "
|
||||
+ DEFINITIONS_DIMENSIONS_TEMP_STAGING_TABLE + " on commit preserve rows");
|
||||
|
||||
handle.getConnection().setAutoCommit(false);
|
||||
|
||||
logger.debug("preparing batches...");
|
||||
metricsBatch = handle.prepareBatch(SQL_INSERT_INTO_METRICS);
|
||||
stagedDefinitionsBatch =
|
||||
handle.prepareBatch("insert into " + definitionsTempStagingTableName
|
||||
+ " values (:id, :name, :tenant_id, :region)");
|
||||
stagedDimensionsBatch =
|
||||
handle.prepareBatch("insert into " + dimensionsTempStagingTableName
|
||||
+ " values (:dimension_set_id, :name, :value)");
|
||||
stagedDefinitionDimensionsBatch =
|
||||
handle.prepareBatch("insert into " + definitionDimensionsTempStagingTableName
|
||||
+ " values (:id, :definition_id, :dimension_set_id)");
|
||||
|
||||
logger.debug("opening transaction...");
|
||||
handle.begin();
|
||||
|
||||
logger.debug("completed database preparations");
|
||||
|
||||
logger.debug(this.getClass().getName() + " is fully instantiated");
|
||||
}
|
||||
|
||||
@Override
|
||||
public void addToBatch(MetricEnvelope metricEnvelope, String id) {
|
||||
|
||||
Metric metric = metricEnvelope.metric;
|
||||
Map<String, Object> metaMap = metricEnvelope.meta;
|
||||
|
||||
String tenantId = getMeta(TENANT_ID, metric, metaMap, id);
|
||||
|
||||
String region = getMeta(REGION, metric, metaMap, id);
|
||||
|
||||
// Add the definition to the batch.
|
||||
StringBuilder definitionIdStringToHash =
|
||||
new StringBuilder(trunc(metric.getName(), MAX_COLUMN_LENGTH, id));
|
||||
|
||||
definitionIdStringToHash.append(trunc(tenantId, MAX_COLUMN_LENGTH, id));
|
||||
|
||||
definitionIdStringToHash.append(trunc(region, MAX_COLUMN_LENGTH, id));
|
||||
|
||||
byte[] definitionIdSha1Hash = DigestUtils.sha(definitionIdStringToHash.toString());
|
||||
|
||||
Sha1HashId definitionSha1HashId = new Sha1HashId(definitionIdSha1Hash);
|
||||
|
||||
addDefinitionToBatch(definitionSha1HashId, trunc(metric.getName(), MAX_COLUMN_LENGTH, id),
|
||||
trunc(tenantId, MAX_COLUMN_LENGTH, id),
|
||||
trunc(region, MAX_COLUMN_LENGTH, id), id);
|
||||
|
||||
// Calculate dimensions sha1 hash id.
|
||||
StringBuilder dimensionIdStringToHash = new StringBuilder();
|
||||
|
||||
Map<String, String> preppedDimMap = prepDimensions(metric.getDimensions(), id);
|
||||
|
||||
for (Map.Entry<String, String> entry : preppedDimMap.entrySet()) {
|
||||
|
||||
dimensionIdStringToHash.append(entry.getKey());
|
||||
|
||||
dimensionIdStringToHash.append(entry.getValue());
|
||||
}
|
||||
|
||||
byte[] dimensionIdSha1Hash = DigestUtils.sha(dimensionIdStringToHash.toString());
|
||||
|
||||
Sha1HashId dimensionsSha1HashId = new Sha1HashId(dimensionIdSha1Hash);
|
||||
|
||||
// Add the dimension name/values to the batch.
|
||||
addDimensionsToBatch(dimensionsSha1HashId, preppedDimMap, id);
|
||||
|
||||
// Add the definition dimensions to the batch.
|
||||
StringBuilder definitionDimensionsIdStringToHash =
|
||||
new StringBuilder(definitionSha1HashId.toHexString());
|
||||
|
||||
definitionDimensionsIdStringToHash.append(dimensionsSha1HashId.toHexString());
|
||||
|
||||
byte[] definitionDimensionsIdSha1Hash =
|
||||
DigestUtils.sha(definitionDimensionsIdStringToHash.toString());
|
||||
|
||||
Sha1HashId definitionDimensionsSha1HashId = new Sha1HashId(definitionDimensionsIdSha1Hash);
|
||||
|
||||
addDefinitionDimensionToBatch(definitionDimensionsSha1HashId, definitionSha1HashId,
|
||||
dimensionsSha1HashId, id);
|
||||
|
||||
// Add the measurement to the batch.
|
||||
String timeStamp = simpleDateFormat.format(new Date(metric.getTimestamp()));
|
||||
|
||||
double value = metric.getValue();
|
||||
|
||||
addMetricToBatch(definitionDimensionsSha1HashId, timeStamp, value, metric.getValueMeta(), id);
|
||||
|
||||
}
|
||||
|
||||
private String getMeta(String name, Metric metric, Map<String, Object> meta, String id) {
|
||||
|
||||
if (meta.containsKey(name)) {
|
||||
|
||||
return (String) meta.get(name);
|
||||
|
||||
} else {
|
||||
|
||||
logger.warn(
    "[{}]: failed to find {} in message envelope meta data. metric message may be malformed. "
        + "setting {} to empty string.", id, name, name);
|
||||
|
||||
logger.warn("[{}]: metric: {}", id, metric.toString());
|
||||
|
||||
logger.warn("[{}]: meta: {}", id, meta.toString());
|
||||
|
||||
return "";
|
||||
}
|
||||
}
|
||||
|
||||
public void addMetricToBatch(Sha1HashId defDimsId, String timeStamp, double value,
|
||||
Map<String, String> valueMeta, String id) {
|
||||
|
||||
String valueMetaString = getValueMetaString(valueMeta, id);
|
||||
|
||||
logger.debug("[{}]: adding metric to batch: defDimsId: {}, time: {}, value: {}, value meta {}",
|
||||
id, defDimsId.toHexString(), timeStamp, value, valueMetaString);
|
||||
|
||||
metricsBatch.add()
|
||||
.bind("definition_dimension_id", defDimsId.getSha1Hash())
|
||||
.bind("time_stamp", timeStamp)
|
||||
.bind("value", value)
|
||||
.bind("value_meta", valueMetaString);
|
||||
|
||||
this.measurementCnt++;
|
||||
|
||||
measurementMeter.mark();
|
||||
}
|
||||
|
||||
private String getValueMetaString(Map<String, String> valueMeta, String id) {
|
||||
|
||||
String valueMetaString = "";
|
||||
|
||||
if (valueMeta != null && !valueMeta.isEmpty()) {
|
||||
|
||||
try {
|
||||
|
||||
valueMetaString = this.objectMapper.writeValueAsString(valueMeta);
|
||||
if (valueMetaString.length() > MAX_VALUE_META_LENGTH) {
|
||||
logger
|
||||
.error("[{}]: Value meta length {} longer than maximum {}, dropping value meta",
|
||||
id, valueMetaString.length(), MAX_VALUE_META_LENGTH);
|
||||
return "";
|
||||
}
|
||||
|
||||
} catch (JsonProcessingException e) {
|
||||
|
||||
logger
|
||||
.error("[{}]: Failed to serialize value meta {}, dropping value meta from measurement",
|
||||
id, valueMeta);
|
||||
}
|
||||
}
|
||||
|
||||
return valueMetaString;
|
||||
}
|
||||
|
||||
private void addDefinitionToBatch(Sha1HashId defId, String name, String tenantId, String region, String id) {
|
||||
|
||||
if (definitionsIdCache.getIfPresent(defId) == null) {
|
||||
|
||||
definitionCacheMissMeter.mark();
|
||||
|
||||
if (!definitionIdSet.contains(defId)) {
|
||||
|
||||
logger.debug("[{}]: adding definition to batch: defId: {}, name: {}, tenantId: {}, region: {}",
|
||||
id, defId.toHexString(), name, tenantId, region);
|
||||
|
||||
stagedDefinitionsBatch.add()
|
||||
.bind("id", defId.getSha1Hash())
|
||||
.bind("name", name)
|
||||
.bind("tenant_id", tenantId)
|
||||
.bind("region", region);
|
||||
|
||||
definitionIdSet.add(defId);
|
||||
|
||||
}
|
||||
|
||||
} else {
|
||||
|
||||
definitionCacheHitMeter.mark();
|
||||
|
||||
}
|
||||
}
|
||||
|
||||
private void addDimensionsToBatch(Sha1HashId dimSetId, Map<String, String> dimMap, String id) {
|
||||
|
||||
if (dimensionsIdCache.getIfPresent(dimSetId) == null) {
|
||||
|
||||
dimensionCacheMissMeter.mark();
|
||||
|
||||
if (!dimensionIdSet.contains(dimSetId)) {
|
||||
|
||||
for (Map.Entry<String, String> entry : dimMap.entrySet()) {
|
||||
|
||||
String name = entry.getKey();
|
||||
String value = entry.getValue();
|
||||
|
||||
logger.debug(
|
||||
"[{}]: adding dimension to batch: dimSetId: {}, name: {}, value: {}",
|
||||
id, dimSetId.toHexString(), name, value);
|
||||
|
||||
stagedDimensionsBatch.add()
|
||||
.bind("dimension_set_id", dimSetId.getSha1Hash())
|
||||
.bind("name", name)
|
||||
.bind("value", value);
|
||||
}
|
||||
|
||||
dimensionIdSet.add(dimSetId);
|
||||
}
|
||||
|
||||
} else {
|
||||
|
||||
dimensionCacheHitMeter.mark();
|
||||
|
||||
}
|
||||
}
|
||||
|
||||
private void addDefinitionDimensionToBatch(Sha1HashId defDimsId, Sha1HashId defId,
|
||||
Sha1HashId dimId, String id) {
|
||||
|
||||
if (definitionDimensionsIdCache.getIfPresent(defDimsId) == null) {
|
||||
|
||||
definitionDimensionCacheMissMeter.mark();
|
||||
|
||||
if (!definitionDimensionsIdSet.contains(defDimsId)) {
|
||||
|
||||
logger.debug("[{}]: adding definitionDimension to batch: defDimsId: {}, defId: {}, dimId: {}",
|
||||
id, defDimsId.toHexString(), defId, dimId);
|
||||
|
||||
stagedDefinitionDimensionsBatch.add()
|
||||
.bind("id", defDimsId.getSha1Hash())
|
||||
.bind("definition_id", defId.getSha1Hash())
|
||||
.bind("dimension_set_id", dimId.getSha1Hash());
|
||||
|
||||
definitionDimensionsIdSet.add(defDimsId);
|
||||
}
|
||||
|
||||
} else {
|
||||
|
||||
definitionDimensionCacheHitMeter.mark();
|
||||
|
||||
}
|
||||
}
|
||||
|
||||
@Override
|
||||
public int flush(String id) throws RepoException {
|
||||
|
||||
try {
|
||||
|
||||
Stopwatch swOuter = Stopwatch.createStarted();
|
||||
|
||||
Timer.Context context = commitTimer.time();
|
||||
|
||||
executeBatches(id);
|
||||
|
||||
writeRowsFromTempStagingTablesToPermTables(id);
|
||||
|
||||
Stopwatch swInner = Stopwatch.createStarted();
|
||||
|
||||
handle.commit();
|
||||
swInner.stop();
|
||||
|
||||
logger.debug("[{}]: committing transaction took: {}", id, swInner);
|
||||
|
||||
swInner.reset().start();
|
||||
handle.begin();
|
||||
swInner.stop();
|
||||
|
||||
logger.debug("[{}]: beginning new transaction took: {}", id, swInner);
|
||||
|
||||
context.stop();
|
||||
|
||||
swOuter.stop();
|
||||
|
||||
logger.debug("[{}]: total time for writing measurements, definitions, and dimensions to vertica took {}",
|
||||
id, swOuter);
|
||||
|
||||
updateIdCaches(id);
|
||||
|
||||
int commitCnt = this.measurementCnt;
|
||||
|
||||
this.measurementCnt = 0;
|
||||
|
||||
return commitCnt;
|
||||
|
||||
} catch (Exception e) {
|
||||
|
||||
logger.error("[{}]: failed to write measurements, definitions, and dimensions to vertica", id,
|
||||
e);
|
||||
|
||||
throw new RepoException("failed to commit batch to vertica", e);
|
||||
|
||||
}
|
||||
}
|
||||
|
||||
private void executeBatches(String id) {
|
||||
|
||||
Stopwatch sw = Stopwatch.createStarted();
|
||||
|
||||
metricsBatch.execute();
|
||||
|
||||
stagedDefinitionsBatch.execute();
|
||||
|
||||
stagedDimensionsBatch.execute();
|
||||
|
||||
stagedDefinitionDimensionsBatch.execute();
|
||||
|
||||
sw.stop();
|
||||
|
||||
logger.debug("[{}]: executing batches took {}: ", id, sw);
|
||||
|
||||
}
|
||||
|
||||
private void updateIdCaches(String id) {
|
||||
|
||||
Stopwatch sw = Stopwatch.createStarted();
|
||||
|
||||
for (Sha1HashId defId : definitionIdSet) {
|
||||
|
||||
definitionsIdCache.put(defId, defId);
|
||||
}
|
||||
|
||||
for (Sha1HashId dimId : dimensionIdSet) {
|
||||
|
||||
dimensionsIdCache.put(dimId, dimId);
|
||||
}
|
||||
|
||||
for (Sha1HashId defDimsId : definitionDimensionsIdSet) {
|
||||
|
||||
definitionDimensionsIdCache.put(defDimsId, defDimsId);
|
||||
}
|
||||
|
||||
clearTempCaches();
|
||||
|
||||
sw.stop();
|
||||
|
||||
logger.debug("[{}]: clearing temp caches took: {}", id, sw);
|
||||
|
||||
}
|
||||
|
||||
private void writeRowsFromTempStagingTablesToPermTables(String id) {
|
||||
|
||||
Stopwatch sw = Stopwatch.createStarted();
|
||||
|
||||
handle.execute(definitionsTempStagingTableInsertStmt);
|
||||
handle.execute("truncate table " + definitionsTempStagingTableName);
|
||||
sw.stop();
|
||||
|
||||
logger.debug("[{}]: flushing definitions temp staging table took: {}", id, sw);
|
||||
|
||||
sw.reset().start();
|
||||
handle.execute(dimensionsTempStagingTableInsertStmt);
|
||||
handle.execute("truncate table " + dimensionsTempStagingTableName);
|
||||
sw.stop();
|
||||
|
||||
logger.debug("[{}]: flushing dimensions temp staging table took: {}", id, sw);
|
||||
|
||||
sw.reset().start();
|
||||
handle.execute(definitionDimensionsTempStagingTableInsertStmt);
|
||||
handle.execute("truncate table " + definitionDimensionsTempStagingTableName);
|
||||
sw.stop();
|
||||
|
||||
logger.debug("[{}]: flushing definition dimensions temp staging table took: {}", id, sw);
|
||||
}
|
||||
|
||||
private void clearTempCaches() {
|
||||
|
||||
definitionIdSet.clear();
|
||||
dimensionIdSet.clear();
|
||||
definitionDimensionsIdSet.clear();
|
||||
|
||||
}
|
||||
|
||||
private Map<String, String> prepDimensions(Map<String, String> dimMap, String id) {
|
||||
|
||||
Map<String, String> newDimMap = new TreeMap<>();
|
||||
|
||||
if (dimMap != null) {
|
||||
|
||||
for (String dimName : dimMap.keySet()) {
|
||||
|
||||
if (dimName != null && !dimName.isEmpty()) {
|
||||
|
||||
String dimValue = dimMap.get(dimName);
|
||||
|
||||
if (dimValue != null && !dimValue.isEmpty()) {
|
||||
|
||||
newDimMap.put(trunc(dimName, MAX_COLUMN_LENGTH, id),
|
||||
trunc(dimValue, MAX_COLUMN_LENGTH, id));
|
||||
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return newDimMap;
|
||||
|
||||
}
|
||||
|
||||
private String trunc(String s, int l, String id) {
|
||||
|
||||
if (s == null) {
|
||||
|
||||
return "";
|
||||
|
||||
} else if (s.length() <= l) {
|
||||
|
||||
return s;
|
||||
|
||||
} else {
|
||||
|
||||
String r = s.substring(0, l);
|
||||
|
||||
logger.warn( "[{}]: input string exceeded max column length. truncating input string {} to {} chars",
|
||||
id, s, l);
|
||||
|
||||
logger.warn("[{}]: resulting string {}", id, r);
|
||||
|
||||
return r;
|
||||
}
|
||||
}
|
||||
}
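
The repository above derives all of its surrogate keys from SHA-1 digests: the definition ID hashes name + tenant ID + region, the dimension-set ID hashes the sorted dimension name/value pairs, and the definition-dimensions ID hashes the two hex strings concatenated. A small Python sketch of that scheme follows (illustrative only; the sample values are invented and byte-for-byte agreement with the Java DigestUtils output is assumed, not verified):

```
# Sketch of the SHA-1 ID scheme used by VerticaMetricRepo above.
import hashlib

def sha1(s: str) -> bytes:
    return hashlib.sha1(s.encode('utf-8')).digest()

name, tenant_id, region = 'cpu.idle_perc', 'example-tenant', 'example-region'
dimensions = {'hostname': 'host-1', 'service': 'monitoring'}

definition_id = sha1(name + tenant_id + region)
# Dimensions are sorted by name (TreeMap in the Java code) before hashing.
dimension_set_id = sha1(''.join(k + v for k, v in sorted(dimensions.items())))
definition_dimensions_id = sha1(definition_id.hex() + dimension_set_id.hex())
```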
@@ -1,39 +0,0 @@
|
||||
/*
|
||||
* Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
|
||||
*
|
||||
* Licensed under the Apache License, Version 2.0 (the "License");
|
||||
* you may not use this file except in compliance with the License.
|
||||
* You may obtain a copy of the License at
|
||||
*
|
||||
* http://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing, software
|
||||
* distributed under the License is distributed on an "AS IS" BASIS,
|
||||
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
|
||||
* implied.
|
||||
* See the License for the specific language governing permissions and
|
||||
* limitations under the License.
|
||||
*/
|
||||
|
||||
package monasca.persister.repository.vertica;
|
||||
|
||||
import org.skife.jdbi.v2.DBI;
|
||||
import org.skife.jdbi.v2.Handle;
|
||||
|
||||
public class VerticaRepo {
|
||||
protected DBI dbi;
|
||||
protected Handle handle;
|
||||
|
||||
public VerticaRepo(DBI dbi) {
|
||||
this.dbi = dbi;
|
||||
this.handle = dbi.open();
|
||||
this.handle.execute("SET TIME ZONE TO 'UTC'");
|
||||
}
|
||||
|
||||
public VerticaRepo() {}
|
||||
|
||||
public void setDBI(DBI dbi) throws Exception {
|
||||
this.dbi = dbi;
|
||||
this.handle = dbi.open();
|
||||
}
|
||||
}
|
||||
@@ -1,30 +0,0 @@
|
||||
/*
|
||||
* Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
|
||||
*
|
||||
* Licensed under the Apache License, Version 2.0 (the "License");
|
||||
* you may not use this file except in compliance with the License.
|
||||
* You may obtain a copy of the License at
|
||||
*
|
||||
* http://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing, software
|
||||
* distributed under the License is distributed on an "AS IS" BASIS,
|
||||
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
|
||||
* implied.
|
||||
* See the License for the specific language governing permissions and
|
||||
* limitations under the License.
|
||||
*/
|
||||
|
||||
package monasca.persister.resource;
|
||||
|
||||
public class PlaceHolder {
|
||||
private final String content;
|
||||
|
||||
public PlaceHolder(String content) {
|
||||
this.content = content;
|
||||
}
|
||||
|
||||
public String getContent() {
|
||||
return content;
|
||||
}
|
||||
}
|
||||
@@ -1,37 +0,0 @@
|
||||
/*
|
||||
* Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
|
||||
*
|
||||
* Licensed under the Apache License, Version 2.0 (the "License");
|
||||
* you may not use this file except in compliance with the License.
|
||||
* You may obtain a copy of the License at
|
||||
*
|
||||
* http://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing, software
|
||||
* distributed under the License is distributed on an "AS IS" BASIS,
|
||||
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
|
||||
* implied.
|
||||
* See the License for the specific language governing permissions and
|
||||
* limitations under the License.
|
||||
*/
|
||||
|
||||
package monasca.persister.resource;
|
||||
|
||||
import javax.ws.rs.GET;
|
||||
import javax.ws.rs.Path;
|
||||
import javax.ws.rs.Produces;
|
||||
import javax.ws.rs.core.MediaType;
|
||||
|
||||
@Path("/resource")
|
||||
@Produces(MediaType.APPLICATION_JSON)
|
||||
public class Resource {
|
||||
|
||||
public Resource() {
|
||||
|
||||
}
|
||||
|
||||
@GET
|
||||
public PlaceHolder getResource() {
|
||||
return new PlaceHolder("placeholder");
|
||||
}
|
||||
}
|
||||
@@ -1,9 +0,0 @@
|
||||
|
||||
|
||||
_____ __________ .__ __
|
||||
/ \ ____ ____ \______ \ ___________ _____|__| _______/ |_ ___________
|
||||
/ \ / \ / _ \ / \ | ___// __ \_ __ \/ ___/ |/ ___/\ __\/ __ \_ __ \
|
||||
/ Y ( <_> ) | \ | | \ ___/| | \/\___ \| |\___ \ | | \ ___/| | \/
|
||||
\____|__ /\____/|___| / |____| \___ >__| /____ >__/____ > |__| \___ >__|
|
||||
\/ \/ \/ \/ \/ \/
|
||||
|
||||
@@ -1,129 +0,0 @@
|
||||
name: monasca-persister
|
||||
|
||||
alarmHistoryConfiguration:
|
||||
batchSize: 100
|
||||
numThreads: 1
|
||||
maxBatchTime: 15
|
||||
# See http://kafka.apache.org/documentation.html#api for semantics and defaults.
|
||||
topic: alarm-state-transitions
|
||||
groupId: persister_alarms
|
||||
consumerId: 1
|
||||
clientId: 1
|
||||
|
||||
metricConfiguration:
|
||||
batchSize: 1000
|
||||
numThreads: 2
|
||||
maxBatchTime: 30
|
||||
# See http://kafka.apache.org/documentation.html#api for semantics and defaults.
|
||||
topic: metrics
|
||||
groupId: persister_metrics
|
||||
consumerId: 1
|
||||
clientId: 1
|
||||
|
||||
#Kafka settings.
|
||||
kafkaConfig:
|
||||
# See http://kafka.apache.org/documentation.html#api for semantics and defaults.
|
||||
zookeeperConnect: 192.168.10.4:2181
|
||||
socketTimeoutMs: 30000
|
||||
socketReceiveBufferBytes : 65536
|
||||
fetchMessageMaxBytes: 1048576
|
||||
queuedMaxMessageChunks: 10
|
||||
rebalanceMaxRetries: 4
|
||||
fetchMinBytes: 1
|
||||
fetchWaitMaxMs: 100
|
||||
rebalanceBackoffMs: 2000
|
||||
refreshLeaderBackoffMs: 200
|
||||
autoOffsetReset: largest
|
||||
consumerTimeoutMs: 1000
|
||||
zookeeperSessionTimeoutMs : 60000
|
||||
zookeeperConnectionTimeoutMs : 6000
|
||||
zookeeperSyncTimeMs: 2000
|
||||
|
||||
verticaMetricRepoConfig:
|
||||
maxCacheSize: 2000000
|
||||
|
||||
databaseConfiguration:
|
||||
# databaseType can be (vertica | influxdb)
|
||||
databaseType: influxdb
|
||||
|
||||
# Uncomment if databaseType is influxdb
|
||||
influxDbConfiguration:
|
||||
# Retention policy may be left blank to indicate default policy.
|
||||
retentionPolicy:
|
||||
# Used only if version is V9.
|
||||
maxHttpConnections: 100
|
||||
name: mon
|
||||
replicationFactor: 1
|
||||
url: http://192.168.10.4:8086
|
||||
user: root
|
||||
password: root
|
||||
|
||||
# Uncomment if databaseType is vertica
|
||||
#dataSourceFactory:
|
||||
# driverClass: com.vertica.jdbc.Driver
|
||||
# url: jdbc:vertica://192.168.10.4:5433/monasca
|
||||
# user: monasca_persister
|
||||
# password: password
|
||||
# properties:
|
||||
# ssl: false
|
||||
# # the maximum amount of time to wait on an empty pool before throwing an exception
|
||||
# maxWaitForConnection: 1s
|
||||
#
|
||||
# # the SQL query to run when validating a connection's liveness
|
||||
# validationQuery: "/* MyService Health Check */ SELECT 1"
|
||||
#
|
||||
# # the minimum number of connections to keep open
|
||||
# minSize: 8
|
||||
#
|
||||
# # the maximum number of connections to keep open
|
||||
# maxSize: 41
|
||||
#
|
||||
# # whether or not idle connections should be validated
|
||||
# checkConnectionWhileIdle: false
|
||||
#
|
||||
# # the maximum lifetime of an idle connection
|
||||
# maxConnectionAge: 1 minute
|
||||
|
||||
metrics:
|
||||
frequency: 1 second
|
||||
|
||||
# Logging settings.
|
||||
logging:
|
||||
|
||||
# The default level of all loggers. Can be OFF, ERROR, WARN, INFO,
|
||||
# DEBUG, TRACE, or ALL.
|
||||
level: INFO
|
||||
|
||||
# Logger-specific levels.
|
||||
loggers:
|
||||
monasca: DEBUG
|
||||
|
||||
appenders:
|
||||
|
||||
- type: console
|
||||
threshold: DEBUG
|
||||
timeZone: UTC
|
||||
target: stdout
|
||||
|
||||
- type: file
|
||||
threshold: DEBUG
|
||||
archive: true
|
||||
# The file to which current statements will be logged.
|
||||
currentLogFilename: ./logs/monasca-persister.log
|
||||
|
||||
# When the log file rotates, the archived log will be renamed to this and gzipped. The
|
||||
# %d is replaced with the previous day (yyyy-MM-dd). Custom rolling windows can be created
|
||||
# by passing a SimpleDateFormat-compatible format as an argument: "%d{yyyy-MM-dd-hh}".
|
||||
archivedLogFilenamePattern: ./logs/monasca-persister-%d.log.gz
|
||||
|
||||
# The number of archived files to keep.
|
||||
archivedFileCount: 5
|
||||
|
||||
# The timezone used to format dates. HINT: USE THE DEFAULT, UTC.
|
||||
timeZone: UTC
|
||||
|
||||
# Uncomment to approximately match the default log format of the python
|
||||
# Openstack components. %pid is unavoidably formatted with [brackets],
|
||||
# which are hard-coded in dropwizard's logging module.
|
||||
# See http://logback.qos.ch/manual/layouts.html#conversionWord for details of the format string
|
||||
# logFormat: "%app%pid: %d{YYYY-MM-dd HH:mm:ss.SSS} %pid %level %logger [-] [%thread] %msg %ex{1}"
|
||||
@@ -1,58 +0,0 @@
|
||||
/*
|
||||
* Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
|
||||
*
|
||||
* Licensed under the Apache License, Version 2.0 (the "License");
|
||||
* you may not use this file except in compliance with the License.
|
||||
* You may obtain a copy of the License at
|
||||
*
|
||||
* http://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing, software
|
||||
* distributed under the License is distributed on an "AS IS" BASIS,
|
||||
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
|
||||
* implied.
|
||||
* See the License for the specific language governing permissions and
|
||||
* limitations under the License.
|
||||
*/
|
||||
|
||||
package monasca.persister;
|
||||
|
||||
import monasca.common.model.metric.MetricEnvelope;
|
||||
import monasca.persister.consumer.ManagedConsumer;
|
||||
import monasca.persister.consumer.KafkaConsumer;
|
||||
import monasca.persister.pipeline.ManagedPipeline;
|
||||
import monasca.persister.pipeline.event.MetricHandler;
|
||||
|
||||
import org.junit.Before;
|
||||
import org.junit.Test;
|
||||
import org.mockito.Mock;
|
||||
import org.mockito.Mockito;
|
||||
import org.mockito.MockitoAnnotations;
|
||||
|
||||
public class MonPersisterConsumerTest {
|
||||
|
||||
@Mock
|
||||
private KafkaConsumer<MetricEnvelope[]> kafkaConsumer;
|
||||
|
||||
@Mock
|
||||
private ManagedConsumer<MetricEnvelope[]> monManagedConsumer;
|
||||
|
||||
private MetricHandler metricHandler;
|
||||
|
||||
private ManagedPipeline<MetricEnvelope[]> metricPipeline;
|
||||
|
||||
@Before
|
||||
public void initMocks() {
|
||||
metricHandler = Mockito.mock(MetricHandler.class);
|
||||
metricPipeline = Mockito.spy(new ManagedPipeline<MetricEnvelope[]>(metricHandler, "metric-1"));
|
||||
MockitoAnnotations.initMocks(this);
|
||||
}
|
||||
|
||||
@Test
|
||||
public void testKafkaConsumerLifecycle() throws Exception {
|
||||
monManagedConsumer.start();
|
||||
monManagedConsumer.stop();
|
||||
metricPipeline.shutdown();
|
||||
Mockito.verify(metricHandler).flush();
|
||||
}
|
||||
}
|
||||
@@ -1,210 +0,0 @@
|
||||
# Monasca Persister
|
||||
|
||||
A Monasca Persister written in Python.
|
||||
|
||||
Reads alarms and metrics from a Kafka queue and stores them in an InfluxDB
|
||||
database.
|
||||
|
||||
## Deployment
|
||||
|
||||
Note that this document refers to the Python implementation of the persister.
|
||||
For information regarding the Java implementation, see the README.md in the
|
||||
root of this repository.
|
||||
|
||||
### Package Installation
|
||||
|
||||
Monasca Persister should be installed from PyPI via pip. Use your distribution
package manager, or other preferred method, to ensure that you have pip
installed; the package is typically called python-pip.
|
||||
|
||||
e.g. For Debian/Ubuntu:
|
||||
```
|
||||
sudo apt-get install python-pip
|
||||
```
|
||||
|
||||
Alternately, you may want to follow the official instructions available at:
|
||||
https://pip.pypa.io/en/stable/installing/
|
||||
|
||||
Now, to install a particular released version:
|
||||
|
||||
```
|
||||
sudo pip install monasca-persister==<version>
|
||||
```
|
||||
|
||||
If using InfluxDB, the persister requires the necessary Python library
|
||||
to be installed. This can also be achieved via pip:
|
||||
|
||||
```
|
||||
sudo pip install influxdb
|
||||
```
|
||||
|
||||
Alternatively, pip can be used to install the latest development version
|
||||
or a specific revision. This requires you have git installed in addition
|
||||
to pip.
|
||||
|
||||
```
|
||||
sudo apt-get install git
|
||||
sudo pip install git+https://opendev.org/openstack/monasca-persister@<revision>#egg=monasca-persister
|
||||
```
|
||||
|
||||
The installation will not cause the persister to run - it should first
|
||||
be configured.
|
||||
|
||||
### Environment
|
||||
|
||||
Using the persister requires that the following components of the Monasca
|
||||
system are deployed and available:
|
||||
|
||||
* Kafka
|
||||
* Zookeeper (Required by Kafka)
|
||||
* InfluxDB
|
||||
|
||||
If running the persister as a daemon, it is good practice to create a
|
||||
dedicated system user for the purpose, for example:
|
||||
|
||||
```
|
||||
sudo groupadd --system monasca
|
||||
sudo useradd --system --gid monasca mon-persister
|
||||
```
|
||||
|
||||
Additionally, it is good practice to give the daemon a dedicated working
|
||||
directory, in the event it ever needs to write any files.
|
||||
|
||||
```
|
||||
sudo mkdir -p /var/lib/monasca-persister
|
||||
sudo chown mon-persister:monasca /var/lib/monasca-persister
|
||||
```
|
||||
|
||||
The persister will write a log file in a location which can be changed
|
||||
via the configuration, but by default requires that a suitable directory
|
||||
be created as follows:
|
||||
|
||||
```
|
||||
sudo mkdir -p /var/log/monasca/persister
|
||||
sudo chown mon-persister:monasca /var/log/monasca/persister
|
||||
```
|
||||
|
||||
Make sure to allow the user who will be running the persister to have
|
||||
write access to the log directory.
|
||||
|
||||
### Configuration
|
||||
|
||||
There is a minimum amount of configuration which must be performed before
|
||||
running the persister. A template configuration file is installed in the
|
||||
default location of /etc/monasca/monasca-persister.conf.
|
||||
|
||||
Note that the configuration will contain authentication information for
|
||||
the database being used, so depending on your environment it may be
|
||||
desirable to inhibit read access except for the monasca-persister group.
|
||||
|
||||
```
|
||||
sudo chown root:monasca /etc/monasca/monasca-persister.conf
|
||||
sudo chmod 640 /etc/monasca/monasca-persister.conf
|
||||
```
|
||||
|
||||
Most of the configuration options should be left at default, but at a
minimum, the following should be changed (the influxdb ssl and verify_ssl
options default to False; only add or change them if your InfluxDB uses SSL):
|
||||
|
||||
```
|
||||
[zookeeper]
|
||||
uri = <host1>:<port1>,<host2>:<port2>,...
|
||||
|
||||
[kafka_alarm_history]
|
||||
uri = <host1>:<port1>,<host2>:<port2>,...
|
||||
|
||||
[kafka_metrics]
|
||||
uri = <host1>:<port1>,<host2>:<port2>,...
|
||||
|
||||
[influxdb]
|
||||
database_name =
|
||||
ip_address =
|
||||
port =
|
||||
user =
|
||||
password =
|
||||
ssl =
|
||||
verify_ssl =
|
||||
```
|
||||
|
||||
### Running
|
||||
|
||||
The installation does not provide scripts for starting the persister; it
is up to the user how the persister is run. To run the persister manually,
|
||||
which may be useful for troubleshooting:
|
||||
|
||||
```
|
||||
sudo -u mon-persister \
|
||||
monasca-persister \
|
||||
--config-file /etc/monasca/monasca-persister.conf
|
||||
```
|
||||
|
||||
Note that it is important to deploy the daemon in a manner such that it
will be restarted if it exits (fails). There are a number of situations in which
|
||||
the persister will fail-fast, such as all Kafka endpoints becoming unreachable.
|
||||
For an example of this, see the systemd deployment section below.
|
||||
|
||||
### Running (systemd)
|
||||
|
||||
To run the persister as a daemon, the following systemd unit file can be used
|
||||
and placed in ``/etc/systemd/system/monasca-persister.service``:
|
||||
|
||||
```
|
||||
[Unit]
|
||||
Description=OpenStack Monasca Persister
|
||||
Documentation=https://github.com/openstack/monasca-persister/monasca-persister/README.md
|
||||
Requires=network.target remote-fs.target
|
||||
After=network.target remote-fs.target
|
||||
ConditionPathExists=/etc/monasca/monasca-persister.conf
|
||||
ConditionPathExists=/var/lib/monasca-persister
|
||||
ConditionPathExists=/var/log/monasca/persister
|
||||
|
||||
[Service]
|
||||
Type=simple
|
||||
PIDFile=/var/run/monasca-persister.pid
|
||||
User=mon-persister
|
||||
Group=monasca
|
||||
WorkingDirectory=/var/lib/monasca-persister
|
||||
ExecStart=/usr/local/bin/monasca-persister --config-file /etc/monasca/monasca-persister.conf
|
||||
Restart=on-failure
|
||||
RestartSec=5
|
||||
SyslogIdentifier=monasca-persister
|
||||
|
||||
[Install]
|
||||
WantedBy=multi-user.target
|
||||
```
|
||||
|
||||
After creating or modifying the service file, you should run:
|
||||
|
||||
```
|
||||
sudo systemctl daemon-reload
|
||||
```
|
||||
|
||||
The service can then be managed as normal, e.g.
|
||||
|
||||
```
|
||||
sudo systemctl start monasca-persister
|
||||
```
|
||||
|
||||
For a production deployment, it will almost always be desirable to use the
|
||||
*Restart* clause in the service file. The *RestartSec* clause is also
|
||||
important as, by default, systemd assumes a delay of only 100 ms. A number of
failures in quick succession will cause the unit to enter a failed state,
so extending this period is critical.
|
||||
|
||||
|
||||
# License
|
||||
|
||||
(C) Copyright 2014-2016 Hewlett Packard Enterprise Development Company LP
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
|
||||
implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
@@ -1,74 +0,0 @@
|
||||
# Copyright 2017 FUJITSU LIMITED
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
|
||||
import os
|
||||
import pkgutil
|
||||
|
||||
from oslo_config import cfg
|
||||
from oslo_utils import importutils
|
||||
|
||||
CONF = cfg.CONF
|
||||
|
||||
|
||||
def load_conf_modules():
|
||||
"""Load all modules that contain configuration.
|
||||
|
||||
Method iterates over modules of :py:module:`monasca_persister.conf`
|
||||
and imports only those that contain following methods:
|
||||
|
||||
- list_opts (required by oslo_config.genconfig)
|
||||
- register_opts (required by :py:currentmodule:)
|
||||
|
||||
"""
|
||||
for modname in _list_module_names():
|
||||
mod = importutils.import_module('monasca_persister.conf.' + modname)
|
||||
required_funcs = ['register_opts', 'list_opts']
|
||||
for func in required_funcs:
|
||||
if hasattr(mod, func):
|
||||
yield mod
|
||||
|
||||
|
||||
def _list_module_names():
|
||||
module_names = []
|
||||
package_path = os.path.dirname(os.path.abspath(__file__))
|
||||
for _, modname, ispkg in pkgutil.iter_modules(path=[package_path]):
|
||||
if not (modname == "opts" and ispkg):
|
||||
module_names.append(modname)
|
||||
return module_names
|
||||
|
||||
|
||||
def register_opts():
|
||||
"""Register all conf modules opts.
|
||||
|
||||
This method allows different modules to register
|
||||
opts according to their needs.
|
||||
|
||||
"""
|
||||
for mod in load_conf_modules():
|
||||
mod.register_opts(cfg.CONF)
|
||||
|
||||
|
||||
def list_opts():
|
||||
"""List all conf modules opts.
|
||||
|
||||
Goes through all conf modules and yields their opts.
|
||||
|
||||
"""
|
||||
for mod in load_conf_modules():
|
||||
mod_opts = mod.list_opts()
|
||||
if type(mod_opts) is list:
|
||||
for single_mod_opts in mod_opts:
|
||||
yield single_mod_opts[0], single_mod_opts[1]
|
||||
else:
|
||||
yield mod_opts[0], mod_opts[1]
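
As a rough illustration of how these helpers are consumed (a sketch only: the monasca_persister.conf package and option names come from the modules in this change, the rest is assumed), a service would typically register every module's options before parsing its configuration file:

```
# Sketch: registering all persister options on the global CONF object and
# reading a value back. Assumes monasca_persister.conf is importable.
from oslo_config import cfg

from monasca_persister import conf

def main():
    conf.register_opts()
    cfg.CONF(args=[], project='monasca',
             default_config_files=['/etc/monasca/monasca-persister.conf'])
    # Options are then grouped, e.g. [influxdb] database_name:
    print(cfg.CONF.influxdb.database_name)

if __name__ == '__main__':
    main()
```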
@@ -1,73 +0,0 @@
|
||||
# (C) Copyright 2016 Hewlett Packard Enterprise Development Company LP
|
||||
# Copyright 2017 FUJITSU LIMITED
|
||||
# (C) Copyright 2017 SUSE LLC
|
||||
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
|
||||
# implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
from oslo_config import cfg
|
||||
from oslo_config import types
|
||||
|
||||
cassandra_opts = [
|
||||
cfg.ListOpt('contact_points',
|
||||
help='Comma separated list of Cassandra node IP addresses',
|
||||
default=['127.0.0.1'],
|
||||
item_type=types.HostAddress()),
|
||||
cfg.IntOpt('port',
|
||||
help='Cassandra port number',
|
||||
default=8086),
|
||||
cfg.StrOpt('keyspace',
|
||||
help='Keyspace name where metrics are stored',
|
||||
default='monasca'),
|
||||
cfg.StrOpt('user',
|
||||
help='Cassandra user name',
|
||||
default=''),
|
||||
cfg.StrOpt('password',
|
||||
help='Cassandra password',
|
||||
secret=True,
|
||||
default=''),
|
||||
cfg.IntOpt('connection_timeout',
|
||||
help='Cassandra timeout in seconds when creating a new connection',
|
||||
default=5),
|
||||
cfg.IntOpt('read_timeout',
|
||||
help='Cassandra read timeout in seconds',
|
||||
default=60),
|
||||
cfg.IntOpt('max_write_retries',
|
||||
help='Maximum number of retries in write ops',
|
||||
default=1),
|
||||
cfg.IntOpt('max_definition_cache_size',
|
||||
help='Maximum number of cached metric definition entries in memory',
|
||||
default=20000000),
|
||||
cfg.IntOpt('retention_policy',
|
||||
help='Data retention period in days',
|
||||
default=45),
|
||||
cfg.StrOpt('consistency_level',
|
||||
help='Cassandra default consistency level',
|
||||
default='ONE'),
|
||||
cfg.StrOpt('local_data_center',
|
||||
help='Cassandra local data center name'),
|
||||
cfg.IntOpt('max_batches',
|
||||
help='Maximum batch size in Cassandra',
|
||||
default=250),
|
||||
]
|
||||
|
||||
cassandra_group = cfg.OptGroup(name='cassandra')
|
||||
|
||||
|
||||
def register_opts(conf):
|
||||
conf.register_group(cassandra_group)
|
||||
conf.register_opts(cassandra_opts, cassandra_group)
|
||||
|
||||
|
||||
def list_opts():
|
||||
return cassandra_group, cassandra_opts
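
To show where these options end up (a hedged sketch, not code from this repository: the cassandra-driver calls are standard, but the persister's actual wiring is not reproduced here), the [cassandra] group maps naturally onto a cluster connection:

```
# Sketch: building a Cassandra session from the [cassandra] options above.
from cassandra.auth import PlainTextAuthProvider
from cassandra.cluster import Cluster
from oslo_config import cfg

def connect(conf=cfg.CONF):
    auth = None
    if conf.cassandra.user:
        auth = PlainTextAuthProvider(username=conf.cassandra.user,
                                     password=conf.cassandra.password)
    cluster = Cluster(contact_points=conf.cassandra.contact_points,
                      port=conf.cassandra.port,
                      auth_provider=auth)
    return cluster.connect(conf.cassandra.keyspace)
```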
@@ -1,57 +0,0 @@
|
||||
# Copyright 2017 FUJITSU LIMITED
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
|
||||
# implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
from oslo_config import cfg
|
||||
|
||||
from monasca_persister.conf import types
|
||||
|
||||
elasticsearch_opts = [
|
||||
cfg.StrOpt(
|
||||
'index_name',
|
||||
help='Index prefix name where events are stored',
|
||||
default='events'),
|
||||
cfg.ListOpt(
|
||||
'hosts',
|
||||
help='List of Elasticsearch nodes in format host[:port]',
|
||||
default=['localhost:9200'],
|
||||
item_type=types.HostAddressPortType()),
|
||||
cfg.BoolOpt(
|
||||
'sniff_on_start',
|
||||
help='Flag indicating whether to obtain a list of nodes from the cluster at startup time',
|
||||
default=False),
|
||||
cfg.BoolOpt(
|
||||
'sniff_on_connection_fail',
|
||||
help='Flag controlling if connection failure triggers a sniff',
|
||||
default=False),
|
||||
cfg.IntOpt(
|
||||
'sniffer_timeout',
|
||||
help='Number of seconds between automatic sniffs',
|
||||
default=None),
|
||||
cfg.IntOpt(
|
||||
'max_retries',
|
||||
help='Maximum number of retries before an exception is propagated',
|
||||
default=3,
|
||||
min=1)]
|
||||
|
||||
elasticsearch_group = cfg.OptGroup(name='elasticsearch', title='elasticsearch')
|
||||
|
||||
|
||||
def register_opts(conf):
|
||||
conf.register_group(elasticsearch_group)
|
||||
conf.register_opts(elasticsearch_opts, elasticsearch_group)
|
||||
|
||||
|
||||
def list_opts():
|
||||
return elasticsearch_group, elasticsearch_opts
|
||||
@@ -1,62 +0,0 @@
|
||||
# (C) Copyright 2016-2017 Hewlett Packard Enterprise Development LP
|
||||
# Copyright 2017 FUJITSU LIMITED
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
|
||||
# implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
from oslo_config import cfg
|
||||
|
||||
influxdb_opts = [
|
||||
cfg.StrOpt('database_name',
|
||||
help='database name where metrics are stored',
|
||||
default='mon'),
|
||||
cfg.BoolOpt('db_per_tenant',
|
||||
help='Whether to use a separate database per tenant',
|
||||
default=False),
|
||||
cfg.IntOpt('default_retention_hours',
|
||||
help='Default retention period in hours for new '
|
||||
'databases automatically created by the persister',
|
||||
default=0),
|
||||
cfg.IntOpt('batch_size',
|
||||
help='Maximum size of the batch to write to the database.',
|
||||
default=10000),
|
||||
cfg.HostAddressOpt('ip_address',
|
||||
help='Valid IP address or hostname '
|
||||
'of the InfluxDB instance'),
|
||||
cfg.PortOpt('port',
|
||||
help='Port of the InfluxDB instance',
|
||||
default=8086),
|
||||
cfg.BoolOpt('ssl',
|
||||
help='Whether to use SSL (https) when connecting to InfluxDB',
|
||||
default=False),
|
||||
cfg.BoolOpt('verify_ssl',
|
||||
help='Whether to verify SSL certificates when connecting to InfluxDB',
|
||||
default=False),
|
||||
cfg.StrOpt('user',
|
||||
help='InfluxDB user',
|
||||
default='mon_persister'),
|
||||
cfg.StrOpt('password',
|
||||
secret=True,
|
||||
help='InfluxDB password')]
|
||||
|
||||
influxdb_group = cfg.OptGroup(name='influxdb',
|
||||
title='influxdb')
|
||||
|
||||
|
||||
def register_opts(conf):
|
||||
conf.register_group(influxdb_group)
|
||||
conf.register_opts(influxdb_opts, influxdb_group)
|
||||
|
||||
|
||||
def list_opts():
|
||||
return influxdb_group, influxdb_opts
|
||||
@@ -1,65 +0,0 @@
|
||||
# (C) Copyright 2016-2017 Hewlett Packard Enterprise Development LP
|
||||
# Copyright 2017 FUJITSU LIMITED
|
||||
# (C) Copyright 2017 SUSE LLC
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
|
||||
# implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
import copy
|
||||
|
||||
from oslo_config import cfg
|
||||
|
||||
from monasca_persister.conf import kafka_common
|
||||
from monasca_persister.conf import types
|
||||
|
||||
kafka_alarm_history_group = cfg.OptGroup(name='kafka_alarm_history',
|
||||
title='kafka_alarm_history')
|
||||
kafka_alarm_history_opts = [
|
||||
cfg.BoolOpt('enabled',
|
||||
help='Enable alarm state history persister',
|
||||
default=True),
|
||||
# NOTE(czarneckia) default by reference does not work with ListOpt
|
||||
cfg.ListOpt('uri',
|
||||
help='Comma separated list of Kafka broker host:port',
|
||||
default=['127.0.0.1:9092'],
|
||||
item_type=types.HostAddressPortType()),
|
||||
cfg.StrOpt('group_id',
|
||||
help='Kafka group from which the persister gets data',
|
||||
default='1_alarm-state-transitions'),
|
||||
cfg.StrOpt('topic',
|
||||
help='Kafka topic from which the persister gets data',
|
||||
default='alarm-state-transitions'),
|
||||
cfg.StrOpt('zookeeper_path',
|
||||
help='Path in zookeeper for kafka consumer group partitioning algorithm',
|
||||
default='/persister_partitions/$kafka_alarm_history.topic'),
|
||||
cfg.IntOpt(
|
||||
'batch_size',
|
||||
help='Maximum number of alarm state history messages to buffer before writing to database',
|
||||
default=1),
|
||||
]
|
||||
|
||||
|
||||
# Replace the default value with a reference to the corresponding [kafka] group option
|
||||
kafka_common_opts = copy.deepcopy(kafka_common.kafka_common_opts)
|
||||
for opt in kafka_common_opts:
|
||||
opt.default = '$kafka.{}'.format(opt.name)
|
||||
|
||||
|
||||
def register_opts(conf):
|
||||
conf.register_group(kafka_alarm_history_group)
|
||||
conf.register_opts(kafka_alarm_history_opts + kafka_common_opts,
|
||||
kafka_alarm_history_group)
|
||||
|
||||
|
||||
def list_opts():
|
||||
return kafka_alarm_history_group, kafka_alarm_history_opts + kafka_common_opts
|
||||
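The deepcopy loop above rewrites each shared option's default into a '$kafka.<option>' template, so any [kafka_alarm_history] option left unset falls back to the value of the common [kafka] group at resolution time. A compact sketch of that default-by-reference pattern, using an illustrative option name:

    import copy

    from oslo_config import cfg

    conf = cfg.ConfigOpts()

    # The option is defined once for the common [kafka] group...
    shared_opts = [cfg.IntOpt('num_processors', default=1)]
    conf.register_opts(shared_opts, group=cfg.OptGroup('kafka'))

    # ...and the per-topic copy only carries a reference back to it.
    topic_opts = copy.deepcopy(shared_opts)
    for opt in topic_opts:
        opt.default = '$kafka.{}'.format(opt.name)
    conf.register_opts(topic_opts, group=cfg.OptGroup('kafka_alarm_history'))

    conf([])
    # With nothing overridden this resolves through the reference to the
    # [kafka] value, 1 in this sketch.
    print(conf.kafka_alarm_history.num_processors)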
@@ -1,64 +0,0 @@
|
||||
# (C) Copyright 2016-2017 Hewlett Packard Enterprise Development LP
|
||||
# Copyright 2017 FUJITSU LIMITED
|
||||
# (C) Copyright 2017 SUSE LLC
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
|
||||
# implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
from oslo_config import cfg
|
||||
|
||||
|
||||
kafka_common_opts = [
|
||||
cfg.StrOpt('consumer_id',
|
||||
help='Name/id of persister kafka consumer',
|
||||
advanced=True,
|
||||
default='monasca-persister'),
|
||||
cfg.StrOpt('client_id',
|
||||
help='id of persister kafka client',
|
||||
advanced=True,
|
||||
default='monasca-persister'),
|
||||
cfg.IntOpt('max_wait_time_seconds',
|
||||
help='Maximum wait time for write batch to database',
|
||||
default=30),
|
||||
cfg.IntOpt('fetch_size_bytes',
|
||||
help='Fetch size, in bytes. This value is set to the kafka-python defaults',
|
||||
default=4096),
|
||||
cfg.IntOpt('buffer_size',
|
||||
help='Buffer size, in bytes. This value is set to the kafka-python defaults',
|
||||
default='$kafka.fetch_size_bytes'),
|
||||
cfg.IntOpt('max_buffer_size',
|
||||
help='Maximum buffer size, in bytes; the default value is 8 times buffer_size. '
|
||||
'This value is set to the kafka-python defaults.',
|
||||
default=32768),
|
||||
cfg.IntOpt('num_processors',
|
||||
help='Number of processes spawned by persister',
|
||||
default=1),
|
||||
cfg.BoolOpt('legacy_kafka_client_enabled',
|
||||
help='Enable legacy Kafka client. When set old version of '
|
||||
'kafka-python library is used. Message format version '
|
||||
'for the brokers should be set to 0.9.0.0 to avoid '
|
||||
'performance issues until all consumers are upgraded.',
|
||||
default=False)
|
||||
]
|
||||
|
||||
kafka_common_group = cfg.OptGroup(name='kafka',
|
||||
title='kafka')
|
||||
|
||||
|
||||
def register_opts(conf):
|
||||
conf.register_group(kafka_common_group)
|
||||
conf.register_opts(kafka_common_opts, kafka_common_group)
|
||||
|
||||
|
||||
def list_opts():
|
||||
return kafka_common_group, kafka_common_opts
|
||||
@@ -1,60 +0,0 @@
|
||||
# Copyright 2017 FUJITSU LIMITED
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
|
||||
# implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
from copy import deepcopy
|
||||
|
||||
from oslo_config import cfg
|
||||
|
||||
from monasca_persister.conf import kafka_common
|
||||
from monasca_persister.conf import types
|
||||
|
||||
kafka_events_group = cfg.OptGroup(name='kafka_events',
|
||||
title='kafka_events')
|
||||
kafka_events_opts = [
|
||||
cfg.BoolOpt('enabled',
|
||||
help='Enable event persister',
|
||||
default=False),
|
||||
cfg.ListOpt('uri',
|
||||
help='Comma separated list of Kafka broker host:port',
|
||||
default=['127.0.0.1:9092'],
|
||||
item_type=types.HostAddressPortType()),
|
||||
cfg.StrOpt('group_id',
|
||||
help='Kafka group from which the persister gets data',
|
||||
default='1_events'),
|
||||
cfg.StrOpt('topic',
|
||||
help='Kafka topic from which the persister gets data',
|
||||
default='monevents'),
|
||||
cfg.StrOpt('zookeeper_path',
|
||||
help='Path in zookeeper for kafka consumer group partitioning algorithm',
|
||||
default='/persister_partitions/$kafka_events.topic'),
|
||||
cfg.IntOpt('batch_size',
|
||||
help='Maximum number of events to buffer before writing to database',
|
||||
default=1),
|
||||
]
|
||||
|
||||
# Replace the default value with a reference to the corresponding [kafka] group option
|
||||
kafka_common_opts = deepcopy(kafka_common.kafka_common_opts)
|
||||
for opt in kafka_common_opts:
|
||||
opt.default = '$kafka.{}'.format(opt.name)
|
||||
|
||||
|
||||
def register_opts(conf):
|
||||
conf.register_group(kafka_events_group)
|
||||
conf.register_opts(kafka_events_opts + kafka_common_opts,
|
||||
kafka_events_group)
|
||||
|
||||
|
||||
def list_opts():
|
||||
return kafka_events_group, kafka_events_opts + kafka_common_opts
|
||||
@@ -1,62 +0,0 @@
|
||||
# (C) Copyright 2016-2017 Hewlett Packard Enterprise Development LP
|
||||
# Copyright 2017 FUJITSU LIMITED
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
|
||||
# implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
import copy
|
||||
|
||||
from oslo_config import cfg
|
||||
|
||||
from monasca_persister.conf import kafka_common
|
||||
from monasca_persister.conf import types
|
||||
|
||||
kafka_metrics_group = cfg.OptGroup(name='kafka_metrics',
|
||||
title='kafka_metrics')
|
||||
kafka_metrics_opts = [
|
||||
cfg.BoolOpt('enabled',
|
||||
help='Enable metrics persister',
|
||||
default=True),
|
||||
# NOTE(czarneckia) default by reference does not work with ListOpt
|
||||
cfg.ListOpt('uri',
|
||||
help='Comma separated list of Kafka broker host:port',
|
||||
default=['127.0.0.1:9092'],
|
||||
item_type=types.HostAddressPortType()),
|
||||
cfg.StrOpt('group_id',
|
||||
help='Kafka group from which the persister gets data',
|
||||
default='1_metrics'),
|
||||
cfg.StrOpt('topic',
|
||||
help='Kafka topic from which the persister gets data',
|
||||
default='metrics'),
|
||||
cfg.StrOpt('zookeeper_path',
|
||||
help='Path in zookeeper for kafka consumer group partitioning algorithm',
|
||||
default='/persister_partitions/$kafka_metrics.topic'),
|
||||
cfg.IntOpt('batch_size',
|
||||
help='Maximum number of metrics to buffer before writing to database',
|
||||
default=20000),
|
||||
]
|
||||
|
||||
# Replace the default value with a reference to the corresponding [kafka] group option
|
||||
kafka_common_opts = copy.deepcopy(kafka_common.kafka_common_opts)
|
||||
for opt in kafka_common_opts:
|
||||
opt.default = '$kafka.{}'.format(opt.name)
|
||||
|
||||
|
||||
def register_opts(conf):
|
||||
conf.register_group(kafka_metrics_group)
|
||||
conf.register_opts(kafka_metrics_opts + kafka_common_opts,
|
||||
kafka_metrics_group)
|
||||
|
||||
|
||||
def list_opts():
|
||||
return kafka_metrics_group, kafka_metrics_opts + kafka_common_opts
|
||||
@@ -1,51 +0,0 @@
|
||||
# (C) Copyright 2016-2017 Hewlett Packard Enterprise Development LP
|
||||
# Copyright 2017 FUJITSU LIMITED
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
|
||||
# implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
from oslo_config import cfg
|
||||
|
||||
repositories_opts = [
|
||||
cfg.StrOpt(
|
||||
name='metrics_driver',
|
||||
help='The repository driver to use for metrics',
|
||||
default=('monasca_persister.repositories.influxdb.metrics_repository:'
|
||||
'MetricInfluxdbRepository')),
|
||||
cfg.StrOpt(
|
||||
name='alarm_state_history_driver',
|
||||
help='The repository driver to use for alarm state history',
|
||||
default=('monasca_persister.repositories.influxdb.'
|
||||
'alarm_state_history_repository:'
|
||||
'AlarmStateHistInfluxdbRepository')),
|
||||
cfg.StrOpt(
|
||||
name='events_driver',
|
||||
help='The repository driver to use for events',
|
||||
default=('monasca_persister.repositories.elasticsearch.events_repository:'
|
||||
'ElasticSearchEventsRepository')),
|
||||
cfg.BoolOpt(
|
||||
'ignore_parse_point_error',
|
||||
help='Specifies if InfluxDB parse point errors should be ignored and measurements dropped',
|
||||
default=False)]
|
||||
|
||||
repositories_group = cfg.OptGroup(name='repositories',
|
||||
title='repositories')
|
||||
|
||||
|
||||
def register_opts(conf):
|
||||
conf.register_group(repositories_group)
|
||||
conf.register_opts(repositories_opts, repositories_group)
|
||||
|
||||
|
||||
def list_opts():
|
||||
return repositories_group, repositories_opts
|
||||
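These driver strings are in the 'package.module:ClassName' form understood by monasca_common's simport helper, which main() later resolves with simport.load before spawning worker processes. A minimal sketch of that lookup, using the InfluxDB metrics driver as the example:

    from monasca_common.simport import simport

    # simport.load imports the module on the left of the colon and returns the
    # attribute named on the right (here, the repository class itself).
    driver = ('monasca_persister.repositories.influxdb.metrics_repository:'
              'MetricInfluxdbRepository')
    metrics_repository_class = simport.load(driver)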
@@ -1,58 +0,0 @@
|
||||
# Copyright 2017 FUJITSU LIMITED
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
|
||||
from oslo_config import cfg
|
||||
from oslo_config import types
|
||||
|
||||
|
||||
class HostAddressPortOpt(cfg.Opt):
|
||||
"""Option for HostAddressPortType.
|
||||
|
||||
Accepts a hostname or IP address with a TCP/IP port number.
|
||||
"""
|
||||
def __init__(self, name, **kwargs):
|
||||
ip_port_type = HostAddressPortType()
|
||||
super(HostAddressPortOpt, self).__init__(name,
|
||||
type=ip_port_type,
|
||||
**kwargs)
|
||||
|
||||
|
||||
class HostAddressPortType(types.HostAddress):
|
||||
"""HostAddress with additional port."""
|
||||
|
||||
def __init__(self, version=None):
|
||||
type_name = 'ip and port value'
|
||||
super(HostAddressPortType, self).__init__(version, type_name=type_name)
|
||||
|
||||
def __call__(self, value):
|
||||
addr, port = value.split(':')
|
||||
addr = self.validate_addr(addr)
|
||||
port = self._validate_port(port)
|
||||
if not addr and not port:
|
||||
raise ValueError('%s is not a valid ip with optional port' % value)
|
||||
return '%s:%d' % (addr, port)
|
||||
|
||||
@staticmethod
|
||||
def _validate_port(port):
|
||||
return types.Port()(port)
|
||||
|
||||
def validate_addr(self, addr):
|
||||
try:
|
||||
addr = self.ip_address(addr)
|
||||
except ValueError:
|
||||
try:
|
||||
addr = self.hostname(addr)
|
||||
except ValueError:
|
||||
raise ValueError("%s is not a valid host address", addr)
|
||||
return addr
|
||||
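HostAddressPortType backs the various ListOpt('uri', ...) items above: it splits a 'host:port' string, validates both halves, and returns the normalised value. A short usage sketch, assuming the module is importable as monasca_persister.conf.types (as the imports elsewhere in this diff suggest):

    from monasca_persister.conf import types

    validator = types.HostAddressPortType()

    print(validator('127.0.0.1:9092'))   # -> '127.0.0.1:9092'
    print(validator('kafka01:9092'))     # hostnames are accepted as well

    try:
        validator('127.0.0.1:99999')     # port out of range
    except ValueError as error:
        print(error)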
@@ -1,39 +0,0 @@
|
||||
# (C) Copyright 2016-2017 Hewlett Packard Enterprise Development LP
|
||||
# Copyright 2017 FUJITSU LIMITED
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
|
||||
# implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
from oslo_config import cfg
|
||||
|
||||
from monasca_persister.conf import types
|
||||
|
||||
zookeeper_opts = [
|
||||
cfg.ListOpt('uri',
|
||||
help='Comma separated list of ZooKeeper instance host:port',
|
||||
default=['127.0.0.1:2181'],
|
||||
item_type=types.HostAddressPortType()),
|
||||
cfg.IntOpt('partition_interval_recheck_seconds',
|
||||
help='Time in seconds between rechecking if a partition is available',
|
||||
default=15)]
|
||||
|
||||
zookeeper_group = cfg.OptGroup(name='zookeeper', title='zookeeper')
|
||||
|
||||
|
||||
def register_opts(conf):
|
||||
conf.register_group(zookeeper_group)
|
||||
conf.register_opts(zookeeper_opts, zookeeper_group)
|
||||
|
||||
|
||||
def list_opts():
|
||||
return zookeeper_group, zookeeper_opts
|
||||
@@ -1,69 +0,0 @@
|
||||
# Copyright 2017 FUJITSU LIMITED
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
|
||||
import sys
|
||||
|
||||
from oslo_config import cfg
|
||||
from oslo_log import log
|
||||
|
||||
from monasca_persister import conf
|
||||
from monasca_persister import version
|
||||
|
||||
CONF = conf.CONF
|
||||
LOG = log.getLogger(__name__)
|
||||
|
||||
_CONF_LOADED = False
|
||||
|
||||
|
||||
def _get_config_files():
|
||||
"""Get the possible configuration files accepted by oslo.config
|
||||
|
||||
This also includes the deprecated ones
|
||||
"""
|
||||
# default files
|
||||
conf_files = cfg.find_config_files(project='monasca',
|
||||
prog='monasca-persister')
|
||||
# deprecated config files (only used if standard config files are not there)
|
||||
if len(conf_files) == 0:
|
||||
old_conf_files = cfg.find_config_files(project='monasca',
|
||||
prog='persister')
|
||||
if len(old_conf_files) > 0:
|
||||
LOG.warning('Found deprecated old location "{}" '
|
||||
'of main configuration file'.format(old_conf_files))
|
||||
conf_files += old_conf_files
|
||||
return conf_files
|
||||
|
||||
|
||||
def parse_args(description='Persists metrics & alarm history in TSDB'):
|
||||
global _CONF_LOADED
|
||||
if _CONF_LOADED:
|
||||
LOG.debug('Configuration has been already loaded')
|
||||
return
|
||||
|
||||
log.set_defaults()
|
||||
log.register_options(CONF)
|
||||
|
||||
CONF(args=sys.argv[1:],
|
||||
project='monasca',
|
||||
version=version.version_str,
|
||||
default_config_files=_get_config_files(),
|
||||
description=description)
|
||||
|
||||
log.setup(CONF,
|
||||
product_name='monasca-persister',
|
||||
version=version.version_str)
|
||||
|
||||
conf.register_opts()
|
||||
|
||||
_CONF_LOADED = True
|
||||
@@ -1,81 +0,0 @@
|
||||
# Copyright 2017 FUJITSU LIMITED
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
|
||||
import re
|
||||
|
||||
from hacking import core
|
||||
|
||||
|
||||
assert_no_xrange_re = re.compile(r"\s*xrange\s*\(")
|
||||
assert_True = re.compile(r".*assertEqual\(True, .*\)")
|
||||
assert_None = re.compile(r".*assertEqual\(None, .*\)")
|
||||
assert_Not_Equal = re.compile(r".*assertNotEqual\(None, .*\)")
|
||||
assert_Is_Not = re.compile(r".*assertIsNot\(None, .*\)")
|
||||
assert_raises_regexp = re.compile(r"assertRaisesRegexp\(")
|
||||
no_log_warn = re.compile(r".*LOG.warn\(.*\)")
|
||||
mutable_default_args = re.compile(r"^\s*def .+\((.+=\{\}|.+=\[\])")
|
||||
|
||||
|
||||
@core.flake8ext
|
||||
def no_mutable_default_args(logical_line):
|
||||
msg = "M001: Method's default argument shouldn't be mutable!"
|
||||
if mutable_default_args.match(logical_line):
|
||||
yield (0, msg)
|
||||
|
||||
|
||||
@core.flake8ext
|
||||
def no_xrange(logical_line):
|
||||
if assert_no_xrange_re.match(logical_line):
|
||||
yield (0, "M002: Do not use xrange().")
|
||||
|
||||
|
||||
@core.flake8ext
|
||||
def validate_assertTrue(logical_line):
|
||||
if re.match(assert_True, logical_line):
|
||||
msg = ("M003: Unit tests should use assertTrue(value) instead"
|
||||
" of using assertEqual(True, value).")
|
||||
yield (0, msg)
|
||||
|
||||
|
||||
@core.flake8ext
|
||||
def validate_assertIsNone(logical_line):
|
||||
if re.match(assert_None, logical_line):
|
||||
msg = ("M004: Unit tests should use assertIsNone(value) instead"
|
||||
" of using assertEqual(None, value).")
|
||||
yield (0, msg)
|
||||
|
||||
|
||||
@core.flake8ext
|
||||
def no_log_warn_check(logical_line):
|
||||
if re.match(no_log_warn, logical_line):
|
||||
msg = ("M005: LOG.warn is deprecated, please use LOG.warning!")
|
||||
yield (0, msg)
|
||||
|
||||
|
||||
@core.flake8ext
|
||||
def validate_assertIsNotNone(logical_line):
|
||||
if re.match(assert_Not_Equal, logical_line) or \
|
||||
re.match(assert_Is_Not, logical_line):
|
||||
msg = ("M006: Unit tests should use assertIsNotNone(value) instead"
|
||||
" of using assertNotEqual(None, value) or"
|
||||
" assertIsNot(None, value).")
|
||||
yield (0, msg)
|
||||
|
||||
|
||||
@core.flake8ext
|
||||
def assert_raisesRegexp(logical_line):
|
||||
res = assert_raises_regexp.search(logical_line)
|
||||
if res:
|
||||
yield (0, "M007: assertRaisesRegex must be used instead "
|
||||
"of assertRaisesRegexp")
|
||||
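Each check above is a flake8 extension of the kind used by OpenStack's hacking plugin: it is handed a logical line and yields (offset, message) tuples for violations. A quick illustration of two of them firing; the import path below is hypothetical, since the diff does not show the module's filename:

    from monasca_persister.hacking import checks  # hypothetical path

    # M001: mutable default argument.
    print(list(checks.no_mutable_default_args("def build(cache={}):")))

    # M005: deprecated LOG.warn call.
    print(list(checks.no_log_warn_check("LOG.warn('disk is full')")))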
@@ -1,35 +0,0 @@
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
from monasca_common.kafka import client_factory
|
||||
|
||||
from monasca_persister.repositories import persister
|
||||
from monasca_persister.repositories import singleton
|
||||
|
||||
|
||||
class ConfluentKafkaPersister(persister.Persister, metaclass=singleton.Singleton):
|
||||
|
||||
def __init__(self, kafka_conf, repository, client_id=""):
|
||||
super(ConfluentKafkaPersister, self).__init__(kafka_conf, repository)
|
||||
self._consumer = client_factory.get_kafka_consumer(
|
||||
kafka_url=kafka_conf.uri,
|
||||
kafka_consumer_group=kafka_conf.group_id,
|
||||
kafka_topic=kafka_conf.topic,
|
||||
client_id=client_id,
|
||||
repartition_callback=ConfluentKafkaPersister.flush,
|
||||
commit_callback=self._flush,
|
||||
max_commit_interval=kafka_conf.max_wait_time_seconds
|
||||
)
|
||||
|
||||
@staticmethod
|
||||
def flush(kafka_consumer, partitions):
|
||||
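# NOTE: the singleton.Singleton metaclass presumably returns the already-created
# persister instance here, so the repartition callback can reach it without
# constructor arguments.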
p = ConfluentKafkaPersister()
|
||||
p._flush()
|
||||
@@ -1,32 +0,0 @@
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
from monasca_common.kafka import client_factory
|
||||
|
||||
from monasca_persister.repositories import persister
|
||||
from monasca_persister.repositories import singleton
|
||||
|
||||
|
||||
class LegacyKafkaPersister(persister.Persister, metaclass=singleton.Singleton):
|
||||
|
||||
def __init__(self, kafka_conf, zookeeper_conf, repository):
|
||||
super(LegacyKafkaPersister, self).__init__(kafka_conf, repository)
|
||||
self._consumer = client_factory.get_kafka_consumer(
|
||||
kafka_url=kafka_conf.uri,
|
||||
kafka_consumer_group=kafka_conf.group_id,
|
||||
kafka_topic=kafka_conf.topic,
|
||||
zookeeper_url=zookeeper_conf.uri,
|
||||
zookeeper_path=kafka_conf.zookeeper_path,
|
||||
use_legacy_client=True,
|
||||
repartition_callback=self._flush,
|
||||
commit_callback=self._flush,
|
||||
max_commit_interval=kafka_conf.max_wait_time_seconds
|
||||
)
|
||||
@@ -1,169 +0,0 @@
|
||||
# (C) Copyright 2014-2017 Hewlett Packard Enterprise Development LP
|
||||
# Copyright 2017 FUJITSU LIMITED
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
|
||||
# implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
"""Persister Module
|
||||
|
||||
The Persister reads metrics and alarms from Kafka and then stores them
|
||||
into either InfluxDB or Cassandra.
|
||||
|
||||
Start the persister as a stand-alone process by running 'persister.py
|
||||
--config-file <config file>'
|
||||
"""
|
||||
import multiprocessing
|
||||
import os
|
||||
import signal
|
||||
import sys
|
||||
import time
|
||||
|
||||
from monasca_common.simport import simport
|
||||
from oslo_config import cfg
|
||||
from oslo_log import log
|
||||
|
||||
from monasca_persister import config
|
||||
from monasca_persister.kafka import confluent_kafka_persister
|
||||
from monasca_persister.kafka import legacy_kafka_persister
|
||||
|
||||
|
||||
LOG = log.getLogger(__name__)
|
||||
|
||||
processors = [] # global list to facilitate clean signal handling
|
||||
exiting = False
|
||||
|
||||
|
||||
def clean_exit(signum, frame=None):
|
||||
"""Exit all processes attempting to finish uncommitted active work before exit.
|
||||
Can be called on an OS signal or when the ZooKeeper connection is lost.
|
||||
"""
|
||||
global exiting
|
||||
if exiting:
|
||||
# Since this is set up as a handler for SIGCHLD when this kills one
|
||||
# child it gets another signal, the global exiting avoids this running
|
||||
# multiple times.
|
||||
LOG.debug('Exit in progress clean_exit received additional signal %s' % signum)
|
||||
return
|
||||
|
||||
LOG.info('Received signal %s, beginning graceful shutdown.' % signum)
|
||||
exiting = True
|
||||
wait_for_exit = False
|
||||
|
||||
for process in processors:
|
||||
try:
|
||||
if process.is_alive():
|
||||
# Send SIGTERM, which each child process attempts to handle so it can shut down cleanly
|
||||
process.terminate()
|
||||
wait_for_exit = True
|
||||
except Exception: # nosec
|
||||
# There is really nothing to do if the kill fails, so just go on.
|
||||
# The # nosec keeps bandit from reporting this as a security issue
|
||||
pass
|
||||
|
||||
# wait for a couple seconds to give the subprocesses a chance to shut down correctly.
|
||||
if wait_for_exit:
|
||||
time.sleep(2)
|
||||
|
||||
# Kill everything, that didn't already die
|
||||
for child in multiprocessing.active_children():
|
||||
LOG.debug('Killing pid %s' % child.pid)
|
||||
try:
|
||||
os.kill(child.pid, signal.SIGKILL)
|
||||
except Exception: # nosec
|
||||
# There is really nothing to do if the kill fails, so just go on.
|
||||
# The # nosec keeps bandit from reporting this as a security issue
|
||||
pass
|
||||
|
||||
if signum == signal.SIGTERM:
|
||||
sys.exit(0)
|
||||
|
||||
sys.exit(signum)
|
||||
|
||||
|
||||
def start_process(repository, kafka_config):
|
||||
LOG.info("start process: {}".format(respository))
|
||||
if kafka_config.legacy_kafka_client_enabled:
|
||||
m_persister = legacy_kafka_persister.LegacyKafkaPersister(
|
||||
kafka_config, cfg.CONF.zookeeper, repository)
|
||||
else:
|
||||
m_persister = confluent_kafka_persister.ConfluentKafkaPersister(
|
||||
kafka_config, repository)
|
||||
m_persister.run()
|
||||
|
||||
|
||||
def prepare_processes(conf, repo_driver):
|
||||
if conf.num_processors > 0:
|
||||
repository = simport.load(repo_driver)
|
||||
for proc in range(0, conf.num_processors):
|
||||
processors.append(multiprocessing.Process(
|
||||
target=start_process, args=(repository, conf)))
|
||||
else:
|
||||
LOG.warning("Number of processors (num_processors) is {}".format(
|
||||
conf.num_processors))
|
||||
|
||||
|
||||
def main():
|
||||
"""Start persister."""
|
||||
|
||||
config.parse_args()
|
||||
|
||||
# Add processors for metrics topic
|
||||
if cfg.CONF.kafka_metrics.enabled:
|
||||
prepare_processes(cfg.CONF.kafka_metrics,
|
||||
cfg.CONF.repositories.metrics_driver)
|
||||
# Add processors for alarm history topic
|
||||
if cfg.CONF.kafka_alarm_history.enabled:
|
||||
prepare_processes(cfg.CONF.kafka_alarm_history,
|
||||
cfg.CONF.repositories.alarm_state_history_driver)
|
||||
# Add processors for events topic
|
||||
if cfg.CONF.kafka_events.enabled:
|
||||
prepare_processes(cfg.CONF.kafka_events,
|
||||
cfg.CONF.repositories.events_driver)
|
||||
|
||||
# Start
|
||||
try:
|
||||
LOG.info(r'''
|
||||
|
||||
_____
|
||||
/ \ ____ ____ _____ ______ ____ _____
|
||||
/ \ / \ / _ \ / \\\__ \ / ___// ___\\\__ \\
|
||||
/ Y ( <_> ) | \/ __ \_\___ \\ \___ / __ \\_
|
||||
\____|__ /\____/|___| (____ /____ >\___ >____ /
|
||||
\/ \/ \/ \/ \/ \/
|
||||
__________ .__ __
|
||||
\______ \ ___________ _____|__| _______/ |_ ___________
|
||||
| ___// __ \_ __ \/ ___/ |/ ___/\ __\/ __ \_ __ \\
|
||||
| | \ ___/| | \/\___ \| |\___ \ | | \ ___/| | \/
|
||||
|____| \___ >__| /____ >__/____ > |__| \___ >__|
|
||||
\/ \/ \/ \/
|
||||
|
||||
''')
|
||||
for process in processors:
|
||||
process.start()
|
||||
|
||||
# The signal handlers must be added after the processes start otherwise
|
||||
# they run on all processes
|
||||
signal.signal(signal.SIGCHLD, clean_exit)
|
||||
signal.signal(signal.SIGINT, clean_exit)
|
||||
signal.signal(signal.SIGTERM, clean_exit)
|
||||
|
||||
while True:
|
||||
time.sleep(10)
|
||||
|
||||
except Exception:
|
||||
LOG.exception('Error! Exiting.')
|
||||
clean_exit(signal.SIGKILL)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
sys.exit(main())
|
||||
@@ -1,29 +0,0 @@
|
||||
# (C) Copyright 2016 Hewlett Packard Enterprise Development Company LP
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
|
||||
# implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
import abc
|
||||
|
||||
|
||||
class AbstractRepository(object, metaclass=abc.ABCMeta):
|
||||
|
||||
def __init__(self):
|
||||
super(AbstractRepository, self).__init__()
|
||||
|
||||
@abc.abstractmethod
|
||||
def process_message(self, message):
|
||||
pass
|
||||
|
||||
@abc.abstractmethod
|
||||
def write_batch(self, data_points):
|
||||
pass
|
||||
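AbstractRepository is the contract each storage backend implements: process_message turns a consumed Kafka message into a data point and write_batch flushes the accumulated batch. A toy in-memory subclass, purely to illustrate the interface:

    from monasca_persister.repositories import abstract_repository


    class InMemoryRepository(abstract_repository.AbstractRepository):
        """Illustrative backend that keeps data points in a Python list."""

        def __init__(self):
            super(InMemoryRepository, self).__init__()
            self.storage = []

        def process_message(self, message):
            # Real repositories decode and transform the Kafka payload here;
            # this sketch passes it through unchanged.
            return message

        def write_batch(self, data_points):
            self.storage.extend(data_points)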
@@ -1,36 +0,0 @@
|
||||
# (C) Copyright 2016 Hewlett Packard Enterprise Development Company LP
|
||||
# (C) Copyright 2017 SUSE LLC
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
|
||||
# implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
import abc
|
||||
from oslo_config import cfg
|
||||
|
||||
from monasca_persister.repositories import abstract_repository
|
||||
from monasca_persister.repositories.cassandra import connection_util
|
||||
from monasca_persister.repositories import data_points
|
||||
|
||||
conf = cfg.CONF
|
||||
|
||||
|
||||
class AbstractCassandraRepository(abstract_repository.AbstractRepository, metaclass=abc.ABCMeta):
|
||||
def __init__(self):
|
||||
super(AbstractCassandraRepository, self).__init__()
|
||||
|
||||
self._cluster = connection_util.create_cluster()
|
||||
self._session = connection_util.create_session(self._cluster)
|
||||
self._retention = conf.cassandra.retention_policy * 24 * 3600
|
||||
self._cache_size = conf.cassandra.max_definition_cache_size
|
||||
self._max_batches = conf.cassandra.max_batches
|
||||
self.data_points_class = data_points.DataPointsAsList
|
||||