Merge "Remove configuration management for ELK stack"
This commit is contained in:
commit 7b09f7baab

@@ -1,293 +0,0 @@
:title: Logstash

.. _logstash:

Logstash
########

Logstash is a high-performance indexing and search engine for logs.

At a Glance
===========

:Hosts:
  * http://logstash.openstack.org
  * logstash-worker\*.openstack.org
  * elasticsearch\*.openstack.org
:Puppet:
  * https://opendev.org/opendev/puppet-logstash
  * :git_file:`modules/openstack_project/manifests/logstash.pp`
  * :git_file:`modules/openstack_project/manifests/logstash_worker.pp`
  * :git_file:`modules/openstack_project/manifests/elasticsearch.pp`
:Configuration:
  * :git_file:`modules/openstack_project/files/logstash`
  * :git_file:`modules/openstack_project/templates/logstash`
  * `submit-logstash-jobs defaults`_
:Projects:
  * http://logstash.net/
  * http://kibana.org/
  * http://www.elasticsearch.org/
  * http://crm114.sourceforge.net/
:Bugs:
  * https://storyboard.openstack.org/#!/project/748
  * https://logstash.jira.com/secure/Dashboard.jspa
  * https://github.com/rashidkpc/Kibana/issues
  * https://github.com/elasticsearch/elasticsearch/issues

Overview
========

Logs from Zuul test runs are sent to Logstash where they are indexed
and stored. Logstash facilitates reviewing logs from multiple sources
in a single test run, searching for errors or particular events within
a test run, and searching for log event trends across test runs.

System Architecture
===================

There are four major layers in our Logstash setup.

1. Submit Logstash Jobs.
   The `logs post-playbook`_ in the Zuul ``base`` job submits logs
   defined in the `submit-logstash-jobs defaults`_ to a Logstash
   indexer.
2. Logstash Indexer.
   Reads these log events from the log pusher, filters them to remove
   unwanted lines, collapses multiline events together, and parses
   useful information out of the events before shipping them to
   ElasticSearch for storage and indexing.
3. ElasticSearch.
   Provides log storage, indexing, and search.
4. Kibana.
   A Logstash-oriented web client for ElasticSearch. You can perform
   queries on your Logstash logs in ElasticSearch through Kibana using
   the Lucene query language.

Each layer scales horizontally. As the number of logs grows we can add
more log pushers, more Logstash indexers, and more ElasticSearch nodes.
Currently we have multiple Logstash worker nodes that pair a log pusher
with a Logstash indexer. We did this because each Logstash process can
only dedicate a single thread to filtering log events, which becomes a
bottleneck very quickly. This looks something like:

::

    zuul post-logs playbook
              |
              |
         gearman-client
            /  |  \
           /   |   \
     gearman  gearman  gearman
     worker1  worker2  worker3
        |        |        |
    logstash  logstash  logstash
    indexer1  indexer2  indexer3
         \       |       /
          \      |      /
           elasticsearch
              cluster
                 |
                 |
               kibana

Log Pusher
----------

This is an Ansible module in the `submit-log-processor-jobs role`_. It
submits Gearman jobs to push log files into Logstash.

Log pushing looks like this:

* Zuul runs the post-playbook on job completion.
* Using info in the Gearman job, log files are retrieved.
* Log files are processed then shipped to Logstash.

Using Gearman allows us to scale the number of log pushers
horizontally: it is as simple as adding another process that talks to
the Gearman server.

If you are interested in the technical details, the source of these
scripts can be found at

* https://opendev.org/opendev/puppet-log_processor/src/branch/master/files/log-gearman-client.py
* https://opendev.org/opendev/puppet-log_processor/src/branch/master/files/log-gearman-worker.py

Logstash
--------

Logstash does the heavy lifting of squashing all of our log lines into
events with a common format. It reads the JSON log events from the log
pusher connected to it, deletes events we don't want, parses log lines
to set the timestamp, message, and other fields for the event, then
ships these processed events off to ElasticSearch where they are stored
and made queryable.

At a high level Logstash takes:

::

  {
    "fields": {
      "build_name": "gate-foo",
      "build_number": "10",
      "event_message": "2013-05-31T17:31:39.113 DEBUG Something happened"
    }
  }

And turns that into:

::

  {
    "fields": {
      "build_name": "gate-foo",
      "build_number": "10",
      "loglevel": "DEBUG"
    },
    "@message": "Something happened",
    "@timestamp": "2013-05-31T17:31:39.113Z"
  }

It flattens each log line into something that looks very much like all
of the other events, regardless of the source log line format. This
makes querying your logs for lines from a specific build that failed
between two timestamps with specific message content very easy: you
don't need to write complicated greps, you query against a schema.
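As a rough illustration of the flattening step, here is a minimal
sketch in Python. This is not the actual Logstash filter; the line
format regex is an assumption covering only the single example above:

```python
import re

# Assumed line format for this sketch: "<ISO timestamp> <LEVEL> <message>".
LINE_RE = re.compile(
    r"^(?P<ts>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d+)"
    r"\s+(?P<level>[A-Z]+)\s+(?P<msg>.*)$"
)

def flatten(event):
    """Turn a raw pushed event into a Logstash-style flattened event."""
    fields = dict(event["fields"])
    m = LINE_RE.match(fields.pop("event_message"))
    fields["loglevel"] = m.group("level")
    return {
        "fields": fields,
        "@message": m.group("msg"),
        # Logstash normalizes timestamps to UTC with a trailing Z.
        "@timestamp": m.group("ts") + "Z",
    }

raw = {"fields": {"build_name": "gate-foo", "build_number": "10",
                  "event_message": "2013-05-31T17:31:39.113 DEBUG Something happened"}}
print(flatten(raw))
```

The real filters handle many line formats (and multiline collapsing)
via the grok patterns in openstack-filters.conf.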

The config file that tells Logstash how to do this flattening can be
found at
https://opendev.org/openstack/logstash-filters/src/branch/master/filters/openstack-filters.conf

This works via the tags that are associated with a given message. The
tags in openstack-filters.conf tell Logstash how to parse a given
file's messages, based on the file's message format.

When adding a new file to be indexed to
https://opendev.org/opendev/base-jobs/src/branch/master/roles/submit-logstash-jobs/defaults/main.yaml
at least one tag from openstack-filters.conf should be associated with
the new file. One can expect to see ``{%logmessage%}`` instead of
actual message data if indexing is not working properly.

In the event a new file's format is not covered, a patch for
openstack-filters.conf should be submitted with an appropriate parsing
pattern.

ElasticSearch
-------------

ElasticSearch is basically a REST API layer for Lucene. It provides
the storage and search engine for Logstash. It scales horizontally and
loves it when you give it more memory. Currently we run a multi-node
cluster on large VMs to give ElasticSearch both memory and disk space.
Per index (Logstash creates one index per day) we have N+1 replica
redundancy to distribute disk utilization and provide high availability.
Each replica is broken into multiple shards, providing increased
indexing and search throughput, as each shard is essentially a valid
mini index.

To check on the cluster health, run this command on any es.* node::

  curl -XGET 'http://localhost:9200/_cluster/health?pretty=true'
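The health endpoint returns a JSON document whose ``status`` field is
the headline value. A small sketch of how that field is commonly read
(the response body below is a hand-written sample, not output from our
cluster):

```python
import json

# Illustrative sample of the kind of body _cluster/health returns.
sample = '''
{"cluster_name": "openstack-logstash", "status": "yellow",
 "number_of_nodes": 6, "active_shards": 40,
 "unassigned_shards": 5}
'''

def summarize(body):
    """Render a one-line human-readable health summary."""
    health = json.loads(body)
    # green: everything allocated; yellow: replicas missing;
    # red: some primary shards are unallocated.
    meanings = {
        "green": "all primary and replica shards allocated",
        "yellow": "all primaries allocated, some replicas are not",
        "red": "some primary shards are unallocated",
    }
    return "{}: {} ({} unassigned shards)".format(
        health["status"], meanings[health["status"]],
        health["unassigned_shards"])

print(summarize(sample))
```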

Kibana
------

Kibana is a Ruby app sitting behind Apache that provides a nice web UI
for querying Logstash events stored in ElasticSearch. Our install can
be reached at http://logstash.openstack.org. See :ref:`query-logstash`
for more info on using Kibana to perform queries.

simpleproxy
-----------

Simpleproxy is a simple TCP proxy which allows forwarding TCP
connections from one host to another. We use it to forward MySQL
traffic from a publicly accessible host to the Trove instance running
the subunit2sql MySQL DB. This allows public access to the data in the
database through the host logstash.openstack.org.

.. _query-logstash:

Querying Logstash
=================

Hop on over to http://logstash.openstack.org and by default you get the
last 15 minutes of everything Logstash knows about, in chunks of 100.
We run a lot of tests, but it is possible no logs have come in over the
last 15 minutes; change the dropdown in the top left from ``Last 15m``
to ``Last 60m`` to get a better window on the logs. At this point you
should see a list of logs. If you click on a log event it will expand
and show you all of the fields associated with that event and their
values. (Note: Chromium and Kibana seem to have trouble with this at
times and some fields end up without values; use Firefox if this
happens.) You can search based on all of these fields, and if you click
the magnifying glass next to a field in the expanded event view it will
add that field and value to your search. This is a good way of refining
searches without a lot of typing.

The above is good info for poking around in the Logstash logs, but say
one of your changes has a failing test and you want to know why. We
can jumpstart the refining process with a simple query.

``@fields.build_change:"$FAILING_CHANGE" AND @fields.build_patchset:"$FAILING_PATCHSET" AND @fields.build_name:"$FAILING_BUILD_NAME" AND @fields.build_number:"$FAILING_BUILD_NUMBER"``

This will show you all logs available from the patchset and build pair
that failed. Chances are that this is still a significant number of
logs and you will want to do more filtering. You can add more filters
to the query using ``AND`` and ``OR``, and parentheses can be used to
group sections of the query. Potential additions to the above query
might be

* ``AND @fields.filename:"logs/syslog.txt"`` to get syslog events.
* ``AND @fields.filename:"logs/screen-n-api.txt"`` to get Nova API events.
* ``AND @fields.loglevel:"ERROR"`` to get ERROR level events.
* ``AND @message:"error"`` to get events with error in their message.

and so on.
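Queries of this shape can also be assembled programmatically. A small
sketch (the helper name is made up for this example; field names are
taken from the queries above) that joins ``@fields.*`` clauses with
``AND``:

```python
def lucene_and(**fields):
    """Build a Lucene query string from field/value pairs, adding the
    '@fields.' prefix and quoting values as in the examples above.
    Keys are sorted so the output is deterministic."""
    clauses = ['@fields.{}:"{}"'.format(k, v)
               for k, v in sorted(fields.items())]
    return " AND ".join(clauses)

query = lucene_and(build_change="12345", build_patchset="3",
                   loglevel="ERROR")
print(query)
```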

General query tips:

* Don't search ``All time``. ElasticSearch is bad at trying to find all
  the things it ever knew about. Give it a window of time to look
  through. You can use the presets in the dropdown to select a window or
  use the ``foo`` to ``bar`` boxes above the frequency graph.
* Only the @message field can have fuzzy searches performed on it. Other
  fields require specific information.
* This system is growing fast and may not always keep up with the load.
  Be patient. If expected logs do not show up immediately after the
  Zuul job completes, wait a few minutes.

crm114
======

In an effort to assist with automated failure detection, the infra team
has started leveraging crm114 to classify and analyze the messages
stored by Logstash.

The tool utilizes a statistical approach to classifying data and is
frequently used as an email spam detector. For Logstash data, the idea
is to flag those log entries that appear only in failing runs and not
in passing ones, which should be useful in pinpointing what caused the
failures.

In the OpenStack Logstash system, crm114 attaches an ``error_pr``
attribute to all indexed entries. Values from -1000.00 to -10.00 should
be considered sufficient to get all potential errors as identified by
the program. Used in a Kibana query, it would be structured like this:

::

  error_pr:["-1000.0" TO "-10.0"]
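That range query keeps only entries whose score falls inside the
inclusive interval. A sketch of the same selection in Python, over
hand-made sample entries (the scores are illustrative, not real crm114
output):

```python
def in_error_range(entry, lo=-1000.0, hi=-10.0):
    """Mimic the error_pr:["lo" TO "hi"] range filter: inclusive
    bounds; entries without a score default to 0.0 and are excluded."""
    return lo <= entry.get("error_pr", 0.0) <= hi

entries = [
    {"@message": "Something happened", "error_pr": -0.5},
    {"@message": "Traceback (most recent call last):", "error_pr": -250.0},
]
# Keep only the entries crm114 scored as likely errors.
suspects = [e for e in entries if in_error_range(e)]
print(len(suspects))  # → 1
```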

This is still an early effort and additional tuning and refinement
should be expected. Should the crm114 settings need to be tuned or
expanded, a patch may be submitted for this file, which controls the
process:
https://opendev.org/opendev/puppet-log_processor/src/branch/master/files/classify-log.crm

.. _logs post-playbook: https://opendev.org/opendev/base-jobs/src/branch/master/playbooks/base/post-logs.yaml
.. _submit-logstash-jobs defaults: https://opendev.org/opendev/base-jobs/src/branch/master/roles/submit-logstash-jobs/defaults/main.yaml
.. _submit-log-processor-jobs role: https://opendev.org/opendev/base-jobs/src/branch/master/roles/submit-log-processor-jobs

@@ -16,7 +16,6 @@ Major Systems
    grafyaml
    keycloak
    zuul
-   logstash
    elastic-recheck
    devstack-gate
    nodepool
@@ -14,12 +14,6 @@ cacti_hosts:
   - bridge.openstack.org
   - cacti.openstack.org
   - eavesdrop01.opendev.org
-  - elasticsearch02.openstack.org
-  - elasticsearch03.openstack.org
-  - elasticsearch04.openstack.org
-  - elasticsearch05.openstack.org
-  - elasticsearch06.openstack.org
-  - elasticsearch07.openstack.org
   - ethercalc02.openstack.org
   - etherpad01.opendev.org
   - gitea-lb01.opendev.org
@@ -39,27 +33,6 @@ cacti_hosts:
   - kdc04.openstack.org
   - keycloak01.opendev.org
   - lists.openstack.org
-  - logstash-worker01.openstack.org
-  - logstash-worker02.openstack.org
-  - logstash-worker03.openstack.org
-  - logstash-worker04.openstack.org
-  - logstash-worker05.openstack.org
-  - logstash-worker06.openstack.org
-  - logstash-worker07.openstack.org
-  - logstash-worker08.openstack.org
-  - logstash-worker09.openstack.org
-  - logstash-worker10.openstack.org
-  - logstash-worker11.openstack.org
-  - logstash-worker12.openstack.org
-  - logstash-worker13.openstack.org
-  - logstash-worker14.openstack.org
-  - logstash-worker15.openstack.org
-  - logstash-worker16.openstack.org
-  - logstash-worker17.openstack.org
-  - logstash-worker18.openstack.org
-  - logstash-worker19.openstack.org
-  - logstash-worker20.openstack.org
-  - logstash.openstack.org
   - nb01.opendev.org
   - nb02.opendev.org
   - nb03.opendev.org
@@ -91,48 +91,6 @@ all:
         region_name: DFW
       public_v4: 104.239.144.232
       public_v6: 2001:4800:7818:104:be76:4eff:fe04:46c8
-    elasticsearch02.openstack.org:
-      ansible_host: 104.130.159.19
-      location:
-        cloud: openstackci-rax
-        region_name: DFW
-      public_v4: 104.130.159.19
-      public_v6: 2001:4800:7818:102:be76:4eff:fe05:40b
-    elasticsearch03.openstack.org:
-      ansible_host: 104.130.246.141
-      location:
-        cloud: openstackci-rax
-        region_name: DFW
-      public_v4: 104.130.246.141
-      public_v6: 2001:4800:7819:103:be76:4eff:fe05:f26
-    elasticsearch04.openstack.org:
-      ansible_host: 104.130.246.138
-      location:
-        cloud: openstackci-rax
-        region_name: DFW
-      public_v4: 104.130.246.138
-      public_v6: 2001:4800:7819:103:be76:4eff:fe02:3042
-    elasticsearch05.openstack.org:
-      ansible_host: 104.130.159.15
-      location:
-        cloud: openstackci-rax
-        region_name: DFW
-      public_v4: 104.130.159.15
-      public_v6: 2001:4800:7818:102:be76:4eff:fe04:55b5
-    elasticsearch06.openstack.org:
-      ansible_host: 104.130.246.111
-      location:
-        cloud: openstackci-rax
-        region_name: DFW
-      public_v4: 104.130.246.111
-      public_v6: 2001:4800:7819:103:be76:4eff:fe04:b9d7
-    elasticsearch07.openstack.org:
-      ansible_host: 104.130.246.44
-      location:
-        cloud: openstackci-rax
-        region_name: DFW
-      public_v4: 104.130.246.44
-      public_v6: 2001:4800:7819:103:be76:4eff:fe04:35bb
     ethercalc02.openstack.org:
       ansible_host: 162.242.144.125
       location:
@@ -279,153 +237,6 @@ all:
         region_name: DFW
       public_v4: 50.56.173.222
       public_v6: 2001:4800:780e:510:3bc3:d7f6:ff04:b736
-    logstash-worker01.openstack.org:
-      ansible_host: 172.99.116.137
-      location:
-        cloud: openstackci-rax
-        region_name: DFW
-      public_v4: 172.99.116.137
-      public_v6: 2001:4800:7821:105:be76:4eff:fe04:b9e7
-    logstash-worker02.openstack.org:
-      ansible_host: 23.253.253.66
-      location:
-        cloud: openstackci-rax
-        region_name: DFW
-      public_v4: 23.253.253.66
-      public_v6: 2001:4800:7817:104:be76:4eff:fe04:aee1
-    logstash-worker03.openstack.org:
-      ansible_host: 23.253.242.246
-      location:
-        cloud: openstackci-rax
-        region_name: DFW
-      public_v4: 23.253.242.246
-      public_v6: 2001:4800:7817:104:be76:4eff:fe04:81e6
-    logstash-worker04.openstack.org:
-      ansible_host: 166.78.47.27
-      location:
-        cloud: openstackci-rax
-        region_name: DFW
-      public_v4: 166.78.47.27
-      public_v6: 2001:4800:7817:101:be76:4eff:fe04:46ec
-    logstash-worker05.openstack.org:
-      ansible_host: 23.253.100.127
-      location:
-        cloud: openstackci-rax
-        region_name: DFW
-      public_v4: 23.253.100.127
-      public_v6: 2001:4800:7817:101:be76:4eff:fe04:3b27
-    logstash-worker06.openstack.org:
-      ansible_host: 23.253.100.168
-      location:
-        cloud: openstackci-rax
-        region_name: DFW
-      public_v4: 23.253.100.168
-      public_v6: 2001:4800:7817:101:be76:4eff:fe04:5e21
-    logstash-worker07.openstack.org:
-      ansible_host: 172.99.116.187
-      location:
-        cloud: openstackci-rax
-        region_name: DFW
-      public_v4: 172.99.116.187
-      public_v6: 2001:4800:7821:105:be76:4eff:fe04:bca2
-    logstash-worker08.openstack.org:
-      ansible_host: 166.78.47.61
-      location:
-        cloud: openstackci-rax
-        region_name: DFW
-      public_v4: 166.78.47.61
-      public_v6: 2001:4800:7817:101:be76:4eff:fe04:432c
-    logstash-worker09.openstack.org:
-      ansible_host: 23.253.119.179
-      location:
-        cloud: openstackci-rax
-        region_name: DFW
-      public_v4: 23.253.119.179
-      public_v6: 2001:4800:7817:103:be76:4eff:fe04:38d1
-    logstash-worker10.openstack.org:
-      ansible_host: 172.99.116.111
-      location:
-        cloud: openstackci-rax
-        region_name: DFW
-      public_v4: 172.99.116.111
-      public_v6: 2001:4800:7821:105:be76:4eff:fe04:3a04
-    logstash-worker11.openstack.org:
-      ansible_host: 166.78.47.86
-      location:
-        cloud: openstackci-rax
-        region_name: DFW
-      public_v4: 166.78.47.86
-      public_v6: 2001:4800:7817:101:be76:4eff:fe05:3e1
-    logstash-worker12.openstack.org:
-      ansible_host: 166.78.174.249
-      location:
-        cloud: openstackci-rax
-        region_name: DFW
-      public_v4: 166.78.174.249
-      public_v6: 2001:4800:7815:105:be76:4eff:fe01:61e4
-    logstash-worker13.openstack.org:
-      ansible_host: 23.253.230.28
-      location:
-        cloud: openstackci-rax
-        region_name: DFW
-      public_v4: 23.253.230.28
-      public_v6: 2001:4800:7817:103:be76:4eff:fe04:6ff1
-    logstash-worker14.openstack.org:
-      ansible_host: 172.99.116.170
-      location:
-        cloud: openstackci-rax
-        region_name: DFW
-      public_v4: 172.99.116.170
-      public_v6: 2001:4800:7821:105:be76:4eff:fe04:4a4a
-    logstash-worker15.openstack.org:
-      ansible_host: 23.253.242.14
-      location:
-        cloud: openstackci-rax
-        region_name: DFW
-      public_v4: 23.253.242.14
-      public_v6: 2001:4800:7817:104:be76:4eff:fe04:829b
-    logstash-worker16.openstack.org:
-      ansible_host: 23.253.230.58
-      location:
-        cloud: openstackci-rax
-        region_name: DFW
-      public_v4: 23.253.230.58
-      public_v6: 2001:4800:7817:103:be76:4eff:fe03:5f45
-    logstash-worker17.openstack.org:
-      ansible_host: 172.99.117.120
-      location:
-        cloud: openstackci-rax
-        region_name: DFW
-      public_v4: 172.99.117.120
-      public_v6: 2001:4800:7821:105:be76:4eff:fe04:6173
-    logstash-worker18.openstack.org:
-      ansible_host: 172.99.116.203
-      location:
-        cloud: openstackci-rax
-        region_name: DFW
-      public_v4: 172.99.116.203
-      public_v6: 2001:4800:7821:105:be76:4eff:fe05:435
-    logstash-worker19.openstack.org:
-      ansible_host: 166.78.47.173
-      location:
-        cloud: openstackci-rax
-        region_name: DFW
-      public_v4: 166.78.47.173
-      public_v6: 2001:4800:7817:101:be76:4eff:fe03:a5ec
-    logstash-worker20.openstack.org:
-      ansible_host: 23.253.76.170
-      location:
-        cloud: openstackci-rax
-        region_name: DFW
-      public_v4: 23.253.76.170
-      public_v6: 2001:4800:7815:105:be76:4eff:fe04:59a6
-    logstash01.openstack.org:
-      ansible_host: 104.239.141.90
-      location:
-        cloud: openstackci-rax
-        region_name: DFW
-      public_v4: 104.239.141.90
-      public_v6: 2001:4800:7819:104:be76:4eff:fe04:935b
     meetpad01.opendev.org:
       ansible_host: 104.239.240.194
       location:
@@ -1,4 +0,0 @@
-iptables_extra_allowed_groups:
-  - {'protocol': 'tcp', 'port': '9200:9400', 'group': 'elasticsearch'}
-  - {'protocol': 'tcp', 'port': '9200:9400', 'group': 'logstash'}
-  - {'protocol': 'tcp', 'port': '9200:9400', 'group': 'logstash-worker'}
@@ -8,7 +8,6 @@ iptables_extra_allowed_hosts:

 iptables_extra_allowed_groups:
   - {'protocol': 'udp', 'port': '8125', 'group': 'mirror-update'}
-  - {'protocol': 'udp', 'port': '8125', 'group': 'logstash'}
   - {'protocol': 'udp', 'port': '8125', 'group': 'nodepool'}
   - {'protocol': 'udp', 'port': '8125', 'group': 'zookeeper'}
   - {'protocol': 'udp', 'port': '8125', 'group': 'zuul'}
@@ -8,7 +8,6 @@ iptables_extra_allowed_hosts:

 iptables_extra_allowed_groups:
   - {'protocol': 'udp', 'port': '8125', 'group': 'mirror-update'}
-  - {'protocol': 'udp', 'port': '8125', 'group': 'logstash'}
   - {'protocol': 'udp', 'port': '8125', 'group': 'nodepool'}
   - {'protocol': 'udp', 'port': '8125', 'group': 'zookeeper'}
   - {'protocol': 'udp', 'port': '8125', 'group': 'zuul'}
@@ -1,6 +0,0 @@
-iptables_extra_public_tcp_ports:
-  - 80
-  - 3306
-iptables_extra_allowed_groups:
-  - {'protocol': 'tcp', 'port': '4730', 'group': 'logstash-worker'}
-  - {'protocol': 'tcp', 'port': '4730', 'group': 'zuul-executor'}
@@ -52,7 +52,6 @@ groups:
     - adns*.opendev.org
     - ns*.opendev.org
   eavesdrop: eavesdrop[0-9]*.opendev.org
-  elasticsearch: elasticsearch[0-9]*.open*.org
   ethercalc: ethercalc*.open*.org
   etherpad: etherpad[0-9]*.open*.org
   gitea:
@@ -105,10 +104,6 @@ groups:
     - storyboard[0-9]*.opendev.org
     - translate[0-9]*.open*.org
     - zuul[0-9]*.opendev.org
-  logstash:
-    - logstash[0-9]*.open*.org
-  logstash-worker:
-    - logstash-worker[0-9]*.open*.org
   mailman:
     - lists*.katacontainers.io
     - lists*.open*.org
@@ -131,11 +126,8 @@ groups:
     - paste[0-9]*.opendev.org
   puppet:
     - cacti[0-9]*.open*.org
-    - elasticsearch[0-9]*.open*.org
     - ethercalc[0-9]*.open*.org
     - health[0-9]*.openstack.org
-    - logstash-worker[0-9]*.open*.org
-    - logstash[0-9]*.open*.org
     - status*.open*.org
     - storyboard-dev[0-9]*.opendev.org
     - storyboard[0-9]*.opendev.org
@@ -143,11 +135,8 @@ groups:
     - translate[0-9]*.open*.org
   puppet4:
     - cacti[0-9]*.open*.org
-    - elasticsearch[0-9]*.open*.org
     - ethercalc[0-9]*.open*.org
     - health[0-9]*.openstack.org
-    - logstash-worker[0-9]*.open*.org
-    - logstash[0-9]*.open*.org
     - status*.open*.org
     - storyboard[0-9]*.opendev.org
     - storyboard-dev[0-9]*.opendev.org
@@ -20,66 +20,6 @@ node /^ethercalc\d+\.open.*\.org$/ {
   }
 }

-# Node-OS: xenial
-node /^logstash\d*\.open.*\.org$/ {
-  class { 'openstack_project::server': }
-
-  class { 'openstack_project::logstash':
-    discover_nodes      => [
-      'elasticsearch03.openstack.org:9200',
-      'elasticsearch04.openstack.org:9200',
-      'elasticsearch05.openstack.org:9200',
-      'elasticsearch06.openstack.org:9200',
-      'elasticsearch07.openstack.org:9200',
-      'elasticsearch02.openstack.org:9200',
-    ],
-    subunit2sql_db_host => hiera('subunit2sql_db_host', ''),
-    subunit2sql_db_pass => hiera('subunit2sql_db_password', ''),
-  }
-}
-
-# Node-OS: xenial
-node /^logstash-worker\d+\.open.*\.org$/ {
-  $group = 'logstash-worker'
-
-  $elasticsearch_nodes = [
-    'elasticsearch02.openstack.org',
-    'elasticsearch03.openstack.org',
-    'elasticsearch04.openstack.org',
-    'elasticsearch05.openstack.org',
-    'elasticsearch06.openstack.org',
-    'elasticsearch07.openstack.org',
-  ]
-
-  class { 'openstack_project::server': }
-
-  class { 'openstack_project::logstash_worker':
-    discover_node         => 'elasticsearch03.openstack.org',
-    enable_mqtt           => false,
-    mqtt_password         => hiera('mqtt_service_user_password'),
-    mqtt_ca_cert_contents => hiera('mosquitto_tls_ca_file'),
-  }
-}
-
-# Node-OS: xenial
-node /^elasticsearch\d+\.open.*\.org$/ {
-  $group = "elasticsearch"
-
-  $elasticsearch_nodes = [
-    'elasticsearch02.openstack.org',
-    'elasticsearch03.openstack.org',
-    'elasticsearch04.openstack.org',
-    'elasticsearch05.openstack.org',
-    'elasticsearch06.openstack.org',
-    'elasticsearch07.openstack.org',
-  ]
-
-  class { 'openstack_project::server': }
-  class { 'openstack_project::elasticsearch_node':
-    discover_nodes => $elasticsearch_nodes,
-  }
-}
-
 # A machine to run Storyboard
 # Node-OS: xenial
 node /^storyboard\d+\.opendev\.org$/ {
@ -58,15 +58,12 @@ SOURCE_MODULES["https://github.com/voxpupuli/puppet-nodejs"]="v2.3.0"
 # Please keep sorted
 INTEGRATION_MODULES["$OPENSTACK_GIT_ROOT/opendev/puppet-bup"]="origin/master"
 INTEGRATION_MODULES["$OPENSTACK_GIT_ROOT/opendev/puppet-elastic_recheck"]="origin/master"
-INTEGRATION_MODULES["$OPENSTACK_GIT_ROOT/opendev/puppet-elasticsearch"]="origin/master"
 INTEGRATION_MODULES["$OPENSTACK_GIT_ROOT/opendev/puppet-ethercalc"]="origin/master"
 INTEGRATION_MODULES["$OPENSTACK_GIT_ROOT/opendev/puppet-httpd"]="origin/master"
 # Storyboard and translate use the jeepyb module
 INTEGRATION_MODULES["$OPENSTACK_GIT_ROOT/opendev/puppet-jeepyb"]="origin/master"
 INTEGRATION_MODULES["$OPENSTACK_GIT_ROOT/opendev/puppet-kibana"]="origin/master"
-INTEGRATION_MODULES["$OPENSTACK_GIT_ROOT/opendev/puppet-log_processor"]="origin/master"
 INTEGRATION_MODULES["$OPENSTACK_GIT_ROOT/opendev/puppet-logrotate"]="origin/master"
-INTEGRATION_MODULES["$OPENSTACK_GIT_ROOT/opendev/puppet-logstash"]="origin/master"
 INTEGRATION_MODULES["$OPENSTACK_GIT_ROOT/opendev/puppet-mysql_backup"]="origin/master"
 INTEGRATION_MODULES["$OPENSTACK_GIT_ROOT/opendev/puppet-pip"]="origin/master"
 INTEGRATION_MODULES["$OPENSTACK_GIT_ROOT/opendev/puppet-project_config"]="origin/master"
@ -76,7 +73,6 @@ INTEGRATION_MODULES["$OPENSTACK_GIT_ROOT/opendev/puppet-reviewday"]="origin/mast
 INTEGRATION_MODULES["$OPENSTACK_GIT_ROOT/opendev/puppet-simpleproxy"]="origin/master"
 INTEGRATION_MODULES["$OPENSTACK_GIT_ROOT/opendev/puppet-ssh"]="origin/master"
 INTEGRATION_MODULES["$OPENSTACK_GIT_ROOT/opendev/puppet-storyboard"]="origin/master"
-INTEGRATION_MODULES["$OPENSTACK_GIT_ROOT/opendev/puppet-subunit2sql"]="origin/master"
 INTEGRATION_MODULES["$OPENSTACK_GIT_ROOT/opendev/puppet-tmpreaper"]="origin/master"
 INTEGRATION_MODULES["$OPENSTACK_GIT_ROOT/opendev/puppet-ulimit"]="origin/master"
 INTEGRATION_MODULES["$OPENSTACK_GIT_ROOT/opendev/puppet-user"]="origin/master"
@ -1,3 +0,0 @@
# Increase the max heap size to twice the default.
# Default is 25% of memory or 1g whichever is less.
JAVA_ARGS='-Xmx2g'
@ -1,46 +0,0 @@
# Copyright 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# Elasticsearch server glue class.
#
class openstack_project::elasticsearch_node (
  $discover_nodes = ['localhost'],
  $heap_size = '30g',
) {
  class { 'logstash::elasticsearch': }

  class { '::elasticsearch':
    es_template_config => {
      'index.store.compress.stored'          => true,
      'index.store.compress.tv'              => true,
      'indices.memory.index_buffer_size'     => '33%',
      'indices.breaker.fielddata.limit'      => '70%',
      'bootstrap.mlockall'                   => true,
      'gateway.recover_after_nodes'          => '5',
      'gateway.recover_after_time'           => '5m',
      'gateway.expected_nodes'               => '6',
      'discovery.zen.minimum_master_nodes'   => '4',
      'discovery.zen.ping.multicast.enabled' => false,
      'discovery.zen.ping.unicast.hosts'     => $discover_nodes,
      'http.cors.enabled'                    => true,
      'http.cors.allow-origin'               => "'*'", # lint:ignore:double_quoted_strings
    },
    heap_size          => $heap_size,
    version            => '1.7.6',
  }

  class { 'logstash::curator':
    keep_for_days => '7',
  }
}
@ -1,47 +0,0 @@
# Copyright 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# Logstash web frontend glue class.
#
class openstack_project::logstash (
  $discover_nodes = ['elasticsearch02.openstack.org:9200'],
  $statsd_host = 'graphite.opendev.org',
  $subunit2sql_db_host,
  $subunit2sql_db_pass,
) {
  class { 'logstash::web':
    frontend            => 'kibana',
    discover_nodes      => $discover_nodes,
    proxy_elasticsearch => true,
  }

  class { 'log_processor': }

  class { 'log_processor::geard':
    statsd_host => $statsd_host,
  }

  include 'subunit2sql'

  class { 'subunit2sql::server':
    db_host => $subunit2sql_db_host,
    db_pass => $subunit2sql_db_pass,
  }

  include 'simpleproxy'

  class { 'simpleproxy::server':
    db_host => $subunit2sql_db_host,
  }
}
@ -1,100 +0,0 @@
# Copyright 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# Logstash indexer worker glue class.
#
class openstack_project::logstash_worker (
  $discover_node = 'elasticsearch02.openstack.org',
  $filter_rev = 'master',
  $filter_source = 'https://opendev.org/openstack/logstash-filters',
  $enable_mqtt = true,
  $mqtt_hostname = 'firehose.openstack.org',
  $mqtt_port = 8883,
  $mqtt_topic = "logstash/${::hostname}",
  $mqtt_username = 'infra',
  $mqtt_password = undef,
  $mqtt_ca_cert_contents = undef,
) {

  file { '/etc/logprocessor/worker.yaml':
    ensure  => present,
    owner   => 'root',
    group   => 'root',
    mode    => '0644',
    content => template('openstack_project/logstash/jenkins-log-worker.yaml.erb'),
    require => Class['::log_processor'],
  }

  file { '/etc/default/logstash-indexer':
    ensure => present,
    owner  => 'root',
    group  => 'root',
    mode   => '0644',
    source => 'puppet:///modules/openstack_project/logstash/logstash-indexer.default',
  }

  vcsrepo { '/opt/logstash-filters':
    ensure   => latest,
    provider => git,
    revision => $filter_rev,
    source   => $filter_source,
  }

  include ::logstash

  logstash::filter { 'openstack-logstash-filters':
    level   => '50',
    target  => '/opt/logstash-filters/filters/openstack-filters.conf',
    require => [
      Class['::logstash'],
      Vcsrepo['/opt/logstash-filters'],
    ],
    notify  => Service['logstash'],
  }

  file { '/etc/logstash/mqtt-root-CA.pem.crt':
    ensure  => present,
    content => $mqtt_ca_cert_contents,
    replace => true,
    owner   => 'root',
    group   => 'root',
    mode    => '0555',
    require => Class['::logstash'],
  }

  validate_array($elasticsearch_nodes) # needed by output.conf.erb
  class { '::logstash::indexer':
    input_template  => 'openstack_project/logstash/input.conf.erb',
    output_template => 'openstack_project/logstash/output.conf.erb',
    require         => Logstash::Filter['openstack-logstash-filters'],
  }

  include ::log_processor
  log_processor::worker { 'A':
    config_file => '/etc/logprocessor/worker.yaml',
    require     => File['/etc/logprocessor/worker.yaml'],
  }
  log_processor::worker { 'B':
    config_file => '/etc/logprocessor/worker.yaml',
    require     => File['/etc/logprocessor/worker.yaml'],
  }
  log_processor::worker { 'C':
    config_file => '/etc/logprocessor/worker.yaml',
    require     => File['/etc/logprocessor/worker.yaml'],
  }
  log_processor::worker { 'D':
    config_file => '/etc/logprocessor/worker.yaml',
    require     => File['/etc/logprocessor/worker.yaml'],
  }
}
@ -1,8 +0,0 @@
input {
  tcp {
    host => "localhost"
    port => 9999
    codec => json_lines {}
    type => "jenkins"
  }
}
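The input block above listens on a local TCP port with the `json_lines` codec, i.e. one newline-terminated JSON object per connection line. A minimal sketch of the wire format a worker would produce (the field names here are illustrative, not the exact log-processor schema):

```ruby
require 'json'

# One event, serialized the way the tcp/json_lines input expects:
# a single JSON object followed by a newline.
event = { 'message' => 'ERROR: something failed', 'tags' => ['console'] }
line = JSON.generate(event) + "\n"

# A worker would write `line` to localhost:9999; logstash decodes each
# line back into an event and tags it with type "jenkins".
puts line
```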
@ -1,13 +0,0 @@
gearman-host: logstash.openstack.org
gearman-port: 4730
output-host: localhost
output-port: 9999
output-mode: tcp
crm114-script: /usr/local/bin/classify-log.crm
crm114-data: /var/lib/crm114
mqtt-host: <%= @mqtt_hostname %>
mqtt-port: <%= @mqtt_port %>
mqtt-topic: gearman-logstash/<%= @hostname %>
mqtt-user: <%= @mqtt_username %>
mqtt-pass: <%= @mqtt_password %>
mqtt-ca-certs: /etc/logstash/mqtt-root-CA.pem.crt
@ -1,9 +0,0 @@
output {
  elasticsearch {
    hosts => <%= @elasticsearch_nodes.map { |node| node + ":9200" }.inspect %>
    manage_template => false
    flush_size => 1024
    timeout => 300
  }

}
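The ERB tag in the `hosts` line above appends the Elasticsearch HTTP port to each node name and renders the result as an array literal via `inspect`. A quick sketch of what it evaluates to, with a hypothetical two-node list standing in for `@elasticsearch_nodes`:

```ruby
# Hypothetical node list standing in for @elasticsearch_nodes in the template.
elasticsearch_nodes = ['elasticsearch02.openstack.org', 'elasticsearch03.openstack.org']

# Same expression as the ERB tag: append :9200, then render the array
# with #inspect so it drops into the config as a literal list.
hosts = elasticsearch_nodes.map { |node| node + ":9200" }.inspect
puts hosts
# => ["elasticsearch02.openstack.org:9200", "elasticsearch03.openstack.org:9200"]
```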
@ -1,3 +0,0 @@
output {
  redis { host => "127.0.0.1" data_type => "list" key => "logstash" }
}
@ -22,11 +22,6 @@ results:
     - letsencrypt
     - mailman

-  logstash-worker02.openstack.org:
-    - logstash-worker
-    - puppet
-    - puppet4
-
   mirror02.regionone.linaro-us.opendev.org:
     - afs-client
     - kerberos-client
@ -11,14 +11,11 @@
       - opendev/ansible-role-puppet
       - opendev/puppet-bup
       - opendev/puppet-elastic_recheck
-      - opendev/puppet-elasticsearch
       - opendev/puppet-ethercalc
       - opendev/puppet-httpd
       - opendev/puppet-jeepyb
       - opendev/puppet-kibana
-      - opendev/puppet-log_processor
       - opendev/puppet-logrotate
-      - opendev/puppet-logstash
       - opendev/puppet-mysql_backup
       - opendev/puppet-openstack_infra_spec_helper
       - opendev/puppet-pip
@ -28,7 +25,6 @@
       - opendev/puppet-simpleproxy
       - opendev/puppet-ssh
       - opendev/puppet-storyboard
-      - opendev/puppet-subunit2sql
       - opendev/puppet-tmpreaper
       - opendev/puppet-ulimit
       - opendev/puppet-user
@ -87,19 +83,15 @@
       - opendev/puppet-project_config
       - opendev/puppet-ethercalc
       - opendev/puppet-httpd
-      - opendev/puppet-subunit2sql
       - opendev/puppet-reviewday
       - opendev/puppet-kibana
       - opendev/puppet-redis
       - opendev/puppet-zanata
-      - opendev/puppet-logstash
       - opendev/puppet-tmpreaper
       - opendev/puppet-elastic_recheck
       - opendev/puppet-ulimit
       - opendev/puppet-logrotate
-      - opendev/puppet-elasticsearch
       - opendev/puppet-storyboard
-      - opendev/puppet-log_processor
       - opendev/puppet-simpleproxy
       - opendev/puppet-bup
       - opendev/puppet-ssh